Reflections on IDEL

Has it only been twelve weeks since the start of IDEL? It feels like a lifetime ago, yet just a fleeting moment at the very same time. Having come through the other side I feel a heady mix of inspiration, an excitement for the future, a sense that my understanding in key areas has really come along, with a slight tinge of battle-weariness and trepidation heading into the assignment. One last push and all that…

It’s been interesting to reflect back on the last three months, particularly in light of my first blog post. I wasn’t entirely sure what to expect when starting. It’s almost like starting a new job (what are my colleagues (students) going to be like? Have I made the right decision? Am I going to stick out like a sore thumb?), but it’s been the right move. The experience has been quite the revelation – it’s certainly one of the better decisions I’ve made in the last few years.

Before I get stuck into the actual content of the course, one of the real highlights has been the course experience as a whole. I’ll be honest, I’d lost my faith a little in digital learning before starting the course. I think this was largely on the back of client requests for rather linear course formats that did little to take advantage of what online learning proffers, and perhaps I’d been suckered into this. I’m glad to say my eyes have been re-opened to the possibilities, with an increased vigour to explore the boundaries where possible.

There are many reasons why IDEL ‘works’, but for me, it’s because of the following factors:

  • The design of the course. It’s well crafted, with an obvious appreciation for feedback and incremental improvements each semester.
  • Each activity seems to be planned with real precision, particularly with regards to the ‘running order’. There seems to be a reason for everything, and this builds confidence as someone new to this game.
  • It places ownership on the student to deliver, but with very solid support and guidance if you need it.
  • It feels like the course ‘practices what it preaches’. This was my suspicion prior to the course, but I’m glad it’s turned out to be true.
  • I was a little skeptical about the mix of contact points throughout. Different tutors every couple of weeks, with a separate contact for the blog. How can a useful bond be struck up without it feeling like being passed from one to another? I needn’t have worried. The various course tutors obviously work closely together, and there’s a sense that a lot is discussed ‘behind the scenes’. There’s an obvious passion for the course, and a real sense of duty (and respect) towards the students on it.
  • The course is very much one of curation, rather than instruction. I realise many of the papers involved some of the Edinburgh team in one form or another, but I think this is a result of the University’s standing in this field. I’d never been involved this deeply in a course that, in many areas, happens ‘outside’ the VLE, and this is one of the key points I’ll take forwards.

What could have been different? One of the key themes throughout has been the push to keep a critical eye on things, so in keeping with that I think it’s worth pointing out some areas that were a challenge. Personally, I found some of the initial subjects quite tough, but I think that’s down to my primarily commercial, rather than educational, history. They were hugely interesting and of value, but the learning curve was a little steep to start with. As the course progressed it moved away from the purely educational angle in part, and as such was more familiar territory.

I do think the forum format could be improved somewhat – it seems a little tired and not optimised for dialogue. I found it difficult to keep track of the various threads, and which ones I’d already read. As always, Moodle could benefit from a ‘lick of paint’ to make it more visually pleasing, but again I think this lack of polish is actually conducive to the course – it doesn’t distract. And if we think of the VLE as part of the University of Edinburgh, I’d guess it fits with the more historic nature of the buildings? 😉

After the first sanctuary, there was something of a lull between the students. An activity such as a Skype chat could have brought everyone back together, and may be worth considering for next semester. We all returned in dribs and drabs, and the network wasn’t as strong as it was in the first half of the course. Surprisingly, I felt the sense of community was at its peak mid-way through the course. I suspect life catches up with many after a few weeks – after that initial impetus to get going, there are demands on our time outside of the course that can’t be put off any longer.

Back to the course itself, and it’s difficult to summarise, simply due to its wide expanse. If I had to try and distill down some of the key messages that have had the biggest impact on me, they’d be:

  • Be wary of anyone talking about a revolutionary technology in education – it’s unlikely to have the impact that’s expected, nor in the ways they expect
  • The role of teacher may change with the increasing use of digital technologies within education, but this does not make it any less critical
  • A digital learning experience is as much a part of the institution that’s delivering it as any bricks-and-mortar setting
  • We need to make sure everyone has a voice around the table in terms of technological use and development – at present Silicon Valley and commercial companies perhaps shout the loudest
  • Technology is influencing the direction of education as much as it is a tool (the rhizomatic/constructivist viewpoint)
  • Technology is not neutral. It has historical, societal, political influences to it.
  • Automated need not mean less personal.
  • A critical eye is key to avoid being swept up in the latest fashion, and to maintain focus on what really matters.

These are quite broad, but without regurgitating the entire course, it’s difficult to go into more specific detail.

Regarding my own performance on the course, I wish I had attempted more multi-modal blog posts, but time restricted this. I’m hoping that in future modules I’ll have more scope to explore this, but as always time is the critical factor.

In terms of my professional life, the IDEL experience is having an immediate impact. I’ve been able to use what I’ve learned to challenge some pre-defined beliefs with clients. It’s also given me additional confidence, largely because the content has helped fill some of the (significant) gaps in my thinking and knowledge. I also use the IDEL experience as a benchmark in many ways – talking about my three months, and how clients could use curation and external tools to good effect, for example.

One of the more surprising outcomes has been my attempts to explain terms such as ‘instrumentalism’, ‘constructivism’ and ‘rhizome’ to my other half. It’s always useful to articulate these to someone not involved in the subject, to see if you really understand the concepts!

Finally, a real plus point is all the doors it’s opened, both in terms of new subject areas to explore and the connections I’ve made. It’s been eye-opening to say the least, and I’m looking forward to seeing how this develops in 2018!

“LARCing” about

Catching up on Week 11 after a bout of sickness, I’ve been playing with the LARC tool provided by the University to explore the topic of learning analytics.

It provides different styles of reports (lenient, moderate, or strict) based on metrics typically used within learning analytics (attendance; engagement; social; performance; persona). I dived straight in and picked the extremes around engagement, simply as ‘engagement’ to me seems a particularly woolly metric…

LARC report, w/c 20 Nov 17. Strict, engagement.

LARC report, w/c 20 Nov 17. Lenient, engagement.

The contrast between the two is quite stark. The lenient style seems more human – it’s more encouraging (“your active involvement is really great”) and conversational/personable (“you seemed”… compared with “you were noticeably…”).

Despite both being automated, the lenient style feels less ‘systematic’ than the strict. Does this suggest that humans are more likely to be lenient and accommodating, or is it simply that we associate this type of language less with automation – so it doesn’t feel more ‘human’, just less ‘computer’? This certainly chimes with insights into the Twitter ‘Teacherbot’ from Bayne (2015). The line between human and computer is increasingly blurred through the use of artificial intelligence, and how students react to these interactions is of particular personal interest.

I think it’s interesting to consider how one responds to each style. Given my engagement appears to be ‘satisfactory’ at a base level, the feedback isn’t necessarily looking to provoke a particular response. However, if my engagement was less than satisfactory, I’m not sure which style would provoke the better response and get me into action. I guess it depends whether the ‘carrot or the stick’ is the better driver for the student.

The examples above make me consider the Course Signals project in more detail, which was discussed in Clow (2013) and Gasevic et al (2015). From my understanding, this project provides tutors with relevant information about their students’ performance, and the tutor decides on the format of the intervention (should it be appropriate to make one). The LARC project seems to have gone one step further, in that the style of response has already been created. Referring to my initial point about choice of style, in the Course Signals approach the tutor would ultimately make this choice based on their understanding of the student. That’s not to say this couldn’t be delivered automatically with some increased intelligence – it would just need some A/B testing early in the student’s interaction with the course to test different forms of feedback, and see what provokes the desired response. Of course, this discovery phase would bring with it significant risks, as students would be likely to receive erratic and wide-ranging types of feedback while their engagement with the course is at its most embryonic.
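To make that A/B idea concrete, here’s a rough sketch of how such an early experiment might look in code. Everything here – the templates, the threshold, the arm-assignment scheme – is my own invention for illustration, not anything taken from LARC or Course Signals:

```python
import hashlib

# Invented feedback templates, loosely echoing the LARC styles
TEMPLATES = {
    "lenient": "You seemed a little quiet this week - it'd be great to see you in the forums!",
    "strict": "Your engagement was noticeably below the expected level this week.",
}

def assign_arm(student_id, arms=("lenient", "strict")):
    """Deterministically assign each student to one feedback-style arm."""
    digest = hashlib.sha256(student_id.encode()).digest()
    return arms[digest[0] % len(arms)]

def weekly_feedback(student_id, engagement_score, threshold=0.5):
    """Send styled feedback only when engagement dips below the threshold."""
    if engagement_score >= threshold:
        return None  # satisfactory engagement: no intervention needed
    return TEMPLATES[assign_arm(student_id)]
```

Whether each student subsequently re-engages could then be logged per arm, to see which style actually provokes the better response.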

As a side note, Clow (2013) discusses the development of semantic and more qualitative data aggregation, and this being put to more meaningful use. Given this, perhaps a logical next step would be to develop algorithms that understand the register and tone of language used in blog posts, and relay feedback to the student in a similar style (as a way of increasing engagement).

Going back to the LARC project, I thought it’d be useful to look at attendance, particularly in light of Gasevic et al’s (2015) references to the pitfalls in this.

LARC report, w/c 20 Nov 17. Moderate, attendance.

Gasevic uses three “axioms” to discuss the issues in learning analytics. One of these is agency, in that students have discretion over how they study. A particular weakness in analysing attendance, then, is benchmarking – both against the student’s prior history and across the cohort as a whole. This was done by design by the UoE team: we were asked to generate LARC reports for a week when activity largely took place outside of the VLE, namely on Twitter. There’s an issue here, in that the tool does not have the context of the week factored into it, and this raises questions about the term ‘attendance’ as a whole. Attendance has been extrapolated from the number of ‘logins’ by the student, and the two may not be as compatible as they look on first reflection.

When comparing with the wider group, it’s also easy to point out potential holes across the group. One student may prefer to log in once, download all the materials and digest before interacting on the discussion forums. Another may be more of a ‘lurker’, preferring to interact later in the week, perhaps when other commitments permit.

Ultimately this all comes down to context – situational, pedagogical and peer – and this is where a teacher can add significant value. I think one of the wider challenges for learning analytics is the aggregation of these personal connections and observations; however, this raises the challenges of bias and neutrality. It seems that learning analytics can offer significant value as indicators, but the extent to which metrics are seen to represent the ‘truth’ needs constant challenging.


  • Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education, 18(6), 683–695.
  • Gasevic, D., Dawson, S., & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), 64. DOI: 10.1007/s11528-014-0822
  • Bayne, S. (2015). Teacherbot: interventions in automated teaching. Teaching in Higher Education, 20(4), 455–467.

Reflections on Learning Analytics

Week 11 builds on the previous week’s theme of big data, providing a spotlight on the use of data specifically for learning. Unsurprisingly, there are many links and reference points from topics throughout the rest of IDEL.

The core readings seem to indicate that the field of learning analytics is still very immature, and when compared with the use of other technologies within education, could be considered to be lagging.

It seems, on the whole, learning analytics operate at surface level at present. Gasevic et al (2015) highlight the complexities around learner agency and pedagogical approaches that can provoke glaring holes in the interpretation of any data. Ultimately in any scenario, educational or not, data needs context to have any meaning (and therefore actionable value), and it seems to be falling short in this area at present.

I enjoyed reading about Purdue University’s Course Signals project in the core readings. The intention behind this project seems to be to empower the teacher, rather than simply ‘measure’ the student. While the positivity around the results should be taken with a pinch of salt in Clow (2013) (indeed Gasevic et al (2015) proffer further critique of this), it would seem that involving the teacher in the choice of interventions recognises the absence of relationship and emotion that these analytics struggle to encompass. However, it does appear that the aggregation and understanding of quantitative data that could bridge this gap is improving (Gasevic et al, 2015).

I particularly liked Clow’s (2013) description of learning analytics as a “‘jackdaw’ field of enquiry”, in that it uses a fragmented set of tools and methodologies – it could be seen to be taking a rather less cohesive approach than would be required. This is certainly one of Gasevic et al’s (2015) key points – that the field needs to be underpinned by a more robust foundation to really allow it to develop in a useful way.

I wonder if the lack of maturity in this field is a consequence of its very nature. The learning analytics cycle used by Clow (2012) identifies that learning analytics are gathered and used after a learning activity or intervention has taken place. As has become ever more apparent to me throughout this course, the pace of technological change is significant and rapid, and its impacts on education are far-reaching.

If technology and tools are being developed, trialled, integrated, ditched and recycled so rapidly, it must inevitably be a challenge to assess them with any rigour. Indeed, Gasevic et al (2015) highlight the lack of empirical studies available to interpret this area. It’s interesting to hear in Clow (2013) that the use of proprietary systems impedes this too, through the lack of data made available. This is particularly pertinent given their prevalence across the educational sector, and in turn impacts the assessments that can be made across domains and contexts (Gasevic et al, 2015).

A pervading theme across IDEL has been the discourse around the educational ‘voice’ in the development and use of technology, for example in Bayne (2015). Quite rightly, the academic world wants to scrutinise, assess and challenge, but it seems the pace of change makes this less and less possible.

For me, the spectre of the private sector is raised in Perrotta & Williamson (2016). It argues that the use of analytics and interventions is “partially involved in the creation of the realities they claim to measure”. The challenge here is the increasing commercial influence in the field of learning analytics. It cites Pearson as an example: they have the large-scale distribution of products with which to gather learner data, the resources to mine and interpret it, and the size to influence wider policy-making. Given the rhizomatic nature of the development of learning analytics, it seems there are many reasons to be fearful of this development, particularly as it looks to be self-perpetuating.

Of course, I’m keen to keep in mind that this is one side of the argument, and I’m sure the likes of Pearson see themselves as helping to push things forward. Certainly, there are areas where the commercial world can help ‘lift the lid’ on learner behaviour and empower teachers to make interventions – I guess the issue is how much those outside the corporation are at the discussion table. The stark truth is that Pearson’s core responsibility, above all, is to its shareholders, not its students.

My own personal experience has been at a ‘macro’ level, or what Clow (2013) refers to as ‘Business Intelligence’. As a commercial training provider, we used learner analytics (at a rather shallow level) to understand learner behaviour and product performance. Given the commercial nature of the people around me, however, there was probably an unhealthy bias towards how these could be used to improve commercial metrics. I certainly recognised some of the observations raised by Gasevic et al (2015) around data visualisation, and the pitfalls it can cause.

Given this week’s reading, I’ll certainly be more aware of some of the challenges in this area, particularly around providing metrics without context. There almost needs to be some ‘priming’ of the viewer before they gain access, just to reduce the risk of misinterpretation. I’ll also be keen to trial the use of analytics data to empower tutors, rather than simply automating prompts, as has been the norm in the past. Alongside this, providing students with their own ‘performance’ data would be something I’d be keen to explore.

Last week’s discussions on big data raised concerns about the skills needed within institutions to use big data, and I would suggest these are not limited to the educational world. The same issues occur in the commercial world, and can often have quite dramatic implications if data is not used with care and forethought. It seems that if you are a data analyst with an understanding of educational methodologies, you can choose your job!


Data Visualisation

This week in IDEL, communication on the course ventured out of the forums and onto Twitter. As we were investigating the concept of big data and its role within education, we were challenged to answer some questions on the social media platform, using the hashtag #mscidel as the identifier.

We then used a visualisation tool to summarise the week’s activity, and this is the result:

To add to this data, there were 54 nodes, and 151 edges as a result of the week’s conversations.
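Out of curiosity about what actually drives those handle sizes, here’s a tiny sketch of how such a graph reduces to nodes, edges and degrees. The handles and interactions below are invented – I don’t have access to the underlying Twitter data:

```python
from collections import Counter

# Invented stand-in for the week's interactions: each pair is one
# mention/reply between two handles (the real graph had 54 nodes
# and 151 edges)
edges = [
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"), ("eve", "alice"),
]

# A node's degree (the number of edges touching it) is roughly what
# drives the handle size in the visualisation - the 'vocal participants'
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

node_count = len(degree)                             # 5 distinct handles
most_vocal, interactions = degree.most_common(1)[0]  # ('alice', 4)
```

In the real visualisation the same counting, scaled up to 54 handles, is what makes a few names dominate the picture.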

It seems to me there was a handful of vocal participants, indicated by the increased size of their handles. I don’t think this is a huge surprise – the names that crop up are those that have been more active on Twitter over the last few months. This does raise a question around the choice of tool: those more comfortable with it – or those for whom it is more ingrained in daily activity – are perhaps more likely to involve themselves in the conversation. This could be as much effect as cause, though; it could be argued those more active on Twitter are more active for a reason.

Although there are 54 nodes, I’m actually a little surprised it’s not gone further than this. The visualisation, to me, shows that the conversation was held within a rather tight group. It’s interesting that a conversation on an open, public platform didn’t attract more interjections, particularly given the alumni of the MSc programme who are likely to have connections with those leading the conversations.

In terms of an experience, it’s certainly been one of the more intriguing activities.

It felt throughout that there was minimal tutor intervention, and this was something that was actively discussed. There was speculation that this could be by design, simply as a way to avoid colouring the tag output and keeping this ‘organic’. More simply, it was also argued that the conversation was free-flowing anyway, and the tutors’ role was to tee up the conversation and leave it to the students, it’s not as if the programme leaders aren’t busy!

All the activities so far on the course have been private. The blogs have been locked down, and directed conversations have been held within closed environments – for example, Minecraft, Skype chats, and the forums. Given we’re now at week 10, perhaps this openness is indicative of our increased confidence in the subject area, and a lack of nerves about being ‘on show’.

Given Twitter became the central place for conversation, I’ve inevitably found myself making comparisons with the forums, and the relative merits with each. Twitter has some particular strengths of note as a conversational tool.

  • It was easier to keep abreast of the conversations happening, given Twitter is more readily accessible on our mobile devices and doesn’t require us to log in. With the forums I always sat down to ‘do some work’, whereas I found myself checking and contributing to the conversations more frequently on Twitter. An interesting aspect of this is the increased incentive to contribute, as you know it will ultimately feed into the visualisation. It could be argued this links into the notion of badges looked at in week 9 – although badges are largely used as a reward for activity or completion, they are another influence on learning behaviour and motivation.
  • Twitter caps posts to 280 characters (it would have been interesting to do this exercise with the until-recent 140 character limit). I think this encourages brevity and more salient points. Of course, the trade-off here is the depth of response and consideration put into it. There’s probably a place for both.
  • It seemed to me that the conversations on Twitter brought in more informal articles and blog postings, whereas the forums seem more aligned with journals and papers. Perhaps there’s an aspect of formality here, and of how the community is expected to behave in each medium. The forums could be seen as within the ‘virtual walls’ of the university (week 6 and 7 flags here!), so there’s a perception of what’s expected. Outside of these walls, there are fewer concerns. Perhaps we feel less observed and on display to the ‘guardians’, even if this is completely false.

On the flip side, there are some considerable strengths to the forums.

  • I felt at times on Twitter that it was difficult to ensure I’d seen all the key conversations. Although we were using an understood hashtag, it’s easy to miss this from your posts, and I find Twitter’s inbuilt search facility problematic at times. With the forums, it’s easy to see all the threads and make sure you’ve not missed an important one.
  • Despite our increased confidence as the course has progressed, not everyone may be in the same position within the group. Ultimately Twitter is open, and this could discourage some. Given the granular options for involvement on Twitter (e.g. ‘liking’ posts), this may provide lurkers with the opportunity to stay involved without contributing. This doesn’t concern me personally, but sometimes that push can help you vocalise your thoughts.

I felt the activity also deepened some of my relationships with my peers, and the increased contact was probably well overdue since the Minecraft interactions. For me, Twitter blurs the line between synchronous and asynchronous activity – there were times when some of the conversations were happening in real time. It was also interesting that in this format the timezone factor came into play. Some of the students are based in North America, so you’d often wake to a flurry of tweets on your timeline – I guess I should count myself lucky that I’m in the ‘right’ timezone for the course. It’d be interesting to hear the views of those who had a different experience as a result.

Sandra Flynn’s early blog post about the visualisation raised some interesting thoughts about the nature of data, and almost an inherent implication to start comparing or competing. I’ve struggled to find any research into this area, but it’s an interesting notion – once you can start to measure something, does it change the nature of what is being measured?

The visualisation also became a useful social object once it was produced. However, it didn’t spark off the conversations I was expecting. This could simply be because it arrived towards the end of the week, but given one of the activities was to blog about it, I suspect many of us (myself included) are saving our observations for our blogs.

It turned out to be useful to have done the bulk of the core reading early in Week 10, giving the Twitter conversations more meaning. For me, the more pertinent discussion areas during the week revolved around:

Looking forward to exploring this in more detail next week – learning analytics!

Big Data

As IDEL has progressed, we seem to have moved away from some of the more theoretical and human topics towards a more technical focus. Week 9 focused on digital badges and blockchain, and week 10 looks at the concept of ‘big data’ and its implications for education.

Like blockchain, I was aware of the concept of big data, but had not got to grips with what it means, or what it can do for us. I think there’s a danger some of these initiatives are seen as buzzwords or part of the zeitgeist, without any longevity. But it’s apparent from digging into both blockchain and big data that these are unlikely to be fleeting developments, and are likely to underpin many technical changes over the next few years.

Unsurprisingly, and in line with the rest of the course so far, the focus of the readings has been to contrast the opportunities big data provides with some of the pitfalls. There’s also been a focus on some of the blind spots in this area.

Starting with the positive aspects, Selwyn (2015) highlights three areas:

  • Increased ability to use data to measure goals, targets, benchmarks, performance indicators etc
  • An ability to harmonise and standardise across borders, whether these be institutional or geographic
  • To provide a basis for an infrastructure for education to be understood and organised

But naturally, there are some challenges. Eynon (2013) puts the spotlight on three aspects for concern:

  • The ethics behind the sourcing, mining, interpretation and ultimately use of the data
  • The scope of the data, what can be measured as a result, and the questions it can (and can’t) help us answer
  • Inequalities linked to the sourcing and accessing of any data.

The area of ethics is discussed in detail across several of the recommended readings, as it seems to be a grave cause for concern. A striking example is given by Williamson (2015) on the use of data provided by Facebook, and the criticism afterwards about the permission (or lack thereof) around its use (the defence being that it was already in the ‘public realm’).

To provide an example related to university admissions: at present, applications are (largely) based on academic results at an undergraduate level. However, ‘big data’ could provide the ability to forecast degree completion, and perhaps future earning potential, and even link this to social background and family history. On the one hand, this could be empowering – providing institutions with more insight into how to support their students to succeed. The dystopian view is that it could prejudice entry requirements, and given some of the metrics being forced upon universities (e.g. ‘satisfaction ratings’) and the subsequent ‘slap on the wrists’ that result, it doesn’t seem impossible that big data will be used in this way at some point.
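To illustrate how easily such a forecast could fold background proxies into an apparently neutral score, here’s a deliberately crude sketch. The feature names, weights and logistic form are entirely hypothetical – this is no institution’s actual model, just the shape such a model could take:

```python
import math

# Entirely hypothetical completion-probability model: academic results
# mixed with social-background proxies. All names and weights invented.
WEIGHTS = {
    "exam_average": 0.08,          # genuine academic signal
    "parental_income_band": 0.9,   # background proxy quietly doing work
    "first_in_family": -0.7,       # another proxy, penalising applicants
}
BIAS = -5.0

def completion_probability(applicant):
    """Sigmoid of a weighted feature sum - a standard logistic score."""
    z = BIAS + sum(w * applicant.get(k, 0.0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

# Two applicants with identical exam results but different backgrounds
a = {"exam_average": 70, "parental_income_band": 3, "first_in_family": 0}
b = {"exam_average": 70, "parental_income_band": 1, "first_in_family": 1}
```

Despite identical academic records, applicant `a` scores higher than `b` purely on background – exactly the prejudicing of entry requirements described above.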

Selwyn (2015) extrapolates these issues further to explore what they may mean for institutes and their students. He argues that this increase in performance metrics may create an “intensification of managerialism within education”, which suggests a move towards a workplace more typical of a commercial organisation.

This commercial influence is unsurprising, given the history of big data. It seems to me that using the processes and themes of big data in education brings ‘baggage’ with it, because of its very design. Because of this commercial background, it’s important to critique the strength and role of the educational or academic voice within technical developments such as this. This harks back to one of IDEL’s earliest topics – the role of technology in education, and who is at the table when it comes to its discussion and implementation. This seems to be another example of where education could be perceived as a recipient of technology, rather than helping to shape it.

This is of particular concern when you read about Pearson’s developments in Williamson (2015). Some of the criticism of Pearson could be seen as a reaction against their developments, but when Williamson argues that Pearson could be using big data to create new “models of cognitive development and learner progression”, that is a major red flag. Pearson’s main responsibility is to its shareholders, not to students, so it’s important that the progression of technical initiatives like big data is not left to commercial educational companies to drive.

A common issue picked up by Eynon (2013), Selwyn (2015) and Williamson (2015) is that of an institution’s capability – or more specifically, that of the personnel within it – to use big data. ‘Use’ in this context is quite a broad term, covering everything from sourcing and mining the data, through to combining it with different sources and interpreting it. Williamson argues that there are “several competencies for education data science”, and that there is a significant deficit in the numbers of those equipped with the necessary skills. These skills are a blend of the technical (computational and statistical), the educational, and an understanding of the ethical and social concerns in this area. As such, Williamson argues that educational data science is very much a field in its own right, rather than an appendage to statistical analysis. Naturally, if this area is significantly under-resourced, then education’s ability to shape big data is reduced.

This may also be more difficult to fix than Eynon envisages. The demand for talent – given the nature of the role – is spread across both commercial and educational organisations, meaning commercial companies may be able to outbid educational institutes for their services. It may be one thing to recognise the issue, but fixing it may be increasingly difficult.

I picked up on several themes across the papers that have been discussed earlier in IDEL.

Given the rise of commercial influence in this area (in particular), there seemed to be a ‘call to action’ for the wider educational community to become more vocal, and take more of a centre-stage role in these discussions. Selwyn (2013) argues that “the opportunity now exists for educational research to develop nuanced approaches to understanding, and then offering alternatives to, the dominant data conditions that are being established across educational contexts”. This reminded me of Biesta (2013), in his call for teachers to teach, and of Bayne (2015), in ensuring academia has a role to play in wider technological developments.

Biesta’s references to a neo-liberalistic agenda also pop up in Selwyn – “expanded access to data allows institutions and individuals to operate more efficiently, effectively and equitably”, and Eynon also references themes of efficiency and cost-effectiveness in big data.

Selwyn (2013) also uses the metaphor of water in his discussions around big data. ‘Deluge’, ‘flow’ and ‘flood’ are terms used, and I think this is possibly inevitable. The comparisons between data and water are natural – it is plentiful, can travel at speed (rivers) or sit still (lakes), comes from many different sources and directions, and requires real skill to manage. It’s also fundamental to life, and you can argue data is the bedrock of economies now (it’s even been termed more valuable than oil). The dystopian view is that it can also be a dangerous force of huge power, and like recent devastating floods all over the planet, can pose an immediate danger to us through years of mismanagement.

I thought it was interesting that Selwyn (2015) points out that the sociological approach to data is to assume that there are already some inherent issues with it. This admission of a lack of neutrality is quite refreshing, and makes a lot of sense. It’s a battle that’s difficult to fight – it’s probably a better use of time to acknowledge this and work out how to deal with it than to try to fix it at source. The rhizomatic metaphor is also apparent here, in that Selwyn argues that “this approach is careful to acknowledge that data are profoundly shaping of, as well as shaped by, social interests”.

As a final thought, I liked this quote from Eynon (2013) – “We must not get seduced by Big Data”. I think if you were to replace ‘Big Data’ with ‘technology’, you’ve probably got the core theme of IDEL in a nutshell.


Blockchain and Digital Badges

Building on the paper by Halavais (2012), I’ve been reading around the subjects of blockchain and badges, as prompted by this week’s activities.

It’s taken some wrestling to get my head around the concept of blockchain. Although I’m well aware of Bitcoin, the underpinning technology has been explained to me in the past and made little sense. But having read the recommended readings such as this, and this, I feel I’m getting to grips with it. As far as I can tell, it provides a permanent, incremental record of data, which as a result helps establish a sense of ‘trust’. Naturally then, it’s been linked to the concept of badges within education, as this could have an obvious benefit in establishing and ratifying any credential, and removing any ambiguity about authenticity.
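To ground the ‘permanent, incremental record’ idea, here’s a toy sketch of my own (nothing like Bitcoin’s actual protocol – there’s no mining, network or consensus here, and the credentials are made up) showing how chaining hashes makes earlier records tamper-evident:

```python
import hashlib
import json

def block_hash(data, prev):
    # Hash the record together with the previous block's hash
    payload = json.dumps({"data": data, "prev": prev}, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    # Each new block commits to the hash of the block before it
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev": prev, "hash": block_hash(data, prev)})

def verify(chain):
    # Recompute every hash; editing any earlier block breaks all later links
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["data"], block["prev"]):
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, "Alice awarded TEFL certificate")  # hypothetical credentials
add_block(chain, "Bob awarded MSc")
print(verify(chain))   # True - the record is internally consistent

chain[0]["data"] = "Alice awarded PhD"  # quietly rewrite history
print(verify(chain))   # False - the stored hash no longer matches
```

The ‘trust’ comes from the fact that rewriting any earlier entry invalidates every hash after it, so a quietly edited credential is immediately detectable.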

However, having been exposed to the concept of open badges previously (as mentioned here), I still have some fundamental questions around this that I cannot seem to resolve. The key one being: if anyone can create and issue a badge, how can a sense of ‘value’ be attributed to it? Halavais (2012) argues that for a badge “to carry social currency they must represent a significant sacrifice”. As such, how do we define a common understanding of ‘sacrifice’ and attribute it in a consistent way? It could be a unit of time, for example, 30 hours of ‘learning’ – but what does an hour mean in this context? What the hell does ‘learning’ mean in this context? I think without a common measure, it becomes impossible to compare. What’s better, a table or cheese?

To gauge a sense of sacrifice, you may have to revert to type, and align it with traditional methods of verification. For example, upon viewing a badge you may make a judgement on the issuing authority (e.g. a university) and the level of qualification (e.g. Masters). If this is the case, what’s the purpose of an ‘open’ badge?

A badge is intended as a quick visual reference (initially) that is symbolic of something – if this is not easily understood at a quick glance, it fails to be a badge; it’s just an image. A joke isn’t funny if you have to explain it – it just becomes a statement. A joke is also only found funny by an audience receptive to it, and aware of the context that forms the joke. If an open badge can literally be from any sphere, the context may only be apparent through luck, not by design.

I also think that when many things are ‘open’, this often results in an overwhelming increase in instances of the thing that is open. This can make it harder to actually wade through the ocean of information. YouTube, for example, has provided an open platform for anyone to display their creations – music, say. But while a good musician can come from anywhere, it does not mean anyone can be a good musician, and the wealth of music there can simply make an unpolished diamond ever harder to unearth.

It’s interesting to explore this in terms of my own professional history. I spent many years working within the TEFL training industry (Teaching English as a Foreign Language). This is notorious for being a tad ‘wild west’ – there’s no overarching accrediting body for TEFL, meaning that in essence anyone can set up a TEFL training school and start issuing certificates. This is driven by the fact that many schools overseas are simply happy to hire based on native English-speaking abilities, rather than any teaching competency. This is changing over time as the market matures and there is a realisation that being a native speaker does not make a good teacher, but it is still very much the case in many areas around the world. I bring this up because there are definite links here, in that there’s no authority within this sphere. It’s almost as if anyone can issue a badge – an open badge, you could say. The same questions persist – you can validate that someone has achieved that badge, but what does the badge mean? Sure, there’s metadata that can give you further insight, but does this metadata provide any real qualitative information?

Back to blockchain, and more specifically the development of Blockcerts. Blockcerts are blockchain-based credentials. Belshaw makes an interesting comment that, in his view “blockchain-based credentials are good… in high stakes situations”, and that he’d “be happy for my doctoral certificate, for example, to be on the blockchain.”

Aligning this to the social classes discussed by Halavais (2012), we can assign the guardian class, with its stewardship approach, to the Blockcerts initiative, and the commercial class to the ‘weird and wonderful’, as he puts it.

I think the irony here is that openness and blockchain may actually strengthen the guardian approach, as this ultimately holds the traditional authority to issue credentials, and the credentials still rely on that authority to carry any weight.

The only way around this (in my opinion) is to focus on the end goal. It’s not about the credential, it’s about how people perform in the classroom – ultimately, how competent they are in the profession. Obviously, this is also open to its own judgements, biases and hidden agendas.

Widen this further, and it links to the Edublocks concept outlined in Audrey Watters’ guide to blockchain in education. In essence, Edublocks link employment (and, more tangibly, pay) to educational history. This seems to be the natural progression, but even for me, as someone who is probably overly pragmatic about things, it rings some loud alarm bells. The key one being that by tying everything back to employability, what happens to the simple pleasure of wanting to learn about something, simply for the curiosity and intrigue of knowing more? Perhaps there are links here to the neo-liberalistic influence (Biesta 2012) permeating education; it certainly seems to be another example of this.


Reflections on badges

I think we’ve all got our own personal history with badges. Somewhere tucked up in the loft is a scouts sweater stitched with a heap of merit badges, and probably in the same box is a Blue Peter badge for a competition entry.

Online, it’s difficult not to notice the Brexit situation creating a plethora of both pro- and anti-EU imagery associated with Twitter accounts. A more subtle example may be the triple brackets also appearing around Twitter handles. Although these were initially adopted by users looking to subvert a disagreeable Google plug-in, they’ve now become as much a sign of solidarity and defiance as anything tangible.

I’ve also encountered badges in my professional life. Several of our clients at Candle are interested in providing students with access to digital badges upon completion, as a way of adding additional value to their training products and to help create unwitting ambassadors for their brands. A few years ago I was also lucky enough to present at an elearning event where Doug Belshaw, who led the Mozilla Open Badges project, was also speaking.

Alexander M.C. Halavais’ paper on the ‘genealogy of badges’ provides a detailed insight into the history and nature of badges, which should aid the direction of travel in this area. In my experience it’s still quite a woolly topic ‘on the ground’. Many people in my network will have been exposed to the idea, but I don’t think there’s a consensus on why badges should be used, or what they should be used for.

My key takeaways (there are a few…):

  • The concept of badges goes way back, with many early examples having military and religious roots
  • Badges can be used for a variety of purposes. They can:
    • show support for a cause (a ‘button’)
    • be used as a way of shaping a persona, either intentionally or unintentionally
    • display membership of a group (both for positive and negative reasons e.g. the Star of David)
    • indicate status within a group e.g. moderator, or new member
    • display achievements (and provide a route to more detail on this)
    • display granularity or progression of expertise
    • be used as a motivational tool and incentivise progression
  • Badges can often be a combination of many of these factors. This mix can have positive and negative consequences
  • There are different types of badges
    • Badges of honor, authority and privilege
    • Badges of achievement, qualification, and experience
    • Badges of experience and expression (the most common sort of badge found on the web)
    • Badges of survival
    • Boundary badges and monstrous badges
    • Learning badges
  • Badges can be acquired through different means: a demonstrated skill; experience; upon recognition by peers; awarded by a regulatory body; upon achievement of milestones (e.g. number of posts)
  • Badges could be used as a form of compensation for sacrifice, in lieu of other forms of compensation e.g. monetary
  • Badges have primary value in the community where they are earned and displayed. Transferable value is minimal, although attempts are being made to bridge this gap (e.g. Mozilla).
  • Badges are more than just ‘skin deep’ in terms of the symbolism and the dynamics of symbolism behind them. In a sense badges could be considered to have a constructivist make-up. (There are some echoes here of the ‘Medium is the message’ video referenced earlier in the course.)
  • In an online context, badges are prevalent in communities and environments where a flat hierarchy is pursued, yet they echo less flat structures. There seems to be a dichotomy at play here, and there are obvious links to the critique of OER in Bayne, Ross, Knox (2015).
  • Trust in what the badge is conveying is very much at the core of its success as a symbol.
  • The perceived value of a badge is tied to the effort and sacrifice required to attain it.
  • Badges with perceived high value are understandably more likely to be ‘faked’. A clear example of this is knock-off fashion items – it’s not the materials that are important per se, it’s the logo/badge that appears on them.

Obviously this is not a small summary, but I think this reflects the real depth of thought the author has provided. The term ‘badge’ is obviously quite broad, in that badges can be applied in many ways, in many contexts, for many different purposes.

My own reflection is that the paper primarily focusses on badges earned through activity. This may be as a result of the author’s interest, or a reflection of the situation at the time of writing (2011), and I’d say it’s probably the latter given the quote “but so far the mechanisms for verifying such badges do not exist.”

The use of badges has strong links to the themes of identity and community, topics discussed early on in IDEL. Badges are very much a manifestation of aspects of these, and influence the nature of a community and the social interactions within it.

Professionally, we’ve started to see increasing demand for the use of badges in digital credentialling. It seems there are plenty of vendors developing solutions for this market, such as Credly and Accredible. I think the rise in this area taps into an increasing demand to build personal brand (and trust and proof in this), and is a viable solution because:

  • Metadata allows increased visibility and depth into the achievement. It can also keep it ‘valid’, in that through some vendors the awarding body has the power to invalidate credentials
  • For the awarding body, it allows them to take advantage of the inherent viral nature of the web, and put their brand in front of the peers of students (e.g. LinkedIn connections).

I think for these badges to work, there needs to be a common understanding that this metadata exists, how to access it, what it represents, and wider digital literacy around what makes a digital badge ‘reputable’.
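To make that concrete, here’s a hypothetical sketch of the kind of metadata that sits behind a badge image – loosely inspired by the Open Badges idea, but with illustrative field names and URLs of my own rather than the actual specification:

```python
# The visible badge is just an image; the claim lives in metadata like this.
# All names, emails and URLs below are illustrative, not a real schema.
badge_assertion = {
    "recipient": "student@example.com",  # who earned it (often hashed in practice)
    "issuedOn": "2017-12-01",
    "badge": {
        "name": "Introduction to Digital Environments for Learning",
        "issuer": "https://example.org/university",       # who stands behind the claim
        "criteria": "https://example.org/idel/criteria",  # what was required to earn it
    },
    "evidence": "https://example.org/students/123/portfolio",  # route to more detail
}

def qualitative_pointers(assertion):
    # The fields a viewer needs in order to judge *value*, not just authenticity.
    # They are only pointers, so their worth depends on what they resolve to.
    badge = assertion.get("badge", {})
    return {key: badge[key] for key in ("issuer", "criteria") if key in badge}

print(qualitative_pointers(badge_assertion))
```

Even with all of this present, the metadata only tells you where to look – whether the criteria and issuer amount to a meaningful ‘sacrifice’ is still a human judgement.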

I’ve also seen badges being used as a way of incentivising progression, as the paper highlights. I’ve seen significant levels of cynicism around this – I think because it’s seen as treating students as dumb, and as easy to motivate or manipulate. Perhaps because it’s a concept that has come over primarily from gaming, it’s seen as applying something from a non-serious situation to a serious one, particularly when one’s Facebook stream seems rampant with easy-to-attain Candy Crush badges. The paper argues that the social currency of a badge is linked to the sacrifice involved, which ties in with this.

In the author’s reference to the origins of the ‘badge’, and the uses of these within the military, badges were used as a way of fostering community and establishing commonalities. But alongside this, they were also used as a means of ‘command and control’. They provided an easier way to identify a group of people within battle, and manage them accordingly. Bringing this to the current day, are badges (in their more abstract form) being used in this way now, and are they being exploited? Technology allows aggregation on a mass scale, so there could be ways to scrape the web and corral groups of badge wearers accordingly. By displaying their ‘values’ through an icon, are badge wearers unknowingly categorising themselves and allowing themselves to be targeted as part of a group? Given the current prominence of Russian bots in the news, I can’t help but wonder if this self-labelling by web users is being used as a way of both targeting and infiltrating these groups.

One area I haven’t touched on in this post is the rather intriguing notion of guardian and commercial classes. This seems to have wider repercussions that I’d like to dwell on a little. There are links here (again) to Biesta’s view on the learnification of education, and the roles teachers play within this. These societal and political influences really penetrate all aspects of education, so it shouldn’t be a surprise to me that they affect the tangible aspects, such as badges, but it does. I need to think this through and follow up with a more considered post…


  • Halavais, A.M.C. (2012). A genealogy of badges. Information, Communication & Society, 15(3), 354-373. DOI: 10.1080/1369118X.2011.641992
  • Bayne, S., Knox, J., Ross, J. (2015). Open Education: the need for a critical approach. Learning, Media and Technology Special Issue: Critical Approaches to Open Education. 40(3). pp. 247-250.
  • Biesta, G. (2012). Giving teaching back to education: responding to the disappearance of the teacher. Phenomenology & Practice, 6(2), 35-49.

MOOC Reflections

As prompted by the IDEL course, I decided to tackle the MOOC titled ‘Religion and Conflict’ on the FutureLearn platform. MOOCs aren’t a particularly new experience for me – like most participants, I’ve started (and not completed) several.

I initially contemplated starting a MOOC related to the current IDEL studies (e.g. there’s a course currently on offer from the University of Leeds on blended learning), but felt a change of subject might do more good, and perhaps help me detach the format from the subject with greater ease.

So how open is a ‘Massive Open Online Course’? Well, as Bayne, Knox, Ross (2015) argue, in some respects it is very open, in others less so. They argue that the premise of ‘open’ in digital education largely rests on three factors – geographical, hierarchical and financial.

The MOOC I’ve taken certainly ticks the box in terms of geography. The course is provided by the University of Groningen, who I would never normally get to study with. But it’s important to keep a critical eye on this, as there is little sense that I am studying with the University, or getting a sense of Groningen. In weeks 6 and 7 of IDEL we looked at what it meant to study at the University of Edinburgh, and on reflection, using the UoE VLE as the backbone for the course was actually part of the identity here. With the MOOC, the course is ‘provided’ by the University of Groningen, but I don’t feel like it is ‘delivered’ by them. The format is a fairly regimented experience, rolled out across many institutions (understandably it has to have commonalities to make it ‘FutureLearn’). So how open is this? From my perspective, it’s given me access to expertise from a university, but I’m not accessing the experience of the university.

Regarding institutional structures, this is certainly more ‘horizontal’ than a traditional university experience. But at what cost? By delivering to the masses, and at scale, the curricula and content could be more ‘vanilla’.

From a financial perspective, the MOOC is free to participate in, but in my opinion, this is as much a curse as a blessing. There’s no ‘skin in the game’; it’s as easy to leave the course as it is to join, and this lack of commitment does affect the overall course experience and ultimately the value gained. IDEL works particularly well because of the social interaction between the students, and the role the tutors play within this dialogue. As much as the conversations are instigated by the tutors, the cost and time commitment required on IDEL do push the student to fully take part. Unlike with a MOOC, you are also aware that the MSc is a long-term commitment, so the reward for building these connections is felt over many years (and beyond).

Given the removal of these barriers, how has this impacted the take-up of online educational opportunities? Not much, by all accounts. Sandra Flynn posted a link to an insightful study (Emanuel 2013) on the forums, demonstrating that those taking up the opportunities presented by MOOCs are largely those already degree-qualified. This suggests that there could be awareness, desirability and curriculum issues around open educational opportunities at play. If the ambition of MOOCs is to increase educational opportunities for the masses, then perhaps they are failing in this respect.

On the flipside, many of the inadequacies highlighted in Bayne, Knox, Ross (2015) manifest themselves in my experiences of the MOOC so far.

Self-direction and motivation are a key oversight picked out (and to reaffirm a previous blog post, I love the phrase “imagined autonomous subject”), and this is plain to see. Already the frequency of student-generated content (largely comments) has significantly dropped from week 1. As much as this is an issue of commitment, and perhaps value, it’s also (in my opinion) down to the very nature of the MOOC. Community aspects are a core activity of the MOOC (I’m sure the course designers are aware that this is a central part of a successful learning experience), but could also be viewed as quite peripheral. I think the latter is ultimately down to the open access; it’s simply very difficult to forge any deep interactions of real value with such large volumes. Ironically, this scale puts people off continuing the course, and then the volume of participation drops to a level where the community could actually be of real value! I’m sure there are many factors in this lack of motivation after the initial sign-up, and one could be as simple as ‘life getting in the way’.

Another concern raised in the critique is the role of a teacher within the MOOC. My experience so far is that the role of the teacher is largely that of content provider, as there is little or no interaction in the communication channels. There also doesn’t seem to be an obvious route to contact them. This is understandable, given the sheer scale of enquiries. The explicit nature of an open course seems to be having a detrimental impact on the course itself.

Finally, there is a sense of isolation amongst students. This ties into the community feel, and is driven by the quite limited opportunities to connect with each other. On this course, the other participants are simply a list of names. There are no opportunities to connect outside of the FutureLearn platform or the prescribed activities.

There are early signs that students are simply there to ‘complete’ rather than interact. For example, it’s useful to note the lack of replies to the introductions. This could be for several reasons, but a suggested factor could be the sheer volume (there are currently 12 pages of introductions!). As a student myself, there is little incentive to reply if there’s no chance of fostering relationships or keeping track of people you’ve noted. Harking back to an earlier part of IDEL, it’s interesting to note that no rich media is used by students; it’s primarily the written word. As we know, the written word does not have the ‘thick’ descriptions (REF) that an image or video can provide. It would be natural to draw comparisons with IDEL, where our introductions were videos and images using external tools. This visualisation ‘embodied’ the student and humanised our peers, which I think plays a factor in driving relationships.

I thought it was interesting to observe that as a student it is possible to comment on specific elements, rather than discuss as a group. The videos and entries on specific pages could be considered the social objects, rather than the broader themes perhaps. This doesn’t particularly facilitate an ‘open conversation’.

Image: Many discussion posts have no replies.

Another aspect to consider in ‘openness’ is the overall course journey. In the MOOC some activities are locked; it’s very much a fixed pathway. Course outcomes are fixed and defined – it’s very prescriptive in terms of what you can gain. In comparison with IDEL, while there is a strong sense of direction, there is the opportunity to meander and align with your own personal goals. There is also a sense of flexibility, for example, the additional Minecraft session that was discussed. It’s perhaps not fair to compare the two courses, but I think it’s useful to highlight the contrasts.

In summary, I think it can be argued that MOOCs remove some of the barriers in place, as highlighted by Bayne, Knox, Ross (2015). However, the very format of ‘open’ impacts the course delivery, which in turn throttles the value that can be gained.

Access is open, but once part of it, the experience can be quite closed. Compare this with IDEL, where access is significantly more restricted, but once part of it, the experience is very open.

Overall, I do think it’s important to acknowledge that MOOCs can be a good thing. They open many doors and provide access to knowledge and expertise that could otherwise be seen as less accessible. I think if they were repositioned, however, as ‘tasters’ and a way of exploring interest in new topics, this may help generate more interest. Perhaps it would also drive changes to a format that does not seem to motivate students.

In terms of taking these reflections into my professional life, I think it only reaffirms some of the thoughts I’ve had throughout this course. Mainly that a synchronous course, with a ‘social presence’ at the heart of it, and key involvement from a course leader leads to a much more fruitful experience for all involved. I think the MOOC observations add weight to the understanding that a commitment is key (this doesn’t necessarily have to be financial), and that less is perhaps more in terms of cohort size.


  • Bayne, S., Knox, J., Ross, J. (2015). Open Education: the need for a critical approach. Learning, Media and Technology Special Issue: Critical Approaches to Open Education. 40(3). pp. 247-250.
  • Emanuel, E.J., 2013. Online education: MOOCs taken by educated few. Nature, 503(7476), pp.342-342.

Open Education – the importance of a critical approach

As Bayne, Knox, Ross (2015) highlight in their paper, there has been a continuing trend towards ‘openness’ in many walks of life. My own experiences in this space have largely been in open technology, and the use of this within education/training. There’s certainly a lot of positivity about it: it’s seen as low-cost, tapping into a wealth of expertise, and something that integrates well with other open source technologies.

But open source technology isn’t the panacea it’s often portrayed as. ‘Crowd-sourced’ technology can often be bloated, and this can make it expensive to maintain. Given technical developers are usually at the core of designing and building the software, I’ve also found that this often comes at the expense of the user-experience (more specifically experience for the student).

Take Moodle as an example. It is largely designed from an administrator’s point of view. Although significant progress has been made to make it a better experience for students, it still follows the block format and suffers from poor design. I’ve found that many people using Moodle often don’t see the inherent flaws in the student experience, but I think that’s largely down to becoming accustomed to its nature.

Open source technology has also been championed as it’s been seen as wrestling control away from private vendors, and creating something ‘for the people’. However, the downsides of this approach are often overlooked.

In many ways, Bayne, Knox, Ross (2015) echo this from the perspective of open education. They suggest that a critique of open education is well warranted, and their paper aims to provoke discussion around this.

The first things that came to mind when ‘open education’ was brought up were the Open University and MOOCs. The paper argues that ‘open’ is largely defined as negating the hierarchical, economic and geographic aspects of ‘closed’ education – the Open University certainly aims to reduce geographical and financial boundaries, and MOOCs perhaps all three. However, the paper argues that these three pillars are not the only ones that should be considered.

There are many recurring themes in this paper from some of the previous readings:

  • The term ‘open’ needs tightening. This ambiguity has created a tendency to focus on the more attractive aspects of ‘open’, without giving it the critique it warrants. This echoes Bayne (2015), when discussing the terminology of ‘technology-enhanced learning’ and how the choice of terminology has quite far-reaching consequences. In the same way, open education is framed by the fact that it is ‘not closed’, and again the subtleties, richness and complexities can be missed by taking this perspective.
  • An increasing neoliberalistic influence on education is being manifested through open education. This focus on efficiency has been mentioned several times, Biesta, G. (2012) being of particular note.
  • The role of the teacher is also in flux. Selwyn, N. (2011) discusses this from the perspective of new technologies. While ‘open’ and technology are not the same, there is an inherent link here, and therefore the same issues are apparent.

I’m glad this paper picks up on some of the themes of the self-directed learner. Most of the discussion in this area has been largely focussed on the role of the teacher, and not how things have changed from a learner’s perspective. I find this is at the core of the many flaws around ‘elearning’ (if this term is to be attributed to the self-taught style that is commonplace). It assumes that the role of the teacher can be baked into the content and that the user is self-motivated enough to complete this without any interaction. I particularly like the phrase that “such openness is only a solution for the imagined autonomous subject”.

I’m looking forward to digging into these themes over the rest of the week!

Bayne, S., Gallagher, M. S., & Lamb, J. (2014) Reflections

Being back ‘at’ university over the last seven weeks, after so long outside of more formal education has been a challenging, but invigorating experience. This week we’ve been building on previous themes around online environments, community and spaces to think about our experiences as distance education students, and what it means for us to be ‘at’ university.

Bayne, Gallagher and Lamb’s paper is a great resource as it dissects the views of previous MSc students on this very course – so fantastic material for IDEL and this topic!

Some key points I took from the paper:

  • Distance education is often theorised through the lens of on-campus education. There are inherent issues and limitations with this. “The distancing of education makes possible new spatial practices, new patterns of movement and ‘new proximities’.”
  • Traditional perspectives from social science on spaces are not adequate to look at the ‘hosts, guests, buildings, objects, and machines’ at play in the topology of courses such as this – the ‘new mobilities paradigm’.
  • The authors used four kinds of proposed social spaces to consider the social and mobility aspects of distance learning – regional, network, fluid and fire. These are intertwined and occur simultaneously; they are not mutually exclusive.
  • The physical campus is important to distance students, but in varying ways to different students.
  • Some students have an emotional/sentimental/nostalgic connection with the physical location – others very little.
  • Absence is as important as presence in looking at the relationship between university and student (and can be overlooked if just viewing through ‘on-campus’ perspective).
  • The campus can be fluid and transient – a “cognitive (piece of)… real estate”.
  • There may need to be ongoing calibration of what it means to study ‘at’ a university in light of this, for all stakeholders including tutors, administrators, academic leads and students themselves.

This paper came at a good time – I’d already been wrestling in my head what it meant to be studying at the University of Edinburgh. I certainly share similar thoughts to some of the students who were interviewed as part of the research, in particular, Matthew Gillon’s quote that:

“In a strange way, I didn’t feel that I wasn’t in Edinburgh.”

This double negative is important. I don’t feel like I’m in Edinburgh (as I’m not), but I don’t feel as if I’m not in Edinburgh – there’s an important distinction here.

Erik Credle’s perspectives also strike a chord:

“I feel a sense of belonging to the University, but at the same time I dont feel that I am actually part of the University.”

I certainly feel a sense of belonging to the course and the team behind it, my fellow students, but perhaps differently to Erik I don’t feel a real connection with the university. At present, I feel more of an affinity and connection with the city than the university, which feels like a contradiction in terms. I think this may be the way I am viewing the university, as a physical place.

Strong interactions with the course and its various participants create the bond with the course. I think having visited Edinburgh before – being able to visualise, feel and experience the environment – is an important contributor too.

I can’t help but feel that never having visited the university campus has had a detrimental impact on my relationship with the university, but in any case, I don’t think this is an important factor for me. The university matters most pre-course, when gauging the likely rigour and quality that has gone into creating and curating the learning experience. I think I’ll need more time to let this percolate and consider it fully!

Reviewing this paper in light of recent readings, I again spotted some recurring themes:

  • The course repeatedly questions the approaches used in current discourse within digital education. Just as Friesen questions the instrumentalist and essentialist approaches to technology (and the blind spots these cause), this paper echoes that stance in its approach to viewing ‘non-campus education’. Such critical approaches can only be a good thing, and sit at the very core of good scientific practice!
  • Rather than seeing things as separate strands (e.g. Friesen’s view of essentialism and instrumentalism), there is an ongoing challenge to view things as symbiotic and intertwined (with good reasons why!). Here we see this approach taken to the spatial topologies discussed.
  • Again there are issues with terminology and the implications of the choices made. Just as Bayne argued against the weaknesses of the phrase ‘Technology-Enhanced Learning’, here we see a similar argument around ‘Distance Learning’. These could be seen as ‘growing pains’ of the increasing introduction and usage of technology, but they are important to discuss and address.

Before reading the paper, and the alumni comments within it, it was useful to consider the questions posed.

My own experience of arriving at the University of Edinburgh was of the rather overwhelming number of contact points and breadth of information sent to me by email! It didn’t feel like a smooth experience, but in retrospect, I think it was a key part of realising that I’m studying at university again. With all the different departments and administrative functions in contact, you get a sense of the size and scale of studying at an institution, and I think this was an important realisation. I’m not sure of its importance yet, but receiving the invoice and paying the course fees also had an effect – I’m still trying to distil what it was! All these different touch-points contribute to processing and establishing the experience, and without them, I may have taken a different approach (or level of focus?) to the course. By contrast, you don’t get this with a MOOC (no emails on matriculation, no information on freshers’ week), and this inevitably adds to the unconscious feeling of uni ‘-lite’. As someone involved in the positioning and marketing of products, this really stood out.

On reflection, I think Twitter has been an essential tool for getting a wider understanding of the University of Edinburgh. Although I’ve not actively engaged with them, it’s interesting that I feel I have a connection with two of the paper’s authors, Sian Bayne and Michael Gallagher. I see what they are discussing, what piques their interest, and who they, in turn, engage with. It helps position them as thought leaders in active debate, and makes the course feel even more ‘alive’ and ever-evolving. It’s interesting that Twitter hasn’t been a central part of the course, yet without it, I feel the wider experience would have been poorer.

Finally, I absolutely loved the concept of a ‘digital postcard’. Wow, what a fantastic idea. Naturally, the first thought is “why not just use a video, that’d capture audio too?”, but video forces you to go along with a pre-set pace and narrative, rather than allowing the viewer to explore and digest at their own speed. One tool we use heavily in my business is H5P. It’s a WordPress plug-in, used to add interactivity to media using HTML5 (think of it as a poor person’s Thinglink). Continuing the experimentation, here’s my own digital postcard from my place of study.
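For anyone curious how an H5P piece actually lands on a page, here’s a minimal sketch. Once the plug-in is installed and a content item is created in the WordPress admin (an Image Hotspots activity suits a postcard nicely), it’s dropped into a post with a shortcode; H5P content can also be shared as an iframe embed for pages outside WordPress. The `id` and URL below are illustrative only – yours will differ:

```
<!-- Inside a WordPress post or page: embed H5P content item #1 (illustrative id) -->
[h5p id="1"]

<!-- Or, on a page outside WordPress, via the embed snippet H5P generates
     (URL shown here is a placeholder for your own site's embed link) -->
<iframe src="https://example.com/h5p/embed/1"
        width="800" height="600" frameborder="0"
        allowfullscreen="allowfullscreen"></iframe>
```

The shortcode approach is the everyday one for WordPress authors; the iframe is handy when you want to share the same interactive postcard on another site or blog.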

(Caveat – my desk isn’t usually this untidy. It’s been a crazy week, but all the junk adds to the experience!).