IDEL Assignment: The value of open badges in education

Open badges are already in use by many educational institutes, not-for-profits and companies as a way of visually communicating skills and achievements (Mozilla, 2017). They are a branch of digital badges, differing in that they are built on a shared technical framework that allows any organisation, whether commercial, educational or otherwise, to issue open badges with consistent attributes.
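
To make the idea of "consistent attributes" concrete, the sketch below shows, as a loose illustration rather than a faithful copy of the specification, the kind of metadata an open badge carries: what was achieved, against which criteria, issued by whom, to whom, and with what evidence. All names, URLs and dates are invented placeholders.

```python
import json

# A minimal sketch of the shared structure behind an open badge, loosely
# mirroring the badge-class / assertion split. Every URL, name and date
# here is a placeholder, not a real issuer or learner.
badge_class = {
    "type": "BadgeClass",
    "id": "https://example.edu/badges/digital-education-101",   # hypothetical badge definition
    "name": "Digital Education 101",
    "description": "Completed the introductory digital education module.",
    "criteria": {"narrative": "Submitted all weekly activities and the final essay."},
    "issuer": "https://example.edu/issuer",                      # hypothetical issuer profile
}

assertion = {
    "type": "Assertion",
    "id": "https://example.edu/assertions/12345",                # hypothetical award of the badge
    "recipient": {"type": "email", "identity": "learner@example.com", "hashed": False},
    "badge": badge_class["id"],
    "issuedOn": "2017-12-14T00:00:00Z",
    "verification": {"type": "hosted"},
    "evidence": "https://example.edu/portfolio/essay-on-open-badges",  # link back to the work itself
}

print(json.dumps(assertion, indent=2))
```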

Digital badges are used in education in several prominent ways (Gibson, 2012; Jovanovic, 2015). Firstly, as a way of motivating students to progress in their learning journey. Secondly, as an alternative assessment medium. Thirdly, as a pedagogical tool to signpost learning journeys. Finally, digital badges are increasingly used as a method for credentialing a learner’s achievements.

To evaluate the value of open badges in education, we must consider each of the ways in which they are used, with an eye on the benefit to the different stakeholders interacting with them. We must also consider the specific affordances of open badges compared with simpler digital badges.

Open badges as a motivational tool

Gibson (2015) highlights the use of badges as a gamification method, alongside other options such as points and leaderboards.

While “meaningful gamification” (Nicholson, 2012) may be seen as a way of building intrinsic motivation, gamification as a whole is not without its critics (Bogost, 2011), and some hold the view that empirical research to date has yet to yield strong evidence of its impact (Hung, 2017), largely due to the broad definition of the term and how it is used in practice.

Teachers and educators considering digital badges as a motivational tool in their course design should bear in mind that “motivation is affected by context” (Hartnett, 2016). As with any educational tool, a strong understanding of the students and their motivations to learn will shape what is deployed and how.

Abramovich (2013) highlights how differing levels of prior knowledge, as well as the type of badge used, can affect the usefulness of badges as a motivational tool. It’s perhaps also pertinent to consider the notion of sacrifice (Halavais, 2012) in helping to avoid the undesirable outcome of a badge actually becoming demotivating. This could happen if a badge becomes too easy to attain (thereby cheapening its value) or, on the flip side, too difficult to acquire.

Educators should also be wary of the risk that the acquisition of badges may come at the expense of the learning activity required to achieve them (Jovanovic, 2015).

While there is considerable discourse about the value of digital badges as a motivational tool, there seems to be limited insight or evidence about how the unique attributes of open badges contribute in this area. However, there are examples (Badgecraft, 2015) of how the technical nature of open badges has been thought to encourage participation amongst groups of learners, which in turn may influence the depth, volume and nature of learning. Given this, it is useful to consider the role that community (Halavais, 2012) and “symbolic capital” play in the value of the portability of open badges.

Open badges as an alternative source of assessment

Jovanovic (2015) highlights the opportunity open badges present as a new device for assessment, for instance, peer assessment and the recognition of soft skills.

There are advocates for using open badges in this way (Strunk, 2017). Such use perhaps inevitably intertwines with their use as a credential, but as an assessment tool the metadata underpinning an open badge can lend verifiable depth to a learner’s progress. This in turn can be used to build a “transparent narrative of a learner’s knowledge” (Strunk, 2017).

It’s important to note here that open badges have a distinct advantage over standard badges: their metadata makes them better suited to this use. It could be argued, however, that as open badges have not yet been tested in this area at any great scale, the challenges may not yet be on the radar.

Open badges as a pedagogical tool

Jovanovic (2015) introduces digital badges as a way to “scaffold” a learning experience. For teachers and students this is particularly pertinent given the understanding of how autonomy can aid student motivation (Hartnett, 2016). As such, badges could be seen as a useful instrument for exploring the sweet spot between choice and guidance. There’s a natural link here into motivation, as badges used as signposts may “focus student attention” and “nudge student exploration” (Rughiniş, 2013).

Again, the research into how the specific aspects of open badges factor into this usage seems limited. To speculate, however, the common framework could allow institutes to collaborate and signpost other ‘tracks’ that travel beyond their own institutional boundaries. This could also widen to incorporate learning analytics, providing increased understanding of how open badges can influence learning pathways (Strunk, 2017).

Open badges as a credential

While it could be argued there is value in the use of badges in the contexts discussed so far, the evidence for the enhanced value open badges provide seems limited. This criticism could be judged as unfair, as the most “obvious use” (Glover, 2013) for open badges is as a method of credentialing achievements in learning. The focus on this use is perhaps due to the metadata attached to open badges, which provides insight into the “context, meaning, process and result of an activity” (Gibson, 2015), distinguishing them from simple badges. Yet despite open badges being most akin to a credential, there is considerable discourse about the weaknesses of this use.

There are three perspectives to consider (Kerver, 2016) when judging the value of open badges as a credential: the badge holder (or student), the badge issuer (e.g. an educational institute) and the badge enquirer (a different educational institute or an employer). I will focus primarily on the badge enquirer, because as a credential the badge is ultimately displayed by the holder for inquiry, and it is the enquirer’s viewpoint that is of most concern to the student.

The first challenge for an enquirer is to gauge what the badges represent. Given that an open backpack can hold any badge from any issuer (for example, the portfolio could hold badges of progress, micro-credential badges or a representation of a more traditional qualification), this may cause challenges due to the conflicting rationales for acquisition, or the “monstrous hybrids” (Halavais, 2012). While the learner may have the flexibility to decide what, how and where their achievements are displayed, there’s no guarantee that a learner would choose to do this, or indeed know what an enquirer wishes to view from their portfolio. Adding to this, there are the inevitable aspersions cast when an enquirer views badges as a motivational tool rather than a credential, with the attendant risk of cheapening the credential (Bull, 2014).

Given the range of badge visualisations that could be employed by different issuers, at first glance it may be difficult to gauge the ‘sacrifice’ (a key contributor to the perception of badge value identified by Halavais, 2012), and the ranking of sacrifice across the portfolio. (In practical terms, it’s quite possible that a badge representing a Ph.D. could sit alongside a yoga participation badge.) Bear in mind that a badge, by its very nature, is intended to be a visual shortcut. It seems that while a badge may have symbolic capital within the community in which it was created, it may hold little sway outside of that community, and ultimately this is the premise of ‘openness’.

It may also be worth considering the replication issues around badges. As Halavais (2012) quite rightly points out, something is only as valuable as it is difficult to fake. Given that anyone can issue credentials without barrier, and with no constraints around design, badges could be easy to falsify, and there is a lure for issuers to piggyback, through their visualisation choices, on the symbolic capital perceived in other issuers.

Given the challenges around the context in which badges can be viewed, the visualisation choices and the ability to fake, it seems a logical suggestion that for an enquirer to gauge the value of an open badge, they have to fall back on the very guardian classes (Halavais, 2012) that the premise of open badges intends to unseat. The digital artifact included in this essay attempts to represent this ‘vicious circle’ visually.

It’s important to note that the challenges in understanding the value of an achievement (open badge or otherwise) are not a new thing. Enquirers have always had to make judgments based on what a credential represents, whether this is paper-based, online or otherwise. However it seems the construct of open badges does not resolve this problem, and indeed once scaled up will still struggle to articulate what a learner has accomplished, and what this represents.

Putting the student to one side for a moment, it’s important to acknowledge some of the benefits that open badges do provide institutions. The technical infrastructure permits increased control to manage credentials over time; allows them to be digitally signed and verified; provides the opportunity to tie them into learning analytics to gauge enquirer behaviour; and can provide increased brand awareness opportunities for the institute (Kerver, 2016). However, it could be argued that these benefits can only have a significant impact if open badges become the standard. Without universal adoption (which I’d argue the flaws identified above may prevent), they could simply become another tool to manage, and so rather than saving time could actually increase overhead.
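
As a rough illustration of the ‘verify’ part of that infrastructure, the sketch below shows what a naive check of a hosted badge might look like: re-fetch the assertion from the issuer’s own server and confirm it is still published and not revoked. The URL is hypothetical and the simplifications are mine; a real verifier does considerably more than this (recipient identity, issuer profile, signed badges, and so on).

```python
import json
import urllib.request

def naively_verify_hosted_badge(assertion_url: str) -> bool:
    """Very rough sketch of 'hosted' verification: re-fetch the assertion from
    the issuer's server and check it is still there and not revoked.
    Illustration only, not a reference implementation."""
    try:
        with urllib.request.urlopen(assertion_url, timeout=10) as response:
            assertion = json.load(response)
    except (OSError, ValueError):
        return False                      # unreachable, or not valid JSON
    if assertion.get("revoked"):
        return False                      # the issuer has withdrawn the credential
    return assertion.get("type") == "Assertion"

# Hypothetical usage:
# print(naively_verify_hosted_badge("https://example.edu/assertions/12345"))
```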

When we consider the value of open badges, one should also remember that this is still an emerging technology. Open Badges 2.0 has just been released (IMS Global, 2017); however, this seems to be focussed on technical enhancements, and at first glance these do little to counter some of the fundamental issues outlined earlier. There’s also a sense of pragmatic acceptance that badge systems will fail before they succeed (Carey, 2012), yet five years after this viewpoint “the value of open digital badges has yet to be validated by compelling evidence” (Strunk, 2017).

It’s also possible to challenge the notion that open badges are indeed ‘open’. There are many providers of open badge ‘backpacks’ (Hamson-Utley, 2016), and using badges requires registration with one of these parties. A single point of registration becomes a potential single point of failure, and in practice has been noted as a barrier, albeit a small one (Hole, 2014). So while badges are portable, they are only as portable as the technology they are carried in allows them to be. There could also be increasing concerns for educational stakeholders about the openness of the platforms themselves, with commercial companies like Pearson (Belshaw, 2017) and Salesforce looking to explore this area (Google Groups, 2017).

It’s difficult to envisage a future for open badges as a meaningful form of credentialing without referencing the hierarchy they are intended to disrupt. There are attempts to bring a sense of hierarchy into the visualisation of badges (Belshaw, 2015); however, this may be little more than skin-deep. Perhaps if open badges were to move towards the proposition that they are a gateway to a portfolio of accomplishments, rather than the end point in themselves, this could encourage adoption.

I propose that the very openness of open badges is the chink in the armour that makes their acceptance as a format by learners and enquirers unlikely, and this acceptance is the critical factor in universal adoption. While open badges do have value in several respects, there does not seem to be clear evidence of the increased value of openness over standard badges in these usages, and arguably their greatest advantage (as a credential) could be seen to be flawed.

References:

  • Mozilla (2017) About Open Badges. Open Badges. (Accessed: 14 December 2017).
  • Halavais, A.M.C. (2012) A genealogy of badges. Information, Communication & Society, 15(3), 354-373. DOI: 10.1080/1369118X.2011.641992
  • Gibson, D., Ostashewski, N., Flintoff, K. et al. (2015) Digital badges in education. Education and Information Technologies, 20, 403. doi:10.1007/s10639-013-9291-7
  • Jovanovic, J. and Devedzic, V. (2015) Open badges: novel means to motivate, scaffold and recognize learning. Technology, Knowledge and Learning, 20, 115-122.
  • Nicholson, S. (2015) A RECIPE for meaningful gamification. In: Reiners, T. and Wood, L. (eds) Gamification in Education and Business. Cham: Springer.
  • Bogost, I. (2011) Gamification is bullshit. (Accessed: 18 December 2017).
  • Hung, A.C.Y. (2017) A critique and defense of gamification. Journal of Interactive Online Learning, 15(1), 57.
  • Hartnett, M. (2016) Motivation in Online Education. Singapore: Springer. DOI: 10.1007/978-981-10-0700-2. pp. 78-79.
  • Abramovich, S., Schunn, C. and Higashi, R.M. (2013) Educational Technology Research and Development, 61, 217. https://doi.org/10.1007/s11423-013-9289-2
  • Strunk, V. and Willis, J. (2017) Digital badges and learning analytics provide differentiated assessment opportunities. Educause Review. (Accessed: 15 December 2017).
  • Badgecraft (2017) Open badges to motivate engagement. Eastern Partnership Youth Forum. (Accessed: 18 December 2017).
  • Rughiniş, R. and Matei, S. (2013) Digital badges: signposts and claims of achievement. In: Stephanidis, C. (ed.) HCI International 2013 – Posters’ Extended Abstracts. Communications in Computer and Information Science, vol. 374. Berlin, Heidelberg: Springer.
  • Glover, I. and Latif, F. (2013) Investigating perceptions and potential of open badges in formal higher education. In: Herrington, J., Couros, A. and Irvine, V. (eds) Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2013. Chesapeake, VA: AACE, 1398-1402.
  • Kerver (2016) Whitepaper on open badges and micro-credentials. SURF. (Accessed: 20 December 2017).
  • IMS Global (2017) Open Badges v2.0: IMS Candidate Final / Public Draft. (Accessed: 14 December 2017).
  • Bull, B. (2014) Beware of badges as biscuits. Etale – Education, Innovation, Experimentation. (Accessed: 20 December 2017).
  • Hamson-Utley, J. (2016) Badging models in faculty development. (Accessed: 20 December 2017).
  • Hole, A. (2014) Open badges: exploring the potential and practicalities of a new way of recognising skills in higher education. Journal of Learning Development in Higher Education, Special Edition: Digital Technologies, November 2014. (Accessed: 18 December 2017).
  • Belshaw, D. (2017) Pearson WTF. Open Educational Thinkering. (Accessed: 20 December 2017).
  • Google Groups (2017) Open Badges. (Accessed: 19 December 2017).
  • Belshaw, D. (2015) Towards a visual hierarchy of Open Badges. Open Educational Thinkering. (Accessed: 20 December 2017).

Icons in graphic used courtesy of flaticon.com

Reflections on IDEL

Has it only been twelve weeks since the start of IDEL? It feels like a lifetime ago, yet just a fleeting moment at the very same time. Having come through the other side I feel a heady mix of inspiration, an excitement for the future, a sense that my understanding in key areas has really come along, with a slight tinge of battle-weariness and trepidation heading into the assignment. One last push and all that…

It’s been interesting to reflect on the last three months, particularly in light of my first blog post. I wasn’t entirely sure what to expect when starting. It’s almost like starting a new job (what are my colleagues (students) going to be like? Have I made the right decision? Am I going to stick out like a sore thumb?), but it’s been the right move. The experience has been quite the revelation – it’s certainly one of the better decisions I’ve made in the last few years.

Before I get stuck into the actual content of the course, one of the real highlights has been the course experience as a whole. I’ll be honest, I’d lost my faith a little in digital learning before starting the course. I think this was largely on the back of client requests for rather linear course formats that did little to take advantage of what online learning offers, and perhaps I’d been suckered into this. I’m glad to say my eyes have been re-opened to the possibilities, with an increased vigour to explore the boundaries where possible.

There are many reasons why IDEL ‘works’, but for me, it’s because of the following factors:

  • The design of the course. It’s well crafted, with an obvious appreciation for feedback and incremental improvements each semester.
  • Each activity seems to be planned with real precision, particularly with regards to the ‘running order’. There seems to be a reason for everything, and this builds confidence as someone new to this game.
  • It places ownership on the student to deliver, but with very solid support and guidance if you need it.
  • It feels like the course ‘practices what it preaches’. This was my suspicion prior to the course, but I’m glad it’s turned out to be true.
  • I was a little skeptical about the mix of contact points throughout. Different tutors every couple of weeks, with a separate contact for the blog. How can a useful bond be struck up without it feeling like being passed from one to another? I needn’t have worried. The various course tutors obviously work closely together, and there’s a sense that a lot is discussed ‘behind the scenes’. There’s an obvious passion for the course, and a real sense of duty (and respect) towards students on the course.
  • The course is very much one of curation, rather than instruction. I realise many of the papers involved some of the Edinburgh team in one form or another, but I think this is a result of the University’s standing in this field. I’d never been this deeply involved in a course that, in many areas, happens ‘outside’ the VLE, and this is one of the key points I’ll take forward.

What could have been different? One of the key themes throughout has been the push to keep a keen eye on things, so in keeping with that, I think it’s worth pointing out some areas that were a challenge. Personally, I found some of the initial subjects quite tough, but I think that’s down to my history being primarily commercial rather than educational. They were hugely interesting and of value, but the learning curve was a little steep to start with. As the course progressed it moved away from the educational angle in part, and as such was more familiar territory.

I do think the forum format could be improved somewhat – it seems a little tired and not optimised for dialogue. I found it difficult to keep track of the various threads, and which ones I’d already read. As always, Moodle could benefit from a ‘lick of paint’ to make it more visually pleasing, but again I think this lack of polish is actually conducive to the course – it doesn’t distract. And if we think of the VLE as part of the University of Edinburgh, I’d probably guess it fits with some of the more historic nature of the buildings? 😉

After the first sanctuary, there was something of a lull between the students. An activity such as a Skype chat could have brought everyone back together, and may be worth considering for next semester. I just felt we all returned in dribs and drabs and the network wasn’t as strong as it was in the first half of the course. Surprisingly, I felt the sense of community was at its peak mid-way through the course. But I also suspect life catches up with many after a few weeks: after that initial impetus to get going, there are demands on our time outside of the course that can’t be put off any longer.

Back to the course itself, and it’s difficult to summarise, simply due to its wide expanse. If I had to try and distill down some of the key messages that have had the biggest impact on me, they’d be:

  • Be wary of anyone talking about a revolutionary technology in education; it’s unlikely to have the impact that’s expected, or in the ways they are expecting
  • The role of teacher may change with the increasing use of digital technologies within education, but this does not make it any less critical
  • A digital learning experience is as much a part of the institution that’s delivering it as any bricks-and-mortar setting
  • We need to make sure everyone has a voice around the table in terms of technological use and development – at present Silicon Valley and commercial companies perhaps shout the loudest
  • Technology is influencing the direction of education as much as it is a tool (the rhizomatic/constructivist viewpoint)
  • Technology is not neutral. It carries historical, societal and political influences.
  • Automated need not mean less personal.
  • A critical eye is key to avoid being swept up in the latest fashion, and to maintain focus on what really matters.

These are quite broad, but without regurgitating the entire course, it’s difficult to go into more specific detail.

Regarding my own performance on the course, I wish I had attempted more multi-modal blog posts, but I think time restricted this. I’m hoping that in future modules I’ll have more opportunity to explore this, but as always time is the critical factor.

In terms of my professional life, the IDEL experience is having an immediate impact. I’ve been able to use what I’ve learned to help challenge some pre-defined beliefs with clients. It’s also given me additional confidence, largely because the content has helped fill some of the (significant) gaps in my thinking and knowledge. I also use the IDEL experience as a benchmark in many ways, talking about my three months and how clients could use curation and external tools to good effect, for example.

One of the more surprising outcomes has been my attempts to explain terms such as ‘instrumentalism’, ‘constructivism’ and ‘rhizome’ to my other half; it’s always useful to try to articulate these to someone not involved in the subject to see if you really understand the concepts!

Finally, a real plus point has been all the doors it’s opened, both in terms of new subject areas to explore and the connections I’ve made. It’s been eye-opening to say the least, and I’m looking forward to seeing how this develops in 2018!

“LARCing” about

Catching up on Week 11 after a bout of sickness, I’ve been playing with the LARC tool provided by the University to explore the topic of learning analytics.

It provides different styles of reports (lenient, moderate, or strict) based on metrics typically used within learning analytics (attendance; engagement; social; performance; persona). I dived straight in and picked the extremes around engagement, simply because ‘engagement’ seems to me a particularly woolly metric…

LARC report, w/c 20 Nov 17. Strict, engagement.

LARC report, w/c 20 Nov 17. Lenient, engagement.

The contrast between the two is quite stark. The lenient style seems more human – it’s more encouraging (“your active involvement is really great”) and conversational/personable (“you seemed”… compared with “you were noticeably…”).

Despite both being automated, the lenient style feels less ‘systematic’ than the strict. Does this suggest that humans are more likely to be lenient and accommodating, or is it simply that we associate this type of language less with automation – so it doesn’t feel more ‘human’, just less ‘computer’? This certainly chimes with insights into the Twitter ‘Teacherbot’ from Bayne (2015). The line between human and computer is becoming increasingly blurred through the use of artificial intelligence, and how students react to these interactions is of particular personal interest.

I think it’s interesting to consider how one responds to each style. Given my engagement appears to be satisfactory at a base level, the feedback isn’t necessarily looking to provoke a particular response. However, if my engagement were less than satisfactory, I’m not sure which style would provoke the better response and get me into action. I guess it depends whether it’s the ‘carrot or the stick’ that is the better driver for the student.

The examples above make me consider the Course Signals project in more detail, which was discussed in Clow (2013) and Gasevic et al (2015). From my understanding, this project provides tutors with relevant information about their students’ performance, and the tutor decides on the format of the intervention (should it be appropriate to make one). The LARC project has gone one step further, it seems, in that the style of response has already been chosen. Referring to my initial point about choice of style, in the Course Signals approach the tutor would ultimately make this choice based on their understanding of the student. That’s not to say this couldn’t be delivered automatically with some increased intelligence – it would just need some A/B testing early in the student’s interaction with the course to trial different forms of feedback and see what provokes the desired response. Of course, this discovery phase would bring with it significant risks, as students are likely to receive erratic and wide-ranging feedback when their engagement with the course is at its most embryonic.
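
As a toy illustration of that A/B idea (my own sketch, not the actual LARC implementation), the snippet below deterministically assigns each student a feedback register and then phrases the same underlying engagement metric in that register. The thresholds and wording are invented, loosely echoing the two example reports above.

```python
import hashlib

def assign_style(student_id: str) -> str:
    # Deterministic 50/50 split so a student always sees the same register.
    bucket = int(hashlib.sha256(student_id.encode()).hexdigest(), 16) % 2
    return "lenient" if bucket == 0 else "strict"

def engagement_feedback(style: str, posts_this_week: int) -> str:
    # Same metric, two registers; the cut-off of three posts is arbitrary.
    if style == "lenient":
        if posts_this_week >= 3:
            return "Your active involvement this week is really great - keep it up!"
        return "You seemed a little quieter than usual this week; it would be good to hear more from you."
    if posts_this_week >= 3:
        return "You met the expected level of engagement this week."
    return "You were noticeably below the expected level of engagement this week."

# Hypothetical students and activity counts:
for student in ["s1001", "s1002"]:
    style = assign_style(student)
    print(student, style, "->", engagement_feedback(style, posts_this_week=1))
```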

As a side note, Clow (2013) discusses the development of semantic and more qualitative data aggregation, and this being put to more meaningful use. Given this, perhaps a logical next step would be to develop the algorithms to understand the register and tone of language used in blog posts and relay feedback to the student in a similar style (as a way of increasing engagement).
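
To make that speculation a little more concrete, here is a deliberately crude sketch of how a post’s register might steer the choice of feedback style. The word lists and the rule are invented for illustration; a real system would use proper natural language processing.

```python
# Toy register detector: count a few informal and formal cues, then pick a
# feedback style that mirrors the student's own tone. Purely illustrative.
INFORMAL_CUES = {"really", "pretty", "stuff", "guess", "loads"}
FORMAL_CUES = {"furthermore", "therefore", "consequently", "moreover"}

def guess_register(post: str) -> str:
    words = post.lower().split()
    informal = sum(w.strip(".,!?") in INFORMAL_CUES or w.endswith("!") for w in words)
    formal = sum(w.strip(".,") in FORMAL_CUES for w in words)
    return "conversational" if informal >= formal else "formal"

def feedback_style_for(post: str) -> str:
    # Mirror the student's register when relaying automated feedback.
    return "lenient" if guess_register(post) == "conversational" else "strict"

print(feedback_style_for("I really enjoyed this week, loads of interesting stuff!"))
print(feedback_style_for("Furthermore, the reading consolidated my understanding of agency."))
```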

Going back to the LARC project, I thought it’d be useful to look at attendance, particularly in light of Gasevic et al’s (2015) references to the pitfalls in this.

LARC report, w/c 20 Nov 17. Moderate, attendance.

Gasevic uses three “axioms” to discuss the issues in learning analytics. One of these is agency, in that students have discretion over how they study. A weakness in analysing attendance, in particular, is therefore going to be in benchmarking, both against the student’s prior history and against the cohort as a whole. This was done by design by the UoE team: we were asked to generate LARC reports based on a week when activity largely took place outside the VLE, namely on Twitter. As such there’s an issue here, in that the tool does not have the context of the week factored into it, which raises questions about the term ‘attendance’ as a whole. Attendance has been extrapolated from the number of ‘logins’ by the student, and the two may not be as compatible as they look on first reflection.
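
A toy example of that extrapolation problem, using made-up log data: if ‘attendance’ is simply the count of VLE logins in a week, a week spent working on Twitter registers as absence.

```python
from collections import Counter

# Invented VLE login events (week, student). Week 11's activity happened on
# Twitter, so the VLE log is almost empty even though the cohort was active.
vle_logins = [
    ("week_10", "s1001"), ("week_10", "s1001"), ("week_10", "s1002"),
    ("week_11", "s1002"),
]

attendance = Counter(week for week, _student in vle_logins)
for week in ["week_10", "week_11"]:
    print(week, "logins recorded:", attendance.get(week, 0))
# week_11 looks like poor attendance, despite plenty of off-platform activity.
```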

When comparing with the wider group, it’s also easy to point out potential holes across the group. One student may prefer to log in once, download all the materials and digest before interacting on the discussion forums. Another may be more of a ‘lurker’, preferring to interact later in the week, perhaps when other commitments permit.

Ultimately this all starts to come down to context, from situational, pedagogical and peer perspectives, and this is where a teacher can add significant value. I think one of the wider challenges for learning analytics is the aggregation of these personal connections and observations; however, this raises challenges of bias and neutrality. It seems that learning analytics as indicators can offer significant value, but the extent to which metrics are seen to represent the ‘truth’ needs constant challenging.

References:

  • Clow, D. (2013) An overview of learning analytics. Teaching in Higher Education, 18(6), 683-695.
  • Gasevic, D., Dawson, S. and Siemens, G. (2015) Let’s not forget: learning analytics are about learning. TechTrends, 59(1), 64. DOI: 10.1007/s11528-014-0822
  • Bayne, S. (2015) Teacherbot: interventions in automated teaching. Teaching in Higher Education, 20(4), 455-467.

Reflections on Learning Analytics

Week 11 builds on the previous week’s theme of big data, providing a spotlight on the use of data specifically for learning. Unsurprisingly, there are many links and reference points from topics throughout the rest of IDEL.

The core readings seem to indicate that the field of learning analytics is still very immature, and when compared with the use of other technologies within education, could be considered to be lagging.

It seems, on the whole, learning analytics operate at a surface level at present. Gasevic et al (2015) highlight the complexities around learner agency and pedagogical approaches that can leave glaring holes in the interpretation of any data. Ultimately, in any scenario, educational or not, data needs context to have any meaning (and therefore actionable value), and it seems to be falling short in this area at present.

I enjoyed reading about Purdue University’s Course Signals project in the core readings. The intention behind this project seems to be to empower the teacher, rather than simply ‘measure’ the student. While the positivity around the results reported in Clow (2013) should be taken with a pinch of salt (indeed, Gasevic et al (2015) proffer further critique of this), it would seem that involving the teacher in the choice of interventions recognises the absence of relationship and emotion that these analytics perhaps struggle to encompass. However, it does appear that the aggregation and understanding of qualitative data that could bridge this gap is improving (Gasevic et al, 2015).

I particularly liked Clow’s (2013) description of learning analytics as a “‘jackdaw’ field of enquiry”, in that it uses a fragmented set of tools and methodologies – it could be seen to be taking a rather less cohesive approach than would be required. This is certainly one of Gasevic et al’s (2015) key points – that the field needs to be underpinned by a more robust foundation to really allow it to develop in a useful way.

I wonder if the lack of maturity in this field is a consequence of the nature of the field itself. The learning analytics cycle described by Clow (2012) identifies that learning analytics are gathered and used after a learning activity or intervention has taken place. As has become even more apparent to me throughout this course, the pace of technological change is significant and rapid, and the impacts on education are quite far-reaching.

If technology and tools are being developed, trialled, integrated, ditched and recycled so rapidly, it must inevitably be a challenge to assess them with any rigour. Indeed, Gasevic et al (2015) highlight the lack of empirical studies available to interpret this area. It’s interesting to read in Clow (2013) that the use of proprietary systems impedes this too, through the lack of data made available. This is particularly pertinent given their prevalence across the educational sector, and in turn impacts the assessments that can be made across domains and contexts (Gasevic et al, 2015).

A pervading theme across IDEL has been the discourse around the educational ‘voice’ in the development and use of technology, for example Bayne (2015). Quite rightly, the academic world wants to scrutinise, assess and challenge, but it seems the pace of change makes this less and less possible.

For me, the spectre of the private sector is raised in Perrotta & Williamson (2016). It argues that the use of analytics and interventions is “partially involved in the creation of the realities they claim to measure”. The challenge here is the increasing commercial influence taking place in the field of learning analytics. The paper cites Pearson as an example, as they have the large-scale distribution of products to gather learner data, the resources to interpret and mine this, as well as the size to influence wider policy-making. Given the rhizomatic nature of the development of learning analytics, it seems there are many reasons to be fearful of this development, particularly as it looks to be self-perpetuating.

Of course, I’m keen to keep in mind that this is one side of the argument, and I’m sure the likes of Pearson see themselves as helping to push things forward. Certainly, there are areas where the commercial world can help ‘lift the lid’ on learner behaviour and empower teachers to make interventions – I guess the issue is how much those outside the corporation are at the discussion table. The stark truth is that Pearson’s core responsibility, above all, is to its shareholders, not its students.

My own personal experience has been at a ‘macro’ level, or what Clow (2013) refers to as ‘business intelligence’. As a commercial training provider, we used learner analytics (at a rather shallow level) to understand learner behaviour and to help us understand product performance. Given the commercial nature of the people around me, however, there was probably an unhealthy bias towards how the data could be used to improve commercial metrics. I certainly recognised some of the observations raised by Gasevic et al (2015) around data visualisation, and the pitfalls it can cause.

Given this week’s reading, I’ll certainly be more aware of some of the challenges in this area, particularly around providing metrics without context. There almost needs to be some ‘priming’ of the viewer before they gain access, just to reduce the risk of misinterpretation. I’ll also be keen to trial the use of analytics data to empower tutors, rather than simply automating prompts, which has been the norm in the past. Alongside this, providing students with their own ‘performance’ data is something I’d be keen to explore.

Last week’s discussions on big data raised concerns about the skills needed within institutions to use big data, and I would suggest these are not limited to the educational world. The same issues occur in the commercial world, and can often have quite dramatic implications if not handled with care and forethought. It seems that if you are a data analyst with an understanding of educational methodologies, you can take your pick of jobs!

References: