“LARCing” about

Catching up on Week 11 after a bout of sickness, I’ve been playing with the LARC tool provided by the University to explore the topic of learning analytics.

It provides different styles of reports (lenient, moderate, or strict) based on metrics typically used within learning analytics (attendance, engagement, social, performance, persona). I dived straight in and picked the extremes around engagement, simply because ‘engagement’ seems to me a particularly woolly metric…

LARC report, w/c 20 Nov 17. Strict, engagement.

LARC report, w/c 20 Nov 17. Lenient, engagement.

The contrast between the two is quite stark. The lenient style seems more human – it’s more encouraging (“your active involvement is really great”) and conversational/personable (“you seemed”… compared with “you were noticeably…”).

Despite both being automated, the lenient style feels less ‘systematic’ than the strict. Does this suggest that humans are more likely to be lenient and accommodating, or is it simply that we associate this type of language less with automation – so it doesn’t feel more ‘human’, just less ‘computer’? This certainly chimes with insights into the Twitter ‘Teacherbot’ from Bayne (2015). The line between human and computer is becoming increasingly blurred through the use of artificial intelligence, and how students react to these interactions is of particular personal interest.
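I have no visibility of how LARC actually generates these reports, but a minimal sketch of how the same underlying metric might be pushed through different stylistic templates helps me see why the two outputs feel so different. Everything below – the wording, the thresholds and the function names – is invented purely for illustration.

```python
# A minimal, hypothetical sketch of style-based feedback templates.
# Nothing here reflects LARC's real implementation; wording and thresholds are invented.

TEMPLATES = {
    "lenient": "You seemed really involved this week - your active involvement is really great.",
    "strict": "You were noticeably active this week. Engagement level: {score}/10.",
}

def engagement_report(style: str, score: int) -> str:
    """Render the same engagement score through a chosen stylistic template."""
    if style == "lenient" and score < 5:
        return "It looks like this was a quieter week for you - see if you can join in a little more next week."
    if style == "strict" and score < 5:
        return f"Your engagement was below the cohort average. Engagement level: {score}/10."
    return TEMPLATES[style].format(score=score)

print(engagement_report("lenient", 7))
print(engagement_report("strict", 7))
```

The point of the toy is simply that the data never changes – only the register of the template does – which is perhaps why the lenient version feels less ‘computer’ without being any less automated.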

I think it’s interesting to think about how one responds to each style. Given my engagement appears to be ‘satisfactory’ at a base level, the feedback isn’t necessarily looking to provoke a particular response. However, if my engagement was less than satisfactory, I’m not sure which style would provoke a better response and get me into action. I guess it depends whether it’s the ‘carrot’ or the ‘stick’ that is the better driver for the student.

The examples above make me consider the Course Signals project in more detail, which was discussed in Clow (2013) and Gasevic et al (2015). From my understanding, this project provides tutors with relevant information about their students’ performance, and the tutor decides on the format of the intervention (should it be appropriate to make one). The LARC project has gone one step further, it seems, in that the style of the response is also generated. Referring to my initial point about choice of style, in the Course Signals approach the tutor would ultimately make this choice based on their understanding of the student. That’s not to say this couldn’t eventually be delivered automatically with some added intelligence – it would just need some A/B testing early on in the student’s interaction with the course to test different forms of feedback and see what provokes the desired response (a rough sketch of what I mean follows below). Of course, this discovery phase would bring with it significant risks, as students would be likely to receive erratic and wide-ranging types of feedback while their engagement with the course is at its most embryonic.
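To make the A/B idea concrete: allocate each student a feedback style at random early on, then compare a simple response metric, such as the change in activity the following week. None of this reflects how LARC or Course Signals work – the names and toy data below are my own assumptions.

```python
# Hypothetical sketch of A/B testing feedback styles early in a course.
import random
from statistics import mean
from collections import defaultdict

STYLES = ["lenient", "strict"]

def assign_styles(student_ids):
    """Randomly allocate each student to a feedback style."""
    return {sid: random.choice(STYLES) for sid in student_ids}

def compare_styles(assignments, activity_change):
    """Average the post-feedback change in activity per style."""
    by_style = defaultdict(list)
    for sid, style in assignments.items():
        by_style[style].append(activity_change[sid])
    return {style: mean(changes) for style, changes in by_style.items()}

# Toy data: positive numbers mean the student became more active after feedback.
students = ["s1", "s2", "s3", "s4", "s5", "s6"]
assignments = assign_styles(students)
activity_change = {"s1": 2, "s2": -1, "s3": 0, "s4": 3, "s5": 1, "s6": -2}
print(compare_styles(assignments, activity_change))
```

Even in this toy form, the risk mentioned above is visible: during the trial, half the cohort is deliberately receiving feedback that may not suit them at all.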

As a side note, Clow (2013) discusses the development of semantic and more qualitative data aggregation, and this being able to be put to more meaningful use. Given this, perhaps a logical next step would be to develop the algorithms to understand the register and tone of the language used in blog posts and relay any feedback to the student in a similar style (as a way of increasing engagement).
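A deliberately crude sketch of what that matching might look like is below. A real implementation would use proper natural language processing rather than counting a handful of marker words, and every marker, threshold and name here is made up.

```python
# Naive sketch: guess a student's writing register and pick a matching feedback style.
import re

INFORMAL_MARKERS = ["!", "i guess", "pretty", "stuff", "really", ":)"]

def writing_register(posts):
    """Crudely classify a set of blog posts as 'informal' or 'formal'."""
    text = " ".join(posts).lower()
    informal_hits = sum(text.count(m) for m in INFORMAL_MARKERS)
    contractions = len(re.findall(r"\b\w+'\w+\b", text))
    words = max(len(text.split()), 1)
    # If informal markers appear frequently relative to length, call it informal.
    return "informal" if (informal_hits + contractions) / words > 0.02 else "formal"

def feedback_style_for(posts):
    """Map the detected register onto a LARC-like feedback style."""
    return "lenient" if writing_register(posts) == "informal" else "strict"

print(feedback_style_for(["I guess this week was pretty hectic, but really enjoyed the readings!"]))
```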

Going back to the LARC project, I thought it’d be useful to look at attendance, particularly in light of Gasevic et al’s (2015) references to the pitfalls in this area.

LARC report, w/c 20 Nov 17. Moderate, attendance.

Gasevic et al (2015) use three “axioms” to discuss the issues in learning analytics. One of these is agency, in that students have the discretion to choose how they study. A weakness in analysing attendance, in particular, is therefore going to be in benchmarking, both against the student’s prior history and against the cohort as a whole. This was, of course, done by design by the UoE team: we were asked to generate LARC reports for a week when activity largely took place outside of the VLE, namely on Twitter. As such there’s an issue here, in that the tool does not have the context of the week factored into it, and this raises questions about the term ‘attendance’ as a whole. Attendance has been extrapolated from the number of ‘logins’ by the student, and the two may not be as compatible as they look on first reflection.
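A small sketch of the proxy at issue makes the problem clearer: if ‘attendance’ is simply counted from VLE login events, a week spent working busily on Twitter scores as absence. The event data and threshold below are entirely invented.

```python
# Sketch: 'attendance' inferred purely from VLE login counts.
from collections import Counter

def weekly_attendance(vle_login_weeks, weeks, threshold=3):
    """Mark a week as 'attended' if the student logged into the VLE at least
    `threshold` times. Activity elsewhere (e.g. Twitter) is simply invisible."""
    counts = Counter(vle_login_weeks)
    return {week: counts.get(week, 0) >= threshold for week in weeks}

# Toy data: the student logged in heavily in week 10, but spent week 11 on Twitter.
logins = ["w10", "w10", "w10", "w10", "w11"]
print(weekly_attendance(logins, ["w10", "w11"]))
# -> {'w10': True, 'w11': False}, even though week 11 may have been highly active.
```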

When comparing against the wider cohort, it’s also easy to point out potential holes. One student may prefer to log in once, download all the materials and digest them before interacting on the discussion forums. Another may be more of a ‘lurker’, preferring to interact later in the week, perhaps when other commitments permit.

Ultimately this all comes down to context, from a situational, pedagogical and peer perspective, and this is where a teacher can add significant value. I think one of the wider challenges for learning analytics is the aggregation of these personal connections and observations, though this raises its own challenges of bias and neutrality. It seems that learning analytics can offer significant value as indicators, but the extent to which the metrics are seen to represent the ‘truth’ needs constant challenging.

References:

  • Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education, 18(6), pp. 683–695.
  • Gasevic, D., Dawson, S. & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), p. 64. DOI: 10.1007/s11528-014-0822
  • Bayne, S. (2015). Teacherbot: interventions in automated teaching. Teaching in Higher Education, 20(4), pp. 455–467.

Reflections on Learning Analytics

Week 11 builds on the previous week’s theme of big data, providing a spotlight on the use of data specifically for learning. Unsurprisingly, there are many links and reference points to topics throughout the rest of IDEL.

The core readings seem to indicate that the field of learning analytics is still very immature, and when compared with the use of other technologies within education, could be considered to be lagging.

It seems that, on the whole, learning analytics operate at a surface level at present. Gasevic et al (2015) highlight the complexities around learner agency and pedagogical approaches that can leave glaring holes in the interpretation of any data. Ultimately, in any scenario, educational or not, data needs context to have any meaning (and therefore actionable value), and the field seems to be falling short in this area at present.

I enjoyed reading about Purdue University’s Course Signals project in the core readings. The intention behind this project seems to be to empower the teacher, rather than simply ‘measure’ the student. While the positivity around the results reported in Clow (2013) should be taken with a pinch of salt (indeed, Gasevic et al (2015) offer further critique of this), it would seem that involving the teacher in the choice of interventions recognises the absence of the relationship and emotion that these analytics perhaps struggle to encompass. However, it does appear that the aggregation and understanding of quantitative data that could bridge this gap is improving (Gasevic et al, 2015).

I particularly liked Clow’s (2013) description of learning analytics as a “‘jackdaw’ field of enquiry”, in that it uses a fragmented set of tools and methodologies – it could be seen to be taking a rather less cohesive approach than would be required. This is certainly one of Gasevic et al’s (2015) key points – that the field needs to be underpinned by a more robust foundation to really allow it to develop in a useful way.

I wonder if the lack of maturity in this field is a consequence of the nature of the field itself. The learning analytics cycle described by Clow (2012) identifies that learning analytics are gathered and used after a learning activity or intervention has taken place. As has become even more apparent to me throughout this course, the pace of technological change is significant and rapid, and the impacts on education are quite far-reaching.

If technology and tools are being developed, trialled, integrated, ditched and recycled so rapidly, it must inevitably be a challenge to assess them with any rigour. Indeed, Gasevic et al (2015) highlight the lack of empirical studies available to help interpret this area. It’s interesting to read in Clow (2013) that the use of proprietary systems impedes this too, given the lack of data made available. This is particularly pertinent given their prevalence across the educational sector, and in turn it limits the assessments that can be made across domains and contexts (Gasevic et al, 2015).

A pervading theme across IDEL has been the discourse around the educational ‘voice’ in the development and use of technology, for example in Bayne (2015). Quite rightly, the academic world wants to scrutinise, assess and challenge, but it seems the pace of change makes this less and less possible.

For me, the spectre of the private sector is raised in Perrotta & Williamson (2016). They argue that the use of analytics and interventions is “partially involved in the creation of the realities they claim to measure”. The challenge here is the increasing commercial influence taking place in the field of learning analytics. They cite Pearson as an example: it has the large-scale distribution of products needed to gather learner data, the resources to mine and interpret this, and the size to influence wider policy-making. Given the rhizomatic nature of the development of learning analytics, it seems that there are many reasons to be fearful of this development, particularly as it looks to be self-perpetuating.

Of course, I’m keen to keep in mind that this is one side of the argument, and I’m sure the likes of Pearson see themselves as helping to push things forward. Certainly, there are areas where the commercial world can help ‘lift the lid’ on learner behaviour and empower teachers to make interventions – I guess the issue is how much those outside the corporation are at the discussion table. The stark truth is that Pearson’s core responsibility, above all, is to its shareholders, not its students.

My own personal experience has been at a ‘macro’ level, or what Clow (2015) refers to as ‘Business Intelligence’. As a commercial training provider, we used learning analytics (at a rather shallow level) to understand learner behaviour and product performance. Given the commercial nature of the people around me, however, there was probably an unhealthy bias towards how these could be used to improve commercial metrics. I certainly recognised some of the observations raised by Gasevic et al (2015) around data visualisation and the pitfalls it can cause.

Given this week’s reading, I’ll certainly be more aware of some of the challenges in this area, particularly around providing metrics without context. There almost needs to be some ‘priming’ of the viewer before they gain access, just to reduce the risk of misinterpretation. I’ll also be keen to trial the use of analytics data to empower tutors, rather than simply automating prompts, which has been the norm in the past. Alongside this, providing students with their own ‘performance’ data would be something I’d be keen to explore.

Last week’s discussions on big data raised concerns about the skills needed within institutions to use big data, and I would suggest these are not limited to the educational world. The same issues occur in the commercial world, and can often have quite dramatic implications if not handled with care and forethought. It seems that if you are a data analyst with an understanding of educational methodologies, you can take your pick of jobs!

References: