Category Archives: e-Pedagogy

Crowdsourcing while learning


This is an interesting approach. Duolingo promises to translate the Web while users learn a language. I liked their nice intro video, which explains how it works: Duolingo uses your native speaker skills to have you translate foreign sentences into your own language. This is a sensible approach which is also used in translation sciences and interpreting – always translate into your native tongue.

How do you translate from a language you don’t understand? Duolingo adjusts to your competence level and provides help on the fly, such as translation suggestions. I suppose this approach works reasonably well with languages that are related to yours, e.g. English and Spanish (through their shared Latin-derived vocabulary: library – librería). But I’d be interested to see how this is done with Chinese or Finnish.

Google does similar stuff with its translation service, but what’s innovative here is that Duolingo promises learning in return for your translations. There are two open questions for me. Firstly, what does translating the Web mean? That is, how are the translations fed back to the Web, and will they be free and open? Secondly, what is the learning model behind it, since merely translating sentences only gets you so far in language learning? In language learning you want to be able to produce the other language, not just understand it passively. How are repetition and the analysis of grammatical structure incorporated into the tool?

It’s in private beta, but I am curious about the didactical model once I get access to it.

Metacognition and Learning Analytics


Following the first live session in the latest MOOC on Learning and Knowledge Analytics (LAK12), I reflected on the direction that Learning Analytics has taken over the past year or two. As far as I can see, Learning Analytics largely follows the line of web analytics, but with the intention of improving learning by gaining insights into hitherto invisible connections between user characteristics and actions.

However, web analytics has, I believe, a very different objective when analysing people’s navigation patterns and tracking their activities online. That objective is to influence user behaviour in order to direct them (unknowingly, and in a personalised way) to the pages and activities that matter – to the company, not the user. Almost in parallel, the attitudes expressed and the examples brought forward in favour of Learning Analytics put the main focus on understanding and influencing learner behaviour, and only to a very limited extent, if at all, on learners’ cognitive development.

An often mentioned example is that of a jogger who trains for a marathon and, through the collection of performance data, becomes more motivated, is able to see progress, compares this with other runners, etc. Similarly, tools that track the usage of software applications on your computer provide feedback that is useful if you think you should change the amount of time you spend on e-mails. Equally, tracking your own smoking or eating habits will hopefully lead to achieving a personal goal. These are all valid examples of where and how feedback loops can improve a person’s accustomed performance.

It is vitally important, though, that if Learning Analytics is to make a beneficial impact on (self-directed) learning, it does not stop at manipulating learners so that they are merely conditioned into different behaviours! It is not enough to check the behaviour patterns of learners, even though some such feedback might be helpful at times. We need more LA applications that support metacognition and cognitive development. Even simple memory joggers are quite useful here. Among the oldest I am familiar with, and which I have used to great benefit, are vocabulary trainers. In using those, I could see that in the first run I was able to answer maybe 46% of a given wordlist, increasing to 65% in the next run. Over only a few runs I was able to answer 96% of all questions. Not only was this summative feedback in % a motivator and excellent for my own benchmarking; I was also able to detect a decline in memorised vocabulary and identify which words I was most likely to forget once I stopped actively revising (say, three weeks later).
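To illustrate, here is a minimal sketch of the kind of summative feedback a vocabulary trainer can give: per-run recall percentages plus the words missed most often. The wordlist and run results are invented purely for illustration and do not correspond to any particular trainer.

```python
# Minimal sketch: per-run recall percentages and the most frequently
# missed words. All data below is made up for illustration.
from collections import Counter

runs = [
    # each run maps word -> answered correctly?
    {"casa": True, "perro": False, "libro": False, "mesa": True, "sol": False},
    {"casa": True, "perro": True, "libro": False, "mesa": True, "sol": True},
    {"casa": True, "perro": True, "libro": True, "mesa": True, "sol": True},
]

misses = Counter()
for i, run in enumerate(runs, start=1):
    score = 100 * sum(run.values()) / len(run)      # summative % feedback
    print(f"Run {i}: {score:.0f}% correct")
    misses.update(word for word, ok in run.items() if not ok)

# Words missed most often are the likeliest candidates for forgetting
# once active revision stops.
print("Most fragile words:", [w for w, _ in misses.most_common(3)])
```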

Since I am most interested in cognitive development and less in learning behaviour patterns, I would like to see more Learning Analytics tools that allow this to happen.

Good teaching comes from the teacher!


Over the holidays, I watched the 12-part video lecture series by Neil deGrasse Tyson called “My Favorite Universe”. Not only is this a fascinating topic anyhow, but the astrophysicist and director of the Hayden Planetarium brings it to life. As one commenter put it:

They need to clone Tyson and put him in every class room across the planet. I bet the world would be a better place for it…

What thrilled me was the enthusiasm Tyson radiated, the love for his subject discipline and the love of telling people about it. Now here’s a good teacher if ever I saw one. What’s even more striking is that he used no technology in his presentations, apart from a few still images illustrating parts of the cosmos. This made me wonder, because all the emphasis I know and hear about on being a good teacher lies on the competent use of technology! Institutions invest zillions into putting projectors and smartboards into every classroom, as well as into staff development programmes training people how to use PowerPoint or upload a file into the VLE. Who, nowadays, would dare go to a conference without a USB stick carrying the obligatory presentation?

Technology surely has its place, especially for reaching out. I would never have been able to watch this great series if it were not for YouTube, cloud computing, ubiquitous Internet access, and the good man filming and sharing his lectures. But let’s face it: good teaching does not come from technology, it comes from the teacher, presenter, or expert – and that is where we need to invest!

Notes to Learning Analytics


The recent two-day seminar on Learning Analytics, organised by the Dutch SURF academy, brought together interested parties from different education institutions and vendors. While stimulating, the seminar mainly presented technical showcases. What somehow got left behind were relevant pedagogic showcases and a feeling for how receptive the teaching practitioner community is to this kind of innovation. Are we running again into the old pattern of being technology driven?

Some interesting showcases included tools for what I would call educational business analytics (as opposed to learning analytics). To some extent these were not really new, as business reporting systems on student grades, drop-out figures, and the like have existed for many years, albeit mainly available to university registrars. It is not yet clear what these figures do to teaching and learning when presented to teaching staff instead of administrators, but that would be a novel approach.

Here are some notes that came to my mind while listening to the presentations:

  • LA tools are a bit like a cooking thermometer or oven thermostat. They don’t tell you what meal a person is preparing or whether it will taste good, but they may be a vital (on-demand) instrument for determining the right action at the right time to get it done.

  • How do we avoid teachers being turned into controllers, sitting like Homer Simpson in front of a dashboard and control panel, looking at visualisations of their students’ datasets? Does an increase in such activities reduce their contact time with students?
  • One common assumption I noted is the belief that all students are ambitious and only aim for top grades and the best learning experience. Being a father and having seen a few student generations, I contest this assumption. Many, if not most, students just want to pass. Studying is not in fact what they perceive as the prime focus of their lives. The tactical calculations that students are used to doing (how often can I still be absent; what’s the minimum mark I need to pass, etc.) may be ‘prehistoric’ forms of Learning Analytics that have existed for as long as assessments have been around! A minimal sketch of one such calculation follows this list.
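The sketch below works out the kind of back-of-the-envelope calculation mentioned in the last bullet: given marks so far and assessment weights, what is the minimum mark needed on the final exam to pass? The weights, marks, and pass mark are hypothetical and chosen only for illustration.

```python
# Hypothetical 'prehistoric learning analytics': the minimum final-exam mark
# needed to pass, given coursework marks and weights. All numbers are invented.

def minimum_final_mark(marks_so_far, weights_so_far, final_weight, pass_mark=55.0):
    """Return the mark (0-100) needed on the final assessment to reach pass_mark."""
    earned = sum(m * w for m, w in zip(marks_so_far, weights_so_far))
    needed = (pass_mark - earned) / final_weight
    return max(0.0, min(100.0, needed))

# Two coursework pieces worth 20% and 30%, a final exam worth 50%:
print(minimum_final_mark([60, 48], [0.2, 0.3], 0.5))  # -> 57.2
```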

eduMOOC and when do we start to learn?


It probably comes as no surprise that among the 2400 participants of the current massive open online course eduMOOC a sense of confusion has spread.

Typical questions raised are “what are the learning objectives?”, “what are MOOCs about?”, or “how do I master the abundant wealth of content?” In response, help arrives from veteran MOOCers. This mostly comes in the form of advice for un-learning: “forget normal course structures”, “forget catching up with all postings”, “set your own objectives”, etc.

Indeed, filtering out the noise and identifying the threads, tools, and groupings that are relevant to you is hard work, and there is always the danger that a MOOC drowns in anecdotes and storytelling, which may become a stumbling block to the credibility and applicability of the knowledge being created.

However that may be, the real questions remain unanswered: “what is learning in a MOOC?” and “how do we know that we are learning?”

Here, I think, as in other free and unstructured learning experiences, lies an untold secret: learning is a feeling of wellbeing!

It’s the satisfying feeling of serendipitous discovery, of enlightened clarity, and, finally, the feeling of identity through the shared knowledge and experience that connects you to others, as well as the feeling that you yourself have taken a step forward in your own existence. MOOCs, as well as formal forms of education, need to take more care that learning can be felt – not measured! – by those whom it affects, the learners.

Pedagogy and the Learning Analytics model


I received valuable feedback on the proposed design framework for Learning Analytics. A key question people asked was where pedagogy was in the model. Here is how I see it:

[Figure: LA pedagogy model]

Pedagogic strategies and learning activities as such are not part of the analytics process but are implicitly contained in the input datasets that encapsulate the pedagogic behaviour of users. As we know, this behaviour depends a great deal on the platform and the pedagogic vision the developers built in (cf. Dron & Anderson, 2011).  For example, data from a content sharing platform will have a behaviourist/cognitivist pedagogy attached to the learner behaviour, since this is the pedagogic model underlying the technology. In any case, only the pedagogic patterns exhibited in the dataset can be analysed and this will vary.

Additionally, pedagogy can be explicitly addressed in the goals and objectives that the LA designer sets. The LA method will determine the outcome of the analysis and together with the interpretation applied may lead to a large variety of options for consequences and interventions. If such pedagogic interventions are applied they lead to new behaviours which, once again, can be analysed through the available data.

A simple analogy would be boiling water in a pan. At any time (or continuously) you can stick a thermometer in and measure its temperature. The goal would be to determine whether you need to turn up the heat or not. The result of the analysis can then lead to the actions you want to take. The thermometer is only one method for such an analysis. An alternative would be to observe and wait until the water bubbles. Setting a threshold expectation (in the goals design) can inform you when it is time for the teabag to go in.
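To make the analogy concrete, here is a toy sketch of a threshold check driving an intervention: a goal is expressed as a threshold, a method produces measurements, and the interpretation of each reading determines the action taken. The sensor function and all numbers are made up and merely stand in for whatever data source a real analysis would use.

```python
# Toy rendering of the thermometer analogy: measure, compare against a
# goal threshold, and decide on an intervention. Everything here is invented.
import random

GOAL_THRESHOLD = 95.0   # "goal design": water hot enough for the teabag

def read_temperature(minute):
    """Hypothetical sensor: temperature rises over time with a little noise."""
    return min(100.0, 20.0 + 8.0 * minute + random.uniform(-1.0, 1.0))

for minute in range(12):
    temp = read_temperature(minute)        # the "method": take a measurement
    if temp >= GOAL_THRESHOLD:             # interpretation against the goal
        print(f"minute {minute}: {temp:.1f} degrees - time for the teabag")
        break
    print(f"minute {minute}: {temp:.1f} degrees - keep heating")  # intervention: carry on
```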

The model takes into account that pedagogic success and performance are not the only things that Learning Analytics can measure. Learning Analytics are snapshots taken from educational datasets. These snapshots can be used to reflect or to predict, in order to make adjustments and interventions (either by a human or by a system). By connecting the cornerstones of the design model in different ways, different use cases can be constructed.

A key element of the Learning Analytics process that is not explicitly present in the model is that the outcome of any analysis needs to lead to a decision process which determines the consequences. Whether these are pedagogic or not depends very much on the goals specified. Decision making can be stimulated and executed through the method applied and the algorithms chosen, for example in recommender systems. But decisions can also be taken by a human (e.g. a teacher or self-directed learner). In any case they lead to consequences, and through a feedback loop the process can be made iterative.

Decisions based on Learning Analytics are a critical issue, because they determine the usefulness and the consequences for the stakeholders. It is here that ethics play an enormously important role. Imagine an educational dataset that shows that children of immigrants perform worse on reading tasks. Several options present themselves, and in all likelihood they will be exploited by political parties or others: (1) more support for immigrant children could be offered; (2) schools could be segregated into immigrant and non-immigrant; (3) right-wing politicians will not hesitate to point to the deteriorating quality of schools due to immigration. Hence, data analysis could have dramatic (and unwanted) consequences. We need to be aware of this danger!

Locking down teaching (and learning)


Data analysis is a big deal these days. Ratings are another. Unavoidably, analysis of measurable data and quantification leads to comparison and, thus, to ratings and rankings.

The Collegiate Learning Assessment (CLA) services offer ways to measure how much your HE institution has improved its students’ higher-order competencies and thinking skills. Apparently this allows institutions to benchmark where they currently stand. They claim not to do this with ranking in mind, but in my view this is humbug and disguise. They say it’s about “highlighting differences between them [i.e. colleges] that can lead to improvements in teaching and learning”. But we already know that institutions differ in everything: quality of teaching, aptitude of their students, funding, and output. So what do we hope to learn from such a measuring exercise?

The comparison is carried out using specially designed tests! Yet another anachronistic approach in which students are tested not for their own achievement, but to provide a stick with which to beat their institution. The only people interested in such an exercise would be a government with further austerity plans to cut public funding for education. Who else would give a damn about the validity of such tests?

What we learned from, e.g., the research assessment exercises is that such benchmarking and comparisons hardly improve the quality of the bottom half of institutions. If anything, they widen the gap between good and poor quality institutions by turning education into a football-like economy in which good players are transferred to rich clubs.

Apart from turning the university into a police state, this approach puts all the blame for student failure on the teachers. The CLA doesn’t take personal factors into account, like crises with boyfriends or girlfriends, working late in hamburger joints, etc. There is little or no room for a shared responsibility for learning, or even for student ownership of their learning and success.

Additionally, as I mentioned in another post, mainstreaming and elevating pedagogic strategies to the level of national uniformity leads to loss of innovation and creative new approaches in learning. It chains teachers to a statistical mean and locks down teaching and learning to a single vision.

Legacy – things never go away, do they?


In evolution, species adapt into new life forms or die out. It’s that simple. Not so in technology enhanced learning (TEL). Legacy concepts never go away, or so it seems. How else could we explain that there are still users on Internet Explorer 6 and older?

The Internet has seen a number of key developments and phases, now conveniently called Web 1.0 and Web 2.0, with many different varieties of Web 2.5 and Web 3.0 concepts thrown about. But this has not been an evolution in which later forms replaced earlier ones, as the version numbers suggest. Web 2.0 did not replace Web 1.0. Nor is it about backward compatibility. It’s more a matter of enlargement. In a biological analogy, a species would grow a second head…

Interestingly, the same is true for pedagogic theories and the perception of knowledge:

[Figure: emergence of pedagogic theories]

The reason for the continued presence and importance of legacy concepts in pedagogic theory is that, in reality, they are not legacy at all, whatever many people would like to believe.

Behaviourist and instructivist approaches are far from obsolete. Uni-directional knowledge transmission (in the form of lectures and presentations, podcasts or books) is still relevant and in many ways the most efficient way of learning for some types and levels of knowledge, e.g. in (cognitive) apprenticeship. Scientific conferences deliberately hang on to the transmission model as a format for information-rich knowledge sharing. Cloud sharing of slide presentations or podcasts is no less a lecture than a teacher standing in front of a class.

Certainly gone are the days of didactic monopolies. While this is enriching and enabling, the downside is that a variety of {devices, strategies, technologies,…} can lead to fragmentation and disorientation. Unfortunately, the biggest problem we are facing is that, because TEL innovation slavishly follows the latest technology developments, it’s all driven by the big commercial players, by the mass media that promote the hype, and by the sheepish crowd that follows.

Learning Analytics framework


Slowly, a common understanding of Learning Analytics is evolving. George Siemens’ definition is the following:

Learning analytics is the use of intelligent data, learner-produced data, and analysis models to discover information and social connections, and to predict and advise on learning

My view is generally in accordance with this. To me, Learning Analytics is a way to take advantage of available educational datasets for the discovery of new insights into educational practice and the development of new educational services that support the goals and objectives of learners, teachers, and institutions. Unlike other people, I feel that Learning Analytics serves reflection as much as prediction.

The careful design of Learning Analytics approaches needs to take a number of different perspectives into account, so I started to draw up a first framework diagram covering them. This is only a first draft, but I hope to develop it further as our knowledge and experience increase. Feel free to elaborate on it, and I am, of course, happy to receive feedback.

[Figure: Learning Analytics framework diagram]

Critical soft issues that I perceive in the exploitation of educational datasets for Learning Analytics are the competences to interpret, critically evaluate, and derive conclusions (and pedagogic actions) from the analysis. A holistic perspective is essential, because what’s left out of the data coverage is as important as what is in it.

Privacy issues aside, which are a legal constraint, I see a number of ethical issues connected with accessing learner data. There’s the issue that some teachers might abuse Learning Analytics as a means for policing and surveillance rather than as a support tool. The same could perhaps be said when it comes to institutions gaining insights into teacher performance. The danger is that innovative and creative teaching might be ousted because it does not show up as falling into line with ‘traditional’, algorithmically measured performance. As such, the data might easily be abused to exert pressure on the data constituency.

Lessons from offline learning


This may sound counter-intuitive coming from a person living and breathing online learning, but looking at offline learning is a good way of seeing what could still be improved in e-learning.

When did YOU last attend a chalk-and-board course? I had the pleasure of following a ten-week evening class with no other technology than a CD player! Yes, there was the blackboard (not the VLE!), there was chalk, there was a physical teacher, and there were fellow students sitting at their desks.

Let me repeat this for clarity: it was a pleasure! I thoroughly enjoyed it and not only because it was a change from the usual things I do, or because I got a retro feeling.

Upon reflection, one of the things I enjoyed most over online learning experiences was the feeling of undistracted belonging. There was a dedicated time of the week (Mondays and Wednesdays 6-9 p.m.) where we shut ourselves off from our worldly surroundings and did nothing but focus on our learning – no e-mail or phone interruptions, no other browser windows, no family entertainment, no servicing the tea kettle, no button where we could switch the course off any time we wanted.

Don’t get this wrong, I like flexibility, but when flexibility means fragmentation of time and spreading my attention across more things than I can handle, I find it ineffective for learning.

Another joy was the bonding and the social ties that went beyond the course itself. Interestingly, the evening class consisted entirely of professionals, most of them doing some computer-enhanced day job or other. Despite all the social tools we now have online, they are still only ‘technology’, and the social connectedness they afford is ‘technology-mediated’. This is different from sitting in the classroom together. What’s different? In a physical environment you have, for example, the opportunity for situated humour or for impromptu remarks and support. This can make you popular or unpopular with your peers, but it definitely has an effect on the social fabric of the group. At the beginning of the course, I was the only one to spend the break in the cafeteria – by the end it had become a ritual that everyone gathered there and chatted about things unrelated to the course itself.

Yes, I sometimes thought that various technologies could have been used (PowerPoint slides, a projector, etc.) to spice up the delivery, but what counted in the end was the satisfying feeling of benefiting from the course, both in terms of learning and in terms of socialising.