Monthly Archives: June 2011

Facebook competition from Google+

Finally, there looks to be a real challenge to Facebook, coming – surprise, surprise – from Google. Google has tried for years to get into the social networking business but never quite managed it. Maybe the Google+ project will be more successful than Buzz or Orkut.

It actually looks quite promising. First and foremost, it doesn’t call everyone you meet a “friend”. Google recognises that we share different things with different sets of people we know (or don’t know). The name ‘circles’ is quite appropriate for this and, to my mind, reflects social reality much better than Facebook does.

I agree with all three points of their assessment:

* We only want to connect with certain people at certain times, but online we hear from everyone all the time.
* Every online conversation (with over 100 “friends”) is a public performance.
* We all define “friend” and “family” differently—in our own way, on our own terms.

Google+ is still invitation-only, so we will have to wait to see what the promised content sharing looks like, but at present at least it sounds good. I also expect it to easily outperform Facebook’s much-bemoaned search and archive functionality. Google+ will be available for Android, as an Apple app and as a mobile web app, so plenty of mobility for connecting is guaranteed.

Since everyone on the eduMOOC learning network is already on Google with Google Groups or other tools, I think a Google+ circle might be just the thing to do – provided Google+ opens for business any time soon.

Pedagogy and the Learning Analytics model

I received valuable feedback on the proposed design framework for Learning Analytics. A key question people asked was where pedagogy was in the model. Here is how I see it:

LA pedagogy model

Pedagogic strategies and learning activities as such are not part of the analytics process but are implicitly contained in the input datasets that encapsulate the pedagogic behaviour of users. As we know, this behaviour depends a great deal on the platform and on the pedagogic vision its developers built in (cf. Dron & Anderson, 2011). For example, data from a content sharing platform will carry a behaviourist/cognitivist pedagogy in the learner behaviour, since this is the pedagogic model underlying the technology. In any case, only the pedagogic patterns exhibited in the dataset can be analysed, and these will vary.

Additionally, pedagogy can be explicitly addressed in the goals and objectives that the LA designer sets. The LA method determines the outcome of the analysis and, together with the interpretation applied, may lead to a large variety of options for consequences and interventions. If such pedagogic interventions are applied, they lead to new behaviours which, once again, can be analysed through the available data.

A simple analogy would be boiling water in a pan. At any time (or continuously) you can stick a thermometer in and measure the temperature. The goal is to determine whether you need to turn up the heat. The result of the analysis can then lead to the actions you want to take. The thermometer is only one method for such an analysis; an alternative would be to observe and wait until the water bubbles. Setting a threshold expectation (in the goals design) can tell you when it is time for the teabag to go in.
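The boiling-water analogy can be put into a toy sketch: take a snapshot (a measurement), compare it against a goal (a threshold), and derive a consequence (an intervention). The function name, thresholds and readings below are all illustrative assumptions, not part of any actual LA system.

```python
# Toy sketch of the snapshot -> goal -> consequence loop from the analogy.
def decide(temperature_c: float, target_c: float = 95.0) -> str:
    """Map a single measurement to an action, as a thermometer check would."""
    if temperature_c >= target_c:
        return "teabag in"       # goal reached: intervene
    elif temperature_c >= target_c - 10:
        return "keep heating"    # close to the goal: no change needed
    else:
        return "turn up heat"    # far from the goal: adjust

# Hypothetical snapshots over time; each one feeds the decision step.
readings = [20.0, 60.0, 88.0, 96.0]
actions = [decide(t) for t in readings]
```

The point of the sketch is only that the measurement method (the thermometer) and the goal design (the threshold) are separate choices, and that the loop can be repeated for as long as the feedback matters.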

The model acknowledges that pedagogic success and performance are not the only things that Learning Analytics can measure. Learning Analytics are snapshots taken from educational datasets. These snapshots can be used to reflect or to predict, in order to make adjustments and interventions (either by a human or by a system). By connecting the cornerstones of the design model in different ways, different use cases can be constructed.

A key element of the Learning Analytics process that is not explicitly present in the model is that the outcome of any analysis needs to feed into a decision process which determines the consequences. Whether these are pedagogic or not depends very much on the goals specified. Decision making can be stimulated and executed through the method applied and the algorithms chosen, for example in recommender systems. But decisions can also be taken by a human (e.g. a teacher or a self-directed learner). In any case they lead to consequences, and through a feedback loop the process can be made iterative.

Decisions based on Learning Analytics are a critical issue, because they determine the usefulness and consequences for the stakeholders. It is here that ethics play an enormously important role. Imagine an educational dataset that determines that children of immigrants perform worse in reading tasks. Several options present themselves, and in all likelihood will be exploited by political parties or others: (1) more support could be offered to immigrant children; (2) schools could be segregated into immigrant and non-immigrant ones; (3) right-wing politicians will not hesitate to point out the deteriorating quality of schools due to immigration. Hence, data analysis can have dramatic (and unwanted) consequences. We need to be aware of this danger!

The dilemma of academic publishing

The debate over publishing academic articles with “established” commercial journals has been raging for a while now. The ground is shifting from under the feet of these publishers, and concessions to the Open Access movement won’t save them in the long run. The services that publishers previously provided (copy-editing, editorial work, layout, promotion, distribution) have largely disappeared; authors do everything themselves nowadays, while publishers just hold out their hands.

At the same time, we see institutions and the education establishment counting beans and pressuring researchers for output via these traditional channels. It’s about time for that to change, as it only plays into the hands of the publishing houses and does not help researchers.

To me, the point of academic publishing is the sharing of new insights, discoveries and reflections with the scholarly Community of Practice, to open them up for further work. As such, I prefer the idea of peer recognition as a quality criterion over a quantitative inventory of writings for specialist journals that end up on the library shelves of the few institutions with enough interest to shell out large sums of money for a subscription. Gravitas in the community is what matters to me as a researcher and lecturer, and this should be conferred by kindred peers, not by people with a commercial interest.

When I look at my rather modest weblog reflections, I cannot help but notice that in terms of reach they probably have more impact than any journal article I could produce. Posts are immediately up for grabs, they invite and receive feedback (hence are peer reviewed), and they reach the target community swiftly and without barriers. Compare this with journal articles that follow a mechanical process, spamming people’s inboxes with requests for unpaid peer reviews and, in the end, leading to a librarian buying the (e-)publication in the hope that some lonely PhD student will find it on the shelf some day and use it for a short quote in their thesis.

Let’s face it, we all dream of discovering another Pyramid or some such world wonder that would make a dramatic change. But, in today’s reality, new knowledge is created in collective efforts through discussion and sharing – and this can hardly happen the way things used to be done.

Locking down teaching (and learning)

Data analysis is a big deal these days. Ratings are another. Unavoidably, analysis of measurable data and quantification leads to comparison and, thus, to ratings and rankings.

The Collegiate Learning Assessment (CLA) services offer ways to measure how much your HE institution has improved students’ higher-order competencies and thinking skills. Apparently this allows institutions to benchmark where they currently stand. They claim not to do this with ranking in mind, but in my view this is humbug and disguise. They say it’s about “highlighting differences between them [i.e. colleges] that can lead to improvements in teaching and learning”. But we already know that institutions differ in everything: quality of teaching, aptitude of their students, funding, and output. So what do we hope to learn from such a measuring exercise?

The comparison is carried out using specially designed tests! Yet another anachronistic approach in which students are tested not on their own achievement, but to provide a stick to beat their institution with. The only people interested in such an exercise would be a government with further austerity plans to cut public funding for education. Who else would give a damn about the validity of such tests?

What we learned from, for example, the research assessment exercises is that such benchmarking and comparison hardly improve the quality of the bottom half of institutions. If anything, they widen the gap between good and poor institutions by turning education into a football-like economy in which good players are transferred to rich clubs.

Apart from turning the university into a police state, this means the teachers take all the blame for student failure. The CLA does not take personal factors into account, like crises with boyfriends or girlfriends, working late in hamburger joints, and so on. There is little or no room for shared responsibility for learning, or even for students’ ownership of their learning and success.

Additionally, as I mentioned in another post, mainstreaming pedagogic strategies and elevating them to the level of national uniformity leads to a loss of innovation and of creative new approaches to learning. It chains teachers to a statistical mean and locks down teaching and learning to a single vision.

Mobile devices boost social activities

Social networks are, of course, available on PCs and laptops as well as on handheld devices and tablets – anything with a browser, really. However, in a short survey we conducted with our students, 700 respondents confirmed what we suspected all along: mobile devices (and mobile apps in particular) boost social network activity. What is more, the more devices a person owns, the more active they are likely to be (see graphs below).

twitter use facebook use linkedin use

Frequent use (blue) in the graphs above represents daily or weekly use of the service; rare use (red) stands for less than once a week; no use of a service (green) is shown on top. Click on a thumbnail for the full view. Although explainable and expected, the conclusion may be somewhat skewed, because there are more people with only one mobile device than there are with four.
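The three-way bucketing described above can be sketched as a tiny function: map a respondent’s uses-per-week of a service to one of the categories. The thresholds and the sample responses below are illustrative assumptions, not the survey’s actual coding scheme or data.

```python
from collections import Counter

# Illustrative sketch of the bucketing used in the graphs.
def usage_category(uses_per_week: float) -> str:
    if uses_per_week == 0:
        return "no use"        # green in the graphs
    elif uses_per_week < 1:
        return "rare use"      # red: less than once a week
    else:
        return "frequent use"  # blue: daily or weekly

# Hypothetical uses-per-week for one service across five respondents.
responses = [7, 0, 0.5, 1, 0]
tally = Counter(usage_category(u) for u in responses)
```

With the real 700-respondent data, the same tally per service and per device count would reproduce the stacked bars in the graphs.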

Antique books for everyone

… well, as long as you have an iPad, this is for you!

british library icon

When talking about open educational resources (OER), this is a superb example: the British Library has just started to open its 19th-century collection to the public. A nice iPad app lets you browse the collection and freely download facsimile books for offline reading. These digitised versions will not gather dust, and you can carry an entire library in your bag!

Legacy – things never go away, do they?

In evolution, species adapt into new life forms or die out. It’s that simple. Not so in technology enhanced learning (TEL). Legacy concepts never go away, or so it seems. How else could we explain that there are still users on Internet Explorer 6 and older?

The Internet has seen a number of key developments and phases, now conveniently called Web 1.0 and Web 2.0, with many varieties of Web 2.5 and Web 3.0 concepts thrown about. But this has not been an evolution in which later forms replaced earlier ones, as the version numbers suggest. Web 2.0 did not replace Web 1.0. Nor is it about backward compatibility. It is more a matter of enlargement. In a biological analogy, a species would grow a second head…

Interestingly, the same is true for pedagogic theories and the perception of knowledge:

pedagogic theories emergence

The reason for the continued presence and importance of legacy concepts in pedagogic theory is that in reality they are not legacy at all, as many people would have it.

Behaviourist and instructivist approaches are far from obsolete. Uni-directional knowledge transmission (in the form of lectures and presentations, podcasts or books) is still relevant and in many ways the most efficient way of learning for some types and levels of knowledge, e.g. in (cognitive) apprenticeship. Scientific conferences deliberately hang on to the transmission model as a format for information-rich knowledge sharing. Cloud-shared slide presentations or podcasts are no less a lecture than a teacher in front of a class.

Certainly gone are the days of didactic monopolies. While this is enriching and enabling, the downside is that a variety of {devices, strategies, technologies, …} can lead to fragmentation and disorientation. Unfortunately, the biggest problem we face is that because TEL innovation slavishly follows the latest technology developments, it is all driven by the big commercial players, by the mass media that promote the hype, and by the sheepish crowd that follows.

The foundations of trust

It is not only the Internet that is built on trust. Successful businesses, teaching and personal relationships rely heavily on it too. But how does trust come about? Here are the three key factors that establish and carry trusted relationships, for as long as they last.

1. Reliability. In order for trust to be established, an investment needs to be returned in accordance with the subject’s expectations. Software crashes or the late delivery of goods or responses – eBay goods not arriving, say – shatter faith in a reliable service. Repeated experiences and calculable risk can enhance or diminish trust. These experiences need not be personal ones; they can originate from third parties or the media (see point 3 on ‘recognition’ below). The more often a train is late, the fewer passengers will want to take it.

2. Relevance. A service or relationship needs to be relevant. Once Google search produces less relevant results, its value is reduced. Similarly, if a personal relationship no longer has relevance for the parties concerned, it becomes difficult to maintain.

3. Recognition. Recognition carries social currency. People like to walk with the crowd to avoid unnecessary risks. Typically, the larger the number of people relying on a service, the easier it is to trust. But recognition can also take shapes other than pure scale. Social currency from a known circle of persons (experts, friends) or from an accepted authority (state, teacher) can go a long way in inducing trust. This explains why it is difficult to maintain a partnership with someone disliked by your friends, or why we trust national airlines more than budget ones. Note that price differences neither produce nor reduce trust.

When introducing innovation of any sort, it is very important to cover all three areas well. Only then will the snowball gather momentum and grow.