After listening to Ryan Baker’s presentation on Educational Data Mining (EDM), I am more convinced than ever that EDM and Learning Analytics are in fact the same side of the same coin. Despite attempts to sort them into different zones of influence or different missions, I fail to see such differences, and judging from other LAK12 participants’ reflections, I am not alone in this. Baker’s view that Learning Analytics is somewhat more “holistic” can be refuted with a simple “it depends”. What is more, historically, EDM and LA don’t even originate from different scientific communities, as is the case with metadata communities versus librarians, or with electric versus magnetic force physics – now of course known as electromagnetism.
Both approaches (if there are indeed two) are based on examining datasets to find ‘invisible’ patterns that can be translated into information useful for improving the success and efficiency of learning processes. A good example Baker mentioned was the detection of students who digress, misunderstand, game the system, or disengage. It’s all in the data.
I would also like to believe that predicting the future leads to changing the future; at the very least it could give users a sense of being in control of their destiny. As a promotional message this has considerable power. But the same can be postulated in support of reflection: knowing your past performance can help your future performance! So, once again, there is a strong overlap between predictive and reflective applications of data analytics.
For me, all of this can only lead one way: instead of spending effort and energy on differentiating the two domains, which would only lead to reduced communities at both ends and friction in between, we need to think big and marry them into one large community and domain: let’s twin EDM and LA!
A good post by Sean Mehan comments on the UK government dropping a bill that would have allowed private for-profit companies to enter the HE market. Referring to a news item in the Telegraph, Sean writes:
The legislation would have allowed state loans to go into profits for for-profits, even allowing foreign companies (yes, companies, not institutions, for that is what they are), into the mix. So, UK taxpayer money goes to profits in a foreign country, while the national infrastructure is forced to compete or rot.
Quite right! Add to this monetary concern the socio-intellectual one: such a privatisation move would severely damage the mission of higher education to serve wider society rather than a handful of shareholders.
I find it quite ironic that the UK government announces tighter controls over social networks after the recent riots in England. This comes after years of highbrow, outspoken criticism of governments in Ukraine, Iran, Egypt, etc. for interfering with free speech by trying to suppress the online social communication of citizens protesting and rising up against the establishment (for different reasons). With this announcement, the West has lost the moral high ground it tried to project while things were running smoothly.
Without losing myself in politics, what concerns me is the picture that emerges of the Internet today and tomorrow. What can we expect when we turn to our browsers? Surely, in some dark and very secret corner, intelligence agencies have not only collected data about users but also created backdoor controls to large software portals like Google or Facebook, which allow them to monitor our conversations. ISPs too are increasingly under pressure and forced to censor Web traffic, as a recent court ruling against BT has shown. It starts with the ongoing battle over piracy, but there are actually many websites that some people wish would disappear, e.g. Greenpeace and other activist sites, not to mention fundamentalist propaganda.
As if this wasn’t enough, leaving the Web to free market forces challenges net neutrality through the monetary power of a handful of big players. The preferential treatment of traffic may lead to a Web where organisations with deep pockets get all the attention. Imagine where this leaves democracy when it comes to marketing smaller political parties before an election. But it isn’t only the browser developers or the brand companies that are the bad guys, as recent findings on search traffic hijacking show us.
What these developments indicate is that we are no longer dependent only on the secret algorithms from Google that lead us from query A to website B. More and more, our web queries bounce around the web in unknown ways, with perhaps unpredictable and unexpected outcomes, monitored by data companies and intelligence services. The danger is that our web experience declines from what we want to see to what others want us to see – or not to see!
While MOOC-ing about, I realised how most people I know are already hooked into Google web apps. Practically everyone I encounter on eduMOOC has a Gmail account, I receive invitations to Google Docs almost daily, and now everyone is jumping on Google+. Have people forgotten the black hat Google is wearing?!
My view is that people are so keen on Google+ not because it’s so perfect and so private, but because of the arrogant way Facebook has dealt with its customers. But this is perhaps a naive way of dealing with Google, which, not so long ago, was the bad guy of the Internet.
To give credit where it is due, G+ does have its merits, and the company has learned a lot from past failures like Wave. There is also great appeal in the integration of the highly usable, good-quality productivity services Google provides. Still, I do get the feeling that the lock-in gets tighter and the circle that Google is drawing around us gets narrower and narrower. It is certainly more and more difficult to “escape”.
Could we see Google emerge as the first virtual state, taking over the governance of what we are doing online? Will we see a Google virtual Prime Minister soon? With the identity infrastructure they mention in their plans, it is certainly feasible. Where will this leave the rest of the Web – will there be anarchic outcasts, outposts of unregulated (un-googlified) web users?
I realise this sounds quite sci-fi for now, but wait and see…!
In a post in early 2009, I anticipated the coming of a new Internet. Unlike people who thought Web 3.0 would give way to the Semantic Web, I have long held that, by maturing into a virtual society, the Internet would inevitably require identities around which this society would be structured, or would structure itself. Hence, I firmly believed, and still do, that we are going to see a Personal Web emerge over time. By this I mean not so much that the web experience is being personalised, but that we will have a singular accredited identity, just like we have in real life with our ID cards, social security numbers, etc.
Signs are that Google is going to lead the way there. In this interview, Google’s Eric Schmidt admits that they are working on an identity infrastructure for the web. So this may finally be the plot behind Google+. And certainly Chrome’s brand-new browser identity management is a big step in this direction. Schmidt unmistakably talks about unique identities, perhaps with multiple personas and personalities. There are of course countless advantages in terms of convenience, personal safety, child protection, and the prevention of identity theft and fraud. There are numerous disadvantages too, in terms of policing, tracking, or spying.
So far Google’s long-term plans are still kept quiet, and while anonymous browsing and chatting might stay around for a while, in the end this development might mean that someone who claims to be under the age of thirteen might indeed be under age.
It probably comes as no surprise that among the 2400 participants of the current massive open online course eduMOOC a sense of confusion has spread.
Typical questions raised are “what are the learning objectives?”, “what are MOOCs about?”, and “how do I master the abundant wealth of content?” In response, help arrives from veteran MOOCers, mostly in the form of advice for un-learning: “forget normal course structures”, “forget catching up with all postings”, “set your own objectives”, etc.
Indeed, filtering out noise and identifying the threads, tools, and groupings that are relevant to you is hard work, and there is always the danger that a MOOC drowns in anecdotes and storytelling, which may become a stumbling block to the credibility and applicability of the knowledge being created.
Be that as it may, the real questions remain unanswered: “what is learning in a MOOC?”, and “how do we know that we are learning?”
Here, I think, as in other free, unstructured learning experiences, lies an unspoken secret – the fact that learning is a feeling of wellbeing!
It’s the satisfying feeling of serendipitous discovery, of enlightened clarity, and, finally, the feeling of identity through the shared knowledge and experience that connects you to others, as well as the feeling that you yourself have taken a step forward in your own existence. MOOCs, as well as formal forms of education, need to take more care that learning can be felt – not measured! – by those whom it affects: the learners.
It is not only the Internet that is built on trust. Successful businesses, teaching, and personal relationships rely heavily on it too. But how does trust come about? Here are the three key factors that establish and carry trusted relationships, for as long as they last.
1) Reliability

In order for trust to be established, an investment needs to be returned in accordance with the subject’s expectations. Software crashes or the late delivery of goods or responses – e.g. eBay goods not arriving – shatter faith in a reliable service.

Repeated experiences and calculable risk can enhance or diminish trust. These experiences need not necessarily be personal ones, but can originate from third parties or the media (see point 3 on ‘recognition’ below). The more often a train is late, the fewer passengers will want to take it.
2) Relevance

A service and a relationship need to be relevant. Once Google search produces less relevant results, its value is reduced. Similarly, if a personal relationship no longer has relevance for the parties concerned, it becomes difficult to maintain.
3) Recognition

Recognition carries social currency. People like to walk with the crowd to avoid unnecessary risks. Typically, the larger the number of people relying on a service, the easier it is to trust. But recognition can also take shapes other than pure scale. Social currency from a known circle of persons (experts, friends) or an accepted authority (state, teacher) can go a long way towards inducing trust. This explains why it is difficult to maintain a partnership with someone disliked by your friends, or why we trust national airlines more than budget ones. Note that price differences neither produce nor reduce trust.
When introducing innovation of any sort, it is very important to cover all three areas well. Only then will the snowball start to gather momentum and grow.
Communities of Practice (CoP), introduced by Lave and Wenger (1991), are a well-recognised theoretical construct about expertise and learning. But the theory is built around individuals in an unconstrained space, and this is less and less the reality. In CoPs, learning paths and expertise building assume individual drivers and freedom in decision making and participation. Here are four reasons why this needs to be reviewed:
1) CoP peripherality versus team skill models
CoP theory states that people move along a learning path from the periphery toward the centre as they develop their expertise and membership in the community. The more central a person is in the community, the more expertise they possess. However, in collaborative situations, each member of a CoP typically works in their own sphere. This may be based on their personal strengths or dictated by circumstances (human resource requirements). Belbin’s team role model distinguishes three categories of roles – oriented towards actions, people, or thoughts – with three roles each. If your role in the team is ‘resource investigator’, you will develop a different expertise than as a ‘specialist’ or a ‘monitor/evaluator’.
2) Complementarity in teams

Team working is most effective when the team composition is based on complementarity. Expertise then lies in the right connections and chemistry, not in the individual. A teacher plus an IT expert together may be more central to a CoP in e-learning than each of them individually.
3) Social recognition
It has to be said that expertise is not easily attributable to popularity in a network. Social recognition may be based mostly on personal marketing skills and efforts rather than on domain knowledge or expertise. Additionally, certain domain standards (e.g. number of publications) may actually detract from expertise. A common pattern is the expectation that members of CoPs simply fall in line and “play the game” and thereby become accepted experts. This may lead to the phenomenon of mutually reinforcing recognition resulting in a false hype. …hey, and everyone in TEL is on Facebook now!
4) Participation does not equate to expertise
Likewise, the social currency of expertise cannot be measured by the verbosity of people in a community, but should take into account demand and requests from others. Despite the Twitter phenomenon that the more nonsense you publish the more followers you gain, I don’t think this is a learning path to follow.
Recall and memory are vital parts of learning. If you only have a vague memory of something, you need to revisit the source of information. And this is where Facebook fails.
Facebook is very much a stream service that lives in the present, with extremely limited access to the past. The philosophy behind this is “read it and forget it”, which is fine when you’re only following the latest happenings and then drop the subject. However, despite the earlier hype around Facebook’s new messaging system, which claimed to preserve people’s messages “forever”, this has not materialised for users.
Facebook may store your messages forever, and even sell information on to third parties, but it does not give message owners easy access to them. The search function in Facebook is simply abysmal. Typing in ‘ebooks’ returned a paragraph from Wikipedia. Looking for a posting I made earlier returned a negative result:
And yet, here it was only a short time ago:
To be fair, searching short status messages such as those used on Facebook or Twitter isn’t easy. The text limitations force people to restrict the semantic content of a message to a minimum, and there are no meta-tags or even titles to search for. And no one is likely to enter “bit.ly/jJKry3” into the search box to find an item.
There is also no hoarding place or personal archive in which to collect interesting info or messages – like Twitter favourites. This lack of information management reduces Facebook to what it was originally intended to be – a social chat engine!