This could be a game changer in the data roaming business! Data roaming in Europe has been a pain, to say the least. I live in the border area between Belgium, the Netherlands and Germany and frequently cross over for all kinds of activities. Every time, I have to take special care that my phone does not connect to a foreign data network, which quickly becomes very expensive. Ubiquitous mobile learning, I hear you ask? Forget it! This is a problem the US doesn’t seem to have, where you can roam from state to state and sea to sea on the same data tariff your mobile provider offers.
Skype now offers access to 1,000,000 Wifi access points at low cost, from 5c to 16c per hour, with unlimited data. This is a much better deal than the data transfer rates other telcos offer. The EU has long argued that data roaming is not competitively priced: the same telco charges a user many times as much for the same service once they cross a border between EU countries. This offer from Skype may be the competitive pressure we have all been waiting for, and it will turn up the heat on telcos to provide a decent pricing structure. This could be a real enabler for ubiquitous mobile learning!
**Oops, I stand corrected:** the Skype pricing is per minute, not per hour, which, sadly, does not make the Skype offer as competitive as I claimed! Sorry folks.
I find it quite ironic that the UK government announces tighter controls over social networks after the recent riots in England. This comes after years of outspoken, high-brow criticism of governments in Ukraine, Iran, Egypt, etc. for interfering with free speech by trying to suppress the online social communication of citizens protesting and rising up against the establishment (for different reasons). With this announcement, the West has lost the moral high ground it claimed while things were running smoothly.
Without losing myself in politics, what concerns me is the picture that emerges of the Internet of today and tomorrow. What can we expect when we turn to our browsers? Surely, in some dark and very secret corner, intelligence agencies have not only collected data about users but also created backdoor access to large platforms like Google and Facebook, which allows them to monitor our conversations. ISPs, too, are increasingly under pressure and forced to censor Web traffic, as a recent court ruling against BT has shown. It starts with the ongoing battle over piracy, but there are actually many websites that some people wish would disappear, e.g. Greenpeace or other activist sites, not to mention fundamentalist propaganda.
As if this wasn’t enough, leaving the Web to free-market forces challenges net neutrality through the monetary power of a handful of big players. Preferential treatment of traffic may lead to a Web where organisations with deep pockets get all the attention. Imagine where this leaves democracy when it comes to marketing smaller political parties before an election. And it isn’t only the browser developers or the brand companies that are the bad guys, as recent findings on search-traffic hijacking show us.
What these developments indicate is that we are no longer dependent only on the secret algorithms from Google that lead us from query A to website B. More and more, our web queries bounce around the web in unknown ways, with unpredictable and unexpected outcomes, monitored by data companies and intelligence services. The danger is that our web experience declines from what we want to see to what others want us to see, or not to see!
By now it is rather well known that the introductory course on Artificial Intelligence (AI) from Stanford University has attracted an unbelievable amount of interest. As of today, more than 82,000 people have signed up for the free course. Time to reflect on whether the recent wave of massive open online courses (MOOCs) and this offering share similar features, and what it all means.
Maybe it is no surprise at this point that such an offering to a worldwide public was made by the organisers of the course, but even they were stunned by the viral response the offer received from around the world. Open Course Ware (OCW) and other open educational resources (OERs) have a decade of history behind them, and similar offers have been around for much longer. More recently, the MOOC movement has gained considerable momentum in the world of education, with registrations at least in the hundreds and more and more MOOCs being offered.
What is different about the Stanford AI course is that it is a traditional accredited university course with all the hallmarks of formal education. As such, I would not call it ‘open’ but ‘free’. This is a major difference from recent MOOCs, which, apart from a rough outline syllabus, have had an open structure and very little provision, mostly limited to moderation and scheduled online presentations. In MOOCs, the participants were the main movers and shakers, whereas the Stanford course has a textbook, tests, and real accreditation (though only for Stanford students). Here I spot the key difference: while MOOCs and the online AI course are both free of charge and open for anyone to join, I expect the latter to be more of a broadcast experience, similar to early radio and television courses.
It still amazes me that this particular subject and delivery mode would attract such a large audience, but maybe we are exaggerating the surprise factor. If we look back at the times when the BBC broadcast Open University courses on TV, the viewer numbers (in a single country!) may have been even bigger, despite horrible programme hours. Can 82,000 really be considered a benchmark, or is it just the beginning of more and larger open education experiences?
Whatever the motivation, both for offering free worldwide online courses and for signing up to them, it has to be welcomed that education is finally opening its doors to everyone!
While MOOC-ing about, I realised that most people I know are already hooked into Google web apps. Practically everyone I encounter on eduMOOC has a Gmail account, I receive invitations to Google Docs almost daily, and now everyone is jumping on Google+. Have people forgotten the black hat Google is wearing?!
My view is that people are so keen on Google+ not because it’s so perfect and so private, but because of the arrogant way Facebook has dealt with their customers. But this is perhaps a naive way of dealing with Google, which, not so long ago, was the bad guy of the Internet.
To give credit where it belongs, G+ does have its merits, and the company has learned a lot from past failures like Wave. There is also great appeal in the integration with Google’s highly usable, good-quality productivity services. Still, I do get the feeling that the lock-in gets tighter and the circle Google is drawing around us gets narrower and narrower. It is certainly more and more difficult to “escape”.
Could we see Google emerge as the first virtual state, taking over the rule of what we do online? Will we see a Google virtual Prime Minister soon? With the identity infrastructure they mention in their plans, it is certainly feasible. Where will this leave the rest of the Web? Will there be anarchic outcasts, outposts of unregulated (un-googlified) web users?
I realise this sounds quite sci-fi for now, but wait and see…!
In a post in early 2009, I anticipated the coming of a new Internet. Unlike people who thought Web 3.0 would give way to the Semantic Web, I have long held that, by maturing as a virtual society, the Internet would inevitably require identities around which this society would be structured, or would structure itself. Hence, I firmly believed, and still do, that we are going to see a Personal Web emerge over time. By this I mean not so much that the web experience is being personalised, but that we will have a single accredited identity, just like we have in real life with our ID cards, social security numbers, etc.
Signs are that Google is going to lead the way there. In this interview, Google’s Eric Schmidt admits that they are working on an identity infrastructure for the web. So this may finally be the plot behind Google+. And certainly Chrome’s brand-new browser identity management is a big step in this direction. Schmidt unmistakably talks about unique identities, perhaps with multiple personas. There are of course countless advantages in terms of convenience, personal safety, child protection, and the prevention of identity theft and fraud. There are numerous disadvantages too, in terms of policing, tracking, and spying.
So far Google’s long-term plans are still kept quiet, and while anonymous browsing and chatting might stay around for a while, in the end this development might mean that someone who claims to be under the age of thirteen might indeed be under age.
It probably comes as no surprise that among the 2400 participants of the current massive open online course eduMOOC a sense of confusion has spread.
Typical questions raised are “what are the learning objectives?”, “what are MOOCs about?”, or “how do I master the abundant wealth of content?” In response, help arrives from veteran MOOCers, mostly in the form of advice for un-learning: “forget normal course structures”, “forget catching up with all the postings”, “set your own objectives”, etc.
Indeed, filtering out noise and identifying the threads, tools, and groupings that are relevant to you is hard work, and there is always the danger that a MOOC drowns in anecdotes and storytelling, which can undermine the credibility and applicability of the knowledge being created.
However that may be, the real questions remain unanswered: “what is learning in a MOOC?” and “how do we know that we are learning?”
Here, I think, as in other free and unstructured learning experiences, lies an unpublished secret: the fact that learning is a feeling of wellbeing!
It’s the satisfying feeling of serendipitous discovery, of enlightened clarity, and, finally, of identity, through the shared knowledge and experience that connects you to others, and the feeling that you yourself have taken a step forward in your own existence. MOOCs, as well as formal forms of education, need to take more care that learning can be felt, not measured, by those whom it affects: the learners.
I received valuable feedback on the proposed design framework for Learning Analytics. A key question people asked was where pedagogy was in the model. Here is how I see it:
Pedagogic strategies and learning activities as such are not part of the analytics process but are implicitly contained in the input datasets that encapsulate the pedagogic behaviour of users. As we know, this behaviour depends a great deal on the platform and the pedagogic vision the developers built in (cf. Dron & Anderson, 2011). For example, data from a content sharing platform will carry a behaviourist/cognitivist pedagogy in the learner behaviour, since this is the pedagogic model underlying the technology. In any case, only the pedagogic patterns exhibited in the dataset can be analysed, and these will vary.
Additionally, pedagogy can be explicitly addressed in the goals and objectives that the LA designer sets. The LA method will determine the outcome of the analysis and together with the interpretation applied may lead to a large variety of options for consequences and interventions. If such pedagogic interventions are applied they lead to new behaviours which, once again, can be analysed through the available data.
A simple analogy would be boiling water in a pan. At any time (or continuously) you can stick a thermometer in and measure its temperature. The goal would be to determine whether you need to turn up the heat or not. The result of the analysis can then lead to the actions you want to take. The thermometer is only one method for such an analysis. An alternative would be to observe and wait until the water bubbles. Setting a threshold expectation (in the goals design) can inform you when it is time for the teabag to go in.
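The boiling-water analogy can be sketched in a few lines of code. This is purely illustrative, with hypothetical function names and values not taken from any real analytics system: the "method" reduces raw samples to one indicator, the goal set at design time is the threshold, and the outcome determines the action.

```python
# Hypothetical sketch of the thermometer analogy: a method samples the data,
# a design goal sets the threshold, and the result suggests an action.

def read_temperature(sample):
    """The chosen analytics method: reduce raw readings to one indicator."""
    return sum(sample) / len(sample)

def analyse(sample, threshold=100.0):
    """Compare the indicator against the goal set at design time."""
    temperature = read_temperature(sample)
    if temperature >= threshold:
        return "teabag in"        # goal reached: time to intervene
    return "turn up the heat"     # goal not reached: adjust and re-measure

print(analyse([98.5, 99.2, 100.4]))    # -> turn up the heat
print(analyse([100.1, 100.3, 100.2]))  # -> teabag in
```

The point of the sketch is that the thermometer is only one possible method; swapping `read_temperature` for a different observation (watching for bubbles) changes the method, not the overall design.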
The model acknowledges that pedagogic success and performance are not the only things that Learning Analytics can measure. Learning Analytics are snapshots taken from educational datasets. These snapshots can be used to reflect or to predict, in order to make adjustments and interventions (either by a human or by a system). By connecting the cornerstones of the design model in different ways, different use cases can be constructed.
A key element of the Learning Analytics process that is not explicitly present in the model is that the outcome of any analysis needs to feed a decision process which determines the consequences. Whether these are pedagogic or not depends very much on the goals specified. Decision making can be stimulated and executed through the method applied and the algorithms chosen, for example in recommender systems. But decisions can also be taken by a human (e.g. a teacher or a self-directed learner). In any case, they lead to consequences, and through a feedback loop the process can be made iterative.
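The analyse–decide–intervene loop described here can be written out as a toy iteration. Everything below is an assumption for illustration: the "dataset" is a list of engagement scores, the decision rule is a simple goal comparison, and the intervention just nudges the scores so that the changed behaviour shows up in the next round's data.

```python
# Illustrative sketch of the feedback loop: analyse, decide, intervene,
# then feed the new behaviour back into the next iteration.
# All names and values are hypothetical, not from any real LA system.

def analyse(dataset):
    # Stand-in for the chosen method/algorithm: mean engagement score.
    return sum(dataset) / len(dataset)

def decide(outcome, goal):
    # The decision step: taken by a system (as here) or by a human.
    return "intervene" if outcome < goal else "continue"

def iterate(dataset, goal, rounds=3):
    for _ in range(rounds):
        outcome = analyse(dataset)
        if decide(outcome, goal) == "intervene":
            # Consequence: the intervention changes behaviour, which
            # appears in the data analysed in the next round.
            dataset = [score + 1 for score in dataset]
    return analyse(dataset)

print(iterate([2, 3, 4], goal=5))  # -> 5.0
```

In a recommender system the `decide` step would be algorithmic; with a teacher or self-directed learner it would sit outside the code entirely, between one analysis and the next.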
Decisions based on Learning Analytics are a critical issue, because they determine the usefulness and the consequences for the stakeholders. It is here that ethics play an enormously important role. Imagine an educational dataset showing that children of immigrants perform worse in reading tasks. Several options present themselves, and in all likelihood will be exploited by political parties or others: (1) more support for immigrant children could be offered; (2) schools could be segregated into immigrant and non-immigrant; (3) right-wing politicians will not hesitate to point out the deteriorating quality of schools due to immigration. Hence, data analysis could have dramatic (and unwanted) consequences. We need to be aware of this danger!
The debate over publishing academic articles with “established” commercial journals has been raging for a while now. We see the ground shifting from under the feet of these publishers, and concessions to the Open Access movement won’t save them in the long run. The services that publishers previously provided (copy-editing, editorial work, layout, promotion, distribution) have all but disappeared; authors do everything themselves nowadays, while publishers just hold out their hands.
At the same time, we see institutions and the education establishment counting beans and pressuring researchers for output via these traditional channels. It’s about time for that to change, as it only plays into the hands of publishing houses and does not help researchers.
To me, the point of academic publishing is sharing new insights, discoveries and reflections with the scholarly Community of Practice, to open them up for further work. As such, I prefer the idea of peer recognition as a quality criterion over a quantitative inventory of writings for specialist journals that end up on the library shelves of the few institutions with enough interest to shell out large sums of money for a subscription. Gravitas in the community is what matters to me as a researcher and lecturer, and this should be conferred by one’s peers, not by people with a commercial interest.
When I look at my rather modest weblog reflections, I cannot help but notice that, in terms of reach, they probably have more impact than any journal article I could produce. Posts are immediately up for grabs, they invite and receive feedback (hence are peer reviewed), and they reach the target community swiftly and without barriers. Compare this with journal articles that follow a mechanical process, spamming people’s inboxes with requests for unpaid peer reviews and, in the end, leading to a librarian buying the (e-)publication in the hope that some lonely PhD student will find it on the shelf some day and use it for a short quote in their thesis.
Let’s face it, we all dream of discovering another Pyramid or some such world wonder that would make a dramatic change. But in today’s reality, new knowledge is created in collective efforts, through discussion and sharing, and this can hardly happen the way things used to be done.
Data analysis is a big deal these days. Ratings are another. Unavoidably, analysis of measurable data and quantification leads to comparison and, thus, to ratings and rankings.
The Collegiate Learning Assessment (CLA) services offer ways to measure how much your HE institution has improved students’ higher-order competencies and thinking skills. Apparently this allows institutions to benchmark where they currently stand. They claim not to do this with ranking in mind, but in my view this is humbug and disguise. They say it’s about “highlighting differences between them [i.e. colleges] that can lead to improvements in teaching and learning”. But we already know that institutions differ in everything: quality of teaching, aptitude of their students, funding, and output. So what do we hope to learn from such a measuring exercise?
The comparison is carried out using specially designed tests! Yet another anachronistic approach in which students are tested not on their own achievement but to provide a stick for beating their institution. The only people interested in such an exercise would be a government with further austerity plans to cut public funding for education. Who else would give a damn about the validity of such tests?
What we learned from, e.g., the research assessment exercises is that such benchmarking and comparison hardly improve the quality of the bottom half of institutions. If anything, they widen the gap between good and poor institutions by turning education into a football-like economy in which good players are transferred to rich clubs.
Apart from turning the university into a police state, this means it’s the teachers who take all the blame for student failure. The CLA doesn’t take personal factors into account, like crises with boy/girlfriends or working late in hamburger joints. There is little or no room for shared responsibility for learning, or for student ownership of their learning and success.
Additionally, as I mentioned in another post, mainstreaming and elevating pedagogic strategies to the level of national uniformity leads to loss of innovation and creative new approaches in learning. It chains teachers to a statistical mean and locks down teaching and learning to a single vision.