Scientific productivity is paramount in an academic economy that tries to hold on to the best, and where the best are trying to hold on. But as this interesting article discusses, we may be measuring it the wrong way.
With the application of industrial criteria to science, we may have reached the limits of research in more than one way. In every economic domain, upscaling eventually hits a ceiling beyond which no further growth is possible without cheats, compromises, or loss of quality. Apart from outright fraudulent manipulation of scientific publishing mechanisms and the lack of realistic means to replicate claimed research results, both of which are mentioned in the article, other issues are shaping up into a kind of scientific sound barrier ahead of us, one that increasingly separates us from the search for new knowledge for the benefit of humanity:
Productivity pressures that quantify research output in citations and impact measures lead, at some point of scaling, to “work-arounds” as scholars can no longer meet or maintain expectations. Just as with “backlink” trading in web SEO, new spin-off models for inflating impact factors spring into life. At the same time, industrial expectations of “a paper submission per week” force compromises on quality.
The race for money, i.e. research funding, becomes more important than the quest for knowledge. As long as only a handful of institutions are searching for funding, this may be (a) easy and (b) successful. But as soon as it is mandated and scaled up as a public objective to relieve budgets, a third-stream funding economy becomes ever harder and requires higher up-front investment. This can already be seen in EU funding rounds, where the number of applications has risen dramatically, greatly reducing each competitor’s chance of success and leaving many smaller institutions out in the cold. The same is true of industry sponsorship: making every institution knock on company doors for donations or private funding means not one needy hand stretched out, but hundreds.
Similar resource limits are encountered in empirical research. It has become a real challenge to find participants for pilots, surveys, evaluations, etc. People are over-surveyed and over-evaluated. One survey a month was still acceptable, but with pilots, tests, and questionnaires becoming a daily diet, the approach defeats itself. Scientifically, it risks low participation or low-quality returns of little scientific relevance. Alternatively, as is often the case, students are pushed into the role of lab rats, but with the growing number of tests and pilots their entire education is in danger of becoming an experiment.
Peer review, originally conceived as a measure of scientific quality, also suffers from the scaling issue. Reviewing for one reputable journal or conference now and then was rewarding and an honour to be involved in. But with the growth in publication outlets, the demands on reviewers’ unpaid time have grown out of all proportion. This, again, leads to poor engagement with the task.
The paradox in all this is that the more organisations try to quantify and control these issues, the more they fail. Scientific half-life is shortened not only by the speed at which new knowledge is created, but also by the amount of invalidity it contains. Do we have a bubble that is about to burst?