- higher returns on capital than labour (Piketty factor)
- high incomes from labour and capital are increasingly concentrated in the same people
- technological innovation that favours the rich (capital rents, higher wage dispersion)
- decreasing power of unions (due to changing labour markets)
- high availability of labour (opening up of China, India and USSR in 1990s)
- increasing scalability and emergence of more winner-takes-all markets (e.g. education)
- capture of political process (democracy) and media by the rich
- monopolisation of sectors
- investment in public education
- redistribution of wealth through progressive taxes or social programmes
- wars, epidemics and natural disasters (World Wars or the Plague in medieval times)
- scarcity of labour (can be reduced by immigration)
- technological innovation that favours the poor (speculative)
What is information? Is it inseparably connected to the human condition? How is the exponentially growing flow of information affecting us as people, our societies, our democracies? When The Economist talks about a post-truth society, how much of this trend is related to the failure of fact-checking, the increasing polarisation and fragmentation of media, and the distrust of ‘experts’? The Information starts with a reference to Borges’ Library of Babel:
The Library of Babel contains all books, in all languages. Yet no knowledge can be discovered here, precisely because all knowledge is there, shelved side by side with all falsehood. In the mirrored galleries, on the countless shelves, can be found everything and nothing. There can be no more perfect case of information glut. We make our own storehouses. The persistence of information, the difficulty of forgetting, so characteristic of our time, accretes confusion. (p. 373)
In The Information, James Gleick takes the reader on a historical world tour to trace the origins of our ‘Information Society’, basically an old term that keeps on being reinvented. It’s a sweeping and monumental tour that takes us from African drumming through alphabets, the beginnings of science, mathematical codes, data and electronics to the spooky world of quantum physics. He shows how information has always been central to who we are as humans. He points to foreshadowings of the current information age, such as the origin of the word “network” in the 19th century, and how “computers” were people before they were machines.
The core figure in the book is Claude Shannon. In 1948 he invented information theory by making a mathematical theory out of something that doesn’t seem mathematical. He was the first to use the word ‘bit’ as a measure of information. Until then, nobody would have thought to measure information in units, like meters or kilograms. He showed that all human creations, such as words, music and visual images, are related in that they can all be captured in bits. It’s amazing that this unifying idea of information, which has transformed our societies, was conceptualized less than 70 years ago.
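Shannon’s unit can be made concrete with a few lines of Python (my own illustration, not from the book): the entropy of a message is the average number of bits each symbol carries, given the observed symbol frequencies.

```python
from collections import Counter
from math import log2

def entropy_bits(message: str) -> float:
    """Shannon entropy: the average number of bits per symbol,
    computed from the observed symbol frequencies."""
    counts = Counter(message)
    n = len(message)
    # Each symbol with probability p = c/n contributes p * log2(1/p) bits.
    return sum((c / n) * log2(n / c) for c in counts.values())

# A fair coin flip carries exactly 1 bit per symbol.
print(entropy_bits("HTHTHTHT"))  # 1.0
# A fully predictable message carries 0 bits.
print(entropy_bits("AAAAAAAA"))  # 0.0
```

This is why a compressed file resists further compression: its symbols already approach the maximum entropy per bit.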
“It’s Shannon whose fingerprints are on every electronic device we own, every computer screen we gaze into, every means of digital communication. He’s one of these people who so transform the world that, after the transformation, the old world is forgotten.” That old world, Gleick said, treated information as “vague and unimportant,” as something to be relegated to “an information desk at the library.” The new world, Shannon’s world, exalted information; information was everywhere. (New Yorker)
The tools at my disposal now compared to just 10 years ago are extraordinary. A sentence that once might have required a day of library work now might require no more than a few minutes on the Internet. That is a good thing. Information is everywhere, and facts are astoundingly accessible. But it’s also a challenge because authors today must pay more attention than ever to where we add value. And I can tell you this, the value we add is not in the few minutes of work it takes to dig up some factoid, because any reader can now dig up the same factoid in the same few minutes.
“DNA is the quintessential information molecule, the most advanced message processor at the cellular level—an alphabet and a code, 6 billion bits to form a human being.” “When the genetic code was solved, in the early 1960s, it turned out to be full of redundancy. Some codons are redundant; some actually serve as start signals and stop signals. The redundancy serves exactly the purpose that an information theorist would expect. It provides tolerance for errors.”
The library will endure; it is the universe. As for us, everything has not been written; we are not turning into phantoms. We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we recognize creatures of the information. (p.426)
Policy makers and the media have shown a remarkable preference for Randomized Controlled Trials or RCTs in recent times. After their breakthrough in medicine, they are increasingly hailed as a way to bring human sciences into the realm of ‘evidence’-based policy. RCTs are believed to be accurate, objective and independent of the expert knowledge that is so widely distrusted these days. Policy makers are attracted by the seemingly ideology-free and theory-free focus on ‘what works’ in the RCT discourse.
Part of the appeal of RCTs lies in their simplicity. Trials are easily explained along the lines that random selection generates two otherwise identical groups, one treated and one not. All we need is to compare two averages. Unlike other methods, RCTs don’t require specialized understanding of the subject matter or prior knowledge. As such, it seems a truly general tool that works in the same way in agriculture, medicine, economics and education.
Deaton cautions against this view of RCTs as the magic bullet in social research. In a lengthy but very readable NBER paper he outlines a range of misunderstandings about RCTs. These broadly fall into two categories: problems with the running of RCTs and problems with their interpretation.
Firstly, RCTs require minimal assumptions, prior knowledge or insight into the context. They are non-parametric: no information is needed about the underlying nature of the data (no assumptions about covariates, heterogeneous treatment effects or the shape of the statistical distributions of the variables). A crucial disadvantage of this simplicity is reduced precision, because no prior knowledge or theory can be used to design a more refined research hypothesis. Precision is not the same as a lack of bias. In RCTs, treatment and control groups come from the same underlying distribution. Randomization guarantees that the net average balance of other causes (the error term) is zero, but only when the RCT is repeated many times on the same population (which is rarely done). I hadn’t realized this before, and it’s almost never mentioned in reports. But it makes sense. In any one trial, the difference in means equals the average treatment effect plus a term that reflects the imbalance in the net effects of the other causes. We do not know the size of this error term, and there is nothing in the randomization that limits its size.
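The point that randomization only balances the other causes on average, across repeated trials, can be illustrated with a small simulation (hypothetical numbers, not from Deaton’s paper): in any single trial the estimate deviates from the true effect by an imbalance term of unknown size, and only the average over many repetitions converges.

```python
import random

random.seed(1)
ATE = 2.0  # the true average treatment effect

def one_trial(n=50):
    """Run a single RCT on a fresh sample: randomize, treat, compare means."""
    # Each subject has an unobserved baseline outcome (the 'other causes').
    baseline = [random.gauss(10, 5) for _ in range(2 * n)]
    random.shuffle(baseline)  # random assignment to the two groups
    treated, control = baseline[:n], baseline[n:]
    treated = [y + ATE for y in treated]  # apply the treatment effect
    return sum(treated) / n - sum(control) / n

# A single trial gives ATE plus an imbalance term of unknown size...
estimates = [one_trial() for _ in range(2000)]
print("one trial:", round(estimates[0], 2))
# ...only over many repeated trials does the imbalance net out to zero.
print("mean over 2000 trials:", round(sum(estimates) / len(estimates), 2))
```

Reports that present a single trial’s difference in means as "the" treatment effect are silently ignoring that first term.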
RCTs are based on the fact that the difference in two means is the mean of the individual differences, i.e. the treatment effects. This is not valid for medians. This focus on the mean makes them sensitive to outliers in the data and to asymmetrical distributions. Deaton shows how an RCT can yield completely different results depending on whether an outlier falls in the treatment or control group. Many treatment effects are asymmetric, especially when money or health is involved. In a micro-financing scheme, a few talented, but credit-constrained entrepreneurs may experience a large and positive effect, while there is no effect for the majority of borrowers. Similarly, a health intervention may have no effect on the majority, but a large effect on a small group of people.
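Deaton’s outlier point is easy to reproduce with made-up numbers (a hypothetical sketch, not his data): with one star entrepreneur among otherwise identical borrowers, the estimated mean effect flips sign depending purely on which group the outlier happens to land in, while the medians do not move at all.

```python
from statistics import mean, median

# Ten ordinary borrowers with no treatment effect, plus one outlier
# (a talented, credit-constrained entrepreneur) with outcome 100.
ordinary = [10.0] * 10
outlier = [100.0]

# Trial A: randomization happens to put the outlier in the treatment group.
treated_a, control_a = ordinary[:5] + outlier, ordinary[5:]
# Trial B: the same outlier lands in the control group instead.
treated_b, control_b = ordinary[:5], ordinary[5:] + outlier

print(mean(treated_a) - mean(control_a))      # 15.0: large positive "effect"
print(mean(treated_b) - mean(control_b))      # -15.0: large negative "effect"
print(median(treated_a) - median(control_a))  # 0.0: the medians are unmoved
```

The median would be robust here, but as the text notes, randomization only licenses inference about the difference in means.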
A key argument in favour of randomization is the ability to blind both those receiving the treatment and those administering it. In social science, blinding is rarely possible though. Subjects usually know whether they are receiving the treatment or not and can react to their assignment in ways that can affect the outcome other than through the operation of the treatment. This is problematic, not only because of selection bias. Concerns about the placebo, Pygmalion, Hawthorne and John Henry effects are serious.
Deaton recognizes that RCTs have their use within social sciences. When combined with other methods, including conceptual and theoretical development, they can contribute to discovering not “what works,” but why things work.
Unless we are prepared to make assumptions, and to stand on what we know, making statements that will be incredible to some, all the credibility of RCTs is for naught.
Secondly, in cases where there is good reason to doubt the good faith of experimenters, as in some pharmaceutical trials, randomization is the appropriate response; however, ignoring prior knowledge in the field should be resisted as a general prescription for scientific research. Thirdly, an RCT may disprove a general theoretical proposition by providing a counterexample. Finally, by demonstrating causality in some population, an RCT can serve as proof of concept that the treatment is capable of working somewhere.
Economists and other social scientists know a great deal, and there are many areas of theory and prior knowledge that are jointly endorsed by large numbers of knowledgeable researchers. Such information needs to be built on and incorporated into new knowledge, not discarded in the face of aggressive know-nothing ignorance.
The conclusions of RCTs are often wrongly applied to other contexts. RCTs do not have external validity: establishing causality does nothing in and of itself to guarantee generalizability, and the results are not applicable outside the trial population. That doesn’t mean that RCTs are useless in other contexts. We can often learn much from understanding why replication failed, and use that knowledge to make appropriate use of the original findings by examining how the factors that caused the original result might be expected to operate differently in different settings. However, generalizability can only be obtained by thinking through the causal chain that generated the RCT result, the underlying structures that support this causal chain, and whether and how that causal chain might operate in a new setting with different joint distributions of the causal variables. We need to know why, and whether that why will apply elsewhere.
Bertrand Russell’s chicken provides an excellent example of the limitations to straightforward extrapolation from repeated successful replication.
The bird infers, based on multiple repeated evidence, that when the farmer comes in the morning, he feeds her. The inference serves her well until Christmas morning, when he wrings her neck and serves her for Christmas dinner. Of course, our chicken did not base her inference on an RCT. But had we constructed one for her, we would have obtained exactly the same result.
The results of RCTs must be integrated with other knowledge, including the practical wisdom of policy makers, if they are to be usable outside the context in which they were constructed.
Another limitation of RCT results relates to their scalability. As with other research methods, failure of trial results to replicate at a larger scale is likely to be the rule rather than the exception. Using RCT results is not the same as assuming that the same result holds in all circumstances. Giving one child a voucher to go to private school might improve her future, but doing so for everyone can decrease the quality of education for the children who are left in the public schools.
Knowing “what works” in a trial population is of limited value without understanding the political and institutional environment in which it is set. Jean Drèze notes, based on extensive experience in India, “when a foreign agency comes in with its heavy boots and suitcases of dollars to administer a ‘treatment,’ whether through a local NGO or government or whatever, there is a lot going on other than the treatment.” There is also the suspicion that a treatment that works does so because of the presence of the “treators,” often from abroad, rather than because of the people who will be called to work it in reality. Unfortunately, few RCTs are replicated after the pilot on the scaled-up version of the experiment.
This readable paper from one of the foremost experts in development economics provides a valuable counterweight to the often unnuanced admiration for all things RCT. In a previous post, I discussed Poor Economics from “randomistas” Duflo and Banerjee. For those who want to know more, there is an excellent debate online between Abhijit Banerjee (J-PAL, MIT) and Angus Deaton on the merits of RCTs.
Developed countries stimulate developing countries to adopt the “good” institutions and “good” policies which will bring them economic growth and prosperity. These are promoted by institutions such as the WTO, the IMF and the World Bank. Recipes such as abolishing trade tariffs, an independent central bank and adhering to intellectual property rights feature high on their agendas.
In his book “Kicking Away the Ladder”, Ha-Joon Chang shows that these policies are not so beneficial for developing countries. Through historical analysis he shows that developed countries actively pursued all types of interventionist policies to achieve economic growth, contradicting the recipes they are now prescribing. A case of poachers turned gamekeepers.
Policies that were intensively used by the USA and European countries include tariff protection, import and export bans, direct state involvement in key industries, refusal to adopt patent laws, R&D support, granting monopoly rights, smuggling and poaching expert workers. Chang points out that alleged free trade champions, the UK and USA, were the most protective of all and only switched to liberalisation after World War II when and as long as their hegemony was safe (see table below). Asian tigers such as South Korea and Taiwan did the same, which explains their success. Ha-Joon Chang shows that, in comparison, current developing countries offer relatively limited protection to their economies.
What does this imply for development cooperation? Developed countries often expect developing countries to adopt world-class institutions and policies virtually overnight. However, the path to these institutions in the developed countries themselves was long and winding, a slow process that took decades, with frequent reversals. We sometimes forget that universal suffrage was only achieved as recently as 1970 (in Canada) or 1971 (in Switzerland). It took the USA until 1938 to ban child labour. Switzerland was notoriously late to adopt patent laws (which helps explain the success of its pharmaceutical companies). Imposing world-class institutions or policies on developing countries can be harmful because they take a lot of human and financial resources, which may be better spent elsewhere. In fact, adopting such institutions and policies mainly benefits the developed countries, not the developing ones.
Ha-Joon Chang calls this practice of using successful strategies for economic development and then preventing other countries from applying the same strategies “kicking away the ladder”. The WTO negotiation rounds and regional trade agreements have a lot in common with the “unequal” treaties between colonisers and colonised countries.
Why is institutional development so slow? Are there no last-mover benefits? Chang gives the following reasons:
- Institutional development is firmly linked with the state’s capacity to collect taxes. This capacity is linked to its ability to command political legitimacy and its capacity to organize the state (see blog post on Thinking like a State). That’s also another reason why tariffs are so important for developing countries: they are among the easiest taxes to collect. Institutional development is also linked to the development of human capacity within a country through its education system. Setting up “good” institutions in countries that don’t have the human capital for them will undermine those institutions, make them function badly or draw scarce resources away from other sectors.
- Well-functioning institutions and policies need to fight initial resistance and prejudice. Chang points to the resistance to introducing an income tax at the beginning of the 20th century in western countries. It can take years and gradual policy changes to overcome this. The struggle to raise the retirement age in western countries is another illustration of the sometimes double standards we use toward developing countries.
- Many institutions are more the result of economic development rather than a condition for it. This is contentious, but Chang points to democracy as an example.
Chang advocates for developing countries to pursue an active interventionist economic policy. His thesis confirms the importance of supporting developing countries in the strengthening of their education systems. However, it also illustrates that the financial harm to developing countries as a result of unequal trade policies can be much higher than the aid flows to these countries.
Feminization in education refers to the increasing dominance of females within the teaching profession, especially in early childhood and primary education, and its consequences. Various arguments are given for why this is generally a bad thing. The first is that it deprives boys and girls of male role models. In South Africa, with its sizable share of one-parent and zero-parent households, this could have a significant effect. Secondly, when teachers are recruited from only half of the population, there is a higher chance of qualified teacher shortages.
The third argument is potentially the strongest, that increasing feminisation has negative effects on learning outcomes of boys. PISA results have consistently shown that boys are more likely than girls to be overall low-achievers, meaning that they are more likely than girls to perform below the baseline level of proficiency in all three of the subjects that are tested in PISA: reading, mathematics and science. Moreover, boys in OECD countries are twice as likely as girls to report that school is a waste of time, and are 5 percentage points more likely than girls to agree or strongly agree that school has done little to prepare them for adult life when they leave school.
This underachievement and these negative attitudes seem to be strongly related to how girls and boys absorb society’s notions of “masculine” and “feminine” behaviour and pursuits as they grow up. For example, several research studies suggest that, for many boys, it is not acceptable to be seen to be interested in school work. Boys adopt a concept of masculinity that includes a disregard for authority, academic work and formal achievement. For these boys, academic achievement is not “cool” (Salisbury et al., 1999). Although an individual boy may understand how important it is to study and achieve at school, he will choose to do neither for fear of being excluded from the society of his male classmates. Indeed, some studies have suggested that boys’ motivation at school dissipates from the age of eight onwards and that by the age of 10 or 11, 40% of boys belong to one of three groups: the “disaffected”, the “disappointed” and the “disappeared”. Members of the latter group either drop out of the education system or are thrown out. Meanwhile, studies show that girls seem to “allow” their female peers to work hard at school, as long as they are also perceived as “cool” outside of school. Other studies suggest that girls get greater intrinsic satisfaction from doing well at school than boys do. Boys are more likely than girls, on average, to be disruptive, test boundaries and be physically active – in other words, to have less self-regulation. As boys and girls mature, gender differences grow even wider as boys start withdrawing in class and becoming disengaged.
These findings seem to suggest that traditional school settings are more challenging for boys than for girls. Current school environments, with their emphasis on coursework and downplaying of competition, may inadvertently disadvantage boys. A lack of male teachers may reinforce the impression among boys that school is something ‘for girls’. Male teachers may also be more sensitive to these challenges and better able to deal with them.
The beliefs that teachers and school leaders hold about education are arguably instrumental to their practice. These include beliefs about the purpose of education, beliefs about how people learn, beliefs about the nature of their subject (e.g. the math wars) and beliefs about learners (the debate on the extent to which learning outcomes are genetically determined: the nature vs nurture debate). In our activities, we often rush to strengthen educators’ knowledge and skills. But shouldn’t we focus more on changing their beliefs? One reason we don’t is that beliefs are hard to change, and changes in belief are difficult to measure.
Why are beliefs so hard to change? Psychology might provide us with some answers.
According to Kahneman, we are prone to overconfidence. When making judgements, we rely on information that comes to mind, neglect what we don’t know and construct a coherent story in which our judgement makes sense. 90% of car drivers think they are better than average. I don’t know of any similar research for teachers, but I’m pretty sure more than 50% think they’re better than average. Add to this the fact that uncertainty is not socially acceptable for a “professional”.
Secondly, we tend to surround ourselves with people who confirm our beliefs, gradually locking ourselves into ‘echo chambers’. According to Yochai Benkler in his book “The Wealth of Networks”, individuals with shared interests are far more likely to find each other or converge around a source of information online than offline. Social media enable members of such groups to strengthen each other’s beliefs by shutting out contradictory information, and to take collective action. Even people with fringe beliefs are likely to find like-minded souls online and see their views reinforced. In these ways, fringe beliefs can establish themselves and persist long after outsiders deem them debunked: see, for example, online communities devoted to the idea that the government is spraying “chemtrails” from high-flying aircraft, or that evidence suggesting that vaccines cause autism is being suppressed.
Technology companies play an active role in constructing such echo chambers. In 2011, Eli Pariser, an internet activist, warned of a “filter bubble”. He worried that Google’s search algorithms, which offer users personalised results according to what the system knows of their preferences and surfing behaviour, would prevent people from accessing countervailing views. Facebook subsequently became a much better—or worse—example. Its algorithms are designed to populate people’s news feeds with content similar to material they previously “liked”.
Another explanation may lie in a general loss of trust in institutions and distrust of experts. The Economist recently ran a briefing on the emerging post-truth politics, in which the value of evidence seems to diminish in favour of so-called “authentic” politicians, who “tell it how it is” (ie, say what people feel). Teachers may trust their own experience more than published research, for example on learning styles, ability grouping or the use of grades. One reason for this within education is that experts often contradict each other, partly because of the difficulty of generalising findings across contexts and cultures.
Education is one of those domains on which everyone holds an opinion. For most people, these opinions are based on their own experience and intuition about “what feels right” or “what ought to be true”. Many feel that teaching boils down to common sense. These opinions attract a lot of attention. An example is the regular stream of opinion articles lamenting the ‘crisis in education’, the fact that ‘education has not evolved for 100 years’ and that ‘it doesn’t equip our children with 21st century skills’. Moreover, education seems to be particularly prone to ‘fads’ (one computer or tablet per learner, standardised testing, small class sizes), which often stem from the burning desire to ‘fix education’. Coming up with a ‘magic bullet’ is easier than turning the oil tanker that an education system is.
Context plays a role as well. Research in Singapore showed that teachers pointed to contextual constraints to account for the inconsistency between their espoused beliefs and their teacher-centric teaching practice. Teachers may feel pressure to cover the curriculum and get learners ready for examinations. Parents may resist relatively new approaches like inclusivity and heterogeneous grouping and threaten to move their children to another school. Ongoing research in Free State, South Africa, shows no relation between teachers’ preference for a surface versus deep learning focus and learning outcomes, likely as a result of contextual factors. There might, in other words, be a gap between the ideal world of educational research and the real world of cash-strapped education systems.
Should professional development of teachers and school leaders focus on changing those beliefs?
The main argument for focusing on beliefs is that sustainable changes in teaching practice are only likely to occur when teachers support the underlying rationale. However, beliefs are not static. People do change their beliefs, but often gradually, not as a result of one workshop. Good insight into recognising and dealing with resistance, how change occurs, effective feedback and the adoption of innovations should be part of the repertoire of every education advisor.
Mike Hulme’s book on climate change provides some useful recipes from complexity science:
Rather than aiming to find one global solution, a variety of approaches catering to different world views, ideas about governance, science etc. stands a better chance of curbing climate change. Climate change derives from various other problems, such as population growth, unsustainable energy, endemic poverty, food security, deforestation and biodiversity loss. Rather than framing climate change as a mega-problem requiring a mega-solution, Hulme argues that disentangling the issue, moving climate change to the background, is more likely to be effective.
Using a variety of strategies (testimonies, research findings, inspiring stories of change) for a variety of beliefs and sensitivities for changing them. Research indicates that the perspectives of administrators and teachers can differ significantly on this point: administrators tend to trust nationally normed standardised assessments, whereas teachers grant more validity to classroom observations (Guskey, 2007).
In a trajectory that aims at changing teachers’ beliefs, a possible succession of steps is:
- Making existing beliefs explicit
- Creating conditions in which existing beliefs can be questioned
- Presenting the conflict between their old and new beliefs as challenging rather than threatening
- Providing teachers with the necessary time to reflect on their beliefs and to reconcile the new beliefs with their existing knowledge framework and teaching context.
Another point of attention is that we should take care to back up what we introduce in training activities with decent research findings and theoretical underpinnings, and discuss these and their implications with educators. In doing this, we must help educators understand how to cope with the complexities of classroom life and how to apply theory and new findings in real classrooms where the relationship between theory and practice is complex and where numerous constraints and pressures influence teacher thinking.