The Information

What is information? Is it inseparably connected to our human condition? How is the exponentially growing flow of information affecting us as people, our societies and our democracies? When The Economist talks about a post-truth society, how much of this trend is related to the failure of fact-checking, the increasing polarization and fragmentation of the media, and the distrust of ‘experts’? The Information starts with a reference to Borges’ Library of Babel:

The Library of Babel contains all books, in all languages.  Yet no knowledge can be discovered here, precisely because all knowledge is there, shelved side by side with all falsehood.  In the mirrored galleries, on the countless shelves, can be found everything and nothing.  There can be no more perfect case of information glut. We make our own storehouses.  The persistence of information, the difficulty of forgetting, so characteristic of our time, accretes confusion. (p. 373)

In The Information, James Gleick takes the reader on a historical world tour to trace the origins of our ‘Information Society’, basically an old term that keeps being reinvented. It’s a sweeping and monumental tour that takes us from African drumming through alphabets, the beginnings of science, mathematical codes, data and electronics to the spooky world of quantum physics.  He shows how information has always been central to who we are as humans. He points to foreshadowings of the current information age, such as the origin of the word “network” in the 19th century and how “computers” were people before they were machines.

The core figure in the book is Claude Shannon. In 1948 he invented information theory by making a mathematical theory out of something that doesn’t seem mathematical. He was the first to use the word ‘bit’ as a measure of information. Until then, nobody had thought to measure information in units, like meters or kilograms. He showed that all human creations, such as words, music and visual images, are related in that they can all be captured in bits. It’s amazing that this unifying idea of information, which has transformed our societies, was conceptualized less than 70 years ago.

“It’s Shannon whose fingerprints are on every electronic device we own, every computer screen we gaze into, every means of digital communication. He’s one of these people who so transform the world that, after the transformation, the old world is forgotten.” That old world, Gleick said, treated information as “vague and unimportant,” as something to be relegated to “an information desk at the library.” The new world, Shannon’s world, exalted information; information was everywhere. (New Yorker)
At its most fundamental, information is a binary choice.  A bit of information is one yes-or-no choice. This is a very powerful concept that has made a lot of modern technology possible. By this technical definition, all information can be quantified, regardless of the content of the message.  A message might take 1,000 bits and contain complete nonsense. This shows how information is at the same time empowering and desiccating. Information is everywhere, but as a result we find it increasingly hard to find meaning.  Has the easy accessibility of ‘facts’ diminished the value we assign to them?
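As a concrete illustration of counting information in bits regardless of meaning, here is a minimal Python sketch (my own toy example, not from the book) that estimates the information content of a message from the frequencies of its symbols; a nonsense string can "cost" as many bits as a meaningful sentence.

```python
import math
from collections import Counter

def bits_of_information(message: str) -> float:
    """Estimate the information content of a message in bits,
    using the empirical frequency of each symbol (Shannon entropy)."""
    counts = Counter(message)
    total = len(message)
    entropy_per_symbol = -sum(
        (n / total) * math.log2(n / total) for n in counts.values()
    )
    return entropy_per_symbol * total

# A meaningful sentence and a nonsense string of similar length
# carry a comparable number of bits:
print(bits_of_information("the library contains all books"))
print(bits_of_information("xqz vjk plw rty bnm cdf ghs aeo"))
```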
Despite the progress in producing and storing information, our human capacity to filter and process information has not changed. Gleick gives the example of his own writing process:
The tools at my disposal now compared to just 10 years ago are extraordinary. A sentence that once might have required a day of library work now might require no more than a few minutes on the Internet. That is a good thing. Information is everywhere, and facts are astoundingly accessible. But it’s also a challenge because authors today must pay more attention than ever to where we add value. And I can tell you this, the value we add is not in the few minutes of work it takes to dig up some factoid, because any reader can now dig up the same factoid in the same few minutes.
It’s interesting because this feeling of the precariousness of information is everywhere. We think information is so fragile, that if we don’t grab it and store it someplace, we’ll forget it and we’ll never have it again. The reality is that information is more persistent and robust now than it’s ever been in human history. Our ancestors, far more than us, needed to worry about how fragile information was and how easily it could vanish. When the library of Alexandria burned, most of the plays of Sophocles were lost, never to be seen again. Now, we preserve knowledge with an almost infinite ability.
Redundancy is a key characteristic of natural information networks. As Taleb taught us, decentralized networks are much more resilient than centralized structures.  Every natural language has redundancy built in. This is why people can understand text riddled with errors or missing letters and why they can understand conversation in a noisy room.  The best example of a natural information network may be life’s genetic make-up:
“DNA is the quintessential information molecule, the most advanced message processor at the cellular level—an alphabet and a code, 6 billion bits to form a human being.” “When the genetic code was solved, in the early 1960s, it turned out to be full of redundancy. Some codons are redundant; some actually serve as start signals and stop signals. The redundancy serves exactly the purpose that an information theorist would expect. It provides tolerance for errors.”
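The error tolerance that redundancy buys can be shown with the simplest redundant code there is, a triple-repetition code. This is only a toy Python sketch of the principle, not how DNA or natural language actually encode redundancy:

```python
def encode(bits):
    """Triple-repetition code: transmit every bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """A majority vote over each group of three recovers the original bit,
    as long as at most one copy per group was corrupted."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                      # noise flips one bit in transit
assert decode(sent) == message    # the message still comes through intact
```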
Technological innovation has always sparked anxiety. Gleick quotes Plato’s Socrates on how the invention of writing “will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory.” (p.30) McLuhan recognized in 1962 the dawn of the information age.  He predicted the confusions and indecisions the new era would bring and wrote about a ‘global knowing’.  Thirty years earlier, H.G. Wells had written about a World Brain, a widespread world intelligence taking the form of a network.  Wells saw this network as a gigantic decentralized encyclopedia, managed by a small group of ‘people of authority’. The network would rule the world in a ‘post-democratic’ world order.
Gleick writes that we’re still only at the start of the Information Age. Some effects on us and on our societies will only become apparent in the coming decades. Will the internet continue to evolve into a world brain, or will it splinter into various parts? Will the atomisation of our media into countless echo chambers continue, and what kind of society will it lead us into?
The library will endure; it is the universe. As for us, everything has not been written; we are not turning into phantoms. We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we recognize creatures of the information. (p.426)

Understandings and Misunderstandings about RCTs

Policy makers and the media have shown a remarkable preference for Randomized Controlled Trials, or RCTs, in recent times. After their breakthrough in medicine, they are increasingly hailed as a way to bring the human sciences into the realm of ‘evidence’-based policy. RCTs are believed to be accurate, objective and independent of the expert knowledge that is so widely distrusted these days. Policy makers are attracted by the seemingly ideology-free and theory-free focus on ‘what works’ in the RCT discourse.

Part of the appeal of RCTs lies in their simplicity.  Trials are easily explained along the lines that random selection generates two otherwise identical groups, one treated and one not. All we need to do is compare two averages.  Unlike other methods, RCTs don’t require specialized understanding of the subject matter or prior knowledge. As such, they seem a truly general tool that works in the same way in agriculture, medicine, economics and education.

Deaton cautions against this view of RCTs as the magic bullet of social research. In a lengthy but very readable NBER paper he outlines a range of misunderstandings about RCTs. These broadly fall into two categories: problems with the running of RCTs and problems with their interpretation.

Firstly, RCTs require minimal assumptions, prior knowledge or insight into the context. They are non-parametric: no information is needed about the underlying nature of the data (no assumptions about covariates, heterogeneous treatment effects or the shape of the statistical distributions of the variables).  A crucial disadvantage of this simplicity is reduced precision, because no prior knowledge or theory can be used to design a more refined research hypothesis.  And precision is not the same as a lack of bias.  In RCTs, treatment and control groups come from the same underlying distribution. Randomization guarantees that the net average balance of other causes (the error term) is zero, but only when the RCT is repeated many times on the same population (which is rarely done). I hadn’t realized this before, and it’s almost never mentioned in reports.  But it makes sense. In any one trial, the difference in means will be equal to the average treatment effect plus a term that reflects the imbalance in the net effects of the other causes. We do not know the size of this error term, and there is nothing in the randomization that limits its size.
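A quick simulation (my own toy numbers, not an example from Deaton's paper) makes this concrete: with a skewed "other causes" variable, a single randomization can land far from the true average treatment effect, and only the average over many hypothetical re-randomizations of the same population recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 1.0
# A fixed study population whose 'other causes' (say, baseline income) are very skewed:
baseline = rng.lognormal(mean=0.0, sigma=2.0, size=200)

def one_trial() -> float:
    """Randomize half the population into treatment and compare means."""
    treated = rng.permutation(200) < 100
    outcome = baseline + true_effect * treated
    return outcome[treated].mean() - outcome[~treated].mean()

estimates = [one_trial() for _ in range(5000)]
print(one_trial())          # any single trial: true effect plus an unknown imbalance term
print(np.mean(estimates))   # averaged over many repetitions: close to the true effect of 1.0
```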

RCTs rely on the fact that the difference between two means is the mean of the individual differences, i.e. the treatment effects.  This does not hold for medians. The focus on the mean makes RCTs sensitive to outliers in the data and to asymmetrical distributions. Deaton shows how an RCT can yield completely different results depending on whether an outlier falls in the treatment or the control group.  Many treatment effects are asymmetric, especially when money or health is involved. In a micro-financing scheme, a few talented but credit-constrained entrepreneurs may experience a large and positive effect, while there is no effect for the majority of borrowers. Similarly, a health intervention may have no effect on the majority but a large effect on a small group of people.
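The outlier point can be illustrated with a hypothetical micro-finance example of my own (not Deaton's data): the treatment does nothing for anyone, yet one person's extreme outcome drags the estimated "average effect" strongly positive or strongly negative depending on which arm she happens to be randomized into.

```python
import numpy as np

rng = np.random.default_rng(1)
outcomes = rng.normal(100.0, 10.0, size=100)   # outcomes, unaffected by the treatment
outcomes[0] = 10_000.0                         # one extreme outlier (a very successful entrepreneur)

def estimated_effect(outlier_treated: bool) -> float:
    """Difference in means when the outlier lands in one arm or the other.
    The true treatment effect is zero for everyone."""
    treated = np.zeros(100, dtype=bool)
    treated[rng.choice(np.arange(1, 100), size=49, replace=False)] = True
    treated[0] = outlier_treated
    return outcomes[treated].mean() - outcomes[~treated].mean()

print(estimated_effect(True))    # large positive "effect"
print(estimated_effect(False))   # large negative "effect"
```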

A key argument in favour of randomization is the ability to blind both those receiving the treatment and those administering it.  In social science, however, blinding is rarely possible. Subjects usually know whether they are receiving the treatment or not and can react to their assignment in ways that affect the outcome other than through the operation of the treatment itself. This is problematic not only because of selection bias; concerns about the placebo, Pygmalion, Hawthorne and John Henry effects are serious too.

Deaton recognizes that RCTs have their use within social sciences. When combined with other methods, including conceptual and theoretical development, they can contribute to discovering not “what works,” but why things work.

Unless we are prepared to make assumptions, and to stand on what we know, making statements that will be incredible to some, all the credibility of RCTs is for naught.

There are other cases where RCTs are valuable. Where there is good reason to doubt the good faith of experimenters, as in some pharmaceutical trials, randomization is an appropriate response; ignoring prior knowledge in the field should nevertheless be resisted as a general prescription for scientific research.  An RCT may also disprove a general theoretical proposition by providing a counterexample to it. Finally, an RCT, by demonstrating causality in some population, can be thought of as a proof of concept that the treatment is capable of working somewhere.

Economists and other social scientists know a great deal, and there are many areas of theory and prior knowledge that are jointly endorsed by large numbers of knowledgeable researchers.  Such information needs to be built on and incorporated into new knowledge, not discarded in the face of aggressive know-nothing ignorance.

The conclusions of RCTs are often wrongly applied to other contexts. RCTs do not have external validity: establishing causality does nothing in and of itself to guarantee generalizability, and their results are not automatically applicable outside the trial population. That doesn’t mean RCTs are useless in other contexts. We can often learn much from understanding why a replication failed, and use that knowledge to make appropriate use of the original findings by asking how the factors that caused the original result might be expected to operate differently in different settings. But generalizability can only be obtained by thinking through the causal chain that generated the RCT result, the underlying structures that support this causal chain, and whether and how that causal chain might operate in a new setting with different joint distributions of the causal variables; we need to know why, and whether that why will apply elsewhere.

Bertrand Russell’s chicken provides an excellent example of the limitations to straightforward extrapolation from repeated successful replication.

The bird infers, based on multiple repeated evidence, that when the farmer comes in the morning, he feeds her. The inference serves her well until Christmas morning, when he wrings her neck and serves her for Christmas dinner. Of course, our chicken did not base her inference on an RCT. But had we constructed one for her, we would have obtained exactly the same result.

The results of RCTs must be integrated with other knowledge, including the practical wisdom of policy makers, if they are to be usable outside the context in which they were constructed.

Another limitation of RCT results relates to their scalability. As with other research methods, failure of trial results to replicate at a larger scale is likely to be the rule rather than the exception. Using RCT results is not the same as assuming the same result holds in all circumstances.  Giving one child a voucher to go to private school might improve her future, but doing so for everyone can decrease the quality of education for the children who are left in the public schools.

Knowing “what works” in a trial population is of limited value without understanding the political and institutional environment in which it is set. Jean Drèze notes, based on extensive experience in India, “when a foreign agency comes in with its heavy boots and suitcases of dollars to administer a ‘treatment,’ whether through a local NGO or government or whatever, there is a lot going on other than the treatment.” There is also the suspicion that a treatment that works does so because of the presence of the “treators,” often from abroad, rather than because of the people who will be called on to make it work in reality. Unfortunately, few RCTs are replicated after the pilot on the scaled-up version of the experiment.

This readable paper from one of the foremost experts in development economics provides a valuable counterweight to the often unnuanced admiration for all things RCT.  In a previous post, I discussed Poor Economics by “randomistas” Duflo and Banerjee. For those who want to know more, there is an excellent debate online between Abhijit Banerjee (J-PAL, MIT) and Angus Deaton on the merits of RCTs.

Kicking Away The Ladder

Developed countries encourage developing countries to adopt the “good” institutions and “good” policies that will bring them economic growth and prosperity.  These are promoted by institutions such as the WTO, the IMF and the World Bank.  Recipes such as abolishing trade tariffs, establishing an independent central bank and adhering to intellectual property rights feature high on their agendas.

In his book “Kicking Away the Ladder”, Ha-Joon Chang shows that these policies are not so beneficial for developing countries.  Through historical analysis he shows that developed countries actively pursued all kinds of interventionist policies to achieve economic growth, contradicting the recipes they now prescribe.  A case of poachers turned gamekeepers.

Policies that were intensively used by the USA and European countries include tariff protection, import and export bans, direct state involvement in key industries, refusal to adopt patent laws, R&D support, granting monopoly rights, smuggling and poaching expert workers.  Chang points out that the alleged free-trade champions, the UK and the USA, were the most protectionist of all and only switched to liberalisation after World War II, when and for as long as their hegemony was safe (see table below).  Asian tigers such as South Korea and Taiwan did the same, which explains their success.  Ha-Joon Chang shows that, in comparison, current developing countries offer relatively limited protection to their economies.

[Table: Chang’s comparison of historical tariff rates in developed countries]

What does this imply for development cooperation? Developed countries often expect developing countries to adopt world-class institutions and policies in next to no time.  However, the road to these institutions was long and winding for the developed countries themselves: a slow process that took decades, with frequent reversals.  We sometimes forget that universal suffrage was only achieved as recently as 1970 in Canada and 1971 in Switzerland. It took the USA until 1938 to ban child labour. Switzerland was notoriously late to adopt patent laws (which explains its success with pharmaceutical companies).  Imposing world-class institutions or policies on developing countries can be harmful because they take up a lot of human and financial resources, which may be better spent elsewhere.  In fact, adopting such institutions and policies mainly benefits the developed countries, not the developing ones.

Ha-Joon Chang calls this practice of using successful strategies for economic development and then preventing other countries from applying the same strategies “kicking away the ladder”.  The WTO negotiation rounds and regional trade agreements have a lot in common with the “unequal” treaties between colonisers and colonised countries.

Why is institutional development so slow? Are there no last-mover benefits?  Chang gives the following reasons:

  1. Institutional development is firmly linked with the state’s capacity to collect taxes. This capacity is in turn linked to its ability to command political legitimacy and to organize the state (see the blog post on Thinking like a State).  That is another reason why tariffs are so important for developing countries: they are among the easiest taxes to collect. Institutional development is also linked to the development of human capacity within a country through its education system. Setting up “good” institutions in countries that lack the human capital for them will lead to those institutions being undermined or functioning badly, or will draw scarce resources away from other sectors.
  2. Well-functioning institutions and policies need to overcome initial resistance and prejudice. Chang points to the resistance to introducing an income tax at the beginning of the 20th century in western countries.  It can take years and gradual policy changes to overcome this. The struggle to raise the retirement age in western countries is another illustration of the double standards we sometimes apply to developing countries.
  3. Many institutions are more the result of economic development rather than a condition for it. This is contentious, but Chang points to democracy as an example.

Chang advocates an active, interventionist economic policy for developing countries.  His thesis confirms the importance of supporting developing countries in strengthening their education systems.  However, it also illustrates that the financial harm done to developing countries by unequal trade policies can be much greater than the aid flows to these countries.

Is Boys’ Underachievement Caused by Too Much Feminization in Education?

Feminization in education refers to the increasing dominance of females within the teaching profession, especially in early childhood and primary education, and its consequences.  Various arguments are given for why this is a bad thing. The first is that it deprives boys and girls of male role models.  In South Africa, with its sizable share of one-parent and zero-parent households, this could have a significant effect.  Secondly, when teachers are increasingly recruited from only half of the population, there is a higher chance of shortages of qualified teachers.

The third argument is potentially the strongest: that increasing feminization has negative effects on the learning outcomes of boys.  PISA results have consistently shown that boys are more likely than girls to be overall low-achievers, meaning that they are more likely to perform below the baseline level of proficiency in all three subjects tested in PISA: reading, mathematics and science.  Moreover, boys in OECD countries are twice as likely as girls to report that school is a waste of time, and are 5 percentage points more likely than girls to agree or strongly agree that school has done little to prepare them for adult life.

This underachievement and these negative attitudes seem to be strongly related to how girls and boys absorb society’s notions of “masculine” and “feminine” behaviour and pursuits as they grow up. For example, several research studies suggest that, for many boys, it is not acceptable to be seen to be interested in school work. Boys adopt a concept of masculinity that includes a disregard for authority, academic work and formal achievement. For these boys, academic achievement is not “cool” (Salisbury et al., 1999). Although an individual boy may understand how important it is to study and achieve at school, he will choose to do neither for fear of being excluded from the society of his male classmates. Indeed, some studies have suggested that boys’ motivation at school dissipates from the age of eight onwards and that by the age of 10 or 11, 40% of boys belong to one of three groups: the “disaffected”, the “disappointed” and the “disappeared”. Members of the last group either drop out of the education system or are thrown out. Meanwhile, studies show that girls seem to “allow” their female peers to work hard at school, as long as they are also perceived as “cool” outside of school. Other studies suggest that girls get greater intrinsic satisfaction from doing well at school than boys do.  Boys are also more likely than girls, on average, to be disruptive, to test boundaries and to be physically active – in other words, to have less self-regulation. As boys and girls mature, these gender differences grow even wider, as boys start withdrawing in class and becoming disengaged.

These findings suggest that traditional school settings are more challenging for boys than for girls.  Current school environments, with their emphasis on coursework and their downplaying of competition, may inadvertently disadvantage boys. A lack of male teachers may reinforce the impression among boys that school is something ‘for girls’. Male teachers may also be more sensitive to these challenges and better able to deal with them.

Why Are Beliefs About Education Hard to Change?

The beliefs that teachers and school leaders hold about education are arguably instrumental to their practice. These include beliefs about the purpose of education, about how people learn, about the nature of their subject (e.g. the math wars) and about learners (e.g. the nature-versus-nurture debate on the extent to which learning outcomes are genetically determined).  In our activities, we often rush to strengthen educators’ knowledge and skills.  But shouldn’t we focus more on changing their beliefs? One reason we don’t is that beliefs are hard to change and difficult to measure.

Why are beliefs so hard to change? Psychology might provide us with some answers.

According to Kahneman, we are prone to overconfidence. When making judgements, we rely on information that comes to mind, neglect what we don’t know and construct a coherent story in which our judgement makes sense.  Ninety percent of car drivers think they are better than average.  I don’t know of any similar research on teachers, but I’m pretty sure more than 50% think they’re better than average.  Add to this the fact that uncertainty is not socially acceptable for a “professional”.

Secondly, we tend to surround ourselves with people who confirm our beliefs, gradually locking ourselves into ‘echo chambers’.  According to Yochai Benkler in his book “The Wealth of Networks”, individuals with shared interests are far more likely to find each other or converge around a source of information online than offline. Social media enable members of such groups to strengthen each other’s beliefs by shutting out contradictory information, and to take collective action.  Even people with fringe beliefs are likely to find like-minded souls online and see their views reinforced. In this way, such beliefs can establish themselves and persist long after outsiders deem them debunked: see, for example, online communities devoted to the idea that the government is spraying “chemtrails” from high-flying aircraft or that evidence suggesting that vaccines cause autism is being suppressed.

Technology companies play an active role in constructing such echo chambers. In 2011, Eli Pariser, an internet activist, warned of a “filter bubble”. He worried that Google’s search algorithms, which offer users personalised results according to what the system knows of their preferences and surfing behaviour, would prevent people from accessing countervailing views. Facebook subsequently became a much better—or worse—example. Its algorithms are designed to populate people’s news feeds with content similar to material they previously “liked”.

Another explanation may lie in a general loss of trust in institutions and distrust of experts.  The Economist recently ran a briefing on the emerging post-truth politics, in which the value of evidence seems to diminish in favour of so-called “authentic” politicians who “tell it how it is” (i.e. say what people feel). Teachers may trust their own experience more than published research, for example on learning styles, ability grouping or the use of grades.  One reason for this within the education field is that education experts often contradict each other, partly because of the difficulty of generalising many findings across contexts and cultures.

Education is one of those domains on which everyone holds an opinion.  For most people, these opinions are based on their own experience and intuition about “what feels right” or “what ought to be true”.  Many feel that teaching boils down to common sense.  These opinions attract a lot of attention. An example is the regular stream of opinion articles lamenting the ‘crisis in education’, the fact that ‘education has not evolved for 100 years’ and that ‘it doesn’t equip our children with 21st century skills.’  Moreover, education seems to be particularly prone to ‘fads’ (one computer or tablet per learner, standardised testing, small class sizes), which often spring from the burning desire to ‘fix education’. Coming up with a ‘magic bullet’ is easier than turning around the oil tanker that is an education system.

Context plays a role as well.  Research in Singapore showed that teachers pointed to contextual constraints to account for the inconsistency between their espoused beliefs and their teacher-centric teaching practice. Teachers may feel pressure to cover the curriculum and get learners ready for examinations.  Parents may resist relatively new approaches such as inclusivity and heterogeneous grouping and threaten to move their children to another school. Ongoing research in the Free State, South Africa, shows no relation between teachers’ preference for a surface versus deep learning focus and learning outcomes, likely as a result of contextual factors. There might, in other words, be a gap between the ideal world of educational research and the real world of cash-strapped education systems.

Should professional development of teachers and school leaders focus on changing those beliefs?

The main argument for focusing on beliefs is that sustainable changes in teaching practice are only likely to occur when teachers support the underlying rationale.  However, beliefs are not static. People do change their beliefs, but often gradually, not as a result of one workshop. A good understanding of how to recognise and deal with resistance, of how change occurs, and of effective feedback and the adoption of innovations should be part of the repertoire of every education advisor.

Mike Hulme’s book on climate change provides some useful recipes from complexity science:

Rather than aiming to find one global solution, a variety of approaches catering to different world views and different ideas about governance, science and so on stands a better chance of curbing climate change.  Climate change stems from various other problems, such as population growth, unsustainable energy, endemic poverty, food security, deforestation and biodiversity loss.  Rather than framing climate change as a mega-problem requiring a mega-solution, Hulme argues that disentangling the issue and moving climate change to the background is more likely to yield results.

Use a variety of strategies (testimonies, research findings, inspiring stories of change) to address the variety of beliefs and of sensitivities to changing them.  Research indicates that administrators and teachers can differ significantly on this point: administrators tend to regard nationally normed standardised assessments as the most trustworthy evidence, whereas teachers grant more validity to classroom observations (Guskey, 2007).

In a trajectory that aims at changing teachers’ beliefs, a possible succession of steps is:

  • Making existing beliefs explicit
  • Creating conditions in which existing beliefs can be questioned
  • Presenting the conflict between their old and new beliefs as challenging rather than threatening
  • Providing teachers with the necessary time to reflect on their beliefs and to reconcile the new beliefs with their existing knowledge framework and teaching context.

Another point of attention is that we should take care to back up what we introduce in training activities with decent research findings and theoretical underpinnings, and discuss these and their implications with educators.  In doing this, we must help educators understand how to cope with the complexities of classroom life and how to apply theory and new findings in real classrooms where the relationship between theory and practice is complex and where numerous constraints and pressures influence teacher thinking.

The Role of Gender in Fighting Poverty

South Africa has made great strides towards achieving gender equality in education. According to the 2015 Millennium Development Goals (MDG) report, gender parity has been achieved on all education indicators. Gender gaps in access to primary education have been closed.  Proportionally more females than males are enrolled in secondary and tertiary education, and females are also more likely than males to have higher levels of educational attainment.

This progress towards gender equality has contributed to poverty reduction in various ways:

  • Increased levels of education have empowered females as they get equipped with the self-confidence, knowledge and skills to participate fully in their communities.
  • Education of girls has resulted in higher family incomes and increased productivity.
  • Health benefits of education with an impact on poverty reduction include delayed marriages, reduced fertility rates and improved maternal and child health.

Despite these achievements, the latest Gender Series report on education (2004-2014) from Statistics SA reveals many remaining challenges:

  1. There are large gaps in subject preferences and performance, which often result in social and economic gender inequalities. For example, there is a significant gap in South Africa in favour of males qualified in Science, Technology, Engineering and Mathematics (STEM) subjects, resulting in male dominance in STEM-related occupations such as engineering and architecture.  More females are enrolled in the Business and Commerce, Education and Other Humanities study fields; Education has the highest gender disparity, followed by Other Humanities (ratios of 3:1 and 1.77:1 respectively).  Male-dominated professions are often those that command higher pay.  Gender prejudice within the education system about the maths and science abilities of females remains widespread.
  2. Males are more likely than females to enrol in Master’s and doctoral studies, while females enrol more frequently in lower tertiary qualifications.
  3. Employment absorption rates for females are significantly lower than for males, leaving them more at risk of poverty despite their education levels.  Moreover, South African women remain disadvantaged in terms of pay, promotion, job stability and status.
  4. There are persistent gender inequalities in unpaid care and housework.  Women spend more time on unpaid work, which has a negative impact on their careers in the labour market, affecting their work status and pay regardless of the level of education they have obtained.
  5. Boys’ performance in literacy is significantly lower than that of girls. Boys are also more likely to leave school early. Some of the negative social impacts associated with boys dropping out of school include crime and gender-based violence. This confirms the need for a growing focus within the gender rights movement on the construction of masculinity.
  6. Girls are more likely to stay at home due to family commitments such as child minding, which seems to have a bigger impact on girls living in urban areas as opposed to rural areas. Pregnancy and marriage also act as social gender-specific obstacles that hinder access to education for girls, particularly those from rural areas.

The most fundamental impact of education and gender equality on poverty reduction may lie in its impact on slowly changing norms in a society:

“In particular, the education of boys and girls—beyond its role in building human capital—is crucial in shaping norms. In multiple discussions, adolescent boys and girls described how education exposed them to new ideas and knowledge, enlarging their capacity to analyse and encouraging critical scrutiny of established gender relations and the status quo. These discussions reaffirmed what is already known about the intergenerational transfer and reproduction of norms within households. Education fosters learning away from the household environment where gender roles are played out in every interaction and action. The research team realised the importance of ensuring that school curricula offer gender-neutral learning opportunities.”  (World Bank, 2012)


These results confirm some of the findings from the World Bank Report which I wrote about earlier.

“One of the more consistent findings across the 97 research sites is the universality and resilience of the norms that underpin gender roles.  In every research location, women and men of all generations identified the dominance of women’s domestic role and men’s bread winning role as absolutely core to female and male identities. Some of the focus groups gave evidence of gender norms changing, albeit slowly and incrementally, with new economic opportunity, markets, and urbanization.”

This blog post is a contribution to the IIEP course “Monitoring and Evaluating Gender Equality in Education”, 2016 edition.

Sources:

Taylor, S. & Spaull, N. (2015). Measuring access to learning over a period of increased access to schooling: The case of Southern and Eastern Africa since 2000. International Journal of Educational Development, 41, 47-59.

Statistics SA (2015). Gender Series Volume II: Education, 2004–2014. Report No. 03-10-12.

World Bank (2012). On Norms and Agency: Conversations about Gender Equality with Women and Men in 20 Countries. (link)


Who Owns the Future? (Jaron Lanier)

With Who Owns the Future?, Jaron Lanier has delivered another wildly thought-provoking work of “speculative advocacy”, after his 2010 book “You Are Not a Gadget”.

Lanier is a kind of technology wizard-sociologist. The New York Times described him as the father of virtual reality in the gaudy, reputation-burnishing way that Michael Jackson was the king of pop.

The book problematises our relationship with the online environment, which has evolved from a largely open web to a much more closed one.  The early Web was characterised by people designing their own websites, registering their own domains and creatively building their own unique online space. The closed Web is dominated by a few ‘Siren Servers’, technology behemoths that have come to dominate how we interact with the Web.  Driven by a one-sided notion of ‘openness’, people are encouraged to ‘share’ everything for free and to be open about their personal information.  The companies themselves, however, are secretive about the algorithms they use to lure advertisers and to decide what appears in front of you. “You don’t get to know what correlations have been calculated about you by Google, Facebook, an insurance company or a financial entity and that’s the kind of data that influences your life in a networked world.” (p.202)

“We want free online experiences so badly that we are happy to not be paid for information that comes from us now or ever. That sensibility also implies that the more dominant information becomes in our economy, the less most of us will be worth.”

Why should we care if people equate the Web with Facebook and search with Google? Lanier points to some far-reaching economic and political consequences.

Internet companies have succeeded in making people believe that data should be given to them freely.  The early internet years fetishized open access and knowledge-sharing in a way that has distracted people from demanding fairness and job security in an information economy. Through the spread of smartphones and, increasingly, the internet of things, we leave a constant data trail that is eagerly hoovered up by companies to improve the very algorithms that make these same people economically redundant.

These aggregate data are clearly very lucrative, given these companies’ market values. Networks create network effects: every additional user renders the network more powerful (Metcalfe’s Law).  These network effects tend to lead to monopolies (or oligopolies) that wield enormous power and prevent newcomers from entering the market.  This threatens the diversity capitalism needs. Lanier lays out how evolutions in Artificial Intelligence, genetics and Virtual Reality, combined with Moore’s Law, are strengthening this tendency.
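For reference, Metcalfe's Law in its usual textbook form (a standard formulation, not quoted from Lanier): a network of n users supports n(n-1)/2 possible pairwise connections, so its potential value grows roughly with the square of its user base.

$$V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^2}{2}$$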

A second impact lies in the ‘demonetisation’ of more and more sectors of the economy.  Lanier describes how the advantage of local knowledge is gradually eroded by the mining of large datasets.  Instead of relying on someone’s local knowledge of a place, an algorithm identifies which places you are likely to want to visit on your city trip; GPS and Uber have replaced the value of knowing a city well enough to get around it.  These shifts have two important consequences for markets:

  • Markets shrink as the total value that is created in sectors decreases;
  • Proportionally more people lose income as the distribution of earnings in more sectors turns from a bell-shaped curve, where the largest share goes to the middle class, to a right-skewed distribution where the bulk of earnings is concentrated among a small group of people (winner-takes-all markets).

The xMOOC rationale of replacing face-to-face lectures with lecture videos from the ‘best lecturers in the world’ fits this story. Data mining and ‘smart’ algorithms that tailor explanations and exercises to students’ interests and progress may not necessarily be better than current education systems, but they are likely to be cheaper and to benefit a few companies like Coursera. With every posting on the forum, every video you watch and every quiz you take, the system becomes ‘smarter’, gradually reducing the economic value of most of the people who take part in these courses.  Similar trends are taking shape in many middle-class professions such as accounting, medicine and transport.

The resulting political consequences are similarly profound. Democracies can only function in societies with a large middle class.  Large inequalities in a society increase the risk that either the elite financially captures the state, or that the masses vote populists with self-destructive policies into office.

Lanier does not offer easy ways out.  Banning technological progress tends to be futile, and Lanier is no ‘leftie’ advocating large-scale redistribution. He pleads for a ‘humanistic digital economy’ in which technology serves society and the continued existence of a thriving middle class is supported.  In such an economy, information is valued fairly and transactions occur transparently. People would receive small payments (nanopayments) every time their information is used. They would also pay to use information, for example when using a search engine or creating a social media profile, but they would be paid a small yet fair amount whenever their data are used by companies such as Google or Facebook to improve their algorithms. In such an economy we would, throughout our lives, be financially rewarded by an accumulation of small remunerations.

“By making opportunity more incremental, open and diverse than it was in the Sirenic era, most people ought to find some way to build up material dignity in the course of their lives.  The alternative would have been feeding data into Siren Servers, which lock people in by goading them into free-will-leeching feedback loops so that they become better represented by algorithms.” (p.347)

A redesign of the internet from a one-way network to a two-way network would make this possible.   In a one-way network you can create a link to a website or copy a file, but the original author will not know that you created the link or made the copy unless you inform them.  Lanier’s concept of provenance – the recording of where value originates – is fundamental to such an ethical information economy.  In a two-way network, information flows in both directions and illegal copying is no longer possible.  Lanier compares it with systems like Apple’s app store or the Amazon e-book store, where you don’t actually buy copies of apps or books, but only the right to use or read them. People could protect their privacy by making the cost of using their personal data prohibitive.

The solution Lanier advocates is optimistic and might be utopian. It is also realistic, though, in its acceptance that people are unlikely to forgo their desire for ‘free’ services anytime soon. Technology companies have become a ‘third force’, next to the state and religion. This book may not provide many answers (“It is too early for me to solve every problem brought up by the approach I’m advocating here”), but it does articulate a desperate need for them.