The Information

What is information? Is it inseparably connected to our human condition? How is the exponentially growing flow of information affecting us as people, our societies, our democracies? When The Economist writes about a post-truth society, how much of that trend is related to the failure of fact-checking, the increasing polarisation and fragmentation of the media, and the distrust of ‘experts’?  The Information opens with a reference to Borges’ Library of Babel:

The Library of Babel contains all books, in all languages.  Yet no knowledge can be discovered here, precisely because all knowledge is there, shelved side by side with all falsehood.  In the mirrored galleries, on the countless shelves, can be found everything and nothing.  There can be no more perfect case of information glut. We make our own storehouses.  The persistence of information, the difficulty of forgetting, so characteristic of our time, accretes confusion. (p. 373)

In The Information, James Gleick takes the reader on a historical world tour to trace the origins of our ‘Information Society’, an old term that keeps being reinvented. It’s a sweeping and monumental tour that takes us from African drumming through alphabets, the beginnings of science, mathematical codes, data and electronics to the spooky world of quantum physics.  He shows how information has always been central to who we are as humans. He points to foreshadowings of the current information age, such as the origin of the word “network” in the 19th century and how “computers” were people before they were machines.

The core figure in the book is Claude Shannon. In 1948 he invented information theory by making a mathematical theory out of something that doesn’t seem mathematical at all. He was the first to use the word ‘bit’ as a measure of information. Until then, nobody would have thought to measure information in units, like meters or kilograms. He showed how human creations such as words, music and visual images can all be expressed in bits. It’s remarkable that this unifying idea of information, which has transformed our societies, was conceptualized less than 70 years ago.

“It’s Shannon whose fingerprints are on every electronic device we own, every computer screen we gaze into, every means of digital communication. He’s one of these people who so transform the world that, after the transformation, the old world is forgotten.” That old world, Gleick said, treated information as “vague and unimportant,” as something to be relegated to “an information desk at the library.” The new world, Shannon’s world, exalted information; information was everywhere. (New Yorker)
At its most fundamental, information is a binary choice.  A bit of information is one yes-or-no choice. This is a very powerful concept that has made much of modern technology possible. By this technical definition, every message has a measurable size in bits, regardless of its content: a message might take 1,000 bits and contain complete nonsense. This shows how information is at the same time empowering and desiccating. Information is everywhere, but as a result we find it increasingly hard to find meaning.  Has the easy accessibility of ‘facts’ diminished the value we assign to them?
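To make that concrete, here is a minimal sketch of my own (not from the book; the function name and example strings are invented) of Shannon’s measure in action. It counts bits purely from the statistics of the symbols, so a meaningful sentence and a scrambled version of it come out at exactly the same size:

```python
import math
import random
from collections import Counter

def shannon_bits(message: str) -> float:
    """Bits needed to encode a message, based only on how often each
    character occurs (Shannon entropy per symbol times message length)."""
    counts = Counter(message)
    n = len(message)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy * n

meaningful = "the library of babel contains all books"
nonsense = "".join(random.sample(meaningful, len(meaningful)))  # same letters, scrambled

# Both messages cost the same number of bits: meaning plays no role in the measure.
print(math.isclose(shannon_bits(meaningful), shannon_bits(nonsense)))  # True
```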
Despite the progress in producing and storing information, our human capacity to filter and process it has hardly changed. Gleick gives the example of his own writing process:
The tools at my disposal now compared to just 10 years ago are extraordinary. A sentence that once might have required a day of library work now might require no more than a few minutes on the Internet. That is a good thing. Information is everywhere, and facts are astoundingly accessible. But it’s also a challenge because authors today must pay more attention than ever to where we add value. And I can tell you this, the value we add is not in the few minutes of work it takes to dig up some factoid, because any reader can now dig up the same factoid in the same few minutes.
It’s interesting because this feeling of the precariousness of information is everywhere. We think information is so fragile, that if we don’t grab it and store it someplace, we’ll forget it and we’ll never have it again. The reality is that information is more persistent and robust now than it’s ever been in human history. Our ancestors, far more than us, needed to worry about how fragile information was and how easily it could vanish. When the library of Alexandria burned, most of the plays of Sophocles were lost, never to be seen again. Now, we preserve knowledge with an almost infinite ability.
Redundancy is a key characteristic of natural information networks. As Taleb taught us, decentralized networks are much more resilient than centralized structures.  Every natural language has redundancy built in. This is why people can understand text riddled with errors or missing letters, and why they can follow a conversation in a noisy room.  The best example of a natural information network may be life’s genetic make-up:
“DNA is the quintessential information molecule, the most advanced message processor at the cellular level—an alphabet and a code, 6 billion bits to form a human being.” “When the genetic code was solved, in the early 1960s, it turned out to be full of redundancy. Some codons are redundant; some actually serve as start signals and stop signals. The redundancy serves exactly the purpose that an information theorist would expect. It provides tolerance for errors.”
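As a toy illustration of that point (my own sketch, not Gleick’s, and far cruder than the genetic code), a simple repetition code shows how deliberately redundant messages survive noise: repeat every bit three times, and any single flipped bit can be voted away.

```python
def encode(bits):
    """Crude redundancy: send every bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each group of three tolerates one flipped bit per group."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0 for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                    # noise flips one bit in transit
print(decode(sent) == message)  # True: the redundancy absorbed the error
```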
Technological innovation has always sparked anxiety. Gleick quotes Plato’s Socrates: the invention of writing “will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory.” (p.30) McLuhan recognised in 1962 the dawn of the information age.  He predicted the confusions and indecisions the new era would bring and wrote about a ‘global knowing’.  Thirty years earlier, H.G. Wells had written about a World Brain, a widespread world intelligence taking the form of a network.  Wells saw this network as a gigantic decentralized encyclopedia, managed by a small group of ‘people of authority’. The network would rule the world in a ‘post-democratic’ world order.
Gleick writes that we’re still only at the start of the Information Age. Some effects on us and on our societies will only become apparent in the coming decades. Will the internet continue to evolve into a world brain, or will it splinter into various parts? Will the atomisation of our media into countless echo chambers continue, and what kind of society will it lead us into?
The library will endure; it is the universe. As for us, everything has not been written; we are not turning into phantoms. We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we recognize creatures of the information. (p.426)

Who Owns the Future? (Jaron Lanier)

With Who Owns the Future, Jaron Lanier has delivered another wildly thought-provoking work of “speculative advocacy”, following his 2010 work “You Are Not a Gadget”.

Lanier is a kind of technology wizard-sociologist. The New York Times described him as the father of virtual reality in the gaudy, reputation-burnishing way that Michael Jackson was the king of pop.

The book problematises our relationship with the online environment, which has evolved from a largely open web to a much more closed one.  The early Web was characterised by people designing their own websites, registering their own domains and creatively building their unique online space. The closed Web is dominated by a few ‘Siren Servers’, technology behemoths that have come to dominate how we interact with the Web.  Driven by a one-sided notion of ‘openness’, people are encouraged to ‘share’ everything for free and to be open about their personal information.  However, these companies are secretive about the algorithms they use to lure advertisers and to decide what appears in front of you. “You don’t get to know what correlations have been calculated about you by Google, Facebook, an insurance company or a financial entity and that’s the kind of data that influences your life in a networked world.” (p.202)

“We want free online experiences so badly that we are happy to not be paid for information that comes from us now or ever. That sensibility also implies that the more dominant information becomes in our economy, the less most of us will be worth.”

Why should we care if people equate the Web with Facebook and search with Google? Lanier presents some far-reaching economic and political consequences.

Internet companies have succeeded in making people believe that data should be freely given to them.  The early internet years fetishized open access and knowledge-sharing in a way that has distracted people from demanding fairness and job security in an information economy. Through the spread of smartphones and, increasingly, the internet of things, we leave a constant data trail that is eagerly hoovered up by companies to improve the very algorithms that make those same people economically redundant.

These aggregate data are clearly very lucrative, given these companies’ market values. Networks create network effects: every additional user renders the network more valuable (Metcalfe’s Law).  These network effects tend to lead to monopolies (or oligopolies) that wield enormous power and prevent newcomers from entering the market.  This threatens the diversity capitalism needs. Lanier lays out how evolutions in Artificial Intelligence, genetics and Virtual Reality, combined with Moore’s Law, strengthen this tendency.
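A back-of-the-envelope sketch (my numbers, purely illustrative, not Lanier’s) of why those network effects favour incumbents: if a network’s value grows with the number of possible connections between users, a network with ten times the users is worth roughly a hundred times as much, so challengers can rarely catch up.

```python
def metcalfe_value(users: int) -> int:
    """Metcalfe's Law: value scales with the number of possible
    pairwise connections, n * (n - 1) / 2."""
    return users * (users - 1) // 2

incumbent = metcalfe_value(1_000_000)  # hypothetical Siren Server
newcomer = metcalfe_value(100_000)     # hypothetical challenger
print(round(incumbent / newcomer))     # ~100: ten times the users, a hundred times the value
```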

A second impact lies in the ‘demonetisation’ of more and more sectors of the economy.  Lanier describes how the advantage of local knowledge is gradually eroded by mining large datasets.  Instead of local knowledge about a place, an algorithm identifies which places you are likely to want to visit on your city trip.  GPS and Uber have replaced the local knowledge that was once needed to get around a city.  These shifts have two important consequences for markets:

  • Markets shrink as the total value that is created in sectors decreases;
  • Proportionally more people lose income as the distribution of earnings in more sectors turns from a bell-shaped curve, where the largest share goes to the middle class, to a right-skewed distribution where the bulk of earnings is concentrated among a small group of people (winner-takes-all markets).

The xMOOC rationale of replacing face-to-face lectures with lecture videos from the ‘best lecturers in the world’ fits this story. Data mining and ‘smart’ algorithms that tailor explanations and exercises to students’ interests and progress may not necessarily be better than current education systems, but they are likely to be cheaper and to benefit a few companies like Coursera. With every posting on the forum, every video you watch, every quiz you take, the system becomes ‘smarter’, gradually reducing the economic value of most people who take part in these courses.  Similar trends are taking shape in many middle-class professions such as accounting, medicine and transport.

The resulting political consequences are similarly profound. Democracies can only function in societies with a large middle class.  Large inequalities in a society increase the risk that either the elite financially captures the state or the masses vote populists with self-destructive policies into office.

Lanier does not offer easy ways out.  Banning technological progress tends to be futile. Nor is Lanier a ‘leftie’ advocating large-scale redistribution. He pleads for a ‘humanistic digital economy’ in which technology serves society and the continued existence of a thriving middle class is supported.  In such an economy, information is valued fairly and transactions occur transparently. People would receive small payments (nanopayments) every time their information is used. They would also pay to use information, for instance to use a search engine or create a social media profile.  However, they would be paid a small but fair amount whenever their data are used by companies such as Google or Facebook to improve their algorithms. In such an economy we would, throughout our lives, be financially rewarded by an accumulation of small remunerations.

“By making opportunity more incremental, open and diverse than it was in the Sirenic era, most people ought to find some way to build up material dignity in the course of their lives.  The alternative would have been feeding data into Siren Servers, which lock people in by goading them into free-will-leeching feedback loops so that they become better represented by algorithms.” (p.347)

A re-design of the internet from a one-way network to a two-way network would make this possible.   In a one-way network you can create a link to a website or copy a file, but the original author will not know that you created the link or made the copy unless you inform them.  Lanier’s concept of provenance – the recording of where value originates – is fundamental to such an ethical information economy.  In a two-way network, information flows in both directions, and illegal copying is no longer possible.  Lanier compares it with systems like the Apple App Store or the Amazon e-book store, where you don’t actually buy copies of apps or books, but only the right to use or read them. People could protect their privacy by making the cost of using their personal data prohibitive.
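To make the idea tangible, here is a hypothetical sketch of my own (not Lanier’s design; the class name, the actors and the payment amount are all invented for illustration) of how a two-way link might record provenance, so the original author can see, and be paid for, each use of her data.

```python
from dataclasses import dataclass, field

@dataclass
class TwoWayLink:
    """A link or copy the original author knows about: every use is recorded,
    so provenance is preserved and a nanopayment can flow back."""
    author: str
    content: str
    uses: list = field(default_factory=list)

    def use(self, user: str, purpose: str, nanopayment: float = 0.0001):
        self.uses.append((user, purpose, nanopayment))
        return f"{user} owes {self.author} ${nanopayment:.4f} for {purpose}"

post = TwoWayLink(author="alice", content="a holiday photo")
print(post.use("ExampleSirenServer", "training an ad-targeting model"))
print(len(post.uses))  # alice can see every use of her data -- and price her privacy accordingly
```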

The solution Lanier advocates is optimistic and might be utopian. Yet it is also deeply realistic in its acceptance that people are unlikely to forgo their desire for ‘free’ services anytime soon. Technology companies have become a ‘third force’, next to the state and religion. This book may not provide many answers (“It is too early for me to solve every problem brought up by the approach I’m advocating here”), but it does articulate a desperate need for them.

Web 2.0 tools and how they affect learners, educators and institutions. Some thoughts.

How do Web 2.0 tools affect educational institutions and the educators and learners within them?  Should institutions prepare for a complete overhaul in order to stay relevant?  Or can they integrate Web 2.0 as an extra layer into their current practices?  These are a few of the key questions addressed in Weeks 21 and 22 of H800.
There’s a lot of recent research on the topic, with, among others, two large studies (Redecker, 2009; JISC, 2009) offering a plethora of case studies.  The background reading on the topic is a text by Conole (2011).  Below, I describe some personal conclusions.
First, educators often seem to be the driving force behind integrating Web 2.0 in teaching.  Few institutions seem to have an official policy on the topic, although issues such as privacy, reliability and assessment surely affect the institution as a whole.  So, how could institutions deal with Web 2.0?  From the readings, I would conclude the following:
  • Encourage early adopters, the technology enthusiasts who are willing to invest the time and climb the learning curves to design activities.  Although plenty of cases are available, translating them to a particular lesson context is often time-consuming.
  • Document. Stimulate these early adopters to keep track of their experiences, reflections and decisions, if possible publicly.  Support them to monitor and evaluate the experiences of learners, in order to assess improvements or scaling-up options.  The documentation can prove useful for later in-service training activities.
  • Allow for time for trying out, ‘tinkering’ and experimenting with Web2.0 applications.
  • Think about a policy, or at least some guidelines.  How to deal with the privacy of students when using blogs?  How to assess individual contributions when working on a wiki?  How to avoid time-consuming plowing through forum and blog posts? How to deal with external software that is suddenly unavailable or behind a paywall?

The cases also discuss the implications for learners.  Here, I had the following thoughts:
Web 2.0 tools are often touted as supporting the kind of teaching we currently see as most desirable: collaborative, social, authentic, differentiated, lifelong and life-wide.  With Web 2.0 tools, teachers have an extra battery of options to turn their lessons into student-centered feasts.  However, students may also need to make a mental switch, turning from passive ‘receivers’ into active ‘creators’, and from competitive individuals into sharing collaborators. It’s important to make sure that students are aware of these changing expectations.
Learners can no longer hide from digital technologies, which have become, or are becoming, a necessary life skill.  However, some technologies may pose a steep learning curve for students and get in the way of the topics they are actually supposed to be learning.
So, should educational institutions consider sweeping changes to the way they operate in order to incorporate the Web 2.0 army?  I’m not so sure.  I think Web 2.0 offers a wide range of interesting applications to improve teaching and learning.  It suffices to skim through the case studies to be convinced of that.  But these case studies also show that students still need guidance, assessment and engaging activities in order to learn.  Web 2.0 tools offer a medium, but it is the input from teachers and students that gives them their added value.