Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari, a professor of history at the Hebrew University of Jerusalem, is a thought-provoking book that builds on his previous book, Sapiens. In Sapiens, Harari described how humans have been successful through our ability to cooperate in large groups, helped by fictional stories like religions and ideologies. As a result, the challenges of famine, plague and war have largely been reined in. In 2020, more people will die from obesity than from Covid-19. For most people, McDonald’s and Coca-Cola pose a far greater threat to their health than any virus.
Artificial intelligence, made possible through breakthroughs in biology and information technology, is turning our focus to the challenges of achieving bliss, immortality and divinity. The title Homo Deus refers to humankind taking up the powers to create humans and animals according to our wishes, traditionally the preserve of gods.
This has profound implications for our lives. Our humanistic world view is giving way to an algorithmic world view. Humanism says that human feelings and our free will are the highest authority in the world. However, stories about free will are likely to be increasingly challenged, because human feelings are just biochemical algorithms shaped by millions of years of evolution. As we gain more understanding of these algorithms, external systems will come to understand people much better than they understand themselves.
This integration of computer science and the life sciences leads to the uncoupling of intelligence and consciousness. They are coupled in humans, but as we develop artificial cognitive and emotional intelligence, the two will be increasingly separated. We are not going to build machines any time soon that have feelings like ours: that’s consciousness. But we have already built biochemical sensors, machines and data-processing networks that can record our feelings better than we know them ourselves: that’s emotional intelligence.
Humans can’t handle the massive amounts of data generated by biochemical sensors. So gradually, more and more decisions and responsibilities will be transferred to machines.
Harari calls this trust in the power of data and algorithms dataism (after theism and humanism). Its followers put their faith in information as the highest source of value. As is often the case, medicine is a frontrunner. Progress in genetics and biotech will mean that decisions about our health are increasingly made by algorithms. An early example is Angelina Jolie’s decision to undergo a double mastectomy after a genetic test revealed a high-risk BRCA1 mutation.
However, we are likely to see a similar shift in authority in other fields. For example, in online learning, data are collected on how long each learner spends on each page, which exercises are completed successfully, and which cause problems. Through learning analytics, we can assess very accurately which students are at risk of dropping out and offer them additional support. In the next step, which is already possible, online learning will be connected to students’ senses, so instructional designers can see where students are really interested and where their attention slips. In the near future, their learning will be connected to internal sensors measuring blood pressure, eye movements, heart rate and so on. As a result, a learning provider will be able to see exactly where a student got excited, angry or bored. At the end of a course, the course provider will know the student much better than the student knows him- or herself. Companies will be able to measure the emotional impact of everything you read or watch. They will know exactly who you are and how to press your emotional buttons. For now, a teacher or professor still makes the decision to pass or fail the student, but for how much longer? If algorithms can gain much better insight into what a student has done and learned, why would you still need a teacher? Authority in education will shift from individual humans to networked algorithms.
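The first step described above, flagging at-risk students from behavioural data, can be sketched in a few lines. This is a hypothetical illustration: the feature names, weights and threshold are invented for the example, and real learning-analytics systems use trained statistical models on far richer data.

```python
# Hypothetical sketch of dropout-risk flagging from clickstream features.
# Weights and cut-offs are illustrative, not taken from any real system.

def dropout_risk(avg_minutes_per_page, exercise_success_rate, days_since_last_login):
    """Return a risk score between 0 and 1; higher means more likely to drop out."""
    score = 0.0
    if avg_minutes_per_page < 1.0:       # skimming pages suggests disengagement
        score += 0.3
    if exercise_success_rate < 0.5:      # failing most exercises: struggling
        score += 0.4
    if days_since_last_login > 7:        # gone quiet for over a week
        score += 0.3
    return score

def flag_at_risk(students, threshold=0.5):
    """Return the ids of students whose risk score meets the threshold."""
    return [student_id for student_id, features in students.items()
            if dropout_risk(*features) >= threshold]

students = {
    "s1": (0.5, 0.4, 10),   # skims, struggles, absent: high risk
    "s2": (3.0, 0.9, 1),    # engaged and succeeding: low risk
}
print(flag_at_risk(students))  # ['s1']
```

A real system would replace the hand-set weights with a model fitted to historical outcomes, but the shape of the decision is the same: behavioural data in, a risk judgement out, with no teacher in the loop.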
However, access to these technologies is bound to be unequal. There will be a small super-elite with access to the master algorithms and the corporate profits. The rest of humanity will simply be tools in their vast schemes. The result might be that, in the twenty-first century, most men and women lose their economic value. Many may find themselves in a new class of useless people. We will have less and less need for people to work in the fields, in factories and offices, or to fight in wars. People will struggle to find purpose in life as work, parenting and entertainment are regulated by AI. Will people be happy, and how will they find purpose in life? Might drugs, entertainment and computer games be used to keep people in eternal bliss? Chris Arnade’s Dignity offers a glimpse into what it means for communities to lose the sense of purpose that was traditionally provided by work, church and family. This rising socio-economic inequality will gradually translate into biological inequality, as the class of super-rich people will have the means to upgrade themselves via genetic engineering and biological implants. This evolution risks making inequality within and between societies permanent, as the traditional ways of bridging the gap (cheap labour, education) risk becoming less relevant.
Does it necessarily follow that AI will take over? Can we stop this evolution?
Firstly, healing and safety are the initial justification for every upgrade. You can’t be against being healthy or safe, if not for yourself, then for your loved ones.
Find some professors experimenting with genetic engineering or brain–computer interfaces and ask them why they are engaged in such research. Likely they would reply that they are doing it to cure disease. ‘With the help of genetic engineering,’ they would explain, ‘we could defeat cancer. And if we could connect brains and computers directly, we could cure schizophrenia.’ Maybe, but it will surely not end there.
Secondly, in the days of European imperialism, conquistadors and merchants bought entire islands and countries in exchange for some coloured beads. As observers of the tech industry like Jaron Lanier have been pointing out for many years, in the twenty-first century our personal data is probably the most valuable resource most humans have to offer, and we are giving it to the tech giants in exchange for email services and funny cat videos. Most people have no clue of the value of their data and sleepwalk into the era of dataism.
Thirdly, most important evolutions in our society are not the result of elections. Nobody voted on the structure of the internet. Similarly, decisions about AI are made by a small group of people. Technology is not deterministic: technological inventions can be used to build a democratic, communist or fascist system, but they are unlikely to be stopped.
What are some of the implications of Homo deus for our education systems?
- Inequality is likely to rise. Education for all is a critical counterweight to develop a well-informed and critical mass of citizens that can make sense of the evolutions in our society.
- ICT and biology are merging. STEM subjects will rise in importance. Scientific literacy and basic programming skills are crucial to understand our society and stay relevant.
- The pace of change is accelerating. People will need to be trained and re-trained regularly, cope with uncertainty, and have the mental flexibility to re-invent themselves.
- Many jobs will disappear and might be replaced by new ones. We mustn’t be complacent and assume that, because previous revolutions did not lead to massive job losses, it will be the same now. Education systems should regularly analyze how our society is evolving and be much bolder in designing curricula.
- The separation between school life and work life is blurring. It makes increasingly little sense to divide people’s lives into a school phase followed by a work phase. It makes much more sense to have brief periods of intensive learning, followed by internships and jobs, followed by sabbaticals and new periods of learning.