Effectiveness of ICT in the classroom: Findings from the IDB study on OLPC Peru


The Edutech Debate and Michael Trucano’s World Bank blog regularly provide excellent background reading on the effectiveness of ICT in education (‘computers in the classroom’).  Discussion surged again with the publication of an Inter-American Development Bank (IDB) evaluation of the One Laptop Per Child (OLPC) project in Peru, which sparked a lengthy but worthwhile discussion on the EduTech website.

The IDB study on the OLPC programme in Peru did not find any tangible effect of the programme on students’ test results (national assessments in maths and language) 15 months after implementation (link to Economist article on the study).

The IDB ran a randomized controlled experiment, sampling five students per grade per school from 320 schools (two-thirds of which were in the treatment group receiving the intervention) at intervals of 3 and 15 months.  The recently published assessment brief covers the 15-month data.

Some people were quick to point to the impact of the programme beyond the immediate effect on test results.  Nicholas Negroponte, founder and chairman of the OLPC non-profit, stressed that the purpose of OLPC was not only to improve classroom learning, but learning in the child’s whole life.  The computers arguably improved reading comprehension, parent involvement, critical and creative thinking, initiative and discovery. The OLPC Peru team also defends the project and points out that attitudes and expectations of students, parents and teachers have changed within 15 months of implementation.  According to project leader Oscar Becerra (in a reply to The Economist and an article in the EduTech Debate), the Peruvian educational system was in such poor shape that improving the quality of its teachers would require 10–15 years. He focuses on the lack of a fertile environment in the Peruvian educational system for introducing the programme.

“It was clear to us the main challenge for our project would not be “teacher training” on how to use computers in the classroom because most of our teachers needed exceedingly much more than ICT literacy courses.”

He argues that many teachers lack basic numeracy and literacy skills. Although the programme design included 40 hours of teacher training, this was clearly insufficient for the majority of teachers.  The IDB analysis showed that almost half the students were prohibited from taking the XOs home and that half of all teachers didn’t use the devices in the classroom. Changes in cognitive abilities take many years to materialize.  Is this a case of too much emphasis on infrastructure and not enough attention to technical and pedagogical support?

The OLPC project was not implemented in isolation.  There were several interventions aimed at improving the pedagogical and institutional framework of the schools, and a large in-service training programme was included in the programme design.  The OLPC part, of course, gets all the media attention.  I’m not sure what percentage of the budget went to infrastructure and what part to capacity development of teachers and educational stakeholders. In the VVOB programme in Cambodia, the infrastructure share is less than 20%, which might even be too high.

Stanford professor Larry Cuban has been a famous sceptic of technology in the classroom for many years.  Cuban’s core point is that school improvement is hard and not fundamentally about technology.  Rather, it is the organization’s skill at defining a shared vision, communicating, collaborating, evaluating and changing that drives effective outcomes.

I do not doubt the valuable role ICT can play in education. I fully believe that technology can improve access to learning and its quality. However, I also believe that ICT in education is often among the least cost-efficient ways to improve education – with some notable exceptions.  Toyama points to the difference between the purchase cost of a laptop and its total operation cost (TOC), which includes maintenance, electricity, software and connectivity.  The TOC is usually 5–10 times the purchase cost, meaning that a 300 USD computer represents an investment of approximately 300 USD per year (assuming a generous 5-year computer lifetime).  This is quite a lot, considering that the Peruvian government spends on average 686 USD per child in primary education and 782 USD in secondary education; in Cambodia it is 54 USD per child.  The question is whether this amount could not be better spent on improvements in teacher education, classroom infrastructure, the curriculum or the assessment structure.
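The cost argument above can be sketched as a quick back-of-envelope calculation, using only the figures quoted in this post (the 5× multiplier is the low end of Toyama’s range; the numbers are illustrative, not from the IDB study):

```python
# Back-of-envelope total operation cost (TOC) arithmetic, using the
# figures quoted in the post above (illustrative only).
PURCHASE_COST_USD = 300   # laptop purchase price cited in the post
TOC_MULTIPLIER = 5        # TOC as a multiple of purchase cost (low end of 5-10x)
LIFETIME_YEARS = 5        # generous assumed laptop lifetime

annual_cost = PURCHASE_COST_USD * TOC_MULTIPLIER / LIFETIME_YEARS
print(f"Annual cost per laptop: {annual_cost:.0f} USD")  # 300 USD

# Average per-pupil public spending figures cited in the post (USD/year).
per_pupil_spending = {
    "Peru (primary)": 686,
    "Peru (secondary)": 782,
    "Cambodia": 54,
}
for system, spend in per_pupil_spending.items():
    share = 100 * annual_cost / spend
    print(f"{system}: laptop TOC is {share:.0f}% of per-pupil spending")
```

Even at the cheapest end of the range, one laptop eats up over 40% of Peru’s per-pupil primary spending, and several times Cambodia’s entire per-pupil budget.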

There is a notable gap between the enthusiasm that policy makers and teacher trainers show for ICT in education and its effective use in classrooms.  Whenever I present the planning of our education programme, the part on computer hardware gets the most attention and questions; the parts on student-centred approaches, low-cost experiments or (printed) posters get far less. Cuban uses the metaphor of a hurricane to describe educational reforms.  The high waves at the surface are the grand policy statements by politicians.  Beneath the surface, the turbulent water resembles the educational technologists who predict and analyse the policy effects.  At the bottom of the sea, however, the water flow is hardly affected by the hurricane.  Similarly, teachers in classrooms soldier on, dealing with invisible barriers and ‘details’ that in practice make all the difference.

Such ‘details’ include poor uptime due to lack of maintenance, low technical skills, a lacking or unstable power supply causing technical defects to adapters and batteries, and a school culture that considers ICT equipment a ‘trophy’ to protect rather than an instrument to use.  Its use for learning is only as good as the teacher in the classroom. In Peru – as in Cambodia – with exceptional teachers it becomes a useful tool, improving dialogic and problem-based learning. With an ordinary teacher, it is just a means of entertainment and reinforces the teacher-driven mode of instruction.

An interesting aspect of the OLPC discussion is the question of how to measure its success.  ‘Believers’ and ‘sceptics’ use different measuring sticks.  The OLPC implementation programmes in Peru and Uruguay also expressed different objectives:

In assessing a program’s effectiveness, it’s important to distinguish between outputs and outcomes, as well as to ensure alignment with measurement and evaluation criteria. Thursday’s discussion pointed out Uruguay’s clear objective of social inclusion, which produced a near 100% primary school penetration rate through a national 1-to-1 program. The Uruguay assessment focused on access, use, and experience, reflecting a focus on social inclusion as an outcome. In the case of the assessment of Peru, math, language, and cognitive test results showed outputs, but no clear connection to Peru’s 2007 stated objectives, which targeted pedagogical training and application. If objectives and outcomes are not clearly aligned with assessment criteria, can “effectiveness” be appropriately measured?

Cuban uses the ‘black box’ metaphor for classroom practice.  Inputs (computers, new curricula, lab materials, pedagogical innovations) are introduced and outputs (learning outcomes, usage data) are collected, but with little information on what goes on in the classroom.  Regular (and preferably unannounced) lesson observations and many interviews are probably the only way to get insight into what goes on in the ‘black box’.  However, they don’t yield the ‘hard’ data that reporting with SMART indicators requires.

For me, the OLPC approach illustrates the failure in practice of the concept of ‘minimally invasive education’, popularized by Sugata Mitra’s ‘Hole in the Wall’ project in India.  This approach claims that children’s ‘natural inquisitiveness’, combined with a computer, is sufficient to get them learning. Audrey Watters hits the mark in stating that ‘there remains a strange tension between dropping in a Western technological “solution” and insisting doing so is “non-invasive”’.

Anyway, the debate on the project is excellent and inspiring, perhaps more so than the evaluation study itself.  Programme leaders joined in the discussion.  It illustrates that success and failure in such development projects are up for debate – a debate that will inspire future project formulations.  Feel free to add your thoughts and suggestions in the comments!

#H807 The Final Verdict

In a few weeks, I resume my MAODE studies at the OU with the module H810, Accessible online learning: supporting disabled students. I submitted my H807 EMA hours before boarding the plane to Belgium, and I didn’t get around to blogging during my holiday.  As the dust settles – but without the EMA scores in yet – I want to write a few final reflections on H807.

I would summarize my overall feelings by saying that the course was better than I expected, but not as good as the previous course, H800. My expectations were lower because the course was quite old – this being its seventh and last presentation – and because there would be some overlap with themes discussed in H800 (assessment, feedback, Web 2.0).  Both concerns turned out to be justified, though less of a problem than I had feared.

Yes, few papers in the course materials are more recent than 2008, but the TMAs and EMA require you to complement the core readings with papers and other resources you find yourself, allowing plenty of scope to include more recent materials. Seminal papers on a topic are often not very recent either, for example the papers by Nicol (2006, 2007) and Black and Wiliam (2001) on assessment.  The same goes for the alleged overlap between H807 and H800.  You have some freedom in choosing the topics you elaborate on in your writings, leaving it up to you to what degree you cover the same ground.  Tutors and assessment software also check your previously submitted papers.

Where, then, are the differences with the – excellently evaluated – H800 course?  First and foremost, the course design does not reach the same quality.  A well-designed course text should read as if the course instructor is sitting next to you; it should motivate you to delve into the readings, provide a good introduction and pose thoughtful questions.  It should also give the course coherence, explaining why a certain theme appears at a particular point and linking course themes with each other.  The design should also introduce various media: not only academic papers, but also podcasts, video lectures, blog posts and newspaper articles.  All this was present in H807, but to a lesser degree.

Second, there are no tutor-led Blackboard Collaborate (Elluminate) tutorials in H807.  I found the tutor-led tutorials in H800 valuable for getting a better understanding of course concepts (e.g. Sfard’s metaphors of learning), but they also helped create a supportive and enjoyable atmosphere among learners.  After a few tutor-led sessions, learners felt comfortable setting up their own.  In H807, learners were encouraged to hold synchronous sessions, but this didn’t take off.  Starting with one or more tutor-led sessions could have helped, although the composition of the particular tutor group likely played a part as well.

Regarding assessment, both courses consisted of some challenging and some less interesting assignments.  However, in H807 course activities continued until two weeks before the deadline, whereas H800 left quite a few weeks free, allowing more time for exploration and reflection.

Some of the differences are in small details.  The weekly welcome message, for example, was nicely updated throughout H800, whereas in H807 the message from week 1 stayed unchanged until the end of the course. Also, the fact that modules can be taken in any order means that some items (introducing Blackboard Collaborate, introducing the library, etc.) are repeated in every course.

Anyway, some items were excellent, such as the parts on non-verbal communication, e-tivities and elements of successful feedback.  I’m now looking forward to starting H810, a course that has collected excellent reviews, at least from those learners I know – OU end-of-course evaluations are unfortunately not made public.

References:

Nicol, D. (2006) ‘Assessment for learner self-regulation: enhancing the first year experience using learning technologies’, paper presented at the 10th International Computer Assisted Assessment Conference, 4–5 July 2006, pp. 329–340 [online]. Available from: https://dspace.lboro.ac.uk/dspace-jspui/handle/2134/4413.

Nicol, D. (2007) ‘Principles of good assessment and feedback: theory and practice’, paper from the REAP International Online Conference on Assessment Design for Learner Responsibility, 29–31 May 2007 [online]. Available from: http://www.reap.ac.uk/reap/public/papers//Principles_of_good_assessment_and_feedback.pdf.

Black, P. and Wiliam, D. (2001) Inside the Black Box: Raising Standards through Classroom Assessment, British Educational Research Association [online]. Available from: http://weaeducation.typepad.co.uk/files/blackbox-1.pdf.