2012, Year of the MOOC?

In various places (such as the New York Times) 2012 has been heralded as the year of the Massive Open Online Course, or MOOC.  Yet MOOCs have been around since 2008 or so, developed by researchers such as Stephen Downes, Dave Cormier, George Siemens, Jim Groom and others.

“In the summer of 2008 I invited George Siemens and Stephen Downes to come to edtechtalk and tell us about the new course they were teaching. They had 25 people registered (paid), at the university of Manitoba, but they had opened the class for online registration to whomever wanted to come along. Hundreds (and then a couple thousand) people took them up on it. We started talking about what it meant to have lots and lots of people learning together… somewhere in there, i called them a massive open online course… for which i have been often chastised :)” (from Dave Cormier’s blog)

They are based on a connectivist pedagogy, characterised by distributed content, network formation, the creation of artefacts outside course-related structures, and course boundaries that are largely superfluous.  MOOCs based on these principles are often dubbed cMOOCs, to distinguish them from their less salubrious nephews, the xMOOCs.

The main change in 2012 has been the entry of Ivy League institutions into the MOOC fray.  As OU vice-chancellor Martin Bean notes, the arrival of great brands and lots of (venture capital) money has vastly increased the forces of disruption.  The entrance of Silicon Valley into MOOCs has been spearheaded by Coursera, Udacity (both offshoots from an open Artificial Intelligence course at Stanford University) and edX (grown from MITx after investment and participation from Harvard University and UC Berkeley).  Online courses from these providers routinely attract tens of thousands of people (although drop-out rates are stratospheric).  Mass media have picked up the phenomenon (New York Times, The Economist, Financial Times).  Coursera has been gradually expanding its offer to non-US universities and currently offers more than 200 courses from 62 universities in 14 countries, including France, The Netherlands, Hong Kong and Italy (no, not from Belgium yet, no surprises there).  It’s interesting to note that these institutions have largely missed out on the evolution to online learning so far, and their Silicon Valley-centredness and lack of regard for 40 years of research in distance and online learning have been derided by researchers.  In the UK the Open University (OU) has recently announced its own MOOC platform, Futurelearn – here is a worthwhile reflection from OU researcher Doug Clow – and in March a ‘regular’ online course (H817) will be offered partly as a free open course.

Dubbing 2012 the ‘year of the MOOC’ may seem condescending to institutions and researchers who have been active on the topic for years.  However, there’s no denying the worldwide appeal of the Ivy League institutions and their disruptive power.  Many challenges remain, in terms of business models, learner interaction, accreditation and quality.  It will be interesting to watch whether this disruptive innovation, like radio and television before it, will evolve from an open, bottom-up structure full of creativity towards a commercialised and closed system, as described so beautifully by Tim Wu in ‘The Master Switch’ (blog post on the book).

#H809 Can Technology ‘Improve’ Learning? And can we find out?

In education and learning we cannot isolate our research objects from outside influences, unlike in the positive sciences.  In a physics experiment we would carefully select the variables we want to measure (dependent variables) and the variables that we believe could influence those (independent variables).  In education this is not possible.  Even in Randomized Controlled Trials (RCTs), put forward by researchers such as Duflo and Banerjee (see my post that discusses their wonderful book ‘Poor Economics’) as a superior way to investigate policy effects, we cannot, in my opinion, fully exclude context.

This is why, according to Diana Laurillard, many studies talk about the ‘potential’ of technology in learning: it conveniently avoids dealing with the messiness of the context.  Other studies present positive results that were obtained in favourable external circumstances.  Laurillard argues that the question whether technology improves education is senseless, because the answer depends on so many factors:

There is no way past this impasse. The only sensible answer to the question is ‘it depends’, just as it would be for any X in the general form ‘do X’s improve learning?’. Try substituting teachers, books, schools, universities, examinations, ministers of education – any aspect of education whatever, in order to demonstrate the absurdity of the question. (Laurillard, 1997)

In H810 we discussed theories of institutional change and authors such as Douglass North and Ozcan Konur, who highlighted the importance of formal rules, informal constraints and enforcement characteristics in explaining policy effects in education.  Laurillard talks about ‘external layers of influence’.  A first layer surrounding student and teacher (student motivation, assessment characteristics, perceptions, available hardware and software, student prior knowledge, teacher motivation to use technology etc.) lies within the sphere of influence of student and teacher.  Wider layers (organisational and institutional policies, the culture of education in society, perceived social mobility…) are much harder to influence directly.

That doesn’t mean she believes educational research is impossible.  She dismisses the ‘cottage industry’ model of education (See this article from Sir John Daniel on the topic), in which education is seen as an ‘art’, best left to the skills of the teacher as artist.  Rather, she argues for a change in direction of educational research.

Laurillard dismisses much educational research as ‘replications’ rather than ‘findings’, a statement that echoes Clayton Christensen’s plea to focus on deductive, predictive research rather than descriptive, correlational studies.  He argues for focusing less on detecting correlations and more on theory formation and on categorising the circumstances in which individual learners can benefit from certain educational interventions.  A body of knowledge advances by testing hypotheses derived from theories.  To end with a quote from the great Richard Feynman (courtesy of the fantastic ‘Starts with a Bang‘ blog):

“We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work.” -Richard Feynman

References

Konur, O. (2006) ‘Teaching disabled students in higher education’, Teaching in Higher Education, 11(3), pp. 351–363.
Laurillard, D. (1997) ‘How Can Learning Technologies Improve Learning?’, Law Technology Journal, 3(2).
North, D.C. (1994) ‘Institutional Change: A Framework of Analysis’, Economic History, EconWPA.

Time To Listen (3): Donor Policies and Agendas

This is the third post on the Time to Listen report, which aims to give a voice to those on the receiving end of development assistance.  Earlier posts can be found here and here.

Donors usually have external interests and agendas that influence international aid.  These agendas are not always shared by the recipients and can have negative effects on their societies.

A grandmother who was caring for her orphaned grandchildren explained that a decision to provide aid only to people who tested positive for HIV/AIDS meant she got food to feed only one granddaughter, who was infected, while her other grandchildren were also hungry. She was amazed that donors set a political policy that forced her to choose among her hungry grandchildren. Others noted that the focus on humanitarian aid only for those affected by HIV/AIDs left able-bodied children and children who had living parents without any support. This neglect of healthy children and families was, they felt, short-sighted because it could undermine the country’s future development.

“It appears there is a need to be in a war situation before we can get assistance. We have to risk our lives in order to get development aid.” – Community members, Philippines

Procedures intended to make aid more transparent and consistent often have the side effect of being complicated, rigid and counter-productive, reducing efficiency and effectiveness and wasting both time and money.

Western concepts of vulnerability and worthiness do not always match local concepts. For minority ethnic groups in Cambodia, who stated that they believe everyone is equal and deserves the same aid, foreign concepts of vulnerability clashed with local concepts of fairness. “They come and ask about our needs and then come with district officials to distribute…. We don’t agree with the selection. Poverty assessment is based on whether or not the family owns a motorbike or a wooden house (richer) or no motorbike and bamboo house (poorer).” People were angered by the selection criteria and stormed out of the community meeting. (Listening Project Report, Cambodia).

There is wide agreement that outside aid providers should work through existing institutions where they are strong and, where they are weak, support them to gain experience and resources for bettering their societies.  Receivers and providers of aid together recognize that international donors are only temporary actors in recipient societies and that governments and local organizations know their contexts better than outsiders do.  However, local institutions may have their own motives for selecting activities or target groups.

“If you are from the opposite party, you will get no aid to develop your area. And the ruling party will accuse the other parties of not helping people. Aid is manipulated for political favor and to disfavor other political parties. Foreign assistance is used to show that the ruling party is generous.” – Local NGO staff, Cambodia

Corruption is a daily concern of many involved in development work.  Beyond the unambiguous manifestations of corruption through theft, diversion and unfair distribution, people often raise three other aspects of international assistance that they see as “corrupting influences”: extravagant spending or needless waste by international aid agencies and their staff, the delivery of too much aid (too quickly), and the absence of serious or effective accountability in aid efforts.

When sizeable resources come into otherwise poor communities with the message that these must be spent quickly to comply with donor guidelines, it is not surprising that this prompts abuse. A number of people are surprised that international aid providers continue to make this mistake, which leads, they say, to misuse.

Some principles should guide the development of an alternative funding system: “Enough, but not too much”; “Available, but not necessarily spent”; “Steady, but no burn-rate requirement”—such funding principles would connect resource flows to a mutually developed strategy and to a given context.

Donors need to be honest and forthright about what they really mean by ‘participation.’ Is it simply a consultation with communities to get approval or support for a project that has already been determined, or is it really about deciding jointly and working together?

An aid worker in Senegal admitted, “It is true that an obstacle to getting real involvement of local populations may be the cost and time commitment that it entails. With the emphasis on speed and efficiency, there is little time for true community involvement.”

“Not until I spent three weeks staying in a village did I feel like I was getting truthful information about what the community really needed and wanted. Only after they knew me and trusted me did this frank exchange become possible.” (Aid worker, Lebanon)

“Presence takes time and money. Presence requires openness and humility. Presence involves prioritizing time and resources and delineating roles and responsibilities between levels (outsider, insider, stakeholders of various sorts).” (International Aid Worker, Denmark)

Opinions on paying people to participate are divided.  Many practitioners observe that such payments can undermine the principle of participation, influence the quality of relationships, raise expectations, and create perverse incentives for people to “participate” in aid processes. Some aid providers and recipients believe that paying people to participate erodes the traditions of mutual self-help in communities. Others argue that aid agencies should not expect local people to contribute their time, input, and efforts without being compensated. Some feel that giving people money or other forms of payment constitutes a gesture of appreciation and respect for people’s effort to spend time away from their regular duties. Others feel that payments for involvement feed into a monetization of what should be community-based or volunteer activities.

Local organizations report that when wealthier aid providers pay higher participation fees, recipients will sometimes refuse to engage in activities that do not pay as well. Local NGOs can rarely compete with international agencies’ larger budgets and find it challenging to work with the “professional workshop-goers” that this precedent has created.  The latter is certainly relevant in donor-darling Cambodia, where the government has issued guidelines on per diem fees, curbing some of the misuse of ‘per diem collection’.

Time To Listen (2): Introduction of Business Principles in Aid

This is the second post on the Time to Listen report, a book focusing on giving a voice to those on the receiving end of development aid.  You can find the first post here.

The adoption of business principles and practices is a strong trend among aid providers, illustrated by the focus on ‘value for money’ and ‘evidence-based’ programmes.  Figures such as the number of people reached, or the amount invested per child to achieve a certain outcome, are important criteria for success.  The trend is primarily motivated by the aid providers’ desire to be more accountable, both for funds spent and for results achieved.

However, this adoption has been markedly selective. While corporations depend for survival on the satisfaction of the end users of their products and services, aid agencies depend on donors, to whom they “sell” their projects and programs to provide aid to recipients.  The main concern of aid providers is thus the satisfaction of donors, rather than of the poor receiving the aid.

Aid agency field directors say that they are promoted and respected if they “grow” their portfolios or budget every year and gain little recognition when they manage to save money for their agencies. Aid donors urge implementing agencies to monitor and maintain the “burn rate” of funds to keep on schedule. When aid budgets are under-spent, donors consider this practice “bad management” and often cut future funding. By contrast, in many businesses, cost savings can be rewarded by bonuses.

“We need strategic, long-term partnerships with donors. The impact doesn’t come overnight. We need to know that we can rely on their support not only tomorrow. If they want to make a change that lasts, they need to start taking longer breaths.” (Coordinator of local NGO in Lebanon)

“Donors only look at the ratio of expenditure to number of beneficiaries, so several of our proposals were not funded by donors. I suggest that donors should adjust selection criteria … donor interests and needs of people do not always align…. Even if the number of people is small, they still need aid as they are very poor.” (Secretary of a community council, Cambodia)

Attitudes and actions of aid recipients are affected by a focus on delivery. To many, this is one of the most disturbing results of the delivery system.  Even though most are clear that they do not want to need aid, they tell how—as aid recipients—they develop skills focused on getting the most aid they can, rather than on developing without assistance. Entrepreneurs become experts in proposal writing, not in running businesses; others become good at manipulating the system by appearing to meet the poverty or other criteria they know will “qualify” them for aid.

Many feel that the delivery system objectifies them. Some feel that international actors use their poverty to raise funds, and many say that more precise policies and standardized procedures among aid providers have reduced the space for them, as recipients, to be involved in considering options, weighing alternatives, and developing strategies for their own development.

People in recipient communities in every location said that, instead of being in the business of delivery, aid providers should be “present.” Many ascribe great and positive changes to the single idea of presence, noting that if “donors spent time with us,” they would “understand our realities,” “provide appropriate things,” “reduce corruption,” and be able to develop respectful, trusting relationships. Listening Teams were struck by the universal and repeated call for aid providers to be “present.”

International and local staff of assistance agencies (and their bosses!) frequently say that they “do not have time” to simply listen to and talk with people because their agencies expect them to focus on “project activities,” programmed around delivering aid on time and on budget.

“Local NGO staff suggested that it is important not to come into a community offering goods but to spend significant time building a relationship.” (Listening Project Report, Cambodia)

This is one of the decisive advantages of having one’s office located with the partner (in my case a teacher training institute).  Contact with the teacher trainers is frequent and gradually, I believe, we have managed to build trust and a respectful relationship, in which they can express their concerns and frustrations, and we can communicate our limitations and donor requirements.  I do agree, though, that building such a relationship takes time, requires knowledge of the local language and comes under pressure from the need for reporting and (increasingly) communicating (with the donor public).

Time To Listen (1): Hearing the Voices from People Receiving International Aid

I recently spent time reading Time to Listen: Hearing People on the Receiving End of International Aid, by Mary B. Anderson, Dayna Brown and Isabella Jean, published by CDA Collaborative Learning Projects, a non-profit organisation based in Cambridge, Massachusetts.  It is a distillation of 6,000 interviews carried out from 2005 to 2009 with people who have received or been involved in aid – individuals, local NGOs, international NGOs, bilateral aid agencies etc.

The methodology of conducting the interviews and distilling the findings is valuable in itself and will be the subject of another post.  Here I’ve selected the main findings and quotes I found most relevant.  The book is certainly worth reading in full, though, and can be freely downloaded.  Many of the findings are not new to people active in development cooperation, but it’s convincing, and often uncomfortable, to read them in the words of people on the receiving end in many different countries.

Most people feel that international aid is a good thing. They are glad it exists and want it to continue. Many tell positive stories about specific projects, individual staff, or special planning or decision making processes that they credit with achieving what they hoped for. Some of the positive impacts are lasting, such as when a road improves access to a market or women develop skills that they feel improve their families’ lives.

People often distinguish between the beneficial short-term impacts of certain projects and the cumulative negative long-term impact of aid.  This focus on the cumulative impact of aid on poor people is really valuable, because it contrasts with most evaluations, which seek feedback on the results of a specific programme.

The interviews lay bare some of the weaknesses and perverse incentives that aid generates, often remarkably consistent among countries.  Everywhere people described markedly similar experiences with the processes of assistance and explained how these processes undermined the very goals of the assistance.

A main side-effect of much aid is that it increases dependency and powerlessness.  The “message” of aid extends beyond ‘we care’ to ‘you don’t have to worry, we will take care’.

“By giving out so easily, you are turning them into beggars. Some villages received too much to stop and think of the value of all the things they have been given.” (Policeman, Thailand)

“It’s important not to get things for free so that people are not programmed to get aid. If you give it for free, you take away the sense of responsibility they had.”  (Karen leader, Thai-Burma border)

“One truth about external aid that occasionally presents itself is a double dependency … whereas grassroots people can develop a dependency on NGOs and other supportive entities, the NGOs in turn become dependent on grassroots leaders and groups. They need them to launch their projects, bring out the people, generate enthusiasm in the participants, and finally, to demonstrate to supervisors, donors, and visitors their achievements, or at least that the projects are underway. Their positions, salaries, and sense of efficiency are all linked to the cooperation and conformity of the aid recipients.” (Listening Project Report, Ecuador).

A recurrent theme in the book is that the selection of beneficiaries is often not transparent, or is perceived as unfair: criteria for disadvantaged groups are arbitrary, and the selection of target areas is based on political criteria or external priorities.  In all but one country, people reported that international aid had over time introduced or reinforced tensions among groups and that, cumulatively, it had increased the potential for violence and/or fundamental divisions within their societies.

“I feel jealous. I don’t know why NGOs help [the refugee village] and not our village. The refugee village has electricity; the road is better there, and here it is muddy. It makes me feel they are better than us.”  (A male in a village next to refugee returnees, Cambodia)

#H809 Can Computers Overcome Tensions between Qualitative and Quantitative Research Approaches?

The second paper in H809, by Wegerif and Mercer, uses computer-based language analysis as an opportunity to discuss qualitative and quantitative approaches in educational research.  The paper dates from 1997 and, like the Hiltz and Meinke paper, its main objective seems to be to highlight the role novel computer-based technologies can play in research.

Quantitative data analysis enables the testing of research hypotheses, the creation of evidence and the making of generalisations.  This helps to build a body of knowledge and to make predictions for new situations.  Qualitative approaches allow a much finer level of analysis and more attention to the particular context.

The time required for analysis and the space required for presentation mean that there is a de facto relationship between degree of abstraction useful in the data and the sample size of a study or the degree of generalisation. More concrete data such as video-recordings of events cannot be used to generalise across a range of events without abstracting and focusing on some key features from each event. (276)

Increasing computer power allows much larger amounts of data to be analysed in more detail and reduces the required level of abstraction in the categorisation.  Large amounts of data can be analysed with more sensitivity to content and context.

We believe that the incorporation of computer-based methods into the study of talk offers a way of combining the strengths of quantitative and qualitative methods of discourse analysis while overcoming some of their main weaknesses. (271)

While I agree that computer-based discourse analysis may overcome some weaknesses of both approaches, language may still be difficult to capture quantitatively because:

  • nonverbal language plays an important part in communication
  • meanings may be ambiguous
  • meanings may change over time or vary among persons and among contexts.

I’m not sure it’s helpful to analyse language with the same tools and rigour as positive sciences, as context is much more prevalent in language than in positive sciences.

More computer power doesn’t mean that the subjective role of the researcher can be discarded.  Researchers may have various motives for their research, and as a researcher you always need to make interpretative decisions.  Even with computer-based text analysis, the researcher still decides which categories to use, which hypotheses to test and which excerpts to publish.  I believe it’s best to document these decisions as fully and transparently as possible.  For example, researchers could discuss the limitations and weaknesses of their research or suggest alternative explanations (e.g. perhaps learners knew each other better the second time).  Making the data publicly available would also help, so that other researchers can scrutinize the results (although few may have the time and incentive to do so).
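As a small illustration of what such computer-based text analysis can look like in practice, here is a minimal sketch that counts occurrences of reasoning markers (such as ‘because’) in talk transcripts, in the spirit of the concordance-style counts Wegerif and Mercer describe.  The transcripts and the marker list below are invented for this sketch; any real study would need theoretically motivated categories.

```python
from collections import Counter
import re

# Illustrative transcript snippets (invented for this sketch).
transcripts = {
    "group_a": "I think it goes here because the magnet pulls it. Why? Because it is metal.",
    "group_b": "Put it there. No. Yes. Do it now.",
}

# Markers often associated with 'exploratory talk' (an assumption for this sketch).
markers = ["because", "i think", "why"]

def marker_counts(text: str) -> Counter:
    """Count whole-word occurrences of each marker in a lower-cased transcript."""
    lowered = text.lower()
    return Counter(
        {m: len(re.findall(r"\b" + re.escape(m) + r"\b", lowered)) for m in markers}
    )

for group, text in transcripts.items():
    counts = marker_counts(text)
    print(group, dict(counts), "total:", sum(counts.values()))
```

Even in this toy example the interpretative decisions are visible: the choice of markers, and the decision to treat raw counts as evidence of ‘exploratory talk’, are exactly the kind of choices a researcher should document.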

Reference:

Wegerif, R. and Mercer, N. (1997) ‘Using Computer-based Text Analysis to Integrate Qualitative and Quantitative Methods in Research on Collaborative Learning’, Language and Education, 11(4), pp. 271–286.

#H809 Methodological Reflections on the Hiltz and Meinke Paper

The first paper in H809 is an oldie, written by Starr Roxanne Hiltz and Robert Meinke and published in 1989.  It compares learning outcomes between online and face-to-face delivery in a few courses.

Research Questions & Design
The article seeks to find out whether a virtual classroom (VC) implementation produces different learning outcomes than a traditional face-to-face (F2F) approach.  Secondly, it seeks to identify variables (student, instructor and course characteristics) associated with these outcomes.  The research uses quantitative methods, drawing on pre- and post-course survey data, complemented with evaluation reports from the course instructors.
 
Concepts
The research aims to relate the mode of delivery to learning outcomes, measured by data such as SAT scores.  It takes a behavioural view on learning.  Alternatively, the research could have focused on the degree of understanding or the development of ‘soft skills’.
 
Limitations of the research:
1. Distribution of students between the groups (VC vs F2F) was done through self-selection (a quasi-experimental approach).  Student characteristics may thus not be similar: perhaps more disciplined or motivated students chose the VC approach.
 
2. The use of self-reporting pre- and post-course surveys may be prone to response bias. Responses may have been skewed by a desire to please the researchers. As students were asked to compare VC experiences with previous F2F experiences, they needed to rely on (distorting) memory. 
 
3. The scope of the research was limited to two institutions and a small student population.  Not surprisingly, few results were statistically significant: “In many cases, results of quantitative analysis are inconclusive in determining which was better, the VC approach or the F2F approach.  The overall answer is: It depends.”  Setting up methodologically sound quantitative research designs in a ‘real’ educational setting is challenging, as there are so many environmental variables that may influence the outcomes and which, in an ideal setting, should be kept constant in order to have conclusive results for the dependent variable.
 
4. The researchers mention implementation problems, such as resistance by faculty members.  Unfortunately, they don’t elaborate on this.
 
5. The same teacher, text and other printed materials were used in both modes.  This seems like an objective way to compare two modes, but it may not be. The teacher may have been less familiar with online delivery or failed to adapt his/her mode of instruction.  Texts and other printed materials may be suitable for F2F delivery, but online delivery calls for different course designs (See the work of Mayer and Clark).  For example, online delivery requires short chunks of text for online reading, proximity of a graph with the explanation of this graph and removal of redundancies in information.
 
6. The research focuses on the comparison of delivery modes (VC vs. F2F).  However, in their discussion on collaborative learning, the authors seem to suggest that it is mainly the selection of instructional strategies that counts, in particular the inclusion of collaborative learning activities like seminar-style presentations and discussions. 
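The small-sample problem in point 3 can be made concrete with a quick calculation.  The sketch below computes Welch’s t statistic for two invented sets of exam scores (the data are illustrative assumptions, not figures from the paper): with only five students per group, even a visible difference in means falls well short of the roughly |t| > 2 needed for significance at the 5% level.

```python
import math

def welch_t(sample1, sample2):
    """Welch's t statistic for two independent samples with unequal variances."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)  # sample variance
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Invented exam scores for illustration (not from the paper).
vc_scores  = [72, 75, 78, 74, 71]   # virtual classroom group
f2f_scores = [70, 74, 76, 69, 73]   # face-to-face group

t = welch_t(vc_scores, f2f_scores)
print(f"t = {t:.2f}")  # well below the ~2.0 needed for significance at the 5% level
```

Pushing the statistic above the threshold would require either a much larger difference between the groups or considerably larger samples, which is exactly why so many of the paper’s comparisons come out inconclusive.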
 
Ethics
The self-selection of the samples is a weakness in the study.  Random assignment would arguably provide a better basis for comparing the learning outcomes of two delivery modes.  However, assigning students to a delivery mode which you may suspect will put them at a disadvantage, and for which they have paid good money, raises ethical questions.  Providing the courses free of charge to students willing to take part in the study could be an option, although this may in turn affect the research: students may behave differently in a course for which they have paid.
 
Findings
The study found little evidence of statistically significant differences in learning outcomes between the two modes of delivery.  The pre- and post-course surveys did show some significant correlations for subjective assessments such as interest, in both directions: for a mathematics course the online version generated higher interest, whereas for an introductory sociology course the result was the opposite.  The authors suggest that this may be related to the sociology cohort being an academically weak group, as illustrated by their SAT scores.
 
What counts as evidence in the paper?
The researchers look for statistically significant correlations.  I believe such a correlation gives more support to a claim, by indicating its strength and reliability.  However, the claim is limited to the particular circumstances in which the research took place (characteristics of students, teachers, institutions, courses…) and cannot be extended to other circumstances without insight into the nature of those circumstances and their causal relation with the learning outcomes.  In what circumstances do students achieve better learning outcomes in an online course?  For what types of courses does online learning offer a better learning experience?  The authors do discuss these circumstances, but base their discussion mainly on their personal experiences as instructors rather than on statistical tests.
 
A next step in the research could be to look for anomalies in the data: students, courses and implementation strategies that contradict the hypotheses made, for example the hypothesis that online learning is beneficial for more mature learners, or the hypothesis that online learning is less suitable for broad, introductory courses that touch upon many topics.
 
The research could also feed into a meta-analysis, comparing its claims with other studies and trying to distil findings based on a more diverse set of circumstances.