#H809 Research on MOOCs

credit: Freedigitalphotos

Week 12 in the H809 course, and MOOCs – the official educational buzzword of 2012 – could hardly be left out.  The focus in this course is not so much on what MOOCs are, their history, or the different types with their various underlying pedagogies and ideologies.  I have blogged about MOOCs before, as a participant in LAK11, a connectivist MOOC on learning analytics.  In H809 the focus lies on issues such as:

  • What kind of information and research is available on MOOCs?
  • What kind of MOOC research would be interesting to do?
  • What are benefits and limitations of the type of information on MOOCs that is around?
  • What is the educational impact (rather than the press impact) of MOOCs?

Much information on MOOCs consists of so-called grey literature.  Main information sources include:

  • blogs from practitioners and academics, with an overrepresentation of academics from Athabasca University and the OU.
  • blogs from participants in MOOCs, sharing their experiences
  • articles in open academic journals such as IRRODL, EURODL, Open Praxis
  • articles in more popular education magazines such as Inside Higher Ed and The Chronicle of Higher Education.
  • articles in the general press such as The Economist and The New York Times

Some comments on these sources:

  1. The term ‘grey literature’ may sound a bit disparaging.  However, as Martin Weller writes, notions of scholarship and academic publishing are evolving.  Blogs and open journals constitute alternative forms of scholarship with more interaction, less formality and shorter ‘turnaround’ times.
  2. Information and research on MOOCs is heavily Anglo-Saxon-centred (or perhaps better, Silicon Valley-centred?).  I could hardly find any articles on MOOCs in Dutch, although that might not be so surprising.  Although MOOCs (xMOOCs) are often touted as a ‘solution’ for developing countries, there are few perspectives from researchers in developing countries.  As Mike Trucano writes on the World Bank’s EduTech blog:

    “Public discussions around MOOCs have tended to represent viewpoints and interests of elite institutions in rich, industrialized countries (notably the United States) — with a presumption in many cases that such viewpoints and interests are shared by those in other places.”

  3. It’s interesting to see how many of the more general news sources seem to have ‘discovered’ MOOCs only after the Stanford AI course and the subsequent influx of venture capital into start-ups and initiatives such as Coursera, Udacity and edX.  The ‘original’ connectivist MOOCs, which have been around since 2008, are hardly mentioned in those overviews, let alone open universities.  A welcome exception is the Open Praxis paper by Peter and Deimann, which discusses historical manifestations of openness such as the coffee houses of the 17th century.
  4. The advantage of this grey literature is that it fosters a tremendously rich discussion on the topic. Blog posts spark other blog posts and follow-up posts. Course reflections are online immediately after the course. Events such as a failing Coursera MOOC or an OU MOOC initiative get covered extensively from all angles. This kind of fertile academic discussion can hardly be imagined with the closed peer-review publication system.
  5. The flipside of this coin is that there are a lot of opinions around, a lot of thinly-disguised commercialism and a lot of plain factual mistakes (TED talks!).  MOOCs may be heading for the ‘trough of disillusionment’ in Gartner’s hype cycle.  Rigorous research would still be valuable.  For example, most research is descriptive rather than experimental and is based on ridiculously small samples collected in a short time.  Interrater reliability may be a problem in much MOOC research.  Longitudinal studies that investigate how conversations and interactions evolve over time are absent.
  6. Sir John Daniel’s report ‘Making Sense of MOOCs’ offers a well-rounded and dispassionate overview of MOOCs up to September 2012.
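The interrater reliability concern raised in point 5 can be made concrete. When two researchers independently code the same MOOC forum posts, their agreement beyond chance can be quantified with Cohen’s kappa. A minimal sketch in Python – the coders, categories and codings below are entirely made up for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    # Proportion of items on which the raters simply agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Two hypothetical coders classifying the same ten forum posts.
coder_1 = ["question", "answer", "social", "question", "answer",
           "social", "question", "answer", "question", "social"]
coder_2 = ["question", "answer", "social", "answer", "answer",
           "social", "question", "social", "question", "social"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.7
```

A kappa around 0.7 is often read as substantial agreement; much MOOC research does not report this statistic at all.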

Interesting questions for research on MOOCs could be:

  • What constitutes success in a MOOC for various learners?
  • How do learners interact in a MOOC? Are there different stages?  Is there community or rather network formation? Do cMOOCs really operate according to connectivist principles?
  • What are experiences from MOOC participants and perspectives of educational stakeholders (accreditation agencies, senior officials, university leaders) in developing countries?
  • Why do people choose not to participate in a MOOC and still prefer expensive courses at brick-and-mortar institutions?
  • What factors inhibit or enhance the learning experience within a MOOC?
  • How can activities within a MOOC be designed to foster conversation without causing information overload?
  • How do MOOCs affect hosting institutions (e.g. instructor credibility and reputation), and what power relations and decision mechanisms are at play? (Plenty of scope for an activity-theoretical perspective here.)

A few comments:

  • High drop-out rates in MOOCs have caught a lot of attention.  Opinions are divided on whether this is a problem.  As MOOCs are free, the barrier to sign up is much lower.  Moreover, people may have various goals and may just be interested in a few parts of the MOOC.
  • MOOCs (at least the cMOOCs) are by their nature decentralized, stimulating participants to create artefacts using their own tools and networks rather than a central LMS.  cMOOCs remain accessible online and lack the clear start and end of traditional courses. This complicates data collection and research.
  • Although MOOCs are frequently heralded as a solution for higher education in developing countries, it would be interesting to read accounts from learners in developing countries for whom a MOOC actually was a serious alternative to formal education. The fact that MOOCs do not carry credit (at the hosting institution) plays a role, as do cultural factors, such as the prevalent teacher-centred view of education in many Asian countries.


Overview of posts on MOOCs from Stephen Downes: http://www.downes.ca/mooc_posts.htm

Overview of posts on MOOCs from George Siemens: https://www.diigo.com/user/gsiemens/mooc

OpenPraxis theme issue on Openness in HE: http://www.openpraxis.org/index.php/OpenPraxis/issue/view/2/showToc

IRRODL theme issue on Connectivism, and the design and delivery of social networked learning: http://www.irrodl.org/index.php/irrodl/issue/view/44

Armstrong, L. (2012) ‘Coursera and MITx – sustaining or disruptive?’, Changing Higher Education.

Peter, S. and Deimann, M. (2013) ‘On the role of openness in education: A historical reconstruction’, Open Praxis, 5(1), pp. 7–14.

Daniel, J. (2012) ‘Making sense of MOOCs: Musings in a maze of myth, paradox and possibility’, Journal of Interactive Media in Education, 3, [online] Available from: http://www-jime.open.ac.uk/jime/article/viewArticle/2012-18/html

#H809 Does student interaction lead to higher grades? The Davies and Graff (2005) paper

credit: FreeDigitalPhotos.net

Key paper in week 12 of H809 is a research paper by Davies and Graff (2005) investigating the relation between students’ activity in online course forums and their grades.  It might be due either to the course team’s selection of papers or to our growing familiarity with methodological gaps in much educational research, but I found this paper to suffer from some rather obvious shortcomings.

  1. There is a problem with the operationalization of the concepts of participation and learning, i.e. with construct validity. Participation is quantified as the number of logins to the Blackboard LMS, and learning as the final grade. These are simplifications, and the paper should at least discuss how they may distort the results.
  2. There could well be covariance between the factors.  Both participation and learning may be influenced by third variables, such as prior knowledge, motivation or age, and a multivariate analysis might be more suitable to reveal these relations.  The paper does not discuss these underlying variables and possible confounds.
  3. The question of whether participation influences final grades may be beside the point, as participation arguably has other beneficial effects for students beyond a possible effect on grades. Participation helps to foster a sense of community, may reduce feelings of isolation in some students and can promote ‘deeper’ learning.  These perceived benefits are mentioned in the introduction of the paper, but not discussed in the conclusions.
  4. The study is based on a sample of 122 undergraduate students from the first year of a business degree.  The sample is quite small for obtaining statistically significant results and is certainly too narrow to support sweeping conclusions about the relation between interaction and learning. One could question the objective of a quantitative analysis on such a limited sample.
  5. The course context likely plays a strong role in the relation between interaction and learning.  Variation between courses is higher than variation within a course, suggesting an important role for course design. Interaction in a course is not something that happens automatically; it needs to be designed for, for example using a framework like Salmon’s e-tivity model.  We don’t learn a lot about the context where the research took place.  Did interaction take place through asynchronous or synchronous communication?  Were there face-to-face interactions?  Was the student cohort subdivided into smaller tutor groups? Lack of insight into the context limits the external validity of the research.
  6. I would argue that for this kind of research the analysis of outliers would be interesting (see Outliers by Malcolm Gladwell and The Black Swan by Nassim Nicholas Taleb).  The relation between online participation and course grades is not very surprising, but the correlation is far from perfect.  Analysing learners who interacted a lot but achieved poor grades, and vice versa, would yield insights into the circumstances in which the relation holds.  This would result in more predictive knowledge at the student level about when non-participating students are at risk of failing. This relates to the next paper about the Course Signals project at Purdue University, where learning analytics is used to devise a kind of early warning system for students. Interestingly, the (proprietary) algorithm uses variables such as residency, age and prior grades (together with participation, measured by logins in the course system) as predictors for identifying students at risk.
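The outlier analysis proposed in point 6 can be sketched with synthetic data: fit a least-squares line of grades on logins, then flag students whose grade deviates strongly from what their participation predicts. Everything below (apart from the cohort size of 122) is invented for illustration, not Davies and Graff’s actual data:

```python
import random
import statistics

random.seed(42)

# Synthetic cohort: login counts and final grades for 122 students,
# with grades loosely driven by logins plus noise.
logins = [random.randint(5, 60) for _ in range(122)]
grades = [min(100.0, max(0.0, 0.6 * x + random.gauss(30, 12))) for x in logins]

def pearson(xs, ys):
    """Pearson correlation coefficient, pure Python."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(logins, grades)

# Least-squares fit of grades on logins.
mx, my = statistics.mean(logins), statistics.mean(grades)
slope = (sum((x - mx) * (y - my) for x, y in zip(logins, grades))
         / sum((x - mx) ** 2 for x in logins))
intercept = my - slope * mx

# Residual for each student: actual grade minus predicted grade.
residuals = [y - (slope * x + intercept) for x, y in zip(logins, grades)]
cutoff = 2 * statistics.stdev(residuals)

# Outliers: students more than 2 SDs away from what their login
# count predicts -- candidates for qualitative follow-up.
outliers = [i for i, res in enumerate(residuals) if abs(res) > cutoff]
```

Students with large negative residuals despite many logins, and high achievers who barely logged in, are exactly the cases whose circumstances would merit qualitative follow-up.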