#H809 Issues with Student Experience Surveys

The analysis of the Ardalan et al. paper, which compares students’ responses to paper-based and online course evaluation surveys, for TMA03 made me look at a paper by Mantz Yorke (Yorke, 2009) that empirically analyses the effect of some design elements in student experience surveys.  The paper is worthwhile alone for its extensive literature overview of research findings and of the underlying psychological constructs that attempt to explain those findings.

Schematic overview of the Yorke (2009) paper

In the empirical part of the paper, the author looks at four research questions:

  1. Does the directionality of the presentation of a set of response options (‘strongly agree’ to ‘strongly disagree’, and vice versa) affect the responses?
  2. When there are negatively stated items, does the type of negativity affect the outcome?
  3. Does using solely positively stated items produce a different response pattern from a mixture of positively and negatively stated items?
  4. Does having negatively stated items in the early part of a questionnaire produce a different pattern of responses than when such items are left until later in the instrument?
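
Questions 2 and 3 concern negatively stated items. At analysis time such items are normally reverse-scored before a scale mean is computed, so that agreement always points in the same attitudinal direction. A minimal sketch of that step, with hypothetical items and data (illustrative only, not Yorke’s instrument or procedure):

```python
# Reverse-scoring negatively stated Likert items before computing a scale mean.
# Hypothetical 5-point items -- illustrative only, not Yorke's instrument.

ITEMS = {
    "The feedback I received was helpful": "pos",
    "The workload was unmanageable": "neg",   # negatively stated item
    "I would recommend this course": "pos",
}

SCALE_MAX = 5  # 1 = strongly disagree ... 5 = strongly agree

def scale_score(responses):
    """Mean score after reverse-coding negatively stated items."""
    scored = []
    for item, answer in responses.items():
        if ITEMS[item] == "neg":
            # Agreeing (5) with a negative statement expresses the same
            # attitude as disagreeing (1) with its positive counterpart.
            answer = SCALE_MAX + 1 - answer
        scored.append(answer)
    return sum(scored) / len(scored)

print(scale_score({
    "The feedback I received was helpful": 4,
    "The workload was unmanageable": 2,   # reverse-coded to 4
    "I would recommend this course": 5,
}))  # -> 4.33
```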

Despite the lack of statistically significant findings, the author writes:

‘Statistically non-significant findings seem often to be treated as if they were of no practical significance. The investigations reported in this article do, however, have a practical significance even though very little of statistical significance emerged’ (Yorke, 2009, p.734).

The practical significance lies in prompting institutions to reflect on how they design their surveys.  The nature of that reflection will depend on the context, such as the purpose (formative vs. summative) of the survey and the local culture (Berkvens, 2012).  The author offers a rich overview of items that should be part of such a reflection and discusses explanatory frameworks from psychology.  Unlike the Ardalan paper, the attempt to explain findings by referring to psychological theory moves the paper beyond mere correlations and gives it causal and predictive value.

#H809 Learning Analytics: The Arnold and Pistilli (2012) paper

In a paper for the Learning Analytics Conference of 2012, Arnold and Pistilli explore the value of learning analytics in the Course Signals (CS) product, a pioneering learning analytics programme at Purdue University.  The researchers used three years of data from a variety of modules.  For some modules, learning analytics was used to identify students ‘at risk of failing’, based on a proprietary algorithm that took into account course-related factors such as login data, but also prior study results and demographic factors.  Students ‘at risk’ were confronted with a yellow or red traffic light on their LMS dashboard.  Based on this information, tutors could decide to contact the student by e-mail or phone.  The researchers compared retention rates for cohorts of students who entered university from 2007 until 2009, and complemented this analysis with feedback from students and instructors.
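
The actual Course Signals algorithm is proprietary (a point returned to below), so the following only sketches the general idea of combining course activity, prior results and demographic factors into a traffic-light flag; the feature names, weights and thresholds are invented:

```python
# Illustrative traffic-light risk flag in the spirit of Course Signals.
# The real algorithm is proprietary; the features, weights and
# thresholds below are invented for illustration only.

def risk_signal(logins_per_week, grade_so_far, prior_gpa, first_generation):
    """Return 'green', 'yellow' or 'red' for the student's dashboard.

    logins_per_week  -- course activity (LMS login frequency)
    grade_so_far     -- current course grade, 0..100
    prior_gpa        -- prior study results, 0..4
    first_generation -- demographic factor (ethically contentious, see below)
    """
    risk = 0.0
    if logins_per_week < 2:
        risk += 0.4
    if grade_so_far < 60:
        risk += 0.4
    if prior_gpa < 2.5:
        risk += 0.1
    if first_generation:
        risk += 0.1
    if risk >= 0.6:
        return "red"
    return "yellow" if risk >= 0.3 else "green"

print(risk_signal(logins_per_week=1, grade_so_far=55,
                  prior_gpa=3.2, first_generation=False))  # -> red
```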

Modules using CS showed increased retention rates – likely due to the use of CS.  These courses also showed lower than average test results, possibly a consequence of the higher retention.  Student feedback indicated that 58% of students wanted to use CS in every course – not a totally convincing number.

The research paper raised the following issues and questions:

Methodology

  • The correlation doesn’t necessarily point to a causal link (although the relation seems quite intuitive).
  • It’s unclear how courses were selected to use CS or not. Is there a possibility of selection bias?
  • The qualitative side of the research seems neglected.  Interesting information, such as the large group of students who are apparently not eager to use CS in every course, is not explored further.

Relevance

  • The underlying algorithm is proprietary and is thus a black box for outsiders, which severely limits its applicability and relevance for others.
  • It’s unclear what exactly the use of CS is compared with.  If students in non-CS modules get little personal learner support, CS may look like a real improvement.
  • The previous point relates to the need for a clear articulation of the objective(s) of CS, or of learning analytics in general. Including an analysis of tutor time saved, or of money saved through improved retention, would have given a more honest and complete overview of the benefits that are likely perceived as important, instead of a rather naive focus on retention rates alone (a back-of-the-envelope sketch follows this list).
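
As an illustration of the kind of benefit analysis the paper omits (all figures below are invented):

```python
# Back-of-the-envelope benefit estimate of the kind the paper omits.
# All numbers are invented for illustration.

cohort_size         = 1000
baseline_retention  = 0.80      # retention without Course Signals
cs_retention        = 0.85      # retention with Course Signals
tuition_per_student = 10_000    # revenue retained per extra student

extra_students = cohort_size * (cs_retention - baseline_retention)
extra_revenue  = extra_students * tuition_per_student

print(f"{extra_students:.0f} extra retained students, "
      f"~${extra_revenue:,.0f} in retained tuition")
# -> 50 extra retained students, ~$500,000 in retained tuition
```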

Ethical issues

  • It’s unclear if and how informed consent of students is obtained.  Is it part of the ‘small print’ that comes with enrolment?
  • How about false positives and negatives?  Some students may get a continuous red light or face a bombardment of e-mails if they belong to a demographic or socio-economic group ‘at risk’.  Others may complain when they don’t receive any warnings despite struggling to stay in the course (a sketch of this concern follows this list).
  • The authors have been closely involved in the development of the learning analytics programme at Purdue University.  This raises questions about objectivity and underlying motives of the paper.
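
To make the false positive concern above concrete: if the algorithm leans on demographic features, wrongly issued ‘red’ flags may concentrate in particular groups. A sketch with invented data:

```python
# False positive rates per group for a hypothetical at-risk classifier.
# A 'false positive' is a student flagged at risk who did not fail.
# All data invented for illustration.

def false_positive_rate(records):
    """records: list of (flagged_at_risk, actually_failed) booleans."""
    false_pos = sum(1 for flagged, failed in records if flagged and not failed)
    negatives = sum(1 for _, failed in records if not failed)
    return false_pos / negatives if negatives else 0.0

group_a = [(True, False), (True, True), (False, False), (False, False)]
group_b = [(True, False), (True, False), (True, True), (False, False)]

print(f"group A FPR: {false_positive_rate(group_a):.2f}")  # -> 0.33
print(f"group B FPR: {false_positive_rate(group_b):.2f}")  # -> 0.67
# If group B corresponds to an 'at risk' demographic, its students see
# far more unwarranted red lights and warning e-mails.
```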

References

Arnold, K.E. and Pistilli, M.D. (2012) ‘Course signals at Purdue: using learning analytics to increase student success’, in Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, New York, NY, USA, ACM, pp. 267–270 [online]. Available from: http://doi.acm.org/10.1145/2330601.2330666.

#H809 Research on MOOCs

It’s week 12 in the H809 course, and MOOCs – the official educational buzzword of 2012 – couldn’t remain absent.  The focus in this course is not so much on what MOOCs are, their history, or the different types with their various underlying pedagogies and ideologies.  I blogged on MOOCs before, as a participant in LAK11, a connectivist MOOC on learning analytics.  In H809 the focus lies on issues such as:

  • What kind of information and research is available on MOOCs?
  • What kind of MOOC research would be interesting to do?
  • What are benefits and limitations of the type of information on MOOCs that is around?
  • What is the educational impact (rather than the press impact) of MOOCs?

Much information on MOOCs consists of so-called grey literature.  The main information sources include:

  • blogs from practitioners and academics, with an overrepresentation of academics from Athabasca University and the OU.
  • blogs from participants in MOOCs, sharing their experiences
  • articles in open academic journals such as IRRODL, EURODL, Open Praxis
  • articles in more popular education magazines such as Inside Higher Ed and The Chronicle of Higher Education.
  • articles in the general press such as The Economist and The New York Times

Some comments on these sources:

  1. The term ‘grey literature’ may sound a bit disparaging.  However, as Martin Weller writes, notions of scholarship and academic publishing are evolving.  Blogs and open journals constitute alternative forms of scholarship with more interaction, less formality and shorter ‘turnaround’ times.
  2. Information and research on MOOCs is heavily Anglo-Saxon-centred (or perhaps better, Silicon Valley-centred?).  I could hardly find any articles on MOOCs in Dutch, although that might not be so surprising.  And although MOOCs (xMOOCs) are often touted as a ‘solution’ for developing countries, there are few perspectives from researchers in developing countries.  As Mike Trucano writes on the EduTech blog of the World Bank:

    “Public discussions around MOOCs have tended to represent viewpoints and interests of elite institutions in rich, industrialized countries (notably the United States) — with a presumption in many cases that such viewpoints and interests are shared by those in other places.”

  3. It’s interesting to see how many of the more general news sources seem to have ‘discovered’ MOOCs only after the Stanford AI course and the subsequent influx of venture capital into start-ups such as Coursera, Udacity and edX.  The ‘original’ connectivist MOOCs, which have been around since 2008 – let alone the open universities – are hardly mentioned in those overviews.  A welcome exception is the Open Praxis paper by Peter and Deimann that discusses historical manifestations of openness, such as the coffee houses of the 17th century.
  4. The advantage of this grey literature is that it fosters a tremendously rich discussion on the topic. Blog posts spark other blog posts and follow-up posts. Course reflections are online immediately after the course. Events such as a failing Coursera MOOC or an OU MOOC initiative get covered extensively from all angles. This kind of fertile academic discussion can hardly be imagined with the closed peer-review publication system.
  5. The flipside of this coin is that there are a lot of opinions around, a lot of thinly disguised commercialism and a lot of plain factual mistakes (TED talks!).  MOOCs may be heading for a ‘trough of disillusionment’ in Gartner’s hype cycle.  Rigorous research would still be valuable but remains scarce: most research is descriptive rather than experimental and is based on ridiculously small samples collected in a short time.  Interrater reliability may be a problem in much MOOC research (a short illustration follows this list).  Longitudinal studies that investigate how conversations and interactions evolve over time are absent.
  6. Sir John Daniel’s report ‘Making Sense of MOOCs’ offers a well-rounded and dispassionate overview of MOOCs up to September 2012.
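
To illustrate the interrater reliability point in item 5: Cohen’s kappa measures how far two raters (e.g. coders classifying MOOC forum posts) agree beyond what chance alone would produce. A minimal sketch with invented codings:

```python
# Cohen's kappa for two raters coding the same MOOC forum posts.
# Kappa corrects raw agreement for agreement expected by chance.
# Codings invented for illustration.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement: both raters independently pick the same label.
    expected = sum(c1[label] * c2[label] for label in c1) / n**2
    return (observed - expected) / (1 - expected)

r1 = ["question", "social", "question", "answer", "social", "question"]
r2 = ["question", "social", "answer",   "answer", "social", "question"]
print(f"kappa = {cohens_kappa(r1, r2):.2f}")  # -> kappa = 0.75
```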

Interesting research questions for research on MOOCs could be:

  • What constitutes success in a MOOC for various learners?
  • How do learners interact in a MOOC? Are there different stages?  Is there community or rather network formation? Do cMOOCs really operate according to connectivist principles?
  • What are experiences from MOOC participants and perspectives of educational stakeholders (accreditation agencies, senior officials, university leaders) in developing countries?
  • Why do people choose not to participate in a MOOC and still prefer expensive courses at brick-and-mortar institutions?
  • What factors inhibit or enhance the learning experience within a MOOC?
  • How can activities within a MOOC be designed to foster conversation without causing information overload?
  • How do MOOCs affect hosting institutions (e.g. instructor credibility and reputation), and what power relations and decision mechanisms are at play? (There is plenty of scope for an activity-theoretical perspective here.)

A few comments:

  • High drop-out rates in MOOCs have caught a lot of attention.  Opinions are divided on whether this is a problem or not.  As MOOCs are free, the barrier to signing up is much lower.  Moreover, people may have various goals and may just be interested in a few parts of the MOOC.
  • MOOCs (at least the cMOOCs) are by their nature decentralized, stimulating participants to create artefacts using their own tools and networks rather than a central LMS.  cMOOCs remain accessible online and lack the clear start and end of traditional courses. This complicates data collection and research.
  • Although MOOCs are frequently heralded as a solution for higher education in developing countries, it would be interesting to read accounts from learners in developing countries for whom a MOOC actually was a serious alternative to formal education. The fact that MOOCs are not eligible for credit (at the hosting institution) plays a role, as well as cultural factors, such as a prevalent teacher-centred view on education in Asian countries.

References

Overview of posts on MOOCs from Stephen Downes: http://www.downes.ca/mooc_posts.htm

Overview of posts on MOOCs from George Siemens: https://www.diigo.com/user/gsiemens/mooc

OpenPraxis theme issue on Openness in HE: http://www.openpraxis.org/index.php/OpenPraxis/issue/view/2/showToc

IRRODL theme issue on Connectivism, and the design and delivery of social networked learning: http://www.irrodl.org/index.php/irrodl/issue/view/44

Armstrong, L. (2012) ‘Coursera and MITx – sustaining or disruptive?’, Changing Higher Education [blog].

Peter, S. and Deimann, M. (2013) ‘On the role of openness in education: A historical reconstruction’, Open Praxis, 5(1), pp. 7–14.

Daniel, J. (2012) ‘Making sense of MOOCs: Musings in a maze of myth, paradox and possibility’, Journal of Interactive Media in Education, 3 [online]. Available from: http://www-jime.open.ac.uk/jime/article/viewArticle/2012-18/html