#H809 Does student interaction lead to higher grades? The Davies and Graff (2005) paper


The key paper in week 12 of H809 is a research paper by Davies and Graff (2005) investigating the relation between students’ activity in online course forums and their grades. It might be due to the course team’s selection of papers, or to our wider familiarity with methodological gaps in much educational research, but I found this paper to suffer from some rather obvious shortcomings.

  1. There is a problem with the operationalization of the concepts of participation and learning, in other words with construct validity. Participation has been quantified as the number of logins to the Blackboard LMS, and learning as the final grades. These are simplifications, and the paper should at least discuss how they may distort the results.
  2. There could well be covariance between the factors. Both participation and learning may be influenced by third variables, such as prior knowledge, motivation or age, and a multivariate analysis might be more suitable to reveal these relations (a minimal sketch follows this list). The paper contains no discussion of these underlying variables and possible covariances.
  3. The question of whether participation influences final grades may be irrelevant, as participation arguably has other beneficial effects for students beyond any effect on grades: it helps to foster a sense of community, may reduce feelings of isolation in some students and can promote ‘deeper’ learning. These perceived benefits are mentioned in the introduction of the paper, but not discussed in the conclusions.
  4. The study is based on a sample of 122 undergraduate students from the first year of a business degree. The sample size is quite small for obtaining statistically significant results and is certainly too narrow to support sweeping conclusions about the relation between interaction and learning. One could question the objective of a quantitative analysis on such a limited sample.
  5. The course context likely plays a strong role in the relation between interaction and learning. Variation between courses is higher than variation within a course, suggesting an important role for course design. Interaction in a course does not happen automatically; it needs to be designed for, for example using a framework like Salmon’s e-tivity model. We don’t learn much about the context in which the research took place. Did interaction take place through asynchronous or synchronous communication? Were there face-to-face interactions? Was the student cohort subdivided into smaller tutor groups? This lack of insight into the context limits the external validity of the research.
  6. I would argue that for this kind of research an analysis of outliers would be interesting (see Outliers by Malcolm Gladwell and The Black Swan by Nassim Nicholas Taleb). The relation between online participation and course grades is not very surprising, but the correlation is far from perfect. Analysing learners who interacted a lot but achieved poor grades, and vice versa, would yield insights into the circumstances under which the relation holds; the residual check in the sketch after this list illustrates the idea. This would result in more predictive knowledge at the student level about when non-participating students are at risk of failing. It relates to the next paper, about the Course Signals project at Purdue University, where learning analytics is used to devise a kind of early warning system for students. Interestingly, the (proprietary) algorithm uses variables such as residency, age and prior grades (together with participation, measured by logins to the course system) as predictors for identifying students at risk.
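To make points 2 and 6 concrete, below is a minimal sketch on synthetic data; the confounders (prior attainment, motivation), the effect sizes and all variable names are invented for illustration and are not taken from the paper. It compares a bivariate regression of grades on logins with a model that controls for the hypothetical confounders, then lists the students furthest from the fitted line.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Synthetic stand-in for the Davies and Graff setting: 122 students.
# 'prior' and 'motivation' are hypothetical confounders driving both
# forum logins and the final grade; none of the numbers come from the paper.
n = 122
prior = rng.normal(60, 10, n)                     # prior attainment (0-100)
motivation = rng.normal(0, 1, n)                  # latent motivation score
logins = np.maximum(0, 20 + 0.3 * prior + 8 * motivation + rng.normal(0, 10, n))
grade = np.clip(0.6 * prior + 5 * motivation + 0.05 * logins + rng.normal(0, 6, n), 0, 100)
df = pd.DataFrame({"grade": grade, "logins": logins, "prior": prior, "motivation": motivation})

# Point 2: the bivariate relation (roughly what the paper tests) versus
# a multivariate model that controls for the confounders.
naive = smf.ols("grade ~ logins", data=df).fit()
controlled = smf.ols("grade ~ logins + prior + motivation", data=df).fit()
print(f"login coefficient, bivariate:  {naive.params['logins']:.3f}")
print(f"login coefficient, controlled: {controlled.params['logins']:.3f}")

# Point 6: outliers, i.e. students far off the fitted line, such as
# frequent posters with poor grades (and the reverse).
df["residual"] = naive.resid
print(df.loc[df["residual"].abs().nlargest(5).index, ["logins", "grade", "residual"]])
```

On data generated this way, the login coefficient shrinks once the confounders enter the model, and the residual table surfaces exactly the kind of atypical students worth a closer look.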

#H809 Validity and Reliability

Validity and reliability are two key terms in H809, originally introduced by Campbell and Stanley (1963) and often confused. Validity is itself a contested term, with a variety of category schemes devised over the years. Below is a scheme summarizing the two terms, based on references recommended in the course text.

Apart from focusing on validity, reliability and their sub-categories, the course texts suggest using a list of critical questions to evaluate research findings, such as:

  • Does the study discuss how the findings are generalisable to other contexts?
  • Does the study show correlations or causal relationships?
  • Does the study use an underlying theoretical framework to predict and explain findings?
  • How strong is the evidence (in terms of statistical significance, triangulation of methods, sample size…)? A quick check of what different sample sizes can detect is sketched below the scheme.
  • Are there alternative explanations?

Scheme summarizing validity and reliability, based on Trochim (2007)
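As a companion to the evidence-strength question above, here is a small sketch using the standard two-tailed t-test for a Pearson correlation; the sample sizes are illustrative, with 122 matching the Davies and Graff cohort from the previous post. It computes the smallest correlation that reaches significance for a given n.

```python
import numpy as np
from scipy import stats

def min_significant_r(n: int, alpha: float = 0.05) -> float:
    """Smallest |r| reaching two-tailed significance, from t = r*sqrt(n-2)/sqrt(1-r^2)."""
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return t_crit / np.sqrt(n - 2 + t_crit**2)

for n in (30, 122, 500):
    print(f"n = {n:>3}: |r| >= {min_significant_r(n):.3f} is significant at alpha = 0.05")
```

With n = 122, any |r| above roughly 0.18 clears the 5% bar, a reminder that ‘statistically significant’ can still mean a rather weak relation.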

The Hawthorne effect takes its name from a series of studies in the 1920s at the Hawthorne Works manufacturing plants in the mid-western US. It is often misinterpreted (‘mythical drift’) as a kind of scientific principle describing the effect the researcher has on the experiment, or the effect of the awareness among those being studied that they are part of an experiment. In reality, the Hawthorne studies are useful for highlighting some of the pitfalls of dealing with people (both the researchers and the research subjects) in research.

References

  • Anon (2009) ‘Questioning the Hawthorne effect: Light work’, The Economist, [online] Available from: http://www.economist.com/node/13788427 (Accessed 28 April 2013).
  • Olson, R., Hogan, L. and Santos, L. (2005) ‘Illuminating the History of Psychology: tips for teaching students about the Hawthorne studies’, Psychology Learning & Teaching, 5(2), p. 110.

 

#H809 Comparing paper-based and web-based course surveys: The Ardalan (2007) paper

The second paper in week 11 of H809 looks at the effects of the medium when soliciting course feedback from students. A switch from paper-based to web-based survey methods (2002-2003) provided a natural experiment for Ardalan and colleagues to compare the two modes on a variety of variables. As with the Richardson paper, we were asked to look critically at the methodology and at issues such as validity and reliability. A lively (course-wide) forum helped to collect a variety of issues.


Schematic representation of the Ardalan et al. (2007) paper

Generalisability

  • The study aims to present a ‘definitive verdict’ on some of the conflicting issues surrounding paper-based and web-based surveys. The paper clearly favours statistically significant correlations as proof. However, despite the large sample, the research is based on courses at one North American university (Old Dominion University, Virginia) during two consecutive academic years (2002-2003). The context of this university and of those academic years is not described in detail, limiting the applicability of the findings to other contexts. Generalisability could be enhanced by including more institutions over a longer period of time.

Causality

  • The study succeeds in identifying some correlations, notably effects on the response rate and the nature of responses (less extreme). However, it doesn’t offer explanations for the differences. Changes in response rates could be due to a lack of access to computers among some students, to contextual factors (communication of the survey, available time, incentives, survey fatigue…), or to fundamental differences between the two survey modes. We don’t know. The study doesn’t offer an explanatory framework, sticking to what Christensen describes as the descriptive phase of educational research.

Methodology

  • It’s a pity that the study wasn’t complemented by interviews with students. These could have yielded interesting insights into perceived differences (response rates, nature of responses) and similarities (quantity, quality).
  • I found the paper extremely well-structured, with a clear overview of the literature and research hypotheses.

Validity

  • The difference in response rates may well have had an impact on the nature of the samples. The two samples may have been biased in terms of gender, age, location or socio-economic status (access to a web-connected computer), so perceived differences between the modes may in fact have been due to sample differences; a toy simulation of this mechanism follows below.
  • I’m not sure whether the research question is very relevant. Potential cost savings for institutions from switching to web-based surveys are huge, meaning that institutions will use online surveys anyway.

Even a medium-size institution with a large number of surveys to conduct realises huge cost savings by converting its paper-based surveys to the web-based method. With the infrastructure for online registration, web-based courses and interactive media becoming ubiquitous in higher education, the marginal cost savings above the sunk costs of existing infrastructure are even more significant. (Ardalan et al., 2007, p.1087)

Lower response rates with web-based surveys can be dealt with by increasing the sample size. Rather than comparing paper-based and web-based surveys (a done deal anyway), it would be more interesting to analyse whether web-based surveys manage to capture a truthful image of the quality of a course as perceived by all students, and what the influencing factors and circumstances are.
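To make the sample-bias concern under Validity concrete, here is a toy simulation with invented numbers: if satisfaction itself influences the probability of responding, the respondent mean stays shifted no matter how large the sample grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population of 10,000 students with a 'true' course rating on a 1-5 scale.
n = 10_000
rating = np.clip(rng.normal(3.5, 0.8, n), 1, 5)

# Hypothetical mechanism: less satisfied students respond more often,
# e.g. to voice complaints. The 0.15 slope is invented.
p_respond = np.clip(0.6 - 0.15 * (rating - 3.5), 0.05, 0.95)
responded = rng.random(n) < p_respond

print(f"true mean rating: {rating.mean():.2f}")
print(f"respondent mean:  {rating[responded].mean():.2f}")
print(f"response rate:    {responded.mean():.0%}")
# Drawing a bigger sample the same way inherits the same shift:
# the bias comes from who responds, not from how many.
```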

#H809 Richardson Paper: Face-to-Face versus Online Tuition

The paper by Richardson (2012) investigates whether the persistent attainment gap in higher education is affected by the tuition mode. Arguments can be made that online tuition both widens and narrows the gap. The paper seeks to answer (1) whether ethnicity affects the choice between face-to-face and online tuition, and (2) whether ethnicity patterns differed between the two tuition modes.

I’ve summarized the main elements of the paper in the scheme below. I wasn’t impressed with the findings. The main limitations seemed to be the narrow sample and the sole focus on ethnicity, which, in my opinion, is not an explanatory variable for student performance but rather a proxy for other socio-economic and cultural variables. These should be explored in more detail in order to gain a better understanding of the attainment gap.

Schematic representation of the Richardson (2012) paper

Generalisability:

– sample limited to one university and two courses

– two modules yield different outcomes (possibly due to variance in online tuition quality)

Causation:

– self-selected sample: are the characteristics of learners choosing the online and f2f tuition modes identical? (see the toy simulation below)

– ethnicity is a proxy variable for other factors affecting attainment (internet access, job situation, family status, geographical factors)

Methodology:

– little insight into the reasons why learners choose a particular mode of tuition.

– unclear how learners themselves assess the quality of tuition.
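A toy simulation of the self-selection point under Causation, with all parameters invented: when a hypothetical prior-attainment variable drives both the choice of tuition mode and the chance of passing, a naive comparison of pass rates shows a gap even though tuition mode has no causal effect at all.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of the self-selection worry: hypothetical prior attainment
# drives BOTH the choice of tuition mode and the chance of passing.
n = 5_000
prior = rng.normal(0, 1, n)                # standardized prior attainment
p_online = 1 / (1 + np.exp(-0.8 * prior))  # stronger students lean online (assumption)
online = rng.random(n) < p_online
p_pass = 1 / (1 + np.exp(-1.2 * prior))    # passing depends on prior ONLY
passed = rng.random(n) < p_pass

print(f"pass rate, online: {passed[online].mean():.0%}")
print(f"pass rate, f2f:    {passed[~online].mean():.0%}")
# Tuition mode has zero causal effect in this toy world, yet the naive
# comparison shows a clear gap: selection, not tuition, produces it.
```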

Reference:

Richardson, J.T.E. (2012) ‘Face-to-face versus online tuition: Preference, performance and pass rates in white and ethnic minority students’, British Journal of Educational Technology, 43(1), pp. 17–27.