The key paper in week 12 of H809 is a research paper by Davies and Graff (2005) investigating the relation between students’ activity in online course forums and their grades. It might be due to the course team’s selection of papers, or to our growing familiarity with methodological gaps in much educational research, but I found this paper to suffer from some rather obvious shortcomings.
- There is a problem with the operationalisation of the concepts of participation and learning, i.e. with construct validity. Participation is quantified as the number of logins to the Blackboard LMS, and learning as the final grade. These are crude simplifications, and the paper should at least discuss how they may distort the results.
- There could well be confounding between the factors. Both participation and learning may be influenced by third variables, such as prior knowledge, motivation or age, and a multivariate analysis might be better suited to reveal these relations. The paper offers no discussion of these underlying variables and possible confounds.
- The question whether participation influences final grades may be of limited relevance, as participation arguably has beneficial effects for students beyond any effect on grades: it helps to foster a sense of community, may reduce feelings of isolation for some students, and can promote ‘deeper’ learning. These benefits are mentioned in the introduction of the paper, but not discussed in the conclusions.
- The study is based on a sample of 122 undergraduate students from the first year of a business degree. The sample is modest for quantitative analysis and is certainly too narrow to support sweeping conclusions about the relation between interaction and learning. One could question what the objective of a quantitative analysis on such a limited sample really is.
- The course context likely plays a strong role in the relation between interaction and learning. Variation between courses is higher than variation within a course, suggesting an important role for course design: interaction in a course does not happen automatically, but needs to be designed for, for example using a framework like Salmon’s e-tivity model. We learn little about the context in which the research took place. Did interaction take place through asynchronous or synchronous communication? Were there face-to-face interactions? Was the student cohort subdivided into smaller tutor groups? This lack of insight into the context limits the external validity of the research.
- I would argue that for this kind of research an analysis of outliers would be interesting (see Outliers by Malcolm Gladwell and The Black Swan by Nassim Nicholas Taleb). The relation between online participation and course grades is not very surprising, but the correlation is far from perfect. Analysing learners who interacted a lot but achieved poor grades, and vice versa, would yield insights into the circumstances under which the relation holds. This would result in more predictive knowledge at the student level about when non-participating students are at risk of failing. This relates to the next paper, about the Course Signals project at Purdue University, where learning analytics is used to devise a kind of early warning system for students. Interestingly, the (proprietary) algorithm uses variables such as residency, age and prior grades (together with participation, measured by logins in the course system) as predictors for identifying students at risk.
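To make the confounding concern concrete, here is a minimal sketch with synthetic data (‘motivation’ is a hypothetical third variable, not something the study measured): a variable that drives both logins and grades produces a sizeable raw correlation between them, which largely disappears once that variable is partialled out.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 122  # same size as the study's sample

# Hypothetical third variable driving both logins and grades.
motivation = rng.normal(0, 1, n)
logins = 20 + 5 * motivation + rng.normal(0, 3, n)
grades = 60 + 8 * motivation + rng.normal(0, 5, n)

# Raw correlation between logins and grades looks substantial...
raw_r = np.corrcoef(logins, grades)[0, 1]

# ...but partialling out motivation (regressing both variables on it
# and correlating the residuals) removes most of the association.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

partial_r = np.corrcoef(residuals(logins, motivation),
                        residuals(grades, motivation))[0, 1]

print(f"raw r = {raw_r:.2f}, partial r = {partial_r:.2f}")
```

The point is not the particular numbers, but that a bivariate correlation alone cannot distinguish a direct effect of participation from the influence of an unmeasured common cause.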
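As a rough back-of-the-envelope check on the sample-size point, one can estimate the smallest correlation that reaches significance (two-tailed, p < .05) with 122 students, using the standard relation between r and the t-statistic (the critical t of about 1.98 for roughly 120 degrees of freedom is an approximation).

```python
import math

# Smallest r reaching p < .05 (two-tailed) at n = 122,
# from t = r * sqrt(n - 2) / sqrt(1 - r^2) rearranged to
# r = t / sqrt(n - 2 + t^2), with an approximate critical t.
n = 122
t_crit = 1.98  # ~critical value for 120 degrees of freedom
r_crit = t_crit / math.sqrt(n - 2 + t_crit**2)
print(f"minimum significant r at n={n}: {r_crit:.2f}")
```

Correlations above roughly 0.18 already reach significance at this sample size, which suggests that statistical significance as such is not the main obstacle; the narrowness of the sample is the more serious limit on generalisation.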
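The outlier analysis proposed above could be sketched as follows (again with synthetic data, since the study’s dataset is not available): fit a simple linear trend of grades on logins, then flag the students whose grades deviate strongly from what their participation would predict.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort of 122 students: logins and loosely related grades.
logins = rng.poisson(30, 122).astype(float)
grades = np.clip(40 + 0.8 * logins + rng.normal(0, 12, 122), 0, 100)

# Fit a simple linear model grade ~ logins and inspect the residuals.
slope, intercept = np.polyfit(logins, grades, 1)
residual = grades - (slope * logins + intercept)

# Students more than two standard deviations off the trend line are
# the interesting cases: frequent participants with poor grades, or
# good grades despite little participation.
threshold = 2 * residual.std()
outliers = np.flatnonzero(np.abs(residual) > threshold)
print(f"{len(outliers)} students deviate strongly from the trend")
```

Interviewing or profiling exactly these students is where the qualitative insight would come from: they are the cases where the login–grade relation breaks down.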