# H809 Comparing paper-based and web-based course surveys: The Ardalan (2007) paper

The second paper in week 11 of H809 looks at the effects of the medium when soliciting course feedback from students.  A switch from paper-based to web-based survey methods (2002-2003) provided a natural experiment for Ardalan and colleagues to compare the two modes on a variety of variables.  As with the Richardson paper, we were asked to look critically at the methodology and at issues such as validity and reliability.  A lively (course-wide) forum helped to collect a variety of issues.


Schematic representation of the Ardalan et al. (2007) paper

Generalisability

  • The study aims to present a ‘definitive verdict’ on some of the conflicting findings about paper-based and web-based surveys.  The paper clearly favours statistically significant correlations as proof.  However, despite the large sample, the research is based on courses at one North American university (Old Dominion University, Virginia) during two consecutive academic years (2002-2003).  The context of this university and of those academic years is not described in detail, which limits the applicability of the findings to other contexts.  Generalisability could be enhanced by including more institutions over a longer period of time.

Causality

  • The study succeeds in identifying some correlations, notably effects on the response rate and on the nature of responses (less extreme).  However, it doesn’t offer explanations for these differences.  Changes in response rates could be due to a lack of access to computers among some students, to contextual factors (communication of the survey, available time, incentives, survey fatigue…), or to fundamental differences between the two survey modes.  We don’t know.  The study doesn’t offer an explanatory framework, sticking to what Christensen describes as the descriptive phase of educational research.

Methodology

  • It’s a pity that the study wasn’t complemented by interviews with students.  These could have yielded interesting insights into the perceived differences (response rates, nature of responses) and similarities (quantity, quality).
  • I found the paper extremely well-structured, with a clear overview of the literature, research hypotheses, and methods.

Validity

  • The difference in response rates may well have had an impact on the nature of the samples.  The two samples may have been biased in terms of gender, age, location, or socio-economic status (access to a web-connected computer).  Perceived differences between the modes may therefore have been due to sample differences.
  • I’m not sure whether the research question is very relevant.  The potential cost savings for institutions from switching to web-based surveys are huge, which means that institutions will use online surveys anyway.

Even a medium-size institution with a large number of surveys to conduct realises huge cost savings by converting its paper-based surveys to the web-based method. With the infrastructure for online registration, web-based courses and interactive media becoming ubiquitous in higher education, the marginal cost savings above the sunk costs of existing infrastructure are even more significant. (Ardalan et al., 2007, p.1087)

Lower response rates with web-based surveys can be dealt with by increasing the sample size.  Rather than comparing paper-based and web-based surveys (a switch that is happening anyway), it would be more interesting to analyse whether web-based surveys manage to capture a truthful image of the quality of a course as perceived by all students, and what the influencing factors and circumstances are.
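The arithmetic behind compensating for a lower response rate is straightforward and can be sketched as follows (the figures are hypothetical, not taken from Ardalan et al.):

```python
import math

def invitations_needed(target_responses: int, response_rate: float) -> int:
    """Smallest number of invitations expected to yield the target
    number of completed surveys at a given response rate."""
    return math.ceil(target_responses / response_rate)

# Hypothetical figures: to collect 300 completed surveys...
print(invitations_needed(300, 0.50))  # paper-based at a 50% response rate -> 600
print(invitations_needed(300, 0.25))  # web-based at a 25% response rate -> 1200
```

Note, though, that inviting more students only restores the number of responses (precision); it does nothing about any bias in *who* responds, which is precisely the sampling concern raised under Validity.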
