Debunking Maslow’s Hierarchy Of Needs Theory

A recommended education blog is that of Donald Clark. In various blog posts, he debunks popular educational theories, such as learning styles, Kirkpatrick's four levels of evaluation and left-brain/right-brain people, as well as hot-air-selling educational gurus such as Ken Robinson and Sugata Mitra. James Atherton formulates it perfectly:

‘So often in education, shallow unsubstantiated TED talks replace the real work of researchers and those who take a more rigorous view of evidence. Sir Ken Robinson is, I suspect, the prime example of this romantic theorising, Sugata Mitra the second. Darlings of the conference circuit, they make millions from talks but do untold damage when it comes to the real world and the education of our children.’

As in management, popular but unsubstantiated theories seem to be a persistent problem in education, where research struggles to find its way into the classroom and where consultants make a nice buck selling these theories to a captive teacher professional development audience.

Maslow's hierarchy of needs is a case in point.

First, Maslow himself updated his model in 1970, but this updated model hardly found its way into the professional development circuit. Second, the model doesn’t stand the test of basic scientific scrutiny:

Although hugely influential, his work was never tested experimentally at the time, and when it was, from the 1970s onwards, it was found wanting. Empirical studies found no real evidence for a strict hierarchy, nor for the categories as defined by Maslow.

The self-actualisation theory is now regarded as having no real value, as it is wholly subjective. The problem is Maslow's slapdash use of evidence: self-actualised people are selected by him and then used as evidence for self-actualisation.

An even weaker aspect of the theory is its strict hierarchy. It is clear that higher needs can be fulfilled before lower needs are satisfied. There are many counter-examples and, indeed, creativity can atrophy and die on the back of success. Maslow himself felt that the lines were not that clear. In short, subsequent research has shown that his hierarchy is crude, as needs are pursued non-hierarchically, often in parallel. A different set of people could equally be chosen to 'prove' that self-actualisation is the result of, say, trauma or poverty (Van Gogh, etc.).

Most sets of indicators for the well-being of children are more complex and sophisticated and do not fall into a simple hierarchy. There are many such schemas at international (UNICEF) and national levels, and they rarely bear much resemblance to Maslow's hierarchy.

Indeed, research on economic development in developing countries shows that people frequently prefer investing in things like cellphones, education and local traditions such as marriage and funeral ceremonies before their basic needs are met.

Extensive research on need fulfilment and subjective well-being (Tay and Diener, 2011) shows little support for Maslow's hypothesis:

Our analyses reveal that, as hypothesized by Maslow, people tend to achieve basic and safety needs before other needs. However, fulfilling the various needs has relatively independent effects on SWB (subjective well-being). For example, a person can gain wellbeing by meeting psychosocial needs regardless of whether his or her basic needs are fully met.

Another implication of our findings is that need fulfillment needs to be achieved at the societal level, not simply at the individual level. Although Maslow focused on individuals, we found that there are societal effects as well. It helps one’s SWB if others in one’s nation have their needs fulfilled.
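
As an aside, here is a minimal, hypothetical sketch of what 'relatively independent effects' means in practice. It is not Tay and Diener's actual analysis; all numbers are invented. If well-being is generated by basic and psychosocial need fulfilment separately, a simple regression on simulated data recovers a positive coefficient for each, even when the other is low.

```python
# Hypothetical illustration only, not Tay and Diener's analysis: simulate data
# in which basic-need and psychosocial-need fulfilment each contribute to
# well-being independently, then recover both effects with an OLS fit.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
basic = rng.uniform(0, 1, n)          # fulfilment of basic/safety needs (0-1)
psychosocial = rng.uniform(0, 1, n)   # fulfilment of psychosocial needs (0-1)

# Assumed data-generating process: both kinds of fulfilment add to well-being
# separately, i.e. psychosocial fulfilment helps even when basic needs are unmet.
swb = 0.5 * basic + 0.4 * psychosocial + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), basic, psychosocial])
coef, *_ = np.linalg.lstsq(X, swb, rcond=None)
print(dict(zip(["intercept", "basic", "psychosocial"], np.round(coef, 2))))
```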

More rigour in teacher professional development is certainly needed. Frustratingly, in our first workshop in South Africa, university (!) lecturers came in with materials on left-brain/right-brain thinking and learning styles. On the positive side, demanding such rigour helps to weed out lazy or incompetent providers from the quality ones.

#H809 Issues with Student Experience Surveys

The analysis for TMA03 of the Ardalan et al. paper, which compares students' responses to paper-based and online course evaluation surveys, made me look at a paper by Mantz Yorke (Yorke, 2009) that empirically analyses the effect of some design elements in student experience surveys. The paper is worthwhile for its extensive literature overview alone, covering research findings and the underlying psychological constructs that attempt to explain those findings.

Schematic overview of Yorke (2009) paper

In the empirical part of the paper the author looks at 4 research questions:

  1. Does the directionality of the presentation of a set of response options (‘strongly agree’ to ‘strongly disagree’, and vice versa) affect the responses?
  2. When there are negatively stated items, does the type of negativity affect the outcome?
  3. Does using solely positively stated items produce a different response pattern from a mixture of positively and negatively stated items?
  4. Does having negatively stated items in the early part of a questionnaire produce a different pattern of responses than when such items are left until later in the instrument?

Despite the lack of statistically significant findings, the author writes:

‘Statistically non-significant findings seem often to be treated as if they were of no practical significance. The investigations reported in this article do, however, have a practical significance even though very little of statistical significance emerged’ (Yorke, 2009, p.734).

The practical significance lies in prompting those who design surveys to reflect on their choices. The nature of that reflection will depend on the context, such as the purpose (formative vs. summative) of the survey and the local culture (Berkvens, 2012). The author offers a rich overview of items that should be part of such a reflection and discusses explanatory frameworks from psychology. Unlike the Ardalan paper, the attempt to explain findings by referring to psychological theory moves the paper beyond mere correlations and gives it causal and predictive value.
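
To make the directionality question (research question 1) concrete, here is a minimal sketch, with invented data, of how one might compare responses to the same item presented with the scale running in opposite directions. The scores and sample sizes are assumptions, not Yorke's data.

```python
# Hedged sketch with invented data: compare Likert scores for one item shown
# with 'strongly agree' first (version A) versus 'strongly disagree' first
# (version B), using a Mann-Whitney U test on the two groups of respondents.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
version_a = rng.integers(1, 6, size=120)  # 1-5 scores, agree-first scale
version_b = rng.integers(1, 6, size=120)  # 1-5 scores, disagree-first scale

stat, p = mannwhitneyu(version_a, version_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")

# As Yorke stresses, a non-significant p-value is not the end of the story:
# the practical significance of any shift still has to be judged separately.
```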

#H809 Does student interaction lead to higher grades? The Davies and Graff (2005) paper

credit: FreeDigitalPhotos.net

The key paper in week 12 of H809 is a research paper by Davies and Graff (2005) investigating the relation between students' activity in online course forums and their grades. It might be due either to the course team's selection of papers or to our growing familiarity with methodological gaps in much educational research, but I found this paper to suffer from some rather obvious shortcomings.

  1. There is a problem with the operationalization of the concepts of participation and learning, in other words with construct validity. Participation is quantified as the number of logins in the Blackboard LMS, and learning as the final grades. These are simplifications, and the paper should at least discuss how they may distort the results.
  2. There could well be co-variance between the factors. Both participation and learning may be influenced by third variables, such as prior knowledge, motivation, age… and a multivariate analysis might be more suitable to reveal these relations (a simulated sketch follows after this list). There is no discussion in the paper of these underlying variables and possible co-variances.
  3. The question whether participation influences final grades may be irrelevant, as participation arguably has other beneficial effects for students beyond a possible effect on grades. Participation helps to foster a sense of community, may reduce feelings of isolation with some students and can promote ‘deeper’ learning.  These perceived benefits are mentioned in the introduction of the paper, but not discussed in the conclusions.
  4. The study is based on a sample of 122 undergraduate students from the first year of a business degree. The sample size is rather small for obtaining statistically significant results and is certainly too narrow to support sweeping conclusions about the relation between interaction and learning. One could question the objective of a quantitative analysis on such a limited sample.
  5. The course context likely plays a strong role in the relation between interaction and learning. Variation between courses is higher than variation within a course, suggesting an important role for course design. Interaction in a course does not happen automatically; it needs to be designed for, for example using a framework like Salmon's e-tivities model. We don't learn a lot about the context in which the research took place. Did interaction take place through asynchronous or synchronous communication? Were there face-to-face interactions? Was the student cohort subdivided into smaller tutor groups? The lack of insight into the context limits the external validity of the research.
  6. I would argue that for this kind of research an analysis of outliers would be interesting (see Outliers by Malcolm Gladwell and The Black Swan by Nassim Nicholas Taleb). The relation between online participation and course grades is not very surprising, but the correlation is far from perfect. Analysing learners who interacted a lot but achieved poor grades, and vice versa, would yield insights into the circumstances under which the relation holds. This would result in more predictive knowledge at the student level about when non-participating students are at risk of failing. It relates to the next paper, about the Course Signals project at Purdue University, where learning analytics is used to devise a kind of early-warning system for students. Interestingly, the (proprietary) algorithm uses variables such as residency, age and prior grades (together with participation, measured by logins in the course system) as predictors for identifying students at risk.
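
The co-variance point in item 2 and the outlier point in item 6 can be illustrated with a small simulation. This is a hypothetical sketch, not the Davies and Graff analysis; all numbers except the sample size of 122 are invented.

```python
# Hypothetical sketch: a third variable (motivation) drives both logins and
# grades, so the raw login-grade correlation overstates the direct effect.
import numpy as np

rng = np.random.default_rng(2)
n = 122                                    # sample size reported in the paper
motivation = rng.normal(0, 1, n)           # unobserved third variable
logins = 20 + 5 * motivation + rng.normal(0, 3, n)
grades = 55 + 6 * motivation + 0.1 * logins + rng.normal(0, 5, n)

print("raw login-grade correlation:", round(np.corrcoef(logins, grades)[0, 1], 2))

# Controlling for motivation in a multivariate fit isolates the direct effect.
X = np.column_stack([np.ones(n), logins, motivation])
coef, *_ = np.linalg.lstsq(X, grades, rcond=None)
print("login coefficient, motivation controlled:", round(coef[1], 2))

# Outlier inspection (item 6): students whose grades deviate most from the fit,
# e.g. frequent posters with poor grades, are the interesting cases to examine.
residuals = grades - X @ coef
print("indices of the 5 largest outliers:", np.argsort(np.abs(residuals))[-5:])
```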

#H809 Validity and Reliability

Validity and reliability are two key terms in H809, originally introduced by Campbell and Stanley (1963) and often confused. Validity is itself a contested term, with a variety of category schemes devised over the years. Below is a scheme summarising the two terms, based on references recommended in the course text.

Apart from focusing on validity, reliability and their sub-categories, the course text suggests using a list of critical questions to evaluate research findings, such as:

  • Does the  study discuss how the findings are generalisable to other contexts?
  • Does the study show correlations or causal relationships?
  • Does the study use an underlying theoretical framework to predict and explain findings?
  • How strong is the evidence? (in terms of statistical significance, triangulation of methods, sample size…)
  • Are there alternative explanations?

Scheme summarizing validity and reliability, based on Trochim (2007)

The Hawthorne effect takes its name from a series of studies in the 1920s at the Hawthorne Works manufacturing plant in the mid-western US. It is often misinterpreted ('mythical drift') as a kind of scientific principle describing the effect the researcher has on the experiment, or the effect of the awareness by those being studied that they are part of an experiment. In reality, the Hawthorne studies are useful for highlighting some of the pitfalls of dealing with people (both the researchers and the research subjects) in research.

References

  • Anon (2009) ‘Questioning the Hawthorne effect: Light work’, The Economist, [online] Available from: http://www.economist.com/node/13788427 (Accessed 28 April 2013).
  • Olson, R., Hogan, L. and Santos, L. (2005) ‘Illuminating the History of Psychology: tips for teaching students about the Hawthorne studies’, Psychology Learning & Teaching, 5(2), p. 110.

 

#H809 Comparing paper-based and web-based course surveys: The Ardalan (2007) paper

The second paper in week 11 of H809 looks at the effects of the medium when soliciting course feedback from students. A switch from paper-based to web-based survey methods (2002–2003) provided a natural-experiment setting for Ardalan and colleagues to compare the two modes on a variety of variables. As for the Richardson paper, we were asked to look critically at the methodology and at issues such as validity and reliability. A lively (course-wide) forum helped to collect a variety of issues.

Schematic representation of Ardalan et al. (2007) paper

Generalisability

  • The study aims at presenting a ‘definitive verdict’ on some of the conflicting issues surrounding paper-based and web-based surveys. The paper clearly favours statistically significant correlations as proof. However, despite the large sample, the research is based on courses at one North American university (Old Dominion University, Virginia) during two consecutive academic years (2002–2003). The context of this university and these academic years is not described in detail, limiting the applicability of the paper to other contexts. Generalisability could be enhanced by including more institutions over a longer period of time.

Causality

  • The study succeeds in identifying some correlations, notably effects on the response rate and on the nature of responses (less extreme). However, it doesn’t offer explanations for the differences. Changes in response rates could be due to a lack of access to computers for some students, to contextual factors (communication of the survey, available time, incentives, survey fatigue…), or to fundamental differences between the two survey modes. We don’t know. The study doesn’t offer an explanatory framework, sticking to what Christensen describes as the descriptive phase of educational research.

Methodology

  • It’s a pity that the study wasn’t complemented by interviews with students. These could have yielded interesting insights into perceived differences (response rates, nature of responses) and similarities (quantity, quality).
  • I found the paper extremely well structured, with a clear overview of the literature, the research hypotheses and the findings.

Validity

  • The difference in response rate may well have had an impact on the nature of the sample. The two samples may have been biased in terms of gender, age, location or socio-economic status (access to a web-connected computer). Perceived differences between the modes may therefore have been due to sample differences.
  • I’m not sure whether the research question is very relevant. The potential cost savings for institutions from switching to web-based surveys are huge, which means institutions will use online surveys anyway.

Even a medium-size institution with a large number of surveys to conduct realises huge cost savings by converting its paper-based surveys to the web-based method. With the infrastructure for online registration, web-based courses and interactive media becoming ubiquitous in higher education, the marginal cost savings above the sunk costs of existing infrastructure are even more significant. (Ardalan et al., 2007, p.1087)

Lower response rates with web-based surveys can be dealt with by increasing the sample size (see the sketch below). Rather than comparing paper-based and web-based surveys (a switch that is happening anyway), it would be more interesting to analyse whether web-based surveys manage to capture a truthful image of the quality of a course as perceived by all students, and what the influencing factors and circumstances are.
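
A rough, back-of-the-envelope sketch of the 'just increase the sample' argument, with invented response rates (they are assumptions, not figures from Ardalan et al.): to end up with the same number of completed questionnaires, a lower web response rate simply means inviting more students. Note that this compensates only for the quantity of responses, not for any non-response bias.

```python
# Back-of-the-envelope sketch with invented numbers: invitations needed per
# survey mode to reach a target number of completed course evaluations.
import math

target_completes = 400                           # completed responses wanted
response_rate = {"paper": 0.60, "web": 0.35}     # assumed rates, not from the paper

for mode, rate in response_rate.items():
    invitations = math.ceil(target_completes / rate)
    print(f"{mode}: invite {invitations} students to expect ~{target_completes} completes")
```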

#H809 Can Technology ‘Improve’ Learning? And can we find out?

In education and learning we cannot isolate our research subjects from outside influences, unlike in the natural sciences. In a physics experiment we would carefully select the variables we want to measure (dependent variables) and the variables that we believe could influence those (independent variables). In education this is not possible. Even in Randomized Controlled Trials (RCTs), put forward by researchers such as Duflo and Banerjee (see my post discussing their wonderful book ‘Poor Economics’) as a superior way to investigate policy effects, we cannot, in my opinion, fully exclude context.

This is why, according to Diana Laurillard, many studies talk about the ‘potential’ of technology in learning, as that conveniently avoids dealing with the messiness of context. Other studies present positive results that were obtained in favourable external circumstances. Laurillard argues that the question whether technology improves education is senseless, because it depends on so many factors:

There is no way past this impasse. The only sensible answer to the question is ‘it depends’, just as it would be for any X in the general form ‘do X’s improve learning?’. Try substituting teachers, books, schools, universities, examinations, ministers of education – any aspect of education whatever, in order to demonstrate the absurdity of the question. (Laurillard, 1997)

In H810 we discussed theories of institutional change and authors such as Douglass North and Ozcan Konur, who highlighted the importance of formal rules, informal constraints and enforcement characteristics in explaining policy effects in education. Laurillard talks about ‘external layers of influence’. A first layer surrounding the student and teacher (student motivation, assessment characteristics, perceptions, available hardware and software, student prior knowledge, teacher motivation to use technology, etc.) lies within their sphere of influence. Wider layers (organisational and institutional policies, the culture of education in society, perceived social mobility…) are much harder to influence directly.

That doesn’t mean she believes educational research is impossible. She dismisses the ‘cottage industry’ model of education (see this article by Sir John Daniel on the topic), in which education is seen as an ‘art’ best left to the skills of the teacher-as-artist. Rather, she argues for a change of direction in educational research.

Laurillard dismisses much educational research as ‘replications’ rather than ‘findings’, a statement that echoes Clayton Christensen’s plea to focus more on deductive, predictive research than on descriptive, correlational studies. He argues for spending less effort on detecting correlations and more on theory formation and on categorising the circumstances in which individual learners can benefit from certain educational interventions. A body of knowledge advances by testing hypotheses derived from theories. To end with a quote from the great Richard Feynman (courtesy of the fantastic ‘Starts with a Bang‘ blog):

“We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work.” -Richard Feynman

References

Konur, O. (2006) ‘Teaching disabled students in higher education’, Teaching in Higher Education, 11(3), pp. 351–363.
Laurillard, D. (1997) ‘How Can Learning Technologies Improve Learning?’, Law Technology Journal, 3(2), Warwick Law School.
North, D.C. (1994) Institutional Change: A Framework Of Analysis, Economic History, EconWPA

Comments on Disrupting Class by Clayton Christensen

One of the recommended works from H807 still waiting to be read was Clayton Christensen’s take on technology in education. Propelled to management stardom after writing The Innovator’s Dilemma, he applied his theory of disruptive innovations to the public education system.

The innovation is online learning.  Research suggests that online learning could provide a more individualized, tailor-made education to learners.  So, why has online learning failed to make a substantial impact on public education?

The current education system is a legacy of the industrial era, with an organisation inspired by Fordist production methods. A standardized curriculum, textbooks and assessment, and the categorisation of education into classes and grades by age were the preferred form of organisation to achieve universal literacy and prepare an obedient workforce. However, this monolithic, homogenized, teacher-led system leads to substantial drop-out and repetition rates and forces a pace that is too slow or too fast for non-average learners.

The theory of disruptive innovations offers insights on the success and failure of online learning:

  1. Conservative forces within the current system tend to cram technology into the existing model (‘computers in the classroom’). This is partly due to a lack of imagination, but mostly to a protective reflex. Similarly, companies find it nearly impossible to adopt disruptive innovations, as (1) they need to satisfy existing customers and (2) measures of performance and quality are completely different in the disruptive model. An important point is that a disruption requires a new commercial system to break through, implying that the current education model based on schools is incompatible with the disruption. “To win the support of all the powerful entities within the organisation whose endorsement is critical to getting the innovation funded, the innovative idea morphs into a concept that fits the business model of the organisation, rather than the market for which the innovator originally envisioned it. (…) Schools are not unique in how they have implemented computer-based learning.” (p.53)
  2. Innovations need markets of non-consumers to be able to develop and improve gradually. Non-consumers are those who are not served by the current system. Examples include the success of online learning with adult learners, professional learning and specialized courses, compared with its lower success in regular secondary education. The Sony Walkman was a success because it targeted teenagers without the funds to buy full-blown radios, rather than existing radio users.
  3. Innovations tend to follow an S-curve, starting slowly before reaching a tipping point (a minimal logistic sketch follows below). We sometimes forget the millions of students who currently study online at the OU, China’s Open University, Universitas Terbuka Indonesia, etc. Increasing financial strain on public education institutions, better online courses and gradually disappearing prejudices will, according to Christensen, soon create such a tipping point for student-centric online learning.

Sustaining and Disruptive Innovations
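
A minimal sketch of the S-curve mentioned in point 3, using a plain logistic function. The midpoint year and steepness are illustrative assumptions, not Christensen's projections.

```python
# Illustrative S-curve (logistic) adoption pattern: slow start, acceleration
# around a tipping point, then saturation. Parameters are made up.
import math

def adoption_share(year: int, midpoint: int = 2015, steepness: float = 0.6) -> float:
    """Hypothetical share of learners taking online courses in a given year."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

for year in range(2005, 2026, 5):
    print(year, f"{adoption_share(year):.0%}")
```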

Christensen outlines a more student-centric and modular education system that is decoupled from the standardised package students receive now. In this system, most courses are online, teachers are coaches providing one-to-one support, and materials are shared and retrieved through user networks rather than through off-the-shelf textbooks. Approaches take more account of students’ interests, learning methods and pace. Assessment is continuous and provides immediate, actionable feedback.

To realize the full benefit of technology, education systems should install ‘heavyweight teams’ composed of key players from various departments, much as Toyota used autonomous teams to design new processes for the Prius, followed by aggressive codification of those processes.

Much of the value in the book comes from taking an outsider’s perspective on learning. The book resonates with Tooley’s in that it considers schools temporary, outdated organisational forms for education, unsuited to current society. It offers interesting discussions on why computers in schools are usually a bad idea and explains why technology tends to be used first to replicate existing processes rather than to design wholly new ones.

I found the book interesting because it discusses innovation not only from a technological perspective (early adopters…) but also from an economic point of view. The book should be part of the H807 course itself rather than of the recommended reading list.

Reference:

Christensen, C., Johnson, C. W. and Horn, M. (2010) Disrupting Class, Expanded Edition: How Disruptive Innovation Will Change the Way the World Learns, 2nd ed. McGraw-Hill.

#H800 What is Learning?

Courtesy Marc Kjerland

There is a lot of research on how people learn, and it is a central objective of the course to investigate how technology can enhance learning. This assumes that we know what learning is. However, learning is not a scientific process or unit that you can define unambiguously, so it seems a good idea to discuss in Week 3 what learning actually is. In the absence of a clear definition, we use metaphors (without realizing it) to describe what we mean by learning.

At the core of the discussion is a paper by Anna Sfard (1998), in which she describes two main metaphors used when talking about learning: the acquisition metaphor (AM) and the participation metaphor (PM). The idea at the heart of Sfard’s article is that metaphors are basic units of conceptual development. The metaphor you choose determines how you see learning, and also how you will see the potential of technology in learning. Two extracts explain the main point.

The language of “knowledge acquisition” and “concept development” makes us think about the human mind as a container to be filled with certain materials and about the learner as becoming an owner of these materials. (p. 1)

“Participation” is almost synonymous with “taking part” and “being a part,” and both of these expressions signalize that learning should be viewed as a process of becoming a part of a greater whole (p. 4-5).

The metaphors basically refer to the objective of learning. In the AM it is gaining knowledge as an individual, whereas in the PM it is actively being part of a community of practice. Learning is an ongoing process that is embedded in a particular context and culture, and influenced by a particular community and idiom. This relates to the “learning to be” idea put forward by John Seely Brown the previous week. He referred to the open source movement as an example of learning by being amidst experts: students observe or contribute at the periphery and gradually, as they become experts, move towards the core of the community.

Neither metaphor refers to how learning occurs. In both metaphors learning can take place in groups or individually, and it can be based on various learning theories, such as learning by transmission or the constructivist models stressing the development of knowledge and the construction of meaning.

Sfard warns against the exclusive use of one metaphor in learning, or what she calls “theoretical excesses”.  Educational practice should be based on  different recipes, catering for various study preferences.

The AM dominated most geography courses at the K.U.Leuven. Course material consists of a tome of hundreds of pages, studying entails transferring the information from the manual to the brain as faithfully as possible, and assessment is based on the recollection of knowledge elements from the manual. Group tutorials aim at a better understanding of the course material. In this kind of course, the use of technology aims at a better “storage” of information; examples are concept maps, databases and text processing.

The PM strongly dominated the recent LAK11 course. A wide range of learning materials was made available, and learners were invited to select the resources most interesting to them and to engage with the material through contribution (active or passive) in the forums and during the lectures. Here, technology supports active involvement in the community; examples are online Moodle forums, Elluminate and possibly Twitter and Facebook.

However, as more information is stored online and becomes abundantly available, finding, selecting, assessing and retrieving it becomes a matter of participating in a network of people rather than using your network of neurons. In this way technology is used to “acquire” information through “participation” in a community, blurring the boundaries between the metaphors.

Reference

Sfard, A. (1998) ‘On two metaphors for learning and the dangers of choosing just one’, Educational Researcher, 27(2), March 1998, American Educational Research Association.