What It Means to Have a High Quality of Learning

Measuring the quality of learning is a notably tricky endeavour. Before engaging in a course, you make a decision based on the available information and alleged proxies for its quality: the reputation of the institution, the fame of the instructors, the garishness of the conference hall, the size of the sports centre. When pressed, many of us will agree that these don’t necessarily say much about the amount of learning that takes place, but what else have we got?

Test results are often used to measure the amount of learning taking place. Comparing quality across institutions, though, would require standardized tests, and even then local contexts differ: students’ socio-economic status and initial knowledge should be controlled for. Is a school that selects for strong learners and, as a result, produces stellar pass rates a better school than one that accepts and works with all learners but achieves a lower pass rate? Tests also risk reducing what we see as quality to what can be easily measured. Good primary schools in South Africa are those with good results in the Annual National Assessments (ANA) in maths and literacy; nothing is said about other subjects such as science, let alone about hard-to-measure skills such as motivation, curiosity and working together. Motivation, passion and a desire for lifelong learning are not captured in traditional tests, yet they are often better predictors of achievement in the future workplace. Learning may simply not lend itself to being expressed in amounts.
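To make the ‘controlling for’ point concrete, here is a minimal sketch of how a comparison between schools might be adjusted for socio-economic status and initial knowledge using a linear regression. Everything in it is invented for illustration (simulated data, made-up effect sizes, Python with numpy and statsmodels); it describes no actual analysis.

```python
# Illustrative only: compare two hypothetical schools on test scores while
# controlling for socio-economic status (SES) and initial knowledge.
# All data are simulated and all names are invented for this example.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=0)
n = 300

ses = rng.normal(size=n)               # socio-economic status (standardized)
prior = rng.normal(size=n)             # initial knowledge (standardized)
school_b = rng.integers(0, 2, size=n)  # 0 = school A, 1 = school B

# Simulated scores: SES and prior knowledge drive results far more
# than which school a student attends.
score = 60 + 5 * ses + 6 * prior + 1.0 * school_b + rng.normal(scale=8, size=n)

# Regress scores on school membership plus the two controls.
X = sm.add_constant(np.column_stack([school_b, ses, prior]))
result = sm.OLS(score, X).fit()

# The school_b coefficient is the difference between schools *after*
# intake differences have been accounted for.
print(result.params)  # order: const, school_b, ses, prior
```

In this toy example the raw gap between schools mostly reflects intake; the adjusted school coefficient is what a fairer comparison would look at.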

With the South African Council for Educators (SACE) we have been working on a framework to measure quality in teacher professional development. One way is to let the free market play. The reasoning is that, in time, teachers will automatically gravitate towards those courses that offer good value for money. Information and recommendation systems, like those used by Uber and similar platforms, can speed up the process. SACE has chosen a more centralist approach, requiring every provider to be approved and every course to be endorsed by evaluators. Information submitted by course providers on relevance, learning materials, facilitators’ credentials, attention to equity, assessment and so on arguably offers some clues for distinguishing good programmes from bad ones. But is it a good-enough way to measure quality? Some recent reflections in the blogosphere offer excellent ideas:

Dave Cormier writes that learning’s first principle should be getting learners to care about what they learn, out of genuine interest.

Learners who care can be taught almost anything. Learners who have acquired knowledge only to pass a test will have forgotten 95% of it after a few months. Moreover, it turns out that learning a passion to learn is more important for practical success than learning any particular facts or skills. Our job, as educators, is to convince students who don’t care to start caring, and to encourage those who already care to continue caring.

Cormier detects a tension between education and learning here. Education is an institutionalized form of learning, requiring standardized curricula, assessment and accreditation. When learning becomes education, accountability takes control at the expense of learner engagement. Education systems are not designed to get people engaged; they are designed to get people to a ‘standard of knowing’, which does not necessarily mean getting them engaged. Education is also much harder to change than learning: learning can be done anywhere and anytime, while reforming education requires a whole range of stakeholders to agree. A problem arises when there is a total disconnect between education and learning: when education amounts to ‘covering’ the curriculum, or when a degree signals that you passed a strong selection mechanism rather than serving as proof of learning, as The Economist argued is the case at some prestigious American universities.

SACE’s system for teacher professional development wisely includes informal, individual learning such as engaging with books or articles. However, the system distrusts this kind of learning and has been conceived in such a way that formal, third-party-organized professional development is also required. Perhaps justifiably so: professional standards may be too weak among many teachers to expect them to engage in professional development without external pressure. However, as Cormier writes:

The problem with threatening people is that in order for it to continue to work, you have to continue to threaten them (well… there are other problems, but this is the relevant one for this discussion). And [if], as has happened, students no longer care about grades, or their parents believe their low grades are the fault of the teacher, then the whole system falls apart. You can only threaten people with things they care about. I’m not suggesting that we shouldn’t hold learners accountable, but if we’re trying to encourage people to care about their work, about their world, is it practical to have it only work when someone is threatening them?

That is the Achilles’ heel of the whole professional development system. If people don’t care about professional development, about being good at what they’re doing, then no monitoring system or amount of pressure will help. It’s not possible. People will comply and sit through whatever training offers good value for points and dishes out beefy food, but they will not be in it for real. It won’t have any effect at all:

We have not built an education system that encourages people to be engaged. The system is not designed to do it. It’s designed to get people to a ‘standard of knowing.’ Knowing a thing, in the sense of being able to repeat it back or demonstrate it, has no direct relationship to ‘engagement’. There are certainly some teachers that create spaces where engagement occurs, but they are swimming upstream, constantly battling the dreaded assessment and the need to cover the curriculum.

What can be done? Teacher professional development should be encouraged, but not as a top-down, imposed tick-box exercise. School-based communities of teachers working together can help, as long as they’re not hijacked by bureaucrats. Accountability should come bottom-up, from learners and parents, rather than being a top-down exercise. More attention in teacher training should go to instilling a love of learning and truly expanding one’s knowledge of the subject, rather than acquiring knowledge (which is forgotten soon afterwards) to pass a hurdle.

There is actually some evidence about which elements of an education make people successful and happy later in life. Gallup, the large polling company, investigated the relationship between people’s education and their success and wellbeing a few years after graduation. Michael Feldstein writes:

Again, the institution type didn’t matter. It really comes down to feeling connected to your school work and your teachers, which does not correlate well with the various traditional criteria people use for evaluating the quality of an educational institution. If you buy Gallup’s chain of argument and evidence this, in turn, suggests that being a hippy-dippy earthy-crunchy touchy-feely constructivy-connectivy commie pinko guide on the side will produce more productive workers and a more robust economy (not to mention healthier, happier human beings who get sick less and therefore keep healthcare costs lower) than being a hard-bitten Taylorite-Skinnerite practical this-is-the-real-world-kid type career coach.

Several factors in people’s education moved the needle in Gallup’s ‘Wellbeing Index’; the odds of scoring high were:

  • 1.7 times higher if “I had a mentor who encouraged me to pursue my goals and dreams”
  • 1.5 times higher if “I had at least one professor at [College] who made me excited about learning”
  • 1.7 times higher if “My professors at [College] cared about me as a person”
  • 1.5 times higher if “I had an internship or job that allowed me to apply what I was learning in the classroom”
  • 1.1 times higher if “I worked on a project that took a semester or more to complete”
  • 1.4 times higher if “I was extremely active in extracurricular activities and organizations while attending [College]”

The positive thing in all this is, as Feldstein writes:

You don’t have to have every teacher make you feel excited about learning in order to have a better chance at a better life. You just need one.

Too Hard To Measure: On the Value of Experiments and the Difficulty of Measuring Lesson Quality

An interesting article in The Guardian (from some time ago; I’m a slow reader) discusses the overblown importance attributed to doing experiments during science lessons.

The article reminds me of my experience in Cambodia, where experiments are also frequently espoused as proof of a student-centred lesson. In reality, experiments in Cambodian classrooms are often a very teacher-centred activity:

  • the teacher demonstrates, and students (at best) try to observe what happens;
  • students do the experiment by adhering to a strict series of steps outlined in a worksheet;
  • students work in large groups, in which usually only one or two students do the work while the others are merely bystanders;
  • the procedure, observations and interpretation of the experiment are laid down in detail beforehand.

The article touches upon two interesting elements. First, there is the questionable educational value of many experiments in science classes. Second, there is the challenge of measuring lesson quality beyond ‘ticking off’ the occurrence of activities such as experiments.

The article refers to ‘The Fallacy of Induction’ by Rosalind Driver. Her book Making Sense of Secondary Science is an excellent work on misconceptions in science education and has been an important inspiration for me.

Driver doesn’t dismiss practical work in science, but argues that “many pupils do not know the purpose of practical activity, thinking that they ‘do experiments’ in school to see if something works, rather than to reflect on how a theory can explain observations” (Driver et al., 1993, p.7).

She raises two main arguments. First, practical activities are often presented to students as a simulation of ‘how science really works’: collecting data, making observations, drawing inferences and arriving at a conclusion which is the accepted explanation. It’s simplistic, and pupils happily play along, following the ‘recipe’ in the ‘cookbook’ and checking whether they have ‘the right answer’. In reality, science rarely works this way:

“For a long time philosophers of science and scientists themselves have recognised the limitations of the inductive position and have acknowledged the important role that imagination plays in the construction of scientific theories.” (Driver, 1994, p.43)

The second argument is that pupils don’t arrive in class with a blank slate, but with a whole range of self-constructed interpretations or ‘theories’ of how natural phenomena work. These ‘preconceptions’ require more than an experiment to change, as children tend to fit observations into their own ‘theoretical framework’.

Observations are no longer seen as objective, but as influenced by the theoretical perspective of the observer. “As Popper said, ‘we are prisoners caught in the framework of our theories.’ This too has implications for school science, for children, too, can be imprisoned in this way by their preconceptions, observing the world through their own particular ‘conceptual spectacles’” (Driver, 1994, p.44).

Misconceptions can be changed if they are made explicit, discussed and challenged with contradicting evidence. After this ‘unlearning’ phase, children may adopt a different framework. Driver concludes: “Experience by itself is not enough. It is the sense that students make of it that matters” (Driver et al., 1993, p.7).

Discussion activities, in which pupils have the opportunity to make their reasoning explicit and to engage with and try out alternative viewpoints, including the ‘scientific’ one, need to be central, creating cognitive conflict. Practical activities should complement these discussions, rather than the other way around, where discussion and conclusions are quickly reeled off at the end of the practical.

Measuring Lesson Quality

The love of experiments, combined with neglect of the question whether and what students are actually learning, also touches upon the difficulty of adequately measuring lesson quality. Limited time and resources result in a focus on outward, visible signs. However, these:

  • deny the complexity of teaching and learning;
  • deny the individuality of students’ learning and understanding;
  • steer teachers and programme staff towards focusing on these outward signs, as they know they will be evaluated on these criteria.

Collecting valid and reliable data on lesson quality is hard. Self-assessment instruments are notoriously prone to confirmation bias. Lesson observations don’t give a reliable picture of everyday practice: they suffer from the fact that teachers pull out special lessons when visitors arrive for announced (or even unannounced) visits. Conversely, as Cuban describes beautifully, other teachers tremble and panic when an evaluator walks into their classroom, and the lesson becomes a shambles.

Evidence-based evaluation is often touted as the way forward for development projects. Randomized trials in health have been useful for building a body of knowledge on what works and what doesn’t. In a randomized trial, a group of students whose teachers received pedagogical training is compared with a group of students whose teachers didn’t. Comparisons can be made on test scores, student satisfaction or drop-out rates.
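As a rough sketch of what such a comparison involves (simulated numbers, not a description of any actual evaluation), one could test the difference in mean scores between the two groups; a real trial would also have to account for students being clustered within teachers:

```python
# A minimal sketch of the comparison a randomized trial implies: simulated
# test scores for students whose teachers were trained vs. not trained.
# All numbers are made up; a real evaluation would need a clustered design
# (students nested within teachers) rather than this naive student-level test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

control = rng.normal(loc=55, scale=12, size=200)    # teachers not trained
treatment = rng.normal(loc=58, scale=12, size=200)  # teachers trained

# Welch's t-test for a difference in mean scores between the groups.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"mean difference: {treatment.mean() - control.mean():.2f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```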

However, test scores are unsuitable, as exams are notoriously prone to cheating and questions focus on recalling factual knowledge, the opposite of what we want to achieve. A self-designed test could be a solution, but there’s a risk that programme activities will then focus more on the test than on improving teaching skills. Student satisfaction scores are prone to the aforementioned confirmation bias. Drop-out rates are hard to use, as they are influenced by many interrelated factors such as geography, economic growth and government policy.

Ownership of the evaluation by the direct target group is, in my opinion, part of the solution, as is using a variety of data sources. In future blog posts I plan to write more on how we try to measure lesson quality.

————————

For more detail, see the study by Prof. James Dillon (PDF) on the value of practical work in science education.
Driver, R. (1994) ‘The fallacy of induction in science teaching’, in Levinson, R. (ed.) Teaching Science, London, Routledge, pp. 41–48.

Driver, R., Squires, A., Rushworth, P. and Wood-Robinson, V. (1993) Making Sense of Secondary Science, London, Routledge.