Measuring Pedagogical Content Knowledge for Mathematics

In a previous blog post I discussed the concept of pedagogical content knowledge (PCK) for mathematics.  In this post I look at how it has been measured.

Many would agree intuitively with the importance of both content and pedagogical knowledge for teachers. However, scholarly evidence for the existence of PCK separate from mathematical content knowledge, and for its effect on learning outcomes, is thin (Hill et al., 2008).  Depaepe et al. (2013) identified six main research lines in PCK: ‘(1) the nature of PCK, (2) the relationship between PCK and content knowledge, (3) the relationship between PCK and instructional practice, (4) the relationship between PCK and students’ learning outcomes, (5) the relationship between PCK and personal characteristics, and (6) the development of PCK.’  Measuring PCK is complicated: it is hard to distinguish content from pedagogical knowledge and to determine their respective effects on student learning.  Researchers have used both quantitative and qualitative approaches to investigate PCK with mathematics teachers.

Qualitative studies have tended to take a situative view of PCK, treating it as something that is meaningful only in relation to classroom practice (Depaepe et al., 2013). These studies rely on case studies, classroom observations, meeting observations, document analysis and interviews, usually over a relatively short period.  Longer-term qualitative studies that investigate the relation between teacher knowledge, quality of instruction and learning outcomes have the advantage that they can track developments and tensions between theory and practice, but they are rare.  An excellent (20 years old!) ethnographic paper by Eisenhart (1993) brings to life Ms Daniels, based on months of interviews and observations at the school and the teacher training institute.  Unfortunately, the current dominance of ‘evidence-based’ studies, often narrowly interpreted as quasi-experimental and experimental research, crowds out this kind of valuable in-depth study.  Such studies have confirmed the existence of pedagogical content knowledge independent of content knowledge: a teacher’s repertoire of teaching strategies and alternative mathematical representations largely depends on the breadth and depth of their conceptual understanding of the subject.

Most quantitative research is based on Shulman’s original cognitive conception of PCK as a property of teachers that can be acquired and applied independently of the classroom context (Depaepe et al., 2013).  Several large-scale studies have sought to ‘prove’ the existence of PCK for mathematics as a construct separate from subject content knowledge, using factor analysis.
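
To make the factor-analytic argument concrete, here is a minimal sketch in Python of how separate content-knowledge (CK) and PCK factors would surface in survey responses. All item counts, loadings and noise levels are invented for illustration; none of this comes from the cited studies.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 500 teachers with independent latent CK and PCK abilities.
ck = rng.normal(size=(500, 1))
pck = rng.normal(size=(500, 1))

# Six items load on each latent trait, plus item-level noise.
items = np.hstack([
    ck + 0.5 * rng.normal(size=(500, 6)),    # content-knowledge items
    pck + 0.5 * rng.normal(size=(500, 6)),   # PCK items
])

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
# Two clean blocks of high loadings support a two-factor structure;
# if CK and PCK were one construct, the blocks would merge.
print(np.round(fa.components_.T, 2))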

Hill et al. (2008) used multiple-choice questions to look for separate dimensions of content and pedagogical knowledge.  Questions were situated in teaching practice, probing teachers for the representations they would use to explain certain topics, how they would respond to a student’s confusion, or what sequence of examples they would use to teach a certain topic. The questionnaires were complemented by interviews to get more insight into teachers’ beliefs and reasoning (Hill et al., 2008). Several papers contain a useful sample of survey questions.

Several authors use Item Response Theory (IRT) to assess how well these surveys discriminate between subjects at various ability levels: IRT quantifies how well a test distinguishes between teachers with different levels of PCK.  Test Information Curves (TIC) depict the amount of information yielded by the test at each ability level.  In Hill et al. (2008), a majority of questions with a below-average difficulty level resulted in a test that discriminated well between teachers with low and average levels of PCK, but less well between teachers with good and very good PCK.


Test Information Curve from Hill et al. (2008)

The amount of information decreases steadily as the ability level moves away from the point where the information curve peaks. Ability is thus estimated with some precision near the centre of the ability scale, but the accuracy of the test drops significantly towards the extremes of the scale.
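
For readers unfamiliar with IRT, the following sketch shows how a test information curve arises under the standard two-parameter logistic (2PL) model, in which item information equals a² · P · (1 − P) and test information is the sum over items. The item parameters below are hypothetical, chosen to mimic a test dominated by easier items, as in Hill et al. (2008).

```python
import numpy as np
import matplotlib.pyplot as plt

def item_information(theta, a, b):
    """2PL item information: I(theta) = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

theta = np.linspace(-3, 3, 200)              # ability scale
a = np.array([1.2, 1.0, 1.5, 0.8, 1.1])      # discrimination (hypothetical)
b = np.array([-1.5, -1.0, -0.5, -1.2, 0.5])  # difficulty: mostly easy items

tic = sum(item_information(theta, ai, bi) for ai, bi in zip(a, b))

plt.plot(theta, tic)
plt.xlabel("Ability (theta)")
plt.ylabel("Test information")
plt.show()
# The curve peaks below theta = 0: the test measures weaker teachers
# precisely, but says little about the strongest ones.
```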

When evaluating their survey, Hill et al. (2008) found that teachers relied not only on PCK for mathematics to solve the questions, but also on subject content knowledge and even test-taking skills.  They used cognitive interviews, in which they asked teachers to explain why they had chosen a certain answer, for additional validity analysis.  Their multiple-choice questions also suffered from the fact that few teachers selected outright wrong answers; teachers differed instead in the detail of the explanations of students’ problems they could give during the interviews.  The researchers found the following kinds of interview items to discriminate quite well:

  • Assessing student productions for the level of understanding they reflect
  • Identifying the computational strategies students use
  • Explaining the reasons behind misconceptions or procedural errors

Baumert et al. (2010) analysed teachers’ regular tasks and tests, coding the type of task, the level of argumentation required and the alignment with the curriculum as indicators of PCK.  They complemented this with students’ ratings of teachers’ quality of adaptive explanations, responses to questions, pacing and teacher-student interaction.  Data from examinations and PISA numeracy tests were used to assess students’ learning outcomes.

Ball et al. (2001) discuss the concept of place value in multiplying numbers as a typical example of the questions they used in their survey.  They found that teachers could accurately perform the algorithm, as would numerically literate non-teachers, but often failed to provide a conceptual grounding for the rule and struggled to come up with sensible reactions to frequently occurring student mistakes.  Many teachers used ‘pseudo-explanations’ focusing on the ‘trick’ rather than the underlying concept.  Ball et al. (2001) discuss similar examples in teachers’ knowledge of division (e.g. division of fractions), rational numbers (e.g. fractions of rational numbers) and geometry (e.g. the relation between perimeter and area for rectangles).

[Figure: PCK example from Ball et al. (2001)]

Recent studies often start from teaching practice in analysing the role of knowledge.  Even teachers with strong PCK (as measured by surveys) may, for a variety of reasons, not use all this knowledge when teaching (Eisenhart, 1993).  Rowland and colleagues (2005) observed and videotaped 24 lessons by teacher trainees.  Significant moments in the lessons that seemed to be informed by mathematical content or pedagogical knowledge were coded.  The codes were classified and led to the development of the ‘knowledge quartet’.  They illustrate the framework using a lesson on subtraction taught by one trainee, Naomi.  The framework looks promising as a guide for discussions after lesson observations.  Its focus on the mathematical aspects of lessons, rather than on general pedagogy, was positively received by mentors and students (Rowland et al., 2005).

Various interpretations of PCK exist, so it is important to make clear which definition of PCK is used and which components are included.  A more cognitive interpretation, as devised by Shulman, has the advantage that it can be clearly defined, but in that case it is only one (hardly distinguishable) factor among the many that affect instructional quality. A more situative approach tends to imply a wider definition of PCK beyond the scope of content knowledge, including affective and contextual factors. This may widen PCK so much that it comes to mean ‘everything that makes a good teacher’.

Few studies on measuring PCK have been done in developing countries. In their systematic review, Depaepe et al. (2013) found only one study of PCK that included an African country (Botswana, in Blömeke et al., 2008).  In Cambodia we used surveys with multiple-choice questions and lesson observations to assess teacher trainers’ PCK.  Some lessons learned are:

  • Language is a major barrier: questions and answers had to be translated between English and Khmer, which complicated assessing conceptual understanding, probing further during interviews, and coding during lesson observations.
  • Response bias is an issue in both surveys and lesson observations.  Teacher trainers tend to answer what they think the researcher wants to hear or what they think will benefit them most in the future. Due to administrative requirements, lesson observations are usually announced beforehand, so teacher trainers apply the techniques you want them to apply for the occasion.  As a result, the picture you get reflects optimal rather than average performance.
  • The initial test we used was based on items from the TIMSS survey. However, most questions were too difficult for the teacher trainers, resulting in a test with little ability to discriminate between teacher trainers’ PCK.  Recent teacher graduates have much stronger content and teaching skills, though.  An IRT analysis would have been helpful here to devise a valid and reliable test.
  • The small population of teacher trainers and the crowded donor landscape make it hard to devise an experimental study. A more ethnographic approach, which also investigates how the PCK learned during teacher training is applied, or fails to be applied, in schools seems more useful to me.  However, care should be taken to include a variety of characters, school settings and ages in this fast-changing society.

Finally, PCK seems most useful to me as a theoretical framework to underpin sensible professional development. To be discussed in a next post.

Selected references

  • Ball, D.L., Lubienski, S.T. and Mewborn, D.S. (2001) ‘Research on teaching mathematics: The unsolved problem of teachers’ mathematical knowledge’, 4th ed. In Richardson, V. (ed.), Handbook of research on teaching, Washington, DC, American Educational Research Association, pp. 433–456, [online] Available from: http://www-personal.umich.edu/~dball/chapters/BallLubienskiMewbornChapter.pdf (Accessed 12 September 2013).
  • Hill, H.C., Ball, D.L. and Schilling, S.G. (2008) ‘Unpacking Pedagogical Content Knowledge: Conceptualizing and Measuring Teachers’ Topic-Specific Knowledge of Students’, Journal for Research in Mathematics Education, 39(4), pp. 372–400.
  • Rowland, T., Huckstep, P. and Thwaites, A. (2005) ‘Elementary Teachers’ Mathematics Subject Knowledge: The Knowledge Quartet and the Case of Naomi’, Journal of Mathematics Teacher Education, 8(3), pp. 255–281.
  • Depaepe, F., Verschaffel, L. and Kelchtermans, G. (2013) ‘Pedagogical content knowledge: A systematic review of the way in which the concept has pervaded mathematics educational research’, Teaching and Teacher Education, 34, pp. 12–25.

Microglia, key to understanding learning?

A fascinating article in New Scientist on the roles of neurons, astrocytes and microglia in the functioning of the brain.  Microglia were long thought to lie dormant most of the time, only spurring into action in case of brain defects.  As so often, better data collection is revealing that these ‘elements’ play a much bigger role than thought:

As master multitaskers, microglia play many different roles. On the one hand, they are the brain’s emergency workers, swarming to injuries and clearing away the debris to allow healing to begin. On the other hand, during times of rest, they are its gardeners and caretakers, overseeing the growth of new neurons, cultivating new connections and pruning back regions that threaten to overgrow. They may also facilitate learning, by preparing the ground for memories to form.

Three elements of the brain (© New Scientist)

Interestingly, there are hints that these microglia play an important role in memory and learning (as well as in diseases like Alzheimer’s and autism).  This role is just beginning to emerge.

For instance, besides pruning synapses, microglia cultivate their development, by secreting nutrients called growth factors that promote the sprouting of new neural connections. And once the synapse is formed, they may monitor and tweak the receptors that help pass messages between two neurons. Such changes, dubbed synaptic plasticity, fine-tune the communication across neural networks, and are thought to be a key mechanism for learning. Indeed, Tremblay has found signs of high microglial activity in the hippocampus – a brain region that is central to memory.

It will be fascinating to see how neuroscience will affect our theories of learning and pedagogy.  It makes me wonder whether neuroscience doesn’t deserve more attention in education courses, such as the MAODE.

What knowledge do teachers need to teach? Pedagogical Content Knowledge for Mathematics

What knowledge do mathematics teachers need in order to teach successfully?   In a series of four blog posts I want to summarize some of the research on the topic. The first blog post looks at the concept of pedagogical content knowledge of mathematics.  The second will discuss research attempts to measure teachers’ knowledge and link it to students’ learning outcomes.  In the third one, I write about the implications for teachers’ professional development.  In the final blog post, I will relate the first three blog posts to the South African context.

Many intuitively feel that a thorough understanding of content is necessary to be a good teacher. However, accurately describing what and how much knowledge a teacher needs to teach successfully has eluded researchers and policy makers.  Nevertheless, the question matters. Ball et al. (2001) argue that insufficient insight into the knowledge it takes to teach well contributes to low numeracy levels and a lack of interest in maths among many people.  Questions such as ‘What is the probability that in a class of 25, two people will share a birthday?’ or ‘Is a square a rectangle?’ leave many baffled.
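
A few lines of Python make the birthday computation explicit, under the standard simplification of 365 equally likely birthdays:

```python
def birthday_collision(n: int) -> float:
    """Probability that at least two of n people share a birthday."""
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365  # k-th person avoids all earlier birthdays
    return 1 - p_distinct

print(f"{birthday_collision(25):.2f}")  # ~0.57: more likely than not
```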

This is due to an excessive focus on procedures, rules of thumb and ‘drill and kill’ practice, instilled by a dated but tenacious view of mathematics as a fixed body of knowledge rather than a system of human thought (Ball et al., 2001).  This view is reinforced by textbooks that list ‘hints’ instead of developing conceptual understanding, and by a wide but shallow curriculum. A similar diagnosis, in my opinion, can be made for other sciences.  Many feel that what makes a good teacher is tacit, and consider teaching an art based on common sense, with little need for professional learning.

Research on the relation between teacher knowledge and student learning took off in the 1960s. Quantitative studies used the number of certificates or mathematics courses taken as proxy variables for teachers’ knowledge.  These studies rejected a straightforward relation between more teacher mathematical knowledge and more student learning.  A (weak) positive relation for undergraduate courses was found, but with non-linear relations and threshold effects complicating matters (Ball et al., 2001).  For more advanced graduate courses the relation was absent, and in some cases even negative (e.g. Begle, 1979; Monk, 1994).  This may be due to the increasing compression of knowledge in advanced courses, which can complicate the ‘unpacking’ of content necessary in teaching, and to the higher exposure to conventional teaching approaches in advanced courses (Ball et al., 2001).

In the 1980s, researchers set about probing mathematical knowledge more closely, rather than relying on second-order indicators.  Lee Shulman (1986) conceptualized pedagogical content knowledge (PCK) as a unique domain of teacher knowledge covering the aspects of subject knowledge of mathematics that are relevant for teaching, including:

  • Knowledge of the concepts students develop at various stages of their development
  • Knowledge of common student misconceptions about mathematical concepts
  • Knowledge of the curriculum, threshold concepts and the order in which they are best taught.

With PCK, Shulman intended to vindicate the central role of subject content knowledge in teaching quality, in addition to generic pedagogical knowledge. It offers a fine-grained conceptualization of the kind of content knowledge a teacher requires to teach successfully. PCK relates not only to knowing the content, but to the ability to enable others to know it.  A powerful example of PCK is given by Deborah Ball and colleagues on the multiplication of decimals, worth quoting in full:

‘The teacher had to know more than how to multiply decimals correctly herself.  She had to understand why the algorithm for multiplying decimals works and what might be confusing about it for students.  She had to understand multiplication as iterated addition and as area, and she had to know representations for multiplication.  She had to be familiar with base-ten blocks and to know how to use them to make such ideas more visible to her students. Place value and the meaning of the places in a number were at play here as well.  She needed to see the connections between multiplication of whole numbers and multiplication of decimals in ways that enabled her to help her students make this extension.  She also needed to recognize where the children’s knowledge of multiplication of whole numbers might interfere with or obscure important aspects of multiplication of decimals.  And she needed to clearly understand and articulate why the rule for placing the decimal point in the answer – that one counts the number of decimal places in the numbers being multiplied and counts over that number of places from the right – works.  In addition, she needed an understanding of linear and area measurement and of how they could be used to model multiplication.  She even needed to anticipate that a fourth-grade student might ask why one does not do this magic when adding or subtracting decimals and to have in mind what she might say.’ (Ball et al., 2001, p. 448)
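
The rule the teacher must be able to justify, counting decimal places, follows directly from writing decimals as fractions over powers of ten. A worked example (my illustration, not Ball’s):

$$0.3 \times 1.24 = \frac{3}{10} \times \frac{124}{100} = \frac{3 \times 124}{10 \times 100} = \frac{372}{1000} = 0.372$$

One decimal place plus two decimal places yields the 1000 in the denominator, hence three places in the product.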

Since Shulman introduced PCK, the concept has been refined.  Krauss et al. (2008) distinguish three dimensions of PCK: knowledge of mathematical tasks as instructional tools, knowledge and interpretation of students’ thinking, and knowledge of multiple representations and explanations of mathematical problems.  Ball et al. (2001) include a component of subject knowledge, called ‘horizon knowledge’, which covers insight into the curriculum structure and how concepts are gradually introduced over the grades. Hill et al. (2008) use the terms ‘common content knowledge’ and ‘specialised content knowledge’.  The former relates to mathematical knowledge that numerically literate non-teachers are likely to have.  The latter refers to knowledge specific to teaching, such as what kinds of mistakes are typically made at what age, or which representations create powerful learning.


Shulman’s conceptualisation of PCK (from Depaepe et al., 2013) and Ball’s refinement

Not everyone finds Shulman’s PCK concept helpful.  Margaret Eisenhart (1993) dismisses the distinction between subject and pedagogical content knowledge as fuzzy and prefers to use procedural and conceptual knowledge as the components of teacher knowledge.  Cochran et al. (1993) coined the term pedagogical content knowing (PCKg) to stress PCK as a dynamic ‘knowing-to-act’ that is inherently linked to and situated in the act of teaching within a particular context.  Shulman’s concept was a theoretical construct, which proved difficult to confirm with empirical data.  Tim Rowland and colleagues (2005) used grounded theory to develop an empirically based classification, the ‘knowledge quartet’, which distinguishes between foundation, transformation, connection and contingency.

Foundation consists of the teacher’s theoretical knowledge and understanding of mathematics and their beliefs about the nature of mathematics, including why and how it should be learned.  It is called foundation because it determines the potential for the three other categories, which describe how foundation knowledge informs teaching decisions. Transformation refers to the capacity to transform that knowledge into powerful pedagogical forms, enabling others to learn.  Connection describes teachers’ ability to convey the inherent coherence of mathematics through well-chosen sequencing of topics, tasks and exercises within and between lessons.  Finally, contingency is the teacher’s preparedness to listen to student responses and readiness to respond suitably, even deviating from the set lesson agenda.

Foundation: awareness of purpose; identifying errors; overt subject knowledge; theoretical underpinning of pedagogy; use of terminology; use of textbook; reliance on procedures.
Transformation: choice of representations and explanations; choice of examples; teacher demonstrations.
Connection: making connections between procedures; making connections between concepts; anticipation of complexity; decisions about sequencing; recognition of conceptual appropriateness.
Contingency: responding to children’s ideas; use of opportunities; deviation from agenda; swift and correct analysis of student errors and difficulties.

Recent research efforts tend to focus more on the practice of teaching than on teachers’ knowledge.  Strong content knowledge, or even strong PCK, does not always translate into strong teaching, due to both teacher factors and environmental constraints.  In her case study of Ms. Daniels, Eisenhart (1993) splendidly describes the tensions between the focus on conceptual understanding in policy documents and teacher training courses, and elements at the personal level (the teacher’s own limited conceptual understanding) and the school level (the beliefs of cooperating teachers, a wide curriculum) that push teachers towards more procedural approaches.  Deborah Ball hits the nail on the head:

‘The pull toward neat, routinized instruction is very strong.  Teaching measurement by giving out formulas – l x w = some number of square units and l x w x h = some number of cubic units – may seem more efficient than hauling out containers, blocks and rulers and having students explore the different ways to answer questions of “how big” or “how much”. With focused, bounded tasks, students get the right answers, and everyone can think they are successful.  The fact that these bounded tasks sometimes result in sixth graders who think that you measure water with rulers may, unfortunately, go unnoticed’ (Ball et al., 2001).

In the next post I’ll discuss some of the approaches that have been used to measure teachers’ pedagogical content knowledge.  Comments and suggestions welcome!

Selected references:

Ball, D.L. (1990) ‘The mathematical understandings that prospective teachers bring to teacher education’, The elementary school journal, 90(4), pp. 449–466.

Ball, D.L., Lubienski, S.T. and Mewborn, D.S. (2001) ‘Research on teaching mathematics: The unsolved problem of teachers’ mathematical knowledge’, 4th ed. In Richardson, V. (ed.), Handbook of research on teaching, Washington, DC, American Educational Research Association, pp. 433–456, [online] Available from: http://www-personal.umich.edu/~dball/chapters/BallLubienskiMewbornChapter.pdf (Accessed 12 September 2013).

Baumert, J., Kunter, M., Blum, W., Brunner, M., et al. (2010) ‘Teachers’ Mathematical Knowledge, Cognitive Activation in the Classroom, and Student Progress’, American Educational Research Journal, 47(1), pp. 133–180.

Cochran, K.F., DeRuiter, J.A. and King, R.A. (1993) ‘Pedagogical content knowing: An integrative model for teacher preparation’, Journal of Teacher Education, 44(4), pp. 263–272.

Depaepe, F., Verschaffel, L. and Kelchtermans, G. (2013) ‘Pedagogical content knowledge: A systematic review of the way in which the concept has pervaded mathematics educational research’, Teaching and Teacher Education, 34, pp. 12–25.

Eisenhart, M., Borko, H., Underhill, R., Brown, C., et al. (1993) ‘Conceptual knowledge falls through the cracks: Complexities of learning to teach mathematics for understanding’, Journal for Research in Mathematics Education, 24(1), pp. 8–40.

Hill, H.C., Ball, D.L. and Schilling, S.G. (2008) ‘Unpacking Pedagogical Content Knowledge: Conceptualizing and Measuring Teachers’ Topic-Specific Knowledge of Students’, Journal for Research in Mathematics Education, 39(4), pp. 372–400.

Rowland, T., Huckstep, P. and Thwaites, A. (2005) ‘Elementary Teachers’ Mathematics Subject Knowledge: The Knowledge Quartet and the Case of Naomi’, Journal of Mathematics Teacher Education, 8(3), pp. 255–281.

Shulman, L.S. (1986) ‘Those who understand: Knowledge growth in teaching’, Educational Researcher, 15(2), pp. 4–14.

Blog Temporarily Suspended

Blogging can be harmful.  A sudden burst of blogging activity, sparked by participation in the WorldSTE2013 conference, apparently triggered a WordPress machine alert.  As a result, access to this blog was – without warning – suspended for nearly a week.  Fortunately, after a few days, just when I was considering returning to Blogger, someone at WordPress found the time to review the blog and clear it.

#WorldSTE2013 Conference: Days 3 & 4 (1)

I gave two talks at the WorldSTE2013 Conference.  One discusses some successes and challenges of VVOB‘s SEAL programme.  It relates the programme to Shulman’s Pedagogical Content Knowledge (PCK) and to Mishra and Koehler’s extension, Technological Pedagogical Content Knowledge (TPACK).  By introducing PCK, Shulman aimed at reasserting the importance of content knowledge, as opposed to the sole focus on generic pedagogical skills, such as class management, that was very much in vogue during the 1980s.  The presentation is embedded below.

The second presentation is based on papers I submitted for the MAODE course H810.  It discusses accessibility challenges in (science) education for learners with disabilities in Cambodia. It presents these challenges from an institutional perspective, applying the Framework of Institutional Change developed by D.C. North in the 1990s and applied to education by Ozcan Konur.  In particular, it highlights some of the slow-changing informal constraints that prevent changes in formal rules (such as Cambodia’s recent ratification of the UN Convention on the Rights of Persons with Disabilities) from taking much effect in practice.  The framework underlines the importance of aligning formal rules with informal constraints and enforcement characteristics.

On a side note, I believe this presentation was about the only one that explicitly discussed inclusive education and how access to science education can be increased for learners with disabilities, despite WHO and UN estimates that around 15% of learners have some kind of disability and that 90% of learners with disabilities in developing countries do not attend school.

References

North, D.C. (1994) Institutional Change: A Framework Of Analysis, Economic History, EconWPA, [online] Available from: http://ideas.repec.org/p/wpa/wuwpeh/9412001.html (Accessed 23 December 2012).
Konur, O. (2006) ‘Teaching disabled students in higher education’, Teaching in Higher Education, 11(3), pp. 351–363.
Seale, J. (2006) E-Learning and Disability in Higher Education: Accessibility Research and Practice, Abingdon, Routledge.

#WorldSTE2013: Malaysia’s Education Blueprint 2012-2025

I highlight briefly the most interesting sessions of the 3rd and 4th day of the World Conference on Science and Technology Education (WorldSTE2013).

Dr. Azian Abdullah discussed Malaysia’s Education Blueprint, aimed at enhancing the quality of STEM education and at achieving developed-nation status by 2020.  The country spurred into action on its education system after sharp drops in the international TIMSS and PISA (PISA+ 2009) rankings for science and maths, and alarming signals from employers:

“The growing mismatch between the supply of skills and the requirements of various industries in the local market is a reflection of the inadequacy of the country’s education system in producing the relevant human capital that can drive the country’s economy in this globalised, new world order.” (2010) (1)


PISA2009+ scores compared with investment in education levels for selected countries

Some interesting elements in the blueprint:

  • The blueprint contains a set of clear targets and SMART indicators.
  • Strong attention to early-childhood education, aiming at 80% enrolment by 2020.
  • International benchmarks are deemed more reliable for assessing the quality of education than the local exams run by the Ministry of Education itself.
  • Strong attention to achievement gaps between rural and urban areas, socio-economic groups and genders.
  • Focus on the efficiency component of educational quality.
  • Explicit attention to fostering shared values and experiences by embracing the diversity of the three main ethnic groups in Malaysia (Malay, Chinese, Indian).

Extract from Malaysia Education Blueprint

The situational analysis complementing the plan shows:

  • A lack of awareness about STEM education among parents and students, e.g. about career prospects.  Parents prefer law, business and accounting (as in Cambodia!).
  • A content-oriented curriculum: a lot of teaching to the high-stakes exams, as teachers are rewarded based on exam scores.
  • Although scientific inquiry has been officially promoted since the 1960s, teacher-centred pedagogies continue to prevail.  (North’s Framework of Institutional Change or Engeström’s Activity Theory would be very suitable for analysing this!)

Some interesting action points include:

  • Changing timetables to give teachers more time to plan lessons collaboratively and engage in communities of practice.
  • Installing school improvement specialist coaches (SISC+) in low-performing primary and secondary schools, who act as mentors and coaches and deliver professional development.
  • Compulsory testing of teachers on their content and pedagogical skills. (This raises the question of exactly what knowledge a teacher needs in order to teach well; it might be better to organize accountability at the school level rather than the individual level.)
  • A campaign to educate the public about STEM career opportunities.
  • Mobile science centres to reach rural and remote schools.
  • Strong encouragement to study sciences: students with good results are (almost) compelled to do so, as parents of selected students need to apply to the Ministry for permission to study something else.
  • Tax relief for parents with children taking STEM subjects!

The problem analysis and suggested solutions were clearly laid out.  The impression is really that of a decisive attempt to improve the quality of the education system, focusing on efficiency, learning outcomes and equity.  The policy focus of the presentation was welcome and extremely relevant for the Cambodian delegation, as Cambodia faces similar problems (although at a different level of development).

#WorldSTE2013 Conference – Day 2: Peer instruction (E. Mazur) and Visible Learning (J. Hattie)

Day 2 of the WorldSTE conference centred on the keynote sessions of two educational ‘rock stars’, Prof. Eric Mazur and Prof. John Hattie.  Both delivered a polished, entertaining presentation, but with little new information for those already familiar with their work.  The conference organizers provided little time for discussion, which was a pity, certainly in Hattie’s keynote.

Mazur’s presentation was a shortened version of his ‘Confessions of a Lecturer’ talk, which is available on YouTube in various lengths and colours (recent one).  Concept Tests combined with voting and peer discussion are a powerful way to activate students in lectures.  He referred to Pinker’s ‘curse of knowledge’ as one reason why fellow students are often better at explaining new material to each other than lecturers are. We have introduced the methodology in Cambodia as well, using voting cards rather than electronic clickers.  From my experience, the main challenge for teacher trainers is to get the questions right.  Questions should address a conceptual problem, should preferably relate it to an unfamiliar context, and should be neither too easy nor too difficult.

Hattie’s keynote was based on the results of his meta-meta-analysis to determine what makes for good learning.  It draws on more than 800 meta-analyses, integrating more than 50,000 individual studies.  His starting point is the falsely authoritative claims many teachers and educators make about what works in education, often in conflict with each other.  Extensive reviews of Hattie’s work have been written elsewhere (1, 2).  Here I just write down some personal reflections on his talk:

  1. Hattie likes to unsettle people by listing some of the factors that don’t make a difference, such as teachers’ content knowledge, teacher training, class size, school structures, ability grouping, inquiry-based methods and ICT.  However, I believe that many aspects of teaching quality are interrelated and strengthen or weaken each other.  Content knowledge as such doesn’t make a good teacher, but it is a necessary condition for teachers to engage in class discussion or provide meaningful feedback, which are factors that do make a difference in Hattie’s study.  Similarly, class size doesn’t make a difference if the teacher doesn’t adapt his or her teaching; however, class size may affect the strategies and possibilities of teachers, as it affects factors such as class management, available space and time. In the same way, school structures in themselves don’t change teaching quality, but they may affect the opportunities for teachers to engage in collaborative lesson preparation, which is strongly endorsed by Hattie.
  2. Similarly, Hattie seemed to admit that many relations are non-linear and that there are threshold effects.  Research on pedagogical content knowledge showed that teachers need a good understanding of the concepts they are teaching, but that additional specialised subject courses don’t make an additional difference.  In Cambodia, limited content knowledge does inhibit teachers from promoting deep learning, which also makes a difference in Hattie’s research.

Overview of effect sizes of variables on learning outcomes

3. This relates to the question of how valid the results are across countries and cultures.  Hattie’s work is mainly based on research from developed countries and Western cultures, and I wonder how applicable the effect sizes are in other countries and cultures.  The threshold effect-size value of 0.4 is based on the typical progression of a student in a developed country; in a developing country, an effect size of 0.4 may actually be quite high.  Hattie does recognize that the teacher factor is stronger in schools with low socio-economic status, implying that having a good teacher matters more for these students than for well-off kids.  Banerjee and Duflo have suggested that, unlike the disappointing results in developed countries, ICT may have stronger benefits in developing countries:

“The current view of the use of technology in teaching in the education community is, however, not particularly positive. But this is based mainly on experience from rich countries, where the alternative to being taught by a computer is, to a large extent, being taught by a well-trained and motivated teacher.  This is not always the case in poor countries.  And the evidence from the developing world, though sparse, is quite positive.” (Banerjee & Duflo, Poor Economics, p. 100)

4. Hattie’s research doesn’t take into account factors that lie outside the influence of the school. Moreover, many of the strongest factors in Hattie’s list, such as collaborative lesson preparation and evaluation, class discussions and setting student expectations, have been well known for quite some time. Why haven’t they been applied more?  This question has been better addressed by researchers such as North and Konur, who focus on the institutional and organisational analysis of education quality.

5. The concept of effect sizes is statistically shaky.  In a recent paper, Angus Deaton writes about effect sizes:

The effect size—the average treatment effect expressed in numbers of standard deviations of the original outcome—though conveniently dimensionless, has little to recommend it. It removes any discipline on what is being compared. Apples and oranges become immediately comparable, as do treatments whose inclusion in a meta-analysis is limited only by the imagination of the analysts in claiming similarity. Beyond that, restrictions on the trial sample will reduce the baseline standard deviation and inflate the effect size. More generally, effect sizes are open to manipulation by exclusion rules. It makes no sense to claim replicability on the basis of effect sizes, let alone to use them to rank projects.
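
Deaton’s point about sample restrictions is easy to demonstrate with simulated data. All numbers below are invented; this is a sketch of the mechanism, not a re-analysis of any study:

```python
import numpy as np

rng = np.random.default_rng(1)

def cohens_d(treat, control):
    """Mean difference in units of the pooled standard deviation."""
    pooled_sd = np.sqrt((treat.var(ddof=1) + control.var(ddof=1)) / 2)
    return (treat.mean() - control.mean()) / pooled_sd

control = rng.normal(500, 100, 10_000)  # e.g. test scores with SD 100
treat = control + 20                    # the same fixed 20-point gain

print(round(cohens_d(treat, control), 2))  # ~0.20

# Restrict the trial to a narrow ability band: the raw gain is unchanged,
# but the smaller baseline SD inflates the effect size.
band = (control > 450) & (control < 550)
print(round(cohens_d(treat[band], control[band]), 2))  # ~0.70
```

The intervention is identical in both cases; only the yardstick shrinks.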

Hattie’s research is wildly ambitious, and therefore attracts a great deal of scrutiny and criticism:

  • a sole focus on quantitative research at the expense of qualitative studies (Terhart, 2011);
  • the statistics underlying effect sizes are controversial, as is the premise that effect sizes can be aggregated and compared (see this blog post on the statistics used in Hattie’s research);
  • the quality of the studies underlying the meta-analyses varies wildly, and they shouldn’t simply be aggregated, among other things due to publication bias (Higgins and Simpson, 2011). An extract:

“VL [Visible Learning] seems to suffer from many of the criticisms levelled at meta-analyses and then adds more problems derived from the meta-meta-analysis level. It combines studies across some areas with little apparent conceptual connection; averages results from experimental, nonexperimental, manipulable and non-manipulable studies; effectively ignores subtleties such as implementation cost, additive effects, arbitrary signs and longevity, even when many of the meta-analyses it relies upon carefully highlight these issues. It then combines all the effect sizes by simply adding them together and dividing by the number of studies with no weighting. In this way it develops a simple figure, 0.40, above which, it argues, interventions are ‘worth having’ and below which interventions are not ‘educationally significant’. We argue that the process by which this number has been derived has rendered it effectively meaningless.” (Higgins and Simpson, 2011)
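
The ‘adding them together and dividing by the number of studies’ that Higgins and Simpson criticise can be contrasted with the inverse-variance weighting used in standard fixed-effect meta-analysis. The five effect sizes and standard errors below are invented for illustration:

```python
import numpy as np

d = np.array([0.90, 0.10, 0.20, 0.15, 0.10])   # study effect sizes
se = np.array([0.40, 0.05, 0.06, 0.05, 0.04])  # the 0.90 comes from a tiny study

naive = d.mean()                                # Visible-Learning-style average
weights = 1 / se**2                             # precise studies count more
weighted = (weights * d).sum() / weights.sum()  # fixed-effect pooled estimate

print(round(naive, 2), round(weighted, 2))      # 0.29 vs 0.13
```

A single imprecise outlier more than doubles the naive average, which illustrates why an unweighted grand mean is a fragile basis for a universal 0.40 cut-off.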

Despite the claim on Hattie’s website, I don’t believe Hattie has finally found the ‘holy grail’ of education research and settled the question of what makes quality education. Partly this is due to skepticism about whether such a definitive, generalized answer across cultures, education levels and economies is possible.  Partly it is due to methodological concerns about the reliability of aggregating aggregations of effect sizes and the validity of excluding qualitative research and all factors that lie outside the influence of the school.

Finally, the Hattie keynote made me nostalgic for the H809 course in the MAODE, during which papers would be turned inside out until you were convinced that each constituted the worst kind of educational research ever conducted.  Hattie’s research would fit excellently in such a context.