Measuring Pedagogical Content Knowledge for Mathematics

In a previous blog post I discussed the concept of pedagogical content knowledge (PCK) for mathematics.  In this post I look at how it has been measured.

Many would intuitively agree that both content and pedagogical knowledge matter for teachers. However, scholarly evidence for the existence of PCK as separate from mathematical content knowledge, and for its effect on learning outcomes, is thin (Hill et al., 2008).  Depaepe et al. (2013) identified six main research lines in PCK: ‘(1) the nature of PCK, (2) the relationship between PCK and content knowledge, (3) the relationship between PCK and instructional practice, (4) the relationship between PCK and students’ learning outcomes, (5) the relationship between PCK and personal characteristics, and (6) the development of PCK.’  Measuring PCK is complicated: it is hard to distinguish content from pedagogical knowledge and to determine their respective effects on student learning.  Researchers have used both quantitative and qualitative approaches to investigate PCK with mathematics teachers.

Qualitative studies have tended to take a situative view of PCK, as something that makes sense only in relation to classroom practice (Depaepe et al., 2013). These studies rely on case studies, classroom observations, meeting observations, document analysis and interviews, usually over a relatively short period.  Longer-term qualitative studies that investigate the relation between teacher knowledge, quality of instruction and learning outcomes have the advantage that they can track developments and tensions between theory and practice, but they are rare.  An excellent (20-year-old!) ethnographic paper by Eisenhart (1993) brings Ms Daniels to life based on months of interviews and observations at her school and teacher training institute.  Unfortunately, the current dominance of ‘evidence-based’ studies, often narrowly interpreted as quasi-experimental and experimental research, crowds out this kind of valuable in-depth study.  Qualitative studies have confirmed the existence of pedagogical content knowledge independent of content knowledge, and have shown that a teacher’s repertoire of teaching strategies and alternative mathematical representations largely depends on the breadth and depth of their conceptual understanding of the subject.

Most quantitative research is based on Shulman’s original cognitive conception of PCK as a property of teachers that can be acquired and applied independently of the classroom context (Depaepe et al., 2013).  Several large-scale studies have sought to ‘prove’ the existence of PCK for mathematics as a construct separate from subject content knowledge using factor analysis.
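As a rough illustration of the factor-analytic logic, here is a minimal sketch on simulated survey data using scikit-learn. The item counts, loadings and two-trait setup are all invented for the example; the published studies of course worked with real teacher responses and more elaborate measurement models.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated data: 200 teachers answering 10 content-knowledge (CK) items
# and 10 PCK items, with two correlated latent traits driving the scores.
rng = np.random.default_rng(0)
n = 200
ck_trait = rng.normal(size=n)
pck_trait = 0.5 * ck_trait + rng.normal(size=n)  # PCK correlates with CK

# Each item loads mainly on "its" trait; measurement noise is added below.
ck_items = np.outer(ck_trait, rng.uniform(0.6, 0.9, size=10))
pck_items = np.outer(pck_trait, rng.uniform(0.6, 0.9, size=10))
responses = np.hstack([ck_items, pck_items]) + rng.normal(scale=0.5, size=(n, 20))

# Fit a two-factor model; if PCK is a distinct construct, the rotated
# loadings should split cleanly between the two item groups.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(responses)
loadings = fa.components_.T  # rows = items, columns = factors
print(np.round(loadings, 2))
```

If PCK were indistinguishable from content knowledge, a single factor would explain the responses about as well; the argument for PCK as a separate construct rests on the loadings splitting along the two item groups.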

Hill et al. (2008) used multiple-choice questions to look for separate dimensions of content and pedagogical knowledge.  Questions were situated in teaching practice, probing teachers for the representations they would use to explain certain topics, how they would respond to a student’s confusion, or what sequence of examples they would use to teach a certain topic. The questionnaires were complemented by interviews to gain more insight into teachers’ beliefs and reasoning (Hill et al., 2008). Several papers contain a useful sample of survey questions.

Several authors use Item Response Theory (IRT) to assess how well these surveys discriminate between teachers at various levels of PCK.  Test Information Curves (TICs) depict the amount of information the test yields at each ability level.  In Hill et al. (2008), a majority of questions with below-average difficulty resulted in a test that discriminated well between teachers with low and average levels of PCK, but less well between teachers with good and very good PCK.

[Figure: Test Information Curve from Hill et al. (2008)]

The amount of information decreases steadily as the ability level moves away from the maximum of the information curve. Ability is thus estimated with reasonable precision near the centre of the ability scale, but towards the extremes of the scale the accuracy of the test drops considerably.
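To make the mechanics concrete, here is a minimal sketch of how such a curve arises in a two-parameter logistic (2PL) model, a common IRT model. The item parameters below are invented, and chosen so that most difficulties lie below average, mimicking the pattern in Hill et al.’s item pool.

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P),
    where P is the probability of answering the item correctly."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 121)  # ability scale in standard deviations

# Invented (discrimination a, difficulty b) pairs, skewed towards easy items.
items = [(1.2, -1.5), (0.9, -1.0), (1.5, -0.8),
         (1.1, -0.3), (0.8, 0.0), (1.3, 0.5)]

# The test information curve is simply the sum of the item information curves.
test_info = sum(item_information_2pl(theta, a, b) for a, b in items)

peak = theta[np.argmax(test_info)]
print(f"Test information peaks at theta = {peak:.2f}")
print(f"Standard error of the ability estimate there: {1 / np.sqrt(test_info.max()):.2f}")
```

Since the standard error of an ability estimate is 1/sqrt(I(theta)), precision is highest where the curve peaks and degrades towards the extremes, which is exactly the pattern described above.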

When evaluating their survey, Hill et al. (2008) identified two problems.  First, teachers relied not only on PCK for mathematics to solve the questions, but also on subject content knowledge and even test-taking skills; cognitive interviews, in which teachers were asked to explain why they had chosen a certain answer, were used for additional validity analysis.  Second, the multiple-choice questions suffered from the fact that few teachers selected outright wrong answers, although teachers differed in the detail of the explanations of students’ problems they could give during the interviews.  The researchers found the following kinds of interview items to discriminate quite well:

  • Assessing student productions for the level of understanding they reflect
  • Identifying the computational strategies students use
  • Explaining the reasons behind misconceptions or procedural errors

Baumert et al. (2010) analysed teachers’ regular tasks and tests, coding the type of task, the level of argumentation required and the alignment with the curriculum as indicators of PCK.  They complemented this with students’ ratings of the quality of teachers’ adaptive explanations, responses to questions, pacing and teacher-student interaction.  Data from examinations and PISA numeracy tests were used to assess students’ learning outcomes.

Ball et al. (2001) discuss the concept of place value in multi-digit multiplication as a typical example of the questions they used in their survey.  They found that teachers could accurately perform the algorithm, as would numerically literate non-teachers, but often failed to provide a conceptual grounding for the rule and struggled to come up with sensible reactions to frequently occurring student mistakes.  Many teachers used ‘pseudo-explanations’ that focus on the ‘trick’ rather than the underlying concept.  Ball et al. (2001) discuss similar examples in teachers’ knowledge of division (e.g. division of fractions), rational numbers (e.g. fractions of rational numbers) and geometry (e.g. the relation between perimeter and area of rectangles).
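To make concrete what a conceptual grounding of the multiplication algorithm looks like (my own illustration, not Ball et al.’s exact item): the second partial product in the written algorithm is shifted one position because it is a multiple of ten, which the distributive decomposition by place value makes explicit.

```latex
\begin{align*}
35 \times 25 &= 35 \times (20 + 5)         \\
             &= 35 \times 20 + 35 \times 5 \\
             &= 700 + 175                  \\
             &= 875
\end{align*}
```

A teacher with strong PCK can link each row of the written algorithm to a term in this decomposition; a pseudo-explanation such as ‘add a zero and move over’ leaves exactly that link implicit.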

[Figure: example PCK item from Ball et al. (2001)]

Recent studies often start from teaching practice when analysing the role of knowledge.  Even teachers with strong PCK (as measured by surveys) may, for a variety of reasons, not use all of this knowledge when teaching (Eisenhart, 1993).  Rowland and colleagues (2005) observed and videotaped 24 lessons by teacher trainees.  Significant moments in the lessons that seemed to be informed by mathematical content or pedagogical knowledge were coded; the codes were then classified and led to the development of the ‘knowledge quartet’.  They illustrate the framework using an elementary lesson on subtraction taught by a trainee called Naomi.  The framework looks promising as a guide for discussions after lesson observations.  Its focus on the mathematical aspects of lessons, rather than on general pedagogy, was positively perceived by mentors and students (Rowland et al., 2005).

Various interpretations of PCK exist, so it is important to make clear which definition is used and which components are included.  A more cognitive interpretation, as devised by Shulman, has the advantage that it can be clearly defined, but it is then only one (hardly distinguishable) factor among the many that affect instructional quality. A more situative approach tends to imply a wider definition of PCK that goes beyond content knowledge to include affective and contextual factors. This may stretch PCK so far that it comes to mean ‘everything that makes a good teacher’.

Few studies on measuring PCK have been done in developing countries. In their systematic review, Depaepe et al. (2013) found only one study of PCK that included an African country (Botswana, in Blömeke et al., 2008).  In Cambodia we used surveys with multiple-choice questions and lesson observations to assess teacher trainers’ PCK.  Some lessons learned are:

  • Language is a major barrier: questions and answers had to be translated between English and Khmer, which complicated assessing conceptual understanding, probing further during interviews, and coding during lesson observations.
  • Response bias is an issue in both surveys and lesson observations.  Teacher trainers tend to answer what they think the researcher wants to hear, or what they think will benefit them most in the future. Because of administrative requirements, lesson observations are usually announced beforehand, so teacher trainers apply the techniques you want to see for the occasion.  As a result, the picture you get reflects optimal rather than average performance.
  • The initial test we used was based on items from the TIMSS survey. However, most questions proved too difficult for the teacher trainers, so the test discriminated poorly between their levels of PCK (recent teacher graduates have much stronger content and teaching skills, though).  A pilot item analysis or IRT calibration would have been helpful here to devise a valid and reliable test; a minimal sketch of such an item screening follows this list.
  • The small population of teacher trainers and the crowded donor landscape make it hard to devise an experimental study. A more ethnographic approach, one that also investigates how the PCK learned during teacher training is applied, or fails to be applied, in schools, seems more useful to me.  However, care should be taken to include a variety of characters, school settings and ages in this fast-changing society.
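As a minimal sketch of the item screening mentioned in the list above (all responses simulated; a real analysis would use the pilot data, and the 0.2 cut-offs are rules of thumb rather than fixed standards), classical difficulty and discrimination indices are enough to flag items that cannot separate ability levels in the group tested:

```python
import numpy as np

# Simulated scored responses: 40 teacher trainers x 15 TIMSS-style items,
# 1 = correct, 0 = incorrect; pilot data would replace this.
rng = np.random.default_rng(1)
responses = (rng.random((40, 15)) < 0.25).astype(int)  # items too hard overall

total = responses.sum(axis=1)
for i in range(responses.shape[1]):
    p = responses[:, i].mean()      # classical difficulty: proportion correct
    rest = total - responses[:, i]  # rest score avoids item-total inflation
    r = np.corrcoef(responses[:, i], rest)[0, 1]  # point-biserial discrimination
    flag = " -> review" if p < 0.2 or r < 0.2 else ""
    print(f"item {i:2d}: difficulty p = {p:.2f}, discrimination r = {r:.2f}{flag}")
```

Items with a very low proportion correct, or a low point-biserial correlation with the rest of the test, contribute little information for this population and should be revised or replaced before the main survey.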

Finally, PCK seems most useful to me as a theoretical framework to underpin sensible professional development. More on that in a next post.

Selected references

  • Ball, D.L., Lubienski, S.T. and Mewborn, D.S. (2001) ‘Research on teaching mathematics: The unsolved problem of teachers’ mathematical knowledge’, in Richardson, V. (ed.), Handbook of Research on Teaching, 4th ed., Washington, DC: American Educational Research Association, pp. 433–456. Available from: http://www-personal.umich.edu/~dball/chapters/BallLubienskiMewbornChapter.pdf (Accessed 12 September 2013).
  • Depaepe, F., Verschaffel, L. and Kelchtermans, G. (2013) ‘Pedagogical content knowledge: A systematic review of the way in which the concept has pervaded mathematics educational research’, Teaching and Teacher Education, 34, pp. 12–25.
  • Hill, H.C., Ball, D.L. and Schilling, S.G. (2008) ‘Unpacking pedagogical content knowledge: Conceptualizing and measuring teachers’ topic-specific knowledge of students’, Journal for Research in Mathematics Education, 39(4), pp. 372–400.
  • Rowland, T., Huckstep, P. and Thwaites, A. (2005) ‘Elementary teachers’ mathematics subject knowledge: The knowledge quartet and the case of Naomi’, Journal of Mathematics Teacher Education, 8(3), pp. 255–281.