Pedagogical Knowledge for Mathematics: Consequences for Effective Professional Development

What knowledge do mathematics teachers need in order to teach successfully?  In a first blog post I looked at the concept of pedagogical content knowledge (PCK) for mathematics.  In the second I discussed research attempts to measure teachers' knowledge and link it to students' learning outcomes.  In this one, I write about the implications for teachers' professional development.  In the next and final blog post, I will try to relate the first three posts to the South African context.

Traditional forms of professional development, such as workshops and lectures, tend to be top-down, one-off activities focused on transmitting 'new' ideas about teaching and learning. Research shows that such isolated and piecemeal models of intervention bring little significant change to teaching practices and student achievement (e.g. Borko, 2004; Cohen and Ball, 1999). Recent initiatives in teacher professional development follow a 'socially and culturally situated process of knowledge construction'.  This implies more attention to collaboration, discourse, reflection, inquiry and application. Research indicates that effective professional development requires continuous interactive support over a substantial period of time, should focus on specific educational content under the guidance of an expert adopting a hands-off role, and should revolve around artefacts that help foster a sense of ownership among teachers (Borko, 2004; Shalem et al., 2013).  Communities of Practice form an attractive theoretical framework for this kind of activity.  This view aligns well with the situative vision of PCK as 'knowledge-in-action' that cannot be separated from the classroom context.  Regular school-based professional development not only has the advantage of limiting teachers' time away from their classes, it also promotes involvement from the school management. New teachers are often asked to comply with established practices in the school, regardless of what they learnt and appropriated before (NORRAG, 2013).

Some illustrations of teacher training formats that incorporate these principles are lesson study (Sibbald, 2009), curriculum mapping (Shalem et al., 2013) and mentoring programmes (Nilssen, 2010).

Curriculum mapping is a collaborative activity during which teachers seek to align what is taught in the classroom (the 'enacted curriculum') with what is expected in state or national standards (the 'intended curriculum') and in assessments (the 'examined curriculum').  Shalem et al. (2013) report on a curriculum mapping project for basic education mathematics teachers in South Africa (DPIP).  The main objectives of this project were to:

  • Improve use of (inter)national assessments by teachers
  • Enhance alignment between enacted and intended curriculum
  • Develop communities of practice based on a joint enterprise and artefact creation
  • Clarify teacher expectations about intended learning outcomes
  • Improve interpretation of an outcome-based curriculum (previously in place in South Africa)

Lesson study has Japanese roots and is based on joint lesson planning combined with lesson observations to refine teachers' understanding of all the details surrounding a particular lesson.  A detailed lesson plan is collaboratively constructed, tried out and discussed several times, with members of the lesson study group taking turns teaching the lesson.  Positive effects on both content and pedagogical knowledge have been reported.  However, the method is time-consuming and requires a collaborative culture within the school.  Initiatives such as the one in Chile, where a law has been proposed to link test results of teachers' content and pedagogical knowledge to their salaries, are likely to have an adverse effect, promoting competition among teachers.  Hattie (2009) identifies collaborative work by teachers in preparing and evaluating their teaching as one of the top factors affecting learning.

Ball et al. (2001) suggest that the ideal course would be to witness an outstanding fifth grade mathematics class, complemented by later study to extend and make more explicit a global and overarching perspective on the lesson topic.  In fact, Seymour and Lehrer (2006) suggest that PCK for mathematics develops with experience, as a teacher supplants general heuristics with more concrete representations and 'interanimated', contextualized combinations of teacher and student discourses develop.

These findings pose challenges for teacher professional development in developing countries.  In countries where many donors are active and that lack a framework for in-service training, such as Cambodia, organizing such a coherent system of regular professional training is challenging. Donors may set different priorities, timeframes and implementation frameworks. In the best case, organisations can organize joint trainings and follow-up activities, as in the cooperation we had with the Stepsam2 project of the Japanese International Cooperation Agency (JICA).  In the worst case, teacher trainers and teachers are overwhelmed by a plethora of one-off workshops, each eating into the valuable time available in school. A vision for teacher professional development, and a framework into which various initiatives can fit, is necessary to enhance the quality of professional development, improve alignment with educational goals and find a balance between time for teaching and time for learning.

Selected references:

Ball, D.L., Lubienski, S.T. and Mewborn, D.S. (2001) 'Research on teaching mathematics: The unsolved problem of teachers' mathematical knowledge', in Richardson, V. (ed.), Handbook of research on teaching, 4th ed., Washington, DC, American Educational Research Association, pp. 433–456.
Borko, H. (2004) 'Professional development and teacher learning: Mapping the terrain', Educational Researcher, 33(8), pp. 3–15.
Shalem, Y., Sapire, I. and Huntley, B. (2013) 'Mapping onto the mathematics curriculum – an opportunity for teachers to learn', Pythagoras, 34(1).

Seymour, J.R. and Lehrer, R. (2006) ‘Tracing the Evolution of Pedagogical Content Knowledge as the Development of Interanimated Discourses’, Journal of the Learning Sciences, 15(4), pp. 549–582.


Measuring Pedagogical Content Knowledge for Mathematics

In a previous blog post I discussed the concept of pedagogical content knowledge (PCK) for mathematics.  In this post I look at how it has been measured.

Many would agree intuitively with the importance of both content and pedagogical knowledge for teachers. However, scholarly evidence for the existence of PCK separate from mathematical content knowledge, and for its effect on learning outcomes, is thin (Hill et al., 2008).  Depaepe et al. (2013) identified six main research lines in PCK: '(1) the nature of PCK, (2) the relationship between PCK and content knowledge, (3) the relationship between PCK and instructional practice, (4) the relationship between PCK and students' learning outcomes, (5) the relationship between PCK and personal characteristics, and (6) the development of PCK.'  Measuring PCK is complicated: it is hard to distinguish content from pedagogical knowledge and to determine their respective effects on student learning.  Researchers have used both quantitative and qualitative approaches to investigate PCK with mathematics teachers.

Qualitative studies have tended to take a situative view of PCK, as something that makes sense only in relation to classroom practice (Depaepe et al., 2013). These studies rely on case studies, classroom observations, meeting observations, document analysis and interviews, usually over a relatively short period.  Longer-term qualitative studies that investigate the relation between teacher knowledge, quality of instruction and learning outcomes have the advantage that they can track evolutions and tensions between theory and practice, but they are rare.  An excellent (20 years old!) ethnographic paper by Eisenhart (1993) brings Ms Daniels to life based on months of interviews and observations at the school and teacher training institute.  Unfortunately, the current dominance of 'evidence-based' studies, often narrowly interpreted as quasi-experimental and experimental research, crowds out this kind of valuable in-depth study.  These studies have confirmed the existence of pedagogical content knowledge independent of content knowledge.  A teacher's repertoire of teaching strategies and alternative mathematical representations is largely dependent on the breadth and depth of their conceptual understanding of the subject.

Most quantitative research is based on Shulman's original cognitive conception of PCK as a property of teachers that can be acquired and applied independently of the classroom context (Depaepe et al., 2013).  Several large-scale studies have sought to 'prove' the existence of PCK for mathematics as a construct separate from subject content knowledge using factor analysis.
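
As a rough illustration of this factor-analytic approach – a minimal sketch with simulated responses, not the actual instruments or data from these studies – one can check whether 'content' items and 'PCK' items load on separate factors:

```python
# Minimal sketch: do content-knowledge items and PCK items load on separate factors?
# All data below are simulated for illustration, not taken from the studies discussed.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_teachers = 500

# Two hypothetical, correlated latent traits: content knowledge and PCK.
latent = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n_teachers)

# Ten items: the first five draw mainly on content knowledge, the last five on PCK.
loadings = np.zeros((10, 2))
loadings[:5, 0] = 0.8
loadings[5:, 1] = 0.8
responses = latent @ loadings.T + rng.normal(scale=0.5, size=(n_teachers, 10))

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(responses)

# If PCK is a separate construct, the items should split cleanly over the two factors.
print(np.round(fa.components_.T, 2))
```

With real teacher data the factor structure is rarely this clean, which is precisely why the evidence for PCK as a separate construct remains contested.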

Hill et al. (2008) used multiple-choice questions to look for separate dimensions of content and pedagogical knowledge.  Questions were situated in teaching practice, probing teachers for the representations they would use to explain certain topics, how they would respond to a student's confusion, or what sequence of examples they would use to teach a certain topic. Questionnaires were complemented by interviews to get more insight into teachers' beliefs and reasoning (Hill et al., 2008). Several papers contain a useful sample of survey questions.

Item Response Theory (IRT) is used by several authors to assess how well these surveys discriminate between subjects at various ability levels: IRT quantifies how well a test distinguishes between teachers with different levels of PCK.  Test Information Curves (TIC) depict the amount of information yielded by the test at any ability level.  In Hill et al. (2008), a majority of questions with a below-average difficulty level resulted in a test that discriminated well between teachers with low and average levels of PCK, but less well between teachers with good and very good PCK.

[Figure: Test Information Curve from Hill et al. (2008)]

The amount of information decreases rather steadily as the ability level moves away from the level at which the information curve peaks. Thus, ability is estimated with some precision near the centre of the ability scale, but as the ability level approaches the extremes of the scale, the accuracy of the test decreases significantly.
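
To make the idea of a Test Information Curve concrete, here is a minimal sketch under a standard two-parameter logistic (2PL) IRT model. The item parameters are hypothetical, chosen only to mimic the situation described above (mostly easy items), not the actual parameters from Hill et al. (2008):

```python
# Sketch of a Test Information Curve under a 2PL IRT model (hypothetical items).
import numpy as np
import matplotlib.pyplot as plt

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P(theta) * (1 - P(theta))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

theta = np.linspace(-4, 4, 200)                 # ability scale
discriminations = [1.2, 0.9, 1.5, 1.0, 1.3]     # a-parameters (hypothetical)
difficulties = [-1.5, -1.0, -0.5, 0.0, 0.5]     # mostly below-average difficulty

# Test information is the sum of the item informations.
test_information = sum(item_information(theta, a, b)
                       for a, b in zip(discriminations, difficulties))

plt.plot(theta, test_information)
plt.xlabel("Ability (theta)")
plt.ylabel("Test information")
plt.title("Mostly easy items: information peaks below average ability")
plt.show()
```

With mostly easy items the curve peaks to the left of the centre of the ability scale, mirroring Hill et al.'s finding that the test separated weak from average teachers better than good from very good ones.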

When evaluating their survey, Hill et al. (2008) found that teachers relied not only on PCK for mathematics when solving the questions, but also on subject content knowledge and even test-taking skills.  They used cognitive interviews for additional validity analysis, asking teachers to explain why they had chosen a certain answer.  Secondly, their multiple-choice questions suffered from the fact that few teachers selected outright wrong answers; teachers differed instead in the detail of the explanations of students' problems they could give during the interviews.  The researchers found the following kinds of interview items to discriminate quite well:

  • Assessing student productions for the level of student understanding they reflect
  • Use of computational strategies by students
  • Reasons for misconceptions or procedural errors

Baumert et al. (2010) analysed teachers’ regular tasks and tests, coding the type of task, level of argumentation required and alignment with the curriculum as indicators for PCK.  They complemented this with students’ ratings on teachers’ quality of adaptive explanations, responses to questions, pacing and teacher-student interaction.  Data from examinations and PISA numeracy tests were used to assess students’ learning outcomes.

Ball et al. (2001) discuss the concept of place value in multiplying numbers as a typical example of the questions they used in their survey.  They found that teachers could accurately perform the algorithm – as would numerically literate non-teachers – but often failed to provide a conceptual grounding for the rule, and struggled to come up with sensible reactions to frequently occurring student mistakes.  Many teachers used 'pseudo-explanations' focusing on the 'trick' rather than the underlying concept.  Ball et al. (2001) discuss similar examples in teachers' knowledge of division (e.g. division of fractions), rational numbers (e.g. fractions of rational numbers) and geometry (e.g. the relation between perimeter and area for rectangles).
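
As a concrete illustration of what a conceptual grounding might look like here (my own example, not an item from the survey), the written multiplication algorithm can be unpacked into partial products using place value:

```latex
\begin{align*}
46 \times 37 &= (40 + 6)\times(30 + 7) \\
             &= 40\times 30 + 40\times 7 + 6\times 30 + 6\times 7 \\
             &= 1200 + 280 + 180 + 42 \\
             &= 1702
\end{align*}
```

The 'shift one place to the left' in the second row of the written algorithm is not a trick: it reflects that one is multiplying by 30 (three tens), not by 3 – exactly the step that pseudo-explanations tend to gloss over.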

[Figure: example PCK item discussed by Ball et al. (2001)]

Recent studies often start from teaching practice when analysing the role of knowledge.  Even teachers with strong PCK (as measured by surveys) may, for a variety of reasons, not use all this knowledge when teaching (Eisenhart, 1993).  Rowland and colleagues (2005) observed and videotaped 24 lessons given by teacher trainees.  Significant moments in the lessons that seemed to be informed by mathematical content or pedagogical knowledge were coded. The codes were classified and led to the development of the 'knowledge quartet'.  They illustrate the framework using an elementary lesson on subtraction taught by a trainee called Naomi.  The framework looks promising as a guide for discussions after lesson observations.  Its focus on the mathematical aspects of lessons, rather than on general pedagogy, was positively perceived by mentors and students (Rowland et al., 2005).
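
To give a feel for what such coding could look like in practice – a hypothetical sketch, with code names paraphrased from the four published dimensions of the knowledge quartet (foundation, transformation, connection, contingency) and invented lesson moments – observation notes might be organised like this:

```python
# Hypothetical sketch of sorting coded lesson moments into the four
# Knowledge Quartet dimensions. The moments and code lists are illustrative only.
from collections import defaultdict

KNOWLEDGE_QUARTET = {
    "foundation": ["overt subject knowledge", "use of terminology"],
    "transformation": ["choice of representation", "choice of examples"],
    "connection": ["making connections between procedures", "decisions about sequencing"],
    "contingency": ["responding to children's ideas", "deviation from the lesson agenda"],
}

observed_moments = [
    ("uses an empty number line to model subtraction", "choice of representation"),
    ("picks up on a pupil's unexpected counting-on strategy", "responding to children's ideas"),
]

# Group the observed moments by dimension for the post-lesson discussion.
by_dimension = defaultdict(list)
for moment, code in observed_moments:
    for dimension, codes in KNOWLEDGE_QUARTET.items():
        if code in codes:
            by_dimension[dimension].append((code, moment))

for dimension, moments in by_dimension.items():
    print(dimension, "->", moments)
```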

Various interpretations of PCK exist, and it's important to make clear which definition of PCK is used or which components are included.  A more cognitive interpretation as devised by Shulman has the advantage that it can be clearly defined, but in that case it is only one (hardly distinguishable) factor of many that affect instructional quality. A more situative approach tends to imply a wider definition of PCK beyond the scope of content knowledge, including affective and contextual factors. This may widen PCK so much that it comes to mean 'everything that makes a good teacher'.

Few studies on measuring PCK have been done in developing countries. In their systematic review, Depaepe et al. (2013) found only one study of PCK that included an African country (Botswana, in Blömeke et al., 2008).  In Cambodia we used surveys with multiple-choice questions and lesson observations to assess teacher trainers’ PCK.  Some lessons learned are:

  • Language is a major barrier: questions and answers had to be translated between English and Khmer, complicating the assessment of conceptual understanding, further probing during interviews, and coding during lesson observations.
  • Response bias is an issue in surveys and lesson observations.  Teacher trainers tend to respond with what they think the researcher wants to hear or what they think will bring them most benefit in the future. Due to administrative requirements, lesson observations are usually announced beforehand, resulting in teacher trainers applying the techniques you want them to apply for the occasion.  The picture you get is therefore one of optimal rather than average achievement.
  • The initial test we used was based on items from the TIMSS survey. However, most questions were too difficult for the teacher trainers, resulting in a test with low ability to discriminate between teacher trainers' PCK. Recent teacher graduates do have much stronger content and teaching skills, though.  An IRT analysis would have been helpful here to devise a valid and reliable test.
  • The small population of teacher trainers and the crowded donor landscape make it hard to devise an experimental study. A more ethnographic approach that also investigates how PCK learned during teacher training is applied, or fails to be applied, in schools seems more useful to me.  However, care should be taken to include a variety of characters, school settings and ages in this fast-changing society.

Finally, PCK seems most useful to me as a theoretical framework to underpin sensible professional development. To be discussed in the next post.

Selected references

  • Ball, D.L., Lubienski, S.T. and Mewborn, D.S. (2001) ‘Research on teaching mathematics: The unsolved problem of teachers’ mathematical knowledge’, 4th ed. In Richardson, V. (ed.), Handbook of research on teaching, Washington, DC, American Educational Research Association, pp. 433–456, [online] Available from: http://www-personal.umich.edu/~dball/chapters/BallLubienskiMewbornChapter.pdf (Accessed 12 September 2013).
  • Hill, H.C., Ball, D.L. and Schilling, S.G. (2008) ‘Unpacking Pedagogical Content Knowledge: Conceptualizing and Measuring Teachers’ Topic-Specific Knowledge of Students’, Journal for Research in Mathematics Education, (4), p. 372.
  • Rowland, T., Huckstep, P. and Thwaites, A. (2005) ‘Elementary Teachers’ Mathematics Subject Knowledge: The Knowledge Quartet and the Case of Naomi’, Journal of Mathematics Teacher Education, 8(3), pp. 255–281.
  • Depaepe, F., Verschaffel, L. and Kelchtermans, G. (2013) ‘Pedagogical content knowledge: A systematic review of the way in which the concept has pervaded mathematics educational research’, Teaching and Teacher Education, 34, pp. 12–25.

#WorldSTE2013 Conference: Days 3 & 4 (1)

I gave two talks at the WorldSTE2013 Conference.  One discusses some successes and challenges of VVOB's SEAL programme.  It relates the programme to Shulman's Pedagogical Content Knowledge (PCK) and to Mishra and Koehler's extension, Technological Pedagogical Content Knowledge.  By introducing PCK, Shulman aimed to reassert the importance of content knowledge, as opposed to the sole focus on generic pedagogical skills such as class management that was very much in vogue during the 1980s.  The presentation is embedded below.

The second presentation is based on papers I submitted for MAODE course H810.  It discusses accessibility challenges in (science) education for learners with disabilities in Cambodia. It presents these challenges from an institutional perspective, applying the framework of institutional change developed by D.C. North in the 1990s and applied to education by Ozcan Konur.  In particular, it highlights some of the slow-changing informal constraints that prevent changes in formal rules (such as Cambodia's recent ratification of the UN Convention on the Rights of Persons with Disabilities) from taking much effect in practice.  The framework underlines the importance of aligning formal rules with informal constraints and enforcement characteristics.

On a side note, I believe this presentation was about the only one that explicitly discussed inclusive education and how access to science education can be increased for learners with disabilities, despite WHO and UN estimates that around 15% of learners have some kind of disability and that 90% of disabled learners in developing countries do not attend school.

References

North, D.C. (1994) Institutional Change: A Framework Of Analysis, Economic History, EconWPA, [online] Available from: http://ideas.repec.org/p/wpa/wuwpeh/9412001.html (Accessed 23 December 2012).
Konur, O. (2006) ‘Teaching disabled students in higher education’, Teaching in Higher Education, 11(3), pp. 351–363.
Seale, J. (2006) E-Learning and Disability in Higher Education: Accessibility Research and Practice, Abingdon, Routledge.

National Elections Shake Cambodia


picture courtesy Jacek Piwowarczyk

On July 28 parliamentary elections took place in Cambodia with 123 assembly seats up for grabs. Since the first, post-Khmer Rouge, UN-supervised elections in 1993, the Cambodian People's Party (CPP) has steadily increased its grip on the country, securing a sound 90 seats and a 2/3 majority in the previous elections in 2008, allowing it to form a government and make constitutional amendments without interference from the opposition. The CPP's self-proclaimed strongman, Hun Sen, has been prime minister for 28 years.

This time, things did not go entirely as planned.  The opposition Cambodian National Rescue Party (CNRP), galvanized by the return – under international pressure – of leader Sam Rainsy from self-imposed exile in France, obtained 55 seats (+23) according to preliminary results.  Despite the fact that the CPP maintains a handsome majority in the National Assembly (68 of the 123 seats), there is little reason for cheering at CPP headquarters.  First, the CNRP obtained a majority in the capital Phnom Penh and in populous provinces such as Kampong Cham and Kandal.  Due to limited resources, the CNRP didn't campaign in some of the more remote provinces. Significantly, the opposition drew most of its support from the young and urban population. Cambodia has a very young population with a bulge between 20 and 29 years (see graph). Urbanisation has been fast in recent years, driven by a growing export-oriented industry such as garment factories. The rise of social media, notably Facebook, seems to have been another help to the opposition, undermining the CPP's dominance of the traditional media.  Videos of supposedly indelible ink being washed off made the rounds, and stories of people unable to find their names in the voter records quickly surfaced.  The young seem less impressed by the traditional CPP recipe of stability, economic growth and infrastructure.  The CNRP has done well by hammering on widespread land grabs, rising inequality and pervasive corruption.

For now, the CNRP has rejected the result, claiming that widespread fraud has distorted and possibly reversed it. It calls for an independent commission to investigate the results.  Prime Minister Hun Sen has not yet formally commented on the result. The situation on the street is tense. The coming days may see mass demonstrations, with a risk of clashes and violence.  Access to the area around the prime minister's residence is blocked by armed forces. The opposition's vitriolic anti-Vietnamese discourse raises concern and fear among the numerous Vietnamese in Cambodia.  People have started hoarding gasoline and instant noodles. Traffic is unusually calm and many shops are still closed.

The key seems to lie with the CPP's reaction in the coming days. They may remain calm and try to weaken the opposition by approaching individual opposition members.  However, they may also face an internal power struggle. Optimists hope the weakened CPP will feel the urge to reform and make concessions to the opposition. In any case, Cambodia's political landscape seems to have woken up, which is arguably a good thing.  Good additional coverage of the elections' aftermath comes from Sebastian Strangio (Asia Times) and The Economist.

Reflections on Accessibility Challenges for Disabled Learners in Cambodia

source: http://pwds.wordpress.com/

Notwithstanding a remarkable recovery in Cambodian education in the last decade, access for disabled learners has lagged behind.  Accessibility goes beyond the availability of computers and teaching resources.  

Challenges

Barriers to education for disabled students start with barriers in society. Large pupil:teacher ratios and small classrooms reduce accessibility. Instruments for diagnosis are not in place and, as a result, students with learning difficulties or ‘invisible’ disabilities such as dyslexia or mental impairments often end up being labelled as stupid and drop out. The education system is strongly centralized with a rigid curriculum and inflexible learning outcomes that emphasize academic achievement, as opposed to all-round development. As a result, teachers are less flexible and pay less attention to individual learning needs.  

Local NGOs tend to establish special schools, rather than develop integrated programs. Assistive technologies they introduce may not be scalable and render learners dependent on technologies that their families or future employers cannot afford. The pre-existence of a segregated education system makes it more difficult to achieve inclusive education later.  Unfortunately, activities of specialized NGOs give other organisations an excuse not to focus on disabled learners, sustaining an old-fashioned, medical approach to disabilities.

There are socio-economic barriers as well.  Many Cambodian parents decide not to send their disabled children to school. They consider education primarily as a way to acquire wealth and wrongly believe that the first few years of education matter less than the next ones. As a consequence, they tend to invest all their resources in the education of one child, rather than in an equitable education for all their children. Moreover, employment opportunities are scarce as employers are not encouraged to hire disabled people. Until 2008 disabled people in Cambodia were even excluded from teaching.

This is reinforced by social discrimination. Buddhist culture considers a disability as bad karma, a punishment for faults committed in a previous life (Krousar Thmey 2010 Annual Report), which leads to discrimination and instils a sense of complacency in disabled people and their environment.  Online education in general still faces tough cultural challenges.  Online learning is often considered second-rate education in a society where education is traditionally associated with teacher instruction and memorisation.  Positive role models are important in changing attitudes and behaviour, as are systems of quality assurance for online education.

Opportunities 

In 2012 the Cambodian National Assembly decided to ratify the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD). When the Convention enters into force on 19 January 2013, it will legally bind the government to work on inclusive education.  Ratification fuels expectations that the Cambodian government will adopt a social and universal approach to disabilities, in alignment with the WHO's position. Developing courses and materials with accessibility and flexibility in mind benefits all learners, as these cater to a variety of learning styles, learning speeds and impairments.

Disabled learners may benefit from more online learning as they can study at their own pace at home. Digital learning materials are usually more flexible, as font sizes and types, background colours and formats can be changed and assistive technologies used. Online learning also allows more control over communication and disclosure. However, online learning may equally increase barriers, due to badly designed software and learning materials, or due to a lack of personal support.

In Cambodia, 'blended' approaches could be explored, with supporting regional centres – located in schools or centres for teacher education – that complement online activities and serve as places for tutor assistance and peer support. This is a model that has been applied for some time in Brazil, among other countries. Online learning would expand educational opportunities for people outside the capital, deploy scarce human resources more efficiently and allow teachers to follow in-service training without having to leave their schools. The Teacher Education in Sub-Saharan Africa (TESSA) programme is a good example of contextualising online learning for teacher education in developing countries.

Changes in legislation do not automatically lead to improvements in accessibility. The government needs to invest in human capital for specialised support services and make assistive technology more widely available, for example through loan kits.  Principles of universal design and awareness of disabilities should be embedded in general teacher education.  Most importantly, disabled people need to be convinced that they too can have dreams and aspirations, and that they can achieve them, with the right support.

Online learning can contribute to an inclusive learning environment by providing a platform for creating and sharing accessible learning materials, creating opportunities for scaling up pre-service and in-service teacher education, allowing learners to study in a more flexible way and opening up access to international courses.

Too Hard To Measure: On the Value of Experiments and the Difficulty to Measure Lesson Quality

Interesting article in The Guardian (from some time ago, I’m a slow reader) about the overblown importance attributed to doing experiments during science lessons.

The article reminds me of my experience in Cambodia, where experiments are also frequently espoused as proof of a student-centred lesson.  In reality experiments in Cambodian classrooms are often a very teacher-centred activity:

  • the teacher demonstrates and students (at best) try to observe what happens;
  • students do the experiment in large groups, adhering to a strict series of steps outlined in a worksheet;
  • students work in large groups, in which usually only one or two students do the work while the others are merely bystanders;
  • the procedure, observations and interpretation of the experiment are laid down in detail beforehand.

The article touches upon two interesting elements.  First, there is the questionable educational value of many experiments in science classes.  Secondly, there is the challenge of measuring lesson quality beyond 'ticking off' the occurrence of activities such as experiments.

The article refers to 'The Fallacy of Induction' by Rosalind Driver.  Her book 'Making Sense of Secondary Science' is an excellent resource on misconceptions in science education and has been an important inspiration for me.

Driver doesn't dismiss practical work in science, but argues that 'many pupils do not know the purpose of practical activity, thinking that they "do experiments" in school to see if something works, rather than to reflect on how a theory can explain observations' (Driver et al., 1993, p.7).

She raises two main arguments.  First, practical activities are often presented to students as a simulation of 'how science really works': collecting data, making observations, drawing inferences and arriving at a conclusion which is the accepted explanation.  It's simplistic, and pupils happily play along, following the 'recipe' in the 'cookbook', checking whether they have 'the right answer'.  In reality, science rarely works this way:

“For a long time philosophers of science and scientists themselves have recognised the limitations of the inductive position and have acknowledged the important role that imagination plays in the construction of scientific theories.” (Driver, 1994, p.43)

The second argument is that pupils don’t arrive in class with a blank slate, but with a whole range of self-constructed interpretations or ‘theories’ on how natural phenomena work. These ‘preconceptions’ require more than an experiment to change, as children tend to fit observations within their own ‘theoretical framework’.

Observations are no longer seen as objective but as influenced by the theoretical perspective of the observer. 'As Popper said, "we are prisoners caught in the framework of our theories." This too has implications for school science, for children, too, can be imprisoned in this way by their preconceptions, observing the world through their own particular "conceptual spectacles"' (Driver, 1994, p.44).

Misconceptions can be changed if they are made explicit, discussed and challenged with contradicting evidence.  After this 'unlearning' phase, children may adopt a different framework.  Driver concludes: 'Experience by itself is not enough. It is the sense that students make of it that matters' (Driver et al., 1993, p.7).

Discussion activities, in which pupils have the opportunity to make their reasoning explicit and to engage with and try out alternative viewpoints, including the 'scientific' one, need to be central (cognitive conflict). Practical activities can be complementary to these discussions, instead of the other way around, where discussion and conclusion are quickly reeled off at the end of the practicum.

 

Measuring lesson quality

However, the love of experiments, while neglecting the question of whether and what students are actually learning, also touches upon the difficulty of adequately measuring lesson quality.  Limited time and resources result in a focus on outward and visible signs. However, these:

  • deny the complexity of teaching and learning;
  • deny the individuality of students' learning and understanding;
  • steer teachers and programme staff towards focusing on these outward signs, as they know they will be evaluated on these criteria.

Collecting valid and reliable data on lesson quality is hard.  Self-assessment instruments are notoriously prone to confirmation bias. Lesson observations don’t give a reliable everyday picture of lesson practice.  They suffer from the fact that teachers pull out special lessons when visitors appear for announced (or unannounced) visits.   Conversely, as Cuban describes beautifully, other teachers tremble and panic when an evaluator walks into their classroom and the lesson becomes a shambles.

Evidence-based evaluation is often touted as the way forward for development projects.  Randomized trials in health have been useful for building a body of knowledge on what works and what doesn't. In a randomized trial, a group of students whose teachers received pedagogical training is compared with a group of students whose teachers didn't.  Comparisons can be made on test scores, student satisfaction or drop-out rates, as in the sketch below.
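
As a minimal sketch of such a comparison – simulated scores and a plain two-sample t-test with an effect size, not our actual evaluation design – consider:

```python
# Hypothetical sketch: compare test scores of students whose teachers received
# pedagogical training with a control group. Scores are simulated, not real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.normal(loc=62, scale=12, size=200)  # teachers received training
control = rng.normal(loc=58, scale=12, size=200)    # teachers did not

t_stat, p_value = stats.ttest_ind(treatment, control)

# Cohen's d as a rough effect size, using the pooled standard deviation.
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")
```

Of course, the statistics are the easy part; the objections below are about whether the test scores themselves mean anything.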


However, test scores are unsuitable, as exams are notoriously prone to cheating and questions focus on recalling factual knowledge, the opposite of what we want to achieve.  A self-designed test could be a solution, but there's the risk that programme activities will focus more on the test than on improving teaching skills.  Student satisfaction scores are prone to the aforementioned confirmation bias.  Drop-out rates are hard to use as they are influenced by many interrelated factors such as geography, economic growth and government policy.


Ownership of the evaluation by the direct target group is part of the solution in my opinion, as is using a variety of data sources.  In future blog posts I plan to write more on how we try to measure lesson quality.


————————

For more detail see this available study from Prof. James Dillon (pdf) on the value of practical work in science education.
Driver, R. (1994) 'The fallacy of induction in science teaching', in Levinson, R. (ed.), Teaching Science, London, Routledge, pp. 41–48.

Driver, R., Squires, A., Rushworth, P. and Wood-Robinson, V. (1993) Making Sense of Secondary Science, Routledge.

Changing Physics Education in Cambodia: Beyond the Workshop

Last week saw the organisation of a workshop on physics education for teacher trainers in Cambodia at the regional teacher training centre in Kandal province.  All Cambodian physics teacher trainers were present, which amounts to around 20 people.  The workshop lasted 5 days, and each day we discussed a different part of the curriculum: sound, mechanics, pressure, optics, and electricity and magnetism.  On the last day participants collaboratively made a lesson plan using the materials they had worked with.  There was a strong emphasis on low-cost experiments, but also attention to simulations and animations and student-centred approaches.

The concept underlying the workshop – and actually the whole programme – is the TPACK framework (Mishra and Koehler, 2006; Koehler and Mishra, 2007; Abbitt, 2011), an extension of Shulman's idea of pedagogical content knowledge: knowledge of pedagogy that is applicable to the teaching of specific content.  TPACK extends this idea with technologies.  The core idea of TPACK is that the use of technologies in education – and in Cambodia analogue technologies such as experiments, posters or cards play a much larger role than digital technologies – should be considered in relation to content and pedagogy.  Using an experiment or an animation just for the sake of it, without thinking about how it will make your lesson better, is not useful. This may seem obvious, but many interventions seem to do just this, introducing certain technologies (blogging, wikis…) or pedagogies (concept mapping, learner-centred methodologies…) without detailed consideration of the curriculum content teachers actually have to cover.

 

The workshop is the result of three years of preparatory work with a wonderful team of teachers and teacher trainers from the college in Kandal.  Since 2008 we’ve worked with them to select materials and activities for those curriculum topics they found most challenging, try them out in their lessons, develop accessible manuals and short experiment videos (See for example this experiment video on toilet rolls and pressure) and learn to facilitate the activities themselves. 

The manuals have been officially approved by the Cambodian Ministry of Education, an important milestone in Cambodia, as it means they can be distributed and endorsed nation-wide.  Although we do hope that the manuals are inviting by themselves, an official stamp of approval is likely to act as an extra stimulus.  It's great to see teacher trainers facilitate the workshop without much involvement from us.  Above all, they clearly enjoy explaining all these experiments and activities to their colleagues.

The downside of involving all stakeholders is a very long development cycle.  Getting from a first selection of content to the final, approved product has taken us several years.  Having a first edition published sooner would have enabled us to envisage a second edition within the programme lifetime.

However, our objective is not to organize great workshops, but to improve science teaching.  Whether our workshops will have a strong effect on the ground remains to be seen.  There are quite a few hurdles between a good workshop and improved learning by grade 7-9 pupils.  Teacher trainers may feel insufficiently comfortable with the materials to use them, support from college management may be lacking, and an overloaded curriculum and recall-based assessment may favour rote learning.  Student teachers may misunderstand techniques, fail to see any benefits or be discouraged by their school environment.

Targeting teacher trainers has been a deliberate decision.  As they teach future teachers, the potential impact is very high.  However, the adopted cascading strategy bears the risk of watering down the content.  Measuring impact is notoriously difficult, perhaps even more so in Asia, where stated preference methods are prone to response and cultural bias.

Despite continuous M&E efforts, we don't yet have a clear insight into the impact of our activities at teacher training level on pupils.  The main reasons are that measuring impact is time-intensive, that an observable impact may take time to manifest, and that a clear impact of the programme is hard to distinguish within the messy complexity of teaching and learning in a crowded donor landscape.

References
Abbitt, J.T. (2011) ‘Measuring Technological Pedagogical Content Knowledge in Preservice Teacher Education: A Review of Current Methods and Instruments’, Journal of Research on Technology in Education, 43(4).
Koehler, M.J. and Mishra, P. (2005) 'Teachers learning technology by design', Journal of Computing in Teacher Education, 21(3), pp. 94–102.
Mishra, Punya and Koehler, Matthew J. (2006) ‘Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge’, Teachers College Record, 108(6), pp. 1017–1054.