Special issue of SA Journal of Childhood Education: Call for abstracts


The relatively new open-access SA Journal of Childhood Education has recently put out a call for abstracts (see below) for its special issue on “Priorities and policy-making in South African Education” (Guest editors: Nick Taylor and Thabo Mabogoane). Given the policy relevance of this special issue, researchers at ReSEP (including myself) will be submitting a number of abstracts for work we are currently doing and intend to do. If you’re doing work in this field I’d encourage you to do the same; it’s likely to be a great issue!


Special issue: Call for Papers “Priorities and Policy-making in South African Education”

Guest editors: Nick Taylor and Thabo Mabogoane

Despite considerable expenditures and efforts to improve performance and reduce inequality, there is limited evidence of substantial improvements in educational outcomes, or the equalisation thereof. Periodic reviews of the evidence have shown a number of recurring themes that are especially characteristic of schooling in South Africa. These include unequal access to socially, emotionally and cognitively stimulating environments (both in the home and at school), insufficient resources, low levels of curriculum coverage, low levels of teacher content knowledge, inadequate support and training opportunities for in-service teachers, challenges associated with learning and teaching in a second language, low levels of accountability and high dropout in upper secondary school (among many others).

While education officials are often aware of these challenges, most policy-makers find it difficult to synthesise this evidence, which is necessary for prioritisation and resource allocation. Making sense of research is particularly challenging when it is presented in isolation from other problem areas and only speaks to other research within its ‘silo’. It is now widely acknowledged that if government policies are to have the largest possible impact, they need to be based on rigorous evidence and peer-reviewed research. Furthermore, the National Development Plan (the government’s guiding framework) has emphasised the need for a “process of prioritisation and sequencing” if the plan is to be implemented. Such a process of prioritisation and sequencing requires rich, inter-connected evidence on education in South Africa.

Consequently, this call for papers focuses on education research in South Africa that speaks directly to policy-making and prioritisation. Papers that synthesise existing evidence across research areas in education are especially welcome.

Instructions for authors: www.sajce.co.za

Journal administrator: childhooded@uj.ac.za

Online submissions and author registration: www.sajce.co.za

Deadline for abstract submission: 31 March 2015
Deadline for full papers (of accepted abstracts): 30 June 2015
Intended publication date: November 2015

The SAJCE is accredited by the Department of Higher Education and Training and has applied, through the Academy of Science of South Africa (ASSAf), for listing on the open journals platform, SciELO.

“Assessment results don’t add up” (my M&G article on the ANAs)


(The article below first appeared in the Mail & Guardian on the 12th of December 2014. The original link can be found here.)

Last week, the minister of basic education announced the results of the Annual National Assessments (ANAs) for 2014. The ANAs test all children in grades one to six and nine, using standardised tests in mathematics and languages.

The problem is that these tests are being used as evidence of “improvements” in education when the ANAs cannot show changes over time. There is absolutely no statistical or methodological foundation for any comparison of ANA results over time or across grades. Any such comparison is inaccurate, misleading and irresponsible. The difficulty levels of these tests differ between years and across grades, so differences in scores may reflect nothing more than differences in test difficulty and content coverage, rather than any genuine improvement or deterioration.

Although the department of basic education tries to make the tests comparable across years, the way it goes about doing this (with teachers and experts setting the tests) means that in reality they are not at all comparable.

And the department knows this. On page 36 of its 2014 report, it states: “Even though care is taken to develop appropriate ANA tests each year, the results may not be perfectly comparable across years as the difficulty level and composition of the tests may not be identical from year to year.” Yet it then goes on to make explicit comparisons.

You can’t have it both ways. I can say categorically that the ANA tests are not at all comparable across years or grades. Despite this cloaked admission of incomparability, the report is full of the rhetoric of comparison, with scores reported side by side for 2012, 2013 and 2014, and 24 references to “increases” or “decreases” relative to last year’s ANA. Similarly, in her speech last Thursday, the minister spoke about “consistent improvement in home language” as well as “an upward trend in performance”.

All of these statements are extremely misleading and factually incorrect. The ANAs cannot be compared across grades or years, at least not as they currently stand.

Those of us in the field of educational assessment have been saying this repeatedly for two years. Yet journalists continue to regurgitate these “increases” and “decreases” without any critical analysis, as if they must be true – but they are not. There are different ways of determining whether the quality of education is improving (primarily by using reliable international assessments over time) but the ANAs, in their current form, are not among them.

For tests to be comparable over time, one has to employ advanced statistical methods – for instance, item response theory, which essentially involves using some common questions across tests, allowing us to compare performance on the common questions with performance on the non-common questions within and between tests. This makes it possible to equate the difficulty of the tests (and adjust results) after they have been written. The common questions must also be used across grades and across ANA cycles.
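For readers unfamiliar with how common-item equating works, here is a deliberately simplified sketch. It uses mean linking on invented anchor-item data; real assessment programmes use full IRT models, and none of the numbers below come from the ANAs.

```python
# Toy illustration of common-item (anchor) linking between two test forms.
# All figures below are invented purely for the example.

def mean_linking_shift(anchor_pvals_year1, anchor_pvals_year2):
    """Estimate how much better or worse the second cohort did,
    using only the questions that appear on BOTH forms (the anchors)."""
    m1 = sum(anchor_pvals_year1) / len(anchor_pvals_year1)
    m2 = sum(anchor_pvals_year2) / len(anchor_pvals_year2)
    return m2 - m1  # positive => year-2 cohort did better on the same items

# Proportion correct on the five shared anchor items in each year:
year1_anchors = [0.62, 0.55, 0.48, 0.71, 0.59]
year2_anchors = [0.60, 0.54, 0.47, 0.70, 0.58]

shift = mean_linking_shift(year1_anchors, year2_anchors)
# A raw average on the year-2 form can now be adjusted by this
# anchor-based shift before being compared with year 1. Without
# anchors (as with the ANAs), no such adjustment is possible.
```

The point of the sketch is simply that performance on shared questions is what anchors the comparison; with zero shared questions there is nothing to link the two tests.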

This is standard practice around the world, and yet it is not employed with the ANAs. Every reliable international and national assessment uses these methods when it intends to compare results over time or across grades. There are no common questions used across any of the ANAs, either from grade to grade within one year, or between ANA cycles. Using the ANA results to talk about “improvements” or “deteriorations” therefore has no methodological or statistical justification whatsoever.

There is not a single educational statistician in the country or internationally who would go on record and say that the ANA results can be used to identify “improvements” or “deteriorations” over time or across grades.

Although the ANA report speaks about an “advisory committee” of “local and international experts”, it does not name them. These experts need to come forward and explain why they believe these tests are comparable over time, and if they do not believe they are comparable over time then the report should not refer to them.

On this matter, no one needs to take my word for it: the changes in results are so implausible that they speak for themselves. Take grade one mathematics, for example, where the average score was 68% in 2012, plummeted to 59% in 2013 and then soared to 68% in 2014. Very strange. Or, if we look at the proportion of grade three students with “acceptable achievement” (50% or higher) in mathematics, we have the fastest improving education system in recorded human history. The results went from 36% in 2012 to 65% in 2014. These changes are, educationally speaking, impossible.

Some of the provincial results are equally ridiculous. The average score for grade four home language in Limpopo doubled in two years, from 24% in 2012 to 51% in 2014. Given that the standard deviation for grade four home language in ANA 2012 was 26.5%, this amounts to a one-standard deviation increase in two years. For those who don’t know how large this is, it’s the same as the difference between township schools and suburban schools (mainly former Model C schools) in the 2011 study best known as “prePirls” (pre-Progress in International Reading Literacy Study), which recorded 0.9 standard deviations. There are clearly miracles happening in Limpopo.
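The Limpopo arithmetic above can be checked in two lines; the standard deviation is the ANA 2012 Grade 4 home-language figure quoted in the text.

```python
# Express the reported Limpopo Grade 4 home-language gain (24% in ANA 2012
# to 51% in ANA 2014) in 2012 standard-deviation units.
gain = 51 - 24        # percentage points
sd_2012 = 26.5        # ANA 2012 Grade 4 home-language SD (from the text)
gain_in_sds = gain / sd_2012
print(round(gain_in_sds, 2))  # 1.02
```

That is, the claimed gain is roughly one full standard deviation in two years, the comparison made with the prePIRLS township-versus-suburban gap of 0.9 standard deviations.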

I could go on and on and talk about other ridiculous changes, such as the national grade six mathematics average (from 27% in 2012 to 43% in 2014), grade five home-language increases in the North West (from 26% to 58%) or grade three mathematics increases in Mpumalanga (from 36% to 50%) or grade four home-language increases in KwaZulu-Natal (from 38% to 58%), and so on. These are all absolutely and unequivocally impossible, and have never been seen on a large scale anywhere in the world before. Ever.

Testing can be an extremely useful way to monitor progress and influence pedagogy and curriculum coverage, but only if it is done properly. Testing regimes usually take between five and 10 years to develop before they can offer the kinds of reliability needed to make claims about “improvement” or “deterioration”.

Test results send strong signals to students and teachers about what constitutes acceptable performance and whether things are improving or not. For example, the department assumes that 50% on the ANAs represents competent performance, but there is no rational basis for treating this threshold as conceptually equivalent to “acceptable achievement”.

The overall decline in ANA achievement between grade one and grade nine is also extremely misleading, because it suggests that the problem lies higher up in the system. But all research shows that children are not acquiring foundational skills in grades one to three and that this is the root cause of underperformance in higher grades.

Testing children is a serious business that requires large teams of highly skilled professionals whose sole responsibility is to ensure the reliability and validity of the ANA results and process. This includes building a large bank of questions across grades, learning outcomes and subjects. It involves setting and moderating tests; linking and analysing test questions using item response theory; as well as reporting and disseminating results in ways that principals, teachers and parents understand. It needs intense collaboration across the curriculum and assessment branches of government and with those who develop the department’s workbooks. It requires a much longer planning, piloting and reporting cycle than the impossible time frames to which departmental officials are subject.

Let me be clear: the ANAs should not be scrapped – they are one of the most important policy interventions in the past 10 years. However, the first rule in educational assessment, as in medicine, is: “Do no harm.” Sending erroneous signals to teachers and students about “improvements” is extremely unhelpful. This makes it so much more difficult to really induce the improvement in behaviour at the classroom level that is central to real advances in learning outcomes.

In essence, the department needs to answer this: Are the ANA results comparable over time and across grades? If not, why are they being used as evidence for claims about “improvements” or “deteriorations” across grades or over time?


Previous posts on the ANAs:

Starting Behind and Staying Behind: Insurmountable learning deficits in mathematics (new Working Paper)


A Working Paper that I co-wrote with Janeli Kotze was released today on the Stellenbosch Economic Working Paper site (available here). The paper has also been accepted for publication in the International Journal of Educational Development and should be out next year. I will include some excerpts from the paper for those who’d prefer the short version…


This study quantifies a year’s worth of mathematics learning in South Africa (0.3 standard deviations) and uses this measure to develop empirically-calibrated learning trajectories. Two main findings are: (1) only the top 16% of South African Grade 3 children are performing at an appropriate Grade 3 level; (2) the learning gap between the poorest 60% of students and the wealthiest 20% of students is approximately three Grade-levels in Grade 3, growing to four Grade-levels by Grade 9. The paper concludes by arguing that the later in life we attempt to repair early learning deficits in mathematics, the costlier the remediation becomes.


Few would argue that the state of mathematics education in South Africa is something other than dire. This belief is widespread among academic researchers and those in civil society, and is also strongly supported by a host of local and international assessments of mathematical achievement extending back to at least 1995 (Howie & Hughes, 1998; Reddy, 2006; Fleisch, 2008; Spaull, 2013; Taylor et al., 2013). Many of these studies, and particularly those that focus on mathematics, have identified that students acquire learning deficits early on in their schooling careers and that these backlogs are the root cause of underperformance in later years. They argue that any attempts to raise students’ mathematical proficiency must first address these deficits if they are to be successful (Taylor et al., 2003). The present study adds further evidence to this body of work by using nationally representative data to provide some indication of the true size and scope of these learning deficits.

In South Africa, research in this area has generally focussed on in-depth localised studies of student workbooks and classroom observation (Ensor et al., 2009). For example, Carnoy et al. (2012) observe mathematics learning in Grade 6 classrooms from 60 schools in one South African province (North West) and compare these classrooms to 60 schools in neighbouring Botswana. On a smaller scale, Venkat & Naidoo (2012) focus on 10 primary schools in Gauteng and analyse coherence for conceptual learning in a Grade 2 numeracy lesson. Similarly, Schollar (2008) conducted interviews and classroom observations, and analysed a large sample of learner scripts to determine the development (or lack thereof) of mathematical concepts through the Grades.

Where the present research differs from these earlier studies is that it focuses on quantifying national learning deficits in general, rather than in specific learning areas. While the latter are essential for understanding what the problems are and how to fix them, analyses at the national level are also needed if we are to understand the extent and distribution of the problem, both of which are imperative for policy-making purposes. This is only possible by analysing multiple nationally-representative surveys of student achievement, which is the focus of the present study. The two core research questions that animate this study are as follows:

  • How large are learning deficits in South Africa and how are they distributed in the student population?
  • Do learning deficits grow, shrink or remain unchanged as students progress to higher Grades?

To answer these questions we analyse four nationally representative datasets of mathematics achievement, namely: (1) the Systemic Evaluation 2007 (Grade 3), (2) the National School Effectiveness Study 2007/8/9 (Grades 3, 4 and 5), (3) the Southern and Eastern African Consortium for Monitoring Educational Quality (SACMEQ) 2007 (Grade 6), and (4) the Trends in International Mathematics and Science Study (TIMSS) 2011 (Grade 9).

The extant research on mathematics learning in South Africa strongly supports this conclusion with numerous researchers highlighting the inadequate acquisition of basic skills and the consequent negative effects on further learning. Taylor & Vinjevold (1999) summarise the findings from 54 studies[1] commissioned by the President’s Education Initiative and conclude that:

“At all levels investigated by [The President’s Education Initiative], the conceptual knowledge of students is well below that expected at the respective Grades. Furthermore, because students are infrequently required to engage with tasks at any but the most elementary cognitive level, the development of higher order skills is stunted” (Taylor & Vinjevold, 1999, p. 231).

This lack of engagement with higher order content is the prime focus of Reeves and Muller’s (2005) analysis of Opportunity-to-Learn (OTL) and mathematics achievement in South Africa, where OTL is the curriculum actually made available to learners in the classroom. Taylor et al. (2003, p. 129) in their book Getting Schools Working summarise succinctly the debilitating effects of cumulative learning deficits:

“At the end of the Foundation Phase [Grades 1-3], learners have only a rudimentary grasp of the principles of reading and writing … it is very hard for learners to make up this cumulative deficit in later years … particularly in those subjects that … [have] vertical demarcation requirements (especially mathematics and science), the sequence, pacing, progression and coverage requirements of the high school curriculum make it virtually impossible for learners who have been disadvantaged by their early schooling to ‘catch-up’ later sufficiently to do themselves justice at the high school exit level.”

And lastly, Schollar (2008) summarises the findings of the Primary Mathematics Research Project which looked at over 7000 learners from 154 schools in South Africa and concludes as follows:

“Phase I concluded that the fundamental cause of poor learner performance across our education system was a failure to extend the ability of learners from counting to true calculating in their primary schooling. All more complex mathematics depends, in the first instance, on an instinctive understanding of place value within the base-10 number system, combined with an ability to readily perform basic calculations and see numeric relationships … Learners are routinely promoted from one Grade to the next without having mastered the content and foundational competences of preceding Grades, resulting in a large cognitive backlog that progressively inhibits the acquisition of more complex competencies. The consequence is that every class has become, in effect, a ‘multi-Grade’ class in which there is a very large range of learner abilities and this makes it very difficult, or even impossible, to consistently teach to the required assessment standards for any particular Grade. Mathematics, however, is an hierarchical subject in which the development of increasingly complex cognitive abilities at each succeeding level is dependent on the progressive and cumulative mastery of its conceptual frameworks, starting with the absolutely fundamental basics of place value (the base-10 number system) and the four operations (calculation)” (Schollar, 2008, p. 1).

To provide an alternative measure of performance, we provide two examples of non-language items in NSES and show when students answer the question correctly – i.e. in Grade 3, Grade 4, Grade 5 or not by the end of Grade 5. Given that one needs to follow the same students from Grade 3 to 5, we limit the sample here to the panel sample of NSES students (8383 students). Figure 3 below shows a simple question testing two- and three-digit addition with no carrying. This is within the Grade 3 curriculum, which states that students should be able to “perform calculations using the appropriate symbols to solve problems involving addition of whole numbers with at least three digits.” Although this is a Grade 3 level item and contains no language content, only 20% of Quintile 1-4 students could answer this correctly in Grade 3, with the proportion in Quintile 5 being twice as high (42%) but still low. While there is evidently some learning taking place in Grades 4 and 5, more than 40% of Quintile 1-4 children still could not answer this Grade 3 level problem at the end of Grade 5. In Quintile 5 this figure was only 22%.

Figure 3


Figure 4 below shows a similar situation where the vast majority of Grade 3 children cannot answer this Grade 3 level problem. While some children learn the skill in Grade 4 or 5, the majority of children still cannot answer this problem at the end of Grade 5, despite it being set at the Grade 3 level.

Figure 4


 Moving from learning deficits to learning trajectories

While the previous sections have identified the proportion of students that are not operating at a Grade 3 level, they do not provide much guidance in terms of learning trajectories into later Grades. The figures above show that some students are only learning part of the Grade 3 curriculum in either Grade 4 or Grade 5 and that many never seem to acquire these skills. However, one cannot say to what extent they are also acquiring Grade 4 level skills in Grade 4 and Grade 5 level skills in Grade 5, although this is unlikely. This is because the NSES test was set at a Grade 3 level with only a small number of questions set at the Grade 4 level. One could use SACMEQ (Grade 6) and TIMSS (Grade 9) as measures of mathematical proficiency at higher levels, but these tests are not calibrated to be comparable to each other, or to earlier tests like the NSES. This is problematic since learning trajectories require data points distributed across the full range of educational phases which are comparable to each other both in terms of the content tested and the difficulty level of the tests. One alternative method to partially overcome the lack of inter-survey comparability is to measure the size of learning deficits in each data set using intra-survey benchmarks.

Applying the above method, we calculate the difference in average achievement between Quintile 1 (the poorest 20% of students) and Quintile 5 (the wealthiest 20% of students) for the different surveys and then convert these into a common standard-deviation metric. The difference between Quintiles 1 and 5 is 28 percentage points in NSES Grade 3, 130 SACMEQ points in Grade 6, and 122 TIMSS points in Grade 9. These different metrics are not directly comparable and there is no simple way of equating the scores. Consequently we convert the differences into within-survey standard deviations and then, using the 0.3 standard-deviation benchmark as one year of learning, one can say that this difference was equal to 4 Grade-levels in Grade 3[1] (NSES), 4.4 Grade-levels in Grade 6 (SACMEQ) and 4.7 Grade-levels in Grade 9 (TIMSS).
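The conversion described above can be sketched in a few lines. The survey standard deviation used below is a placeholder for illustration, not the actual within-survey value from the paper.

```python
# Sketch of converting a raw quintile score gap into Grade-level
# equivalents, using 0.3 within-survey standard deviations as one
# year of learning (the paper's benchmark).

YEAR_OF_LEARNING_SD = 0.3

def gap_in_grade_levels(score_gap, survey_sd):
    """Express a Quintile-5-minus-Quintile-1 score gap in Grade-levels."""
    return (score_gap / survey_sd) / YEAR_OF_LEARNING_SD

# e.g. a 130-point gap on a survey with a (hypothetical) SD of 100:
print(round(gap_in_grade_levels(130, 100), 1))  # 4.3
```

The same function applies to any of the three surveys once the gap and the within-survey SD are expressed in the same units.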

Lewin (2007) provides a useful conceptual model for the trajectory needed to reach a particular goal – in this case matric (Grade 12). He refers to an ‘on-track-line’ and an ‘off-track-line’ where the off-track-line is any line below the on-track-line. In the present example, the on-track-line is calibrated to be equal to the average performance of Quintile 5 students.

To illustrate the above in a graph, we set the average Quintile 5 achievement to be equal to the Grade-appropriate benchmark, such that the learning trajectory of these students is the “on-track” trajectory and will reach matric (Grade 12) performing at roughly a Grade 12 level. We then calculate the difference between this ‘benchmark performance’ and the average performance of Quintiles 1, 2, 3 and 4, and then convert this difference into Grade-level equivalents using 0.3 standard deviations as equal to one Grade-level of learning. In doing so, we essentially create a learning trajectory spanning from Grade 3 (NSES) to Grade 9 (TIMSS) with linear projections for those Grades where we do not have data (Grades 7, 8, 10, 11 and 12). The exact figures for all calculations are provided in the online appendix. Figure 6 below shows the likely learning trajectories of the average student in each quintile of student socioeconomic status.

Figure 6 shows that the average student in Quintile 1, 2 and 3 is functioning at approximately three Grade-levels lower than the Quintile 5 benchmark in Grades 3, 4, 5 and 6. Observing average performance by quintile in Grade 9 shows that the difference between Quintile 1, 2 and 3 students and Quintile 5 students (the benchmark) has now grown to more than four Grade-levels. If it is assumed that Quintile 5 students in Grade 9 are functioning at roughly a Grade 9 level, then Quintile 1 and 2 students are functioning at roughly a Grade 4.5 level in Grade 9. The trajectory lines, one for Quintile 5 and one for the average of Quintiles 1-4, show that in Grade 3 there already exist large differences in performance (approximately three Grade-levels) and that by the time children enter Grade 9 this gap in performance has grown to about four Grade-levels. The linear trend in performance between these two groups suggests that if the same number of students in Quintiles 1-4 in Grade 9 continued in schooling until Grade 12 (i.e. no drop out between these two periods) they would be functioning at approximately 4.9 Grade levels lower than their Quintile 5 counterparts in Grade 12 (1.5 standard deviations lower).
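The linear projection used for the Grades without data can be sketched as follows. The two anchor points are rough values read off Figure 6, so the Grade 12 figure differs slightly from the 4.9 Grade-levels reported in the paper.

```python
# Sketch of linearly projecting the observed gap (in Grade-levels)
# at two grades out to a grade where no survey data exist.

def project_gap(grade, grade_a, gap_a, grade_b, gap_b):
    """Linearly extend the gap observed at grades a and b to another grade."""
    slope = (gap_b - gap_a) / (grade_b - grade_a)
    return gap_a + slope * (grade - grade_a)

# Approximate gap of 3 Grade-levels at Grade 3 and 4 at Grade 9,
# projected forward to Grade 12 (assuming no dropout):
gap_grade12 = project_gap(12, 3, 3.0, 9, 4.0)
print(round(gap_grade12, 1))  # 4.5
```

With the paper's exact quintile figures rather than these rounded inputs, the same straight-line logic yields the reported gap of roughly 4.9 Grade-levels (1.5 standard deviations) at Grade 12.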

Figure 6


Returning to Lewin’s (2007) notion of an “on-track” progress line, perhaps the most important conclusion arising from this conceptual framework is that any performance below the “on-track” line creates an increasing gradient of expectation as the pupil moves into higher grades. This expectation is what is required by the curriculum to reach the goal (passing the Grade 12 exam, for example) relative to where the student is at present. As students’ learning deficits grow, the gradient of what needs to be achieved to reach the goal progressively steepens to the point where it enters what Lewin (2007, p. 7) refers to as a ‘Zone of Improbable Progress.’ For example, the improvement that is required to bring the average Grade 9 Quintile 1 student in South Africa up to the required benchmark by Grade 12 is unrealistic given that they are performing at roughly a Grade 5 level in Grade 9. By contrast, the gradient of achievement required to bring the average Quintile 1 Grade 3 pupil up to the required benchmark by matric is slightly more manageable. The clear conclusion arising from this analysis is that intervening early to correct and prevent learning deficits is the only sustainable approach to raising average achievement in under-performing schools.

What we would add to this conclusion is that the root cause of these weak educational outcomes is that children are acquiring debilitating learning deficits early on in their schooling careers and that these remain with them as they progress through school. Because they do not master elementary numeracy and literacy skills in the foundation and intermediate phases, they are precluded from further learning and engaging fully with the Grade-appropriate curriculum, in spite of being enrolled in school. Lewin (2007, p. 10) refers to these children as ‘silently excluded’ since they are enrolled and attending school but learning little. Importantly, these children are precluded from further learning, not because of any inherent deficiency in their abilities or aptitudes, but rather because of the systematic and widespread failure of the South African education system to offer these students sustained and meaningful learning opportunities. Indeed, many children from poorer backgrounds have both the ability and the desire to succeed, and when provided with meaningful learning and remediation opportunities, do in fact succeed (see Spaull et al, 2012 for an example).

The clear policy recommendation that follows from these findings confirms what is becoming increasingly accepted: any intervention to improve learning in South Africa needs to come as early as possible. Given South Africa’s egregiously high levels of inequality, it should come as no surprise that poor children in South Africa find themselves at a nexus of disadvantage, experiencing a lack of social, emotional and cognitive stimulation in early childhood. These children then enter a primary school system that is unable to equip them with the skills needed to succeed in life, let alone to remediate the large learning deficits they have already accumulated.

When faced with limited resources and a choice of where to intervene in the schooling system, the counsel from both the local and international literatures is unequivocal: the earlier the better. The need to focus on the primary Grades, and especially the pre-primary years, is not only driven by the fact that underperformance is so widespread in these phases, but also because remediation is most possible and most cost-effective when children are still young (Heckman, 2000). Due to the cumulative negative effects of learning deficits – particularly for vertically-integrated subjects like mathematics – it is not usually possible to fully remediate pupils if the intervention comes too late (i.e. in high school), as too many South African interventions do. Nobel Laureate Professor James Heckman summarises the above succinctly when he explains that:

“Policies that seek to remedy deficits incurred in early years are much more costly than early investments wisely made, and do not restore lost capacities even when large costs are incurred. The later in life we attempt to repair early deficits, the costlier the remediation becomes” (Heckman, 2000, p. 5).

Full paper available here.

Tonight I am angry


It’s currently 9:28pm and I’m sitting on my hotel bed in Pretoria oscillating between anger and tears. Today was the first day of a UNICEF conference on Early Childhood Development and I was presenting on insurmountable learning deficits (PPT here). It’s strange that I feel so emotional because it was an extremely successful day of presentations from people like Marie-Louise Samuels, Linda Biersteker, Linda Richter and Mark Tomlinson – all giants in the field of ECD. And yet, here I am clenching my jaw, crying, listening to the aircon and wondering how we got here. Fuck. I keep thinking of one line from Neruda’s poem:

“Sometimes I wake up early and even my soul is wet”

I find it really difficult to hear about the lived reality of people with disabilities. After the conference closed today I had a long conversation with Jean Elphick, an inspiring woman who works with disabled children in a township in Gauteng. She told me that most disabled children (51%) on their program stay at home each day and only a small proportion go to an appropriate school (watch this 4-minute video from Afrika Tikkun). They have been trying to get these kids into schools for three years, but with very limited success. The part that really hit me was when she told me that 15 of the disabled children she works with had been abused and some had been raped. Fifteen. Boys and girls. The children who stay at home unsupervised are especially vulnerable since they have no sexuality education and few people outside the home to disclose to or confide in. Hardly any of the sexual abuse/rape cases involving these children are ever heard in court, and when they are, they rarely result in convictions. She told me about rape cases where there was even DNA evidence and yet the case had been shelved.

She told me about one disabled girl who went to the police station to open a case of rape against her assailant. The policeman told her that the bleeding from the rape was just her period, and the doctor doing the J88 exam refused to record the anal rape. She had been vaginally and anally raped by this man. This child’s assailant was never convicted of rape. And this is not an isolated incident. Last year the M&G covered the issue and reported on one home for disabled children, “Ikhaya Loxolo”; I include an excerpt from the article:

“In 2011, Ikhaya Loxolo had 10 residents, nine of them female. They were between 10 and 22 years old. “All nine had been raped previously – some repeatedly,” says Gunther. “The worst thing is that it happens so often that it’s normal to the community. It’s what happens to a mentally disabled girl.” One of the girls was raped when she fetched water from a spring near her home. She was 10 at the time. Another, a 14-year-old, was raped when she went home for the Christmas holidays. “There was no one to look after her and a drunk guy came into the house,” says Gunther. “He locked the door and raped her.”…”Rape is like a plague here. A lot of women and girls get raped but it’s especially the mentally disabled girls. These men know very well that these girls are mentally handicapped. That is why they target them, because they’re easy prey: they can’t fight back and mostly they can’t identify their attackers.” When a mentally disabled girl is raped in the district, says Gunther, her parents are “sad” about it, “but not shocked, because it happens all the time.”

[Screenshots: excerpts from Jean Elphick’s keynote address at “The South African Professional Society on the Abuse of Children” – PDF available here]

Last night I heard another tragic story about a disabled, illiterate person. My good friend (and hero) Veronica McKay was telling me about some of the people who take adult basic education and training (ABET) classes through a program called Kha Ri Gude. The program aims to provide basic numeracy and literacy skills to illiterate and innumerate adults and is available across the country. As part of the drive to recruit participants, they go door to door in the community and ask if anyone wants to join the classes. She told me that the disabled men and women they found often said that they were locked away in a room and only brought out and dressed up when the disability grant was collected each month. This is one of the reasons why some said they enjoyed the adult literacy classes – for some it was their only social interaction during the week.

At the beginning of the KRG program they ask participants to list what they want to gain, or be able to do, by the end of the program. This helps them tailor the classes to the needs of those participants. Some people say they want to be able to read the Bible, others that they want to open a bank account, and others that they want to use an ATM. One of the illiterate participants (who was also disabled) reported that after the program she went to the ATM herself and withdrew the money from her disability grant, where previously her grandchildren had done this for her. She reported that she was now receiving much more money. She had realised that her grandchildren were keeping some of the money for themselves, and she hadn’t known it because she was innumerate and illiterate. Veronica told me that after the course many of the participants said they felt much more empowered and confident to go outside. One of the blind Kha Ri Gude trainers, who now trains others who are blind and illiterate (using Braille), reflected on the KRG program and said: “Now people take me as a role model and they believe that if you give someone a chance then they can do what they want to do. Many people in Kha Ri Gude say to me, ‘Thank you, you revived my life.’”

On a different occasion Veronica told me about one previously illiterate woman who, when asked why she did the course said, “Because now I know when I have enough money to go to the shop to buy things. Also if the shopkeeper gives me the correct change. Before I just had to hold out my hand with the money and the shopkeeper would take the money and give me back the change. But I couldn’t tell if it was the right change. I think he wasn’t always giving me the right change. Now I can tell.”

In keeping with the national tone of reflecting on the one-year anniversary of Madiba’s death, I include a quote from the Unifier:

“We have tried to give special emphasis to the rights of people living with disability. It is so easy to think of equality demands with reference primarily to race, colour, religion and gender, and to forget, or to relegate to secondary importance, the vast discrimination against disabled persons” – Nelson Mandela (Message to the Conference for the Disabled, 4 April 2004).


NEEDU Grade 5 Reading Report 2013 (excellent!!)


Today the Minister of Basic Education, Mrs Angie Motshekga, released the results of the Annual National Assessments for 2014 (report here, speech here). This is not my post on the ANA results – I will write an article for next week’s M&G for that. (However, if it were my blog on the ANAs, I would talk about how Grade 9’s abysmal results are rooted in the Foundation Phase and how the early grade ANAs are not accurate reflections of learning. Oh, ya, and also that the ANAs are absolutely, categorically and unequivocally NOT comparable year-on-year.) But as I said, this is not my blog on the ANAs.

Instead it is my blog on the Minister’s excellent choice to focus on reading. In her speech today she devoted considerable attention to the importance of reading and highlighted the numerous initiatives that the DBE has undertaken to improve the state of reading in South Africa (see pages 20 and 21).

The Minister also makes extensive reference to the 2013 NEEDU Grade 5 Reading Report (draft). Together with Lillie Pretorius’ excellent article, this NEEDU report was the best thing I’ve read on reading all year. To give an overview of the report let me quote from the introduction:

“This report begins with a brief discussion of literacy and the complexity of reading and reading instruction. It gives a short explanation of the difference between decoding and comprehension and the importance of oral reading fluency for understanding and interpreting what is being read. The report outlines the importance of reading norms, and in particular reading norms for a country like South Africa with the large majority of its early readers reading in a second language. Finally, before the NEEDU Grade 5 reading data is presented, the recent and current national strategies and interventions to improve learner reading proficiency are tracked, suggesting that the crises in reading in South Africa is not new, is not unknown, yet persists” (p 4).

I have argued elsewhere that I believe we need to adopt a national education goal in South Africa: “Every child must read and write fluently by the end of Grade 3.” Anyone who is seriously interested in education in South Africa, and how to improve the state we’re in, should read this draft of the upcoming NEEDU report. It is truly excellent.

Matric markers STILL not tested – my 2014 rant


Every year for the past four years the department of basic education has tried — unsuccessfully — to implement competency tests for matric markers. Each year the teacher unions derail these well-intentioned plans, with the South African Democratic Teachers’ Union (Sadtu) raising the biggest ruckus.

The department’s logic is flawless: the integrity of the marking and moderation procedures of the National Senior Certificate exam depends crucially on the ability of markers to assess student responses accurately. Furthermore, without directly testing the content knowledge and marking competency of teachers one cannot be sure that the quality of matric markers is such that matric pupils receive the marks they deserve.

Importantly, the tests the department proposes would be conducted in a confidential, dignified and equitable manner that would not undermine the professionalism of applicants.

Sadtu counters that all teachers are equally capable of marking the matric exams and thus there is no need for minimum competency tests for prospective markers. This flies in the face of everything we know about teacher content knowledge and pedagogical skill in large parts of the South African education system.

In a 1999 book, Getting Learning Right, Penny Vinjevold and Nick Taylor summarised the results of 54 studies commissioned by the Joint Education Trust, and wrote: “The most definite point of convergence across the President’s Education Initiative studies is the conclusion that teachers’ poor conceptual knowledge of the subjects they are teaching is a fundamental constraint on the quality of teaching and learning activities, and consequently on the quality of learning outcomes.” By implication this includes their ability to mark complex material accurately.

More recently, a 2011 report [p13] by the Southern and Eastern African Consortium for Monitoring Educational Quality found that only 32% of grade six mathematics teachers in South Africa had desirable levels of mathematics content knowledge, compared with 90% in Kenya and 76% in Zimbabwe.

Similar findings
I could go on and mention the numerous provincial studies that have been conducted in the North West, the Eastern Cape, KwaZulu-Natal and elsewhere that all find the same thing — extremely low levels of teacher content knowledge in the weakest parts of the schooling system — which, crucially, make up the majority of South Africa’s schools.

Given this situation, one wonders how Sadtu can argue that all matric teachers are equally competent to mark the matric exams or that they should not be tested. The union stance is that a system of teacher testing will disadvantage teachers from poor schools who cannot compete with those from wealthier schools. Although it is certainly true that the department has failed to provide meaningful learning opportunities to teachers in these underperforming schools, jeopardising the marks of matric pupils to make this stand is misguided, unethical and potentially even illegal.

These are important but separate issues and should be dealt with in different forums. But it is worth noting that the Western Cape has been testing prospective matric markers in the province since 2011, the only province in the country to do so.

The logic of the unions on this matter is perplexing. On numerous occasions they have rightly argued that teachers in poorer schools have not had meaningful learning opportunities and, therefore, that teachers are unequally prepared to teach and, by implication, also unequally prepared to mark. Yet now they are arguing that all matric teachers are equally capable of marking the matric exams? So which is it? You can’t have it both ways. Teachers either are or are not equally competent to mark matric exams. If they are not, then without testing one cannot ensure children will receive the marks they are due; and if they are, then one simply cannot argue that they would be disadvantaged by being assessed prior to appointment as markers.

On this question, a colleague of mine asked: “How does the department employ people to teach matric when they are not considered competent to mark?” The uncomfortable answer is that, unfortunately, many matric teachers are neither competent to mark nor to teach — and this is through no fault of their own. The blame instead falls squarely at the feet of the department, which has not provided them with quality professional development opportunities.

If one looks at the specifics of appointing matric markers, the union objections become even more bizarre. Although all matric teachers are legally allowed to apply to be matric markers, who is appointed and the criteria used for making these appointments are solely at the department’s discretion. Provided that these criteria are aligned with the position and are not discriminatory on such grounds as race, gender and sexual orientation, the department can select whomever it decides is most capable of doing the job.

Selection criteria
Currently the selection criteria relate to qualifications, teaching experience and language proficiency, but — bizarrely — not content knowledge. Given the nature of the work — assessing student responses for grading purposes — it seems only logical that applicants should have to demonstrate this competency prior to being appointed.

Because of the importance of the matric exam results for the life chances of individual pupils, both in terms of further education opportunities and labour-market prospects, the department should put its foot down and take a stand for the 700 000 or so part-time and full-time students who are writing matric this year: it should insist that the 30 000-odd matric markers be tested prior to appointment.

Pupils, parents and school governing bodies have every reason to be concerned when there is no formal testing process to ensure that the teachers who will mark their all-important matric exams have the competence to do so in a consistent, fair and unbiased manner. Whether or not competency tests for matric markers are implemented has nothing to do with the unions and everything to do with the fairness of the marking and moderation procedures.

In sum, should prospective matric markers be tested prior to appointment? Yes. Is this a union issue? No. Will this be the last we hear of it? Unfortunately not.

The most tragic part of the above article is that I wrote it in November 2013 (published in the M&G here), and yet I can republish it here with only one amendment: changing the sentence in the first line from “for the past three years” to “for the past four years.” Matric markers are STILL not assessed before they are appointed, despite practically everyone agreeing that they should be tested. The second and third largest teacher unions (NAPTOSA and SAOU) both do not oppose teacher testing. Most notably, the Ministerial Task Team report on the NSC (2014) concluded that “Only the Western Cape selected its markers in 2013 based upon competency tests and was possibly disadvantaged by the strictness of the marking in its final overall results. A multifaceted, urgent and substantial intervention is called for to deal with the significant problems with the marking and the impact of this on the validity and reliability of the results” (page 150 of the report). Why is it that the Minister can’t do the right thing on an issue that is UNAMBIGUOUSLY clear, rather than caving to SADTU?! This situation is utterly, utterly disgraceful.

DFID 2014 rigorous literature reviews on education


DFID has recently funded a series of rigorous literature reviews on a number of important topics. I’ve included the descriptions below (from here originally) as well as links to the evidence brief and full literature reviews:

Early childhood development and cognitive development in developing countries (2014)
This review aimed to: (i) review existing evidence on the review topic to inform programme design and policy making undertaken by DFID, other agencies and researchers; and (ii) identify critical evidence gaps to guide the development of future research programmes.

The impact of tertiary education on development (2014)
After a long period in which the international development community has placed emphasis on primary education, there is now renewed interest in tertiary education (TE). However, the extent and nature of the impact of TE on development remains unclear. This rigorous review seeks to address this question in the context of low and lower middle income countries (LLMICs).

Interventions to enhance girls’ education and gender equality (2014)
The central research question that this review sets out to investigate concerns the kind of interventions that research evidence suggests can lead to an expansion and improvement in girls’ education. It also considered evidence on the relationship between an expansion and improvement in girls’ education and a deepening of gender equality.

The role and impact of private schools in developing countries (2014)
The research question driving the review is: Can private schools improve education for children in developing countries? The conceptual framework set out a number of hypotheses and assumptions that underpin the polarised debate about the potential and real contribution of private schools. These are interrogated through a rigorous and objective review of the evidence and findings are mapped on to an evidenced theory of change.

Literacy, foundation learning and assessment in developing countries (2014)

Developing countries face distinct challenges in providing access to quality education. Educational provision also varies markedly in terms of teacher training, teaching and learning resources, school attendance, and motivation of parents, teachers and children for schooling. Against this backdrop, we consider the available evidence on foundation learning and literacy in order to identify key components for intervention that are appropriate to specific cultural and linguistic contexts.

The political economy of education systems in developing countries (2014)

Teachers and schools do not exist in isolation of the larger world around them. Frequently, many of their actions – and the school outcomes that they are accountable for – are influenced by incentives and constraints operating outside the schooling system. Each of these factors influences different aspects of education reform, whether policy design, financing, implementation or evaluation. Given the importance of these power relations in influencing student outcomes, there is surprisingly little literature to guide us in making related policy decisions. One reason is that examining these issues in the case of education may not be amenable to a particular disciplinary lens and is better served through an inter-disciplinary approach. A key contribution of this review is to pull together the essential literature from various disciplinary and interdisciplinary traditions and to provide a conceptual framework in which to situate the analysis of political economy issues in education research. Another contribution is to carefully review the existing literature and identify research gaps in it. The review organises the literature along 5 key themes.

Pedagogy, curriculum, teaching practices and teacher education in developing countries (2013)

This rigorous literature review focused on pedagogy, curriculum, teaching practices and teacher education in developing countries. It aimed to: (1) review existing evidence on the review topic to inform programme design and policy making undertaken by DFID, other agencies and researchers, and (2) identify critical evidence gaps to guide the development of future research programmes.

The political economy of education systems in conflict-affected contexts (2014)

This report is a rigorous literature review on the political economy of education systems in conflict-affected contexts and is aimed at education advisers and agencies, development practitioners, and Ministry of Education policy makers working in conflict-affected contexts. It is also aimed at the broader education and conflict community of research and practice linked to the Inter-Agency Network of Education in Emergencies (INEE). The report seeks to provide theoretically informed and policy-relevant insights on the global, national and local governance of education systems in conflict-affected contexts.