So the results of the Annual National Assessments (ANA) for 2012 were released on Monday (DBE report available here). The ANAs are essentially national standardised tests for Grades 1-6 and 9. Unfortunately, the Department and the Minister have made explicit comparisons between the ANAs of 2011 and 2012, despite the fact that such comparisons should not be made (the tests are not comparable) and are thus highly misleading. Prof Servaas van der Berg and I were interviewed for this week’s issue of the Mail & Guardian on exactly this issue. The interview can be found on the M&G website here, and the full (unedited) transcript can be downloaded HERE. This is a classic case of one step forward, two steps back. I include the two most important excerpts from the interview below:
- Q: In some primary grades there were very large improvements from 2011 to 2012. For example, the Grade 3 literacy improvement from 35% in 2011 to 52% in 2012 – do you have any comment on this?
- A: “Yes, we found that particularly strange. All the available evidence suggests that changes of this magnitude are simply not possible, locally or internationally. It may help to provide some background information on the relative magnitude of these increases. The Grade 3 Literacy improvement of 17 percentage points year-on-year (a 49% increase) amounts to 0.70 standard deviations (based on the Grade 3 literacy scores from Verification ANA 2011). If one compares this to the largest improvers around the world, it would mean that South Africa has the fastest improving educational system in the world. If these results were true it would mean we have improved more in a single year than Colombia (0.52 standard deviations) did in 12 years from 1995-2007 – and Colombia was the fastest improving country of the 67 countries tested in TIMSS for this period. Or using a different cross-national survey, we have improved more in a single year than Russia did over the 2001-2006 period (0.54 standard deviations) in the Progress in International Reading Literacy Study (PIRLS) – and Russia experienced the largest increase in student achievement of the 28 countries tested in PIRLS over this period. This is simply not possible. One could also use local comparisons to provide a sense-check on the ANA 2011-2012 improvement. Every year the Western Cape conducts tests (Systemic Evaluations) of Grade 3 and Grade 6 students. These tests are marked centrally and not by the schools themselves. Between 2011 and 2012 there was almost no improvement in Systemic Test results in the Western Cape, yet according to the ANA results the Western Cape improved by 14 percentage points.
Given that the Systemic Tests are calibrated to be of equal difficulty year-on-year, and that they are marked centrally, they are currently a more reliable indicator of true progress in learning than the ANAs, and provide strong evidence that ANA is exaggerating any improvement that there may have been in learning in our schools. Apart from international and local comparisons, the results for ANA 2012 do not appear internally consistent. If the tests were calibrated to be of similar difficulty in each grade (which is necessary for inter-grade comparison), how is it possible that the Grade 1 mathematics average in 2012 was 68% but the Grade 3 average was only 41%, just two grades later? The performance deteriorates further to 27% in Grade 6 and a dismal 13% in Grade 9 (for which test results are presented for the first time). Are these tests of equal difficulty for their grade? If so, it would indicate much better performance in the lower than the higher grades. Yet it would seem that there was no inter-grade linking of items, which is necessary to ensure that difficulty levels are similar. This is made explicit in the report: “There was no deliberate attempt to include questions to assess the degree to which the assessment standards of earlier grades had been achieved” (p67). Thus one cannot compare the results of one grade with the next, or say that performance deteriorates as the grades progress. To put it simply, it is not possible to compare two grades or two points in time unless the difficulty levels of the tests are comparable. This can be determined by using Rasch analysis, a technique which requires some items (test questions) to be common across the two tests being compared, so that these can serve as anchors to calculate the difficulty levels of the other items and put them on the same scale. After calculating the Rasch scores one can equate the difficulty levels of the tests and adjust the marks accordingly.”
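The anchor-item equating idea described in the answer above can be illustrated with a toy example. This is a minimal sketch, not the Department's (or anyone's) actual procedure: the item names and difficulty values below are invented, and a real Rasch analysis would estimate item difficulties from learners' response data rather than assume them. The sketch just shows the linking step: common items reveal the shift between two separately calibrated test scales.

```python
import math

# Hypothetical item-difficulty estimates (in logits) from separate Rasch
# calibrations of a 2011 and a 2012 test form. Items a1-a3 are the common
# "anchor" items that appear in both forms; all values are invented.
diff_2011 = {"a1": -0.50, "a2": 0.20, "a3": 0.90, "q4": 1.40}
diff_2012 = {"a1": -0.80, "a2": -0.10, "a3": 0.60, "q9": 0.30}

anchors = ["a1", "a2", "a3"]

# Mean-mean linking: the average difficulty gap on the anchor items
# estimates the shift between the two scales.
shift = sum(diff_2011[i] - diff_2012[i] for i in anchors) / len(anchors)

# Put every 2012 item on the 2011 scale by adding the shift.
diff_2012_on_2011_scale = {i: b + shift for i, b in diff_2012.items()}

def p_correct(theta, b):
    """Rasch model: probability that a learner of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Once both forms share a scale, an ability estimate (and hence a mark
# or pass threshold) means the same thing in both years.
print(round(shift, 2))  # 0.3
print(round(p_correct(0.0, diff_2012_on_2011_scale["q9"]), 2))  # 0.35
```

In this toy example the 2012 items come out about 0.3 logits easier than the 2011 items, so raw 2012 marks would overstate improvement; equating removes exactly that artefact, which is the kind of adjustment the ANAs would need before year-on-year comparisons could be made.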
- Q: What are your thoughts on the Annual National Assessments in general – should they be abolished?
- A: “Most certainly not. They must simply be improved. The Annual National Assessments are an important and worthwhile endeavour and are needed to improve the quality of education in South Africa. The introduction of these tests is one of the most important advances in educational policy in recent years, as it provides a source of information for teachers, students, parents and policy makers that was absent before. Without a testing system like ANA it is not possible to determine which schools need what help, or to diagnose and remediate learning problems early enough that they do not become insurmountable deficits. ANA provides information to teachers about the level at which they should assess, and the level of cognitive demand they should aim at. It can provide objective feedback to parents about their children’s performance, which is essential for them to know how the school system serves them and what learning deficits their children may have. Parents and children have a right to know this, and poor and illiterate parents doubly so.
The real problem in our system is the failure of most students to master foundational numeracy and literacy skills in primary school, which then spills over into secondary schools. However, for the ANAs to provide this information on performance in schools, they need to be reliable indicators of learning across grades and over time. To this end the Department should put in place an independent verification process, and the tests should adhere to international guidelines for standardised testing. The fact that the ANA results from 2011 and 2012 are incomparable is highly unfortunate. It means that schools, teachers and parents are getting erroneous feedback. The 2012 ANA results, compared with those of 2011, create an impression of a remarkable improvement in school performance that did not really occur. This makes it much more difficult to induce the improvement in behaviour at classroom level that is central to real advances in learning outcomes.”