Monthly Archives: January 2016

Important new education research


    • Ursula Hoadley and Joe Muller have just published their important paper on assessment in South Africa, “Visibility and differentiation: Systemic testing in a developing country context” (Curriculum Journal, 2016). I prefer their earlier title, “Testing testing: Investigating the epistemic potential of systemic testing” (un-gated draft version of that paper here).
    • From the abstract: “Why has large-scale standardised testing attracted such a bad press? Why has the pedagogic benefit to be derived from test results been downplayed? The paper investigates this question by first surveying the pros and cons of testing in the literature, and goes on to examine educators’ responses to standardised, large-scale tests in a sample of low socio-economic status (SES) schools in the Western Cape, South Africa. The paper shows that teachers and school managers have an ambivalent attitude to tests, wary of the reputational costs they can incur, but also curious about the differentiated picture test results can give them as they learn to ‘read’ the underlying codes embedded in the results. The paper concludes that a focus on what tests make visible and a recognition of the pedagogic agency of teachers points to potential pedagogic benefits of systemic tests.”

    • Craig Paxton has finally finished his PhD thesis, “Possibilities and constraints for improvement in rural South African schools” (UCT, 2015). This is on my to-skim/read list together with Eric Schollar’s PhD (see below).
    • Part of Craig’s PhD abstract: “Rural South African schools face a complex mix of challenges, which make improvement a daunting task. Not only do schools deal with the time, place and space issues that face rural schools worldwide, but in addition they contend with a legacy of severely deprived schooling under the apartheid system. Using the framework of the Five Essential Supports, developed by the Consortium on Chicago School Research, together with Bourdieu’s notions of habitus and doxa, this thesis examines what improvement might mean in this deeply disadvantaged context. The five supports – leadership, learning climate, school-community ties, ambitious instruction and professional capacity – are contextualised to account for both the rural setting and the peculiarities of education in South Africa’s former homeland communities. Alongside this largely quantitative framework, Bourdieu’s conceptual tools are brought to bear, offering an alternative perspective that makes sense of the complex forces produced by history and rurality.”

    • Lant Pritchett’s new (2015) RISE Working Paper “Creating Education Systems Coherent for Learning Outcomes” has been quite influential for me, although in the South African context I would almost always add “Capacitation” to his four criteria: Delegation, Financing, Information and Motivation. Lant also has a great (and scathing) critique of meta-analyses of quantitative studies (a short simulation after this list illustrates his point):
    • If one were to take this approach of “rigorous evidence” at face value then there is rigorous evidence that nothing in the conventional wisdom actually works. There is rigorous evidence that giving out textbooks doesn’t matter, there is rigorous evidence pay for performance doesn’t matter, there is rigorous evidence that class size doesn’t matter. Of course there is also rigorous evidence that all these elements of the conventional wisdom do matter. The usual approach of doing a “systematic review” of the literature that simply counts studies (in a quality weighted basis) is not at all helpful. Suppose that context A is a system coherent for learning—so that teachers know what students should learn, that learning is measured on a regular and reliable basis and teachers are motivated to achieve high student learning—and class size is reduced. Let’s assume that learning improves (as there is RCT evidence from the USA, for instance, that this is true). Context B is a system coherent for schooling only. Class size is reduced. Let’s assume learning doesn’t improve (as there is RCT evidence from Kenya, for instance, that this is true). Suppose the only two studies in the systematic review were USA and Kenya. Then the conclusion would be that “class size improves student learning in 50 percent of the studies.” Now suppose that 8 more rigorous studies were done in the USA so that a systematic review would conclude “class size improves student learning in 90 percent of studies.” Suppose, in contrast, 8 more studies were done in Kenya. Then a systematic review of the rigorous evidence would conclude “class size improves student learning in 10 percent of the studies.” All three statements are equally worthless. The (assumed) truth is that “class size improves performance in context A but not in context B” and hence unless one knows whether the relevant context is A or B the systematic review finding of impact in 50 percent, 90 percent or 10 percent of the studied cases is irrelevant.

  • Glewwe & Muralidharan’s new (2015) RISE Working Paper “Improving School Education Outcomes in Developing Countries”, in which they find that:
  • Interventions that focus on improved pedagogy (especially supplemental instruction to children lagging behind grade-level competencies) are particularly effective, and so are interventions that improve school governance and teacher accountability.
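
To make Lant’s hypothetical above concrete, here is a tiny illustration of my own (not code from his paper): the headline number from a study-counting “systematic review” is driven entirely by where the studies happen to be done.

```python
# A toy illustration (mine, not from the paper) of Pritchett's point: the share
# of studies that "find" an effect depends on where the studies were done.

def share_finding_effect(studies):
    """Fraction of studies (1 = effect found, 0 = no effect) reporting an effect."""
    return sum(studies) / len(studies)

# Assume, as in the hypothetical, that class size matters in context A (USA)
# but not in context B (Kenya).
usa_study, kenya_study = 1, 0

print(share_finding_effect([usa_study, kenya_study]))          # 0.5 -> "50% of studies"
print(share_finding_effect([usa_study] * 9 + [kenya_study]))   # 0.9 -> "90% of studies"
print(share_finding_effect([usa_study] + [kenya_study] * 9))   # 0.1 -> "10% of studies"
# All three headline figures are artefacts of where the studies were done,
# not of any underlying truth about whether class size matters.
```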

Matric 2015 standardisation matters


OK so I got a little frustrated with explaining the whole matric standardisation vibe to a million people, so here’s the deal once and for all…

Soon after the matric results jamboree ended and people went back to work, there were a few unanswered questions about how the matric exams of 2015 were standardised by Umalusi. Already in September 2015 I was asking some Departmental officials whether the Minister was going to report the matric results of progressed and non-progressed learners separately (which is what I would have done). The logic was that the public (wrongly) views the matric exam as the main indicator of the health of the education system, and that if progressed learners were lumped in with non-progressed learners there would be excess pressure to ensure that the matric pass rate did not drop too much. But this is not the approach that the DBE took. I also emailed someone at Umalusi to ask how they were planning to do their standardisation given that the two groups of matrics (2014 and 2015) were so different, with the latter having many more students due to the progressed learner policy.

After comparing the publicly available Umalusi statement and the publicly available DBE NSC Technical Report 2015, it became possible to see how large the Umalusi adjustments were in 2014 and 2015 for nine subjects – see the table below.

[Table: Umalusi adjustments and raw vs final pass rates for nine subjects, 2014 and 2015]

I won’t rehash my full argument here (if you’re interested, read my article Matric Cracks Starting to Show). The highlighted subjects in the table are those that had big jumps in enrolments; for example, there were an extra 76,791 learners taking maths literacy in 2015 compared to 2014. Notice how substantially the pass rates increase between the raw pass rate and the final pass rate after (1) the Umalusi adjustment, (2) language compensation and (3) school-based assessment. The gist of that article was that the progressed learners of 2015 were not properly accounted for in the Umalusi standardisation process, and that the most logical reason for the drop in the raw marks was the inclusion of extra (weaker) learners, i.e. progressed learners, rather than a much more difficult exam.


Subsequent to my article, the CEO of Umalusi, Dr Rakometsi, wrote a reply titled “Promoted pupils had no big effect on matric rates” and clarified a number of issues. For the sake of brevity I will summarize the most salient points here:

  • Umalusi was told by the DBE that there were only 66,088 progressed learners
  • If one excludes these 66,088 progressed learners, the failure rate within individual subjects drops by between 1% and 4% (i.e. their inclusion raised failure rates by only that much)
  • He confirmed that “the pass rate on raw marks is at 38% for maths literacy”
  • The maximum adjustment that Umalusi can make is 10 percentage points, which was applied for mathematics literacy “because the paper turned out to be more difficult in 2015 compared to previous years. As a result of this maximum adjustment, 27% of learners who scored between 20-29% obtained a pass”
  • One paragraph in the article is particularly important and so I quote it verbatim:

“The report indicates that the impact of progressed learners to the individual subjects was minimal. As a result, there was no basis to standardise the results of the progressed learners separately. What we call progressed learners is actually only the KNOWN progressed learners. The argument that there were more is an assumption. Umalusi can only work on the information before it, not on assumptions and extrapolations.”

From the above we can draw two important conclusions:

(1) The 66,088 progressed learners were not excluded when the results were standardised relative to earlier years, despite Umalusi knowing that these learners were weaker students. This seems totally bizarre. We know that these are weaker learners, so why would we include them in the norm-referencing curve and compare to earlier years where these students did not exist? Even if they only contributed to a drop in the pass rate of between 1% and 4%, why were they not excluded?

(2) (the most important conclusion) Umalusi only looked at the 66,088 “officially” progressed learners and ignored all the other information suggesting that there might be additional weaker learners who were actually progressed but were not captured as such – what I called “quasi-progressed” learners in my article. We know that provinces do not all record who is a progressed learner with the same accuracy.

Perhaps the most telling evidence comes from asking how many extra matrics there were in 2015 compared to 2014. The answer is 111,676 (644,536 in 2015 compared to 532,860 in 2014). But if there were only 66,088 progressed learners, where did the remaining 45,588 learners come from?
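For what it’s worth, here is that arithmetic spelled out (a trivial sanity check, nothing more):

```python
# Cohort arithmetic from the paragraph above.
matrics_2015 = 644_536
matrics_2014 = 532_860
official_progressed = 66_088

extra_matrics = matrics_2015 - matrics_2014          # 111,676 extra candidates in 2015
unexplained = extra_matrics - official_progressed    # 45,588 not accounted for as "progressed"

print(extra_matrics, unexplained)  # 111676 45588
```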


Some have suggested that it stems from the change in age policy in 1998, but that led to a small cohort in 2011/2012, not now, as Stephen Taylor has shown using EMIS data. The table below (taken from here) shows the enrolment numbers by year and grade. What we can see is that the matric class of 2011 was always going to be small: there were 534,498 learners in matric in 2011 and only 930,019 learners in grade 8 four years earlier. By contrast, the matric class of 2015 (687,230 learners according to this table) is inexplicably big. The grade 8 cohort of 2011 had 1,008,110 learners, which is only about 7,000 more than the grade 8 class of 2010. So how are we to explain the massive difference we see when we compare the 2014 and 2015 matrics (111,676)?

[Table: Enrolment numbers by year and grade (EMIS data)]

In my mind the answer is straightforward – the extra learners in matric 2015 are the direct result of trying to decrease grade repetition by “promoting” weaker learners into higher grades rather than failing them. If this is correct then we needed to exclude the full 111,676 learners when standardising relative to earlier years. Umalusi will argue (and has argued) that this was not possible and that they did not even try to take account of quasi-progressed learners.


So those of you who’ve read this far might be asking “So who cares? Why is this even important?” and the answer is that it matters a lot for universities and the labour market if Umalusi gets this wrong. If the standardisation process assumed that a drop in the raw marks was only due to an increase in test difficulty (which is what Umalusi did) when a more plausible explanation is that we included an extra 21% of weaker learners, then the real value (and signal) of a basic matric is actually declining over time. The short simulation below illustrates the point.
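
Here is a small, purely illustrative simulation (made-up mark distributions, and emphatically not Umalusi’s actual standardisation procedure): add roughly 21% weaker candidates to an otherwise unchanged cohort writing an unchanged exam, and the raw pass rate falls on its own. Treating that whole fall as evidence of a “harder paper” and adjusting everyone upwards therefore inflates the marks.

```python
# A simplified simulation with made-up numbers (NOT Umalusi's method):
# adding a block of weaker candidates lowers the raw pass rate even when
# the exam itself is unchanged.
import random

random.seed(1)

def pass_rate(marks, cutoff=30):
    """Share of candidates scoring at or above the pass cutoff."""
    return sum(m >= cutoff for m in marks) / len(marks)

# Hypothetical 2014-style cohort (scaled down to 53,300 candidates).
cohort_2014 = [random.gauss(45, 15) for _ in range(53_300)]

# Same exam in 2015, but with ~21% extra, weaker candidates added.
progressed = [random.gauss(25, 12) for _ in range(11_200)]
cohort_2015 = cohort_2014 + progressed

print(f"2014-style raw pass rate: {pass_rate(cohort_2014):.0%}")
print(f"2015-style raw pass rate: {pass_rate(cohort_2015):.0%}")
# The raw pass rate drops by several percentage points purely because of the
# change in cohort composition, with no change in paper difficulty.
```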

On page 171 of the 2014 Ministerial Task Team Review of the NSC we read the specifications of when Umalusi can and cannot adjust matric marks:

“Reasons for differences may include: cohort differences, changes in curriculum, changes in setters of the examination papers, disruptions in the schooling processes and security leakages. In the absence of evidence of such possible reasons, it is then generally accepted that the differences are due to deviations in the standards of the examination or marking and therefore marks are adjusted to compensate for the deviations (Umalusi 2007a, 29).” [emphasis added]

Personally I do think that some of these tests increased in difficulty, but it is ludicrous to think that adding 21% more students who are KNOWN to be weaker would not decrease the marks. Also, this is the first year in which basically all adjustments were upward; there was not a single downward adjustment. Coincidence much?

[Cartoon: “that’s messed up”]

Just because Umalusi could not identify the quasi-progressed learners doesn’t mean they can just ignore them. Hence the cartoon above. It would seem that Umalusi has essentially said “Yes, we can see that the cohort is much bigger. Yes, we can see that there was a clear policy intervention to progress weaker learners. Yes, we can see that the official numbers of progressed learners do not match the full increase in the size of the cohort. But we are going to pretend that there were only 66,088 progressed learners. We refuse to accept any other reality because we can’t do anything about it anyway, so what’s the use in knowing?”

The fact that the marks were pushed up primarily at the bottom (probably too much) means that students passed in 2015 who would not have passed in 2012. It means students have qualified to study basic degrees in 2015 who would not have qualified if they wrote in 2012. So, if I’m right, any of the following could result:

  • There will be a flood of applications for degrees and diplomas that require the lowest levels of matric points. Thousands more students will have ‘met the criteria’ and the universities will not have anticipated this. In fact, I have already heard that UNISA has had an unprecedented increase in applications, that Edgewood at UKZN has been swamped, and that Damelin has seen a huge spike in applications. If I’m right, then these students should never have qualified for university and will fail. They might incur debt, move province and make life decisions based on an incorrect signal. (Let us not speculate on how the surge in applications for NSFAS will stoke the FeesMustFall fire and mean that there is less to go around and more angry students with hopes and dreams that the State cannot fulfil.)
  • Universities will see students who are not even remotely prepared for higher academic study and will have to expand their access programs and expect higher failure rates.
  • As a result of the above, universities will increase the matric-point requirements for entry into their programs for 2017 (particularly programs like B.A., B.Soc-Sci, B.Ed., etc.). They will also start to rely more on the National Benchmark Tests in their selection criteria. [Sidebar: researchers should compare NBT results with matric results in 2013, 2014 and 2015 to see if there are any differences that might be attributable to wrongly-boosted matric marks.]
  • The gap between the earnings of those with a matric and those with a matric plus a degree will grow (note it is already large; see the graph below). This is largely because the adjustment was primarily at the bottom, meaning there are many more students with a low-level matric who have, in effect, lower levels of knowledge and skill than low-level matrics of five years ago.

[Graph: Earnings of those with a matric vs those with a matric plus a degree]

(Source: Hendrik van Broekhuizen, 2013, here)

As I said, none of the above precludes the possibility that the tests were more difficult in 2015 (although this is still speculation). I am only saying that there is no way, in my mind, that including an extra hundred thousand weaker learners didn’t play any part in the drop in the raw marks. And, in essence, that is what Umalusi is arguing.


Dr Rakometsi and I will be discussing this on the Redi Tlhabi show at 10am tomorrow (25 Jan 2016). It should be an interesting discussion 🙂


Links I liked


 

“Matric Cracks Starting to Show” – my ST article


(The article below first appeared in the Sunday Times on the 10th of January 2016)

Like so many things in South Africa, this year’s matric results are a paradox of good policies badly implemented. This time it was the Minister’s bold ‘promotion policy’ that led to 21% more learners writing matric (644,536 this year compared to 532,860 last year). The policy limits the number of times learners can repeat a grade to once every three years and means fewer students drop out, being ‘promoted’ instead. While her decisive action has led to increased efficiency and improved access, it has also inadvertently caused a huge crack in the matric standardisation process, one that is only now starting to become apparent. The fact that the Department did not properly identify all progressed learners, and that Umalusi did not (and perhaps could not) take account of all progressed learners in their standardisation process, calls into question the massive upward adjustments in marks that took place behind the scenes.

As usual, some commentators have myopically focussed on the drop in the matric pass rate, from 76% (2014) to 71% (2015), as if this, in and of itself, were a meaningful indication of anything. It isn’t. Nor does it signal a decline in quality or harder exams. Yes, the matric pass rate went down, but the number of learners passing it went up. And in fact the real question might not be why the matric pass rate dropped, but why it didn’t drop further. In comparing the media statement from Umalusi and the technical report from the Department, the answer is quite clear. The decision was made to raise the raw marks across the board, from Maths and Physical Science to Life Science, Maths Literacy, History, Accounting, Geography and 24 other subjects. Umalusi themselves make a point of emphasising that this was an “unprecedented set of adjustments”. When the Maths Literacy pass rate is adjusted from 38% to the final (and publicly reported) 71%, this is most certainly unprecedented and, I would argue, unwarranted. Was the test really so much more difficult than in previous years? (This is the only reason why one is allowed to adjust the marks at all.) Why did the internal and external moderators not pick up the huge increase in difficulty? Is it not more plausible that the massive drop in pre-adjusted performance was actually due to the additional 112,000 weaker learners who would otherwise have dropped out? If so, Umalusi shouldn’t have adjusted.

This is not to say that the Minister was wrong in introducing the promotion policy. Quite the opposite; she was heeding local and international research which shows that excessive repetition is costly, inefficient and of no educational benefit to the learner. Yes, we do need to find ways of preventing and remediating the problem, but rooting out wasteful repetition in the meantime is prudent and wise. A positive effect of this policy and the extra-large class of 2015 was that many more learners took and passed key subjects, with about 52,000 extra matric passes, 9,000 extra maths passes and 15,500 extra bachelor passes.

Both Umalusi and the Department claim that there were only 65,671 progressed learners. Yet there were an extra 111,676 matrics this year. So where did the other 46,005 extra learners come from? The clear answer is, firstly, that there was a big policy change preventing schools from failing learners multiple times and encouraging them to promote weak learners and push them into matric. Secondly, the way provinces record and report who is a progressed learner is highly dubious and varies by province and district. So, although we have approximately 66,000 ‘officially’ progressed learners, we also have 46,000 ‘quasi-progressed’ learners (what Umalusi calls ‘borderline candidates’).

The reason all of this matters is that it influences the decision of whether to adjust the matric results and by how much. Umalusi is only ever meant to adjust the marks up or down if they believe the exam was harder or easier than in previous years. The core assumption in this standardisation process is that the different matric cohorts (2013, 2014 or 2015 matrics) are of equal ability. Thus, any differences between the years can only be because the paper was easier or harder. And this is where the crack emerges. There is simply no way that the 2015 distribution of 645,000 matrics (including progressed and quasi-progressed learners) is as strong as the distribution of 533,000 learners in 2014. Thus the reason the 2015 cohort did so much worse on the raw scores was the extra 112,000 weaker learners, not harder tests. We know that Umalusi did not take this into account because there is no way of identifying the 46,000 quasi-progressed learners. In Umalusi’s defence, they couldn’t have excluded them even if they had wanted to, because provinces didn’t record them. But it doesn’t seem Umalusi excluded these 112,000 (or even the 66,000) learners when they standardised the 2014 and 2015 distributions. This is illogical.

In an unusual change from previous media statements, this year Umalusi included the raw failure rates of subjects (i.e. before any adjustments). These can be compared to the marks in the technical report issued by the Department. The only differences between the two figures are the Umalusi adjustment, a small change due to school-based assessment, and a small language compensation for second-language learners (an extra 4 percentage points). When I refer to ‘raw’ and ‘final-adjusted’ pass rates I mean before and after these are accounted for. The three subjects I will focus on here are Maths Literacy, Geography and Business Studies, since they all had big increases in enrolments, which suggests these were the subjects taken by the progressed and quasi-progressed learners. The differences between the raw pass rate and the final-adjusted pass rate are large for Geography (up from 66% to 77%), for Business Studies (up from 54% to 76%) and especially for Maths Literacy (from a shockingly low 38% to 71% after adjustments!). For a national assessment these are incredibly large adjustments.

This could only be justified if the 2015 exams were extraordinarily more difficult than those of 2014. I simply do not buy it. The internal and external moderators all agreed that these exams were set at the appropriate level; to warrant adjustments of this magnitude they would have had to be way out in their judgements. Why are we looking for alternative explanations for the big drop in raw marks when this one is staring us in the face? The most logical and obvious reason for the drop is the inclusion of an extra 112,000 weaker learners in 2015; paper difficulty is marginal by comparison. In maths literacy alone there were 76,791 extra candidates in 2015. Where did these learners come from? It is clear that these are the weaker progressed and borderline candidates, and that they are the main reason why the raw marks dropped so much. If so, then we cannot simply adjust the raw marks upwards, as was done this year.

The Umalusi standardisation process is necessary and probably the best we can do when different papers are written year-on-year, but Umalusi needs to clarify what happened here and, in future, be more transparent about its standardisation process. Unfortunately, no amount of standardisation can solve the biggest problem in our education system, which is that most children attending the poorest 75% of schools do not learn to read for meaning by the end of grade three and are forever behind. Indeed, matric starts in grade 1.

Dr Nic Spaull is an education economist at Stellenbosch University. He can be found on Twitter @NicSpaull and his work can be found at nicspaull.com