I am currently doing some research which draws extensively from the work of Rodrik and Hausmann, particularly their “Growth Diagnostics” approach (see especially page 13). In it they talk about the importance of experimentation to figure out what works, often using China as the example par excellence. For example: “Can anyone name the (Western) economists or the piece of research that played an instrumental role in China’s reforms? What about South Korea, Malaysia or Vietnam? In none of these cases did economic research, at least as conventionally understood, play a significant role in shaping development policy…China owes a great deal of its success to a willingness to experiment pragmatically with heterodox solutions…The process of China’s policy reform consisted of diagnosing the nature of the binding constraints and identifying possible remedies in an innovative, experimental fashion with few preconceptions about what works or is appropriate” (Rodrik, 2009). Rodrik then goes on to apply this notion to Randomized Controlled Trials (see this excellent document on RCTs for policy):
“Randomized field experiments, which are legion in this area, have demonstrated considerable success with specific interventions. Importantly, some of these interventions—on school subsidies or remedial education, for example—have been replicated in a number of different contexts (Kremer and Holla, 2009). Still we have very little guidance from this literature on how we proceed to identify education interventions that are most suited to and likely to be most effective in a particular setting. We get even less help on diagnosis in other areas such as reducing corruption or increasing manufacturing productivity which have received only spotty attention from randomizers. The best among randomized trials in development economics are of course informed by some diagnostic process, but curiously, micro-development economists are often not very explicit about the steps needed to identify the most serious failings in a given context. Nor are they very clear about how one narrows a very large list of potential solutions to a smaller number of interventions most likely to be effective” (Rodrik, 2009: 17).
Now there is much to be said on the application of this kind of logic to South Africa’s education system. If you speak to people who actually know what is going on in South Africa, you will be surprised how much they will admit to not knowing. Should we switch from mother-tongue instruction to English at grade four or grade six, or just go straight-for-English and teach in English from grade one? What is the best method of improving teacher quality in South Africa? Short in-service courses at an academic institution, teacher knowledge tests with incentives, or on-the-job training and coaching (to name just a few examples)? What is the best method of raising academic achievement in Grade R and the Foundation Phase? Is it graded readers in an African language? Standardized tests? Teacher training (and if so, what kind?)? In all of these instances we really don’t know what the answer is, and these are not trivial questions – they are of the utmost importance.
One of the biggest problems is that we are not willing to experiment and figure out what works. Randomized controlled trials (RCTs) could help us answer these questions by taking a sample of schools (say 300) and randomly allocating 100 to receive graded readers in an African language, 100 where the teachers receive teacher training and coaching, and 100 as a control (against which the ‘impact’ of the other two can be measured). This would help us answer one of the questions above. (Incidentally, this is one of the few – perhaps the only – RCTs that have been proposed in South Africa for education, by Stephen Taylor et al; it is currently on the drawing board and, I think, looking for funding.)
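To make the design above concrete, here is a minimal sketch of how the random allocation and the impact comparison would work. The school names and the scoring function are invented for illustration; a real trial would of course use actual school lists, stratification and proper standard errors.

```python
# Hypothetical sketch of the three-arm design described above: 300 schools
# randomly split into a graded-readers arm, a teacher-coaching arm, and a
# control arm of 100 schools each. Names are invented for illustration.
import random
import statistics

random.seed(42)  # fixed seed so the allocation is reproducible

schools = [f"school_{i:03d}" for i in range(300)]
random.shuffle(schools)

arms = {
    "graded_readers": schools[0:100],
    "teacher_coaching": schools[100:200],
    "control": schools[200:300],
}

def estimated_impact(treated_scores, control_scores):
    """Difference in mean test scores between a treatment arm and control.

    Because schools were assigned to arms at random, this simple
    difference is an unbiased estimate of the intervention's impact.
    """
    return statistics.mean(treated_scores) - statistics.mean(control_scores)
```

The key point is the randomization: because which arm a school lands in is decided by chance, the control arm tells us what would have happened to the treated schools without the intervention.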
One of the reasons why we have so few RCTs underway in South Africa is that RCTs are quite expensive – often between R5 million and R10 million, though not always. This is where we need to take a small diversion and emphasize that when you are spending in excess of R200 000 000 000 (R200bn+) on education, as we do in South Africa, allocating at least R150m per year for about 25 RCTs annually is really just common sense. At the moment I think there is only one education RCT underway in South Africa (looking at the impact of Khan Academy here in the WC), at least that I am aware of. These impact evaluations would allow us to definitively answer questions which we really don’t know the answers to, and without RCTs, may never know the answers to. Unless we can be given the freedom and finances to experiment with reasonable proposals (and implement and test them according to high standards) we will never be able to figure out what works. Experimenting on a small scale (a few hundred schools at a time) and figuring out what works first, before going to scale, is much more sensible and cost-effective than simply rolling out untested policies, which is basically our modus operandi at the moment.
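A quick back-of-the-envelope check of the figures above (using the R200bn and R150m numbers from the text; the precision of the result is illustrative, not a budget proposal):

```python
# Back-of-the-envelope arithmetic on the figures in the text:
# R150m per year for ~25 RCTs, out of a R200bn+ education budget.
education_budget = 200_000_000_000  # R200bn total education spend
rct_allocation = 150_000_000        # proposed R150m annual RCT budget
num_rcts = 25                       # roughly 25 trials per year

share_of_budget = rct_allocation / education_budget
cost_per_rct = rct_allocation / num_rcts

print(f"Share of budget: {share_of_budget:.4%}")      # 0.0750%
print(f"Average cost per RCT: R{cost_per_rct:,.0f}")  # R6,000,000
```

In other words, the proposed evaluation budget is less than a tenth of one percent of total education spending, at an average of R6m per trial – comfortably within the R5–10m range quoted above.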
The need for experimentation in South African education cannot be overstated. The Department, the Presidency and Treasury all need to put their money where their mouths are and get the ball rolling on RCTs – especially in education!
Some other useful links:
- “Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials” – an excellent document developed for UK policy makers to help them understand the need for evaluating policies.
- J-PAL Africa is one of the major organizations that run RCTs, along with IPA (Innovations for Poverty Action).
- Experimenting with Khan Academy in Diepsloot (Report) (not the one I mentioned above but still interesting!).