- A usually conservative US Supreme Court recently ruled that the Defense of Marriage Act (DOMA) is unconstitutional. Its ruling on Prop 8 also clears the way for gay marriage in California. On this note, it’s always nice to remember that while the arc of history is long, it bends towards justice (MLK). What yesterday seemed ridiculous (women voting? interracial marriage? equal rights for black Africans?) is today commonplace. The legalization of gay marriage across the States and around the world is now just a matter of when, not if. Wonderful to think that gay marriage has been legal in South Africa for almost 8 years – in 2006 the National Assembly passed the motion by a vote of 230 to 41.
- For those of us subscribing to the Christian faith, I highly recommend this article by Prof Smedes titled “Homosexuality and divorce: why not treat them the same?” and this letter from the Bishop of Salisbury. For those concerned with secular ethics, see “Homosexuality is not immoral” by Peter Singer. I obviously have more to say on this issue, so there will definitely be a post or two on this in the future…
- On a related note, Exodus International – the largest ex-gay (“pray away the gay”) ministry – issued an apology and shut down. Also see this article in The Beast on the subject – I loved the quote: “Mercifully, there comes a point when even the most committed of ideologues admit defeat.”
- Really useful website “World Data on Education Seventh Edition 2010/11” – helpful country summaries for LOADS of countries…
- Awesome website showcasing the interiors of wonderfully creative people: http://theselby.com/ – thanks Laura Rossouw
- The wonderful Stephen Fry on loneliness and his attempted suicide.
- Quote of the week comes from an interview with Milton Friedman:
- “I think the major issue is how broad the evidence is on which you rest your case. Some of the modern approaches involve mining and exploring a single body of evidence within itself. When you try to apply statistical tests of significance, you never know how many degrees of freedom you have because you’re taking the best out of many tries. I believe that you have a more secure basis if, instead of relying on extremely sophisticated analysis of a small fixed body of data, you rely on cruder analysis of a much broader and wider body of data, which will include widely different circumstances. The natural experiments that come up over a wide range provide a source of evidence that is stronger and more reliable than any single very limited body of data.”
I am currently doing some research which draws extensively from the work of Rodrik and Hausmann, particularly their “Growth Diagnostics” approach (see especially page 13). In it they talk about the importance of experimentation to figure out what works, often using China as the example par excellence. For example: “Can anyone name the (Western) economists or the piece of research that played an instrumental role in China’s reforms? What about South Korea, Malaysia or Vietnam? In none of these cases did economic research, at least as conventionally understood, play a significant role in shaping development policy…China owes a great deal of its success to a willingness to experiment pragmatically with heterodox solutions…The process of China’s policy reform consisted of diagnosing the nature of the binding constraints and identifying possible remedies in an innovative, experimental fashion with few preconceptions about what works or is appropriate” (Rodrik, 2009). Rodrik then goes on to apply this notion to Randomized Control Trials (see this excellent document on RCTs for policy):
“Randomized field experiments, which are legion in this area, have demonstrated considerable success with specific interventions. Importantly, some of these interventions—on school subsidies or remedial education, for example—have been replicated in a number of different contexts (Kremer and Holla, 2009). Still we have very little guidance from this literature on how we proceed to identify education interventions that are most suited to and likely to be most effective in a particular setting. We get even less help on diagnosis in other areas such as reducing corruption or increasing manufacturing productivity which have received only spotty attention from randomizers. The best among randomized trials in development economics are of course informed by some diagnostic process, but curiously, micro-development economists are often not very explicit about the steps needed to identify the most serious failings in a given context. Nor are they very clear about how one narrows a very large list of potential solutions to a smaller number of interventions most likely to be effective” (Rodrik, 2009: 17).
Now there is much to be said about the application of this kind of logic to South Africa’s education system. If you speak to people who actually know what is going on in South African education, you will be surprised how much they will admit to not knowing. Should we switch from mother-tongue instruction to English at grade four or grade six, or go straight-for-English and teach in English from grade one? What is the best method of improving teacher quality in South Africa: short in-service courses at an academic institution, teacher knowledge tests with incentives, or on-the-job training and coaching (to name just a few examples)? What is the best method of raising academic achievement in Grade R and the Foundation Phase: graded readers in an African language? Standardized tests? Teacher training (and if so, what kind)? In all of these instances we really don’t know what the answer is, and these are not trivial questions – they are of the utmost importance.
One of the biggest problems is that we are not willing to experiment and figure out what works. Randomized control trials (RCTs) could help us answer these questions: take a sample of schools (say 300) and randomly allocate 100 to receive graded readers in an African language, 100 where the teachers receive training and coaching, and 100 to a control group (against which the ‘impact’ of the other two arms can be measured). This would help us answer one of the questions above. (Incidentally, this is one of the few – perhaps the only – RCTs that have been proposed for education in South Africa, by Stephen Taylor et al.; it is currently on the drawing board and looking for funding, I think.)
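For the curious, the allocation step described above is mechanically very simple. Here is a sketch of how 300 schools might be randomly split into the three arms – the school IDs are placeholders for illustration, not real identifiers from any actual sampling frame:

```python
import random

# Hypothetical school IDs; in a real trial these would come from the
# sampling frame of eligible schools.
schools = [f"school_{i:03d}" for i in range(300)]

random.seed(42)  # fix the seed so the allocation is reproducible and auditable
random.shuffle(schools)

# Split the shuffled sample into three equal arms of 100 schools each.
arms = {
    "graded_readers": schools[:100],       # graded readers in an African language
    "teacher_coaching": schools[100:200],  # teacher training and coaching
    "control": schools[200:],              # no intervention; baseline for comparison
}

for arm, members in arms.items():
    print(arm, len(members))
```

Real trials usually stratify the randomization (by province, school quintile, baseline scores) before assigning arms, but the core idea – a coin flip deciding who gets the intervention – is exactly this simple.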
One of the reasons why we have so few RCTs underway in South Africa is that they are quite expensive – sometimes between R5 and R10 million, though not always. This is where we need to take a small diversion and emphasize that when you are spending in excess of R200 000 000 000 (R200bn+) on education, as we do in South Africa, allocating at least R150m per year for about 25 RCTs is really just common sense. At the moment I think there is only one education RCT underway in South Africa (looking at the impact of Khan Academy here in the Western Cape), at least that I am aware of. These impact evaluations would allow us to definitively answer questions that we really don’t know the answers to – and, without RCTs, may never know the answers to. Unless we are given the freedom and finances to experiment with reasonable proposals (and implement and test them to high standards), we will never be able to figure out what works. Experimenting on a small scale (a few hundred schools at a time) and figuring out what works first, before going to scale, is much more sensible and cost-effective than simply rolling out untested policies, which is basically our modus operandi at the moment.
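The back-of-the-envelope arithmetic behind those figures (all taken from the paragraph above) is worth making explicit – the proposed evaluation budget is a vanishingly small share of total education spending:

```python
education_budget = 200_000_000_000  # R200bn+ annual education spend
rct_allocation = 150_000_000        # proposed R150m per year for evaluations
n_trials = 25                       # roughly 25 RCTs per year

share_of_budget = rct_allocation / education_budget  # 0.00075, i.e. 0.075%
cost_per_trial = rct_allocation / n_trials           # R6m, within the R5-10m range

print(f"{share_of_budget:.4%}")
print(f"R{cost_per_trial:,.0f} per trial")
```

In other words, for less than a tenth of one percent of the education budget we could run 25 rigorous evaluations a year at about R6m each.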
The need for experimentation in South African education cannot be overstated. The Department, the Presidency and Treasury all need to put their money where their mouths are and get the ball rolling on RCTs – especially in education!
Some other useful links: