
[–]WickedWitchOfTheWest

[Freddie deBoer] What Do We Do with Education Research?

I’ve thought about that conversation often in the years since. I am torn between my general edunihilism and the persuasiveness of his general point, on the one hand, and the sense that an absence of rigorous research couldn’t possibly be better than the flawed research we currently have, on the other. What I am left with is the question of whether it’s possible to meaningfully sort more certainty from less, without treating work that produces less certainty as inherently of lower value than that which produces more, when actual practicing researchers will always have direct professional incentive to represent their work as more definitive. I also wonder whether any of our findings will be truly generalizable, or if the remarkable diversity in contexts and student populations found across schooling makes that impossible. And I wonder if teachers will ever really implement pedagogical techniques we find to be more effective, if such a thing exists, when they will often find their own lived experience contrary to what researchers say, to say nothing of the turf wars and culture issues between practitioners and researchers.

It all seems like a mess, to be frank. But I do think there is little choice but to keep going and to try to get a little better over time - while accepting that, for reasons of both methods and the underlying reality, effect sizes will usually remain small.

To render things in convenient list form: what are the issues with education research?

Methodological and data issues. Small effects, big variance, lots of endogeneity, lots of confounds; available samples are frequently systematically dissimilar from the general population; true randomization is difficult or impossible in many contexts, and randomization is bogus in many others… In sheer analytical terms, this is all quite difficult.
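A back-of-the-envelope sketch of why small effects are so analytically punishing (an illustration added here, not from the essay): Lehr's rule of thumb for a two-sample comparison at a 0.05 significance level and 80% power says you need roughly 16/d² subjects per arm, where d is the standardized effect size (Cohen's d). The small effects typical of education interventions push required samples into the thousands.

```python
import math

# Lehr's approximation: sample size per group for a two-sample t-test
# at alpha = 0.05 (two-tailed) and 80% power, where d is Cohen's d.
def n_per_group(d):
    return math.ceil(16 / d ** 2)

for d in (0.5, 0.2, 0.1):
    print(f"d = {d}: ~{n_per_group(d)} students per arm")
# A "medium" effect (d = 0.5) needs ~64 per arm; a d = 0.1 effect,
# common in ed research, needs ~1600 per arm before attrition.
```

This is only a rule of thumb, but it makes the point: the effects worth detecting are exactly the ones that demand samples few school-based studies can recruit.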

Publication and replication issues. All of the conditions that afflict psychology in its replication/p-value crisis apply to education research, potentially even more damagingly. Very often ed researchers have big ol’ spreadsheets with tons of demographic and school variables that they can then quickly correlate with output variables like test scores or GPA, which makes data snooping tempting - particularly given that you need to publish to get hired and get tenure and you need to get a significant finding to get published. And unlike psychological research, which frequently has limited real-world valence, the now widely-discussed issues with p-value hacking and publication bias can have large (and expensive) consequences in ed research, because policymakers are drawing inferences from research that they then use to make decisions that result in the deployment of a lot of public resources.
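To make the data-snooping temptation concrete (a toy simulation with made-up noise, not real school data): if you correlate a few dozen demographic columns against an outcome and keep whatever clears p < 0.05, you will reliably "find" effects even when every variable is pure noise - about 5% of them, by construction.

```python
import math
import random

random.seed(1)

def pearson_r(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

n_students, n_predictors = 200, 40
# Pure noise: no predictor has any true relationship to the outcome.
outcome = [random.gauss(0, 1) for _ in range(n_students)]
predictors = [[random.gauss(0, 1) for _ in range(n_students)]
              for _ in range(n_predictors)]

# Standard large-sample significance threshold for r at alpha = 0.05,
# two-tailed: |r| > 1.96 / sqrt(n).
threshold = 1.96 / math.sqrt(n_students)
hits = [i for i, p in enumerate(predictors)
        if abs(pearson_r(p, outcome)) > threshold]
print(f"{len(hits)} of {n_predictors} noise predictors look 'significant'")
```

Run enough uncorrected comparisons and a publishable-looking correlation is close to guaranteed - which is exactly the big-spreadsheet failure mode described above.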

Conflicting results facilitate selective reading. Because there are so many conflicting data and contradictory studies, you can always build the narrative you want by choosing the data that supports your position and ignoring the data that does not.

Institutional capture and optimism bias. Education research is predominantly funded by institutions that are hungry for positive results - positive effects that are purported to derive from implementable pedagogical or administrative changes which would, supposedly, start to “move the needle.” The increasingly brutal competition in academia for tenure track lines makes access to grant funding only more vital over time, and the people who control the purse strings don’t want to hear negative results. There are committed pessimists within the ed research world, but very few of them are pre-tenure or otherwise lacking in institutional security. The Gates Foundation, by sheer size alone, disciplines researchers against speaking plainly about negative findings and subtly influences the entirety of the published research record. In a very real sense the dominant ideology of the educational research world simply is the ideology of the foundations, and this is not healthy.

Accurately measured but controversial conclusions. The relationship between SAT scores and socioeconomic status is a classic example: while usually exaggerated, the correlation between SES and SAT scores is real. This is often used as an argument to dismiss the test as invalid. But in fact there is also an SES effect in GPA, graduation rates, state standardized tests, etc., which tells us that rather than being evidence of a flawed test, the correlation is a reflection of the uncomfortable fact that students from wealthier families actually are more college-prepared than students from poorer ones. The reasons for this are complex, but the idea that the test must be inaccurately measuring the intended construct because the outcomes say unpleasant things is obviously wrongheaded. Yet this dynamic permeates educational research and policy. Consider research which shows that, when looked at longitudinally using the kind of fixed-effects models that can help adjust for the limitations of purely correlational analyses, suspending students from school has weakly but significantly positive effects on their academic outcomes. It’s fair to say many people would not welcome this research’s conclusions. This is, again, consequence-laden in a way the latest stupid fad in psychology research is not. This kind of finding can prompt the kind of controversy that can, in turn, ruin a young career. Education is a sensitive subject, and sensitivity makes clear thinking in research much more difficult.
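For readers unfamiliar with the fixed-effects models mentioned above, here is a toy sketch of what they buy you (fabricated data for illustration only, not the actual suspension studies): each simulated student has an unobserved stable trait that drives both the regressor and the outcome, so a naive pooled regression finds a large spurious effect. Demeaning each variable within student - the "within" transformation - wipes out any time-invariant trait and recovers the true (zero) effect.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy panel: 200 students, each observed for 4 terms. An unobserved
# stable trait raises both the regressor x and the outcome y; the
# true causal effect of x on y is set to zero.
rows = []
for s in range(200):
    trait = random.gauss(0, 1)
    for _ in range(4):
        x = trait + random.gauss(0, 1)
        y = 2 * trait + 0.0 * x + random.gauss(0, 1)
        rows.append((s, x, y))

def ols_slope(pairs):
    """Simple one-variable OLS slope: cov(x, y) / var(x)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    num = sum((x - mx) * (y - my) for x, y in pairs)
    den = sum((x - mx) ** 2 for x, _ in pairs)
    return num / den

# Naive pooled regression: biased upward by the omitted trait.
pooled = ols_slope([(x, y) for _, x, y in rows])

# Within ("fixed effects") transformation: demean x and y per student,
# removing any time-invariant student characteristic.
by_student = defaultdict(list)
for s, x, y in rows:
    by_student[s].append((x, y))
demeaned = []
for obs in by_student.values():
    mx = sum(x for x, _ in obs) / len(obs)
    my = sum(y for _, y in obs) / len(obs)
    demeaned.extend((x - mx, y - my) for x, y in obs)
fe = ols_slope(demeaned)

print(f"pooled slope: {pooled:.2f}   fixed-effects slope: {fe:.2f}")
```

The pooled slope comes out near 1 despite a true effect of zero, while the fixed-effects slope sits near zero - which is why longitudinal designs of this kind are more credible than purely cross-sectional correlations, even though they still cannot adjust for traits that change over time.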