
Sunday, April 21, 2013

Call Me Suspicious: Ravitch, Chap 3

I get so frustrated with educational research where the results of one study seem to show one thing, only to be refuted by another study. Is it the nature of education, is it poorly designed studies, or is it all ideologically slanted from the get-go?

In speaking of gains in District 2 in NYC, Ravitch says "little attention has been paid to the remarkable economic boom and demographic changes in the district during the 1990s. These shifts surely influenced the district's educational gains." She goes on to say there were demographic changes in District 2 that were only revealed in a census several years later: the proportion of white students went up while the proportions of African American and Hispanic students went down. She suggests these changes were the cause of the reported educational gains.
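
To see why that matters, here is a purely hypothetical sketch of the arithmetic (all numbers invented, not drawn from any District 2 data): when higher-scoring groups make up a larger share of enrollment, the district-wide average rises even if no individual group improves at all.

    # Hypothetical illustration of a composition effect: subgroup
    # averages stay flat, but the enrollment mix shifts toward the
    # higher-scoring group, so the overall average still rises.

    def district_average(group_means, group_shares):
        """Enrollment-weighted average score across demographic groups."""
        return sum(mean * share for mean, share in zip(group_means, group_shares))

    group_means = [75, 60]       # average score per group (unchanged over time)
    shares_1987 = [0.40, 0.60]   # invented enrollment shares, before the shift
    shares_1995 = [0.55, 0.45]   # invented enrollment shares, after the shift

    print(district_average(group_means, shares_1987))  # 66.0
    print(district_average(group_means, shares_1995))  # 68.25

In this made-up example the district average climbs more than two points with no change at all in how any group performs, which is exactly the kind of gain Ravitch suggests a demographic shift could manufacture.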

While the census may not have documented the demographic change until years later, the schools certainly knew it immediately. Schools know how many kids come in each day, and they collect demographic data on them. When Alvarado's statisticians went to calculate average test scores for the various demographic groups (which surely they did), they knew how many students were in each group in 1987 and how many there were in 1995.

So did Alvarado and the original District 2 researchers suppress the demographic differences in their reports? Or is Ravitch casting doubt by "revealing new information" that surely wasn't new to the school district?

1 comment:

  1. I found myself asking these very same questions.
    Unfortunately this kind of issue plagues social research. It's just too easy in many cases to let confirmation bias override the empirical process. Researchers always want to feel like they stay neutral, but it isn't uncommon for them to ignore changing inputs as their research continues. It's easier to assume all things stay the same. Think about all the times you hear reports where they talk about 'controlling' for ten million factors and STILL they won't suggest anything more than a loose correlation rather than anything causal. The reason? Because there may be ten million more factors they didn't control for, OR the factors they controlled for weren't having the effect they defined them as having.

    The Student Motivation article in 480 does a decent job of showing researchers' attempt to rule out the subtle difference of whether students benefit from more teacher support OR less stress... and even then, they are the ones who have to define the measures used to evaluate these things.

    I don't necessarily believe that the researchers for District 2 were being purposefully misleading, but at the same time I think you're correct in pointing out the matter of changing demographics.

    They would have been aware of this and should have adjusted their methods accordingly. Why they didn't is a mystery to me, but, as I said, it's not super uncommon in the social sciences, and when it happens it usually turns out to be more of a wishful-thinking oversight than anything else.
