It is also worth noting that an effect size cannot be weighed in isolation; the resources, effort, and time an instructional strategy demands matter too. For example, if a school opted for smaller classes, which carry a medium effect size, it would need to consider the money, staffing, and time required to achieve that result. By comparison, teacher feedback (done well) makes much more sense, as it yields a higher effect size with far less effort and fewer resources.
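For readers unfamiliar with the term, the effect size in question is Cohen's d: the difference between two group means divided by their pooled standard deviation, conventionally read as roughly 0.2 = small, 0.5 = medium, and 0.8 = large (Hattie uses d = 0.4 as his "hinge point" for a worthwhile intervention). Here is a minimal sketch in Python, with made-up summary numbers purely for illustration:

```python
def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's d from summary statistics: mean difference over pooled SD."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_b - mean_a) / pooled_var**0.5

# Hypothetical example: a treatment group scores 5 points higher on a test
# whose SD is about 10, giving d = 0.5 -- "medium" by Cohen's conventions
# (0.2 small, 0.5 medium, 0.8 large).
print(cohens_d(60, 10, 100, 65, 10, 100))  # ~0.5
```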
However, the validity of Hattie’s effect sizes has come under numerous attacks lately, with critics accusing his numbers of being skewed. I was aware of such criticism before, but it has become more salient recently through several publications and discussions on Twitter.
Is @john_hattie wrong about effect sizes? @RobertSlavin critiques the evidence. https://t.co/XctD27z0J2 via @MrRooBKK - @Visible Learning @Jenni_Donohoo @MichaelFullan1— Andy Hargreaves (@HargreavesBC) June 23, 2018
Thanks for sharing @Jenni_Donohoo ! The issue I see is that it is Hattie defending Meta-Analysis. What other research supports this method and why is qualitative research not included in the analysis?— Dr. Derrick Cameron (@DerrickJCameron) June 24, 2018
Because of the lit he draws on, much of what Hattie says we already knew. A sig issue is b/c of the packaging / commercialization it is taken up uncritically by systems/schools & his name is used to invoke a sense of authority https://t.co/QEVGbd3Srn— Scott Eacott (@ScottEacott) June 24, 2018
These Twitter conversations were prompted by Dr. Robert Slavin’s (currently Director of the Center for Research and Reform in Education at Johns Hopkins University and Chairman of the Success for All Foundation) post John Hattie is Wrong, in which he accuses Hattie of biasing his findings by accepting “the results of the underlying meta-analyses without question”, even though “most meta-analyses accept all sorts of individual studies of widely varying standards of quality”. He adds that
To create information that is fair and meaningful, meta-analysts cannot include studies of unknown and mostly low quality. Instead, they need to apply consistent standards of quality for each study, to look carefully at each one and judge its freedom from bias and major methodological flaws, as well as its relevance to practice. A meta-analysis cannot be any better than the studies that go into it. Hattie’s claims are deeply misleading because they are based on meta-analyses that themselves accepted studies of all levels of quality.
Towards the end of the post, Dr. Slavin refers to his own Evidence Syntheses (www.bestevidence.org) as reliable and unbiased. Was he trying to promote his own work? Does he feel that Hattie’s research, and its commercialization, have overshadowed his own efforts at reliable meta-analysis?
However, Dr. Slavin’s post was not the only criticism. Earlier, in 2017, an academic article by Pierre-Jérôme Bergeron, published in the McGill Journal of Education and titled How to Engage in Pseudoscience with Real Data: A Criticism of John Hattie’s Arguments in Visible Learning from the Perspective of a Statistician, likened Hattie’s research to “a fragile house of cards that quickly falls apart”. He claims that Cohen’s d (Hattie’s measure of effect size) simply cannot be used as a universal measure of impact. He even charged that Hattie’s followers and believers are blind, and that promoting his work amounts to promoting pseudoscience (which reminds me of NLP).
To believe Hattie is to have a blind spot in one’s critical thinking when assessing scientific rigor. To promote his work is to unfortunately fall into the promotion of pseudoscience. Finally, to persist in defending Hattie after becoming aware of the serious critique of his methodology constitutes willful blindness.
He says that Hattie, though not afraid of numbers, has made many errors; the two most salient ones are:
- Miscalculations in meta-analyses
- Inappropriate baseline comparisons
He insists that an effect size, despite having no unit, is a relative measure: it provides a comparison to a set, group, or baseline population, even if that baseline is only implicit. Comparing two independent groups is not the same as comparing grades before and after an intervention implemented with the same group, and he argues that Hattie’s comparisons are arbitrary and that Hattie is completely unaware of it.
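To make Bergeron’s point concrete, here is a toy simulation (the data and numbers are entirely invented, taken from neither author): the same 5-point average improvement produces a much larger "effect size" when computed as a pre/post gain within one group than when computed between two independent groups, so pooling the two kinds of numbers onto one scale compares incomparables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two INDEPENDENT groups, treatment ~5 points higher.
control   = rng.normal(60, 10, 200)
treatment = rng.normal(65, 10, 200)

def d_independent(a, b):
    """Cohen's d between two independent groups (pooled SD)."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (b.mean() - a.mean()) / pooled_sd

# The SAME ~5-point average gain, but measured pre/post on ONE group:
pre  = rng.normal(60, 10, 200)
post = pre + rng.normal(5, 4, 200)   # gains are correlated with pre scores

def d_pre_post(pre, post):
    """One common (naive) pre/post 'effect size': mean gain over the SD of
    the gain scores. Because pre and post are highly correlated, the gain
    SD is small, which inflates d relative to the independent-groups case."""
    gain = post - pre
    return gain.mean() / gain.std(ddof=1)

print(f"independent-groups d: {d_independent(control, treatment):.2f}")  # ~0.5
print(f"pre/post gain d:      {d_pre_post(pre, post):.2f}")              # ~1.25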
He also points to the arbitrariness of Hattie’s “barometer”. In addition to mixing multiple, incompatible dimensions, he says, Hattie confounds two distinct populations: 1) the factors that influence academic success and 2) the studies conducted on those factors.
He concludes his article with a call for researchers in education and education departments to consult competent statisticians instead of doing the statistics themselves.
Another fierce criticism of Hattie’s work (and of Hattie himself) came in a 2017 academic article by Scott Eacott titled School leadership and the cult of the guru: the neo-Taylorism of Hattie. In it, Eacott contends that Hattie’s work in Visible Learning “has become not only the latest fad or fashion, almost to the point of saturation, but reached a level where it can now be labelled the ‘Cult of Hattie’”. He also claims that Hattie has deliberately discarded context, a very powerful influence on student achievement. Eacott’s attack, unlike Bergeron’s, does not target the statistical errors in Hattie’s work; instead, he claims that three things allowed Hattie to build his Visible Learning empire (Eacott localizes Hattie’s influence to Australia):
1- Specific temporal conditions enabled the rise of Hattie’s work across the nation (Australia)
2- Unlike past attempts at pedagogical reform, Hattie’s work provides school leaders with data that appeal to their administrative pursuits
3- Hattie articulated a new image of school leadership (casting himself as its savior)
What do you think? Is Hattie’s “Holy Grail” the wrong one? A pseudoscience? Or are these critics, among many others, just being, well, critics?