
[–]3andfro 2 insightful - 1 fun (0 children)

Abstracts don't provide enough info to ascertain the quality of the study; they're like academic click-bait to give you an idea whether you want to read the whole thing. The actual data that give a clue about the reliability of the study's methodology, analysis, and conclusions aren't there.

I'm not surprised about the influence of "mathiness."

[–]penelopepnortney Become ungovernable [S] 1 insightful - 1 fun (0 children)

I just came across a fun article that may well explain quite a lot.

It selected 200 American adults with advanced degrees. [Graphic shows the % breakdown of participants with a Masters or PhD in humanities/social sciences, medicine, mathematics, natural science/technology, and other (e.g., education).]

They were asked to read study abstracts and rate their quality on a scale of 0-100. The study used the same two abstracts for all participants...

The manipulation always consisted in the addition of the following [meaningless mathematical formula] at the end of the abstract.

And adding “mathiness” to an abstract caused people to rate it more highly in inverse proportion to how much they knew about math and models.

For humanities/social sciences the difference in ratings was large and stat sig (p<0.01), and for other (including education) it was twice as big and also stat sig (p<0.01). That’s a helluva p value for an N this small. This seems to be a VERY pronounced effect, and it gets worse the less expert in math you are.
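
For a sense of what a p value like that means here, below is a minimal sketch in Python with made-up numbers (not the study's data; the group sizes, means, and spreads are my assumptions) of the kind of two-sample test that could produce "stat sig at p<0.01" for a rating difference like this.

```python
# Hypothetical illustration only: compare 0-100 quality ratings for the plain
# abstract vs. the same abstract with a meaningless formula appended,
# within one degree group. Numbers are invented, not from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_per_group = 50                          # assumed split of the 200 participants
plain = rng.normal(60, 15, n_per_group)   # ratings of the unaltered abstract
mathy = rng.normal(70, 15, n_per_group)   # ratings with the nonsense formula added

# Welch's t-test (does not assume equal variances between groups)
t, p = stats.ttest_ind(mathy, plain, equal_var=False)
print(f"mean difference = {mathy.mean() - plain.mean():.1f} points")
print(f"t = {t:.2f}, p = {p:.4f}")  # a ~10-point gap at this spread lands well below p = 0.01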