
Monday, February 27, 2017

Stat-ception II: How to fix statistics in psychology




I'm a star!

OK, my public speaking skills may not exactly have made me a star (yet!), but I AM on YouTube! I've included a link to my recent (Feb 2017) Cognition Forum presentations on the weaknesses of statistical practice in psychology, as well as my current thinking about easy, immediately implementable solutions to ameliorate those weaknesses.

The first video goes into depth about the issues; the second describes my proposed solutions to those problems.

https://www.youtube.com/playlist?list=PLvPJKAgYsyoKcGOCKEYT2GyzK0yLVXvzN

For your viewing pleasure, I've also embedded the videos here:
 


Any feedback or advice is welcome!

I've also made the slideshows available on Google Drive. Here's the link to the first slideshow, so you can follow along: https://drive.google.com/file/d/0B4ZtXTwxIPrjTktiMGdoQ3JBSHM/view. And here's the link to the slideshow for the second video as well: https://drive.google.com/file/d/0B4ZtXTwxIPrjalZxdFJfUWNKTVU/view?usp=sharing

A draft of my manuscript on the topic (intended for eventual publication) is freely available for download at https://osf.io/preprints/psyarxiv/hp53k/. Since I'm an advocate of the open science movement, it's only right that I make my own work publicly available, which is why I uploaded these videos (and my manuscript) to public repositories.

You may not trust my take on these issues, in which case I commend you for your skepticism! In the videos, I made numerous references to Ziliak & McCloskey (2009), Gigerenzer (2004), and the Open Science Collaboration (2015); all three are worth reading for anyone who cares about scientific integrity and the research process. They were highly influential in my thinking on this topic, though I cite a variety of other papers in the aforementioned manuscript as well.

You may disagree with my recommendations in the second video, and if so, that's okay! How to address the limitations of NHST and fix science is absolutely a discussion worth having; I advance my own ideas in the spirit of jump-starting such a discussion.

So, please put your thoughts in the comments, and share my work with colleagues who may be interested in the topic!

Monday, February 20, 2017

Stat-ception: Everything you think you know about psych stats is wrong!




In the spirit of open science, I have posted a video of a talk on statistical practice that I gave in the Cognition Forum at Bowling Green State University.


This talk was in two parts: the first summarizes many of the common objections to null hypothesis significance testing (NHST) that thinkers have raised over the decades, and the second goes over my current recommendations for tackling the problem.

 

Part I is available at https://youtu.be/JgZZkMJhPvI; Part II is forthcoming! I've also embedded the video right here:

You can view and download the full slideshow at https://drive.google.com/open?id=0B4ZtXTwxIPrjTktiMGdoQ3JBSHM. The free (and very easy-to-use!) statistical program JASP can be found at https://jasp-stats.org/. JASP is useful if you want to run the analysis on the precision-vs-oomph example that I discuss at the end of the video (at the 39:41 mark).
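If you'd like to see the precision-versus-oomph distinction outside of JASP, here's a minimal sketch in Python (with invented numbers, not the data from the example in my talk): a huge sample can make a practically trivial difference come out "statistically significant" even though the effect size is negligible.

    # Minimal sketch of Ziliak & McCloskey's precision-vs-oomph point.
    # The numbers are invented for illustration; this is NOT the example from the talk.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 100_000                                           # very large sample per group
    group_a = rng.normal(loc=100.0, scale=15.0, size=n)   # control group
    group_b = rng.normal(loc=100.3, scale=15.0, size=n)   # tiny true difference (0.3 points)

    t, p = stats.ttest_ind(group_a, group_b)
    pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
    cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

    print(f"p = {p:.2g}")                 # "precision": p lands far below .05
    print(f"Cohen's d = {cohens_d:.3f}")  # "oomph": the effect size is trivial

Run it and you should see a p-value far below .05 alongside a Cohen's d of roughly 0.02: plenty of precision, essentially no oomph.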

I have already tackled some of the issues with NHST on more than one occasion in prior posts here, and I have also provided a practical guide to psych stats as a freely available educational resource!

There are a variety of excellent papers on the topic of statistical practice in social science fields; my working paper on the subject summarizes them. In the interest of open science, I've made this working paper available at https://osf.io/preprints/psyarxiv/hp53k/. Other great resources on the topic include Gigerenzer (2004) and Ziliak & McCloskey (2009), which are also freely available.

Thursday, December 1, 2016

When statistics are meaningless



Low-quality evidence renders statistics meaningless

John Ioannidis, noted canary in the coal mine of bad science, just posted an article on ResearchGate that caught my attention. Late on November 30, 2016, he shared the latest article on which he is a co-author, titled “Diet, body size, physical activity, and the risk of prostate cancer.” Here’s the abstract, and here’s the full article. It reviews the meta-analytic evidence on the risk factors for prostate cancer.

The findings summarized by the abstract? 176 out of 248 meta-analyses used continuous exposure assessment to measure the impact of each factor. Of those 176, none satisfied all of the authors’ pre-set criteria for using the best meta-analytic methods to provide strong evidence of the factors linked to prostate cancer. Not one.

The authors graded the strength of evidence in these meta-analyses according to the following categories: strong, highly suggestive, suggestive, and weak. The only risk factor for developing prostate cancer whose evidence rose above merely “suggestive”? Height.

For every 5 additional centimeters in height, the risk of developing prostate cancer increases by 4%.
  • Quick, somebody, feature the headline:
    Does Being Tall Give You Cancer? Shocking New Research Shows That Taller Men Are More Likely to Develop Prostate Cancer
How are my clickbait-headline-writing skills?
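
To put that 4%-per-5-cm figure in perspective, here's a quick back-of-the-envelope sketch in Python. It assumes the relative risk compounds multiplicatively across 5 cm increments, and the 12% baseline lifetime risk is an illustrative assumption on my part, not a number from the article.

    # Rough illustration of what "4% higher risk per 5 cm of height" implies.
    # Assumes the relative risk compounds multiplicatively per 5 cm increment;
    # the 12% baseline lifetime risk is an illustrative assumption, not from the article.
    baseline_risk = 0.12   # assumed baseline lifetime risk (illustrative only)
    rr_per_5cm = 1.04      # relative risk per 5 additional cm of height

    for extra_cm in (5, 10, 20):
        rr = rr_per_5cm ** (extra_cm / 5)
        print(f"+{extra_cm} cm: relative risk ~ {rr:.3f}, absolute risk ~ {baseline_risk * rr:.1%}")

Under those assumptions, even 20 extra centimeters of height only nudges the absolute risk from about 12% to about 14%, which is part of why headlines like the one above overstate things.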

...Okay, the scientist in me demands that I present a more serious and evenhanded treatment of the topic. So, I’ll report that there is also some evidence suggesting that BMI, weight, dietary calcium, and alcohol intake affect prostate cancer development.

However, the authors did emphasize in the abstract that “...only the association of height with total prostate cancer incidence and mortality presented highly suggestive evidence...” The other factors I listed above are “supported by suggestive evidence.”

But, considering the reflections on the state of biomedical science that Ioannidis published in February of 2016, one wonders just how “suggestive” that evidence really is!

I think this is a good applied example of why an understanding of stats is important in today’s world, and of how easily the competition for funding can corrupt proper scientific procedures! But, lest you think I’m trying to pick on biomedical research, here’s another example that hits frighteningly close to home.

My takeaway: No matter how advanced your statistical techniques or how powerful your software, statistics are meaningless when the evidence itself is biased...
