
Tuesday, May 29, 2018

Lies, Damned Lies, and Statistics




In an interesting post, Michael Batnick, the Irrelevant Investor, makes a critical point about the oft-overlooked limitations of data in the world of behavioral finance: http://theirrelevantinvestor.com/2018/04/04/the-limits-to-data/

Using Excel shows you how a robot should allocate its lottery winnings.
It doesn't show you that 70% of human lottery winners go bankrupt.

Darwin famously didn't trust complicated mathematics ("I have no faith in anything short of actual measurement and the Rule of Three," he wrote in a letter). He wasn't wrong: complex procedures can obscure what's going on 'under the hood.' This can render a formula's weaknesses virtually invisible.

Have you heard about the studies showing that adding irrelevant neuroscientific information to a research summary makes people rate its conclusion as more credible? The same seems to go for math: when people see complex, technical-looking information, they often prefer to simply believe it rather than evaluate it critically.

http://www.bcps.org/offices/lis/researchcourse/images/statisticsirony.gif
 By Signe Wilkinson, for the Philadelphia Daily News


Monday, February 27, 2017

Stat-ception II: How to fix statistics in psychology



Stat-ception Part II

I'm a star!

OK, my public speaking skills may not exactly have made me a star (yet!), but I AM on YouTube! Below is a link to my recent (Feb 2017) Cognition Forum presentations, which cover both the weaknesses of current statistical practice and my current thinking about easy--and immediately implementable--solutions to those weaknesses.

The first video goes into depth about the issues; the second describes my proposed solutions to those problems.

https://www.youtube.com/playlist?list=PLvPJKAgYsyoKcGOCKEYT2GyzK0yLVXvzN

For your viewing pleasure, I've also embedded the videos here:
 


Any feedback or advice is welcome!

I've also made the slideshows available on Google Drive. Here's the link to the first slideshow, so you can follow along: https://drive.google.com/file/d/0B4ZtXTwxIPrjTktiMGdoQ3JBSHM/view. And here's the link to the slideshow for the second video as well: https://drive.google.com/file/d/0B4ZtXTwxIPrjalZxdFJfUWNKTVU/view?usp=sharing

A draft of my manuscript on the topic (intended for eventual publication) is freely available for download at https://osf.io/preprints/psyarxiv/hp53k/. Since I'm an advocate of the open science movement, it's only right that I make my own work publicly available--which is why I uploaded these videos (and my manuscript) to public repositories.

You may not trust my own take on these issues, in which case I commend you for your skepticism! In the videos, I made numerous references to Ziliak & McCloskey (2009), Gigerenzer (2004), and Open Science Collaboration (2015)--all are worth reading, for anyone who cares about scientific integrity and the research process. All three works were highly influential in my thinking on this topic, though I cited a variety of other papers as well in my aforementioned manuscript.

You may disagree with my recommendations in the second video, and if so, that's okay! How to address the limitations of NHST and fix science is absolutely a discussion worth having; I advance my own ideas in the spirit of jump-starting such a discussion.

So, please put your thoughts in the comments, and share my work with colleagues who may be interested in the topic!

Monday, February 20, 2017

Stat-ception: Everything you think you know about psych stats is wrong!




In the spirit of open science, I have posted a video of a talk on statistical practice that I gave in the Cognition Forum at Bowling Green State University.


The talk was in two parts: the first summarizes many of the common objections to null hypothesis significance testing (NHST) that thinkers have raised over the decades, and the second goes over my current recommendations for tackling the problem.

 

Part I is available at https://youtu.be/JgZZkMJhPvI; Part II is forthcoming! I've also embedded the video right here:

You can view and download the full slideshow at https://drive.google.com/open?id=0B4ZtXTwxIPrjTktiMGdoQ3JBSHM. The free (and very easy-to-use!) statistical program JASP can be found at https://jasp-stats.org/. JASP is useful if you want to run the analysis on the precision-vs-oomph example that I discuss at the end of the video (at the 39:41 mark).
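To get a feel for the precision-vs-oomph distinction in concrete terms, here is a toy sketch (not the JASP analysis from the video; the sample size and effect size below are made up for illustration) showing how a huge sample can make a trivially small effect "statistically significant" under NHST, even though its size--its oomph--is negligible:

```python
# Toy illustration of "precision vs. oomph" (Ziliak & McCloskey's terms):
# with a large enough sample, a negligible effect still yields p < .05.
import math
import random

random.seed(42)

n = 100_000
# Two groups whose true means differ by a tiny 0.05 standard deviations.
a = [random.gauss(0.00, 1.0) for _ in range(n)]
b = [random.gauss(0.05, 1.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t statistic: the "precision" side of the story.
se = math.sqrt(var(a) / n + var(b) / n)
t = (mean(b) - mean(a)) / se

# Cohen's d: the size of the effect in SD units -- the "oomph" side.
d = (mean(b) - mean(a)) / math.sqrt((var(a) + var(b)) / 2)

print(f"t = {t:.2f}  (|t| > 1.96 implies two-tailed p < .05)")
print(f"Cohen's d = {d:.3f}  ('small' conventionally starts at 0.2)")
```

The t statistic sails past the significance threshold while Cohen's d stays well below even the conventional "small" cutoff--exactly the gap between statistical and practical significance that the video's example is about.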

I have already tackled some of the issues with NHST on more than one occasion in prior posts here, and I have also provided a practical guide to psych stats as a freely available educational resource!

There are a variety of excellent papers on the topic of statistical practice in social science fields; my working paper on the subject summarizes them. In the interest of open science, I've made this working paper available at https://osf.io/preprints/psyarxiv/hp53k/. Other great resources on the topic include Gigerenzer (2004) and Ziliak & McCloskey (2009), which are also freely available.

Wednesday, December 7, 2016

The Simple Life: Graphs to aid understanding of the results



In my first publication, my advisor and I created graphs according to the guidelines listed on the Judgment and Decision Making journal's webpage. We both generally prefer to see results presented visually, in the form of easily understood, properly labeled graphs. However, in correspondence, JDM's editor felt that the results were simple enough to understand and that the graphs therefore weren't really necessary.

I think the editor's reasoning was that:
a) not including the graphs would save space, and
b) including the graphs would probably involve a lot of difficult, time-consuming formatting.

I certainly don't blame the editor for advising us to exclude the graphs; I understand perfectly where he's coming from!

So, to save time, space, and effort, our graphs were never published. Until now.

The graph for Experiment 1, which fits best with the information presented near the top of p. 305, presents the proportion of participants' decisions that were consistent with pre-exposure (bar on left) and with the recognition heuristic (as determined by participants' self-reported recognition; bar on right). Error bars represent 95% confidence intervals.



The graph for Experiment 2, below, illustrates the data presented on pp. 307-308. The dots represent the mean proportion of choices consistent with the recognition heuristic for participants in each of the three training conditions. Again, error bars indicate 95% confidence intervals, which I recommend reporting as part of a more complete picture of your data.
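If you want to compute intervals like these yourself, here is a minimal sketch (not the paper's actual analysis code, and the counts are hypothetical) of a normal-approximation 95% confidence interval for a proportion, the kind of quantity plotted in these graphs:

```python
# Minimal sketch: 95% Wald confidence interval for a proportion,
# e.g. the proportion of heuristic-consistent choices shown as a bar.
import math

def proportion_ci(successes, n, z=1.96):
    """Return (lower, upper) bounds of a 95% normal-approximation CI."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)  # half-width of the interval
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical numbers: 72 of 90 choices consistent with the heuristic.
lo, hi = proportion_ci(72, 90)
print(f"proportion = 0.80, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The Wald interval is the simplest choice and works fine away from 0 and 1; for proportions near the boundaries, a Wilson or exact interval behaves better.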

Here's a link to the full-size .jpg image for the first graph, and here's the link for the second graph.
