
Tuesday, December 22, 2020

Aliens by 2035?


 
A credulous article about SETI claims that we'll find aliens "soon": http://nautil.us/blog/why-well-have-evidence-of-aliensif-they-existby-2035

Oh? Seth Shostak will bet me a cup of coffee that we'll find aliens by 2035?

If I were a betting man, I'd take that bet. And he might as well buy it for me now, before inflation brings the price of that coffee to $20 or more!

There are many, many problems with Dr. Shostak's piece.
  1. "I'm optimistic by nature--as a scientist, you have to be."
           -Nope. As a scientist, you have to be skeptical. Popperian falsification, anyone? Optimism--or lack thereof--is irrelevant.
     
  2. "Given the current state of SETI efforts and abilities, I feel that we're on the cusp of learning something truly revolutionary."
           -This is going to sound quite crass, but it's true: nobody cares what Dr. Shostak--or anyone else--feels might happen in the future. Especially when that possibility is so vaguely stated!
           "...learning something truly revolutionary" could refer to anything from finding an extraterrestrial civilization to discovering that magnetic fields are regulated by microscopic men riding microscopic stationary bikes.
           And upon what is this assertion based? A feeling. Very scientific.

  3. "Most of our experiments so far have used large radio antennas in an effort to eavesdrop on radio signals transmitted by other societies..."
           -What if they don't want us to listen to them? Characterizing the SETI search this way is very disturbing, even if this was nothing more than a poor choice of words. If the people who are involved in this program see it as a cosmic game of peeping-tom, that raises the question of whether it's morally right to be pursuing this topic!
     
  4. [quote continued from #3] "...an approach that was dramatized by Jodie Foster in the 1997 movie Contact."
           -Mmm, a made-for-Hollywood scientific endeavor. Because serious science is always fit to be summarized and dramatized in 2 hours.
           This can be forgiven, though, because the search for aliens is inherently interesting.
__

Michael Crichton--yes, the science-fiction author--made some insightful observations about the sociology of science in his famous Caltech lecture in 2003. In that talk, he blasted the apparent mathematical legitimacy of SETI--the Drake equation--as "meaningless." You can read a transcript of that talk here:
http://stephenschneider.stanford.edu/Publications/PDF_Papers/Crichton2003.pdf
__

Ultimately, SETI is at least partly a PR move. Dr. Shostak discusses his belief that SETI can aid science literacy in a 2002 article: http://adsabs.harvard.edu/full/2004IAUS..213..535S  I would particularly like to point you to page 2, where Dr. Shostak writes "Because of their emotional content, the media can generate excitement for science."

But let's never allow that goal--to get kids excited about science--to compromise the integrity of what makes science so useful.

At its core, SETI needs money to continue the search for extraterrestrial intelligence. In order to attract funding, it needs to generate publicity. And it generates publicity by saying things like 'we'll find aliens by 2035.'

Unfortunately, the more mundane truth is that we don't know a) the nature of what we're looking for, b) when it will arrive (or whether it will arrive), c) if we'll even know a signal when we detect it, or d) if any such signals even exist!

This fact--that we're blindly groping around a pitch-black desert in hopes of stumbling across a pool of water--doesn't make for good headlines. But it has the virtue of being the truth.

__

Despite all the above curmudgeonly objections, I'd personally contribute some funding to SETI if I ever become a multi-billionaire! But to be brutally honest with myself, the probability of that actually happening is smaller than the chance of a pointy-eared humanoid descending from a spaceship and bidding me, "Live long and prosper."


***
This New Scientist article reports on a mathematical case that we may never detect extraterrestrial signals--not even if they cover half of our galaxy!

Shostak has been spoonfeeding this optimistic prediction to media outlets for years. I give him credit for sticking with his initial prediction of contact by ~2035, rather than moving the goalposts. But if 2035 comes and goes with no alien signals yet, Dr. Shostak will be in his early 90s...if he's even still around to see whether or not his prediction has come true. And if it hasn't, I seriously doubt that anyone will criticize a man of such advanced age. But the history of futurists is littered with disappointment; I have no reason to suspect that this prediction will be any different.

If you're really down on futurism and/or want to read a morose counterpoint to the exuberant optimism of most predictions about the future, read this insightful takedown of the whole concept of predicting the future.

***
Wondering about the social media usage of actual college students? 
Check out the results of this totally informal--but real--survey.

In case you missed it, I review some fantastic, easy-to-use, and FREE stats programs here.
For more help explaining statistical concepts and when to use them, please download my freely available PDF guide here!
https://drive.google.com/open?id=0B4ZtXTwxIPrjUzJ2a0FXbHVxaXc

Monday, July 13, 2020

Around Academia




The first in a roundup series that I've decided to call "Around Academia."

Is 'self-care' just another way of policing people's thoughts, by compelling them to feel happy? Or might it be a cynical marketing ploy to sell products? https://www.coyneoftherealm.com/blogs/news/the-tyranny-of-self-care-this-year-s-model-of-compulsive-happiness

Are early-career female researchers getting due credit for their work? https://www.coyneoftherealm.com/blogs/news/rising-early-career-female-academics-and-second-to-last-authorship
  • Some advice, whether the assertion linked above is true or not: Don't be a jerk. Give people due credit!
On a related note: should we publish fewer papers? Nelson, Simmons, and Simonsohn make a compelling case: http://opim.wharton.upenn.edu/DPlab/papers/publishedPapers/Simmons_2013_Lets%20Publish%20Fewer%20Papers.pdf
  • I can't resist including this quote from page 292: "Under the current system, researchers are heavily rewarded for having new and exciting ideas and only vaguely rewarded for being accurate. Researchers are trained to defeat the review process and conquer the publisher. Uncovering a new and true insight is quite helpful in that process, but it is hardly necessary."

    Yikes. A savage indictment of the publication process as it currently operates (rather than as it exists in its theoretical/ideal form)!
Are yoga and mindfulness simply fads with more hype than substance? http://blogs.plos.org/mindthebrain/2017/07/19/creating-illusions-of-wondrous-effects-of-yoga-and-meditation-on-health-a-skeptic-exposes-tricks/


Tuesday, May 29, 2018

Lies, Damned Lies, and Statistics




In an interesting post, Michael Batnick, the Irrelevant Investor, makes a critical point about the oft-overlooked limitations of data in the world of behavioral finance: http://theirrelevantinvestor.com/2018/04/04/the-limits-to-data/

Using Excel shows you how a robot should allocate its lottery winnings.
It doesn't show you that 70% of human lottery winners go bankrupt.

Darwin famously didn't trust complicated mathematics ("I have no faith in anything short of actual measurement and the Rule of Three," he wrote in a letter). He wasn't wrong: complex procedures can obscure what's going on 'under the hood.' This can render a formula's weaknesses virtually invisible.

Have you heard about the studies showing that irrelevant neuroscientific information in a research summary makes people rate the conclusion as more credible? The same seems to go for math—when people see some complex, technical information, they'd often rather just believe it instead of thinking critically.

[Cartoon by Signe Wilkinson, for the Philadelphia Daily News]
http://www.bcps.org/offices/lis/researchcourse/images/statisticsirony.gif


Wednesday, February 28, 2018

Peer Review On Trial



Peer Review Gone Wrong
or, A Cynic's Manifesto

This behavior should get your scientific knickers in a knot: https://theintercept.com/2018/02/23/3m-lawsuit-pfcs-pollution/

Essentially, 3M used perfluorinated chemicals in the production of some of their products (like Scotchgard). Research has demonstrated that such chemicals may harm humans, other organisms, and the environment when they seep into the ground water.

The State of Minnesota sued 3M for this; 3M settled the case for less than 1/5 the amount of the lawsuit, and all without admitting wrongdoing. And 3M had the gall to release a statement describing the settlement as "consistent with 3M’s long history of environmental stewardship."

With 'environmental stewards' like that, who needs polluters?...

Just as outrageously (perhaps more so, if you're an early-career researcher!), this article points out that Minnesota's lawsuit named a widely-published professor, Dr. John Giesy, who allegedly took money from 3M while putting obstacles to publication in the path of scientists whose work showed that perfluorinated chemicals can be dangerous.

The state's evidence includes this e-mail, as well as this particularly damning one, which shows that Giesy knew perfectly well that he was protecting 3M and its interests.

***

Friday, April 14, 2017

Guides to pre-registered experiments



Guides to methodological pre-registration for experiments

If you're like me, you've considered doing a pre-registered study but put it off because you weren't sure what to expect or how much paperwork there would be. I've become a big proponent of open science [I'm working on a future blog post on the topic], as I think it's crucially important to make your materials, data, conclusions, etc. available to other researchers and to the wider public!

As Simmons, Nelson, & Simonsohn (2017) wrote, pre-registration and full methodological disclosure are crucial to the credibility of psychological research. If we want to be taken seriously as scientists, we should behave in accordance with the highest standards of scientific integrity...and that includes pre-registering our studies.

Why pre-registration? Two big reasons: 1) it prevents us from fooling ourselves about our own research findings, and 2) it gives us something we can point to and say "Yes, I planned to do that all along!"

And if we didn't actually plan it all along, it forces us to face the facts—which should serve to keep us humble.

Despite what some people may think, increased transparency is good for science. Period. Sanjay Srivastava gives the topic a thoughtful treatment here, and I agree with him. Our priority should be high-quality science first, PR/funding concerns a DISTANT second. If we have good science in the first place, many of the other concerns will evaporate.

So, here are two resources to give you a good overview of the pre-registration process, and to guide you through what's required:

Monday, February 27, 2017

Stat-ception II: How to fix statistics in psychology



Stat-ception Part II

I'm a star!

OK, my public speaking skills may not exactly have made me a star (yet!), but I AM on YouTube! I've included a link to my recent (Feb 2017) Cognition Forum presentations, which cover the weaknesses of common statistical practice as well as my current thinking about easily and immediately implementable solutions to ameliorate those weaknesses.

The first video goes into depth about the issues; the second describes my proposed solutions to those problems.

https://www.youtube.com/playlist?list=PLvPJKAgYsyoKcGOCKEYT2GyzK0yLVXvzN

For your viewing pleasure, I've also embedded the videos here:
 


Any feedback or advice is welcome!

I've also made the slideshows available on Google Drive. Here's the link to the first slideshow, so you can follow along: https://drive.google.com/file/d/0B4ZtXTwxIPrjTktiMGdoQ3JBSHM/view. And here's the link to the slideshow for the second video as well: https://drive.google.com/file/d/0B4ZtXTwxIPrjalZxdFJfUWNKTVU/view?usp=sharing

A draft of my manuscript on the topic (intended for eventual publication) is freely available for download at https://osf.io/preprints/psyarxiv/hp53k/. Since I'm an advocate of the open science movement, it's only right that I make my own work publicly available--which is why I uploaded these videos (and my manuscript) to public repositories.

You may not trust my own take on these issues, in which case I commend you for your skepticism! In the videos, I made numerous references to Ziliak & McCloskey (2009), Gigerenzer (2004), and Open Science Collaboration (2015)--all are worth reading, for anyone who cares about scientific integrity and the research process. All three works were highly influential in my thinking on this topic, though I cited a variety of other papers as well in my aforementioned manuscript.

You may disagree with my recommendations in the second video, and if so, that's okay! How to address the limitations of NHST and fix science is absolutely a discussion worth having; I advance my own ideas in the spirit of jump-starting such a discussion.

So, please put your thoughts in the comments, and share my work with colleagues who may be interested in the topic!

Monday, February 20, 2017

Stat-ception: Everything you think you know about psych stats is wrong!




In the spirit of open science, I have posted a video of a talk on statistical practice that I gave in the Cognition Forum at Bowling Green State University.


This talk was in two parts: the first part summarizes many of the common objections to null hypothesis significance testing (NHST) that thinkers have raised over the decades, and the second part goes over my current recommendations for tackling the problem.

 

Part I is available at https://youtu.be/JgZZkMJhPvI; Part II is forthcoming! I've also embedded the video right here:

You can view and download the full slideshow at https://drive.google.com/open?id=0B4ZtXTwxIPrjTktiMGdoQ3JBSHM. The free (and very easy-to-use!) statistical program JASP can be found at https://jasp-stats.org/. JASP is useful if you want to run the analysis on the precision-vs-oomph example that I discuss at the end of the video (at the 39:41 mark).

I have already tackled some of the issues with NHST on more than one occasion in prior posts here, and I have also provided a practical guide to psych stats as a freely available educational resource!

There are a variety of excellent papers on the topic of statistical practice in social science fields; my working paper on the subject summarizes them. In the interest of open science, I've made this working paper available at https://osf.io/preprints/psyarxiv/hp53k/. Other great resources on the topic include Gigerenzer (2004) and Ziliak & McCloskey (2009), which are also freely available.

Sunday, January 8, 2017

What You Think You Know About Psychology is Wrong




What You Think You Know About Psychology is Wrong:
The limitations of null hypothesis significance testing

By: Zach Basehore

Are college students psychic?!

Let's say someone claims that people who go to college are more psychic than people who do not attend college. So I decide to test this claim!

How would I do that? Well, a simple test would be to examine people's ability to correctly predict whether a coin will land on heads or tails when I flip it. Say I recruit 10,000 college students and 10,000 non-college students; each person predicts the results of 100,000 coin flips, one flip at a time.

The results:
Each participant had a proportion of correct predictions. The mean proportion of correct predictions among college students was .50006 (that is, 50.006% correct), and the non-college-students had a mean proportion of correct predictions equal to .49999. The SDs are .00160 and .00155, respectively.

When you run an independent-samples t test, this difference is statistically significant at an alpha level of .01! The 95% CI for the difference is also quite narrow (indicating that the difference between the group means has been estimated quite precisely).

So the statistical test gives us very strong evidence that college students really are more prescient than non-college students! We've made a new discovery that revolutionizes our understanding of the human mind, and opens up a whole new field of inquiry! Why are college students more psychic? Is it because they're smarter? More sensitive? Do they pay closer attention to the world around them?

The problem:
In this example, I've found evidence of psychic abilities! Specifically, I've shown that college students predict the outcome of coin flips more accurately than non-college students, and there's less than a 1% probability of obtaining a difference this large by chance alone if the null hypothesis is true at the population level! How exciting—I can establish a huge name for myself among scientific psychologists, and have my pick of schools at which to continue my groundbreaking research! I could continue this research at Oxford… nah, let's find a better climate, like Miami or USC. I could get multi-million-dollar grants to fund an elaborate lab with fancy equipment! I can give TED talks, write books, and go on lucrative speaking tours...my research will grab headlines the world over! I'll be a household name!

The gut-check:
But wait a second...what was the actual difference again? On average, college students are right on 7 more trials (out of 100,000) than non-college students?...

Any time you gather real-world data, you’d expect there to be some small difference between groups, even if it’s really not due to any systematic effect. In the research described above, everything happened in just the right way to give me a spurious result:
  • 1 - low variance within each group [because each participant predicted 100,000 flips, every individual's proportion of correct guesses lands very close to .50; see the law of large numbers];
  • 2 - a small but statistically significant difference that can easily be explained by a seemingly reasonable mechanism, and
  • 3 - a very large sample.
These factors explain how I found a statistically significant difference between college students and non-college students despite the tiny difference in means.
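
To see concretely how these factors conspire, here's a minimal simulation sketch of the scenario under a true null (assuming Python with NumPy and SciPy installed; the group labels and seed are illustrative, not part of the original example):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)   # arbitrary seed, purely for reproducibility

    N_PEOPLE = 10_000    # participants per group
    N_FLIPS = 100_000    # coin-flip predictions per participant
    P_CORRECT = 0.5      # true accuracy for everyone -- nobody is actually psychic

    # Each participant's proportion of correct guesses, simulated under the null
    college = rng.binomial(N_FLIPS, P_CORRECT, size=N_PEOPLE) / N_FLIPS
    non_college = rng.binomial(N_FLIPS, P_CORRECT, size=N_PEOPLE) / N_FLIPS

    print(college.mean(), non_college.mean())   # both hover around .5000
    print(college.std(ddof=1))                  # roughly .0016, as in the example above

    t, p = stats.ttest_ind(college, non_college)
    print(f"t = {t:.2f}, p = {p:.3f}")
    # Re-run this with different seeds: about 5% of runs come out "significant"
    # at the .05 level even though the two groups are identical by construction.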

Excited by the significant result and the potential to trumpet my exciting new ‘discovery’ [thereby launching a career, positioning myself as an expert who can charge ridiculously high consulting or speaking fees], I've failed to critically evaluate the implications of my results. And therefore, I've failed as a scientist. :(

How can we avoid falling into that trap?

One solution:
A standardized measure of effect size, like Cohen's d, will reveal what SHOULD be obvious from a look at the raw data: this difference between groups is tiny and practically insignificant, and it shouldn't convince anyone that college students are actually psychic!

In the spirit of scientific inquiry, you can test this for yourself! At GraphPad QuickCalcs, enter a mean of .50006 for Group 1 and .49999 for Group 2. Next, enter the SD of .00160 for Group 1 and .00155 for Group 2. The N for each group is 10000. Hit "Calculate now" and see what you get.

Now, enter the same means and SDs, but change the N to 100 for both groups, and observe the results.

Then, go to the Cohen's d calculator here and enter the same information (it doesn't ask for sample size). So what does all of this information mean?…

I’ve already done the easy part for you:

Sample of 20,000: [screenshot of the QuickCalcs t-test results]

Sample of 200: [screenshot of the QuickCalcs t-test results]

Cohen's d: [screenshot of the effect-size calculator results]
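
If you'd rather reproduce those numbers in code than through the web calculators, here is a minimal sketch using summary statistics (Python with NumPy and SciPy assumed; the helper function is my own illustration, not part of either calculator):

    import numpy as np
    from scipy import stats

    def ttest_and_d(m1, sd1, m2, sd2, n):
        """Independent-samples t test from summary stats, plus Cohen's d (equal n per group)."""
        t, p = stats.ttest_ind_from_stats(mean1=m1, std1=sd1, nobs1=n,
                                          mean2=m2, std2=sd2, nobs2=n)
        pooled_sd = np.sqrt((sd1**2 + sd2**2) / 2)   # pooled SD simplifies when n is equal
        return t, p, (m1 - m2) / pooled_sd

    # Means and SDs from the psychic-college-students example above
    for n in (10_000, 100):
        t, p, d = ttest_and_d(0.50006, 0.00160, 0.49999, 0.00155, n)
        print(f"n = {n:>6} per group: t = {t:.2f}, p = {p:.4f}, d = {d:.3f}")

    # With 10,000 per group the difference is "significant" (p < .01), yet Cohen's d
    # is only about 0.04 -- a trivially small effect. With 100 per group the identical
    # means and SDs are nowhere near statistical significance.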


***

Statistical significance is a concept that has been called idolatrous, mindless, and an educational failure that proves essentially nothing! But every psychology major and minor has to learn it nonetheless...

The absurd focus on p-values in many social science fields (like psychology, education, economics, and biomedical science) leads to articles like the highly influential John Ioannidis piece "Why Most Published Research Findings Are False," which has been cited over 4,000 times!

A variety of ridiculous conclusions have been published based on small p-values, such as:

This is exactly why I pound the figurative table so hard about using effect sizes and well-designed, targeted experimental research. Don't just run NHST procedures on autopilot, or collect a huge dataset and mine for significance, or draw conclusions based solely on the arbitrary p < .05 standard.

But that's not how math works! How is the .05 standard arbitrary? And where did it come from? Well, Gigerenzer (2004) identifies the source of this practice as a 1935 textbook by the influential statistician Sir R.A. Fisher—and Gigerenzer also notes that Fisher himself wrote in 1956 that the practice of always relying on the .05 standard is absurdly academic and is not useful for scientific inquiry!

So, one of the early thinkers on whose work current psychological statistical practice is based would likely recoil in horror at what has become of statistical practice in our field today! [Note, however, that Cowles and Davis (1982) identified similar, though less absolute, conventions surrounding an older statistical measure called the probable error.]

Remember that the greatest scientific discoveries--gravity, the laws of thermodynamics, Darwin's description of natural selection, Pavlov's discovery of classical conditioning--not one of them relied on anything like p-values.

Not

One.

There is truly no substitute--none whatsoever--for thinking critically about the quality of your research design, the strengths and limitations of your procedure, and the size and replicability of your effect. Attempts to automate interpretation based on the .05 standard (or any such universal line-in-the-sand!) result in most researchers pumping out mounds of garbage and hoping to find a diamond in the rubbish heap, rather than setting out specifically to find a genuine diamond...

Conclusion? The validity of most psychological research is questionable (at best)! We're taught to base research around statistical procedures that are of dubious help in understanding a phenomenon--and our work is almost always published solely on that basis! This pervasive problem will not be easy to fix: we need the entire field to stop doing analyses on autopilot, and to start thinking deeply and critically!

The most powerful evidence is, and will always be, to show that an effect occurs over, and over, and over again.

*** 
If you need further explanations, here are a couple helpful links:
Some interesting links on the investigation of people who claim to have paranormal powers:
