**Don't Drink the Kool-Aid!**

Unfortunately, not every claim by a scientist is true:

https://blogs.scientificamerican.com/cross-check/a-dig-through-old-files-reminds-me-why-ie28099m-so-critical-of-science/

**Putting 'Should' into 'Science'**


I left a comment on a very interesting blog post at http://www.ivigilante.com/where-rationality-ends/. Here's what I wrote:

An interesting article! I think that you brought up rationality in a sense that has more to do with academic notions of what is “ideal” than with the real world. It’s understandable, since most formal education (particularly in economics departments!) focuses on this notion!

Herbert Simon was, as far as I’m aware, the first academic to point out that expecting people to act ‘rationally’ is futile. He won a Nobel Prize for his work on bounded rationality, too, so it’s not like his work went unrecognized. A brief summary: http://www.economist.com/node/13350892

As this page correctly observes, people’s behavior often fails to adhere to even the most basic principles of logic, which leaves humans looking pretty stupid IF you believe that logic–i.e. ‘rationality’–is the best standard to which we should compare behavior.

But Gerd Gigerenzer and others (including me) argue the opposite! Rather than saying, ‘Gee, people are pretty dumb because they don’t make decisions the way a computer would make a decision,’ we argue instead that people use shortcuts that are typically well-suited for the real world of vast uncertainty and severe time pressure.

Some interesting reading on Gigerenzer’s approach, which he calls “ecological rationality:” https://hbr.org/2014/06/instinct-can-beat-analytical-thinking and https://www.edge.org/conversation/gerd_gigerenzer-smart-heuristics

In the instance of the gas station that you described above, I’d say you were acting in an ecologically rational manner, probably using a heuristic. I’d guess that your deliberation went something like this:

“This old-timey gas station has ridiculously long lines. I hate long lines! Screw it, I’ll go elsewhere and pay a bit more for my gas. The savings of about $1.00 per fill-up isn’t worth my time and aggravation.”

A standard economic model would require that you sit down and calculate whether the savings of $1 or $2 is actually worth your time and aggravation. That would, of course, require you to assign a numerical value to your aggravation, know how much time you'd spend waiting, how much of that waiting time you'd actually spend on side hustles (vs. something unproductive), how much money you'd make from the time spent on your side hustle, etc…

When it’s spelled out like that, it becomes painfully obvious that nobody behaves like “homo economicus,” because performing such actions is, in itself, terribly inefficient–not to mention far more imprecise than big-time thinkers would like to admit [i.e. how much IS your aggravation worth? And what if you estimate a 15-minute wait, but it’s actually a 25-minute wait, which causes extra frustration? How much extra frustration would that cause, and what negative effects would that stress have on your health? And, more to the point, how much would it cost to address those negative health effects? Because, remember, we need numbers in order to make the equation work…].

So, to the point of your article: if you buy into Gigerenzer’s notions, rational thought often DOESN’T require complexity–as long as you’re holding people to the standard of ecological rationality, rather than logical-mathematical rationality.

The example above illustrates how ridiculous it is to expect people to make 'rational,' calculated decisions.

Thus, assuming that your goal is to be productive, it is irrational to be rational!

To use the above example of waiting in a long line in order to save $0.10 per gallon on gas, you'd have to quantify everything: how much is your time worth? How much is your frustration worth? How much gas would your car use at idle while you wait in line for a pump to open up? What is the 'opportunity cost' of spending time waiting in line for a pump, waiting to pay at the counter, and trying to merge back onto the highway?

By the very nature of calculation, you need to fill in numbers here. But how can you quantify frustration in monetary terms? The very act of filling in this kind of equation requires enough guesswork to make the entire procedure an exercise in futility!

That's a hidden fact about math: it requires certainty about each term in the equation. Once you introduce guesswork, the equation becomes meaningless. It will still give you an answer, but to quote a popular proverb on the subject: "Garbage in, garbage out."

Essentially, acting in an ecologically rational manner is often the most efficient way to act under real-world limits on time and information.

There are times in which deliberation and care are necessary before embarking on a course of action; there are other times in which quick, decisive action is needed. It is therefore impossible, on a practical level, to know in advance which kind of approach is best for solving a particular problem. One must first identify the nature of the problem, and any important features that may make this particular problem unique/different from similar situations one has already faced.

Academics tend to dislike such arguments, since they render it impossible for researchers to determine in advance what the "correct" answer should be. After all, a researcher needs to present his or her ideas to skeptical peers! So it is far more defensible to tell your peers: "The correct answer is X, because this equation, based on principles of formal logic, has determined that the correct answer is X."

After all, the above approach has the appearance of airtight inevitability! If you say, "This equation yields X, therefore, the correct answer is X," you're much less likely to spark a heated debate than if you introduce the obvious subjectivity of "Situation P requires this equation, whereas Situation Q requires that equation. Since here, we have Situation P, this equation is necessary, so the correct answer is X. But if we changed these features, we'd have Situation Q, and the correct answer would be Y." An audience member could easily challenge you by objecting that we don't really have Situation P—we actually have Situation Q, or maybe Situation R. This debate could derail your entire presentation and make you look silly in front of a room full of your peers at a conference!

But is the point of research to avoid disagreement, or to advance knowledge?...

My overarching point is this: when academic research demonstrates that most people arrive at the wrong answer to a question, the participants' errors may be an artifact of the research process, NOT something inherently wrong with the human mind!

Next time a researcher tries to tell you that you're irrational, you shouldn't necessarily believe it...

Regression is useful for making a predictive model. Let's say there's a positive linear correlation between Factor *K* and Outcome *N*, but you suspect that Factors *L* and *M* also contribute to Outcome *N*.

Make up a story—say, that Factors *K*, *L*, and *M* represent intelligence, persistence, and amount of sleep per night, and that *N* refers to a course grade.

So, to test the relative impacts of Factors *K*, *L*, and *M* on Outcome *N*, you can feed each factor into a regression model and test whether each factor increases the fit. Say that a correlation between Factor *K* and Outcome *N* yields a Pearson's *r* of .64, and therefore an *R*² of .4096 (with a single predictor, *R*² is just *r* squared).

But, when you run a regression testing the effect of Factors *K* **and** *L* on Outcome *N*, you find an *R*² of .5625, with a significant change in the *R*² value. That means that Factors *K* and *L* **together** do a better job of explaining the relationship than Factor *K* alone.

Then, you run a regression with Factors *K*, *L*, and *M* together, and find an *R*² of .5929, with no significant change—this means that Factor *M* does not help to explain the relationship. Outcome *N* is due mostly to Factors *K* and *L*; Factor *M* is an unimportant predictor of Outcome *N*.

And, if you're confused about the math...remember in middle school or high school math, when you learned about "rise over run" and the formula y = *m*x + b? Yeah, that's a simple linear regression. With multiple regression, you can add more terms, such that y = *a*x₁ + *b*x₂ + *c*x₃ … + z. But it's still the same concept, just with more predictors than that lone "*m*x" term.
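If you'd like to see those nested comparisons in action, here's a minimal Python sketch using simulated data. Everything here is invented for illustration: the variable names, the coefficients, and the sample size are all made up, and by construction Factor *M* is pure noise, so adding it shouldn't meaningfully improve the fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
K = rng.normal(size=n)  # stand-in for intelligence
L = rng.normal(size=n)  # stand-in for persistence
M = rng.normal(size=n)  # stand-in for sleep per night (pure noise here)
# Outcome N depends on K and L, but NOT on M -- by construction
N = 0.8 * K + 0.5 * L + rng.normal(scale=0.7, size=n)

def r_squared(y, *predictors):
    """R-squared of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"K alone:   R^2 = {r_squared(N, K):.3f}")
print(f"K + L:     R^2 = {r_squared(N, K, L):.3f}")
print(f"K + L + M: R^2 = {r_squared(N, K, L, M):.3f}")
```

Adding *L* produces a clear jump in *R*², while adding *M* barely budges it, which is exactly the pattern described above (real software would also give you a significance test for each change in *R*²).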

For more help explaining statistical concepts and when to use them, please download my freely available PDF guide here!

In case you missed it, there are some fantastic, easy-to-use, and FREE stats programs available now! I review them here.

Not clear about when to use a chi-square vs. a *t* test?

First, you should check out my free, downloadable PDF, A Practical Guide to Psych Stats.

Now that that's out of the way—if you're still not sure, how about a tasty example?

Let's say that we want to know whether a bag of Original Skittles has a truly even distribution of colors. If so, we’d expect to find roughly equal numbers of red, green, purple, yellow, and orange Skittles, right?

A chi-square goodness-of-fit test [that is, a one-variable chi-square] can help us evaluate this. If there are 18 red, 13 green, 18 purple, 19 yellow, and 17 orange, the chi-square goodness-of-fit test tells us whether this distribution is different enough from an even distribution of 17 apiece (85 Skittles / 5 colors) that we can reject the notion that the colors are evenly distributed.

If you're really curious about my made-up numbers, by the way, here's a straightforward, easy-to-use online calculator to help you: http://www.socscistatistics.com/tests/goodnessoffit/Default2.aspx
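If you'd rather see the arithmetic itself, here's a minimal Python sketch of that goodness-of-fit calculation for the made-up counts above. One caveat: the closed-form p-value used below is a special case that holds only for df = 4 (i.e., 5 categories).

```python
import math

observed = [18, 13, 18, 19, 17]           # red, green, purple, yellow, orange
expected = sum(observed) / len(observed)  # 85 Skittles / 5 colors = 17 apiece

# Chi-square statistic: sum of (observed - expected)^2 / expected
chi_sq = sum((o - expected) ** 2 / expected for o in observed)

# For df = 4, the chi-square survival function has the closed form
# p = exp(-x/2) * (1 + x/2); for other df, use a stats library instead
df = len(observed) - 1
p = math.exp(-chi_sq / 2) * (1 + chi_sq / 2)

print(f"chi-square({df}) = {chi_sq:.3f}, p = {p:.3f}")
```

With a p-value this far above .05, we can't reject the notion that the colors are evenly distributed, which matches the intuition that these counts all hover near 17.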


***

Now, let's say
we’re looking for differences in the proportion of red Skittles to
the other colors in a bag of Original vs. a bag of Tropical Skittles.

In this case, we have two categorical variables [bag type: Original vs. Tropical, *and* color: red vs. the other colors], so we would need a chi-square test for independence. The additional variable makes the calculation a little more complex (but not if you use statistical software to handle the dirty work! 😊), but ultimately, we're asking a similar kind of question: is the proportion of red Skittles roughly the same in each bag?
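Here's a minimal Python sketch of that test for independence. The counts of red vs. non-red Skittles in each bag are hypothetical (I invented them for illustration), and the closed-form p-value used at the end is a special case that holds only for df = 1 (i.e., a 2×2 table).

```python
import math

# Hypothetical counts:        red   other
table = [[18, 67],   # Original
         [25, 60]]   # Tropical

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
total = sum(row_totals)

# Expected count in each cell = (row total * column total) / grand total
chi_sq = 0.0
for i, row in enumerate(table):
    for j, obs in enumerate(row):
        exp = row_totals[i] * col_totals[j] / total
        chi_sq += (obs - exp) ** 2 / exp

# df = (rows - 1) * (cols - 1) = 1; for df = 1, p = erfc(sqrt(x/2))
p = math.erfc(math.sqrt(chi_sq / 2))
print(f"chi-square(1) = {chi_sq:.3f}, p = {p:.3f}")
```

With these made-up numbers, the p-value lands well above .05, so we'd have no grounds to claim the two bag types differ in their proportion of red Skittles.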



Is AI—artificial intelligence—hype or substance? Can machines think? Can they ever achieve consciousness? Will the AI "Singularity" eventually destroy us all?

I've compiled a few links regarding interesting (advanced, but interesting) statistical topics. Here they are:

http://www.nicebread.de/whats-the-probability-that-a-significant-p-value-indicates-a-true-effect/

https://medium.com/@richarddmorey/new-paper-why-most-of-psychology-is-statistically-unfalsifiable-4c3b6126365a#.maizqdsok

http://www.dgpskongress.de/frontend/index.php?page_id=154

http://www.researchtransparency.org/

http://andrewgelman.com/2016/08/22/bayesian-inference-completely-solves-the-multiple-comparisons-problem/

http://healthyinfluence.com/wordpress/2012/01/30/all-bad-statistics-are-persuasive-errors/

A simulation of wealth inequality through a truly random procedure [clickbait title aside, it IS interesting]: http://www.decisionsciencenews.com/2017/06/19/counterintuitive-problem-everyone-room-keeps-giving-dollars-random-others-youll-never-guess-happens-next/

At the end of every semester, I hold a session for my students in which I talk about non-academic stuff. Since there are no classes and, often, no guidance on the everyday ins and outs of adulthood, somebody's gotta do something about this, right?

Right?!?!

So, I decided to do something about it! I've posted the PDF of my slideshow right here, since I like to make things freely available for all!

To get my narration, however, you'll have to be one of my students...

Nothing like watching an episode of Last Week Tonight a year after it hit YouTube! More like Last *Year* Tonight!