**...And the winner is...**

The students in my Spring 2017 stats class were wondering about this very question! So, as a brief introduction to research (and as an example of the one-way ANOVA, which we had recently covered), I offered my class the option to earn a couple of extra credit points by surveying 5 of their friends about their social media usage.

Read on for a snapshot of social media usage among college students right now!

**Limitations**

I intentionally designed the survey with a couple of weaknesses, to give the students some practice at identifying those limitations. We used a 7-point Likert-type scale [it's pronounced LICK-ert, by the way!].

- Only the endpoints on this scale were labeled: a 1 indicated "I never use this form of social media" and a 7 indicated "I use this form of social media multiple times per day."

Not having any labels for the intermediate values is a weakness because it introduces an unacceptable amount of error based on how people interpret a particular number—how do we know that you and I interpret a value of "6" the same way?

Answer: we don't. Hence, this is a weakness. And a rather serious one!

- Another major weakness is that the students each surveyed 5 of their friends at Bowling Green State University.

Given that this is a Psych Stats course, many students are Psych majors. Given that fact, they probably have a disproportionately high number of friends who also major in psychology. Are Psych majors representative of all BGSU students, let alone all college students?

Not necessarily; hence, the sampling procedure is another major limitation of this study.

For purposes of a class demonstration, this flawed sample is fine. But it severely limits the ability to generalize the results to all BGSU undergraduate students, let alone college students nationwide. Or, at least, the sampling procedure inspires some doubt about generalizability.

- A third limitation is that I only included 5 forms of social media, rather than a more complete list. One student suggested including Tumblr, which is defensible—but for simplicity's sake, I shot that idea down.

Respondents gave self-report data (on the aforementioned 1-7 scale) regarding their usage of: Facebook, Snapchat, Instagram, Twitter, and Pinterest. That's it.

So, usage of LinkedIn, reddit, Tumblr, Google+, Flickr, SoundCloud, and other social networking sites was left out of the picture here. Even MySpace has stuck around, as musicians sometimes use it to gain additional exposure for their work. These sites are not captured in this survey.

Nonetheless, some data is better than no data! As far as student engagement goes, this data is also better than made-up data, because we're looking at real responses from real people—even if the survey methodology is less-than-ideal!

**Results**

The results of the survey are posted in .csv format on my Google Drive, publicly accessible here. I did the analysis in JASP, which I've previously recommended for many use cases (the complete analysis is available here) and in the even newer program jamovi (that analysis is available here).
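If you'd rather work in code than in JASP or jamovi, here's a minimal sketch of loading and summarizing the data with pandas. The filename, the wide one-column-per-platform layout, and the ratings themselves are all assumptions for illustration, so adjust them to match the actual .csv from the link above.

```python
import pandas as pd

# Stand-in for pd.read_csv("social_media_survey.csv") -- the filename and the
# one-column-per-platform layout are assumptions, and these ratings are
# invented, NOT the actual survey responses.
df = pd.DataFrame({
    "Facebook":  [5, 7, 4, 6, 3],
    "Snapchat":  [7, 6, 7, 5, 6],
    "Instagram": [6, 5, 7, 4, 5],
    "Twitter":   [3, 4, 2, 5, 1],
    "Pinterest": [1, 2, 1, 3, 1],
})

# Per-platform mean and SD of the 1-7 ratings
summary = df.agg(["mean", "std"]).round(3)
print(summary)
```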

Here's the [un-editable] graph generated by JASP:

And here are the descriptive stats:

A couple highlights:


- Snapchat is the clear winner, with the highest mean (5.671) *and* the lowest SD (1.819)
- Instagram takes second place, Facebook is a close third, and Twitter lags behind; Pinterest is a distant last place in this sample
- The *F*-ratio was "statistically significant": *F*(4, 360) = 22.08, *p* < .001
- For effect size, I used eta-squared: 0.197
- A post-hoc analysis (with Tukey correction) reveals that Pinterest is significantly different from all others (duh!) and that Snapchat is significantly different from Twitter. Instagram and Twitter are also significantly different.

- Statisticians will note that Levene's test reveals a violation of the assumption of equality of variance. Strictly speaking, this means that we should not run an ANOVA; instead, we should use a non-parametric alternative like the Kruskal-Wallis H-test.

In my experience, though, this rarely yields a fundamentally different result. And after you run the H-test, you still need a post-hoc test anyway!
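To show what that pipeline looks like outside of a point-and-click program, here's a Python sketch of the same sequence of tests (scipy) on invented ratings, not the survey data: one-way ANOVA, eta-squared computed by hand, Levene's test, and the Kruskal-Wallis fallback.

```python
from scipy import stats

# Invented 1-7 ratings for three platforms -- NOT the survey responses
facebook  = [5, 7, 4, 6, 3, 5, 6]
snapchat  = [7, 6, 7, 5, 6, 7, 6]
pinterest = [1, 2, 1, 3, 1, 2, 1]
groups = [facebook, snapchat, pinterest]

f_res = stats.f_oneway(*groups)   # one-way ANOVA
lev   = stats.levene(*groups)     # equality-of-variance assumption check
kw    = stats.kruskal(*groups)    # non-parametric alternative (H-test)

# Eta-squared by hand: SS_between / SS_total
all_vals   = [x for g in groups for x in g]
grand_mean = sum(all_vals) / len(all_vals)
ss_total   = sum((x - grand_mean) ** 2 for x in all_vals)
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
eta_sq     = ss_between / ss_total

print(f"F = {f_res.statistic:.2f}, p = {f_res.pvalue:.4g}, eta^2 = {eta_sq:.3f}")
print(f"Levene p = {lev.pvalue:.3f}; Kruskal-Wallis p = {kw.pvalue:.4g}")
```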

For convenience's sake, I've screencapped the post-hoc test as well.

I ran the post-hoc test in the brand-new stats program jamovi, which allows you to run the post-hoc test with no correction or with several of the most frequently used correction procedures. I like how jamovi let me do the analysis both ways and showed the results side by side.
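A rough analogue of that side-by-side view can be sketched in Python on made-up ratings. Note that I'm using a simple Bonferroni adjustment here instead of Tukey's procedure, purely to keep the sketch short; the platform ratings are invented.

```python
from itertools import combinations
from scipy import stats

# Invented ratings -- NOT the survey data; Bonferroni stands in for Tukey
ratings = {
    "Facebook": [5, 7, 4, 6, 3, 5, 6],
    "Snapchat": [7, 6, 7, 5, 6, 7, 6],
    "Twitter":  [3, 4, 2, 5, 1, 3, 4],
}

pairs = list(combinations(ratings, 2))
m = len(pairs)  # number of pairwise comparisons

for a, b in pairs:
    p = stats.ttest_ind(ratings[a], ratings[b]).pvalue
    p_adj = min(p * m, 1.0)  # Bonferroni-adjusted p-value
    print(f"{a} vs {b}: uncorrected p = {p:.4f}, corrected p = {p_adj:.4f}")
```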

Running the post-hoc test with *no* correction for multiple comparisons yields a significant difference for Facebook vs. Snapchat. It also shows that Facebook and Twitter are almost, but not quite, significantly different (*p* = .055). Should we ignore this result because it didn't meet the sacred .05 criterion?

I'd say that we should consider it in the context of the study. What are we looking for? Patterns in usage of social media among college students (specifically, college students at BGSU).

What are we trying to accomplish? Well, let's suppose I'm trying to advertise a product or service to college students, in which case I want my ad to be seen by as many college students as possible, for as few $$$ as possible.

Even if the difference between Facebook and Twitter usage isn't significant at the conventional alpha level of .05, if we're talking about efficiency of time, effort, and money, it's close enough that I'd certainly consider advertising on Facebook instead of Twitter!

So is Tukey's correction (or another multiple correction procedure) necessary here? It's certainly debatable; I fall on the "no" side of things—after all, if there's a significant ANOVA, then there's clearly a significant difference somewhere, right? Multiple correction procedures reduce power, so if you use a correction like Tukey's test, you could end up with a significant ANOVA but no significant post-hoc results!
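To make the power argument concrete: with 5 platforms there are 10 pairwise comparisons, and a Bonferroni-style correction multiplies each p-value by that count. The raw p-value below is invented for illustration.

```python
# Five platforms -> 10 pairwise comparisons
k = 5
n_comparisons = k * (k - 1) // 2

raw_p = 0.02  # an invented pairwise p-value that clears .05 on its own
adjusted_p = min(raw_p * n_comparisons, 1.0)

print(n_comparisons)      # 10
print(adjusted_p > 0.05)  # True: no longer significant after correction
```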

And significance is kind of overblown, anyway...


But I've made my case already; you can decide for yourself.

_________________

Remember, if you're interested in a more nuanced analysis, you can download the .csv file linked above and run the analyses yourself! I suggest using JASP or jamovi, which are both free of cost and open-source!
