Repeat After Me: Why can’t anyone replicate the scientific studies from those eye-grabbing headlines?

Follow the link below for the full cartoon.


https://thenib.com/repeat-after-me


“Everything Is Crumbling”: The Replication Crisis in the Social Sciences

“An influential psychological theory, borne out in hundreds of experiments, may have just been debunked. How can so many scientists have been so wrong?”

“And yet, it now appears that ego depletion could be completely bogus, that its foundation might be made of rotted-out materials. That means an entire field of study—and significant portions of certain scientists’ careers—could be resting on a false premise. If something this well-established could fall apart, then what’s next? That’s not just worrying. It’s terrifying.”

http://www.slate.com/articles/health_and_science/cover_story/2016/03/ego_depletion_an_influential_theory_in_psychology_may_have_just_been_debunked.html

Podcast: “The Replication Crisis” on You are Not So Smart

“Nosek recently led a project in which 270 scientists sought to replicate 100 different studies in psychology, all published in 2008 — 97 of which claimed to have found significant results — and in the end, two-thirds failed to replicate. Clearly, some sort of course correction is in order.”

https://youarenotsosmart.com/2017/07/19/yanss-100-the-replication-crisis/

Hidden Brain Podcast: The Scientific Process

This is a great podcast that gets into some of the issues and challenges with constructing knowledge in the human sciences. Though we differentiate between the human and natural sciences, both use similar processes to construct experiments and rely on similar reasoning processes to construct knowledge. One key idea in both types of science is that experiments can be reproduced and results can be replicated in order to validate a study’s findings. What happens when you can’t reproduce a prior finding? Were the original results fraudulent? Was the original experiment poorly constructed? Was the new experiment faulty? The answers to these questions are complicated, but this podcast delves into some of those issues.

“Lots of psychology studies fail to produce the same results when they are repeated. How do scientists know what’s true?”


http://www.npr.org/player/embed/479201596/479202167

Why raising the minimum wage in Seattle did little to help workers, according to a new study

This is an interesting article on the effects of raising the minimum wage. What makes it interesting is that it raises a lot of issues that speak to the challenge of proving things in the human sciences, particularly in economics. Because we can’t create two perfectly identical real-life situations in which to test variables, we are left to determine the effects of a variable like a minimum wage increase through imperfect, real-life experiments. If you pay close attention to the language, the writer communicates the experimental conclusions in wording that often sounds equivocal, weak, or uncertain, and that is because the people who conduct the experiments or undertake the research understand that it is very difficult to come up with solid, definitive conclusions like you can in the natural sciences.

For example, you can say definitively in the natural sciences that if you heat up a gas while keeping its volume constant, you will increase its pressure. This is a consistent finding, backed up by experiments. Can you come to the same type of definitive conclusion in the human sciences? Can you definitively answer the question, “What is the effect of raising the minimum wage?” The answer is probably no. You will get answers hedged with qualifiers like “probably,” “sometimes,” and “maybe.”
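To see how clean that natural-science case is, here is a minimal sketch of the gas example above (Gay-Lussac’s law: at constant volume, pressure is proportional to absolute temperature). The function name and the numbers are illustrative assumptions of mine, not from the article — the point is just that the relationship is exact enough to compute, in a way that “the effect of raising the minimum wage” is not.

```python
def pressure_after_heating(p1, t1, t2):
    """Pressure of a fixed-volume gas after heating it from absolute
    temperature t1 to t2 (both in kelvin); result is in the same
    units as p1. Follows Gay-Lussac's law: p1/t1 == p2/t2."""
    return p1 * t2 / t1

# Heat a gas at 100 kPa from 300 K to 450 K (temperature up by half,
# so pressure rises by half as well):
p2 = pressure_after_heating(100.0, 300.0, 450.0)
print(p2)  # 150.0
```

No economist could write an analogous `earnings_after_wage_hike()` function with that kind of certainty, which is exactly the contrast the article illustrates.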

“Overall, there was almost no effect on workers’ average total earnings, but Vigdor pointed out that the average could be misleading. The consequences for many individual workers — both positive and negative — could have been more significant.”

https://www.washingtonpost.com/news/wonk/wp/2016/07/29/study-raising-the-minimum-wage-did-little-for-workers-earnings-in-seattle/

What If We Just Gave Poor People a Basic Income for Life? That’s What We’re About to Test.

“By ‘rigorous’ we mean a few things. First, the test must be experimental, so that we generate unbiased and transparent estimates of impact. Second, the guarantee must be a long-term commitment. We already know quite a bit about the beneficial effects of giving people money for a few years; the key question is how the knowledge that your livelihood is secured for more than a decade affects your behavior now. Do you take more risk? Get more schooling? Look for a better job? Third, the guarantee needs to be universal within well-defined communities, since the goal is as much to understand social dynamics as individual behaviors. While various other basic income pilots have been conducted in the past, none so far have met all three of these criteria.”

http://www.slate.com/blogs/moneybox/2016/04/14/universal_basic_income_this_nonprofit_is_about_to_test_it_in_a_big_way.html

OK Cupid: “We Experiment On Human Beings!”

The ethics of human experimentation on the internet has been greatly debated, especially in light of the revelation that Facebook ran experiments on its users without their consent. Another site, OK Cupid, proudly states that it experiments on humans — and whether or not you realize it, if you’re on the internet, you’re being experimented on all the time.

“We noticed recently that people didn’t like it when Facebook ‘experimented’ with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”

http://blog.okcupid.com/index.php/we-experiment-on-human-beings/