If you’re trying to navigate a foreign city, you’ll probably whip out Google Maps to guide you.
Google Maps uses GPS to show you where you are in the world, thanks to a system of satellites orbiting the Earth at 14,000 km/h, 20,200 km above where you're standing.
Those satellites are moving so fast, and sitting so far out of Earth's gravity well, that time on them runs slightly faster than it does for you on the ground, about 38 microseconds per day. That's the net effect of Einstein's special and general theories of relativity: their speed slows their clocks by roughly 7 microseconds a day, while the weaker gravity up there speeds them up by roughly 45.
If this difference between satellite time and ground time weren't accounted for, the gap between where the satellites think you are and where you actually are could grow by as much as 10 km in a single day.
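You can sanity-check that figure with quick arithmetic: GPS computes position from signal travel times, so a clock error becomes a range error at the speed of light. A minimal sketch in Python:

```python
# Back-of-the-envelope check on the GPS clock-drift figure above.
# Position comes from light travel time, so a clock error translates
# into a range error at the speed of light.

C = 299_792_458          # speed of light, m/s
DRIFT_PER_DAY = 38e-6    # net clock drift, seconds per day

range_error_km = C * DRIFT_PER_DAY / 1000
print(f"Accumulated range error: {range_error_km:.1f} km/day")
# -> about 11.4 km/day, the same order as the ~10 km figure above
```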
This is just one of the many ways we've validated Einstein's theories since he formulated them. We still use them today because they're good science: they've stood the test of time, they continue to be useful, and they've survived numerous studies and experiments designed to break them.
Not all science is so lucky, though. Psychology, and the more modern field of decision science, face a massive "replication crisis." Research gets published purporting to prove some new psychological concept, but then that research is rarely re-tested, so no one makes sure it isn't made up or a fluke.
In his book The Beginning of Infinity, David Deutsch refers to much of this research as “explanationless” science:
“...without powerful techniques of error-detection and correction – which depend on explanatory theories – this gives rise to an instability where false results drown out the true. In the ‘hard sciences’ – which usually do good science – false results due to all sorts of errors are nevertheless common. But they are corrected when their explanations are criticized and tested. That cannot happen in explanationless science.” (emphasis mine)
If someone comes out with a new theory of gravity tomorrow, we can test it fairly easily. If someone comes out with a new psych study saying that men are 25% more likely to like jazz music… the only way we can check that is to do the study again. But studies are very rarely reproduced; they're just assumed to be accurate.
That's a huge problem, because when psychological and other social science studies do get put to the test, they tend to fail. The issue is so pervasive that in a massive 2015 reproducibility project, fewer than half of the 100 major psychological findings studied were successfully reproduced. It wouldn't be crazy to assume that any new claim in psychology is more likely to be false than true.
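That last claim sounds hyperbolic, but a rough, Ioannidis-style calculation shows how it can be literally true. Every number below is an illustrative assumption of mine, not a measurement:

```python
# A rough, hypothetical calculation of how most published findings
# can be false. All numbers below are illustrative assumptions.

prior_true = 0.10   # assume 10% of tested hypotheses are actually true
power      = 0.35   # assume typical statistical power in psychology
alpha      = 0.05   # conventional false-positive rate

true_positives  = prior_true * power            # real effects detected
false_positives = (1 - prior_true) * alpha      # null effects that "hit"

ppv = true_positives / (true_positives + false_positives)
print(f"Share of 'positive' findings that are real: {ppv:.0%}")
# -> ~44% under these assumptions: a new claim is more likely false than true
```

Tweak the assumptions and the exact share moves around, but the qualitative point survives: with modest power and optimistic priors, false positives can outnumber true ones.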
Some of these concepts that have failed replication are still widely believed, though. A few have massively popular TED Talks about them. So I thought it would be helpful to keep a running list of all the popular psych concepts that aren’t true, or are at least suspect, and why we should stop spreading them.
Feel free to read straight through, or jump around. Here's the current list:

- Grit
- The Marshmallow Test
- Ego Depletion (willpower as a limited resource)
- Power Posing
- Smiling Makes You Happier
- Newborn Imitation
- The Stanford Prison Experiment
- The Milgram Shock Experiments
- Universal Facial Expressions
- Split-Brain Dual Consciousness
- Elderly Priming
- Oxytocin as the "Trust Hormone"
- The Watching Eyes Effect
- Money Priming
- The 10,000 Hour Rule
- Growth Mindset
The concept of Grit as a meaningful indicator of life achievement and general competency was popularized by Angela Duckworth in her book by the same name.
Many of the arguments in it are based on research like the West Point study, which found that cadets who scored highest on "grittiness" at enrollment had the highest chance of making it through the program.
The book went on to be a bestseller, and everyone started thinking about how they could be more "gritty," giving themselves and their kids the "grit test." And for the many people who read it and told themselves "Yeah! I'm gritty!" or who could spin a nice narrative about how their past success was due to their grittiness (and not, say, being born in the US with a high-enough IQ and a good-enough school system), it was a great feel-good concept that quickly caught hold.
First, the whole idea of Grit is basically just "industriousness," an aspect of the Big Five trait conscientiousness, rebranded. It's nothing new; more a clever terminological difference (and good marketing).
That alone isn't enough to say that we should stop obsessing over Grit, but the repeated failures to replicate Duckworth's experiments are concerning. In a broad study on school success, grit was not predictive of success.
In another one, it turned out that the parent trait of conscientiousness was more important.
In another, grit failed to be predictive, and past performance was the better indicator.
And in a meta-analysis of studies on grit, researchers found that it correlates strongly with conscientiousness, but not necessarily with success.
The marshmallow test was one of the earliest, and most cited, studies done on the importance of willpower.
It was simple enough: kids were sat at a table in front of a marshmallow, and they were told that if they could wait a few minutes before eating it, they would get two marshmallows. The videos of kids resisting the marshmallow are hilarious if you haven’t seen them:
https://www.youtube.com/watch?v=QX_oy9614HQ
When researchers followed up on these kids years later, they found that the kids who had successfully held out for the second marshmallow had greater academic and career achievement.
And thus the conclusion was drawn that if you have high willpower, and can resist your urges, you’ll go on to be more successful.
The premise isn't necessarily wrong, but saying it's research-backed is. In this case, other factors could contribute both to enduring the marshmallow test and to performing well in school, such as a habit of listening to authority.
The case for dropping the "research-backed" label comes from an attempt at replicating the famous study, which found effects significantly smaller than initially reported, effects that all but disappeared once family background, early cognitive ability, and home environment were accounted for.
It also turns out that there were other factors that could affect how a kid performed on the test, such as how much they trusted the experimenter.
So no, you don’t need to worry if your kid eats the marshmallow.
This is a big one, and one that I used to buy into whole-heartedly. The idea is simple and attractive: willpower is a resource, like muscular energy, and the more you “use up” your willpower, the less you’ll have for other tasks.
It’s part of why you’ll hear people say they always wear the same clothes, or pre-set their meals, or try to avoid as many decisions as possible. They don’t want to use up their “decision making units.”
To be clear: there are benefits to removing decisions from your life. It’s easier to overcome your biases, you save time, and it helps with habit formation.
But you don't save willpower, and you may actually lose it. Meta-analyses of the ego depletion model of willpower have found only a very small effect, and suggest that our beliefs about willpower may matter more than any actual resource model.
Later meta-analyses provided some evidence in support of the resource model, but they're repeatedly countered by new analyses showing that the model doesn't hold up.
As an individual, you should find this reassuring. Making decisions and exerting willpower doesn't draw down a fixed daily budget. You can keep your self-control throughout the day, and you shouldn't make excuses to break your rules just because you already worked out.
Stand in the superhero pose, arms akimbo, and you'll pump up your testosterone and confidence before the big meeting. Or so the power posing advice goes.
This is one of the most popular and oft-spread psychological myths, with Amy Cuddy's talk on the topic being one of the most-watched TED Talks of all time.
It’s tempting to believe, too. If you stand in one of those poses and look in the mirror you feel something happen psychologically, but that doesn’t mean the research holds up.
The main issue with the power posing research is the claim that it has a meaningful effect on hormones and risk tolerance. The original research suggested that if you do a power pose before a big meeting, or presentation, it can have an effect on your hormones, pumping up your testosterone and making you a more confident risk taker.
That claim has repeatedly failed replication, and seems to have been a fluke in the original study. Cuddy's original study had only 42 participants, which left it wide open to statistical noise. Larger studies, like the one linked above, used 200+ participants and saw no meaningful change in hormones or risk-taking.
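To make the small-sample point concrete, here's a minimal simulation, with illustrative numbers of my own (not taken from any of these studies): give two groups a tiny true difference, then look at what the "statistically significant" results claim at different sample sizes.

```python
# Small studies rarely detect a tiny effect, and when they do, the
# "significant" results wildly overstate it (the winner's curse).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.1    # true group difference, in standard deviations (assumed)
RUNS = 20_000        # simulated studies per sample size

def significant_findings(n_per_group):
    observed = []
    for _ in range(RUNS):
        control = rng.normal(0.0, 1.0, n_per_group)
        posers = rng.normal(TRUE_EFFECT, 1.0, n_per_group)
        result = stats.ttest_ind(posers, control)
        if result.pvalue < 0.05 and result.statistic > 0:  # "hit" in predicted direction
            observed.append(posers.mean() - control.mean())
    return np.array(observed)

for n in (21, 100):   # ~42 total, like the original study, vs a larger one
    effects = significant_findings(n)
    print(f"n={n}/group: {len(effects) / RUNS:.1%} of studies 'significant'; "
          f"mean reported effect {effects.mean():.2f} SD (true: {TRUE_EFFECT})")
```

With 21 people per group, the rare studies that clear the significance bar report an effect several times larger than the truth, which is exactly the kind of fluke a 200+ participant replication washes out.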
After some back-and-forth about the data, another team of researchers ran a meta-analysis of the published p-values and found no meaningful evidence for the claimed benefits of power posing.
So maybe you feel special doing your superhero pose in the mirror, but there’s no science to support the benefits.
Fake it till you make it: emotional edition. This research suggested that while being happy makes you smile, you can also force yourself to smile to make yourself feel happy.
If you try it, it feels like it’s working. You’ll notice that you feel a little happier when you fake a smile. So how could this research be wrong?
The original study in 1988 suggested that people who forced themselves to smile while looking at a cartoon found the cartoon funnier. That research was taken as gospel and the idea of “smiling to make yourself happier” spread like wildfire through psychology and decision science classes around the world.
But when 17 labs tried to replicate the study in 2016, it didn't hold up. Nine labs found a similar effect at a much lower magnitude, eight found no effect at all, and when the data were combined there was no significant effect.
You can see just how overblown the original study was by the results in the more recent study:
"The original Strack et al. (1988) study reported a rating difference of 0.82 units on a 10-point Likert scale. Our meta-analysis revealed a rating difference of 0.03 units with a 95% confidence interval ranging from −0.11 to 0.16."
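As a quick illustrative check on those numbers: the replication's confidence interval straddles zero (no detectable effect), and the original estimate sits nowhere near it.

```python
# Sanity check on the quoted figures: a 95% CI that includes zero means
# no detectable effect, and the original estimate falls far outside it.

original, pooled, ci = 0.82, 0.03, (-0.11, 0.16)

print("CI includes zero:", ci[0] <= 0 <= ci[1])                        # True
print("Original inside replication CI:", ci[0] <= original <= ci[1])  # False
print(f"Original is ~{original / pooled:.0f}x the pooled estimate")   # ~27x
```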
Stick your tongue out at a newborn, or smile at them, and they learn to mimic the expression.
It’s a popular concept in most developmental psychology textbooks, and many parents will report seeing their newborns engage in this kind of behavior.
So if studies found that newborns could imitate facial expressions, and parents report seeing it too, how could this one be wrong?
First, the parental reports are far from rigorous. If you stick your tongue out at your baby a dozen times and they return the gesture once, you'll mentally latch onto the one time they mimicked it and forget all the times they didn't (classic confirmation bias).
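Here's a toy simulation of that bias, with made-up numbers: even a baby who sticks their tongue out completely at random will hand a hopeful parent a few "imitations."

```python
# A baby gesturing at random can look like imitation to a parent who
# only remembers the hits. Base rate and trial count are assumptions.

import numpy as np

rng = np.random.default_rng(1)
trials, base_rate = 12, 0.15   # 12 attempts; baby gestures 15% of the time anyway

baby_gestures = rng.random(trials) < base_rate   # random, unrelated to the parent

hits = int(baby_gestures.sum())
print(f"'Imitations' the parent remembers: {hits} of {trials}")
# Even one or two chance matches feel like proof if the misses are forgotten.
```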
As for the study, it was re-tested in 2016. Researchers found that they could get the same results by copying the original methodology, but those methods weren't rigorous, and when they introduced additional controls, the results disappeared.
The infants were just as likely to offer the gestures in response to control expressions as to test expressions, suggesting no inborn imitation. Sorry, parents.
The idea of this one is attractive: if you’re put in a situation like being a prison guard, or Nazi soldier, some deeper animalistic part of you takes over and you’re not responsible for your actions.
It lets us feel a little better about the evil in the world, and about the little bad things each of us do.
But, unfortunately, this one is a sham too.
For a full breakdown on everything wrong about the Stanford Prison Experiment, I’d suggest you read “The Lifespan of a Lie,” an excellent article about the history of the study.
But here's the gist: the participants were faking some of their reactions, the organizer had ulterior motives, the guards were instructed to be brutal, and it has repeatedly failed replication. There are a host of issues with this one, and there's really no reason to take it seriously as a psychological study.
Another famous study, similar to the Stanford Prison Experiment, that makes us feel a little more at ease in a world of evil.
Milgram’s experiments suggested that evil people are “just following orders,” which he claimed to have proven by getting subjects to administer increasingly violent electric shocks to a subject they were supposed to be testing.
It’s seductive, in that it makes us consider how we would behave in similar authoritative situations. Would you keep shocking someone until they seemed to die, just because an authority told you to?
This one is, admittedly, hard to judge. Thanks to regulations passed since Stanley Milgram's famous shock experiments, we can't fully replicate the original design in the US; it would be deemed too unethical.
There are some concerning aspects to the original study though. Research done by a psychologist in 2012 unearthed some concerns, including that half of the participants knew it wasn’t real, and that Milgram’s experimenters didn’t stick with the script they were supposed to use and resorted to more heavy-handed coercion than Milgram reported.
Despite that, this one has been replicated in other ways like in this study in Poland, so it may hold up.
Ever since a famous 1972 study by Paul Ekman, we've believed, and taught, that facial expressions are universal.
That whether you’re a banker living in downtown New York, or a tribesman deep in the Amazonian jungle, you express your emotions through facial expressions largely the same way, and recognize the expressions of others the same way as well.
But new research suggests that may not be the case.
The claim was put to the test in 2014, in a study built around an important distinction: how people are allowed to sort facial expressions.
It turned out that when you told people what expression to sort faces into, the results were the same whether you were a Bostonian or a member of the Namibian Himba tribe. Both could tell if a face was happy, angry, disgusted, etc.
But, that’s not really how we interpret emotions. We look at someone’s face and have to interpret it without a set list of emotions to ascribe to it. And when the experiment was done using a “free sorting” method to mimic this, the results were wildly different.
As described in the Popular Science article on the results:
“Americans used "discrete emotion words" like "anger" and "disgust" to describe their piles, but also used "additional mental state words" like "surprise" and "concern." Meanwhile, the Himba labeled their piles with physical action words, like "laughing" or "looking at something." Free-sorters also did not sort into the six distinct piles that would support the universal recognition theory; instead, the piles varied in number. The only consistency across both groups seemed to be that participants sorted the faces well according to the positivity or negativity of the emotion and level of arousal (the extremity of the emotion).”
A particularly spooky finding in modern psychology has been that if you sever the connection between the two hemispheres of the brain, patients seem to develop two different consciousnesses, something we talked about at length in the Made You Think episode on The Elephant in the Brain.
Researchers reached this conclusion by showing an instruction to one side of a patient's visual field, so only one hemisphere could see it. When the patient started carrying out the instruction and was asked out loud why they were doing it, they would make up a reason instead of saying they didn't know.
Spooky, right? But it might not mean that they have two consciousnesses floating around up there.
In a more recent study from 2017 trying to replicate some of these results, researchers found that subjects knew objects were present even when they couldn't report perceiving them, through some sort of secondary communication within the brain that doesn't depend on the severed connection between the hemispheres.
This suggests that what’s actually going on is that their perception gets split, while their consciousness actually remains in one piece… somehow.
The idea of "priming" is one of the most-cited and oft-repeated concepts in psychology research.
The premise is that if you "prime" someone with words that make them think of old age, they'll walk more slowly, which was demonstrated in a famous 1996 study that's been cited over 4,500 times (that's a lot). The same study reported that people primed with the concept of rudeness interrupted the experimenter more, and that people primed with African American stereotypes "reacted with more hostility to a vexatious request of the experimenter."
It even earned a spot in Daniel Kahneman's famous book "Thinking, Fast and Slow," which he now seems to regret.
Researchers attempted to replicate this famous study a few years ago, significantly increasing the number of participants and using a more objective timing method, and the results disappeared.
The only way they could successfully recreate the results of the original study was if they told the experimenters that certain participants would be walking slower. The experimenters had to be biased to find the results the researchers were looking for.
The original researcher wasn’t too happy about this, but that’s understandable considering his career-defining research seems to not hold up.
Oxytocin is known for its role in labor and lactation, but recent claims have suggested that intranasal oxytocin spray can make people more trusting and affectionate towards one another.
Much of that claim comes from an experiment in which subjects had to write down a secret and put it in an envelope, with the freedom to hand it back to the experimenter unsealed, sealed, or sealed with sticky tape.
In the original study, it seemed that a whiff of oxytocin could make people less inclined to seal the envelope. But it didn’t hold up.
This issue was figured out in two ways. First, some researchers realized there was a "file drawer" problem with oxytocin research: null results weren't seeing the light of day, making the effect look more robust than it was.
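To see how a file drawer manufactures an effect out of nothing, here's a minimal sketch (all parameters are my own illustrative assumptions): simulate many studies of a zero effect and "publish" only the significant, positive ones.

```python
# Run many studies of a nonexistent effect, publish only the significant
# ones, and the published record shows a convincing "effect."

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
studies, n = 200, 30   # 200 hypothetical labs, 30 subjects per group, true effect = 0

published = []
for _ in range(studies):
    treated = rng.normal(0.0, 1.0, n)   # "oxytocin" group (no real effect)
    control = rng.normal(0.0, 1.0, n)
    result = stats.ttest_ind(treated, control)
    if result.pvalue < 0.05 and result.statistic > 0:  # only "positive" findings get written up
        published.append(treated.mean() - control.mean())

print(f"Published studies: {len(published)} of {studies}")
print(f"Average published 'effect': {np.mean(published):.2f} SD (true effect: 0)")
```

The handful of chance hits that make it to print all point the same way, so the literature looks unanimous until someone goes hunting through the file drawers.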
Then, when researchers tried to replicate the envelope test with oxytocin spray, it failed, causing researchers to question the accepted norm that oxytocin could increase trust between people.
The original test for this suggested that if you taped some fake eyes above a donation container, people would give more. It had a very "Big Brother" aspect to it, and confirmed some people's suspicion that you need to watch people to make sure they behave.
You’ll see this one being used in the wild fairly frequently, too. Signs that you’re on camera, smiling faces by donation jars, there are tons of examples of it.
But…
Researchers conducted two meta-analyses of the existing research on the effect of "artificial surveillance cues" on generosity, and found no effect.
These surveillance cues neither increased how much people gave nor made it more likely that they would give at all. Essentially, there was no effect, and not enough data to support the popular claim that adding some eyes makes people behave more prosocially.
This was a fun one while it lasted: reminding people about money could supposedly make them more selfish, more supportive of inequality, more accepting of discrimination, and more in favor of free-market economies.
It was a good way to feel self-righteous as an anti-capitalist, and believe that “money is the root of all evil.”
This one simply failed to replicate. The effect was tested in 36 different labs, and only one was able to confirm the original result.
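One success out of 36 is roughly what chance alone would predict. A quick check, assuming each lab used the conventional 5% significance threshold:

```python
# If there were no real effect at all, how many of 36 labs would we
# expect to hit p < 0.05 by chance? (5% threshold is an assumption.)

from scipy import stats

labs, alpha = 36, 0.05
expected_false_positives = labs * alpha
p_at_least_one = 1 - stats.binom.pmf(0, labs, alpha)

print(f"Expected chance 'successes': {expected_false_positives:.1f}")         # ~1.8
print(f"P(at least one lab 'confirms' a null effect): {p_at_least_one:.0%}")  # ~84%
```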
This notion was popularized by Malcolm Gladwell in his book "Outliers." It's a feel-good idea: there's nothing special about top performers; they just put in enough work to reach the level of expert.
Specifically, he argued that it took 10,000 hours to become an expert in an area. Put in 10,000 hours, and you become an expert. Now get to work.
Since then, it’s become a popular saying and truism that if you just “put in your 10,000 hours” you can rise to the top of your field. But is it true?
This research was so misrepresented by Gladwell that the researcher behind it, Anders Ericsson, had to write an entire book explaining what he actually learned from his work.
There are two problems with the 10,000 hour theory. First, not all fields are alike. I could invent a game right now, say boosted board juggle racing, where you weave in and out of NYC traffic while juggling on an electric skateboard, and if I spent a week or two working on it I’d probably be an “expert” mostly because no one else would be stupid enough to do it.
On the other end, disciplines like chess, violin, gymnastics, soccer, and other incredibly competitive fields may take significantly more than 10,000 hours to become an expert in, considering how good everyone else is.
But the bigger problem is that not all practice is the same. 10,000 hours goofing off on the golf course with your buddies is not the same as 10,000 hours of guided instruction from a pro. You can't just "do your 10,000 hours" and expect to reach the top; you have to practice the right way.
This research has gotten extremely popular in the last decade, so popular that parents and educators are using it to try to sway how they educate and talk to children.
The idea is that people can have a “fixed mindset,” where they believe their intelligence and ability is fixed, or they can have a “growth mindset,” where they believe they can learn, improve, and succeed.
It sounds great, but there are some issues.
First, to be clear, replications of growth mindset research have mostly held up. The results are rarely as strong as the original results Dweck reported, but they’re there.
So why do I say this one is wrong?
First, it tends to get interpreted wrong. People will suggest that the growth mindset research suggests you can change your IQ, or that there’s no such thing as talent and “smartness,” which simply isn’t true, and isn’t suggested anywhere in the growth mindset research. All the growth mindset research says is that you will improve more at a task if you believe you can. Which, phrased like that, isn’t so impressive.
That also introduces an issue: people may believe they're bad at something simply because they're not trying hard enough, which can be incredibly demoralizing if you don't have the innate capacity to succeed in that area.
This one is murky though, and I suggest you read this much more thorough breakdown if you’re curious about the issues with growth mindsets.
Know any that I’m missing? Let me know on Twitter and I’ll add them to the list.