Social psychology's crisis of confidence

A recent NYT Magazine article has prompted colleagues and friends alike to ask me: what's going on in your discipline? Perhaps you've heard that there's a "crisis" in social psychology. It's been covered prominently (e.g., NYT, Atlantic, Slate, Wikipedia). This essay is my attempt at explaining.

Introduction

The present crisis in social psychology can be traced to two highly publicized events in 2010 and 2011—publication of impossible findings using accepted methods of rigorous psychological science (Bem, 2011; Simmons, Nelson, & Simonsohn, 2011), and cases of fraud, notably Diederik Stapel (Finkel, Eastwick, & Reis, 2015; Yong, 2012). These events prompted numerous special issues on methodological rigor, replication, and transparency (e.g., Ledgerwood, 2016; Stangor & Lemay, 2016), large-scale efforts to replicate findings in flagship journals (Open Science Collaboration, 2015), and ominous commentaries from leaders of the field (e.g., Kahneman, 2012: "I see a train wreck looming"). The current crisis echoes that of prior decades (Elms, 1975; Gergen, 1973; McGuire, 1973), but has notable differences (Hales, 2016; Spellman, 2014). First, I discuss how common research practices undermine our ability to make valid inferences. Second, I elaborate on why the field is grappling with these issues, and how the current crisis differs from those of the past. I conclude with recommendations for moving forward.

Common (and “Questionable”) Practices

Many research practices in social psychology (e.g., selectively reporting a subset of measures used) have long been recognized as "questionable" because they increase false inferences (e.g., Greenwald, 1975; Rosenthal, 1979). Yet, these practices remain surprisingly common (John, Loewenstein, & Prelec, 2012), due to perverse incentives, norms, or lack of awareness (Nosek, Spies, & Motyl, 2012). Many questionable practices are sometimes justifiable (particularly when reported transparently), though all of them increase the likelihood of false inferences (see Nosek et al., 2012, for a review). Here, I focus on the practice I see as most central to the current crisis.

The research practice most central to the present crisis is opaque and misleading reporting of researcher degrees of freedom (Simmons et al., 2011). Researcher degrees of freedom are the set of possible methodological and statistical decisions in the research process. For example, should outliers be excluded? Which items should be used? It is rare, and sometimes impractical, to have a priori predictions about how to make all, or even most, of these decisions. Thus, it is common practice to explore alternatives after seeing the data. In a given dataset, slightly different alternatives can lead to vastly different conclusions, and there may be no objective justification for taking one alternative over another (Gelman & Loken, 2013). For example, imagine a test that is non-significant when the data are log-transformed, and significant when they are truncated. These two approaches may be equally justified for skewed data. However, we often rationalize in favor of the alternative that meets our expectations, in this case, statistical confirmation of our hypothesis (John et al., 2012). Many other biases also lead us to favor positive alternatives (e.g., motivated reasoning and hindsight bias). Recall Richard Feynman's advice to Caltech's class of 1974: in science, "the first principle is that you must not fool yourself – and you are the easiest person to fool."

Furthermore, bias-prone decisions compound to exacerbate false inferences, even when each decision seems bias-free. By way of analogy, imagine the research process as a garden of forking paths. Each fork in the path represents a decision (e.g., truncating the data), which eventually leads to an outlet (representing the conclusion). The long and winding path taken through this labyrinth may be justified by scientific logic at each juncture. However, because there are so many junctures, it is improbable that any two scientists (or even the same scientist a year from now) would take the same path through the garden. Deviation at a single fork can lead to disparate outlets, because new decisions are informed by data that were altered by previous decisions (Gelman & Loken, 2013). This is how 29 research teams can examine the same dataset with the same hypothesis and come to 29 different conclusions (Silberzahn et al., 2017). When decisions are not determined a priori, they are inevitably guided by the data and by biases that influence the validity of inferences.
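To make the forking-paths problem concrete, here is a minimal simulation sketch (my own illustration, not code from any of the cited papers). Two groups are drawn from the same skewed null distribution, yet if the analyst may choose among a handful of defensible analysis paths (raw t-test, log transform, truncation, a nonparametric test) and report whichever reaches significance, the false-positive rate climbs well above the nominal 5%. The function name and the particular set of paths are arbitrary choices for illustration.

```python
# Sketch: undisclosed flexibility across "justifiable" analysis paths
# inflates false positives, even though no single path is fraudulent.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def analysis_paths(a, b):
    """Return p-values from several defensible ways to test the same data."""
    pvals = []
    pvals.append(stats.ttest_ind(a, b).pvalue)                    # raw scores
    pvals.append(stats.ttest_ind(np.log(a), np.log(b)).pvalue)    # log-transformed
    cut_a, cut_b = np.quantile(a, 0.95), np.quantile(b, 0.95)
    pvals.append(stats.ttest_ind(a[a <= cut_a], b[b <= cut_b]).pvalue)  # truncated
    pvals.append(stats.mannwhitneyu(a, b).pvalue)                 # nonparametric
    return pvals

n_sims, n = 5_000, 30
false_positives = 0
for _ in range(n_sims):
    a = rng.lognormal(0, 1, n)  # skewed null data: no true group difference
    b = rng.lognormal(0, 1, n)
    if min(analysis_paths(a, b)) < .05:  # report whichever path "works"
        false_positives += 1

print("Nominal alpha: .05")
print(f"False-positive rate when any path counts: {false_positives / n_sims:.3f}")
```

Each path on its own is defensible; it is the undisclosed choice among them, made after seeing the data, that inflates error.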

Researcher degrees of freedom increase the likelihood of false inferences; however, they do not intrinsically undermine scientific progress. Nonetheless, it is not only common practice to maintain flexibility in design and analysis (Gardner, Lidz, & Hartwig, 2005; Ioannidis, 2005), it is also common to publish results as if only a single path had been explored, or even as if a single path had been predetermined (Begley & Ellis, 2012; Bem, 2003; Giner-Sorolla, 2012). Such presentation makes it challenging to distinguish between confirmatory (more reliable) and exploratory (more tentative) research. Without a reliable representation of the current evidence, it is difficult to determine the degree to which an effect is understood and valid, as well as where to place future research efforts. This combination of abundant researcher degrees of freedom and opaque or misleading reporting is central to the current crisis.

Why We Are Reeling

Social psychology is grappling with a crisis (again) because formerly theoretical concerns about replicability (Elms, 1975; Gergen, 1973; McGuire, 1973) have been made tangible by empirical findings (Bem, 2011; Simmons et al., 2011) and fraud (e.g., Stapel)—both of which received considerable attention beyond the ivory tower. A Google News search of "replication crisis and social psychology" reveals over 7,000 articles in the last few years, including coverage in prominent outlets such as the NYT, BBC, and WSJ. Scholars agree that outright fraud is a problem, but a rare one, and thus not a primary concern. In contrast, questionable research practices are concerning because they are so common (John et al., 2012) and can result in impossible findings (Simmons et al., 2011). Many point to Daryl Bem's (2011) paper on "precognition" as the catalyst of the present crisis. The paper, published in JPSP, appears to show that people have extrasensory perception. The distinguished Lee Ross, who served as a peer reviewer, said of it: "clearly by the normal rules that we [used] in evaluating research, we would accept this paper… The level of proof here was ordinary. I mean that positively as well as negatively. I mean it was exactly the kind of conventional psychology analysis that [one often sees], with the same failings and concerns that most research has" (Engber, 2017). Bem empirically arrived at an improbable conclusion (ESP exists) using practices common enough to gain entry into our flagship journal. This prompted Simmons and colleagues (2011) to use the same common practices to conduct an experiment that came to an impossible conclusion (that listening to certain songs can change listeners' age). These events led many social psychologists to question common practices and revisit theoretical concerns of the past.

This Time is Different

The current crisis echoes that of prior decades (Gergen, 1973; McGuire, 1973), even centuries (Allport, 1968; Schlenker, 1974), in that it is concerned with replicability (Stangor & Lemay, 2016)—and rightfully so. The transparent communication of methods that enables scientific knowledge to be reproduced is the defining principle of the scientific method, and perhaps the only quality separating scientific belief from other beliefs (Nosek et al., 2012; Kuhn, 1962; Lakatos, 1978; Popper, 1934). Just as replicability is a sign of a functioning science, so too may be the perpetual self-conscious grappling with claims for scientific status. Psychologists and philosophers of science have long debated the scientific status of social psychology (Schlenker, 1974). In fact, such self-critical angst can be traced to the historical origin of the discipline when we differentiated ourselves from philosophy (Danziger, 1990). Yet, there are notable differences between the “crisis of confidence” in the 1970s (Elms, 1975), and that of today.

First, the former crisis was largely characterized by concerns about external validity, whereas today's crisis is primarily concerned with threats to statistical conclusion validity (Hales, 2016). For example, McGuire (1967, 1973) worried that our focus on the "ingenious stage manager" of the laboratory produces conditions that render null results meaningless and positive results banal, while at the same time being unlikely to replicate outside the laboratory. Another example is found in Gergen (1973), who argued that social psychological effects are hopelessly dependent on the historical and cultural context in which they are tested, and thus impossible to generalize to principles in a traditional scientific sense.

In contrast, today's crisis is concerned with the validity of statistical conclusions drawn from an experiment (Hales, 2016). Instead of asking, "does the effect generalize?" we are now asking, "does the effect exist at all?" In the previous crisis, Mook (1983) famously argued in defense of external validity: laboratory experimentation need only concern itself with "what can happen" (as opposed to "what does happen"). It is the theory tested by a particular experiment that generalizes, not the experiment itself. This is a compelling defense; however, the assertion rests on the validity of statistical conclusions. The contemporary crisis is grappling with the assertion that common practices not only demonstrate "what can happen," but that they can be used to show that "anything can happen." If anything can happen in our laboratories, what differentiates our science from science fiction?

A second way in which the current crisis is different is related to changes in technology and demographics (Spellman, 2014). Technological changes are eliminating journal space constraints and increasing the speed and transparency of communication. One consequence is that researchers who fail to replicate a finding can more readily share that information and see that they are not alone. Thus, it is easier to be critical of the finding itself rather than assume a methodological mistake was made (McGuire, 1973). Similarly, increases in the diversity of the field have precipitated more critical questioning of the status quo. In brief, today's crisis has elements of a social revolution that were missing from prior crises (Spellman, 2014). These factors will fuel a more persistent push for change this time around.

Recommendations

I conclude with recommended changes to improve confidence in our science. At the risk of presumption, I follow McGuire (1973) in submitting my suggestions as koans—full of paradox and caveat; they are intended to be at once provocative and banal.

Koan 1: "Does a person who practices with great devotion still fall into cause and effect? …No, such a person doesn't."

Preregister

In 2000, the National Heart, Lung, and Blood Institute (NHLBI) initiated a policy requiring all funded pharmaceutical trials to prospectively register outcomes in an uneditable database, ClinicalTrials.gov. After the policy went into effect, the prevalence of positive results reported in NHLBI-funded trials dropped from 57% to 8% (Kaplan & Irvin, 2015). Preregistration improves confidence in published findings because it reduces selective reporting. More broadly, preregistration makes researcher degrees of freedom more apparent, reduces opaque and misleading reporting (Nosek, Ebersole, DeHaven, & Mellor, 2017), and allows us to better distinguish between confirmatory and exploratory research (Nosek et al., 2012).

Koan 2: “Having our cake and eating it too.”

Explore Small, Confirm Big

There is growing recognition that "small sample sizes hurt the field in many ways" (Stangor & Lemay, 2016), because small samples undermine both statistical confidence and the perception of rigor (Button et al., 2013). However, there is a trade-off to reckon with—it is resource intensive and unreasonable to test all hypotheses with large samples (Baumeister, 2016). We can have our cake and eat it too if we instead explore new questions with small samples to determine which are worth putting to larger confirmatory tests (Sakaluk, 2016). True, so long as we call a spade a spade: small-N studies should leave the reader with the impression that the effect is tentative and exploratory, and researchers should then attempt to confirm "big" (Baumeister, 2016; Dovidio, 2016). There is, however, disagreement over implementation. Should there be separate journals for small-exploratory and large-confirmatory studies (Baumeister, 2016)? Should those studies appear in sequence in the same paper (Stangor & Lemay, 2016), or in different sections of the same journal (Dovidio, 2016)? My contention is that any of these approaches will be better than the status quo, so long as "truth in advertising" is maintained.
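As a rough illustration of "explore small, confirm big" (the numbers here are hypothetical, not drawn from any cited study): suppose a small exploratory study suggests an effect worth pursuing; the confirmatory sample can then be planned with a power analysis, ideally around a deliberately conservative effect size, since small exploratory studies tend to overestimate effects.

```python
# Sketch: plan the confirmatory "big" test after a small exploratory study.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

d_exploratory = 0.45  # hypothetical observed effect from n = 20 per cell
d_planning = 0.30     # conservative effect size assumed for planning

n_per_group = power_analysis.solve_power(
    effect_size=d_planning,   # Cohen's d
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)

print(f"Exploratory estimate: d = {d_exploratory}")
print(f"Planned-for effect:   d = {d_planning}")
print(f"Confirmatory n per group for 80% power: {n_per_group:.0f}")
```

The point of the sketch is simply that the small study informs which hypothesis to test and how big the confirmatory test must be; it is not itself treated as confirmatory evidence.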

Koan 3: "He who pays the piper calls the tune."

Gatekeepers and Replicators

Editors and reviewers tacitly agree that replicability is foundational to confidence and scientific progress, yet few journals incentivize replication. A recent study found that, of 1,151 psychology journals reviewed, only 3% explicitly stated that they accept replications (4.3% of 93 social psychology journals; Martin & Clarke, 2017). If researchers could be assured that replications get published, more would be conducted. However, what makes for a constructive replication is widely debated. A promising approach is to test hypotheses as exactly as possible, while simultaneously testing new conditions that refine and generalize (Hüffmeier, 2016). Publishers must provide carrots to replicate, preregister, increase sample sizes, and so on, or, as Nosek and colleagues (2012) suggest, we could do away with them as gatekeepers altogether. Make publishing trivial and engage in post-publication peer review, they say. This allows researchers to decide when content is worth publishing and shifts the priority of evaluators to methodological, theoretical, and practical significance, and away from apparent statistical significance. Registered Reports prompt a similar shift by enabling results-blind peer review (Munafò et al., 2017). Publishers could act as managers of peer review, focusing solely on bolstering confidence and rigor in the process, instead of also engaging in dissemination, marketing, and archiving. This is a worthy and feasible objective in the internet age (Nosek et al., 2012).

Koan 4: “What is the way? …An open-eyed man falling into the well.”

Transparency

The ultimate solution to our confidence dilemma is openness (Nosek et al., 2012): make more information from our studies available. Preregistration helps make the research plan transparent, but the field would also benefit from changing norms around sharing and archiving data, materials, and workflows (Simonsohn, 2013; Wicherts, Bakker, & Molenaar, 2011; Wicherts, Borsboom, Kats, & Molenaar, 2006). More transparency not only addresses fabrication, it also enables verification, correction, and aggregation of knowledge—all of which bolster confidence in (and progress of) science. There is concern that greater transparency unveils the messy complexity and conflicting evidence of our science, and that it enables science deniers and other malevolent critics in their efforts to mislead the public. To this I say, "fools believe and liars lie," regardless of truth or access. In my admittedly optimistic view, earnestly open presentation wins confidence in the long run. For example, scientists who concede failures, explore reasons for failure, or are transparent in publishing their failures (as opposed to denying their validity, hiding them, or not acting) are perceived as more able and ethical (Ebersole, Axt, & Nosek, 2016). Scientists overestimate the negative consequences of failed replications and transparent reporting (Fetterman & Sassenberg, 2015).

Conclusion

The present crisis is not entirely new, but it has critical differences. If we can use common research practices to find the impossible, where does that leave our science? I venture that these koans may move us to embrace our science not as history entirely (Gergen, 1973), but perhaps as evidence-based history. So too, in the style of Rozin (2001), may we come to embrace the exploratory and narrative nature of our present science. Perhaps then, we will again find our confidence.



What Do We Want our Scientific Discourse to Look Like?

I was recently quoted in an article appearing in the Observer, a publication of the Association for Psychological Science. In the article, Alison Ledgerwood quotes a diverse set of voices in psychology on the topic of scientific discourse, in part in response to Susan Fiske's piece in the Observer. Fiske takes issue with methodological critics of psychological science (whom she referred to as "methodological terrorists" in an earlier draft circulated online). Her article prompted many responses (see here) and a call, led by Ledgerwood, to write a more diverse (and less status-driven) article for the Observer on the topic. True to form, Alison quoted my writing fairly and elegantly brought together many other contributions.

Here, I provide my small contribution in its entirety.

We would serve each other, and science as a whole, better if we treated critique and communication of science as an open and humble process of discovery and improvement. To this end, I would like to see our scientific discourse focus more on methodology and evidence. This is easier said than done. Criticisms of the science are often construed as criticisms of the scientist. Even when we, as scientists, appreciate the criticism and recognize its scientific value, it still evokes concerns that others will lose trust in us and in our research. It is no wonder people are distressed by methodological criticism. However, focusing our discourse on methodology and evidence, with more awareness of how tone and context influence others’ perceptions of the scientist whose work is under the microscope, will help ensure healthy development of our science. Second, I would like to see an increase in open and humble scientific discourse. Openness may make our mistakes and shortcomings more apparent, and it may make it easier for others to critique our work, but it will surely improve our science. If we simultaneously place more value on humble communication, I expect criticisms will feel less personal and be easier to swallow as well. Finally, as a graduate student, I feel vulnerable publicly stating my thoughts on criticism and openness in science, which speaks to the climate of our discourse. It is essential that we have a communication environment in which graduate students, post-docs, and junior faculty from all backgrounds are rewarded for humbly and openly presenting methodologically sound ideas, research, and criticisms.

Meehl on theory testing never gets old.

The position of Popper and the neo-Popperians is that we do not "induce" scientific theories by some kind of straightforward upward seepage from the clearly observed facts, nor do we "confirm" theories as the Vienna positivists supposed. All we can do is to subject theories—including the wildest and "unsupported" armchair conjectures (for a Popperian, completely kosher)—to grave danger of refutation…

A theory is corroborated to the extent that we have subjected it to such risky tests; the more dangerous tests it has survived, the better corroborated it is. If I tell you that Meehl’s theory of climate predicts that it will rain sometime next April, and this turns out to be the case, you will not be much impressed with my “predictive success.” Nor will you be impressed if I predict more rain in April than in May, even showing three asterisks (for p < .001) in my t-test table! If I predict from my theory that it will rain on 7 of the 30 days of April, and it rains on exactly 7, you might perk up your ears a bit, but still you would be inclined to think of this as a “lucky coincidence.” But suppose that I specify which 7 days in April it will rain and ring the bell; then you will start getting seriously interested in Meehl’s meteorological conjectures. Finally, if I tell you that on April 4th it will rain 1.7 inches (.66 cm), and on April 9th, 2.3 inches (.90 cm) and so forth, and get seven of these correct within reasonable tolerance, you will begin to think that Meehl’s theory must have a lot going for it. You may believe that Meehl’s theory of the weather, like all theories, is, when taken literally, false, since probably all theories are false in the eyes of God, but you will at least say, to use Popper’s language, that it is beginning to look as if Meehl’s theory has considerable verisimilitude, that is, “truth-like-ness.”

Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806–834. doi:10.1037/0022-006X.46.4.806

Day 1: Does Coffee Make You Smarter?

This post is a part of a series, see previous post.

As with most resolutions, starting was the hardest part. It has been a little over a week since I declared my resolution to undertake a daily self-experiment (publicly, mind you) on coffee. But public humiliation couldn't stop me from procrastinating–change is hard. Of course, each morning post-declaration I woke up with a different rationalization for not changing my behavior: "It's still the holidays, I deserve a break."; "I got up too late, I'll do it tomorrow."; "I'm twitching, I can't wait, I need my coffee now!" But now, finally, the experiment is under way. I am using a tool developed by Stephen M. Kosslyn (a psychologist at Stanford) and his co-conspirators Yoni Donner and Nick Winter to facilitate this experiment. If you're interested in self-experiments, the quantified self, or how to use data for self-improvement, I suggest you check it out.

So today I started with the new morning routine and documented the process with some notes.

Day 1: Test-Before-Coffee

Start time: Mon, 6 Jan 2014 08:45:45 End time: Mon, 6 Jan 2014 09:02:42

Sporadic Notes: I commenced just 5 minutes after waking up. I am a groggy morning person, so it will be interesting to see how this may affect my results. I expect to see improvement throughout the testing period on test-before-coffee days, since it became evident that as I shook the sleepy dust from my eyes the tests got easier. I'll check back on this later in the month. Several of the tests had rules that took me a few seconds to understand. I restarted these confused trials to ensure an accurate measurement. On one of the tests ("Design Copy") I took a practice trial to make sure I understood the rules. The practice trial is recorded on the results page, so I assume it will be incorporated into the analysis.

Learnings: In the morning, I need to read directions twice. Use a practice trial if unsure. Tapping the space bar as fast as you can is a good way to get your family out of bed.

Stay tuned for forthcoming notes and results.

Does Coffee Make You Smarter?


Coffee has become my morning staple and a key to my productivity. After a good cup I feel fast, focused, witty, and smart. I am curious whether these feelings of augmented intellectual efficiency are real or just the illusion of a brain that has satisfied its fix. So I'm conducting a self-experiment. The protocol is simple. Every morning I will drink 8 ounces of my favorite liquid elixir between 7 and 9 AM. On day one I will take cognitive tests right before my cup. On day two I will drink my cup, wait 45 minutes, and then complete the tests. Other factors such as food intake and exercise will be tracked to examine correlates. This daily alternation will continue for about a month, at which point I will share my results.

 

[Cartoon courtesy Mark Anderson.]
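For what it's worth, here is the kind of comparison I have in mind once the month of alternating days is over, sketched with made-up numbers (the actual analysis will depend on how the testing tool reports scores):

```python
# Sketch with hypothetical data: compare composite test scores from
# test-before-coffee days against test-after-coffee days.
import numpy as np
from scipy import stats

before_coffee_days = np.array([78, 74, 81, 76, 79, 73, 80, 77, 75, 82])
after_coffee_days = np.array([84, 80, 83, 86, 79, 85, 88, 82, 81, 87])

t, p = stats.ttest_ind(after_coffee_days, before_coffee_days)
diff = after_coffee_days.mean() - before_coffee_days.mean()

print(f"Mean difference (after - before): {diff:.1f} points")
print(f"t = {t:.2f}, p = {p:.3f}")
# Tracked covariates (food intake, exercise) could later be added as
# predictors in a regression to examine correlates, as described above.
```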

Increasing statistical power in psychological research without increasing sample size

A quote to live by, from an article describing three simple ways to increase statistical power in psychological research (without increasing sample size), and why it's important.

Increasing statistical power is one of the rare times where what is good for science, and what is good for your career actually coincides. It increases the accuracy and replicability of results, so it’s good for science. It also increases your likelihood of finding a statistically significant result (assuming the effect actually exists), making it more likely to get something published. You don’t need to torture your data with obsessive re-analysis until you get p < .05. Instead, put more thought into research design in order to maximize statistical power. Everyone wins, and you can use that time you used to spend sweating over p-values to do something more productive. Like volunteering with the Open Science Collaboration.

– Open Science Collaboration
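The linked article describes its own concrete strategies; as one hedged illustration of the general point (my own simulation sketch, not taken from the article), switching from a between-subjects to a within-subjects design can buy substantial power at the same, or even a smaller, sample size, because stable individual differences drop out of the error term.

```python
# Sketch: more power without more participants, via a within-subjects design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, effect, person_sd, noise_sd, sims = 30, 0.5, 1.0, 1.0, 2_000

power_between, power_within = 0, 0
for _ in range(sims):
    # Between-subjects: different people in each condition (60 people total)
    ctrl = rng.normal(0, person_sd, n) + rng.normal(0, noise_sd, n)
    treat = rng.normal(0, person_sd, n) + rng.normal(effect, noise_sd, n)
    power_between += stats.ttest_ind(treat, ctrl).pvalue < .05

    # Within-subjects: the same 30 people measured in both conditions
    person = rng.normal(0, person_sd, n)
    cond_a = person + rng.normal(0, noise_sd, n)
    cond_b = person + rng.normal(effect, noise_sd, n)
    power_within += stats.ttest_rel(cond_b, cond_a).pvalue < .05

print(f"Power, between-subjects (n = {n} per group): {power_between / sims:.2f}")
print(f"Power, within-subjects  (n = {n} total):     {power_within / sims:.2f}")
```

With the assumed effect and variance components, the repeated-measures design detects the effect far more often, despite using half as many participants.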