What Do We Want our Scientific Discourse to Look Like?

I was recently quoted in an article appearing in the Observer, a publication of the Association for Psychological Science. In the article, Alison Ledgerwood quotes a diverse set of voices in psychology on the topic of scientific discourse, in part in response to Susan Fiske’s piece in the Observer. Fiske takes issue with methodological critics of psychological science (whom she referred to as “methodological terrorists” in an earlier draft circulated online). Her article prompted many responses (see here) and a call, led by Ledgerwood, to write a more diverse (and less status-driven) article for the Observer on the topic. True to form, Alison quoted my writing fairly and elegantly brought together many other contributions.

Here, I provide my small contribution in its entirety.

We would serve each other, and science as a whole, better if we treated critique and communication of science as an open and humble process of discovery and improvement. To this end, first, I would like to see our scientific discourse focus more on methodology and evidence. This is easier said than done. Criticisms of the science are often construed as criticisms of the scientist. Even when we, as scientists, appreciate the criticism and recognize its scientific value, it still evokes concerns that others will lose trust in us and in our research. It is no wonder people are distressed by methodological criticism. However, focusing our discourse on methodology and evidence, with more awareness of how tone and context influence others’ perceptions of the scientist whose work is under the microscope, will help ensure the healthy development of our science. Second, I would like to see an increase in open and humble scientific discourse. Openness may make our mistakes and shortcomings more apparent, and it may make it easier for others to critique our work, but it will surely improve our science. If we simultaneously place more value on humble communication, I expect criticisms will feel less personal and be easier to swallow as well. Finally, as a graduate student, I feel vulnerable publicly stating my thoughts on criticism and openness in science, which speaks to the climate of our discourse. It is essential that we have a communication environment in which graduate students, post-docs, and junior faculty from all backgrounds are rewarded for humbly and openly presenting methodologically sound ideas, research, and criticisms.


Meehl on theory testing: it never gets old.

The position of Popper and the neo-Popperians is that we do not “induce” scientific theories by some kind of straightforward upward seepage from the clearly observed facts, nor do we “confirm” theories as the Vienna positivists supposed. All we can do is to subject theories—including the wildest and “unsupported” armchair conjectures (for a Popperian, completely kosher)—to grave danger of refutation…

A theory is corroborated to the extent that we have subjected it to such risky tests; the more dangerous tests it has survived, the better corroborated it is. If I tell you that Meehl’s theory of climate predicts that it will rain sometime next April, and this turns out to be the case, you will not be much impressed with my “predictive success.” Nor will you be impressed if I predict more rain in April than in May, even showing three asterisks (for p < .001) in my t-test table! If I predict from my theory that it will rain on 7 of the 30 days of April, and it rains on exactly 7, you might perk up your ears a bit, but still you would be inclined to think of this as a “lucky coincidence.” But suppose that I specify which 7 days in April it will rain and ring the bell; then you will start getting seriously interested in Meehl’s meteorological conjectures. Finally, if I tell you that on April 4th it will rain 1.7 inches (.66 cm), and on April 9th, 2.3 inches (.90 cm) and so forth, and get seven of these correct within reasonable tolerance, you will begin to think that Meehl’s theory must have a lot going for it. You may believe that Meehl’s theory of the weather, like all theories, is, when taken literally, false, since probably all theories are false in the eyes of God, but you will at least say, to use Popper’s language, that it is beginning to look as if Meehl’s theory has considerable verisimilitude, that is, “truth-like-ness.”

Meehl, P. E. (1978). Theoretical risks and tabular asterisks: The slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806–834. doi:10.1037/0022-006X.46.4.806
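Meehl’s ladder of riskiness can be made concrete with a little arithmetic. Under a toy model (my own illustration, not Meehl’s) in which each of April’s 30 days rains independently with probability 0.5, each successively more specific prediction has a far smaller chance of succeeding by luck:

```python
from math import comb

p, days = 0.5, 30  # toy model: each day rains independently with p = 0.5

# "It will rain sometime in April": virtually guaranteed, so success is uninformative.
p_some_rain = 1 - (1 - p) ** days

# "It will rain on exactly 7 of the 30 days": one binomial outcome among many.
p_exactly_7 = comb(days, 7) * p**7 * (1 - p) ** (days - 7)

# "It will rain on THESE specific 7 days (and no others)": a single outcome
# out of 2**30 equally likely rain patterns -- a far riskier prediction.
p_those_7 = p**7 * (1 - p) ** (days - 7)

print(f"{p_some_rain:.6f}")  # essentially 1
print(f"{p_exactly_7:.4f}")
print(f"{p_those_7:.2e}")
```

The probabilities drop by many orders of magnitude at each rung, which is exactly why surviving the riskier test corroborates the theory more.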

Review of Blind Spot: The Hidden Biases of Good People by Mahzarin R. Banaji and Anthony G. Greenwald

Considerable ambiguity still surrounds the exact circumstances under which Michael Brown was shot dead by a police officer in the summer of 2014 in Ferguson, Missouri. One thing is certain: it has spurred mountains of media coverage, energized protests across the country, and given many Americans enough pause to rethink their views on the treatment of Black Americans in this country, particularly when it comes to law enforcement. But will merely thinking about the disadvantages experienced by millions of Black Americans and other minority groups move us toward a more egalitarian society? Mahzarin R. Banaji and Anthony G. Greenwald paint a more complex picture in their provocative and timely book, “Blind Spot.” The book sheds light on the social-cognitive science behind hidden biases that may help explain discrimination and the persistent gaps in health care, housing, employment, and law enforcement experienced by minorities.

Over the last fifty to seventy years there has been a sharp decline in the public expression of prejudice. In the early 1960s roughly forty percent of White Americans still favored racial segregation in schools. By 1995 that figure had dropped to nearly zero. Following suit, American governments and institutions dramatically shifted policies to align with normative racial attitudes, evidenced by Brown v. Board of Education, to name just one example. However, despite the decline of explicitly prejudiced attitudes and policy, Black disadvantage persists. Banaji and Greenwald point to national audit studies conducted in 1989 and 2000 in which Black and White actors were paired for similarity in appearance, education, and socioeconomic status, and then asked to apply for mortgage loans, purchase insurance, or secure an apartment lease. Results consistently demonstrate racial disadvantage. For example, in 2000, White homebuyers and apartment seekers were favored eight percent more often than Black ones. Similar employment audits reveal a sixteen percent favoring of White over Black job applicants. Other unobtrusive measures also point to the persistence of racial discrimination despite what people report on questionnaires. For example, the lost letter technique is an unobtrusive method researchers use to measure people’s attitudes. An exemplar of this method involves a stamped and addressed envelope that is left open in a public place. The envelope contains a graduate school application and a photo of either a White or Black applicant. One iteration of this study found that the letter was mailed 45 percent of the time for White applicants and only 37 percent of the time for Black applicants.

In light of results from audit studies and the lost letter technique, keen readers may be asking themselves why Black Americans continue to experience clear forms of discrimination despite changes in public policy and in the reported attitudes of Americans. Covering the last two decades of innovative research on what social-cognitive scientists such as Banaji and Greenwald call implicit bias, “Blind Spot” attempts to answer this difficult and pressing question. The tone of the book is careful and professional but also provocative and personal. It is filled with interesting anecdotes on the path to scientific discovery, candid self-analyses, and difficult lines of questioning that provide a rare window into the minds of two gifted psychological scientists. In “Blind Spot” Banaji and Greenwald accomplish the exceptional feat of conveying real hard-nosed science in a way that makes the science feel real.

On an intuitive level, we all understand that what we do, say, believe, and feel is not always guided by what we consciously think. We have all driven home on a regular commute only to realize upon arrival that we have no recollection of the drive. Or we have prepared cereal for breakfast only to return the first bite to the bowl out of shock at the sour taste of orange juice. Just as driving a car or pouring cereal can be done without conscious awareness, so too can we hold beliefs or attitudes without conscious awareness. The attitudes we hold have a reflective, conscious, or explicit form as well as an automatic, unconscious, or implicit form. This is not a new idea. Philosophers and scientists have been writing about the two-sided nature of the mind for hundreds of years, and thanks to Freud the unconscious has become something everyone and their grandmother is familiar with. The novel contribution that Banaji and Greenwald outline in “Blind Spot” is at its core a technique, not an idea. Measuring the unconscious has been an exceptionally challenging task for scientists, but as two of the leading researchers on implicit social cognition, Banaji and Greenwald developed a creative way to measure how our unconscious beliefs guide our behavior, using what they call the Implicit Association Test (IAT). The most basic form of the IAT measures how long it takes people to organize pictures, objects, or names with words that are either positive or negative. If it takes longer to press a key to confirm a match between, say, a picture of a Black person and a positive word than it does to press a key to confirm a match between a picture of a White person and a positive word, and it also takes a relatively shorter time to confirm Black with negative words than White with negative words, then that person holds a cognitive bias that associates White with good and Black with bad.
By measuring the time it takes people to make associations, the IAT capitalizes on how the brain stores information. Concepts (such as Black and good) that are more highly associated with one another can be retrieved faster, and thus result in faster reaction times. The underlying assumption of the IAT is that concepts that are closely associated in the brain are in essence preferred and more swiftly accessed, or at least have been reinforced over time, perhaps through the mountains of media and cultural images that associate White with good and Black with bad. The IAT is a breakthrough because it quantifies implicit cognitive biases that are particularly hard to capture through traditional self-report measures for attitudes, such as racial prejudice, that people are highly motivated to hide or regulate. As a result, scientists can now measure attitudes in a way that reveals estimates of discrimination similar to those found in more cumbersome audit studies or unobtrusive techniques. Based on millions of responses to the race IAT, scientists now know that seventy-five percent of Americans display an implicit preference for White relative to Black (take the test here).
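To make the scoring logic concrete, here is a simplified sketch of my own (the published IAT scoring algorithm is a more involved D-score procedure with trial filtering and error penalties; the data below are hypothetical): the basic effect is a standardized difference in mean response latencies between the two pairing conditions.

```python
from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT effect: the difference in mean reaction time (ms)
    between the incompatible and compatible blocks, divided by the
    pooled standard deviation of all latencies."""
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / pooled_sd

# Hypothetical latencies (ms) for one participant.
compatible = [650, 700, 620, 680, 710]    # e.g., White+good / Black+bad pairing
incompatible = [820, 790, 860, 800, 840]  # e.g., Black+good / White+bad pairing

d = iat_d_score(compatible, incompatible)
print(round(d, 2))  # a positive score means faster responses on the "compatible" pairing
```

Standardizing by the participant’s own latency variability is what lets differences of mere hundredths of a second become a stable individual-difference measure.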

One critique of Banaji and Greenwald’s book concerns their use of the word bias. After all, the IAT is a measure of an implicit association, not an implicit bias. Bias is a loaded term, and its use in the context of the race IAT implies racial prejudice. The Oxford dictionary defines bias as a “prejudice in favor of or against one thing, person or group compared with another, usually in a way considered to be unfair.” The critical component of this definition is that the preference is unfair. This raises the question: is it unfair for a person to hold an unconscious cognitive preference for “Black is bad” when they honestly do not hold that belief in an explicit sense? Stated another way: when is it reasonable to conclude that an implicit association is an implicit bias? I like to define fairness as everyone getting what she or he needs. Using this definition of fairness, we can conclude that when the needs of people are disregarded, overlooked, or suppressed as a result of the mere association between Black and bad, and White and good, it is reasonable to call this association a bias. By this logic, there is ample evidence that scores on the IAT are in fact linked to behaviors that prevent people from getting what they need. It might be hard for people unfamiliar with the scientific literature to see how differences on the order of hundredths of a second on the IAT can lead to real-world discrimination; however, Banaji and Greenwald point to numerous studies in which the IAT predicts differences in actual discriminatory behavior. For example, the race IAT predicted voting for John McCain rather than Barack Obama after controlling for a number of other factors. That said, it is important to note that most of these studies are correlational. That is, it is possible the relationship between the IAT and discriminatory behavior is reversed, such that discriminatory behaviors lead people to have a Black-is-bad IAT result.
In order to examine the causal relationship between the IAT and discriminatory behavior, we would need to randomly assign people to have high or low race IAT scores. Clearly this is not possible, but a few studies attempt to experimentally manipulate the IAT by priming people with positive Black role models (e.g., Michael Jordan) and negative White role models (e.g., the shoe bomber) and then measuring the effect on behavior. Banaji and Greenwald point to one or two such studies, which provide preliminary evidence that experimental changes in the IAT do in fact lead to the expected changes in behavior; however, the issue is largely avoided in their book. Perhaps the authors decided this is an issue best left to scientific journals, and that the current weight of evidence points in the hypothesized direction. I agree, but it is worth a note of caution to the reader.

Even with evidence that implicit associations lead to real-world discrimination, it is easy to assume that this applies only to people who secretly harbor prejudiced beliefs. However, this is not the case. It is possible to hold an explicit attitude that conflicts with an implicit attitude. In fact, the title of Banaji and Greenwald’s book is inspired by the idea that many good people who hold egalitarian views and have good intentions also have a blind spot (that is, they hold implicit attitudes) that can prevent them from acting in line with their egalitarian values. The authors skillfully lay out an often-understated way in which implicit biases, such as the Black-is-bad association, lead to hidden biases in the way well-intentioned people act. For example, non-action, selective helping, and in-group favoritism (which at its worst takes the form of nepotism) can be as innocent as giving to a charitable organization that primarily assists needy people who happen to be White. By contributing to such a charity, people are not directly harming minority racial groups, but they are contributing to the relative advantage of White communities. In this way, “intergroup discrimination is less and less likely to involve explicit acts of aggression toward the out-group and more likely to involve everyday acts of helping the in-group… [which] may be the largest contributing factor to the relative disadvantages experienced by Black Americans and other already disadvantaged groups.”

This is a hard pill for well-intentioned White Americans to swallow. It is difficult for people to identify their non-actions, let alone feel guilty about them. As the sociologist Peggy McIntosh put it, White privilege is “an invisible weightless knapsack of assurances, tools, maps, guides, codebooks, passports, visas, clothes, compass, emergency gear, and blank checks.” Taking a close look at what it means to be advantaged, or to have White privilege, involves unpacking that knapsack. Unpacking the knapsack requires a very difficult kind of self-reflection, one that highlights the fact that the things we may feel we have earned are in fact gifts we were given for being born a certain color. Further, we know from studies of cognitive dissonance theory that becoming aware of hidden biases or implicit attitudes that conflict with our beliefs and actions violates the natural human striving for mental harmony, or consonance. It produces discomfort and a striving to align the discordant parts, which raises the question: how can we bring our explicit and implicit attitudes into alignment?

If there is one part of “Blind Spot” that leaves readers wanting more, it is Banaji and Greenwald’s answer to the question: “how can I reduce my implicit biases?” The science of changing or avoiding the traps of implicit biases is nascent. To the authors’ credit, they are careful with their language and conclusions, staying as close as possible to evidence for which there is scientific consensus throughout the book. This is refreshing and appreciated considering how many popular science books reach for grand truths and easy solutions, but by sticking to their scientific guns, Banaji and Greenwald leave the reader feeling somewhat hopeless. Implicit attitudes run deep. They are resistant to interventions and remain relatively stable over time. Current scientific work is exploring the bounds of this stability, and more intervention research is needed to explore how implicit attitudes can be shaped over time. In the meantime, the best way to prevent implicit biases from guiding our behavior is to “outsmart the machine.” That is, we can develop strategies that reduce the likelihood that implicit biases play a role in health care, employment, or housing loan decisions. For example, the National Heart, Lung, and Blood Institute has drafted guidelines for cholesterol screening at certain ages to prevent providers from forgoing cholesterol screening for women on the basis that they are less likely to develop heart disease. While women have a lower risk than men of developing heart disease, they are still at risk. The guidelines help ensure providers do not rely on their (correct but biased) gut feeling that women probably do not need the screening as much as men. The result has been higher-quality care for women and earlier detection of heart disease risk.
An important lesson from “Blind Spot” is that it can be a remarkably fruitful and worthwhile exercise to explore ways to outsmart your own machinery in order to reduce biases in your actions. If you are in a position to judge the merits of others (e.g., as an employer), blind yourself to information such as name, ethnicity, and gender as much as possible before passing judgment.

If outrage at police violence against young Black men and the political frenzy that ensues after each incident are any indication of the beliefs most Americans hold toward Black people, there is no better time to read “Blind Spot.” Banaji and Greenwald shed light on a question running through the minds of many well-intentioned Americans: “Why is this still happening?” Perhaps if we put some thought into how the way we think influences our behavior, we can devise more inventive and pervasive mechanisms for avoiding the traps of implicit biases. Only then will we be able to act in ways that reflect who we say we are and want to be.

I’ll close with a hopeful video to lighten the mood.

Peter McGraw and Joel Warner on Humor

I have a shameless plug to make. My first interview for The Society Pages’ podcast, Office Hours, hit the airwaves today (subscribe on iTunes). Dr. Peter McGraw and Joel Warner were kind enough to chat with me at Portland’s Bridgetown Comedy Festival about their new book, The Humor Code. We talked (and laughed) about Benign Violation Theory and their travels around the world in search of what makes things funny (listen here).

Peter McGraw (@PeterMcGraw) is a marketing and psychology professor at the University of Colorado Boulder and founder of the Humor Research Lab (aka HuRL). Joel Warner (@joelmwarner) is a journalist, writing for many prominent publications including Wired, The Boston Globe, and Slate.


Links That Tickled Me



Quantified-Self, Experience Sampling, and Ecological Momentary Assessment Tech for Behavioral Science

Lately I’ve been dabbling in self-quantification, exploring various tools and procedures to better track and understand myself. There is a vibrant community of quantified-selfers actively participating in online forums, local meet-ups, and international conferences. There are mountains of data and code that self-experimenters share publicly, and a growing number of tools available to assist quantifiers.

I have only scratched the surface in my exploration. I have a self-experiment on coffee and cognitive skills underway (see here and here, results soon) and I’ve kept close tabs on my running for 6 months now. I recently started tracking my mood, hydration, and working habits (among other things) with the Reporter app. Reporter is a mobile app from Nicholas Felton and friends, based on the Feltron Annual Report. The Feltron Report is part diary, part data visualization, part statistical report on the day-to-day life of Nicholas Felton. It covers the mundane to the fantastical to the pragmatic. If you like radio (I do), 99percentinvisible covers this aesthetic data nerd’s report with beautifully engineered sound candy. The Reporter app allows you to customize questions so you can track whatever you find interesting, and it has built-in ecological momentary assessment, or experience sampling, a scientific procedure for collecting information on human behavior, emotion, etc. in real time. You have some control over the schedule of data collection: you can set the number of times throughout the day you want the app to ping you with questions, and you can answer the same set or a separate set of questions when waking in the morning or going to bed in the evening.
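The scheduling behind apps like this is straightforward to sketch. Here is a toy signal-contingent sampler of my own devising (the function and parameter names are hypothetical, not Reporter’s or PACO’s API): it draws random ping times within a waking window, redrawing until the pings are reasonably spread out.

```python
import random
from datetime import datetime, timedelta

def daily_ping_schedule(date, n_pings=6, start_hour=9, end_hour=21, min_gap_minutes=30):
    """Draw n_pings random times in the waking window, redrawing until all
    pings are at least min_gap_minutes apart, so prompts don't bunch up."""
    window_start = datetime(date.year, date.month, date.day, start_hour)
    window_minutes = (end_hour - start_hour) * 60
    while True:
        offsets = sorted(random.sample(range(window_minutes), n_pings))
        gaps = [b - a for a, b in zip(offsets, offsets[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return [window_start + timedelta(minutes=m) for m in offsets]

random.seed(1)  # fixed seed so the example is reproducible
for t in daily_ping_schedule(datetime(2014, 6, 2).date(), n_pings=4):
    print(t.strftime("%H:%M"))
```

Random timing matters methodologically: if participants could anticipate the prompts, their reports would no longer be a representative sample of the day.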

Experience sampling and ecological momentary assessment are not new methodologies. They’ve been used by social and behavioral scientists for decades, but the technology is changing, expanding what is possible. Tamlin Conner at the University of Otago describes the new possibilities for experience sampling and ecological momentary assessment research in the Handbook of Research Methods for Studying Daily Life. This chapter is a good read on the conceptual and methodological reasons why more behavioral scientists should explore this area.

Does physical activity promote emotional well-being? Do people eat differently when away from home, or when others are around?… How is behavior affected by the physical settings in which we live, work, and play? Methods for studying daily life experiences have arrived, fueled by questions of this sort and new technologies… Daily life experience methods are familiar, albeit not yet standard, tools in several literatures (e.g., medicine and health, emotion, social and family interaction). In the National Institutes of Health’s Healthy People 2020 initiative, Bachrach (2010) highlighted these methods among the “tools that can revolutionize the behavioral and social sciences,” notwithstanding the fact that “researchers are still in the earliest stages of tapping into [their] vast potential.”… Moreover, new technologies… promise to increase dramatically the scope and accessibility of these methods. In short, there is every reason to expect that daily life research methods will become more influential in the near future.

I will continue with the self-quantification and start sharing some of my findings, but my next step is to explore how methods and tools from the quantified-self world, such as experience sampling and ecological momentary assessment, can be used in behavioral and psychological research. PACO is one tool that has piqued my interest. It allows the user to design an experience sampling experiment, and then administer and distribute the experiment to a population via email. PACO comes from Bob Evans, a Google employee, and while it is still in beta I think it has a lot of potential. Also, it doesn’t hurt that it is free and open source.

There are a number of other companies and apps emerging in this realm. With new technologies come new possibilities for research (and probably money to be made for those who can develop technologies that enhance the research capabilities of behavioral and social scientists). Tamlin was kind enough to document and share a list of tools on the market (see this link: Conner, T. S. (2013, Nov). Experience sampling and ecological momentary assessment with mobile phones. Retrieved from http://www.otago.ac.nz/psychology/otago047475.pdf).

Links That Tickled Me

A lot of ear hair tickling this week…

This will make the modern lifeguard cringe.


Bleak and poignant, as is his way…