social psychology’s crisis of confidence

A recent NYT Magazine article has prompted colleagues and friends alike to ask me: what’s going on in your discipline? Perhaps you’ve heard that there’s a “crisis” in social psychology. It’s been covered prominently (e.g., NYT, Atlantic, Slate, Wikipedia). This essay is my attempt at explaining.

Introduction

The present crisis in social psychology can be traced to two highly publicized events in 2010 and 2011—publication of impossible findings using accepted methods of rigorous psychological science (Bem, 2011; Simmons, Nelson, & Simonsohn, 2011), and cases of fraud, notably Diederik Stapel (Finkel, Eastwick, & Reis, 2015; Yong, 2012). These events prompted numerous special issues on methodological rigor, replication, and transparency (e.g., Ledgerwood, 2016; Stangor & Lemay, 2016), large-scale efforts to replicate findings in flagship journals (Open Science Collaboration, 2015), and ominous commentaries from leaders of the field (e.g., Kahneman (2012), “I see a train wreck looming”). The current crisis echoes that of prior decades (Elms, 1975; Gergen, 1973; McGuire, 1973), but has notable differences (Hales, 2016; Spellman, 2014). First, I discuss how common research practices undermine our ability to make valid inferences. Second, I elaborate on why the field is grappling with these issues, and how the current crisis differs from those of the past. I conclude with recommendations for moving forward.

Common (and “Questionable”) Practices

Many research practices in social psychology (e.g., selectively reporting a subset of measures used) have long been recognized as “questionable” because they increase false inferences (e.g., Greenwald, 1975; Rosenthal, 1979). Yet these practices remain surprisingly common (John, Loewenstein, & Prelec, 2012), due to perverse incentives, norms, or lack of awareness (Nosek, Spies, & Motyl, 2012). Many questionable practices are sometimes justifiable (particularly when reported transparently), though all of them increase the likelihood of false inferences (see Nosek et al., 2012, for a review). Here, I focus on the practice I see as most central to the current crisis.

The research practice most central to the present crisis is the opaque and misleading reporting of researcher degrees of freedom (Simmons et al., 2011). Researcher degrees of freedom are the set of possible methodological and statistical decisions in the research process. For example, should outliers be excluded? Which items should be used? It is rare, and sometimes impractical, to have a priori predictions about how to make all, or even most, of these decisions. Thus, it is common practice to explore alternatives after seeing data. In a given dataset, slightly different alternatives can lead to vastly different conclusions, and there may be no objective justification for taking one alternative over another (Gelman & Loken, 2013). For example, imagine a test that is non-significant when data are log-transformed, and significant when they are truncated. These two approaches may be equally justified for skewed data. However, we often rationalize in favor of alternatives that meet our expectations; in this case, statistical confirmation of our hypothesis (John et al., 2012). Many other biases also lead us to favor positive alternatives (e.g., motivated reasoning and hindsight bias). Recall Richard Feynman’s advice to Caltech’s class of 1974: in science, “the first principle is that you must not fool yourself – and you are the easiest person to fool.”
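To make this concrete, here is a minimal simulation sketch in R (the language I use for my own analyses elsewhere on this blog). The three analysis “paths” are my own illustrative choices, not taken from Simmons et al. (2011); the point is simply that reporting whichever defensible path happens to reach significance inflates the false-positive rate well beyond the nominal 5%.

```r
# Researcher degrees of freedom as false-positive inflation: both groups
# are drawn from the SAME skewed distribution, so any "effect" is false.
set.seed(42)

one_study <- function(n = 30) {
  x <- rlnorm(n)  # group 1 (log-normal, i.e., skewed)
  y <- rlnorm(n)  # group 2, same distribution (no true effect)
  cap <- function(v) pmin(v, quantile(v, 0.95))  # truncate extreme values

  p_raw   <- t.test(x, y)$p.value            # path 1: analyze raw data
  p_log   <- t.test(log(x), log(y))$p.value  # path 2: log-transform first
  p_trunc <- t.test(cap(x), cap(y))$p.value  # path 3: truncate outliers

  c(one_path = p_raw < 0.05,                      # decision fixed a priori
    any_path = min(p_raw, p_log, p_trunc) < 0.05) # report what "worked"
}

rowMeans(replicate(5000, one_study()))
# one_path hovers near the nominal .05; any_path is noticeably higher,
# even though each individual path is a defensible choice for skewed data.
```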

Furthermore, bias-prone decisions compound to exacerbate false inferences, even when each decision is seemingly bias-free. By way of analogy, imagine the research process as a garden of forking paths. Each fork in the path represents a decision (e.g., truncating data), which eventually leads to an outlet (representing the conclusion). The long and winding path taken through this labyrinth may be justified by scientific logic at each juncture. However, because there are so many junctures, it is improbable that any two scientists (or even the same scientist a year from now) would take the same path through the garden. Deviation at a single fork can lead to disparate outlets, because new decisions are informed by data that were altered by previous decisions (Gelman & Loken, 2013). This is how 29 research teams can examine the same dataset with the same hypothesis and arrive at markedly different conclusions (Silberzahn et al., 2017). When decisions are not determined a priori, they are inevitably guided by data and biases that influence the validity of inferences.

Researcher degrees of freedom increase the likelihood of false inferences; however, they do not intrinsically undermine scientific progress. Nonetheless, it is not only common practice to maintain flexibility in design and analysis (Gardner, Lidz, & Hartwig, 2005; Ioannidis, 2005), it is also common to publish results as if only a single path were explored, or even as if a single path were predetermined (Begley & Ellis, 2012; Bem, 2003; Giner-Sorolla, 2012). Such presentation makes it challenging to distinguish between confirmatory (more reliable) and exploratory (more tentative) research. Without reliable representation of the current evidence, it is difficult to determine the degree to which an effect is understood and valid, as well as where to place future research efforts. This combination of abundant researcher degrees of freedom and opaque or misleading reporting is central to the current crisis.

Why we are Reeling

Social psychology is grappling with a crisis (again) because formerly theoretical concerns about replicability (Elms, 1975; Gergen, 1973; McGuire, 1973) have been made tangible by empirical findings (Bem, 2011; Simmons et al., 2011) and fraud (e.g., Stapel)—both of which received considerable attention beyond ivory towers. A Google News search of “replication crisis and social psychology” reveals over 7,000 articles in the last few years, including prominent outlets such as NYT, BBC, and WSJ. Scholars agree that outright fraud is a problem, but a rare one, and thus not a primary concern. In contrast, questionable research practices are concerning because they are so common (John et al., 2012) and can result in impossible findings (Simmons et al., 2011). Many point to Daryl Bem’s (2011) paper on “precognition” as the catalyst of the present crisis. The paper, published in JPSP, appears to show that people have extrasensory perception. The distinguished Lee Ross, who served as a peer reviewer, said of it, “clearly by the normal rules that we [used] in evaluating research, we would accept this paper… The level of proof here was ordinary. I mean that positively as well as negatively. I mean it was exactly the kind of conventional psychology analysis that [one often sees], with the same failings and concerns that most research has” (Engber, May 2017). Bem empirically arrived at an improbable conclusion (ESP exists) using practices common enough for entry into our flagship journal. This prompted Simmons and colleagues (2011) to use the same common practices to conduct an experiment that came to an impossible conclusion (that listening to certain songs can change the listener’s age). These events led many social psychologists to question common practices and revisit theoretical concerns of the past.

This Time is Different

The current crisis echoes that of prior decades (Gergen, 1973; McGuire, 1973), even centuries (Allport, 1968; Schlenker, 1974), in that it is concerned with replicability (Stangor & Lemay, 2016)—and rightfully so. The transparent communication of methods that enables scientific knowledge to be reproduced is the defining principle of the scientific method, and perhaps the only quality separating scientific belief from other beliefs (Nosek et al., 2012; Kuhn, 1962; Lakatos, 1978; Popper, 1934). Just as replicability is a sign of a functioning science, so too may be the perpetual self-conscious grappling with claims for scientific status. Psychologists and philosophers of science have long debated the scientific status of social psychology (Schlenker, 1974). In fact, such self-critical angst can be traced to the historical origin of the discipline when we differentiated ourselves from philosophy (Danziger, 1990). Yet, there are notable differences between the “crisis of confidence” in the 1970s (Elms, 1975), and that of today.

First, the former crisis was largely characterized by concerns about external validity, whereas today’s crisis is primarily concerned with threats to statistical conclusion validity (Hales, 2016). For example, McGuire (1967, 1973) worried that our focus on the “ingenious stage manager” of the laboratory produces conditions that render null results meaningless and positive results banal, while at the same time being unlikely to replicate outside the laboratory. Another example is found in Gergen (1973), who argued that social psychological effects are hopelessly dependent on the historical and cultural context in which they are tested, and thus impossible to generalize to principles in a traditional scientific sense.

In contrast, today’s crisis is concerned with the validity of statistical conclusions drawn from an experiment (Hales, 2016). Instead of asking, “does the effect generalize?” we are now asking, “does the effect exist at all?” In the previous crisis, Mook (1983) famously argued in defense of external validity: laboratory experimentation need only concern itself with “what can happen” (as opposed to “what does happen”). It is the theory tested by a particular experiment that generalizes, not the experiment itself. This is a compelling defense; however, the assertion rests on the validity of statistical conclusions. The contemporary crisis is grappling with the realization that common practices not only demonstrate “what can happen,” but can be used to show that “anything can happen.” If anything can happen in our laboratories, what differentiates our science from science fiction?

A second way in which the current crisis differs relates to changes in technology and demographics (Spellman, 2014). Technological changes are eliminating page-space constraints and increasing the speed and transparency of communication. One consequence is that researchers who fail to replicate a finding can more readily share that information, and see that they are not alone. Thus, it is easier to be critical of the finding itself rather than assume a methodological mistake was made (McGuire, 1973). Similarly, increases in the diversity of the field have precipitated more critical questioning of the status quo. In brief, today’s crisis has elements of a social revolution that were missing from prior crises (Spellman, 2014). These factors will fuel a more persistent push for change this time around.

Recommendations

I conclude with recommended changes to improve confidence in our science. For fear of presumption, I follow McGuire (1973) in submitting my suggestions as koans—full of paradox and caveat; they are intended to be at once provocative and banal.

Koan 1: “Does a person who practices with great devotion still fall into cause and effect?…No, such a person doesn’t.”

Preregister

In 2000, the National Heart, Lung, and Blood Institute (NHLBI) initiated a policy requiring all funded pharmaceutical trials to prospectively register outcomes in an uneditable database, ClinicalTrials.gov. After the policy went into effect, the prevalence of positive results reported in NHLBI-funded trials dropped from 57% to 8% (Kaplan & Irvin, 2015). Preregistration improves confidence in published findings because it reduces selective reporting. More broadly, preregistration makes researcher degrees of freedom more apparent, reduces opaque and misleading reporting (Nosek, Ebersole, DeHaven, & Mellor, 2017), and allows us to better distinguish between confirmatory and exploratory research (Nosek et al., 2012).

Koan 2: “Having our cake and eating it too.”

Explore Small, Confirm Big

There is growing recognition that “small sample sizes hurt the field in many ways” (Stangor & Lemay, 2016), because they undermine both statistical confidence and the perception of rigor (Button et al., 2013). However, there is a trade-off to reckon with—it is resource-intensive and unreasonable to test all hypotheses with large samples (Baumeister, 2016). We can have our cake and eat it too if we instead explore new questions with small samples to determine which are worth putting to larger confirmatory tests (Sakaluk, 2016)—so long as we call a spade a spade. Small-N studies should leave the reader with the impression that the effect is tentative and exploratory; researchers should then attempt to confirm “big” (Baumeister, 2016; Dovidio, 2016). There is, though, disagreement over implementation. Should there be separate journals for small-exploratory and large-confirmatory studies (Baumeister, 2016)? Should those studies appear in sequence in the same paper (Stangor & Lemay, 2016), or in different sections of the same journal (Dovidio, 2016)? My contention is that any of these approaches will be better than the status quo, so long as “truth in advertising” is maintained.
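To put rough numbers on the trade-off, here is a small sketch using base R’s power.t.test (the effect size and power targets are illustrative choices of mine, not prescriptions):

```r
# "Explore small, confirm big" in power terms (two-sample t-test).

# An exploratory study with n = 20 per cell, hoping to detect a
# medium effect (d = 0.5), is badly underpowered:
power.t.test(n = 20, delta = 0.5, sd = 1, sig.level = 0.05)$power
# ~0.34, so any "finding" here should be advertised as tentative.

# A confirmatory test of that same effect at 90% power needs far more data:
ceiling(power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.90)$n)
# ~86 per cell: that is the "confirm big" step.
```

The exploratory study is cheap and tells us which hypotheses merit the 86-per-cell investment; it just cannot, by itself, carry confirmatory weight.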

Koan 3: “He who pays the piper calls the tune.”

Gatekeepers and Replicators

Editors and reviewers tacitly agree that replicability is foundational to confidence and scientific progress, yet few journals incentivize replication. A recent study found that, of 1,151 psychology journals reviewed, only 3% explicitly stated that they accept replications (4.3% of 93 social psychology journals; Martin & Clarke, 2017). If researchers could be assured that replications get published, more would be conducted. However, what makes for a constructive replication is widely debated. A promising approach is to test hypotheses as exactly as possible, while simultaneously testing new conditions that refine and generalize (Hüffmeier, 2016). Publishers must provide carrots for replication, preregistration, larger samples, etcetera, or, as Nosek and colleagues (2012) suggest, we should do away with publishers altogether. Make publishing trivial and engage in post-publication peer review, they say. This allows researchers to decide when content is worth publishing, and it shifts the priority of evaluators toward methodological, theoretical, and practical significance, and away from apparent statistical significance. Registered reports prompt a similar shift by enabling results-blind peer review (Munafò et al., 2017). Publishers could act as managers of peer review, focusing solely on bolstering confidence and rigor in the process, instead of also engaging in dissemination, marketing, and archiving. This is a worthy and feasible objective in the internet age (Nosek et al., 2012).

Koan 4: “What is the way? …An open-eyed man falling into the well.”

Transparency

The ultimate solution to our confidence dilemma is openness (Nosek et al., 2012): make more information from our studies available. Preregistration helps make the research plan transparent, but the field would also benefit from changing norms around sharing and archiving data, materials, and workflows (Simonsohn, 2013; Wicherts, Bakker, & Molenaar, 2011; Wicherts, Borsboom, Kats, & Molenaar, 2006). More transparency not only addresses fabrication, it also enables verification, correction, and aggregation of knowledge—all of which bolster confidence in (and progress of) science. There is concern that greater transparency unveils the messy complexity and conflicting evidence of our science, and that it enables science deniers and other malevolent critics in their efforts to mislead the public. To this I say, “fools believe and liars lie,” regardless of truth or access. In my admittedly optimistic view, earnestly open presentation wins confidence in the long run. For example, scientists who concede failures, explore reasons for failure, or are transparent in publishing failures (as opposed to denying their validity, hiding them, or not acting) are perceived as more able and ethical (Ebersole, Axt, & Nosek, 2016). Indeed, scientists overestimate the negative consequences of failed replications and transparent reporting (Fetterman & Sassenberg, 2015).

Conclusion

The present crisis is not entirely new, but it has critical differences. If we can use common research practices to find the impossible, where does that leave our science? I venture that these koans may move us to embrace our science not as history entirely (Gergen, 1973), but perhaps as evidence-based history. So too, in the style of Rozin (2001), may we start to embrace the exploratory and narrative nature of our present science. Perhaps then, we will again find our confidence.


 


come say hi at SPSP – coffee, sugar, and climate change – open and reproducible science

This week, at the Society for Personality and Social Psychology’s annual convention, I’m speaking in a symposium, “Rethinking Health Behavior Change.” I will talk about a study in which we* tested strategies to help people reduce the amount of sweetener added to their daily coffee (ideally without reducing enjoyment of it**). I’m also presenting a poster on how people talk to others about making behavioral changes that affect the environment.

One thing that excites me about these studies–they represent my first (admittedly clumsy) attempts at being completely reproducible and open with my science. Datasets, R analysis scripts, hypotheses, and all other study materials are publicly available***, and were preregistered****.

Openness and reproducibility in science fascinate me—both as a topic of research and as a guiding principle for my own research. Since starting graduate school, I have preregistered (nearly) all of my studies and have been working toward making the entire process transparent. I’ve also been learning how to write reproducible code in R. It has been challenging… you know, for the obvious reasons… misaligned incentives, human fallibility, complexity, and time. BUT, I’ve learned a lot (i think*****), and it has made me a better scientist (i think******). If nothing else, I can now make these cool graphs (below) for conference talks (and next time I won’t have to spend way too much time trying to make them look pretty*******).

Psych friends, come say hi at SPSP. Here’s the time and location for my talk and poster (and related scripts and files, here and here). Or, let’s just get a drink.

*me, Traci Mann, and Tim (our coffee connoisseur collaborator).

**that’s the hard part… sugar is yummy.

*** public project pages for the “coffee study” and “social message framing study” (the one about climate change).

****an uneditable public archive of the study plan that is time-stamped prior to collecting (or looking at) data.

*****i welcome feedback and comments (particularly on my R code). let me know if you find errors or have suggestions for improvement.

******hard to test empirically. though I’m pretty darn sure reproducibility and openness make Science better.

*******the beauty of reproducible code.

Sneak peek at SPSP presentation figures.

[figure: coffee study plot]

^Here’s the code (viewable in any web browser).

[figure]

^Here’s the code.

p.s. HT to Simine Vazire whose blog inspired the above footnote style. #usefulbloghack.

Can Theory Change What it is a Theory About?

In Beyond Freedom and Dignity, B.F. Skinner writes, “no theory changes what it is a theory about; man remains what he has always been.” By this Skinner means that the underlying rules or processes that guide human behavior are constant, and that knowledge of these processes does not change their nature. However, throughout the social psychological literature we see suggestions of just the opposite—knowledge of a psychological process can change the psychological process. For example, Schmader (2010) provides evidence that simply teaching people about stereotype threat may “inoculate them against its effects.” The theory of social identity threat postulates that people are sensitive to contexts that threaten their identity, and that when such a situation is detected, people engage in ruminative conflict that can distract them enough to undermine their performance in that setting. Schmader is claiming that giving people knowledge of the psychological processes predicted by theory changes the processes that unfold.

This point raises several important questions: What is a psychological theory? Does psychological theory describe stable processes in the Skinnerian sense? Can we think of psychological theory in the same way that we think about theories of, say, physics or biology? If we believe theory must have some element of stability (e.g., if we believe light traveled at the same speed in the middle ages as it does today), and that the processes theories describe exist outside of, and independent from, our knowledge of them (e.g., the phenomena described by special and general relativity existed before Einstein identified them, and his discovery did not change their quality), then can we classify social psychological theories as theories? My sense is no. Or maybe we need to modify our definition of what qualifies as a theory. Or perhaps we must modify our assumptions that the processes underlying phenomena are stable and that observation is independent of those processes.

References

Schmader, T. (2010). Stereotype Threat Deconstructed. Current Directions in Psychological Science, 19, 14–18. doi:10.1177/0963721409359292

 

 

Temporal Self-Regulation Theory: Why we keep trying (and failing) to go for that early morning run.

Last night, in a burst of optimism, I set my alarm for 5:30 AM. I thought I would sneak in an early morning run around the neighborhood before work. But as bells rang at that ungodly hour, I cracked an eye to a dark, cold room and groped for the snooze button. Ten minutes later, with a slight increase in clarity, I delayed once more (“today, sleep is more important”)…snooze again. As you might have guessed, I didn’t wake up in time to run.

We’ve all had a similar experience. Our intentions to engage in healthy behavior too often fail to come to fruition when it’s time to act. Yet we also intuit that our intentions are somehow linked to our behavior.

Most of the prevailing theoretical models of health behavior, such as the Theory of Planned Behavior (Ajzen & Madden, 1986), posit that intentions, in combination with a number of other factors such as behavioral beliefs, can predict the likelihood of behavior. These theories do predict behavior reasonably well (see Godin & Kok, 1996), but they fail to explain why large increases in intention lead to only small changes in behavior (see the review by Webb & Sheeran, 2006). In this way, these theories fall short of fully explaining health behavior.

Hall and Fong (2007) developed Temporal Self-Regulation Theory to help explain why, when it comes to health-related actions, the intention–behavior link may break down. They postulate that our intentions sometimes fail to lead to behavior because,

[many health behaviors are] associated with a characteristic set of contingencies whose valence changes dramatically depending on the temporal frame.

I’ve added emphasis to the quote to help break it down. In general, when psychologists talk about behavioral “contingencies,” they are referring to if–then conditions that create the potential for the occurrence of certain behavior and its consequences. Using the running example above, one behavioral contingency could be stated, “if I run in the morning, then I might be healthier when I’m older.” The “valence” of this contingency is positive—who doesn’t want to live a long and healthful life? “Temporal frame” refers to the very human capacity to think not only in the present moment or short term, but also to weigh the long-term consequences of our actions. Our example contingency has a long-term orientation. The authors contend that the valence of the contingency changes with temporal frame. Say I am thinking in the short term; the behavioral contingency could then be stated, “if I run in the morning, then I might be tired for the rest of the day.” This is, of course, negative in valence. So the theory predicts that I will be more likely to form an intention to run in the morning if I’m focused on the long term as opposed to the short term. This helps explain why it’s so hard to engage in health-protective behaviors (such as running) and disengage from health-risk behaviors (such as smoking). It is hard to delay gratification, and most health-risk behaviors are satisfying in the short term and unsatisfying in the long term, while most protective behaviors are predominantly unsatisfying in the short term and satisfying in the long term.

So, back to why my intention to run in the morning failed to lead to running after the alarm went off.

Last night when I set my alarm for 5:30 AM, I was thinking about my long-term health: “I’ll look and feel so good in my summer swimsuit after working out” or “I’ll be less prone to disease when I’m older.” Further, the immediate costs of setting the alarm were low—I only had to click a few buttons. In contrast, while reaching for the snooze button, the costs of running were more immediate and the short-term consequences were salient: “I’m tired now, and I’ll be too sleepy to be productive today if I run.”

The tables and figures from Hall and Fong (2007) below demonstrate how protective and risky health behaviors have opposite contingency valences with respect to time orientation. As depicted in Table 1, participants in this study estimated the point in time at which they would notice the benefit/cost of health-protective behaviors (e.g., exercise and dieting) and health-risk behaviors (e.g., smoking and drinking).

[Table 1: temporal proximity measure, from Hall and Fong (2007)]

Sticking with our morning run example, Figure 1 below demonstrates that people don’t notice the cost of running when thinking about rising at the crack of dawn for a run (question #1) or when deciding to run by setting the alarm an hour early (question #2). We start to feel the cost when the alarm goes off and we have to get out of bed and dress (question #3). The perceived cost continues to grow as we run and after we’ve successfully run once (questions #4 and #5). We start to feel the cost less once we’ve made this morning run a regular routine for a week (question #6). As we continue to engage in our morning run routine, the perceived cost continues to decrease, completely disappearing after several years (question #9).

Now, what about the benefit of running early in the morning? Figure 1 indicates that we don’t feel the benefit of our run until we’ve done it regularly for a week (question #6), at which point the benefits grow exponentially for a year (question #8) and then decrease toward zero as we approach a decade (question #10).

These results provide evidence that the perceived benefit of running occurs well after the initial behavior, while the perceived cost is felt just before, during, and shortly after the behavior initiates.

So when we are making the decision to set the alarm early for tomorrow’s run, the costs are low and abstract, and we focus on the long term. When the alarm goes off and we are engaging in the behavior, the costs are high and concrete, and we focus on the short term.

Before looking at Figure 1 below, notice that numbers 0 through 9 on the x-axis correspond to questions 1 through 10 in Table 1 pictured above. This is because academics like to make things more complicated than they need to be :).

[Figure 1: perceived costs and benefits of a morning run over time]

Figure 2 shows that the same trend holds for another health-protective behavior (dieting).

[Figure 2: perceived costs and benefits of dieting over time]

As expected, the authors found the opposite result for health-risk behaviors—costs come after engaging in the behavior and benefits occur before/during; see Figures 3 and 4 below.

[Figure 3] [Figure 4]

So how does Temporal Self-Regulation Theory help me run in the morning? It suggests that one thing that might help is to minimize the short-term costs and maximize the short-term benefits. This can be hard, but it may be as simple as rewarding yourself with a favorite breakfast when you complete the morning run.

Obviously, perceived temporal proximity with regard to behavior is only part of the picture. The authors introduce a working model (below) to illustrate Temporal Self-Regulation Theory more fully, which I’ve enhanced with definitions of each component. The model introduces two factors, behavioral prepotency and self-regulatory capacity, that (1) influence (or moderate) the link between intentions and behavior, and (2) directly influence behavior in the absence of intentions. Health behaviors are complex, and theories require continuous testing and refinement, but Temporal Self-Regulation Theory adds an interesting new component to existing theories that is surely worth further consideration and testing.

[Figure: enhanced schematic representation of Temporal Self-Regulation Theory]

References

Ajzen, I., & Madden, T. J. (1986). Prediction of goal-directed behavior: Attitudes, intentions, and perceived behavioral control. Journal of Experimental Social Psychology, 22, 453-474.

Godin, G., & Kok, G. (1996). The theory of planned behavior: A review of its applications to health-related behaviors. American Journal of Health Promotion, 11, 87-98.

Hall, P. A., & Fong, G. T. (2007). Temporal self-regulation theory: A model for individual health behavior. Health Psychology Review, 1, 6–52. doi:10.1080/17437190701492437

Webb, T. L., & Sheeran, P. (2006). Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychological Bulletin, 132, 249-268. doi: 10.1037/0033-2909.132.2.249

 


Quantified-Self, Experience Sampling, and Ecological Momentary Assessment Tech for Behavioral Science

Lately I’ve been dabbling in self-quantification, exploring various tools and procedures to better track and understand myself. There is a vibrant community of quantified-selfers actively participating in online forums, local meet-ups, and international conferences. There are mountains of data and code self-experimenters share publicly and a growing number of tools available to assist quantifiers.

I have only scratched the surface in my exploration. I have a self-experiment on coffee and cognitive skills underway (see here and here, results soon), and I’ve kept close tabs on my running for 6 months now. I recently started tracking my mood, hydration, and working habits (among other things) with the Reporter app. Reporter is a mobile app from Nicholas Felton and friends, based on the annual Feltron Report. The Feltron Report is part diary, part data visualization, part statistical report on the day-to-day life of its author. It covers the mundane to the fantastical to the pragmatic. If you like radio (I do), 99percentinvisible covers this aesthetic-data-nerd’s report with beautifully engineered sound candy. The Reporter app allows you to customize questions so you can track whatever you find interesting, and it has built-in ecological momentary assessment, or experience sampling, a scientific procedure for collecting information on human behavior, emotion, etc. in real time. You have some control over the schedule of data collection: you can set the number of times throughout the day you want the app to ping you with questions, and you can answer the same set or a separate set of questions when waking in the morning or going to bed in the evening.
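As a toy illustration of this kind of signal-contingent scheduling, here is a short R sketch that draws random ping times within waking hours (my own sketch, not Reporter’s actual algorithm):

```r
# Generate k random "ping" times per day between waking and sleeping,
# in the spirit of signal-contingent experience sampling.
set.seed(1)

ping_schedule <- function(k = 4, wake = "08:00", sleep = "22:00") {
  to_min <- function(hhmm) {            # "HH:MM" -> minutes since midnight
    p <- as.integer(strsplit(hhmm, ":")[[1]])
    p[1] * 60 + p[2]
  }
  mins <- sort(sample(to_min(wake):to_min(sleep), k))  # k random moments
  sprintf("%02d:%02d", mins %/% 60, mins %% 60)        # back to "HH:MM"
}

ping_schedule()
# e.g., "09:12" "13:47" "16:30" "21:05" -- fresh random prompt times each
# day, at which the app would ask its mood/hydration/work questions.
```

Randomizing the prompt times (rather than pinging on a fixed schedule) is what keeps momentary reports from being anchored to predictable moments of the day.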

Experience sampling and ecological momentary assessment are not new methodologies. They’ve been used by social and behavioral scientists for decades, but the technology is changing, which has expanded what is possible. Tamlin Conner at the University of Otago describes the new possibilities for experience sampling and ecological momentary assessment research in the Handbook of Research Methods for Studying Daily Life. The chapter is a good read on the conceptual and methodological reasons why more behavioral scientists should explore this area.

Does physical activity promote emotional well-being? Do people eat differently when away from home, or when others are around?… How is behavior affected by the physical settings in which we live, work, and play? Methods for studying daily life experiences have arrived, fueled by questions of this sort and new technologies… Daily life experience methods are familiar, albeit not yet standard, tools in several literatures (e.g., medicine and health, emotion, social and family interaction). In the National Institutes of Health’s Healthy People 2020 initiative, Bachrach (2010) highlighted these methods among the “tools that can revolutionize the behavioral and social sciences,” notwithstanding the fact that “researchers are still in the earliest stages of tapping into [their] vast potential.”… Moreover, new technologies… promise to increase dramatically the scope and accessibility of these methods. In short, there is every reason to expect that daily life research methods will become more influential in the near future.

I will continue with the self-quantification and start sharing some of my findings, but my next step is to explore how methods and tools from the quantified-self world, such as experience sampling and ecological momentary assessment, can be used in behavioral and psychological research. PACO is one tool that has piqued my interest. It allows the user to design an experience-sampling experiment, and then administer and distribute the experiment to a population via email. PACO comes from Bob Evans, a Google employee, and while it is still in beta, I think it has a lot of potential. Also, it doesn’t hurt that it is free and open source.

There are a number of other companies and apps emerging in this realm. With new technologies come new possibilities for research (and probably money to be made for those who can develop technologies that enhance the research capabilities of behavioral and social scientists). Tamlin was kind enough to document and share a list of tools on the market (see this link: Conner, T. S. (2013, Nov). Experience sampling and ecological momentary assessment with mobile phones. Retrieved from http://www.otago.ac.nz/psychology/otago047475.pdf).

Links That Tickled Me

  1. Call for openness to replication in priming research.
  2. What did Malcolm Gladwell actually say about the 10,000-hour rule?
  3. Daniel Kahneman’s letter to behavior priming scientists.
  4. Brushing your mind. Strange concept. What is it?
  5. Social Psychology’s last mile problem. SPARQ’s online database of social psych interventions.
  6. FiveThirtyEight chimes in on e-cigarettes.