What Do We Want Our Scientific Discourse to Look Like?

I was recently quoted in an article appearing in the Observer, a publication of the Association for Psychological Science. In the article, Alison Ledgerwood quotes a diverse set of voices in psychology on the topic of scientific discourse, in part in response to Susan Fiske’s piece in the Observer. Fiske takes issue with methodological critics of psychological science (whom she referred to as “methodological terrorists” in an earlier draft circulated online). Her article prompted many responses (see here) and a call, led by Ledgerwood, to write a more diverse (and less status-driven) article for the Observer on the topic. True to form, Alison quoted my writing fairly and elegantly brought together many other contributions.

Here, I provide my small contribution in its entirety.

We would serve each other, and science as a whole, better if we treated critique and communication of science as an open and humble process of discovery and improvement. To this end, I would like to see our scientific discourse focus more on methodology and evidence. This is easier said than done. Criticisms of the science are often construed as criticisms of the scientist. Even when we, as scientists, appreciate the criticism and recognize its scientific value, it still evokes concerns that others will lose trust in us and in our research. It is no wonder people are distressed by methodological criticism. However, focusing our discourse on methodology and evidence, with more awareness of how tone and context influence others’ perceptions of the scientist whose work is under the microscope, will help ensure healthy development of our science. Second, I would like to see an increase in open and humble scientific discourse. Openness may make our mistakes and shortcomings more apparent, and it may make it easier for others to critique our work, but it will surely improve our science. If we simultaneously place more value on humble communication, I expect criticisms will feel less personal and be easier to swallow as well. Finally, as a graduate student, I feel vulnerable publicly stating my thoughts on criticism and openness in science, which speaks to the climate of our discourse. It is essential that we have a communication environment in which graduate students, post-docs, and junior faculty from all backgrounds are rewarded for humbly and openly presenting methodologically sound ideas, research, and criticisms.

Meehl on theory testing never gets old.

The position of Popper and the neo-Popperians is that we do not “induce” scientific theories by some kind of straightforward upward seepage from the clearly observed facts, nor do we “confirm” theories as the Vienna positivists supposed. All we can do is to subject theories—including the wildest and “unsupported” armchair conjectures (for a Popperian, completely kosher)—to grave danger of refutation…

A theory is corroborated to the extent that we have subjected it to such risky tests; the more dangerous tests it has survived, the better corroborated it is. If I tell you that Meehl’s theory of climate predicts that it will rain sometime next April, and this turns out to be the case, you will not be much impressed with my “predictive success.” Nor will you be impressed if I predict more rain in April than in May, even showing three asterisks (for p < .001) in my t-test table! If I predict from my theory that it will rain on 7 of the 30 days of April, and it rains on exactly 7, you might perk up your ears a bit, but still you would be inclined to think of this as a “lucky coincidence.” But suppose that I specify which 7 days in April it will rain and ring the bell; then you will start getting seriously interested in Meehl’s meteorological conjectures. Finally, if I tell you that on April 4th it will rain 1.7 inches (.66 cm), and on April 9th, 2.3 inches (.90 cm) and so forth, and get seven of these correct within reasonable tolerance, you will begin to think that Meehl’s theory must have a lot going for it. You may believe that Meehl’s theory of the weather, like all theories, is, when taken literally, false, since probably all theories are false in the eyes of God, but you will at least say, to use Popper’s language, that it is beginning to look as if Meehl’s theory has considerable verisimilitude, that is, “truth-like-ness.”

Meehl, P. E. (1978). Theoretical risks and tabular asterisks: The slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806–834. doi:10.1037/0022-006X.46.4.806
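To make concrete just how risky the “which 7 days” prediction is, here is a back-of-the-envelope calculation (mine, not Meehl’s): if any 7 of April’s 30 days were equally likely to be the rainy ones, the chance of naming the right set by luck alone is

```latex
\binom{30}{7} = \frac{30!}{7!\,23!} = 2{,}035{,}800,
\qquad
P(\text{correct by chance}) = \frac{1}{\binom{30}{7}} \approx 4.9 \times 10^{-7}.
```

Surviving a test that improbable by luck is what Popper means by corroboration through grave danger of refutation.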

Day 1: Does Coffee Make You Smarter?

This post is part of a series; see the previous post.

As with most resolutions, starting was the hardest part. It has been a little over a week since I declared my resolution to undertake a daily self-experiment (publicly, mind you) on coffee. But even the threat of public humiliation couldn’t stop me from procrastinating; change is hard. Of course, each morning post-declaration I woke up with a different rationalization for not changing my behavior: “It’s still the holidays, I deserve a break.”; “I got up too late, I’ll do it tomorrow.”; “I’m twitching, I can’t wait, I need my coffee now!” But now, finally, the experiment is under way. I am using a tool developed by Stephen M. Kosslyn (a psychologist at Stanford) and his co-conspirators Yoni Donner and Nick Winter to facilitate this experiment. If you are interested in self-experiments, the quantified self, or how to use data for self-improvement, I suggest you check it out.

So today I started with the new morning routine and documented the process with some notes.

Day 1: Test-Before-Coffee

Start time: Mon, 6 Jan 2014 08:45:45
End time: Mon, 6 Jan 2014 09:02:42

Sporadic Notes: I commenced just 5 minutes after waking up. I am a groggy morning person, so it will be interesting to see how this may affect my results. I expect to see improvement throughout the testing period on test-before-coffee days, since it became evident that as I shook the sleepy dust from my eyes the tests got easier. I’ll check back on this later in the month. Several of the tests had rules that took me a few seconds to understand. I restarted these confused trials to ensure an accurate measurement. On one of the tests (“Design Copy”) I took a practice trial to make sure I understood the rules. The practice trial is recorded on the results page, so I assume it will be incorporated into the analysis.

Learnings: In the morning I need to read directions twice. Use a practice trial if unsure. Tapping the space bar as fast as you can is a good way to get your family out of bed.

Stay tuned for forthcoming notes and results.

Does Coffee Make You Smarter?


Coffee has become my morning staple and a key to my productivity. After a good cup I feel fast, focused, witty, and smart. I am curious whether these feelings of augmented intellectual efficiency are real or just an illusion of a brain that just satisfied a fix. So I’m conducting a self-experiment. The protocol is simple. Every morning I will drink 8 ounces of my favorite liquid elixir between 7 and 9 AM. On day one I will take cognitive tests right before my cup. On day two I will drink my cup, wait 45 minutes, and then complete the tests. Other factors such as food intake and exercise will be tracked to examine correlates. This daily alternation will continue for about a month, at which point I will share my results.


[Cartoon courtesy Mark Anderson.]
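Once the month is up, the before/after comparison could be run with a short script. Here is a minimal sketch in Python, assuming a hypothetical CSV export named coffee_log.csv with one row per session and columns "condition" ("before" or "after") and "score" — these names are my own, not part of the testing tool:

```python
# A minimal sketch for comparing before-coffee vs. after-coffee scores.
# Assumes a hypothetical CSV export, coffee_log.csv, with one row per
# session and columns "condition" ("before" or "after") and "score".
import csv
import statistics

def load_scores(path):
    """Read per-session scores into {condition: [scores]}."""
    groups = {"before": [], "after": []}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            groups[row["condition"]].append(float(row["score"]))
    return groups

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.stdev(a) ** 2, statistics.stdev(b) ** 2
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(b) - statistics.mean(a)) / se

groups = load_scores("coffee_log.csv")
print(f"before-coffee mean: {statistics.mean(groups['before']):.2f}")
print(f"after-coffee mean:  {statistics.mean(groups['after']):.2f}")
print(f"Welch's t (after - before): {welch_t(groups['before'], groups['after']):.2f}")
```

This treats sessions as independent samples, which alternating days only roughly satisfy; it is a first pass, not a definitive analysis.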

Increasing statistical power in psychological research without increasing sample size

A quote to live by, from an article describing three simple ways to increase power in psychological research (without increasing sample size), and why it’s important.

Increasing statistical power is one of the rare times where what is good for science and what is good for your career actually coincide. It increases the accuracy and replicability of results, so it’s good for science. It also increases your likelihood of finding a statistically significant result (assuming the effect actually exists), making it more likely to get something published. You don’t need to torture your data with obsessive re-analysis until you get p < .05. Instead, put more thought into research design in order to maximize statistical power. Everyone wins, and you can use the time you used to spend sweating over p-values to do something more productive. Like volunteering with the Open Science Collaboration.

– Open Science Collaboration
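To see the point in numbers, here is a minimal simulation sketch (my own illustration, not from the quoted article): with the sample size held fixed, shrinking measurement error alone raises the proportion of simulated studies that reach p < .05, which is exactly what statistical power measures.

```python
# A minimal power simulation (my illustration, not from the quoted article):
# sample size is held fixed, and only the error SD changes.
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.stdev(a) ** 2, statistics.stdev(b) ** 2
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(b) - statistics.mean(a)) / se

def simulate_power(true_effect, error_sd, n=30, sims=2000):
    """Fraction of simulated two-group studies with |t| above ~2 (p < .05)."""
    hits = 0
    for _ in range(sims):
        control = [random.gauss(0.0, error_sd) for _ in range(n)]
        treated = [random.gauss(true_effect, error_sd) for _ in range(n)]
        if abs(welch_t(control, treated)) > 2.0:  # approx. critical t, df ~ 58
            hits += 1
    return hits / sims

random.seed(1)
for sd in (1.0, 0.8, 0.6):  # smaller error SD -> larger standardized effect
    print(f"error SD {sd}: estimated power ~ {simulate_power(0.5, sd):.2f}")
```

Roughly, power climbs from about .5 toward .9 as the error SD drops from 1.0 to 0.6, without adding a single participant.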