What Do We Want our Scientific Discourse to Look Like?

I was recently quoted in an article appearing in the Observer, a publication of the Association for Psychological Science. In the article, Alison Ledgerwood quotes a diverse set of voices in psychology on the topic of scientific discourse, in part as a response to Susan Fiske’s piece in the Observer. Fiske takes issue with methodological critics of psychological science (whom she referred to as “methodological terrorists” in an earlier draft circulated online). Her article prompted many responses (see here) and a call, led by Ledgerwood, to write a more diverse (and less status-driven) article for the Observer on the topic. True to form, Alison quoted my writing fairly and elegantly brought together many other contributions.

Here, I provide my small contribution in its entirety.

We would serve each other, and science as a whole, better if we treated critique and communication of science as an open and humble process of discovery and improvement. To this end, I would first like to see our scientific discourse focus more on methodology and evidence. This is easier said than done. Criticisms of the science are often construed as criticisms of the scientist. Even when we, as scientists, appreciate the criticism and recognize its scientific value, it still evokes concerns that others will lose trust in us and in our research. It is no wonder people are distressed by methodological criticism. However, focusing our discourse on methodology and evidence, with more awareness of how tone and context influence others’ perceptions of the scientist whose work is under the microscope, will help ensure the healthy development of our science. Second, I would like to see an increase in open and humble scientific discourse. Openness may make our mistakes and shortcomings more apparent, and it may make it easier for others to critique our work, but it will surely improve our science. If we simultaneously place more value on humble communication, I expect criticisms will feel less personal and be easier to swallow as well. Finally, as a graduate student, I feel vulnerable publicly stating my thoughts on criticism and openness in science, which itself speaks to the climate of our discourse. It is essential that we have a communication environment in which graduate students, post-docs, and junior faculty from all backgrounds are rewarded for humbly and openly presenting methodologically sound ideas, research, and criticisms.

Meehl on theory testing: it never gets old.

The position of Popper and the neo-Popperians is that we do not “induce” scientific theories by some kind of straightforward upward seepage from the clearly observed facts, nor do we “confirm” theories as the Vienna positivists supposed. All we can do is to subject theories—including the wildest and “unsupported” armchair conjectures (for a Popperian, completely kosher)—to grave danger of refutation…

A theory is corroborated to the extent that we have subjected it to such risky tests; the more dangerous tests it has survived, the better corroborated it is. If I tell you that Meehl’s theory of climate predicts that it will rain sometime next April, and this turns out to be the case, you will not be much impressed with my “predictive success.” Nor will you be impressed if I predict more rain in April than in May, even showing three asterisks (for p < .001) in my t-test table! If I predict from my theory that it will rain on 7 of the 30 days of April, and it rains on exactly 7, you might perk up your ears a bit, but still you would be inclined to think of this as a “lucky coincidence.” But suppose that I specify which 7 days in April it will rain and ring the bell; then you will start getting seriously interested in Meehl’s meteorological conjectures. Finally, if I tell you that on April 4th it will rain 1.7 cm (.66 inches), and on April 9th, 2.3 cm (.90 inches) and so forth, and get seven of these correct within reasonable tolerance, you will begin to think that Meehl’s theory must have a lot going for it. You may believe that Meehl’s theory of the weather, like all theories, is, when taken literally, false, since probably all theories are false in the eyes of God, but you will at least say, to use Popper’s language, that it is beginning to look as if Meehl’s theory has considerable verisimilitude, that is, “truth-like-ness.”

Meehl, P. E. (1978). Theoretical risks and tabular asterisks: The slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806–834. doi:10.1037/0022-006X.46.4.806
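The ladder of riskiness in this passage is easy to put rough numbers on. Below is a back-of-the-envelope sketch in Python, under the loud assumption (mine, not Meehl’s) that rain on each April day is an independent coin flip with probability 7/30:

```python
from math import comb

# Illustrative assumption only: rain on each April day is an independent
# coin flip with probability 7/30. Meehl assumes no such thing; this just
# attaches rough numbers to his ladder of riskiness.
p, n = 7 / 30, 30

# "It will rain sometime in April" -- nearly certain by chance alone.
p_some_rain = 1 - (1 - p) ** n                   # ~0.9997

# "It will rain on exactly 7 of the 30 days."
p_exactly_7 = comb(n, 7) * p**7 * (1 - p) ** 23  # ~0.17

# "It will rain on these specific 7 days."
p_those_7 = p**7 * (1 - p) ** 23                 # ~8e-8

print(p_some_rain, p_exactly_7, p_those_7)
```

Each rung down the ladder shrinks the probability of succeeding by luck alone by orders of magnitude, which is exactly why surviving the riskier test buys the theory more corroboration.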

Where are the data on gun violence?

Much of the recent coverage of gun violence in this country points to a lack of data available on the topic. The absence of these data, or at least their inaccessibility, points to inherent prejudice. In an age when we collect data on nearly everything and use them daily to explain phenomena and change our world, it is telling that good data on gun violence are hard to find, particularly gun violence as it relates to race, sex, age, and mental health.

There are some projects working to remedy this. I’d like to see the Gun Violence Archive project expanded. The project started in 2014 as an offshoot of a crowdsourced initiative by Slate, which documented incidents of gun violence after Newtown. We need a tool on this website to visualize the data they collect. Maps of incidents that can be tabulated by different variables would help bring to light the normality of gun violence and the prevalence of racially charged incidents. In light of recent events, it is noteworthy that this project collects data on “officer involved shootings”. However, the project fails to capture officer-involved shootings of unarmed persons. Instead, the project counts the following categories under “officer involved shootings”:

  1. Officer shot
  2. Officer killed
  3. Perpetrator shot
  4. Perpetrator killed
  5. Perpetrator suicide at standoff

This is problematic because the method of collection presumes that someone shot or killed by an officer is a perpetrator (someone who has committed a crime). While the project has an “armed” category described in its glossary, it does not collect data on “unarmed” incidents. Further, race/ethnicity, age, sex, and mental health status are conspicuously absent from the project’s glossary. These data should be collected!
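To make the wish concrete, here is a minimal sketch, in Python with pandas, of the kind of tabulation such data would allow. The column names and records are entirely hypothetical; this is not the archive’s actual schema:

```python
import pandas as pd

# Hypothetical incident records. Column names and values are illustrative
# only, not the Gun Violence Archive's actual schema.
incidents = pd.DataFrame([
    {"category": "perpetrator shot", "armed": "unarmed", "race": "black", "sex": "M", "age": 18},
    {"category": "perpetrator shot", "armed": "armed",   "race": "white", "sex": "M", "age": 34},
    {"category": "officer shot",     "armed": "armed",   "race": "white", "sex": "M", "age": 41},
])

# Cross-tabulate officer-involved shootings by armed status and race --
# exactly the breakdown the current categories cannot support.
print(pd.crosstab([incidents["category"], incidents["armed"]], incidents["race"]))
```

Even this toy table makes the gap obvious: without an “unarmed” value and demographic fields, the question everyone is asking simply cannot be answered from the data.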

The data we collect and how we collect it tells us a lot about what we value.

We need to value data on gun violence with an eye toward race, sex, age, and mental health. We need to translate data into graphics and stories to help explain what the heck is going on. And we need to use data and story to inform how we change. Otherwise, I’m afraid outrage will fade, and the status quo will resume until the next everyday tragedy goes viral.

links that tickled me


Promote open science! Capture citations on articles you view

http://scinet.osf.io/citelet

With a simple extension or bookmarklet, you can help promote open science by capturing the citation information on articles you view. Another project worth sharing from the Open Science Framework.
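For a sense of what is happening under the hood: article pages typically embed citation metadata in Highwire-style <meta> tags, the same ones Google Scholar reads. Here is a rough Python sketch of capturing them; it illustrates the general idea only and is not Citelet’s actual implementation:

```python
import requests
from bs4 import BeautifulSoup

# Citation fields commonly embedded in article pages as <meta> tags.
FIELDS = {"citation_title", "citation_author", "citation_journal_title",
          "citation_doi", "citation_publication_date"}

def scrape_citation(url):
    """Collect Highwire-style citation metadata from an article page."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    citation = {}
    for tag in soup.find_all("meta"):
        name = tag.get("name")
        if name in FIELDS:
            citation.setdefault(name, []).append(tag.get("content"))
    return citation
```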

Day 1: Does Coffee Make You Smarter?

This post is part of a series; see the previous post.

As with most resolutions, starting was the hardest part. It has been a little over a week since I declared my resolution to undertake a daily self-experiment (publicly, mind you) on coffee. But not even the prospect of public humiliation could stop me from procrastinating: change is hard. Of course, each morning post-declaration I woke up with a different rationalization for not changing my behavior: “It’s still the holidays, I deserve a break.”; “I got up too late, I’ll do it tomorrow.”; “I’m twitching, I can’t wait, I need my coffee now!” But now, finally, the experiment is under way. I am using a tool developed by Stephen M. Kosslyn (a psychologist at Stanford) and his co-conspirators Yoni Donner and Nick Winter to facilitate this experiment. If you’re interested in self-experiments, the quantified self, or how to use data for self-improvement, I suggest you check it out.

So today I started with the new morning routine and documented the process with some notes.

Day 1: Test-Before-Coffee

Start time: Mon, 6 Jan 2014 08:45:45
End time: Mon, 6 Jan 2014 09:02:42

Sporadic Notes: I commenced just 5 minutes after waking up. I am a groggy morning person, so it will be interesting to see how this affects my results. I expect to see improvement throughout the testing period on test-before-coffee days, since it became evident that as I shook the sleepy dust from my eyes the tests got easier. I’ll check back on this later in the month. Several of the tests had rules that took me a few seconds to understand. I restarted these confused trials to ensure an accurate measurement. On one of the tests (“Design Copy”) I took a practice trial to make sure I understood the rules. The practice trial is recorded on the results page, so I assume it will be incorporated into the analysis.
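If practice effects are real, they will need to be separated from any coffee effect before a comparison means much. Here is a minimal sketch of the idea, with entirely made-up scores (this is not part of the tool’s own analysis):

```python
import numpy as np

# Made-up reaction-time scores (ms) from ten consecutive sessions.
sessions = np.arange(1, 11)
scores = np.array([512, 498, 495, 480, 476, 470, 468, 465, 466, 460])

# Fit a linear practice trend: score ~ slope * session + intercept.
slope, intercept = np.polyfit(sessions, scores, 1)
print(f"practice effect: {slope:.1f} ms per session")

# A coffee/no-coffee comparison should be run on the residuals, i.e.,
# the scores with the practice trend removed.
residuals = scores - (slope * sessions + intercept)
```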

Learnings: In the morning I need to read directions twice. Use a practice trial if unsure. Tapping the space bar as fast as you can is a good way to get your family out of bed.

Stay tuned for forthcoming notes and results.

My Science Is Harder Than Your Science… Blah, Blah, Blah.

There are several false assumptions, proliferating in the media and among scientists, about neuroscience and science in general that I believe are largely driven by the artificial distinction drawn between the “hard” and “soft” sciences. I recently came across an article on the fusion of architecture and neuroscience that serves as just one example of a broader obsession with, and delusion about, anything prefixed with “neuro”. The premise is that this relatively new field is exciting because it provides an objective “window into the mind” that can better inform technologies and the hard sciences than can softer sciences like psychology, economics, or sociology.

This particular article examines how knowledge of the mind can improve architectural design. It asks questions like: Can neuroarchitecture foster scientific discovery, or improve the development of social skills among autistic children, through clever manipulation of aesthetics and physical design? While I agree that neuroscience can inform many fields, including architecture, I object to a tone that is too common in discourse on neuro[fill in the blank]: that neuroscience is a blessing because it is the first science of the mind objective enough to be fused with the other hard sciences.

Here is a sample from the article, which quotes Eduardo Macagno, professor of biological sciences at the University of California, San Diego:

“We are now really beginning to understand better how to measure the responses to the built environment without relying on psychology, social science, observational behavior. [Those studies] don’t have the quantitative and objective experimental approach that we believe neuroscience brings to the interface with architecture.”

This is a fundamental misunderstanding of social science, driven by many things, but language is probably what throws people off the most. Macagno is confusing the tools of research with the method. Sciences that use new and exciting tools cloaked in complex technical language are often considered more objective, despite using the same (or less rigorous) research methods as sciences whose tools are more easily understood in plain English.

One tool used in neuroscience is fMRI, which measures changes in blood flow to different areas of the brain. By measuring relative increases in blood flow to certain regions of the brain, scientists can develop insights into brain function. While this is a powerful tool, accurate interpretation of results requires advanced training in technical language, physiology, methodology, and statistics. Cloaked in complex language, people outside the field often fail to recognize that fMRI studies are usually correlational, that relative increases in blood flow are only associated with increased neural activity, and that blood flow lags behind neural events in the brain by about 2-6 seconds, which makes it difficult to pinpoint the connection between a stimulus or behavior and its associated brain region. That said, many similar methodological limitations are faced by the softer psychological sciences and even by harder sciences like physics.
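The lag is easy to see in how fMRI analyses are commonly modeled: the measured BOLD signal is treated as neural activity convolved with a slow hemodynamic response function (HRF). Here is a short sketch using the popular double-gamma HRF shape; the parameter values follow widely used defaults and are purely illustrative:

```python
import numpy as np
from scipy.stats import gamma

t = np.arange(0, 30, 0.1)  # seconds

# A common "canonical" double-gamma HRF: a peak near 5 s minus a small,
# late undershoot. The shape parameters (6 and 16) and the 1/6 undershoot
# ratio follow widely used defaults; treat this as an illustration.
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
hrf /= hrf.max()

# A brief neural event at t = 0 ...
neural = np.zeros_like(t)
neural[0] = 1.0

# ... produces a BOLD response that peaks seconds later.
bold = np.convolve(neural, hrf)[:len(t)]
print(f"BOLD peak ~{t[bold.argmax()]:.1f} s after the neural event")
```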

It is frustrating to see scientists speak in such absolutes about the quality of research going on in one field versus another. It points to a lack of homework on methodology and to snap judgments based on familiarity with language. Generally, the hard/soft distinction in science is not about rigor of methodology; it is more a distinction between the inaccessible and the colloquial language used to explain the tools of the trade. Of course, the variability of the object of study might have something to do with it. But that is for another post.

[Featured photo courtesy of Royal Anthropological Institute’s Education Outreach Programme]