What Do We Want Our Scientific Discourse to Look Like?

I was recently quoted in an article appearing in the Observer, a publication of the Association for Psychological Science. In the article, Alison Ledgerwood quotes a diverse set of voices in psychology on the topic of scientific discourse, in part in response to Susan Fiske’s piece in the Observer. Fiske takes issue with methodological critics of psychological science (whom she referred to as “methodological terrorists” in an earlier draft circulated online). Her article prompted many responses (see here) and a call, led by Ledgerwood, to write a more diverse (and less status-driven) article for the Observer on the topic. True to form, Alison quoted my writing fairly and elegantly brought together many other contributions.

Here, I provide my small contribution in its entirety.

We would serve each other, and science as a whole, better if we treated critique and communication of science as an open and humble process of discovery and improvement. To this end, I would like to see our scientific discourse focus more on methodology and evidence. This is easier said than done. Criticisms of the science are often construed as criticisms of the scientist. Even when we, as scientists, appreciate the criticism and recognize its scientific value, it still evokes concerns that others will lose trust in us and in our research. It is no wonder people are distressed by methodological criticism. However, focusing our discourse on methodology and evidence, with more awareness of how tone and context influence others’ perceptions of the scientist whose work is under the microscope, will help ensure healthy development of our science. Second, I would like to see an increase in open and humble scientific discourse. Openness may make our mistakes and shortcomings more apparent, and it may make it easier for others to critique our work, but it will surely improve our science. If we simultaneously place more value on humble communication, I expect criticisms will feel less personal and be easier to swallow as well. Finally, as a graduate student, I feel vulnerable publicly stating my thoughts on criticism and openness in science, which speaks to the climate of our discourse. It is essential that we have a communication environment in which graduate students, post-docs, and junior faculty from all backgrounds are rewarded for humbly and openly presenting methodologically sound ideas, research, and criticisms.

Impression Management and Open Science

I love this Charles H. Cooley (1902, p. 320) quote on how self-presentational concerns have institutional and professional forms (including in science, gasp!):

If we never tried to seem a little better than we are, how could we improve or “train ourselves from the outside inward?” And the same impulse to show the world a better or idealized aspect of ourselves finds an organized expression in the various professions and classes, each of which has to some extent a cant or pose, which its members assume unconsciously, for the most part, but which has the effect of a conspiracy to work upon the credulity of the rest of the world. There is a cant not only of theology and of philanthropy, but also of law, medicine, teaching, even of science—perhaps especially of science, just now, since the more a particular kind of merit is recognized and admired, the more it is likely to be assumed by the unworthy.

The unveiling of fraudulent research among highly acclaimed scientists, along with the advent of new computing and archiving technologies, has driven a recent (depending on how you measure it) push from within the scientific community for more “open” practices. The debate around open science and the reluctance to adopt its practices are rarely discussed in terms of interpersonal processes. However, discussions of open science are discussions about the presentation of scientific research to other scientists and the public. I think the relevance of impression management processes to calls for more openness in science is an area worth exploring in more detail. I’d like to write more on this; please post in the comments if you know of anyone who has written on this topic.

References

Cooley, C. H. (1902). Human nature and the social order. New York, NY: C. Scribner’s Sons.

What I’m looking for in a graduate advisor. And why it’s good for science.

Applying to doctoral programs is an arduous, time-consuming, and often ambiguous process. Testing, essay writing, and networking aside, it’s hard to identify the right program/person to spend the next 5-7 years with. I spent countless hours reading up on promising schools. And once I selected the faculty members conducting research that piqued my interest and matched my background, I needed to find out if they were even considering students. University websites are often out of date, and some faculty are unclear about their future plans and expectations. So I, and many other bright-eyed students, send thoughtful emails and cross our fingers for a reply:

“Dear Dr. So-And-So, I LOVE your work on [blank], and it is related to research I have done on [blank] in Dr. [Blank’s] lab. Are you taking students next year?”

A little more transparency would not only save everyone a lot of time and anxiety, it could shape the future of a science. Professors must receive dozens of these emails every fall. And I imagine many students ask about outdated research agendas based on outdated websites. It’s understandable that professors often fail to post updates on their activities or respond to these emails: their time is being pulled in many directions, and expectations keep growing. But, as Ben A. Barres (2013) argues, strong student-advisor relationships are integral to the continued success and innovation of a scientific field. Too often students never receive a clear answer, and so they spend a lot of time and money applying to programs completely in the dark. Prospective students pray that their faculty member of choice is actually considering students for the next year, still working on the topics listed online, and not going on sabbatical or reducing the size of their lab.

A breath of fresh air: I came across several professors and programs that posted clear information on what they expected from doctoral applicants. Here is the best example: a short blog post that saves prospective students the time and money of applying to a non-match, and saves professors and administrators the time of sifting through emails and applications not directed toward their current and future research agenda. The post made it immediately apparent whether or not this faculty member was a good match for me.

So in the interest of transparency, time-saving, and sustained success of the field, here’s what I am looking for in a graduate advisor. 

Note: I used Barres (2013) as a guide and focused on my field of interest, psychology.

Are they a good scientist?

  1. How many publications do they have? How many are recent? How many are in my area of interest?
  2. What is the impact of their publications on the field (h-index)? A short sketch of how this metric is computed follows this list.
  3. Are they publishing original research (not just reviews) in top journals (i.e., are they innovators in their field)?
  4. Has their lab or center recently secured major grant funding (e.g., from NIH, NSF, or NIMH)?
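
For readers unfamiliar with the h-index mentioned in item 2, here is a minimal sketch (my own illustration, with made-up citation counts) of how the metric is computed: a researcher has an h-index of h if h of their papers have each been cited at least h times.

```python
# Minimal sketch of the h-index: the largest h such that the researcher
# has h papers each cited at least h times. Citation counts are invented.
def h_index(citation_counts):
    """Return the h-index for a list of per-paper citation counts."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank   # this paper still clears the threshold
        else:
            break      # counts are sorted, so no later paper can
    return h

# Hypothetical example: six papers with these citation counts -> h-index of 3.
print(h_index([25, 8, 5, 3, 3, 0]))  # prints 3
```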

Are they a good mentor?

  1. Get in touch with your prospective mentor’s current students and ask them questions about their mentor. Make sure you are in a space where they can answer honestly.
  2. Do they spend time with students discussing science? Good mentors spend time with their students designing good experiments, interpreting/analyzing data, writing research papers and grants, reviewing papers for journals, and practicing talks for conferences.
  3. Do they encourage students to engage in activities (even ones outside of their research interests) that are good for the student’s training, such as TAing, attending conferences, and taking summer courses or workshops?
  4. Is there room to develop your own ideas or are you a slave to faculty research?
  5. Are they aloof, a micromanager, or somewhere in between?
  6. Is there a team spirit in the lab/center, where people collaborate effectively and are not pitted against each other in a fight for attention, resources, or scholarly success?
  7. Are lab meetings group discussions in which everyone contributes their thoughts and ideas, or are they primarily a time when the faculty member lectures or dictates to presenters what they should do next?
  8. What is the Postdoc to PhD student ratio in the lab/center? A high ratio might be an indication that your prospective mentor doesn’t see mentorship as a priority.
  9. How big is the lab/center? If it is relatively large it might indicate that your prospective mentor doesn’t have the time to give you individual attention.
  10. How many joint publications and first-author papers do their current students have?
  11. Ask for their CV if it is not available online.
  12. Ask for a list of the faculty’s former students. Find out what these students are doing today. Are they still in research? How successful are they? Are their achievements something you aspire toward?

Are their research interests similar to mine?

One point about this final question is worth noting before diving in. I will quote Barres (2013) directly because he puts it so well:

“An advisor should not be selected solely because he or she is the one researcher at your university that happens to work on the precise focused topic that you think you are most interested in. […] In my experience, this is exactly what nearly every graduate student does! Keep in mind that if you like solving puzzles, as all scientists do, there will be many different puzzles that you will find equally rewarding to work on. […] Begin your search for an advisor by casting as broad of a net as possible.”

Ok, now I’ll throw my broad net:

  1. Do they conduct experiments or studies that explore the etiology of health behavior, disease, or illness?
  2. Are they interested in development or evaluation of real-world health interventions or programs?
  3. Do they use or have an interest in developing research or interventions that use mobile or internet-based technologies?
  4. Do they employ diverse methodologies? Do they collaborate across the disciplines of psychology, public health, sociology, or economics?
  5. Are they interested in one or more of the following topics: Health Behavior Change, Theory-Driven Psychology Interventions, Health Promotion, Disease Prevention, Emotion Regulation, Health Message Framing, Obesity, Exercise, Nutrition, Built Environment, Decision-Making, Mindfulness, Adverse Events or Trauma, Stress, Psychophysiology, Methodology, Technology for Health Research, Vulnerable or High-Need Populations?

I hope this post provides some useful suggestions for students applying to graduate programs. Please feel free to add ideas in the comments. I also hope this post underlines the importance of transparency and openness in science. With the advent of internet-based technologies, a move toward clarity, the free flow of information, and open communication will help science continue to flourish in the 21st century. And it might ease the migraine-inducing matchmaking process for students and faculty alike.

References:

Barres, B. A. (2013). How to pick a graduate advisor. Neuron, 80(2), 275-279. doi:10.1016/j.neuron.2013.10.005

My Science Is Harder Than Your Science…Bla, Bla, Bla.

There are several false assumptions about neuroscience, and science in general, that pervade discourse in the media and among scientists, and I believe they are largely driven by artificial distinctions drawn between the “hard” and “soft” sciences. I recently came across an article on the fusion of architecture and neuroscience that serves as just one example of a broader obsession and delusion about anything prefixed with “neuro”. The premise is that this relatively new field is exciting because it provides an objective “window into the mind” that can better inform technologies and the hard sciences than softer sciences like psychology, economics, or sociology can.

This particular article examines how knowledge of the mind can improve architectural design. It asks questions like: Can neuroarchitecture foster scientific discovery, or improve the development of social skills among autistic children, through clever manipulation of aesthetics and physical design? While I agree that neuroscience can inform many fields, including architecture, I object to a tone that is all too common in discourse on neuro[fill in the blank]: that neuroscience is a blessing because it is the first science of the mind objective enough to be fused with the other hard sciences.

Here is a sample from the article, which quotes Eduardo Macagno, professor of biological sciences at the University of California, San Diego:

“We are now really beginning to understand better how to measure the responses to the built environment without relying on psychology, social science, observational behavior. [Those studies] don’t have the quantitative and objective experimental approach that we believe neuroscience brings to the interface with architecture.”

This is a fundamental misunderstanding of social science, driven by many things, but language is probably what throws people off the most. Macagno is confusing the tools of research with the method. Sciences that use new and exciting tools cloaked in complex technical language are often considered more objective, despite the fact that they use the same (or less rigorous) research methods as sciences whose tools are more easily understood in plain English.

One tool used in neuroscience is fMRI, which measures changes in blood flow to different areas of the brain. By measuring relative increases in blood flow to certain regions, scientists can develop insights into brain function. While this is a powerful tool, accurate interpretation of results requires advanced training in technical language, physiology, methodology, and statistics. Cloaked in complex language, fMRI studies are often misread by people outside the field, who fail to recognize that such studies are usually correlational, that relative increases in blood flow are only associated with increased neural activity, and that blood flow lags behind neural events in the brain by about 2-6 seconds, which makes it difficult to pinpoint the connection between a stimulus or behavior and its associated brain region. That being said, many similar methodological limitations are faced by the softer psychological sciences and even by harder sciences like physics.
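
To make that lag concrete, here is a minimal sketch (my own illustration, not drawn from any particular study) that simulates a one-second burst of neural activity and convolves it with a canonical double-gamma hemodynamic response function; the simulated BOLD peak shows up several seconds after the event itself.

```python
# A minimal sketch illustrating why BOLD fMRI signals lag neural events:
# neural activity is smeared through a slow hemodynamic response, so the
# measured peak arrives seconds after the underlying event.
import numpy as np
from math import gamma

dt = 0.1                       # time step in seconds
t = np.arange(0, 30, dt)       # 30-second window

# Canonical double-gamma hemodynamic response function (SPM-style shape).
hrf = (t**5 * np.exp(-t) / gamma(6)) - (1 / 6) * (t**15 * np.exp(-t) / gamma(16))
hrf /= hrf.sum()

# A brief burst of "neural activity": 1 second long, starting at t = 2 s.
neural = np.zeros_like(t)
neural[(t >= 2) & (t < 3)] = 1.0

# The simulated BOLD signal is the convolution of neural activity with the HRF.
bold = np.convolve(neural, hrf)[: len(t)]

print("neural activity starts at: 2.0 s")
print(f"simulated BOLD peak at:    {t[np.argmax(bold)]:.1f} s")
# The peak typically lands several seconds after the neural event, illustrating the lag.
```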

It is frustrating to see scientists speaking in such absolutes about the quality of research going on in one field versus another. It points to a lack of homework on methodology and to snap judgments based on familiarity with language. Generally, the hard/soft distinction in science is not about rigor of methodology; it is more a distinction between the inaccessible and the colloquial language used to explain the tools of the trade. Of course, the variability of the object of study might have something to do with it as well. But that is for another post.

[Featured photo courtesy of Royal Anthropological Institute’s Education Outreach Programme]

Lisa Wade on Academic Blogging

The Office Hours Podcast recently featured Professor Lisa Wade of Sociological Images. It is worth a listen, though I would like to hear more of her views on the recognition (or lack thereof) she receives from her school and the academic community for her blogging efforts. I was disappointed to hear that she didn’t mention the blog in her application for tenure. Why shouldn’t there be some metric for new media that serves the college community? Blogging brings mostly good PR to the school and serves as a valuable teaching tool for colleagues. Perhaps it is fair to classify it as a hobby and section it off from paid scholarly work, but it is time-consuming and provides clear benefits to the school and the field. A more nuanced follow-up conversation on this topic would be interesting.

Is the HPV Vaccine Effective? Part 1

In 2010 I conducted an analysis of internet, popular, and scholarly sources regarding the efficacy and effectiveness of the HPV vaccine. This is the first post in a series that will share my findings and attempt to bring the results up to date.

In June of 2006 the United States licensed the use of a human papillomavirus (HPV) vaccine, and in 2007 the Centers for Disease Control and Prevention (CDC) outlined recommendations for implementation of Cervarix and Gardasil among 11- and 12-year-old girls, as well as catch-up measures for females 13-26 years of age. Since then, the CDC has expanded its recommendations to include Gardasil for 11- and 12-year-old boys (and catch-up vaccination for males 13-26 years old). The CDC recommends use of these vaccines between the ages of 9 and 26 years; however, use outside the target age range (11-12 years) should be under the close supervision of a doctor. As a quick aside, this blog series will focus on females, but some information regarding male treatment will be included sporadically.

There are hundreds of strains of HPV, many of which are associated with infections in the genital tract and various types of cancer. HPV is responsible for 99.7% of cervical cancers and 5% of all cancers (Moscicki, 2008). Worldwide there are 500,000 new cases of cervical cancer a year (approximately ten to eleven thousand in the United States), resulting in roughly 276,000 deaths annually (about four thousand in the U.S.; Pichichero, 2006; Moscicki, 2008). In the U.S. an estimated 1.5 million women currently have an HPV-associated disease (Moscicki, 2008). HPV types 16/18 have been identified as the cause of 70% of all cervical cancer cases (Pichichero, 2006), and HPV types 6/11 account for over 90% of genital warts (Kulasingam, 2007).

Since the commencement of widespread vaccination programs in the early 19th century, there has been skepticism and resistance among the public; however, beginning in the 1990s, a surge of anti-vaccination activity, particularly against childhood vaccination, has garnered extensive media attention and affected rates of vaccination among the general population (Wolfe & Sharp, 2002). Media, particularly Internet-based sources, are often the first and most frequently used sources of health information in the United States. Sexual implications coupled with the involvement of young adolescents make the HPV vaccine an inherently emotion-laden topic; as such, the vaccine has received considerable media attention (Kahan, Braman, Cohen, Gastil, & Slovic, 2010).

This blog series will analyze media and literary sources with regard to two questions:

  1. Is the HPV vaccine effective?
  2. Should it be administered or made mandatory among young adolescents in the United States?

The series will also examine research on public perception as it is relevant to the HPV vaccination. I hope it will be insightful for both you and me.

References

Kahan, D. M., Braman, D., Cohen, G. L., Gastil, J., & Slovic, P. (2010). Who fears the HPV vaccine, who doesn’t, and why? An experimental study of the mechanisms of cultural cognition. Law and Human Behavior, 34, 501-516.

Kulasingam, S. L. (2007). Implementation of an HPV vaccination program. Disease Management and Health Outcomes, 15, 141-149.

Moscicki, A. (2008). HPV vaccines: Today and in the future. Journal of Adolescent Health, 43, S26-S40.

Pichichero, M. E. (2006). Prevention of cervical cancer through vaccination of adolescents. Clinical Pediatrics, 45, 393-398.

Wolfe, R. M., & Sharp, L. K. (2002). Anti-vaccinationists past and present. BMJ, 325, 430-432.

Announcement: There is a Turning of the Tide in Psychology

A new blog from the Open Science Collaboration hit the web today with an inaugural post by Denny Borsboom. He discusses the turning of the tide toward openness in the psychological research community. Check it out.

In the wake of the Stapel case, the community of psychological scientists committed to openness, data-sharing, and methodological transparency quickly reached a critical mass. The Open Science Framework allows researchers to archive all of their research materials, including stimuli, analysis code, and data, to make them public by simply pressing a button.