Category: General

(expired) Interdisciplinary Postdoc position in the Reach & Touch Lab (Bielefeld Biopsychology)

We are looking for a Postdoc to join the large-scale project ICSPACE (intelligent coaching space) in the excellence cluster Citec. The project brings together VR technology and computer graphics, language research, and psychology, with the aim of developing VR-based training. The Postdoc's role will be to develop, coordinate, and run studies that assess and evaluate the visual, auditory, and tactile interaction between the VR environment and its human users. The Postdoc will be integrated into both the Citec project and the Biopsychology group. The position is third-party funded and does not require teaching. It is initially for about one year, with an option for extension, and should start as soon as possible.

We will interview as soon as possible after the deadline (August 11, 2016).

The Reach & Touch Lab / Biopsychology group investigates sensorimotor and multisensory transformation and integration, with a focus on tactile processing and its relationship with movement (check out the rest of the website, as well as two more open Postdoc positions and a PhD position). Available research methods will be motion tracking, EEG, TMS, EMG, 3T fMRI, and a two-armed Kinarm. Besides the focus on touch and sensorimotor processing, the group will investigate developmental aspects of these topics in infants and children. At the University of Bielefeld, there are multiple opportunities for collaboration involving additional psychological/neuroscientific methods. There's a multitude of possibilities for research, and your ideas matter.

Please consult the official job advertisement in German or English.

Apply by August 11!

Not sure whether you should apply? Get in touch with your questions via Email (tobias.heed@uni-hamburg.de) or Twitter (@TobiasHeed) to arrange a phone call if you can’t reach me by phone directly.

(expired) 2 Postdoc positions in the Reach & Touch Lab (in the new Bielefeld-based Biopsychology group)

I am looking for 2 Postdocs to join my new lab in Bielefeld, the Biopsychology & Cognitive Neuroscience group of the Psychology Department.

I will be interviewing as soon as possible after the deadline (July 29, 2016).

The group investigates sensorimotor and multisensory transformation and integration, with a focus on tactile processing and its relationship with movement (check out the rest of the website, http://reachtouchlab.com). Available research methods will be motion tracking, EEG, TMS, EMG, 3T fMRI, and a two-armed Kinarm. Besides the focus on touch and sensorimotor processing, the group will investigate developmental aspects of these topics in infants and children. At the University of Bielefeld, there are multiple opportunities for collaboration involving additional psychological/neuroscientific methods. The positions are university-funded (3 years, with an additional 3 years possible) and include teaching. They should start in October 2016. There's a multitude of possibilities for research, and your ideas matter.

Apply by July 29!

Not sure whether you should apply? Get in touch with your questions via Email (tobias.heed@uni-hamburg.de) or Twitter (@TobiasHeed) to arrange a phone call if you can’t reach me by phone directly.

 

Postdoc positions

Note: this is not the official advertisement. Please find it here (in German). Applications in English are fine, and non-German applicants are welcome.

 

Job description

The jobholder will plan, execute, analyze, and publish research studies in the lab, using (some of) the above listed methods. This includes the organization and scientific administration of projects and potentially the co-supervision of PhD students. (75%)

The position includes teaching of 2 student courses per semester (4 “Lehrveranstaltungsstunden (LVS)”). (20%)

The jobholder is expected to take part in academic self-administration. (5%)

The development of an independent research focus, including the acquisition of third-party funding, is encouraged. It is possible to pursue a Habilitation in this position.

Job specification

Necessary qualifications, knowledge, and competences

  • PhD in Psychology, Cognitive Neuroscience, or a comparable degree in a related field
  • Practical experience with experiments in at least 2 of the above-mentioned scientific research methods, demonstrated through corresponding publications
  • Very good programming skills for experimental acquisition and analysis (e.g. Presentation, Python, Matlab, R), and the willingness to acquire further such skills if required
  • Very good statistical knowledge
  • Very good English skills and experience with publishing in English
  • Independent and thorough working style
  • The group will be practicing Open Science; it will be expected that the jobholder documents and publishes data and scripts.

Desirable qualifications, knowledge, and competences

  • Knowledge of Bayesian statistics
  • PhD topic in the area of sensorimotor processing or sensorimotor development
  • Experience with motion tracking or Kinarm
  • Teaching experience
  • Experience with supervising students (theses, student assistants, PhDs)

Please submit your application preferably as one single pdf file that contains all relevant content (motivation letter, CV, copies of relevant certificates, job reference letters if applicable). Please list two references from an academic background whom I may contact during the application process.

 

(expired) PhD position in the Reach & Touch Lab (in the new Bielefeld-based Biopsychology group)

Deadline extended until July 30!

I am looking for a PhD student to join my new lab in Bielefeld, the Biopsychology & Cognitive Neuroscience group in the Psychology Department.

I will be interviewing as soon as possible after the deadline (July 30, 2016).

The group investigates sensorimotor and multisensory transformation and integration, with a focus on tactile processing and its relationship with movement (check out the rest of the website). Available research methods will be motion tracking, EEG, TMS, EMG, 3T fMRI, and a two-armed Kinarm. Besides the focus on touch and sensorimotor processing, the group will investigate developmental aspects of these topics in infants and children. The PhD position is funded from my already running Emmy Noether project and will focus on basic research about touch, movement, and decision making, using EEG, motion tracking, modeling, and behavioral methods (there is some flexibility, so your own ideas matter).

The position can start in October 2016, but no later than January 2017. It is limited to 3 years, is third-party funded (65% position), and does not require teaching.

Apply by July 30!

Not sure whether you should apply? Get in touch with your questions via Email (tobias.heed@uni-hamburg.de) or Twitter (@TobiasHeed) to arrange a phone call if you can’t reach me by phone directly.

 

PhD Position

Note: this is not the complete, official advertisement. Please find it here in German, or consult the English translation.

Job description

Participation in the DFG-funded Emmy Noether project “Sensorimotor processing and reference frame transformations in the human brain” (see http://reachtouchlab.com)

Design, acquisition, analysis, and support in the publication of psychological/neuroscientific experiments (90%)
Participation in project coordination (10%)

The position aids scientific qualification: a PhD can be obtained.

Job specification

Formal qualification

a completed university degree such as MSc Psychology, MSc Cognitive Neuroscience or similar

Further qualifications, skills, and competences

The project uses behavioral parameters, motion tracking, and EEG. It is furthermore planned that results will be modeled, e.g. with diffusion/accumulator models.

The jobholder should
– have gained experience with one or several of these methods, e.g. in the MSc thesis project, as a student assistant, or in an internship
– have experience with experimental and/or analysis programming (e.g. Presentation, Python, Matlab, R)
– be prepared to massively extend these programming skills
– have good statistical knowledge
– be communicative and team-oriented
– work thoroughly and independently
– have good English skills

Please submit your application preferably as one single pdf file that contains all relevant content (motivation letter, CV, copies of relevant certificates, job reference letters if applicable). Please list two references from an academic background whom I may contact during the application process.

Looking for help: assembling a list of neuroscience methods intro papers

I've not yet found a satisfying neuroscience methods book to use as an introduction for students. I've therefore started a list of introductory papers. My goal is a complete list of good-to-read papers that introduce each method, show examples of its use in research, and discuss its weaknesses. Ideally, there will be a "very easy overview" paper, and then some additional, more in-depth papers for each method.

As of now, the list is far from perfect: for one, it is hopelessly incomplete, but I suspect this is more because of my ignorance of good papers than because of a lack of good papers. Besides, many of the papers here are too difficult for entry-level reading.

So, if you know any good methods papers that are suited for beginning students, please drop me a line via Email or on Twitter!

Hopefully, the list will also be useful for others. Missing links to papers will be inserted over time, and I'll indicate the difficulty of each paper.

 

general

Donoghue, J. P. (2008). Bridging the Brain to the World: A Perspective on Neural Interface Systems. Neuron, 60(3), 511–521. http://doi.org/10.1016/j.neuron.2008.10.037

Taub, E., Uswatte, G., & Elbert, T. (2002). New treatments in neurorehabilitation founded on basic research. Nature Reviews Neuroscience, 3(3), 228–236. http://doi.org/10.1038/nrn754

King, M., Dablander, F., Jakob, L., Agan, M., Huber, F., Haslbeck, J., & Brecht, K. (2016). Registered Reports for Student Research. Journal of European Psychology Students, 7(1). http://doi.org/10.5334/jeps.401

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. http://doi.org/10.1126/science.aac4716

Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. http://doi.org/10.1038/nrn3475

 

EEG/MEG

Otten, L. J., & Rugg, M. D. (2004). Interpreting event-related brain potentials. In T. C. Handy (Ed.), Event-Related Potentials: A Methods Handbook (pp. 3–16). Cambridge: MIT Press. Retrieved from http://discovery.ucl.ac.uk/185452/

Lopes da Silva, F. (2004). Functional localization of brain sources using EEG and/or MEG data: volume conductor and source models. Magnetic Resonance Imaging, 22(10), 1533–1538. http://doi.org/10.1016/j.mri.2004.10.010

Lehmann, D., & Skrandies, W. (1984). Spatial analysis of evoked potentials in man—a review. Progress in Neurobiology, 23(3), 227–250. http://doi.org/10.1016/0301-0082(84)90003-0

Rush, S., & Driscoll, D. A. (1968). Current distribution in the brain from surface electrodes. Anesthesia & Analgesia, 47(6), 717–723.

Baillet, S., Mosher, J. C., & Leahy, R. M. (2001). Electromagnetic brain mapping. Signal Processing Magazine, IEEE, 18(6), 14–30.

 

EMG

McNeil, C. J., Butler, J. E., Taylor, J. L., & Gandevia, S. C. (2013). Testing the excitability of human motoneurons. Frontiers in Human Neuroscience, 7, 152. http://doi.org/10.3389/fnhum.2013.00152

Zwarts, M. J., & Stegeman, D. F. (2003). Multichannel surface EMG: Basic aspects and clinical utility. Muscle & Nerve, 28(1), 1–17. http://doi.org/10.1002/mus.10358

 

fMRI

Heeger, D. J., Huk, A. C., Geisler, W. S., & Albrecht, D. G. (2000). Spikes versus BOLD: what does neuroimaging tell us about neuronal activity? Nature Neuroscience, 3(7), 631–633. http://doi.org/10.1038/76572

Logothetis, N. K., & Pfeuffer, J. (2004). On the nature of the BOLD fMRI contrast mechanism. Magnetic Resonance Imaging, 22(10), 1517–1531. http://doi.org/10.1016/j.mri.2004.10.018

Logothetis, N. K., & Wandell, B. A. (2004). Interpreting the BOLD Signal. Annual Review of Physiology, 66(1), 735–769. http://doi.org/10.1146/annurev.physiol.66.082602.092845

Orban, G. A., Van Essen, D., & Vanduffel, W. (2004). Comparative mapping of higher visual areas in monkeys and humans. Trends in Cognitive Sciences, 8(7), 315–324. http://doi.org/10.1016/j.tics.2004.05.009

Wandell, B. A., & Winawer, J. (2011). Imaging retinotopic maps in the human brain. Vision Research, 51, 718–737. http://doi.org/10.1016/j.visres.2010.08.004

O’Reilly, J. X., Woolrich, M. W., Behrens, T. E. J., Smith, S. M., & Johansen-Berg, H. (2012). Tools of the trade: psychophysiological interactions and functional connectivity. Social Cognitive and Affective Neuroscience, 7(5), 604–609. http://doi.org/10.1093/scan/nss055

 

TMS

Di Lazzaro, V., & Rothwell, J. C. (2014). Corticospinal activity evoked and modulated by non-invasive stimulation of the intact human motor cortex. The Journal of Physiology, 592(19), 4115–4128. http://doi.org/10.1113/jphysiol.2014.274316

Bestmann, S., & Krakauer, J. W. (2015). The uses and interpretations of the motor-evoked potential for understanding behaviour. Experimental Brain Research, 233(3), 679–689. http://doi.org/10.1007/s00221-014-4183-7

Bestmann, S., & Duque, J. (2015). Transcranial Magnetic Stimulation Decomposing the Processes Underlying Action Preparation. The Neuroscientist, 1073858415592594. http://doi.org/10.1177/1073858415592594

 

tACS/tDCS

Di Lazzaro, V., & Rothwell, J. C. (2014). Corticospinal activity evoked and modulated by non-invasive stimulation of the intact human motor cortex. The Journal of Physiology, 592(19), 4115–4128. http://doi.org/10.1113/jphysiol.2014.274316

Merrill, D. R., Bikson, M., & Jefferys, J. G. R. (2005). Electrical stimulation of excitable tissue: design of efficacious and safe protocols. Journal of Neuroscience Methods, 141(2), 171–198. http://doi.org/10.1016/j.jneumeth.2004.10.020

Fertonani, A., & Miniussi, C. (2016). Transcranial Electrical Stimulation: What We Know and Do Not Know About Mechanisms. The Neuroscientist. http://doi.org/10.1177/1073858416631966

 

measuring movement & using movement to infer cognition

Faisal, A. A., Selen, L. P. J., & Wolpert, D. M. (2008). Noise in the nervous system. Nature Reviews Neuroscience, 9(4), 292–303. http://doi.org/10.1038/nrn2258

Franklin, D. W., & Wolpert, D. M. (2008). Specificity of Reflex Adaptation for Task-Relevant Variability. The Journal of Neuroscience, 28(52), 14165–14175. http://doi.org/10.1523/JNEUROSCI.4406-08.2008

Wolpert, D. M., & Landy, M. S. (2012). Motor control is decision-making. Current Opinion in Neurobiology. Retrieved from http://www.sciencedirect.com/science/article/pii/S0959438812000827

Song, J. H., & Nakayama, K. (2009). Hidden cognitive states revealed in choice reaching tasks. Trends in Cognitive Sciences, 13(8), 360–366.

 

invasive recordings in animals

Alivisatos, A. P., Andrews, A. M., Boyden, E. S., Chun, M., Church, G. M., Deisseroth, K., … Zhuang, X. (2013). Nanotools for Neuroscience and Brain Activity Mapping. ACS Nano, 7(3), 1850–1866. http://doi.org/10.1021/nn4012847

Donoghue, J. P. (2008). Bridging the Brain to the World: A Perspective on Neural Interface Systems. Neuron, 60(3), 511–521. http://doi.org/10.1016/j.neuron.2008.10.037

 

invasive recordings in humans (grids, epilepsy, BMI)

Bensmaia, S. J., & Miller, L. E. (2014). Restoring sensorimotor function through intracortical interfaces: progress and looming challenges. Nature Reviews Neuroscience, 15(5), 313–325. http://doi.org/10.1038/nrn3724

Hatsopoulos, N. G., & Donoghue, J. P. (2009). The Science of Neural Interface Systems. Annual Review of Neuroscience, 32(1), 249–266. http://doi.org/10.1146/annurev.neuro.051508.135241

 

cooling/lesioning

Lomber, S. G. (1999). The advantages and limitations of permanent or reversible deactivation techniques in the assessment of neural function. Journal of Neuroscience Methods, 86(2), 109–117. http://doi.org/10.1016/S0165-0270(98)00160-5

 

Calcium imaging

Alivisatos, A. P., Andrews, A. M., Boyden, E. S., Chun, M., Church, G. M., Deisseroth, K., … Zhuang, X. (2013). Nanotools for Neuroscience and Brain Activity Mapping. ACS Nano, 7(3), 1850–1866. http://doi.org/10.1021/nn4012847

 

optogenetics

 

animal models (mouse, worms, fish, monkey)

 

stats/methods

Myung, I. J. (2003). Tutorial on maximum likelihood estimation. Journal of Mathematical Psychology, 47(1), 90–100. http://doi.org/10.1016/S0022-2496(02)00028-7

Maris, E., & Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods, 164(1), 177–190. http://doi.org/10.1016/j.jneumeth.2007.03.024

Pernet, C. R., Chauveau, N., Gaspar, C., & Rousselet, G. A. (2011). LIMO EEG: A Toolbox for Hierarchical LInear MOdeling of ElectroEncephaloGraphic Data. Computational Intelligence and Neuroscience, 2011, 1–11. http://doi.org/10.1155/2011/831409

Nakagawa, S., & Hauber, M. E. (2011). Great challenges with few subjects: Statistical strategies for neuroscientists. Neuroscience & Biobehavioral Reviews, 35(3), 462–473. http://doi.org/10.1016/j.neubiorev.2010.06.003

Cumming, G., Fidler, F., & Vaux, D. L. (2007). Error bars in experimental biology. The Journal of Cell Biology, 177(1), 7.

Nieuwenhuis, S., Forstmann, B. U., & Wagenmakers, E.-J. (2011). Erroneous analyses of interactions in neuroscience: a problem of significance. Nat Neurosci, 14(9), 1105–1107. http://doi.org/10.1038/nn.2886

Kliegl, R., Wei, P., Dambacher, M., Yan, M., & Zhou, X. (2011). Experimental effects and individual differences in linear mixed models: estimating the relationship between spatial, object, and attraction effects in visual attention. Frontiers in Quantitative Psychology and Measurement, 1, 238. http://doi.org/10.3389/fpsyg.2010.00238

Osborne, J. W. (2013). Is data cleaning and the testing of assumptions relevant in the 21st century? Frontiers in Psychology, 4. http://doi.org/10.3389/fpsyg.2013.00370

Speelman, C. P., & McGann, M. (2013). How Mean is the Mean? Frontiers in Psychology, 4. http://doi.org/10.3389/fpsyg.2013.00451

Cumming, G. (2014). The New Statistics Why and How. Psychological Science, 25(1), 7–29. http://doi.org/10.1177/0956797613504966

Gelman, A., & Stern, H. (2006). The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant. The American Statistician, 60(4), 328–331. http://doi.org/10.1198/000313006X152649

 

practical analysis/programming

Lacouture, Y., & Cousineau, D. (2008). How to use MATLAB to fit the ex‐Gaussian and other probability functions to a distribution of response times. Tutorials in Quantitative Methods for Psychology.

Urai, A. (2016). Prettier plots in Matlab. http://anneurai.net/2016/06/13/prettier-plots-in-matlab
 

Experiences with signing peer reviews

There's always discussion about peer review. I'm sure your group does the same as mine — for every anonymous peer review we get, we guess who the author might have been. It's more than just curiosity. Knowing who authored a critique might help in finding a convincing reply, by addressing what the reviewer really finds relevant. It might also let us ask back for clarification when a comment remains unclear to us.
But maybe most of all, sometimes I’d really like to know what made a reviewer write a disrespectful, bashing review.

Different ideas of how peer review should be done

You can see on Twitter that some scientists are thinking about signing their reviews, but are worried about the consequences if their review is critical of the study. In fact, some have suggested that peer reviews should only be signed by those who have gained tenure (implying that, if you haven't, signing might have serious, negative consequences).

Others are proposing much more radical changes to the peer review system. Some have suggested that reviewers should be allowed to publish their reviews on their blog. This would, for instance, show the contributions we make as reviewers, which are currently secret and invisible. Some openness about the review process is emerging in the publishing world. Frontiers publishes the names of the reviewers with each article, and PeerJ publishes the full content of the reviews if the paper’s authors and reviewers consent.

Still others suggest getting rid of the current review practices entirely, and instead publishing on preprint servers, with peer review performed post-publication through an online comment/reply procedure.

Writing anonymous reviews

When I started in science, I got to know the standard model of anonymous peer review from both sides.
On the giving end, it is comfortable to know that the authors won't know who you are. This way, it's easier to criticize and doubt the manuscript under review. But then again, don't we discuss and criticize each other's work at every conference we go to? Why does it feel so much harder to sign a review than it does to state your opinion at a poster? Sure, something written is more durable than something you say at a meeting, but as a reviewer, I am doubting, criticizing, and questioning a paper with the openness to be convinced by the authors in scientific debate. Thus, it should be normal that some of the things I write in a review are wrong.

And then, once I was better known in my field, there was this thing about trying to remain anonymous. You know, this situation where the authors aren't aware of your paper that perfectly fits their argument, or that should be cited for some other reason? How do you include that in an "anonymous" review without revealing who you are? And then there was the situation in which a colleague, at a conference, came up to me and told me he knew I was the reviewer, because that one experimental condition I had suggested could only have come from me. Anonymity: nice concept, often hard to guarantee in scientific debate.

Getting anonymous reviews

On the receiving end, peer review proved hard, too. Haven't we all gotten those reviews that we had to put away for a few days before we felt we could face the seeming destruction they meant to our work… But worse, we've all gotten those troll reviews: reviews written in a disrespectful manner that bash our work, and we just don't understand why. I often wonder whether those reviewers would have used the same tone if they had signed their review.

Signing reviews: positive effects

Then I started getting some signed reviews. Overall, their number is still small, maybe 10% of all the reviews I've received. But I was surprised by my own response to these reviews. Even when they were very critical, the one thing that stands out to me is that I never had the feeling that they were disrespectful. For whatever weird psychological reason, knowing that there was a name attached made it much easier to get to work on them. Now, you might think, sure, these people signed their reviews because they didn't have any substantial criticism. Not at all. Their reviews were just as critical. One asked us to redo our entire data analysis.

I met my reviewers at conferences in three cases, and each time talked to them about the review. It was, in each case, an informative discussion, and never awkward. Even with the reviewer who asked us to redo the analysis…

With all these experiences of writing and receiving reviews, I decided a while ago that I would sign my own reviews from now on. And I was surprised by the responses I got. One author wrote to me after their paper was through, thanking me for the "contributions" and asking for pdfs of my publications. At a recent conference, the first author of a paper which had been rejected came up to me and told me that he had found my review very helpful (whereas I had feared he'd think me a prick), and we had a nice conversation.

One thing that is clear to me: although I still write tough reviews when it's called for, I make every effort to write them respectfully. I did that before signing, too, but I try even harder now. I imagine putting your name under your piece would do that for most reviewers. Wouldn't that be a step forward?

So…

In conclusion, while it can still feel awkward to submit my name with a critical review, my experiences have been positive. Of course, whether I'll get tenure hasn't been decided. So it remains to be seen whether I have insulted some senior author so much that it will have drastic consequences, such as that person trying to hinder my further career. It seems improbable to me, though the situation appears to differ in some fields.

Putting my name to my opinions and criticism appears to me to be the way it should be: let’s have discussions in which we fight over our standpoints. But let’s keep our respect.

From here, published reviews, be it alongside the papers, on my blog, or in some online forum, are just a small step away.

Comments are open! Share your thoughts!

The one thing we should NOT conclude from the Open Science Collaboration’s replication study

The authors of the replication study that was published in Science this week added some niceties in their paper, stating:

It is also too easy to conclude that a failure to replicate a result means that the original evidence was a false positive. Replications can fail if the replication methodology differs from the original in ways that interfere with observing the effect.

Some of the responses to the study seem to cling to that passage of the paper, apparently in the hope of explaining the low number of successful replications (some 35 of 97) in said study.

For instance, the board of the German Society of Psychology (DGPs) stated that (translated from German):

Such findings rather show that psychological processes are often context-dependent and that their generalizability must be further investigated. The replication of an American study might render different results than if it were run in Germany or Italy (or vice versa).

A similar context argument is made in this post in the NY Times.

When a study should replicate

Good scientific conduct requires that the methods section of a paper state all details necessary to redo the study. In other words, the methods section reflects what the authors deemed relevant to observe the reported effect. The Collaboration not only used the methods sections of the studies they tried to replicate, but also asked the original authors for equipment and other details. Therefore, robust effects most likely had a good chance of being replicated.

One can ask what kind of effects we are producing with our science if we expect that the country in which a study is run will affect the significance of the result. (While this might be sensible for some social psychology, it should usually not be true in cognitive science.) More generally, by stating that our effects will be eliminated by small, unknown differences from one lab to another, we are basically saying that we have no clue about the true origin of those effects. It is my sincere hope that most labs do not take on this kind of thinking…

The wrong conclusion

The above-mentioned NY Times post was titled Psychology is not in crisis. One can argue about the word crisis (see the post by Brian Earp). But the title basically suggests that there's nothing to worry about: non-replication is just bad luck, due to weird little blips in the lab context, and will inspire us to new research. I think that is the one conclusion we should not draw from the Collaboration's study.

The better conclusion: a kick in the behind

I posted about some of the reasons for non-reproducibility yesterday: underpowered studies, file-drawer publication behavior, selective reporting, and p-hacking. All of these problems have been known for some time, but the scientific community is slow to adjust its culture to attack them.

Rather than blaming the results of The Collaboration's study on small differences between lab setups and the like, we should face the imperfections of our current research culture: success in science depends on publishing lots of papers; thus, studies are done fast, and results are blasted out as fast as possible; and publication is usually possible only with significant results.

Some things that can be done

So, rather than leaning back and doing business as usual, let’s think about what can be done. There are numerous initiatives and possibilities. Most of them require work and are still unpopular. But this week’s replication study could give us some momentum to get going. Here’s what my team came up with in today’s lab meeting:

  • Pre-register your study with a journal, so that it will be published no matter what the result. (We discussed that this is not always feasible, because with complex studies, you might have to try out different analysis strategies. Nevertheless, pre-registration should work well with many studies that use previously established effects and paradigms.)

  • Replicate effects before publishing. (The “but…”s are obvious: costly, time-intensive, not attractive in today’s publish-or-perish world.)

  • Publish non-significant results. (Can be difficult. Very.)

  • Calculate power before doing experiments (a minimal sketch of such a calculation follows below this list).

  • Inspect raw data and single subject data, not just means.

  • Publish which hypotheses were post-hoc, and which ones existed from the start.

  • Publish data sets.

  • Use Bayesian statistics to accumulate evidence over experiments.
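To make the power point above concrete, here is a minimal sketch of an a priori power calculation in Python. It assumes the statsmodels library and an illustrative expected effect size of Cohen's d = 0.5; none of the numbers are a recommendation for any particular study.

# Minimal sketch: a priori power analysis for a two-sample t-test.
# Assumes statsmodels is installed; the effect size d = 0.5 is purely illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                    alternative='two-sided')
print("Required sample size per group: %.1f" % n_per_group)   # roughly 64 per group

With these illustrative numbers, roughly 64 participants per group would be needed to reach 80% power.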

If every lab just started with one or two of these points that they aren’t yet practicing, we’d probably end up with a better replication success score next time around.

Why science results are often spurious

In a huge effort, the Open Science Collaboration, headed by Brian Nosek, attempted to replicate more than 100 studies that were published in 2008 in 3 psychology journals (see the paper here). 100 replication attempts were finished and made it into the report. Of those, a crushingly low number reproduced the effects of the original publication.

There's some debate about the methodology of the study — see, for example, the excellent post by Alexander Etz, who suggests a Bayesian approach instead of classifying each replication attempt as a success vs. a failure. But, as Etz concludes, any way you look at it, a lot of studies didn't replicate — somewhere between one third and two thirds.

It's worth noting that surprising results and difficult studies replicated less often. Thus, if we generalize the result to all of science, we might expect a higher nonsense rate from higher-impact journals, which often publish unexpected findings. It might also mean that we should expect higher nonsense rates for more complex methods, such as fMRI experiments. This is, of course, just a hunch. But estimate the cost, both in terms of time and money, of a project that tried to replicate 100 fMRI studies…

What are the reasons for the failure to replicate so many studies? There are at least 3 problems:

Publication bias and underpowered studies

Studies are usually published if they have a positive result, that is, a significant p-value. This is because

  • in classical statistics, a non-significant result does not allow us to conclude that the effect is absent
  • not finding what you attempted to find is usually not very interesting
  • not finding what you attempted to find might just mean you did sloppy work

As a result, lots of studies are conducted, but never written up — the so-called file drawer problem. What gets into the journals are those attempts that were successful — but often by chance. The rest goes in the trash.

And vice versa, studies that are published often give us a wrong impression of how strong an effect really is: because being successful in science means publishing as many papers as possible, studies often include only a few subjects, so that they have low power. Then, if a significant effect is found by chance, it is published. Accordingly, most replications will come out with smaller effects, or none at all (see Button et al. 2013).
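A small simulation makes this effect-size inflation concrete. The following sketch (plain Python with numpy/scipy; all parameters are arbitrary illustrations, not taken from any real study) runs many underpowered studies of a modest true effect and "publishes" only the significant ones:

# Sketch: publishing only significant results inflates published effect sizes.
# All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, n_studies = 0.3, 15, 5000   # small true effect, small samples, many "labs"

observed, published = [], []
for _ in range(n_studies):
    a = rng.normal(true_d, 1, n)       # "treatment" group
    b = rng.normal(0.0, 1, n)          # control group
    t, p = stats.ttest_ind(a, b)
    d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    observed.append(d)
    if p < 0.05 and t > 0:             # only positive, significant results get written up
        published.append(d)

print("true d: %.2f  mean observed d: %.2f  mean published d: %.2f"
      % (true_d, np.mean(observed), np.mean(published)))
# The published effects come out much larger than the true effect.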

Selective reporting of measurements

Because we're so keen on finding a positive result, we often measure a lot of things at once. But given the problem of chance results, the more measures we take, the more probable it is that we find a spurious significant result. The problem becomes worse if a researcher measures many things, then picks the significant result for publication, but does not disclose that there were many other measures that did not show a significant effect (see Simmons et al. 2011). This gives all the more the impression of a solid result, even though it was really just chance.
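How quickly chance results pile up is easy to calculate. With k independent outcome measures and no true effect anywhere, the probability of at least one spurious significant result at alpha = .05 is 1 - 0.95^k. A tiny sketch (the independence assumption is, of course, a simplification):

# Sketch: family-wise chance of at least one spurious "significant" result,
# assuming k independent measures and no true effect anywhere (a simplification).
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print("%2d measures -> P(at least one p < .05) = %.2f" % (k, p_any))
# 1: 0.05, 5: 0.23, 10: 0.40, 20: 0.64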

p-hacking

There are more papers reporting p-values just below 0.05 than there should be (see Lakens 2015). This indicates that authors work on their data until the results become significant. There are several ways to do that, for example

  • cleaning data by eliminating outlier data points and outlier subjects. There are many degrees of freedom in cleaning, and no hard rules as to what should (not) be done.
  • acquiring data subject by subject and stopping when the p-value is low enough (see Simmons et al. 2011; a small simulation of this strategy follows below this list). This strategy will make you stop data acquisition after you have, by chance, sampled one or several subjects with a strong effect, who push your p-value just under the 0.05 mark.
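Here is a minimal, hedged sketch of that optional-stopping strategy in Python (numpy/scipy; the sample-size limits and the number of simulated studies are arbitrary). There is no true effect at all, yet peeking at the p-value after every added subject produces far more than 5% "significant" outcomes:

# Sketch: optional stopping ("collect until p < .05") with no true effect.
# Parameters (minimum/maximum n, number of simulated studies) are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_min, n_max, n_sims = 10, 60, 2000
false_positives = 0

for _ in range(n_sims):
    data = list(rng.normal(0.0, 1.0, n_min))    # start with a minimum sample
    while True:
        t, p = stats.ttest_1samp(data, 0.0)     # peek at the p-value
        if p < 0.05:                            # stop as soon as it looks "significant"
            false_positives += 1
            break
        if len(data) >= n_max:                  # give up at the maximum sample size
            break
        data.append(rng.normal(0.0, 1.0))       # otherwise add one more subject

print("False-positive rate with optional stopping: %.2f" % (false_positives / n_sims))
# Far above the nominal 0.05, even though there is nothing to find.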

It is obvious that such results will not replicate under more controlled and constrained data acquisition and analysis.

I’ve never understood why 5% is such a magical mark. But as long as science careers rest on publishing many papers, and getting a paper published rests on passing those magic 5%, I suppose we’ll continue seeing p-hacking.

Science practices seem to be really hard to change. These problems have been known for a while. Maybe we’re gaining some momentum.
Happy experimenting!

References

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science 349: aac4716.

Lakens D. (2015) On the challenges of drawing conclusions from p-values just below 0.05. PeerJ 3:e1142.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22, 1359–1366.

Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376.

Research Collaborative (SFB) continues until 2019

Great news! We got word yesterday that the German Research Foundation (DFG) will fund the Hamburg-based Research Collaborative (“Sonderforschungsbereich”, SFB) 936, Multi-site communication in the brain, for a second 4-year period, from 2015–2019.
At the Reach & Touch Lab, Jonathan Schubert & I are currently wrapping up the work of our first funding period’s project in the SFB, “The role of vision for shaping cortico-cortical interactions mediating sensorimotor transformations”.
I am excited that the new period’s project, “Tactile-visual interactions for saccade planning during free viewing and their modulation by TMS” is a collaboration with Peter König of the University of Osnabrück. We will investigate how different brain regions connect with each other to merge space across our senses.
We have a PostDoc position open in this project – look out for information about it here very soon!