Author: Tobias Heed

Poster at SfN 2015: Mislocalizing touch from hands to feet

Tobias will be attending SfN, which takes place in Chicago starting this Saturday.
He’ll present a poster at location O15 on Wednesday morning on a study by Steph Badde and himself. In this study, we demonstrate that participants often grossly misperceive where tactile stimuli occurred. For example, when we stimulate their left hand, they may report that the touch occurred on their right foot.
Tactile location is initially coded in primary somatosensory cortex, where activity arises in the sensory homunculus at the location that receives input from the periphery. Our results show that the stimulus location we perceive in the end can be vastly different from what would be expected from this initial homuncular representation. Furthermore, we show how this misperception is determined by several anatomical and spatial factors.

Come and see our poster on Wednesday, Oct. 21, 8–12, at poster slot O15! (It is also listed in the SfN Meeting Planner.)

New paper on tactile decision making: Brandes & Heed (J Neurosci)

New paper:
Brandes, J. & Heed, T. (2015). Reach Trajectories Characterize Tactile Localization for Sensorimotor Decision Making. Journal of Neuroscience, 35(40), 13648–13658. doi: 10.1523/JNEUROSCI.1873-14.2015

I’m very happy that this paper is now out. It is the first paper of Janina’s PhD work. She spent a ton of time optimizing the experimental paradigm and, even more so, the analysis.

The research question

Imagine you feel an itch on your foot and want to scratch it. Where your hand has to go depends on where you’ve placed your foot. The brain merges body posture and the skin location of the itch so that your hand goes to the right place.
But exactly how the brain localizes the touch is debated. The debate originates from experiments with crossed limbs: in a number of tasks involving touch, people are much worse with crossed than with uncrossed hands.
When you cross your hands, your right hand (that is, the skin location) lies in left space. So the two pieces of information – skin and space – are in conflict. But what does that tell us about how the brain actually processes touch location?
One hypothesis is that hand crossing makes it difficult for the brain to compute the touch location in space. According to this idea, the skin location is known fast, but (with crossed limbs) the spatial location becomes available late. Another hypothesis is that the computation is actually not at all difficult. Rather, the brain remembers the skin location, and integrates skin and spatial locations. With crossed hands, the two are in conflict (right vs. left), and this conflict must be resolved. Our experiment sought to dissociate these two accounts.

What we show

In our experiment, participants received a touch on their crossed feet and had to reach to the touched location. If only the skin location were available at first, then people should initially reach toward the wrong foot, because the skin location is opposite to the foot’s actual spatial location (Hypothesis 1).
In contrast, if the brain computes the spatial location fast but then has to resolve the conflict, then people should take a while (longer than when the feet are not crossed and there isn’t any conflict), but the reach should go directly to the correct location (Hypothesis 2).
The second option is basically what we found. Overall, it’s not quite that simple, so read about the details in the paper…

Why the results are important

We show that the transformation from skin location to the 3D location in space is not difficult. With this finding, we refute a common idea about how touch is localized.
We demonstrate experimentally that the brain integrates information from different kinds of spatial representations to localize touch. We had already found modeling evidence for this in a study by Steph Badde (which recently came out, see here), and we had also laid out this hypothesis in our recent TiCS paper. Finally, our results provide a direct link from spatial integration in touch to decision making models such as drift diffusion models (more on that in the paper!).
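For readers who haven’t come across drift diffusion models: they describe a decision as noisy evidence accumulating toward one of two bounds, which jointly predicts choices and reaction times. Below is a minimal, generic simulation of that idea (a sketch in Python; it is not the model used in the paper, and all parameter values are invented for illustration).

```python
import numpy as np

def simulate_ddm(drift, bound=1.0, noise=1.0, dt=0.001, max_t=3.0, rng=None):
    """Simulate one trial of a generic drift diffusion model.

    Evidence starts at 0 and accumulates with a constant drift plus Gaussian
    noise until it crosses +bound ("correct") or -bound ("error"), or until
    max_t seconds have passed (counted as an error in this toy version).
    All parameter values are made up for illustration.
    """
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("correct" if x >= bound else "error"), t

rng = np.random.default_rng(0)
trials = [simulate_ddm(drift=0.8, rng=rng) for _ in range(1000)]
p_correct = sum(choice == "correct" for choice, _ in trials) / len(trials)
mean_rt = np.mean([rt for _, rt in trials])
print(f"P(correct) = {p_correct:.2f}, mean RT = {mean_rt:.2f} s")
```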

The one thing we should NOT conclude from the Open Science Collaboration’s replication study

The authors of the replication study that was published in Science this week added some niceties to their paper, stating:

It is also too easy to conclude that a failure to replicate a result means that the original evidence was a false positive. Replications can fail if the replication methodology differs from the original in ways that interfere with observing the effect.

Some of the responses to the study seem to cling to that passage of the paper, apparently in the hope of explaining the low number of successful replications (some 35 of 97) in said study.

For instance, the board of the German Society of Psychology (DGPs) stated that (translated from German):

Such findings rather show that psychological processes are often context-dependent and that their generalizability must be investigated further. The replication of an American study might yield different results when it is run in Germany or Italy (or vice versa).

A similar context argument is made in this post in the NY Times.

When a study should replicate

Good scientific conduct requires that the methods section of a paper state all details necessary to redo the study. In other words, the methods section reflects what the authors deemed relevant for observing the reported effect. The Collaboration not only used the methods sections of the studies they tried to replicate, but also asked the original authors for materials, equipment, and so on. Therefore, robust effects most likely had a good chance of being replicated.

One can ask what kind of effects we are producing with our science if we expect the country in which a study is run to affect the significance of the result. (While this might be sensible for some social psychology, it should usually not be true in cognitive science.) More generally, by stating that our effects will be eliminated by small, unknown differences from one lab to another, we basically say that we have no clue about the true origin of those effects. It is my sincere hope that most labs do not take on this kind of thinking…

The wrong conclusion

The above-mentioned NY Times post was titled “Psychology is not in crisis”. One can argue about the word crisis (see the post by Brian Earp). But the title basically suggests that there’s nothing to worry about: non-replication is just bad luck, due to weird little blips in the lab context, and will inspire us to new research. I think that is the one conclusion we should not draw from the Collaboration’s study.

The better conclusion: a kick in the behind

I posted about some of the reasons for non-reproducibility yesterday: underpowered studies, file-drawer publication behavior, selective reporting, and p-hacking. All of these problems have been known for some time, but the scientific community is slow to adjust its culture to attack them.

Rather than blaming the results of the Collaboration’s study on small differences between lab setups and the like, we should face the imperfections of our current research culture: success in science depends on publishing lots of papers; thus, studies are done fast, and results are blasted out as quickly as possible; and publication is usually possible only with significant results.

Some things that can be done

So, rather than leaning back and doing business as usual, let’s think about what can be done. There are numerous initiatives and possibilities. Most of them require work and are still unpopular. But this week’s replication study could give us some momentum to get going. Here’s what my team came up with in today’s lab meeting:

  • Pre-register your study with a journal, so that it will be published no matter what the result. (We discussed that this is not always feasible, because with complex studies, you might have to try out different analysis strategies. Nevertheless, pre-registration should work well with many studies that use previously established effects and paradigms.)

  • Replicate effects before publishing. (The “but…”s are obvious: costly, time-intensive, not attractive in today’s publish-or-perish world.)

  • Publish non-significant results. (Can be difficult. Very.)

  • Calculate power before doing experiments. (See the sketch after this list.)

  • Inspect raw data and single subject data, not just means.

  • Publish which hypotheses were post-hoc, and which ones existed from the start.

  • Publish data sets.

  • Use Bayesian statistics to accumulate evidence over experiments.
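To make the power point concrete, here is a minimal simulation-based power check for a two-sample t-test (a hypothetical sketch in Python using numpy and scipy; the effect size and sample sizes are invented, and dedicated tools such as G*Power give the same answer analytically).

```python
import numpy as np
from scipy import stats

def estimated_power(effect_size=0.5, n_per_group=20, alpha=0.05,
                    n_sims=5000, rng=None):
    """Estimate the power of a two-sample t-test by simulation.

    effect_size is the true group difference in units of the common
    standard deviation (Cohen's d). All numbers are illustrative only.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)          # control group
        b = rng.normal(effect_size, 1.0, n_per_group)  # experimental group
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

for n in (10, 20, 40, 80):
    print(f"n = {n:2d} per group -> estimated power ≈ {estimated_power(n_per_group=n):.2f}")
```

Running such a check before data collection tells you how many subjects you would need for a realistic chance of detecting the effect you are after.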

If every lab just started with one or two of these points that they aren’t yet practicing, we’d probably end up with a better replication success score next time around.

Why science results are often spurious

In a huge effort, the Open Science Collaboration, headed by Brian Nosek, attempted to replicate more than 100 studies that were published in 2008 in 3 psychology journals (see the paper here). 100 experiments were completed and made it into the report. Of those, a crushingly low number reproduced the effects of the original publication.

There’s some debate about the methodology of the study — see, for example, the excellent post by Alexander Etz who suggests a Bayesian approach instead of classifying each replication attempt into a success vs. failure. But, as Etz concludes, any way you look at it, a lot of studies didn’t replicate — somewhere between one and two thirds.

It’s worth noting that surprising results and difficult studies replicated less often. Thus, if we generalize the result to all of science, we might expect a higher nonsense rate in higher-impact journals, which often publish unexpected findings. It might also mean that we should expect higher nonsense rates for more complex methods, such as fMRI experiments. This is, of course, just a hunch. But estimate the cost, both in time and money, of a project that tried to replicate 100 fMRI studies…

What are the reasons for the failure to replicate so many studies? There are at least 3 problems:

Publication bias and underpowered studies

Studies are usually published if they have a positive result, that is, a significant p-value. This is because

  • in classical statistics, a non-significant result does not allow us to draw valid conclusions
  • not finding what you attempted to find is usually not very interesting
  • not finding what you attempted to find might just mean you did sloppy work

As a result, lots of studies are conducted, but never written up — the so-called file drawer problem. What gets into the journals are those attempts that were successful — but often by chance. The rest goes in the trash.

Conversely, studies that are published often give us a wrong impression about how strong an effect really is: because being successful in science means publishing as many papers as possible, studies often test only a few subjects and therefore have low power. Then, if a significant effect is found by chance, it is published. Accordingly, most replications will come out with smaller effects, or none at all (see Button et al. 2013).
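A quick simulation illustrates this “winner’s curse” (a hypothetical sketch in Python; the numbers, a true effect of d = 0.3 and 15 subjects per group, are invented and not taken from any of the studies discussed here): with a small true effect and small samples, only a minority of studies come out significant, and those that do systematically overestimate the effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n_per_group, published = 0.3, 15, []

# Run many underpowered "studies"; keep only those that reach p < .05,
# mimicking publication bias, and record their observed effect sizes.
for _ in range(10_000):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(true_d, 1.0, n_per_group)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        published.append((b.mean() - a.mean()) / pooled_sd)  # observed Cohen's d

print(f"'Published' studies: {len(published)} of 10000 (power ≈ {len(published) / 10000:.2f})")
print(f"True effect: d = {true_d}; mean 'published' effect: d ≈ {np.mean(published):.2f}")
```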

Selective reporting of measurements

Because we’re so keen on finding a positive result, we often measure many things at once. But given the problem of chance results, the more measures we take, the more probable it is that we find a spurious significant result. The problem becomes worse if a researcher measures many things, picks the significant result for publication, but does not disclose that there were many other measures that did not show a significant effect (see Simmons et al. 2011). This gives an even stronger impression of a solid result, though really it was just chance.
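To put a rough number on “the more measures we take”: if each measure is tested at alpha = .05 and the measures are treated as independent (an idealization), the chance of at least one spurious significant result rises quickly with the number of measures.

```python
# Illustrative arithmetic: probability of at least one false positive
# among k independent tests, each run at alpha = .05.
alpha = 0.05
for k in (1, 3, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} measures -> P(at least one false positive) = {p_any:.2f}")
```

With 10 independent measures, the chance of at least one spurious “effect” is already around 40%.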

p-hacking

There are more papers that report p-values just below 0.05 than there should be (see Lakens 2015). This indicates that authors work on their data until the results become significant. There are several ways to do that, for example:

  • cleaning the data by eliminating outlier data points and outlier subjects. There are many degrees of freedom in cleaning, and no hard rules as to what should (not) be done.
  • acquiring data subject by subject and stopping when the p-value is low enough (see Simmons et al. 2011). This strategy will make you stop data acquisition after you have, by chance, sampled one or several subjects with a strong effect, who push your p-value just under the 0.05 mark.

It is obvious that such results will not replicate under more controlled and constrained data acquisition and analysis.
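How strongly optional stopping inflates false positives is easy to simulate (a toy sketch in Python with invented settings, in the spirit of Simmons et al. 2011): even when there is no effect at all, testing after every new subject and stopping at the first p < .05 yields “significant” results far more often than the nominal 5%.

```python
import numpy as np
from scipy import stats

def optional_stopping_trial(n_start=10, n_max=60, alpha=0.05, rng=None):
    """One simulated experiment with NO true effect.

    Test after n_start subjects, then add one subject at a time and re-test,
    stopping as soon as p < alpha or n_max is reached. Returns True if a
    "significant" result was ever declared. Settings are illustrative only.
    """
    rng = rng if rng is not None else np.random.default_rng()
    data = list(rng.standard_normal(n_start))
    while True:
        if stats.ttest_1samp(data, 0.0).pvalue < alpha:
            return True
        if len(data) >= n_max:
            return False
        data.append(rng.standard_normal())

rng = np.random.default_rng(2)
n_experiments = 2000
false_positives = sum(optional_stopping_trial(rng=rng) for _ in range(n_experiments))
print(f"False positive rate with optional stopping: {false_positives / n_experiments:.2f} "
      f"(nominal alpha was 0.05)")
```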

I’ve never understood why 5% is such a magical mark. But as long as science careers rest on publishing many papers, and getting a paper published rests on passing those magic 5%, I suppose we’ll continue seeing p-hacking.

Science practices seem to be really hard to change. These problems have been known for a while. Maybe we’re gaining some momentum.
Happy experimenting!

References

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349, aac4716.

Lakens, D. (2015). On the challenges of drawing conclusions from p-values just below 0.05. PeerJ, 3, e1142.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.

Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376.

Great news: DAAD is funding a 3-month visit by Daniel Marigold from Canada

Daniel Marigold is an Associate Professor in the Department of Biomedical Physiology and Kinesiology at Simon Fraser University in Burnaby, Canada. His research focuses on the role of vision in walking. One of his main interests is how parietal cortex organizes movements of different effectors, a topic we’ve addressed in the Reach & Touch Lab (see the publications by Leoné et al. 2014 and Heed et al. 2011) and that Phyllis Mania is currently pursuing as part of the Emmy Noether group’s work package.

The DAAD is funding Dan to visit the Reach & Touch Lab for 3 months in 2016. We’re planning a TMS project in which we will test the relevance of parietal cortex for hand and foot motor planning. Better yet, Dan won’t be coming alone, but will be joined by Dr. Kim Lajoie, a member of his lab with a background in neuronal recording in the parietal cortex of cats.

We’re looking forward to Dan and Kim’s stay!

Steph Badde has left to go to NY

Steph Badde, who received her PhD from the Biological Psychology and Neuropsychology lab and has worked as a PostDoc here since, is starting a PostDoc with Michael Landy in August. She’s packed her bags and gone off to NY.

We’re sad to let her go! And wish her exciting times in the States.

We are offering BSc thesis topics on tactile perception in babies for 1 or 2 BSc students

We are looking for 1 or 2 Bachelor students for a BSc thesis project.

When can babies feel touch? And how does their knowledge about their own body develop? When can a baby guide its hand toward a body part that has been touched?

We are assessing the ability of 3- to 7-month-old babies to respond to tactile stimulation on different body parts. The task at hand is to systematize and categorize the babies’ behavior. This requires the qualitative description and the quantitative coding of behavior that has been recorded on video.

The thesis can start immediately.

Contact: Tobias Heed

Steph Badde awarded Best Dissertation Prize by the DGPs’s General Psychology section

For the competition, the 10 best dissertations of the last 2 years are pre-selected by a panel of reviewers. The PhDs (by now postdocs) then present their work at a meeting that is usually held at the previous winner’s institution, and prizes are awarded to the three presentations judged best by a local committee.

Steph presented the work from her dissertation, “Top-down and bottom-up influences on tactile localization and remapping”, and received the first prize.

Congratulations!

Steph is currently a postdoc in the lab of Brigitte Röder and collaborates with the Reach & Touch Lab.

We are hiring a PostDoc – apply by June 15, 2015

Funding

The position is funded by a project in the Research Collaborative (Sonderforschungsbereich, SFB) 936, “Multi-site communication in the brain”. The principal investigators of the project are Tobias Heed (Hamburg), Peter König (Osnabrück), and Brigitte Röder (Hamburg). The position is attached to the Emmy Noether Group “Reach and Touch” headed by Tobias Heed, within the Biopsychology department of the University of Hamburg.

The SFB is funded for 4 years. The earliest starting date is July 1, 2015, but a later starting date is possible. The position will end on June 30, 2019, independent of the starting date. 

The SFB consists of 18 projects. They all investigate some aspect of brain connectivity. Methods courses, talks by international guests, and yearly retreats are organized on a regular basis, providing an interesting, interdisciplinary research environment.


Project

The advertised position is in the project “Tactile-visual interactions for saccade planning during free viewing and their modulation by TMS”. The project investigates how saccade planning is influenced by tactile input. The project’s focus is on connectivity between unisensory and multisensory brain regions, measured with EEG, and on the effects of disturbing these networks with TMS.

The PostDoc’s tasks are the planning, data acquisition, analysis, and publication of behavioral, EEG, and combined EEG/TMS studies.

The SFB is located in Hamburg (commonly known as the most beautiful city in the world…). Some initial training for the PostDoc is planned to take place in Peter König’s lab in Osnabrück. There will be close collaboration between the Hamburg and Osnabrück labs on the project, as well as collaboration with other EEG/MEG projects of the SFB.


Who we are looking for

must-have:

  • You have a university degree in a relevant subject, plus a doctorate (PhD).
  • You have experience with the planning, data acquisition, analysis, and publication of EEG or MEG studies with frequency analysis, preferably with the software fieldtrip.
  • You have experience with programming to create experiments (e.g. in Matlab, Presentation, Python).
  • You like to work and integrate with a team, but you can also work very independently. Applications from both new and more experienced PostDocs are welcome.

good-to-have:

  • You have experience with the analysis of EEG/MEG connectivity, with eye tracking, and/or with TMS.


What next?

If you have any questions, please do not hesitate to contact Tobias Heed (tobias.heed@uni-hamburg.de).

You can find the official advertisement here.

Applications should be sent by email, in one single pdf file, to tobias.heed@uni-hamburg.de by June 15, 2015.