Category: Publications

New paper on the anchors of external reference frames in touch (PLoS ONE)

New paper:
Heed T., Backhaus J., Röder B., Badde S. (2016).
Disentangling the External Reference Frames Relevant to Tactile Localization.
PLoS ONE 11(7):e0158829. doi:10.1371/journal.pone.0158829
[ open access pdf ] [ data & scripts ]

In this paper, we’ve published work begun by Jenny Backhaus during her time in our lab several years ago. It took us a while to get the paper ready, because we used Generalized Linear Mixed Modeling, a statistical approach that proved difficult to apply to data from the experimental paradigm we used here, temporal order judgments.
We’ve started to publish Open Access and to provide the data and scripts for our papers. So if you would like to try out statistics different from the ones we used here, run wild.
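If you do want to play with the data, here is a minimal sketch of the kind of curve a temporal order judgment (TOJ) analysis fits: the probability of one of the two responses as a logistic function of the stimulus onset asynchrony (SOA). It is not the generalized linear mixed model we used in the paper (that model additionally estimates random effects across participants), and the simulated data and parameter values below are made up for illustration.

```python
# Toy single-subject TOJ fit: probability of responding "right first" as a
# logistic function of SOA. Not the GLMM from the paper; all values invented.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def psychometric(soa, pss, slope):
    """Logistic function of SOA (ms): pss = point of subjective simultaneity,
    slope = sensitivity parameter."""
    return 1.0 / (1.0 + np.exp(-(soa - pss) / slope))

# Negative SOA = left stimulus first, positive SOA = right stimulus first.
soas = np.array([-200, -110, -60, -30, 30, 60, 110, 200], dtype=float)
n_trials = 40
true_p = psychometric(soas, pss=10.0, slope=45.0)
k_right_first = rng.binomial(n_trials, true_p)   # simulated response counts

# Fit the two parameters to the observed response proportions.
params, _ = curve_fit(psychometric, soas, k_right_first / n_trials, p0=[0.0, 50.0])
print(f"estimated PSS = {params[0]:.1f} ms, slope = {params[1]:.1f} ms")
```

A mixed model fits this same kind of curve, but estimates the parameters jointly across participants instead of one subject at a time.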

The research question

One of the central themes of the Reach & Touch Lab is that the brain automatically places tactile events in space. Given that touch is perceived through sensors in the skin, projecting touch into space requires computations that integrate the skin location of the touch with the current posture of one’s body.
But space is a relative concept: the brain could code touch relative to many anchors. For instance, it could code every touch relative to the eyes. This would be useful, because the location of a touch could then easily be integrated with the locations provided by the visual system. But there are many alternative “anchors” relative to which the brain could code touch. Suggested anchors include the head, the torso, and even landmarks outside the body.
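To make the idea of an anchor concrete, here is a toy example (not taken from the paper): the same touch receives different coordinates depending on the reference point it is expressed against. The positions are invented 2D values.

```python
# Toy illustration of reference frames: the same touch, expressed relative
# to different anchors. All positions are made-up 2D values in cm.
import numpy as np

touch_location = np.array([-30.0, 20.0])   # touch on the hand, in room coordinates
eye_position   = np.array([0.0, 60.0])     # current eye/gaze position
torso_center   = np.array([0.0, 30.0])

touch_eye_centered   = touch_location - eye_position    # coded relative to the eyes
touch_torso_centered = touch_location - torso_center    # coded relative to the torso

print("eye-centered:  ", touch_eye_centered)    # [-30. -40.]
print("torso-centered:", touch_torso_centered)  # [-30. -10.]

# Changing posture (moving the eyes, crossing the hands) changes some of these
# codes but not others -- which is what lets posture manipulations tease them apart.
```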

What we show

We manipulated body posture in a way that allowed us to disentangle the different possible anchors that may be relevant in touch. We present three main findings. First, the eyes appear to be an anchor for tactile space. Second, head and torso did not play a role in tactile coding in our experiments. Finally, however, an eye anchor alone cannot explain our participants’ behavior; this result suggests that other spatial codes (just not head- and torso-centered ones) play a role in tactile processing, in line with previous results we’ve published (see our recent review). We suspect that an important code is an object-centered one, in which spatial coordinates depend on the body parts involved, as well as their posture relative to one another.

Why the results are important

The reference frames used in touch – that is, the anchors relative to which space is coded – have been investigated with a number of different paradigms. Our present paper connects a popular paradigm, the so-called temporal order judgment, with a large body of literature that has used other paradigms, by showing that the eyes are consistently important for tactile spatial coding. The temporal order judgment is a very flexible paradigm, making it an attractive choice for tactile research. It is therefore important to know that it yields results that generalize to other paradigms.
Furthermore, our finding that an object-centered reference frame may be particularly important in tactile coding challenges us to develop experiments that will directly test this hypothesis.

New paper on tactile decision making: Brandes & Heed (J Neurosci)

New paper:
Brandes, J. & Heed, T. (2015). Reach Trajectories Characterize Tactile Localization for Sensorimotor Decision Making. Journal of Neuroscience 35(40): 13648-13658. doi:10.1523/JNEUROSCI.1873-14.2015

I’m very happy that this paper is now out. It is the first paper from Janina’s PhD work. She spent a ton of time optimizing the experimental paradigm and, even more so, the analysis.

The research question

Imagine you feel an itch on your foot and want to scratch it. Where your hand has to go depends on where you’ve placed your foot. The brain merges body posture and the skin location of the itch so that your hand goes to the right place.
But how exactly the brain localizes the touch is debated. The debate originates from results of experiments with crossed limbs. In a number of tasks involving touch, people are much worse with crossed than with uncrossed hands.
When you cross your hands, your right hand (that is, the skin location) lies in left space. So the two pieces of information – skin and space – are in conflict. But what does that tell us about how the brain actually processes touch location?
One hypothesis is that hand crossing makes it difficult for the brain to compute the touch location in space. According to this idea, the skin location is known fast, but (with crossed limbs) the spatial location becomes available late. Another hypothesis is that the computation is actually not at all difficult. Rather, the brain remembers the skin location, and integrates skin and spatial locations. With crossed hands, the two are in conflict (right vs. left), and this conflict must be resolved. Our experiment sought to dissociate these two accounts.

What we show

In our experiment, participants received a touch on their crossed feet, and they had to reach to the touched location. If only the skin location were available at first, then people should initially reach toward the wrong foot, because the skin location is opposite to the foot’s actual spatial location (Hypothesis 1).
In contrast, if the brain computes the spatial location fast but then needs to resolve the conflict, then people should take a while to start the reach (longer than when the feet are not crossed and there isn’t any conflict), but the reach should go directly to the correct location (Hypothesis 2).
The second option is basically what we found. Overall, it’s not quite that simple, so read about the details in the paper…

Why the results are important

We show that the transformation from skin location to the 3D location in space is not difficult. With this finding, we refute a common idea about how touch is localized.
We demonstrate experimentally that the brain integrates information from different kinds of spatial representations to localize touch. We had already found modeling evidence for this in a study by Steph Badde (recently published, see here), and had also presented this hypothesis in our recent TiCS paper. Finally, our results provide a direct link from spatial integration in touch to decision-making models such as drift diffusion models (more on that in the paper!).
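To give a flavor of that link, here is a minimal, illustrative drift diffusion simulation. It is not the analysis from the paper, and all parameter values are assumptions. The idea it sketches is simply that when the skin-based and the external code agree, evidence accumulates quickly, whereas a conflict between them lowers the net drift rate, slowing the response while the decision still ends up mostly on the correct side.

```python
# Toy drift diffusion model (DDM): evidence accumulates noisily towards one of
# two bounds; conflict between spatial codes is modeled as a lower drift rate.
# All parameters are illustrative assumptions, not estimates from the paper.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=3.0):
    """Simulate one trial; return (choice, decision time).
    choice = +1 (correct side) if the upper bound is reached, -1 otherwise."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else -1), t

def summarize(drift, label, n=500):
    choices, rts = zip(*(simulate_ddm(drift) for _ in range(n)))
    acc = np.mean(np.array(choices) == 1)
    print(f"{label}: accuracy = {acc:.2f}, mean decision time = {np.mean(rts)*1000:.0f} ms")

# Uncrossed: skin-based and external codes agree -> strong net drift.
summarize(drift=3.0, label="uncrossed (codes agree)   ")
# Crossed: the codes conflict -> weaker net drift, slower but still mostly correct.
summarize(drift=1.2, label="crossed   (codes conflict)")
```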

New paper: Reference frames for tactile attention are reflected in alpha and beta EEG activity

New paper in press: Jonathan Schubert, Verena N. Buchholz, Julia Föcker, Andreas Engel, Brigitte Röder, & Tobias Heed: Oscillatory activity reflects differential use of spatial reference frames by sighted and blind individuals in tactile attention, to appear in NeuroImage.

Researchers have come to recognize rhythmic variations of the EEG signal, termed oscillatory activity, as an important indicator of cognitive activity. In this paper, we explored how oscillatory activity in two frequency bands, alpha and beta, relates to the spatial processing of touch.

Alpha and beta activity change when a person is paying attention to a defined area of space. For example, when you expect the traffic light on your left to turn green soon, your right hemisphere, which is responsible for the left visual field, will show reduced alpha and beta activity. But what about touch? Imagine you hear a mosquito flying around in your dark room; it seems to be to your left. You concentrate on your left arm, afraid that the mosquito will touch down and suck your blood. Do alpha and beta activity respond in the same way as when you were waiting for the traffic light to change?

When we cross our right hand over to the left side, our brain codes this hand in two ways: as a right body part, but also as a body part in left space. We tested whether alpha and beta activity change according to the “body” code or according to the “space” code. We found that the two frequency bands behave differently. Alpha activity changes according to the “space” code, and beta activity changes according to the “body” code — at least when your visual system has developed normally.
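For readers curious what “alpha and beta activity” means in practice, here is a minimal sketch of how band-limited power is commonly extracted from an EEG channel (band-pass filter plus Hilbert envelope) and compared between the two hemispheres. It is not the analysis pipeline of the paper; the channel names and synthetic signals are made-up illustrations.

```python
# Toy extraction of alpha and beta band power from two EEG channels and a
# simple left/right lateralization index. Synthetic data, not the paper's pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                      # sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(2)

# Fake single-trial signals for one left- (C3) and one right-hemisphere (C4) channel.
eeg = {
    "C3": 0.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size),
    "C4": 1.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size),
}

def band_power(signal, low, high, fs):
    """Mean power of the band-limited signal, via the Hilbert envelope."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.mean(np.abs(hilbert(filtered)) ** 2)

for band, (low, high) in {"alpha": (8, 12), "beta": (13, 30)}.items():
    left = band_power(eeg["C3"], low, high, fs)
    right = band_power(eeg["C4"], low, high, fs)
    # Positive index = more power over the right hemisphere than the left.
    print(f"{band}: lateralization index = {(right - left) / (right + left):+.2f}")
```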

We also analyzed how the frequency bands behave in people who were born blind. In this group, alpha and beta activity both changed according to the “body” code. This difference between sighted and blind humans in neural activity fits well with differences between the two groups in behavior that we have investigated in earlier studies. Sighted humans automatically perceive touch in space. People who were born blind often rely entirely on a “body” code. Still, both groups appeared to use the same brain regions to direct their attention to one or the other hand. This means that these brain regions use a different spatial code for attention to touch depending on whether you have sight or not — an impressive demonstration of how the different sensory systems influence one another.

New paper: Irrelevant tactile stimulation biases visual exploration in external coordinates

New paper in press: José Ossandón, Peter König, and Tobias Heed: Irrelevant tactile stimulation biases visual exploration in external coordinates, to appear in Scientific Reports.

Humans make rapid eye movements, so-called saccades, about 2-3 times a second. Peter König and his group have studied how we choose the next place to look at. It turns out that a number of criteria come together for this decision: “low-level” visual features like contrast and brightness, “high-level” visual features like interestingness, and general preferences for one or the other side of space all influence where the eyes want to go next.
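One way to picture how such criteria could be combined is a simple “priority map”: every candidate location gets a weighted score, and the eyes go to the peak. The sketch below only illustrates that general idea; it is not the model used in Peter’s work, and the maps, weights, and grid size are arbitrary.

```python
# Toy priority map: weighted sum of low-level salience, high-level interest,
# and a general spatial bias; the next fixation goes to the highest-scoring cell.
import numpy as np

rng = np.random.default_rng(3)
h, w = 20, 30                            # coarse grid over the viewed image

low_level  = rng.random((h, w))          # e.g. local contrast / brightness
high_level = rng.random((h, w))          # e.g. "interestingness" of image regions
spatial_bias = np.tile(np.linspace(1.0, 0.6, w), (h, 1))   # mild leftward preference

priority = 0.4 * low_level + 0.4 * high_level + 0.2 * spatial_bias

next_fixation = np.unravel_index(np.argmax(priority), priority.shape)
print("next fixation lands at grid cell (row, col):", next_fixation)
```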

But imagine yourself walking through a forest. When you hear some birds singing, or some cracking in the undergrowth, you will direct your gaze in a general direction (up into the trees, or towards the nearby bushes). Similarly, you might feel that the ground is soft and uneven, making you scan the path in front of you. All of these are examples of information gathered by senses other than vision.

In this new paper, we brought together Peter König’s interest in eye movement choices and Tobias’s interest in the processing of touch in space. Together with Peter’s PhD student José Ossandón, we investigated how touch influences where we look next. Our participants viewed natural images and, from time to time, received a tap on one of their hands. We told participants that the taps were entirely irrelevant (and really, they never had to do anything with them). Nevertheless, when we tapped the left hand, the next few eye movements landed on the left side of the viewed scene more often than when we tapped the right hand. Our participants did not look to where we had applied the tap on the hand; rather, the taps made them orient in the general direction of the touch, towards the left or right side of space.

We then asked our participants to cross their hands: now their left hand was on the right, and the right hand was on the left. In this situation, tapping the left hand made participants look more often to the right – that is, towards the side of space in which the tapped hand lay. In other words, eye movements were biased towards where the tap was in space, not towards where it was on the body. This finding is a nice example of how our brain recodes information from the different senses, here touch (see our recent review paper on tactile remapping for more information), and uses it to guide behavior, for example exploratory eye movements.

The collaboration between José, Peter, and Tobias emerges from the Research Collaborative SFB 936, to which both Peter and Tobias have contributed research projects.

New paper: Effects of movement on tactile localization in sighted and blind humans

New paper in press: Tobias Heed, Johanna Möller, and Brigitte Röder: Movement induces the use of external spatial coordinates for tactile localization in congenitally blind humans, to appear in Multisensory Research.

We and others have often found that people who were born blind process touch differently than people who can see. Sighted people automatically compute where a touch is in space; that is, they combine the location of the touch on the skin with where the touched body part currently is. Congenitally blind people don’t seem to do the same. Instead, they mostly rely just on the location of the touch on the skin, unless they really have to derive the location in space. Given these differences, the visual system is apparently important for how we perceive touch.

In this new study, we asked whether this changes when people move: does planning and executing a movement lead congenitally blind people to code touch in space after all? We did indeed find that blind humans code touch differently while they move than while they are still. And, as we had suspected, they seem to derive a location for touch in space in this situation. Yet, this spatial location appears to be of much higher relevance to sighted than to blind people.

Therefore, our results confirm that whether you can see or not critically influences the way you perceive touch. At the same time, how we code touch is also affected by movement.

New paper: Tactile remapping: from coordinate transformation to integration in sensorimotor processing

New paper by Heed, T., Buchholz, V.N., Engel, A.K., and Röder, B. (in press). Tactile remapping: from coordinate transformation to integration in sensorimotor processing, to appear in Trends in Cognitive Sciences.

Tactile remapping, the transformation of where a touch is on the skin into a location in space, has often been conceptualized as a serial process: first, we perceive where the touch is on the skin; then, we compute the location in space; then, we use that location ever after. In this opinion paper, we argue that tactile localization is better viewed not as a serial, but as an integrative process. We propose that the brain determines where a touch was by using all kinds of information, including the skin location and the newly computed location in space. This view has emerged over recent years in our lab, most clearly in Steph Badde’s work on tactile remapping.
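As a toy illustration of what “integration” means here (and explicitly not the model from Steph Badde’s papers), one can think of the perceived location as a weighted combination of an anatomical, skin-based code and an external spatial code. The weights and the simple +1/-1 coding below are assumptions made for the example.

```python
# Toy weighted integration of two spatial codes for the same touch.
# +1 = "right side", -1 = "left side"; weights are arbitrary illustrations.
def integrate(anatomical, external, w_anatomical=0.4, w_external=0.6):
    """Weighted combination of an anatomical and an external spatial code."""
    return w_anatomical * anatomical + w_external * external

# Touch on the right hand: anatomical code = +1 ("right side of the body").
# Uncrossed hands: the hand also lies in right space, so the external code is +1.
print(integrate(anatomical=+1, external=+1))   # 1.0 -> unambiguous evidence for "right"

# Crossed hands: the right hand now lies in left space, so the external code is -1.
print(integrate(anatomical=+1, external=-1))   # 0.2 -> weak, conflicting evidence
```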

This view raises the question of how the different pieces of information are brought together and integrated. In our paper, we suggest that the analysis of oscillatory brain activity and large-scale brain connectivity may be ideally suited to investigate these kinds of questions.

My favorite part of the paper is a sideline we explore in Box 1. We briefly introduce the idea of sensorimotor contingencies as the basis for the transformation between different spatial formats (like skin location and space). According to this view, the brain might learn the relationship between the different formats by learning the statistical distributions of the sensory and motor signals that occur together. To make this a bit more graspable, imagine you feel, for the first time in your life, an itch on your nose (a skin location). To direct your arm to scratch the nose, you could make random movements until you finally reach the nose and relieve the itch. Over time, you would realize that relief of the nose itch happens when your arm is in a certain location, an event that you can relate to seeing your hand near your face and to the proprioceptive signals that go along with this arm posture.

Traditionally, researchers have assumed that the brain has to calculate the location of the nose in space, and that this spatial location can then be used to guide the hand. In the sensorimotor contingency approach, no such explicit derivation of the nose’s spatial position is necessary: you simply re-create all the sensory signals that you have learned will co-occur when a nose itch ends, by initiating the appropriate motor commands that lead to this end state. Given that I have investigated transformations between different spatial formats for several years, the prospect that they might not exist at all was a bit daunting at first. However, on second thought, I realized that the sensorimotor contingency approach fits perfectly with the integration idea we promote in our opinion paper.
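For the computationally minded, here is a toy version of that Box 1 idea, with all details invented for illustration: an agent never computes where the nose is in space; it simply keeps track of which arm posture co-occurs with the itch ending, and later reproduces that posture directly.

```python
# Toy sensorimotor contingency learner: count which posture co-occurs with
# the end of the itch during random exploration, then reuse that posture.
# Postures, the "relief" rule, and the counting scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

postures = ["arm_down", "arm_forward", "hand_at_nose", "hand_on_head"]
relieving_posture = "hand_at_nose"          # unknown to the learner
co_occurrence = {p: 0 for p in postures}

# Exploration phase: random movements, count which posture ends the itch.
for _ in range(200):
    tried = postures[rng.integers(len(postures))]
    if tried == relieving_posture:          # itch relieved in this posture
        co_occurrence[tried] += 1

# Later, when the nose itches again, re-create the learned sensorimotor state
# directly -- no explicit computation of where the nose is in space.
learned = max(co_occurrence, key=co_occurrence.get)
print("when the nose itches, go to:", learned)
```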

The paper emerges from the cooperation of several projects within the Research Collaborative “Multi-site communication in the brain”.