"What" versus "where" in the audiovisual domain: an fMRI study.

Sestieri, Carlo; Di Matteo, Rosalia; Ferretti, Antonio; Del Gratta, Cosimo; Caulo, Massimo; Tartaro, Armando; Romani, Gian Luca
2006-01-01

Abstract

Similar “what/where” functional segregations have been proposed for both visual and auditory cortical processing. In this fMRI study, we investigated whether the same segregation exists in the crossmodal domain, when visual and auditory stimuli have to be matched in order to perform either a recognition or a localization task. Recent neuroimaging research has highlighted the contribution of different heteromodal cortical regions during various forms of crossmodal binding. Interestingly, crossmodal effects during audiovisual speech and object recognition have been found in the superior temporal sulcus, whereas crossmodal effects during the execution of spatial tasks have been found over the intraparietal sulcus, suggesting an underlying “what/where” segregation. To directly compare the specific involvement of these two heteromodal regions, we scanned ten male right-handed subjects during the execution of two crossmodal matching tasks. Participants were simultaneously presented with a picture and an environmental sound, coming from either the same or the opposite hemifield and representing either the same or a different object. The two tasks required a manual YES/NO response about the spatial or the semantic match of the presented stimuli, respectively. Both group and individual-subject analyses were performed. Task-related differences in BOLD response were observed in the right intraparietal sulcus and in the left superior temporal sulcus, providing direct confirmation of the “what/where” functional segregation in the crossmodal audiovisual domain.
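
To make the factorial trial structure described in the abstract concrete, the following minimal Python sketch enumerates the design implied there (task: localization vs. recognition; spatial congruence: same vs. opposite hemifield; semantic congruence: same vs. different object) and the correct YES/NO response for each cell. The condition names and the helper function are illustrative assumptions, not taken from the paper.

    # Illustrative sketch only: enumerate the 2 x 2 x 2 design implied by the
    # abstract and the correct YES/NO answer per trial. All identifiers here
    # are assumptions for illustration, not the authors' actual materials.
    from itertools import product

    TASKS = ("where", "what")                        # localization vs. recognition task
    SPATIAL = ("same_hemifield", "opposite_hemifield")
    SEMANTIC = ("same_object", "different_object")

    def expected_response(task, spatial, semantic):
        """Correct answer: 'where' asks about the spatial match, 'what' about the semantic match."""
        if task == "where":
            return "YES" if spatial == "same_hemifield" else "NO"
        return "YES" if semantic == "same_object" else "NO"

    for task, spatial, semantic in product(TASKS, SPATIAL, SEMANTIC):
        print(f"{task:5s} | {spatial:18s} | {semantic:16s} -> "
              f"{expected_response(task, spatial, semantic)}")

Running the sketch prints the eight trial types and shows that the same audiovisual pair can require opposite responses depending on the task, which is the manipulation contrasted in the BOLD analysis.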
Files for this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11564/117649

Citations
  • PubMed Central: not available
  • Scopus: 39
  • Web of Science: 37