Lian Arzbecker

Postdoctoral researcher


Curriculum vitae


arzbecker.1 (at) osu (dot) edu | lianarzb (at) buffalo (dot) edu


Motor Speech Disorders Lab

Communicative Disorders and Sciences, University at Buffalo



Brain-to-brain synchronization in language processing and comprehension


Conference


Geoff D. Green, Ewa Jacewicz, Robert A. Fox, Lian Arzbecker
9th Annual Buckeye Language Network Symposium, The Ohio State University, Virtual, 2022 Apr

APA
Green, G. D., Jacewicz, E., Fox, R. A., & Arzbecker, L. (2022, April). Brain-to-brain synchronization in language processing and comprehension [Conference presentation]. 9th Annual Buckeye Language Network Symposium, The Ohio State University, Virtual.


Chicago/Turabian
Green, Geoff D., Ewa Jacewicz, Robert A. Fox, and Lian Arzbecker. “Brain-to-Brain Synchronization in Language Processing and Comprehension.” Presented at the 9th Annual Buckeye Language Network Symposium, The Ohio State University, Virtual, April 2022.


MLA
Green, Geoff D., et al. Brain-to-Brain Synchronization in Language Processing and Comprehension. 9th Annual Buckeye Language Network Symposium, 2022.


BibTeX

@conference{green2022a,
  title = {Brain-to-brain synchronization in language processing and comprehension},
  author = {Green, Geoff D. and Jacewicz, Ewa and Fox, Robert A. and Arzbecker, Lian},
  year = {2022},
  month = apr,
  organization = {The Ohio State University},
  publisher = {9th Annual Buckeye Language Network Symposium},
  address = {Virtual}
}

Abstract

Evidence from the neuroscience of verbal communication shows that when two people share information (one speaks and the other listens), their brain activities work in synchrony (Silbert et al., 2014). This speaker-listener neural coupling occurs only during communicative success (when the listener understands the story), and the brain-to-brain synchrony (B-Bsync) is lost when the listener fails to understand the speaker’s message. We propose that B-Bsync affords a more sensitive assessment of language processing and comprehension than is currently possible with behavioral measures. Our current efforts focus on quantifying B-Bsync using functional near-infrared spectroscopy (fNIRS) and an fNIRS-based hyperscanning approach. We analyze patterns of neural activity separately in the speaker and in the listener and statistically assess the correspondence in their brain activation (the degree of synchronized activation of cortical sites and temporal symmetry). Tracking the coupling over time is possible because fNIRS captures changes in neural activity at high temporal resolution (e.g., 6-25 ms, compared with 2-3 s for fMRI), which facilitates measures of connectivity between active brain regions over time. We hypothesize that B-Bsync reflects the degree of listening effort. Specifically, strong B-Bsync reflects optimal communication, indicating effortless listening and comprehension. More effortful listening (e.g., when the speaker and listener do not share the same nonnative accent) will decrease B-Bsync, and B-Bsync will cease completely when the listener does not understand the speaker (who may be speaking a different language).
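The temporal correspondence described above, in which the listener's activation follows the speaker's with some delay, can be sketched as a lagged correlation between two fNIRS time series. This is a minimal illustration only: the function name, sampling rate, and lag window are assumptions for the sketch, not details from the study.

```python
import numpy as np

def lagged_coupling(speaker, listener, fs, max_lag_s=3.0):
    """Pearson correlation between a speaker and a listener fNIRS
    time series at a range of lags (positive lag = listener trails
    the speaker). Returns (lags in seconds, correlation at each lag).
    Hypothetical sketch; not the study's actual pipeline."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(lags.size)
    for i, lag in enumerate(lags):
        if lag > 0:        # listener trails the speaker
            a, b = speaker[:-lag], listener[lag:]
        elif lag < 0:      # listener leads the speaker
            a, b = speaker[-lag:], listener[:lag]
        else:
            a, b = speaker, listener
        r[i] = np.corrcoef(a, b)[0, 1]
    return lags / fs, r
```

The lag at which the correlation peaks estimates the speaker-to-listener delay, and the peak's height gives a simple index of coupling strength; a flat, near-zero profile would correspond to the loss of B-Bsync when comprehension fails.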