Avniel Singh Ghuman, PhD, joined the Department of Neurological Surgery in September of 2011. He received his undergraduate education in math and physics at The Johns Hopkins University and completed his doctoral education in biophysics at Harvard University. He completed his postdoctoral training at the National Institute of Mental Health prior to joining the faculty at the University of Pittsburgh.
As director of MEG (Magnetoencephalography) Research, one of Dr. Ghuman’s primary roles is to facilitate, develop, and advance clinical and basic neuroscience research using MEG. To this end, he is helping to develop new research applications for MEG in collaboration with researchers throughout the community. MEG is a powerful functional neuroimaging technique that noninvasively records the magnetic fields generated by electrophysiological brain activity, providing millisecond temporal resolution and good spatial localization of neural events.
Dr. Ghuman’s research focuses on how our brain turns what falls upon our eyes into the rich, meaningful experience that we perceive in the world around us. Specifically, his lab studies the neural basis of the visual perception of objects, faces, words, and social and affective visual images in the real world. His lab examines the spatiotemporal dynamics of how neural activity reflects the stages of information processing and how information flows through brain networks responsible for visual perception.
To accomplish these research goals, Dr. Ghuman’s lab records electrophysiological brain activity from humans using both invasive (intracranial EEG; iEEG — in collaboration with Taylor Abel, MD, and Jorge González-Martínez, MD, PhD) and non-invasive (magnetoencephalography; MEG) measures. In conjunction with these millisecond-scale recordings, they use multivariate machine learning methods, network analysis, and advanced signal processing techniques to assess the information processing dynamics reflected in brain activity. Additionally, his lab uses direct neural stimulation to examine how disrupting and modulating brain activity alters visual perception. This combination of modalities and analysis techniques allows Dr. Ghuman to ask fine-grained questions about neural information processing and information flow at both the scale of local brain regions and broadly distributed networks. Dr. Ghuman's research can be found on the Laboratory of Cognitive Neurodynamics webpage.
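As an illustrative sketch of how multivariate decoding can be applied to millisecond-scale recordings like these (synthetic data and a simple nearest-centroid classifier stand in for the lab's actual recordings and models; all names and numbers are hypothetical), a classifier can be trained at each time point to read out stimulus category from multichannel activity:

```python
# Illustrative sketch only: time-resolved multivariate decoding on synthetic
# multichannel data. A nearest-centroid classifier stands in for the lab's
# actual machine learning pipeline; all values here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 32, 50

labels = rng.integers(0, 2, n_trials)                  # two stimulus categories
data = rng.standard_normal((n_trials, n_channels, n_times))
# Inject a category-dependent response in a mid-trial window (samples 20-34)
data[labels == 1, :8, 20:35] += 1.2

def decode_accuracy(X, y, t):
    """Leave-one-out nearest-centroid decoding at a single time point."""
    correct = 0
    for i in range(len(y)):
        train = np.ones(len(y), dtype=bool)
        train[i] = False                               # hold out trial i
        c0 = X[train & (y == 0), :, t].mean(axis=0)    # class centroids
        c1 = X[train & (y == 1), :, t].mean(axis=0)
        pred = int(np.linalg.norm(X[i, :, t] - c1) < np.linalg.norm(X[i, :, t] - c0))
        correct += int(pred == y[i])
    return correct / len(y)

acc = np.array([decode_accuracy(data, labels, t) for t in range(n_times)])
print(f"peak decoding accuracy {acc.max():.2f} at sample {acc.argmax()}")
```

Plotting `acc` over time would show near-chance performance outside the signal window and a rise within it; the latency of that rise is the kind of millisecond-scale information-processing dynamic such recordings make accessible.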
Dr. Ghuman's publications can be reviewed through the National Library of Medicine's publication database.
Education & Training
- BA, Math and Physics, The Johns Hopkins University, 1998
- PhD, Biophysics, Harvard University, 2007
Honors & Awards
- Young Investigator Award, NARSAD, 2012
- Award for Innovative New Scientists, National Institute of Mental Health, 2015
Research Activities
Social perception unfolds in the real world, as we actively interact with the people around us. Dr. Ghuman investigated the neural basis of real-world face perception using multi-electrode intracranial recordings during unscripted natural interactions with friends, family, and others. Computational models reconstructed videos of the faces participants viewed from brain activity alone and highlighted a critical role for the social-vision pathway in natural face perception. The brain was more sharply tuned to subtle expressions than to strong ones (a “Weber’s law” for facial expressions), which was confirmed in controlled psychophysical experiments. This study leveraged neural recordings during natural social interactions to model neuro-perceptual relationships in an uncontrolled real-world environment, revealed neural coding rules for facial expressions, and demonstrated the perceptual implications of those rules using controlled experimentation.
Critical neurocognitive processes, such as performing natural activities and fluctuations of arousal, take place over minutes to days in real-world environments. Dr. Ghuman harnessed 3-12 days of continuous multi-electrode intracranial recordings in twenty humans who engaged in natural activities, including interacting with friends and family, watching TV, and sleeping, with simultaneous neural and video recording. Applying deep learning and dynamical systems analysis to these data revealed that brain network dynamics predicted neurocognitive phenomena such as circadian rhythm, heart rate, and multiple aspects of behavior (socializing, watching a screen, etc.). Network activity formed a “punctuated equilibrium” of stable states and transitory bursts between them. These bursts coincided with shifts in behavior, such as switching from using a phone to talking to a friend. During these transitions, the brain rapidly explored several interim states in a highly chaotic and disorganized fashion. Despite this chaos, large-scale dynamics were guided by a consistent set of anatomical networks that slowly modulated their activity in accordance with broad neurophysiological and conscious states. These large-scale dynamics were anchored by a homeostatic-like central attractor involving activation of the default mode network. Together, these findings suggest that the brain balances flexibility and stability during real-world behavior through rapid, chaotic transitions that coalesce into neurocognitive states fluctuating slowly around a stabilizing central equilibrium.
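A toy sketch of the state-identification idea (hypothetical; simulated activity and a minimal k-means take the place of the study's deep learning and dynamical systems analyses): cluster time points of multichannel activity into discrete states, then count transitions to expose long dwell times punctuated by brief switches:

```python
# Hypothetical sketch, not the study's analysis: identify discrete network
# states in simulated multichannel activity by clustering time points, then
# count state transitions to reveal a punctuated-equilibrium-like structure.
import numpy as np

rng = np.random.default_rng(1)
n_times, n_channels = 600, 16

# Simulate activity that dwells in two states, switching only occasionally
state_means = np.stack([np.zeros(n_channels), np.full(n_channels, 2.0)])
true_state = np.repeat([0, 1, 0, 1], n_times // 4)
activity = state_means[true_state] + rng.standard_normal((n_times, n_channels))

def two_state_kmeans(X, iters=10):
    """Minimal two-cluster k-means over time points; returns state labels."""
    # Seed the centers with the lowest- and highest-activity time points so
    # they start in different states (a convenience for this toy example)
    centers = X[[X.mean(axis=1).argmin(), X.mean(axis=1).argmax()]]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(2)])
    return labels

labels = two_state_kmeans(activity)
transitions = int(np.sum(labels[1:] != labels[:-1]))
print(f"{transitions} transitions; mean dwell {n_times / (transitions + 1):.0f} samples")
```

In the recordings themselves, state labels like these could then be related to behavior and physiology (e.g., which state the network occupies while socializing versus sleeping); the clustering here is purely illustrative.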
Media Appearances
Neuroscientists listened in on people’s brains for a week. They found order and chaos.
MIT Technology Review
February 7, 2023
Ability to Recognize Faces Grows With Age, Study Finds
The Wall Street Journal
January 5, 2017
Epilepsy Research Leads To New Insights Into How Our Brains Read
WESA Radio Pittsburgh Tech Report
August 16, 2016
Study shows how words are represented in the brain
UPI
July 20, 2016
Decoding Reading in the Brain
Cognitive Neuroscience Society
July 19, 2016
“Reading” The Reading Mind
ScienceBeta
July 8, 2016