Avniel Singh Ghuman, PhD, joined the Department of Neurological Surgery in September of 2011.
Dr. Ghuman received his undergraduate education in math and physics at The Johns Hopkins University. He completed his doctoral education in biophysics at Harvard University. He completed his postdoctoral training at the National Institute of Mental Health prior to joining the faculty at the University of Pittsburgh.
As director of Magnetoencephalography (MEG) Research, one of Dr. Ghuman’s primary roles is to facilitate, develop, and advance clinical and basic neuroscience research using MEG. To this end, he is helping to develop new research applications for MEG in collaboration with researchers throughout the community. MEG noninvasively records the magnetic fields generated by electrophysiological brain activity, providing millisecond temporal resolution together with good spatial localization of neural events.
Dr. Ghuman’s research focuses on how the brain turns what falls upon our eyes into the rich, meaningful experience we perceive in the world around us. Specifically, his lab studies the neural basis of the visual perception of objects, faces, words, and social and affective visual images. His lab examines the spatiotemporal dynamics of how neural activity reflects the stages of information processing and how information flows through the brain networks responsible for visual perception.
To accomplish these research goals, Dr. Ghuman’s lab records electrophysiological brain activity from humans using both invasive (intracranial EEG; iEEG — in collaboration with Jorge Gonzalez-Martinez, MD, PhD) and non-invasive (magnetoencephalography; MEG) measures. In conjunction with these millisecond-scale recordings, they use multivariate machine learning methods, network analysis, and advanced signal processing techniques to assess the information processing dynamics reflected in brain activity. Additionally, his lab uses direct neural stimulation to examine how disrupting and modulating brain activity alters visual perception. This combination of modalities and analysis techniques allows Dr. Ghuman to ask fine-grained questions about neural information processing and information flow, both at the scale of local brain regions and across broadly distributed networks.
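To give a flavor of what time-resolved multivariate decoding looks like in practice, here is a minimal, illustrative sketch (not the lab's actual pipeline, and using simulated rather than real MEG data): a classifier is trained at each time point on the spatial pattern across sensors, so that decoding accuracy traces out when stimulus information is present in the recording. All dimensions and signal parameters below are hypothetical.

```python
# Illustrative sketch: time-resolved multivariate decoding of
# simulated MEG-like data (trials x channels x time points).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 32, 50  # hypothetical dimensions

# Two stimulus categories; a category-dependent spatial pattern is
# injected only in a mid-trial window to mimic an evoked response.
labels = np.repeat([0, 1], n_trials // 2)
data = rng.normal(size=(n_trials, n_channels, n_times))
pattern = rng.normal(size=n_channels)
data[labels == 1, :, 20:35] += 0.8 * pattern[:, None]

# At each time point, decode the stimulus category from the pattern
# across channels using cross-validated logistic regression.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    data[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])

# Accuracy hovers near chance (0.5) outside the signal window and
# rises above chance inside it, localizing in time when category
# information is present in the signal.
print(accuracy[:15].mean(), accuracy[22:33].mean())
```

The same logic extends to real iEEG or MEG epochs: the time course of above-chance decoding indicates when, to the millisecond scale of the recording, a brain region carries information about the stimulus.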
More information on Dr. Ghuman's research can be found on the Laboratory of Cognitive Neurodynamics webpage.
Specialized Areas of Interest
Professional Organization Membership
Education & Training
- BA, Math and Physics, The Johns Hopkins University, 1998
- PhD, Biophysics, Harvard University, 2007
Honors & Awards
- Young Investigator Award, NARSAD, 2012
- Award for Innovative New Scientists, National Institute of Mental Health, 2015
Morett LM, O’Hearn K, Luna B, Ghuman AS. Altered Gesture and Speech Production in Autism Spectrum Disorders Detract from In-Person Communication Quality. Journal of Autism and Developmental Disorders 46(3):998-1012, 2016.
Alhourani A, McDowell MM, Randazzo M, Wozny T, Kondylis E, Lipski W, Beck S, Karp JF, Ghuman AS, Richardson RM. Network Effects of Deep Brain Stimulation. Journal of Neurophysiology 114(4):2105-2117, 2015.
Ghuman AS, Brunet NM, Li Y, Konecky RO, Pyles JA, Walls SA, Destefino V, Wang W, Richardson RM. Dynamic Encoding of Face Information in the Human Fusiform Gyrus. Nature Communications 5:5672, 2014.
Hwang K, Ghuman AS, Manoach DS, Jones S, Luna B. Cortical Neurodynamics of Inhibitory Control. Journal of Neuroscience 34(29):9551-9561, 2014.
Ghuman AS, McDaniel JR, Martin A. A Wavelet-Based Method for Measuring the Oscillatory Dynamics of Resting-State Functional Connectivity in MEG. NeuroImage 56(1):69-77, 2011.
Kveraga K, Ghuman AS, Kassam KS, Aminoff EM, Hämäläinen MS, Chaumon M, Bar M. Neural Synchronization in the Contextual Association Network. Proceedings of the National Academy of Sciences 108(8):3389-3394, 2011.
Ghuman AS, McDaniel JR, Martin A. Face Adaptation Without A Face. Current Biology 20(1):32-36, 2010.
Ghuman AS, Bar M, Dobbins I, Schnyer D. The Effects of Priming on Frontal-Temporal Communication. Proceedings of the National Academy of Sciences 105(24):8405-8409, 2008.
A complete list of Dr. Ghuman's publications can be reviewed through the National Library of Medicine's publication database.
Over the past year, Dr. Ghuman’s Cognitive Neurodynamics Lab made a number of new discoveries and advanced several ongoing projects. Using intracranial recordings in epilepsy patients, the lab developed a novel, dynamic model of how information is represented in the brain, including mapping an extended brain circuit used to read words and showing how brain states influence what we see.
The map of category-selectivity in human ventral temporal cortex (VTC) provides organizational constraints to models of object recognition. One important principle is lateral-medial response biases to stimuli that are typically viewed in the center or periphery of the visual field. However, little is known about the relative temporal dynamics and location of regions that respond preferentially to stimulus classes that are centrally viewed, like the face- and word-processing networks. Here, word- and face-selective regions within VTC were mapped using intracranial recordings from 36 patients. Partially overlapping, but also anatomically dissociable, patches of face- and word-selectivity were found in VTC. In addition to canonical word-selective regions along the left posterior occipitotemporal sulcus, selectivity was also located medial and anterior to face-selective regions on the fusiform gyrus, both at the group level and within individual male and female subjects. These regions were replicated using 7 Tesla fMRI in healthy subjects. Left hemisphere word-selective regions preceded right hemisphere responses by 125 ms, potentially reflecting the left hemisphere bias for language; no corresponding hemispheric difference was seen in face-selective response latency. Word-selective regions along the posterior fusiform responded first, with selectivity then spreading medially and laterally, and finally anteriorly. Face-selective responses were first seen in posterior fusiform regions bilaterally, then proceeded anteriorly from there. For both words and faces, the relative delay between regions was longer than would be predicted by purely feedforward models of visual processing. The distinct time courses of responses across these regions, and between hemispheres, suggest that a complex and dynamic functional circuit supports face and word perception. These results and findings were published in the Journal of Neuroscience.
Perception reflects not only sensory inputs, but also the endogenous state of the brain when these inputs arrive. Prior studies show that endogenous neural states influence stimulus processing through non-specific, global mechanisms, such as spontaneous fluctuations of arousal. It is unclear whether endogenous activity also influences circuit- and stimulus-specific processing and behavior. Here we use intracranial recordings from 30 pre-surgical epilepsy patients to show that patterns of endogenous activity are related to the strength of trial-by-trial neural tuning in different visual category-selective neural circuits. The same aspects of the endogenous activity that relate to tuning in a particular neural circuit also correlate with behavioral reaction times, but only for stimuli from the category that circuit is selective for. These results suggest that endogenous activity can modulate neural tuning and influence behavior in a circuit- and stimulus-specific manner, reflecting a potential mechanism by which endogenous neural states facilitate and bias perception. These results and findings were published in Nature Communications.
Ability to Recognize Faces Grows With Age, Study Finds
January 5, 2017
The Wall Street Journal
Epilepsy Research Leads To New Insights Into How Our Brains Read
August 16, 2016
WESA Radio Pittsburgh Tech Report
Study shows how words are represented in the brain
July 20, 2016
Decoding Reading in the Brain
July 19, 2016
Cognitive Neuroscience Society
“Reading” The Reading Mind
July 8, 2016