Avniel Singh Ghuman, PhD

  • Associate Professor
  • Director, Cognitive Neurodynamics Lab

Avniel Singh Ghuman, PhD, joined the Department of Neurological Surgery in September of 2011. He received his undergraduate education in math and physics at The Johns Hopkins University and completed his doctoral education in biophysics at Harvard University. He completed his postdoctoral training at the National Institute of Mental Health prior to joining the faculty at the University of Pittsburgh.

As director of MEG (Magnetoencephalography) Research, one of Dr. Ghuman’s primary roles is to facilitate, develop, and advance clinical and basic neuroscience research using MEG. To this end, he is helping to develop new research applications for MEG in collaboration with researchers throughout the community. MEG noninvasively records the magnetic fields generated by electrophysiological brain activity, providing millisecond temporal resolution together with good spatial resolution of neural events, making it among the most powerful functional neuroimaging techniques.

Dr. Ghuman’s research focuses on how our brain turns what falls upon our eyes into the rich, meaningful experience that we perceive in the world around us. Specifically, his lab studies the neural basis of the visual perception of objects, faces, words, and social and affective visual images in the real world. His lab examines the spatiotemporal dynamics of how neural activity reflects the stages of information processing and how information flows through brain networks responsible for visual perception.

To accomplish these research goals, Dr. Ghuman’s lab records electrophysiological brain activity from humans using both invasive (intracranial EEG; iEEG — in collaboration with Taylor Abel, MD, and Jorge González-Martínez, MD, PhD) and non-invasive (magnetoencephalography; MEG) measures. In conjunction with these millisecond-scale recordings, they use multivariate machine learning methods, network analysis, and advanced signal processing techniques to assess the information processing dynamics reflected in brain activity. Additionally, his lab uses direct neural stimulation to examine how disrupting and modulating brain activity alters visual perception. This combination of modalities and analysis techniques allows Dr. Ghuman to ask fine-grained questions about neural information processing and information flow at both the scale of local brain regions and broadly distributed networks. Dr. Ghuman's research can be found on the Laboratory of Cognitive Neurodynamics webpage.

Dr. Ghuman's publications can be reviewed through the National Library of Medicine's publication database.

Specialized Areas of Interest

The dynamics of brain interactions; visual cognition; magnetoencephalography (MEG); intracranial EEG (iEEG); face recognition; reading; social and affective perception.

Professional Organization Membership

Cognitive Neuroscience Society
Organization for Human Brain Mapping
Society for Neuroscience
Vision Sciences Society

Education & Training

  • BA, Math and Physics, The Johns Hopkins University, 1998
  • PhD, Biophysics, Harvard University, 2007

Honors & Awards

  • Young Investigator Award, NARSAD, 2012
  • Award for Innovative New Scientists, National Institute of Mental Health, 2015

Research Activities

Over the past year, Dr. Ghuman’s lab has made a number of new and ongoing discoveries. Using intracranial recordings in epilepsy patients, the lab has illuminated how brain networks behave during real-world behavior and how the brain codes for people’s faces during natural real-world conversations.

During the course of a day, our brains must accomplish a wide range of tasks and demonstrate a remarkable amount of flexibility despite their anatomic stability. How do ecologically valid brain states balance the tension between these demands of flexibility and stability? To answer this question, Dr. Ghuman’s team explored how the human functional connectome changes using continuous intracranial electroencephalography recordings from twenty epilepsy patients as they went about their days over the course of a week: eating, talking with visitors, reading, and so on. By tracking how the coherence between all pairs of the 100-120 electrodes implanted in each patient changed in each five-second time window across the entire week, the team used unsupervised autoregressive methods to identify the prevalent dynamic patterns of connectivity.
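The windowed pairwise-coherence analysis described above can be sketched as follows. This is a minimal illustration using synthetic data: the sampling rate, electrode count, recording length, and frequency band are assumptions for the sketch, not details of the lab's actual pipeline.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                       # assumed sampling rate (Hz)
win_s = 5                       # five-second analysis windows
n_elec, n_samp = 8, fs * 60     # toy recording: 8 electrodes, 1 minute
rng = np.random.default_rng(0)
ieeg = rng.standard_normal((n_elec, n_samp))  # stand-in for iEEG traces

win_len = fs * win_s
n_win = n_samp // win_len
pairs = [(i, j) for i in range(n_elec) for j in range(i + 1, n_elec)]

# Band-averaged coherence for every electrode pair in every 5 s window
coh = []
for w in range(n_win):
    seg = ieeg[:, w * win_len:(w + 1) * win_len]
    row = []
    for i, j in pairs:
        f, cxy = coherence(seg[i], seg[j], fs=fs, nperseg=fs)
        band = (f >= 13) & (f <= 20)   # lower-beta band, as an example
        row.append(cxy[band].mean())
    coh.append(row)
coh = np.array(coh)   # shape: (n_windows, n_pairs)
```

The resulting window-by-pair coherence matrix is the kind of time-resolved connectivity representation on which unsupervised dynamic-pattern methods can then operate.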

Two major patterns emerged. First, brain networks had a stable baseline state to which the brain would consistently return after individual subnetworks took excursions of various types throughout the day. This stable state was similar across all subjects, consisting of elevated lower-beta coherence and decreased theta and gamma coherence. Second, there was a discrete set of probable ways to leave this baseline state. Different subnetworks of the brain were not activated or deactivated randomly with respect to one another: they formed a specific set of patterns governing which networks could be activated together and over which frequencies. These patterns were well preserved from day to day: if one network’s beta activation was linked to another network’s gamma deactivation on one day, the same generally held true on other days. Additionally, the length of each excursion (i.e., the autocorrelation of each dynamic pattern) was consistent from day to day.

These patterns show that, after perturbations, the brain’s functional networks are pulled back toward a stable baseline dynamic range, which may represent an optimal homeostatic state for the functional connectome. Excursions from this state occur frequently, presumably to support states such as sleep or heightened activity, but each excursion is followed by a return to homeostasis. The day-to-day consistency of the largest excursions from homeostasis may indicate an underlying anatomic or energetic limitation that forces departures from homeostasis to follow characteristic trajectories. Taken together, these results suggest a homeostasis-like mechanism by which the functional connectome achieves stability while allowing for neurocognitive flexibility, through characteristic perturbations and returns to this homeostatic state.

A fundamental goal of neuroscience is to understand how the brain processes information from the real world. While much has been learned from controlled laboratory experiments, laboratory experiments cannot capture the full richness of real-world environments. This is particularly problematic in the context of social perception, where passive viewing of static, unfamiliar, and isolated faces presented briefly on a screen bears little resemblance to rich and dynamic real-world social environments. In this study, we collected intracranial recordings from epilepsy patient-participants who wore eye-tracking glasses to capture everything they saw on a moment-to-moment basis during hours of natural, unscripted interactions with friends, family, and experimenters. We used computer vision, machine learning, and artificial intelligence to address the core challenge of real-world neuroscience: how to model the uncontrolled variability of the natural world.

Computer vision models translated each face the person saw into a 227-dimensional model representing distinct pose, shape, texture, and expression information. A bidirectional canonical correlation analysis (CCA) model was used to reconstruct the faces (including face motion) the participant saw at each fixation from neural activity alone, and to reconstruct the dynamics of brain activity from the viewed face alone (d’ effect size approximately 1.8 and correlation coefficients exceeding 0.4). Reconstructions were accurate when comparing across different identities (d’ approximately 2.47) and when comparing multiple fixations on faces of the same identity (d’ approximately 1.02). Neurally, information about these faces was coded in occipital, temporal, frontal, and parietal regions involved in social visual processing, motion perception, and face processing.
Individual canonical components of the model enable a more granular breakdown, examining which specific face features in the pose, shape, texture, and expression subspaces are coded by which aspects of neural activity. This approach will be used to assess the representational structure of the neural “face space” for real-world face perception and to determine how this space is modulated by natural social context.

These results demonstrate that studying the brain during real-world social behavior is not only feasible, but also can be done with high fidelity to learn important details about how the brain codes for the natural social environment.

Media Appearances

Neuroscientists listened in on people’s brains for a week. They found order and chaos.
MIT Technology Review
February 7, 2023

Ability to Recognize Faces Grows With Age, Study Finds
The Wall Street Journal
January 5, 2017

Epilepsy Research Leads To New Insights Into How Our Brains Read
WESA Radio Pittsburgh Tech Report
August 16, 2016

Study shows how words are represented in the brain
July 20, 2016

Decoding Reading in the Brain
Cognitive Neuroscience Society
July 19, 2016

“Reading” The Reading Mind
July 8, 2016