There exists a fundamental distance between observer and world that lies at the core of human experience. The location of the mind within a physical body simultaneously causes this separation from the outside and offers our best present solution to overcoming it: sensory perception. Modern cognitive science has popularized the view that cross-modal integration, learning, and inference enable the brain to internally represent outside objects and agents, along with their enduring and dynamic properties. As such, the constructive nature of mind is at work in the gap between the internal and external.
I dwell in that gap, for I am fascinated with distance, a concept whose meaning in my life has evolved from a physical quantity, the number of miles separating childhood homes in Indonesia, Australia, and Texas, to a characterization of the relationship of self to world at every level of experience. In all, I seek to lessen that distance: I intend to build an academic career around characterizing constructive inference in perception while working to narrow the gap between instructors and students, leaders and communities, and ourselves and an understanding of the brain. My research uses computation, mathematics, and, more recently, human neuroimaging to probe and model the mechanisms by which the brain constructs a rich internal representation of the external world from noisy and limited data.
Current work: Intuitive Physics
From infancy, humans have a remarkable ability to infer the physical structure of the world: how objects rest on and support each other, how much force would be required to move them, and how they behave when they fall, roll, or collide. These rapid physical inferences are essential for understanding and acting on the world, but little is known about the computational and neural mechanisms that underlie them. My research applies a synergistic combination of computational modeling and neuroimaging methods to characterize the neural representations and computations that enable us to make physical inferences about the world, predict what will happen next, and plan actions.
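One influential computational idea in this area is that the brain predicts physical outcomes by running noisy mental simulations of plausible world states. The snippet below is only a toy caricature of that idea; the block-on-edge scenario, function names, and noise parameters are all illustrative and not drawn from my own models.

```python
import random

def will_fall(offset, block_width=1.0):
    """Deterministic physics: a block overhanging a table edge falls
    when its center of mass lies past the support point."""
    return offset > block_width / 2

def p_fall(observed_offset, noise_sd=0.1, n_samples=1000):
    """Monte Carlo estimate of fall probability under perceptual noise:
    sample plausible true offsets around the noisy observation and
    simulate each one, then report the fraction that fall."""
    falls = 0
    for _ in range(n_samples):
        sampled_offset = random.gauss(observed_offset, noise_sd)
        if will_fall(sampled_offset):
            falls += 1
    return falls / n_samples
```

Averaging deterministic simulations over perceptually plausible states turns a hard yes/no physics question into the kind of graded judgment people actually report.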
A Bayesian framework for structure in chord-color synesthesia
Synesthetic color associations are characteristically variable across the population. We see the same in chord-color synesthesia: there is little consistency in the color assigned to a particular chord by different synesthetes. For example, John may associate blue with C major and green with A major, while Andrew associates yellow with C major and pink with A major. However, my work in chord-color synesthesia indicates that while the colors associated with chords differ, the distances between those colors in color space are significantly consistent across synesthetes. Returning to our example, the distance between John’s blue and green would be similar to the distance between Andrew’s yellow and pink, although the colors themselves differ. In this manner, synesthetes carry a common “footprint” of relationships between chords, manifest in their color associations, that is “stamped” onto color space. While the orientation and scaling of this footprint vary among synesthetes, its basic structure is consistent. This structure simultaneously captures relationships based on chord root note (A, C, etc.) and chord quality (major, minor, etc.). Using a Bayesian structure discovery algorithm, I show that this chord architecture resembles the Circle of Fifths, an ordering of chords commonly used in musical education. This work resulted in a paper that is being prepared for submission. An open question is whether the relationships between chords seen in synesthetic color associations are an artifact of their neural representation, projected onto otherwise unordered musical sounds, or an independent musical property, perhaps describable in terms of relationships between fundamental frequencies.
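The core of the relational-consistency analysis can be sketched in a few lines: compute each synesthete's pairwise chord-color distances and compare the resulting profiles. The chord-to-color mappings below are invented for illustration, and the real analysis involves many more chords, a perceptual color space, and statistical testing rather than a single correlation.

```python
import math
from itertools import combinations

# Hypothetical chord -> color mappings (CIELAB-like coordinates) for two
# synesthetes: the colors themselves disagree, but their relational
# structure can still be compared.
john   = {"C": (60, 10, -40), "A": (55, -35, 30), "G": (70, 0, 60)}
andrew = {"C": (80, 5, 70),   "A": (65, 50, -10), "G": (50, 40, 30)}

def distance_profile(mapping):
    """Pairwise distances between one synesthete's chord colors,
    in a fixed chord order so profiles are comparable."""
    chords = sorted(mapping)
    return [math.dist(mapping[a], mapping[b])
            for a, b in combinations(chords, 2)]

def correlation(xs, ys):
    """Pearson correlation between two distance profiles: high values
    mean the 'footprint' of chord relationships is shared even though
    the colors differ."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Comparing profiles rather than raw colors is what lets the analysis discount each synesthete's arbitrary orientation of the footprint in color space.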
Also conducted with the Eagleman lab, my larger-scale analysis of individual variability in music-color synesthesia resulted in a poster at NCUR 2013, and is being developed into a paper.
Traveling waves in cortex
With Dr. Bard Ermentrout, I developed a mean-field model of traveling waves in cortex to investigate the roles of layers 2/3 and 5 in propagating activity. Recent experiments have characterized the spread of activity across and between the six layers of neocortex as a wave of neuronal activation, and have suggested that infragranular layer 5 is primarily responsible for initiating and maintaining widespread cortical activity, while the supragranular layers (2/3) play a subsidiary role. Our model captures the existence and stability of these waves, and we demonstrate numerically and analytically that small-amplitude traveling waves can be initiated in either cortical layer but require the contribution of layer 5. We consider the dynamics resulting from varying vertical and laminar connectivity parameters and find that the dominance of layer 5 can be attributed to its increased local connectivity and the stronger vertical projections originating in it. This work culminated in a presentation as student plenary speaker and a poster presentation at the Duquesne Undergraduate Research Symposium, as well as oral presentations at the CMU Center for the Neural Basis of Cognition and the University of Pittsburgh.
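A heavily simplified sketch of this kind of two-layer rate model: each layer has Gaussian lateral connectivity, vertical projections couple the layers, and a localized stimulus in layer 5 ignites a front that spreads outward. All parameter values here are illustrative placeholders, and our actual model and its analysis differ in many details.

```python
import math

def simulate(n=120, steps=200, dt=0.1, sigma=5.0,
             w23=0.8, w5=1.2, v_up=0.6, v_down=0.6, theta=0.3):
    """Euler-integrate a toy two-layer rate model on a 1D ring of cortex.
    u23 and u5 hold activity in layers 2/3 and 5; each layer has lateral
    Gaussian connectivity, and vertical projections couple the layers."""
    # normalized Gaussian lateral kernel, truncated at 3 sigma
    half = int(3 * sigma)
    kernel = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-half, half + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]

    def lateral(u):
        # circular convolution of layer activity with the lateral kernel
        return [sum(kernel[j] * u[(i + j - half) % n] for j in range(len(kernel)))
                for i in range(n)]

    def f(x):
        # steep sigmoidal firing-rate nonlinearity with threshold theta
        return 1.0 / (1.0 + math.exp(-(x - theta) / 0.05))

    u23 = [0.0] * n
    u5 = [0.0] * n
    for i in range(n // 2 - 3, n // 2 + 3):
        u5[i] = 1.0  # localized stimulus in layer 5 starts the wave

    for _ in range(steps):
        l23, l5 = lateral(u23), lateral(u5)
        u23 = [u + dt * (-u + f(w23 * a + v_up * b))
               for u, a, b in zip(u23, l23, l5)]
        u5 = [u + dt * (-u + f(w5 * b + v_down * a))
              for u, b, a in zip(u5, l5, l23)]
    return u23, u5
```

Making layer 5's lateral weight (w5) larger than layer 2/3's (w23) is the toy analogue of the connectivity asymmetry we found responsible for layer 5's dominance.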
Stable reinforcement learning via temporal competition between LTP and LTD traces
This work is currently in progress. I am investigating temporal competition between LTP and LTD eligibility traces as a mechanism for stable reinforcement learning. Neuronal systems involved in reinforcement learning must solve the temporal credit assignment problem: how is a stimulus associated with a reward that arrives only after a delay? Theoretical studies have postulated that neural activity underlying learning ‘tags’ synapses with an ‘eligibility trace’, and that the subsequent arrival of a reward converts the eligibility traces into actual modification of synaptic efficacies. While eligibility traces provide one simple solution to the temporal credit assignment problem, they alone do not constitute a stable learning rule, because nothing indicates when learning should cease. To attain stability, rules involving eligibility traces often assume that once the association is learned, further learning is prevented via an inhibition of the reward stimulus. Our lab has proposed a biophysically based theory of reinforcement-modulated synaptic plasticity that postulates the existence of two eligibility traces with different temporal profiles: one corresponding to the induction of LTP, and the other to the induction of LTD. The traces have different kinetics, and their difference in magnitude at the time of reward determines whether synaptic modification will correspond to LTP or LTD. Because of the difference in their decay rates, the LTP and LTD traces can exhibit temporal competition at the reward time and thus provide a mechanism for stable reinforcement learning without the need to inhibit reward. This learning rule is widely applicable. We have implemented it to successfully learn the timing of a reward stimulus, and I am currently implementing this self-stabilizing reinforcement learning rule in a feed-forward excitatory network capable of recognizing patterns of input stimuli.
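A toy caricature of the dual-trace idea (not our lab's actual biophysical model): both traces are tagged at the stimulus, the LTP trace decays faster than the LTD trace, and the LTD trace is assumed here to grow supralinearly with activity, so repeated pairings drive the weight to a stable fixed point where the two traces cancel, without any inhibition of the reward signal. All constants are illustrative.

```python
import math

# Illustrative constants: the LTP trace decays faster than the LTD trace,
# and the LTD trace amplitude grows supralinearly with activity.
TAU_LTP, TAU_LTD = 1.0, 3.0
A_LTP, A_LTD = 1.0, 2.0
ETA = 0.05  # learning rate

def rate(w):
    """Postsynaptic firing rate as a saturating function of the weight."""
    return 1.0 / (1.0 + math.exp(-(w - 1.0)))

def delta_w(w, t_reward):
    """Weight update when reward arrives t_reward after the stimulus.
    Both traces are tagged at the stimulus (t = 0); their difference in
    magnitude at the reward time sets the sign and size of the update."""
    r = rate(w)
    ltp = A_LTP * r * math.exp(-t_reward / TAU_LTP)
    ltd = A_LTD * r ** 2 * math.exp(-t_reward / TAU_LTD)
    return ETA * (ltp - ltd)

def train(w=0.0, t_reward=1.0, trials=2000):
    """Repeated stimulus-reward pairings: updates shrink toward zero as
    the traces come into balance, so learning stops on its own."""
    for _ in range(trials):
        w += delta_w(w, t_reward)
    return w
```

Because the faster LTP trace dominates shortly after the stimulus and the slower LTD trace dominates later, the same rule yields net potentiation for early rewards and net depression for late ones, which is the temporal competition at the heart of the mechanism.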