When a moving object cuts in front of a moving observer at a 90° angle, the observer correctly perceives that the object is traveling along a perpendicular path, just as if the object were viewed from a stationary vantage point. Although the observer’s own (self-)motion affects the object’s pattern of motion on the retina, the visual system is able to factor out the influence of self-motion and recover the world-relative motion of the object (Matsumiya and Ando, 2009). This is achieved by using information in global optic flow (Rushton and Warren, 2005; Warren and Rushton, 2009; Fajen and Matthis, 2013) and other sensory arrays (Dupin and Wexler, 2013; Fajen et al., 2013; Dokka et al., 2015) to estimate and deduct the component of the object’s local retinal motion that is due to self-motion. However, this account (known as “flow parsing”) is qualitative and does not shed light on mechanisms in the visual system that recover object motion during self-motion. We present a simple computational account that makes explicit possible mechanisms in visual cortex by which self-motion signals in the medial superior temporal area interact with object motion signals in the middle temporal area to transform object motion into a world-relative reference frame. The model (1) relies on two mechanisms (MST-MT feedback and disinhibition of opponent motion signals in MT) to explain existing data, (2) clarifies how pathways for self-motion and object-motion perception interact, and (3) unifies the existing flow parsing hypothesis with established neurophysiological mechanisms.
Layton OW & Fajen BR (2016) A Neural Model of MST and MT Explains Perceived Object Motion During Self-Motion. Journal of Neuroscience. 36(31).
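The flow-parsing subtraction described in the abstract can be sketched in a few lines: estimate the component of retinal motion attributable to self-motion from the global flow and deduct it from the object's local retinal motion. This is an illustrative toy under a pinhole camera with unit focal length and pure translation, not the published model; all names and values are hypothetical.

```python
import numpy as np

def self_motion_flow(points, translation):
    """Image-plane flow at 3D points caused by pure observer translation
    (pinhole model, focal length 1, no rotation).
    points: (N, 3) array of [X, Y, Z]; translation: (3,) array [Tx, Ty, Tz]."""
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    Tx, Ty, Tz = translation
    x, y = X / Z, Y / Z                      # image coordinates
    u = (-Tx + x * Tz) / Z                   # horizontal flow component
    v = (-Ty + y * Tz) / Z                   # vertical flow component
    return np.stack([u, v], axis=1)

def parse_object_motion(retinal_motion, object_point, translation):
    """Recover world-relative object motion by subtracting the estimated
    self-motion component from the object's retinal motion."""
    self_component = self_motion_flow(object_point[None, :], translation)[0]
    return retinal_motion - self_component

# Observer translates forward; an object at (1, 0, 5) moves leftward in the world.
T = np.array([0.0, 0.0, 1.0])
obj = np.array([1.0, 0.0, 5.0])
world_flow = np.array([-0.1, 0.0])                       # true world-relative image motion
retinal = world_flow + self_motion_flow(obj[None, :], T)[0]  # what the retina sees
recovered = parse_object_motion(retinal, obj, T)
print(np.allclose(recovered, world_flow))                # True
```

The point of the sketch is only that subtraction in a common reference frame suffices once the self-motion component is known; the abstract's contribution is a neural account of how MST-MT interactions could implement that estimate-and-deduct step.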
Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning that is similar to the other models, except that the model includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model’s heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading.
Layton OW & Fajen BR (2016) Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow. PLOS Computational Biology. 12(6).
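The soft winner-take-all idea can be illustrated with a toy population of heading-tuned units: each unit leakily integrates its input under a gain that favors units near the current population peak, so the estimate tracks the time history rather than the instantaneous flow. This is a hedged illustration of the general mechanism, not the published model's equations; all parameters are made up.

```python
import numpy as np

def soft_wta_step(r, inp, dt=0.1, tau=2.0, beta=3.0):
    """One Euler step: leaky integration with a soft winner-take-all gain
    that enhances units near the current population peak and suppresses the rest."""
    gain = np.exp(beta * (r - r.max()))          # soft competition via a peak-relative gain
    dr = (-r + inp * gain) / tau
    return np.maximum(r + dt * dr, 0.0)

headings = np.linspace(-40, 40, 81)                            # candidate headings (deg)
base = np.exp(-headings ** 2 / (2 * 8.0 ** 2))                 # flow evidence for 0 deg
bump = 1.5 * np.exp(-(headings - 20.0) ** 2 / (2 * 4.0 ** 2))  # transient object signal

r = np.zeros_like(base)
for t in range(200):
    inp = base + bump if 80 <= t < 120 else base  # object crosses mid-trial, then leaves
    r = soft_wta_step(r, inp)

print(headings[np.argmax(base + bump)])  # instantaneous evidence peaks at 20 deg
print(headings[np.argmax(r)])            # recurrent estimate stays at 0 deg
```

A purely feedforward readout of the perturbed frame would report the 20° bump; the recurrent state, having accumulated evidence for 0° beforehand, suppresses the transient, which is the stabilizing behavior the abstract attributes to MSTd competition.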
The focus of expansion (FoE) specifies the heading direction of an observer during self-motion, and experiments show that humans can accurately perceive their heading from optic flow. However, when the environment contains an independently moving object, heading judgments may be biased. When objects approach the observer in depth, the heading bias may be due to discrepant optic flow within the contours of the object that radiates from a secondary FoE (object-based discrepancy) or to motion contrast at the borders of the object (border-based discrepancy). In Experiments 1 and 2, we manipulated the object’s path angle and distance from the observer to test whether the heading bias induced by moving objects is entirely due to object-based discrepancies. The results showed consistent bias even at large path angles and when the object moved far in depth, which is difficult to reconcile with the influence of discrepant optic flow within the object. In Experiment 3, we found strong evidence that the misperception of heading can also result from a specific border-based discrepancy (“pseudo FoE”) that emerges from the relative motion between the object and background at the trailing edge of the object. Taken together, the results from the present study support the idea that when moving objects are present, heading perception is biased in some conditions by discrepant optic flow within the contours of the object and in other conditions by motion contrast at the border (the pseudo FoE). Center-weighted spatial pooling mechanisms in MSTd may account for both effects.
Layton OW & Fajen BR (2016) Sources of bias in the perception of heading in the presence of moving objects: Object-based and border-based discrepancies. Journal of Vision. 16(1).
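The center-weighted pooling mechanism mentioned in the final sentence can be sketched as a Gaussian-weighted vote over image locations: discrepant local evidence (such as a pseudo FoE at an object's trailing edge) biases the pooled heading more when it falls near the center of the visual field. This is an illustrative toy, not the published mechanism; the function name and all values are hypothetical.

```python
import numpy as np

def pooled_heading(positions, local_votes, sigma=10.0):
    """Center-weighted pooling: each image location votes for the heading its
    local flow signals, weighted by a Gaussian falling off with eccentricity.
    positions: (N, 2) image locations (deg); local_votes: (N,) heading votes (deg)."""
    w = np.exp(-np.sum(positions ** 2, axis=1) / (2 * sigma ** 2))
    return np.sum(w * local_votes) / np.sum(w)

# Background flow at eccentric locations votes for the true heading (0 deg);
# a pseudo FoE at the object's trailing edge votes for a shifted heading (+10 deg).
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
background = 15.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
votes = np.zeros(8)

near = pooled_heading(np.vstack([background, [[2.0, 0.0]]]), np.append(votes, 10.0))
far = pooled_heading(np.vstack([background, [[25.0, 0.0]]]), np.append(votes, 10.0))
print(near > far)   # True: the same discrepancy biases heading more near the center
```

Under this weighting, both effects in the abstract follow from where the discrepant motion lands in the visual field rather than from what generates it.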
Many forms of locomotion rely on the ability to accurately perceive one’s direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer’s future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying the portion of the earlier part of the trial leading up to the last frame that was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer’s path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models.
Layton OW & Fajen BR (2016) The Temporal Dynamics of Heading Perception in the Presence of Moving Objects. Journal of Neurophysiology. 115(1).
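The history dependence found in Experiment 3 can be caricatured with a running average of per-frame heading evidence: when the presented stimulus includes the unbiased frames before the object crosses the path, the late bias is diluted. This is a deliberately simple illustration of temporal evidence accumulation, not the MSTd time-course account itself; the frame counts and the 8° bias are invented.

```python
import numpy as np

def accumulate_heading(evidence):
    """Running average of per-frame heading signals (deg): earlier frames retain
    weight, so the estimate reflects the temporal evolution of the flow rather
    than just the instantaneous field at the end of the trial."""
    est = 0.0
    for n, e in enumerate(evidence, start=1):
        est += (e - est) / n
    return est

frames = np.zeros(60)     # unbiased heading evidence before the object crosses
frames[45:] = 8.0         # object crosses the path late, biasing the signal by 8 deg

full = accumulate_heading(frames)        # whole trial: early frames dilute the bias
late = accumulate_heading(frames[40:])   # truncated trial: the late bias dominates
print(full, late)                        # 2.0 6.0 (approximately)
```

The qualitative pattern (longer stimulus, smaller bias) matches the behavioral result; a model reading out only the final flow field would produce the same bias at both durations.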
Humans accurately judge their direction of heading when translating in a rigid environment, unless independently moving objects (IMOs) cross the observer’s focus of expansion (FoE). Studies show that an IMO on a laterally moving path that maintains a fixed distance with respect to the observer (non-approaching; C. S. Royden & E. C. Hildreth, 1996) biases human heading estimates differently from an IMO on a lateral path that gets closer to the observer (approaching; W. H. Warren & J. A. Saunders, 1995). C. S. Royden (2002) argued that differential motion operators in primate brain area MT explained both data sets, concluding that differential motion was critical to human heading estimation. However, neurophysiological studies show that motion pooling cells, but not differential motion cells, in MT project to heading-sensitive cells in MST (V. K. Berezovskii & R. T. Born, 2000). It is difficult to reconcile differential motion heading models with these neurophysiological data. We generate motion sequences that mimic those viewed by human subjects. Model MT pools over V1; units in model MST perform distance-weighted template matching and compete in a recurrent heading representation layer. Our model produces heading biases of the same direction and magnitude as humans through a peak shift in model MSTd without using differential motion operators, maintaining consistency with known primate neurophysiology.
Layton OW, Mingolla E, & Browning NA (2012) A Motion-Pooling Model of Visually-Guided Navigation Explains Human Behavior in the Presence of Independently Moving Objects. Journal of Vision.
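The distance-weighted template matching performed by the model MST units can be sketched as follows: each candidate heading stores a radial flow template, and the winning candidate is the one whose template best matches the observed flow, with nearer (faster-moving) points weighted more. This is a minimal illustration of the matching step only, omitting the V1/MT stages and the recurrent competition; all function names and parameters are hypothetical.

```python
import numpy as np

def radial_template(positions, foe):
    """Unit flow vectors radiating from a candidate focus of expansion."""
    d = positions - foe
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def match_heading(positions, flow, candidates, depths):
    """Return the candidate FoE whose radial template best matches the observed
    flow directions, weighting nearer points (smaller depth) more heavily."""
    w = 1.0 / depths                                           # distance weighting
    unit_flow = flow / np.linalg.norm(flow, axis=1, keepdims=True)
    scores = [np.sum(w * np.sum(radial_template(positions, c) * unit_flow, axis=1))
              for c in candidates]
    return candidates[int(np.argmax(scores))]

# Synthetic expansion flow for an observer heading toward (0, 0).
rng = np.random.default_rng(0)
positions = rng.uniform(-20, 20, size=(200, 2))                # image locations (deg)
depths = rng.uniform(1, 10, size=200)                          # scene depths
flow = positions / depths[:, None]                             # radial flow, speed ~ 1/depth

candidates = [np.array([x, 0.0]) for x in (-10.0, -5.0, 0.0, 5.0, 10.0)]
best = match_heading(positions, flow, candidates, depths)
print(best)                                                    # [0. 0.]
```

Because the score uses only pooled agreement between observed and template flow directions, no differential motion operators are needed, which is the constraint from the Berezovskii and Born projection data that motivates the model.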