The human visual system produces depth perception even when the only useful information from the world comes from motion. We are developing a model of the early visual system that mimics this ability by recognizing the sudden appearance or disappearance of surface texture. When texture appears or disappears, the visual system can infer that it is being uncovered or covered by something placed in front of it.
When one object is positioned in front of another relative to the eye, the farther object can no longer be completely seen. The partially blocked object appears to sit behind the other; this is visual occlusion. Occlusion is a very reliable cue for depth: it produces depth perception even in displays with minimal information. For example, the pictures below show animated sequences of random noise. The only structural information in the display comes from the motion; if the animation stops, the display looks essentially empty. After watching either animation for a moment, however, most people see two surfaces with a boundary between them, and either the left or the right side appears to cover the other.

This phenomenon can be described in several ways. Some people say that one surface appears closer because the vertical edge belongs to that surface (border ownership). Others say that the velocity of the edge matches the velocity of the closer surface (common fate). Still others say that the texture of the farther surface appears and disappears as it moves relative to the vertical edge (texture accretion/deletion). We are interested in the brain mechanisms that give rise to the perception of depth in these displays and in how they relate to these concepts from vision science.
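For concreteness, a display of this kind can be generated in a few lines of code. The sketch below is an illustration, not the lab's actual stimulus: the sizes, frame count, and drift speed are arbitrary assumptions. It fills both halves with random binary noise and drifts the right half's texture leftward each frame, so texture is deleted at the central boundary and fresh texture is accreted at the right edge of the display.

```python
import numpy as np

rng = np.random.default_rng(0)

def kinetic_occlusion_frames(height=64, width=64, n_frames=8, shift=2):
    """Frames of a two-surface random-noise display.  The left half is
    static; the right half's texture drifts leftward by `shift` pixels
    per frame, so texture is deleted at the central boundary and new
    texture is accreted at the right edge of the display."""
    half = width // 2
    left = rng.integers(0, 2, size=(height, half))
    # A strip of moving texture wider than the visible window.
    strip = rng.integers(0, 2, size=(height, half + shift * n_frames))
    frames = []
    for t in range(n_frames):
        frame = np.empty((height, width), dtype=int)
        frame[:, :half] = left                                # stationary surface
        frame[:, half:] = strip[:, shift * t : shift * t + half]  # drifting surface
        frames.append(frame)
    return frames

frames = kinetic_occlusion_frames()
```

Paused on any single frame, the result is uniform noise; the boundary between the two surfaces exists only across frames, which is exactly the point of the display.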
Observers report seeing the moving surface on the right slide behind the stationary surface on the left. If the motion sequence is paused, the picture immediately looks like a single square, and the perception of depth and occlusion vanishes.
These displays give just one example of a larger set of visual properties that produce perceptions of depth ordering. More examples can be found HERE.
Our research focuses on the interaction of visual areas of the brain that are more sensitive to motion and location with areas that are more sensitive to form and shape. The displays here demonstrate that the motion of the two sides alone defines the vertical boundary and the form of the surfaces. This is also why camouflage works well only while the hidden object remains as still as possible (DEMO).
Our model accomplishes depth ordering first by creating a motion-defined boundary between the two regions and second by signaling texture accretion or deletion on either side of that boundary. Model areas concerned with visual motion detect regions where visual motion changes rapidly across the image. That information is fed back into model areas concerned with visual form as likely places where boundaries defined in time as well as space might exist. To detect texture accretion and deletion, the model uses both form and motion information to predict where a patch of texture will move given its current position and velocity. These predictions fail badly when texture suddenly appears or disappears at the vertical boundary of the displays, and that prediction failure provides a reliable signal for texture accretion or deletion.
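The prediction-failure idea can be illustrated with a toy pixel-level version (a simplification, not the lab's model, which operates on neural motion signals rather than raw pixels; the frame construction and `velocity_map` are assumptions for the demo). Each column of one frame is forward-predicted to where its velocity says it should land in the next frame, and the per-column mismatch is scored. The error concentrates exactly where texture was deleted at the boundary or accreted at the display edge.

```python
import numpy as np

rng = np.random.default_rng(1)

def accretion_deletion_signal(prev, curr, velocity_map):
    """Forward-predict each column of `prev` to its expected position in
    `curr` using its (integer, horizontal) velocity, then score the
    per-column mismatch rate.  Prediction fails where texture is
    covered (deleted) or uncovered (accreted)."""
    h, w = prev.shape
    predicted = np.zeros_like(prev)
    for x in range(w):
        nx = x + velocity_map[x]
        if 0 <= nx < w:
            predicted[:, nx] = prev[:, x]
    return (predicted != curr).mean(axis=0)  # mismatch rate per column

# Two frames: static left half, right half drifting 2 px leftward.
h, w, shift = 32, 16, 2
left = rng.integers(0, 2, size=(h, w // 2))
strip = rng.integers(0, 2, size=(h, w // 2 + shift))
prev = np.hstack([left, strip[:, :w // 2]])
curr = np.hstack([left, strip[:, shift:]])
velocity = np.array([0] * (w // 2) + [-shift] * (w // 2))

error = accretion_deletion_signal(prev, curr, velocity)
# error is ~0 wherever the prediction succeeds, and large just left of
# the central boundary (deletion) and at the right edge (accretion).
```

In the full model the same large-error locations are what get labeled as sites of texture accretion or deletion, and hence as occlusion boundaries.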
A camouflaged animal is extremely difficult to see while it remains still, but its shape and position become clear once it begins to move. This is because visual motion, the way our eyes' view of the world changes over time, provides reliable information that our brains use to perceive the structure of the world. This is especially true at the outline of an object, where it meets the occluded background behind it. The surface texture of a farther object appears and disappears as it is uncovered or covered by a nearer object, a phenomenon called texture accretion and deletion.
We have developed a model of how neurons in visual areas of the brain are connected so that they can detect the accretion and deletion of texture as it emerges from or moves behind an occluding edge. The model is inspired by a proposed neural mechanism that detects local motion using inhibitory synaptic connections. The local motion signals produced by this stage become the input to a similar stage that detects sudden changes in motion across an occluding edge. The video below shows how a dot of texture that moves from one position to another (bottom row) activates direction-selective cells (ellipses with arrows) and produces either a motion onset signal (top row, green dot) when it begins moving or a motion offset signal (top row, red dot) when it stops moving.
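The flavor of this two-stage scheme can be sketched in a toy one-dimensional form (a sketch, not the published model; the delayed excitation from the preferred-direction neighbour is one common augmentation of the classic Barlow–Levick veto and is an assumption here). A rightward-selective unit fires when its receptor is active, its left neighbour was active one delay step earlier, and no delayed inhibition arrives from its right (null-direction) neighbour. Onsets and offsets are then read off from changes in the spatially pooled motion signal.

```python
import numpy as np

def barlow_levick(stim, delay=1):
    """Rightward-selective responses for a binary [time, position] stimulus.
    Unit x fires at time t iff its receptor is active now, receptor x-1
    was active `delay` steps ago (delayed excitation, preferred direction),
    and receptor x+1 was not (delayed inhibition vetoes null-direction motion)."""
    T, X = stim.shape
    resp = np.zeros_like(stim)
    for t in range(delay, T):
        for x in range(1, X - 1):
            resp[t, x] = stim[t, x] * stim[t - delay, x - 1] * (1 - stim[t - delay, x + 1])
    return resp

def onset_offset(resp):
    """Motion onset/offset signals from the spatially pooled motion signal."""
    pooled = resp.max(axis=1)
    onset = pooled[1:] * (1 - pooled[:-1])    # motion now, none before
    offset = (1 - pooled[1:]) * pooled[:-1]   # motion before, none now
    return onset, offset

# A dot that sits still, moves rightward for three steps, then stops.
path = [2, 2, 2, 3, 4, 5, 5, 5, 5]
stim = np.zeros((len(path), 10), dtype=int)
for t, x in enumerate(path):
    stim[t, x] = 1

resp = barlow_levick(stim)
onset, offset = onset_offset(resp)
# onset fires once when the dot begins to move; offset fires once when it stops.
```

Note that the units never respond while the dot is stationary, so the pooled motion signal turns on and off exactly at the moments of movement onset and offset, mirroring the green and red dots in the video.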
Barnes, T., & Mingolla, E. (2011). An augmented Barlow–Levick model detects onsets and offsets of motion. Journal of Vision, 11(11), 762.