

The human visual system is remarkably good at decomposing local and global deformations in the flow of visual information into different perceptual layers, a critical ability for daily tasks such as driving through rain or fog, or catching that evasive trout. In these scenarios, changes in the visual information might be due to a deforming object, to deformations caused by a transparent medium such as structured glass or water, or to a combination of these. We used eidolons to investigate equivalence classes for perceptually similar transparent layers. How does the visual system use image deformations to make sense of layering due to transparent materials? We created a stimulus space of perceptual equivalents of a fiducial scene by systematically varying the local disarray parameters reach and grain. This disarray in eidolon space leads to distinct impressions of transparency: specifically, high reach and grain values vividly resemble water, whereas smaller grain values appear diffuse, like structured glass. We asked observers to adjust image deformations so that the objects in the scene looked as if they were seen (a) under water, (b) behind haze, or (c) behind structured glass. Observers adjusted the image deformation parameters by moving the mouse horizontally (grain) and vertically (reach). For two conditions, water and structured glass, we observed high intraobserver consistency: responses were not random. Responses yielded a concentrated equivalence class for water and structured glass.

From our own experience we know that complex visual scenes are perceptually split into multiple causal layers, so that we perceive the shape and material of objects and surfaces, and the prevailing illumination in a scene. Evidence for perceptual layer decomposition lies in our ability to interpret 3D shape, which requires distinguishing between the shading, shadows, and reflectance of a surface in static scenes (Zhou & Baker, 1996; Schofield, Rock, Sun, Jiang, & Georgeson, 2010; Dövencioğlu, Welchman, & Schofield, 2013; for a review, see Kingdom, 2011). For instance, we recognize shadows as a separate layer, and we do not trip over them while walking.

Moreover, the various physical causes of light refraction introduce many types of layering in visual information. Light entering a transparent medium changes its direction depending on the angle of incidence and the refractive index of the medium. For instance, when light waves pass through an air-to-water boundary, the direction of light changes depending on the refractive indices of these two media (Snell's law). The direction of the light, and hence the distortions in the image, depends on the shape of the water's surface (e.g., the amplitude of the waves on the water's surface; Figure 1a). Layering can also be caused by single scattering of light: on a clear day, we can see objects at large distances, so that the sharpness of contours remains the same but contrast might change (known as airlight, or Koschmieder's law). Or multiple light scattering can be caused by the molecules in the medium, as in a glass of diluted milk (Kubelka-Munk theory). In the case of multiple scattering, the medium might turn translucent, completely obscuring the object behind it (Koenderink & van Doorn, 2001). Atmospheric refraction includes phenomena such as rain, fog, or haze. For example, the third image in Figure 1a might as well be mistaken for a transparent steam layer. These distinct impressions of transparency each cause characteristic disarray in images, but only a small part of these many physical causes of layering has been studied in the scope of visual perception.
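To make the disarray parameters concrete, the following is a minimal sketch of this kind of local image disarray: a Gaussian-correlated random displacement field whose spatial scale plays the role of grain and whose amplitude plays the role of reach. The function name `disarray` and the normalization are illustrative assumptions, not the actual Eidolon Factory implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def disarray(image, reach, grain, seed=0):
    """Displace each pixel by a smooth random field (illustrative sketch).

    grain: spatial correlation of the field (Gaussian blur width, pixels).
    reach: RMS displacement amplitude, in pixels.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Independent Gaussian noise, smoothed to the requested grain.
    dx = gaussian_filter(rng.standard_normal((h, w)), grain)
    dy = gaussian_filter(rng.standard_normal((h, w)), grain)
    # Normalize so that `reach` is the RMS displacement in pixels.
    dx *= reach / (dx.std() + 1e-12)
    dy *= reach / (dy.std() + 1e-12)
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    # Resample the image at the displaced coordinates.
    return map_coordinates(image, [rows + dy, cols + dx],
                           order=1, mode='reflect')
```

On this sketch, large reach with large grain yields smooth, wave-like distortions (water-like), while small grain yields fine, incoherent jitter (closer to the diffuse look of structured glass).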

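The refraction geometry invoked above can be sketched in a few lines using the scalar form of Snell's law, n1 sin(t1) = n2 sin(t2); the function name and the convention of returning None beyond the critical angle are illustrative assumptions.

```python
import math

def refract_angle(theta_i, n1=1.0, n2=1.33):
    """Angle of refraction (radians) for light crossing an n1 -> n2
    boundary at incidence angle theta_i, via Snell's law:
    n1 * sin(theta_i) = n2 * sin(theta_t).
    Returns None past the critical angle (total internal reflection)."""
    s = n1 * math.sin(theta_i) / n2
    if abs(s) > 1.0:  # no transmitted ray
        return None
    return math.asin(s)
```

For the air-to-water boundary (n1 = 1.0, n2 = 1.33), a ray incident at 45 degrees refracts to about 32 degrees, bending toward the normal; a spatially varying surface normal (waves) therefore bends each ray differently, producing the image distortions discussed above.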