Science Research

| Peer-Reviewed |

Differences between the Actual Scene Model and the Image Model for Computation of Visual Depth Information of Early Vision

Received: 01 November 2014    Accepted: 11 November 2014    Published: 20 November 2014


Abstract

In this paper, we introduce two viewing modes for early visual depth perception, "Scene Mode" and "Picture Mode", which depend on the dimensionality of the object being viewed. The essential difference between these two modes of depth perception has remained unclear. We discuss the basic methods of introducing a three-dimensional Cartesian system into a plane to express the depth information of an image, estimate the loss of depth information this approach causes, and analyze the important role that size constancy and the vanishing point play in providing depth information in the two modes. We study how, in scene mode, the retina and visual cortex separate by neural computation the plenoptic (all-optical) function that serves as the input representation of vision; how information about the position and angle of light beams in the light field is extracted; and how the output representation of visual depth perception is then determined. In the absence of stereoscopic cues such as texture, gradient, shade, shadow, color, occlusion, and binocular disparity, we compare the main differences in visual depth perception between scene mode and picture mode using a viewed cube and its line drawing, which represent the two modes respectively.
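The projective geometry the abstract alludes to can be sketched in a few lines. The following is a minimal illustration, not the authors' computational model: a pinhole projection of points onto an image plane, showing (a) the loss of depth information, since distinct 3D points on one line of sight map to the same 2D point, and (b) the vanishing point toward which cube edges receding in depth converge. The function name and all coordinates are hypothetical.

```python
def project(p, f=1.0):
    """Perspective (pinhole) projection of a 3D point p = (x, y, z),
    camera at the origin looking down +z, image plane at distance f."""
    x, y, z = p
    return (f * x / z, f * y / z)

# (a) Two points at different depths on the same line of sight
# produce the same image point: depth along the ray is lost.
a = project((1.0, 1.0, 2.0))
b = project((2.0, 2.0, 4.0))
assert a == b

# (b) Points on a cube edge parallel to the z-axis converge toward
# the vanishing point (0, 0) as depth z grows.
edge = [project((1.0, 1.0, z)) for z in (2.0, 8.0, 32.0)]
print(edge)  # successive points approach (0, 0)
```

Recovering the lost third coordinate from such a projection, using cues like size constancy and the vanishing point, is exactly the problem the two viewing modes address in different ways.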

DOI 10.11648/j.sr.20140205.20
Published in Science Research (Volume 2, Issue 5, October 2014)
Page(s) 135-149
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2014. Published by Science Publishing Group

Keywords

Vision, Perceptual Mode, Vanishing Point, Dimensions, Size Constancy

References
[1] M Sonka, V Hlavac, and R Boyle. Image processing, analysis, and machine vision, Second edition, Thomson Learning and PT Press, 310-321, 1999
[2] D Marr. Vision: A computational investigation into the human representation and processing of visual information. New York: Freeman, 1982
[3] K A Stevens. The vision of David Marr. Perception, 41, 1061- 1072, 2012
[4] H A Mallot. Computational vision: information processing in perception and visual behavior, The MIT Press, Cambridge, London, England, 23-46, 2000
[5] J P Frisby and J V Stone. Seeing, The computational approach to biological vision, second edition, The MIT Press London, England, 539-551, 2010
[6] M Hershenson. Visual space perception: A primer, Cambridge, MA: MIT Press, 78-91, 2000
[7] J Hérault. Vision: Images, Signals and Neural Networks, World Scientific Publishing Co. Pte. Ltd, 2010
[8] S E Palmer. Vision Science, MIT Press, 186-193, 1999
[9] M Carandini, J B Demb, V Mante, et al. Do we know what the early visual system does? J. Neurosci., 25(46), 10577-10597, 2005
[10] J Kornmeier and M Bach. Object perception: When our brain is impressed but we do not notice it, Journal of Vision, 9(1):7, 1-10, 2009
[11] J Kornmeier and M Bach. The Necker cube—an ambiguous figure disambiguated in early visual processing, Vision Research, 45, 955–960, 2005
[12] J J Koenderink. Solid shape, MIT press, 1990
[13] J J Koenderink, A J van Doorn, and A M L Kappers. Surface perception in pictures. Perception & Psychophysics, 52(5), 487-496, 1992
[14] A J van Doorn, J J Koenderink, and J Wagemans. Light fields and shape from shading, International Journal for Numerical Methods in Engineering, 2011
[15] M S Banks, R T Held, and A R Girshick. Perception of 3-D layout in stereo displays. Information Display, 25(1), 12-16, 2009
[16] E A Cooper, E A Piazza, and M S Banks. The perceptual basis of common photographic practice. Journal of Vision, 12(5), 8:1-14, 2012
[17] M Pharr and G Humphreys. Physically Based Rendering: From Theory to Implementation, 2nd ed. Morgan Kaufmann, 2010.
[18] D Vishwanath, A R Girshick, and M S Banks. Why pictures look right when viewed from the wrong place. Nature Neuroscience 8, 10, 1401–1410. 2005.
[19] F Steinicke, G Bruder, and S Kuhl. Realistic perspective projections for virtual objects and environments. ACM Transactions on Graphics 30, 5, 112: 1–10. 2011.
[20] S J Watt, K Akeley, M O Ernst, and M S Banks. Focus cues affect perceived depth. Journal of Vision, 5(10), 834-862, 2005
[21] J Yu, L Mcmillan, and P Sturm. Multi-perspective modeling, rendering and imaging. Computer Graphics Forum 29, 1, 227–246. 2010.
[22] M S Banks, J C A Read, R S Allison, and S J Watt. Stereoscopy and the Human Visual System. SMPTE Motion Imaging Journal, 26-43, 2012
[23] M Levoy and P Hanrahan. Light field rendering. In Proc. SIGGRAPH, 31-42, 1996
[24] J J Koenderink. The structure of images, Biological cybernetics 50 (5), 363-370, 1984
[25] M P Keating. Geometric, Physical, and Visual Optics, Butterworth-Heinemann, 2001
[26] D Regan. Human perception of objects, Sinauer Associates, Inc. Sunderland, Mass. 116-120, 2000
[27] A J Jackson and I L Bailey. Visual acuity, Optometry in practice, 5, 53-70, 2004
[28] S H Schwartz. Geometrical and Visual Optics. McGraw-Hill Medical, 2013
[29] J J Koenderink and A J van Doorn. Representation of local geometry in the visual system, Biological cybernetics 55 (6), 367-375, 1987
[30] O Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint, MIT Press, 1993
[31] O Faugeras, Q T Luong, and T Papadopoulo. The Geometry of Multiple Images: The Laws That Govern the Formation of Multiple Images of a Scene and Some of Their Applications, MIT Press, 2001
[32] S Osher and N Paragios, eds. Geometric Level Set Methods in Imaging, Vision, and Graphics, Springer-Verlag New York Inc., 2003
[33] M K Bennett. Affine and Projective Geometry, John Wiley & Sons Inc 1995
[34] J A Shufelt. Performance evaluation and analysis of vanishing point detection techniques, IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(3), 282-288, 1999
[35] A Almansa, A Desolneux, and S Vamech. Vanishing point detection without any a priori information, IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(4), 502-507, 2003
[36] M Clerc and S Mallat. Texture gradient equation for recovering shape from texture, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4), 536-549, 2002
[37] E Adelson and J Bergen, The plenoptic function and the elements of early vision, In Computational Models of Visual Processing. MIT Press, Cambridge, MA, 385-394, 1991
[38] E H Adelson and John Y A Wang. Single Lens Stereo with a Plenoptic Camera, IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, 2, 1992
[39] S Zeki. A vision of the Brain, Oxford: Blackwell Scientific Pub., 1993
[40] M S Livingstone and D H Hubel. Anatomy and physiology of a color system in the primate visual cortex, Journal of Neuroscience, 4, 309-356, 1984
[41] M S Livingstone and D H Hubel. Psychophysical evidence for separate channels for the perception of form, color, movement, and depth, J. Neurosci., 7, 3416-3468, 1987
[42] L McMillan and G Bishop. Plenoptic modeling: An image- based rendering system, Computer Graphics, 39-46, August 1995
[43] O Schreer, P Kauff, and T Sikora, eds. 3D video communication: Algorithms, concepts and real-time systems in human centered communication, John Wiley & Sons, Inc., New York, 110-150, 2005
[44] M C Potter, B Wyble, C E Hagmann, and E S McCourt. Detecting meaning in RSVP at 13 ms per picture. Attention, Perception, & Psychophysics, 12, 2013
[45] J G Nicholls, A R Martin, B G Wallace, and P A Fuchs. From Neuron to Brain, Fourth edition, Sinauer Associates, 2001
[46] Zhao Songnian, Zou Qi, Jin Zhen,Yao Guozheng, Yao Li. A computational model of early vision based on synchronized response and inner product operation, Neurocomputing, 73, 3229-3241, 2010
[47] Zhao Songnian, Zou Qi, Jin Zhen, Yao Guozheng, Yao Li. Neural computation of visual imaging based on Kronecker product in the primary visual cortex, BMC Neuroscience, 11: 43,1-14, 2010
[48] D H Hubel and T N Wiesel. Ferrier lecture, Functional architecture of macaque monkey visual cortex. Proc R Soc Lond B Biol Sci, 198: 1-59, 1977
[49] D H Hubel. Exploration of the primary visual cortex: 1955-1978. Nature, 299: 515-524, 1982
[50] D A Forsyth and J Ponce. Computer vision: A modern approach (2nd edition), Prentice-Hall, 2002
[51] S Cunningham. Computer graphics: Programming in OpenGL for Visual communication, Prentice-Hall, 2007
[52] L G Shapiro and G C Stockman, Computer vision, Prentice-Hall, 2001
Author Information
  • LAPC, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China

  • Institute of Geophysics of SSB, Beijing 100029, China

  • Jetsen Beijing Century Technology Co., Ltd., Beijing 100191, China

  • China Rescue and Salvage of Ministry of the People’s Republic of China, Beijing 100736, China

  • LAPC, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China

Cite This Article
  • APA Style

    Zhao Songnian, Yu Yunxin, Zhao Yuping, Jin Xi, Cheng Wenjun. (2014). Differences between the Actual Scene Model and the Image Model for Computation of Visual Depth Information of Early Vision. Science Research, 2(5), 135-149. https://doi.org/10.11648/j.sr.20140205.20


    ACS Style

    Zhao Songnian; Yu Yunxin; Zhao Yuping; Jin Xi; Cheng Wenjun. Differences between the Actual Scene Model and the Image Model for Computation of Visual Depth Information of Early Vision. Sci. Res. 2014, 2(5), 135-149. doi: 10.11648/j.sr.20140205.20


    AMA Style

    Zhao Songnian, Yu Yunxin, Zhao Yuping, Jin Xi, Cheng Wenjun. Differences between the Actual Scene Model and the Image Model for Computation of Visual Depth Information of Early Vision. Sci Res. 2014;2(5):135-149. doi: 10.11648/j.sr.20140205.20


  • @article{10.11648/j.sr.20140205.20,
      author = {Zhao Songnian and Yu Yunxin and Zhao Yuping and Jin Xi and Cheng Wenjun},
      title = {Differences between the Actual Scene Model and the Image Model for Computation of Visual Depth Information of Early Vision},
      journal = {Science Research},
      volume = {2},
      number = {5},
      pages = {135-149},
      doi = {10.11648/j.sr.20140205.20},
      url = {https://doi.org/10.11648/j.sr.20140205.20},
      eprint = {https://download.sciencepg.com/pdf/10.11648.j.sr.20140205.20},
      abstract = {In this paper, we introduced two viewing modes, "Scene Mode" and "Picture Mode", for early visual depth perception depending on the dimensions of the object being viewed. The essential difference between these two modes of visual depth perception is still unclear. We discuss the basic methods of introducing a three-dimensional Cartesian system into a plane to express the depth information of an image, estimate the loss of depth information caused by this approach, and provide an analysis of the important role of providing depth information based on size constancy and vanishing point in the two viewing modes. We studied the problem of how the retina and visual cortex separate the plenoptic (all-optical) function, which is the input representation of vision, by neural computing in scene mode. We also studied the problem of how to extract information about the position and angle of light beams in the light field, and then determined the output representation of the visual depth perception. In the absence of any stereoscopic cues, such as texture, gradient, shade, shadow, color, occlusion, and binocular disparity, we compare the main differences of visual depth perception between scene mode and picture mode using a cube being viewed and its line drawing, which respectively represent the two modes.},
     year = {2014}
    }
    


  • TY  - JOUR
    T1  - Differences between the Actual Scene Model and the Image Model for Computation of Visual Depth Information of Early Vision
    AU  - Zhao Songnian
    AU  - Yu Yunxin
    AU  - Zhao Yuping
    AU  - Jin Xi
    AU  - Cheng Wenjun
    Y1  - 2014/11/20
    PY  - 2014
    N1  - https://doi.org/10.11648/j.sr.20140205.20
    DO  - 10.11648/j.sr.20140205.20
    T2  - Science Research
    JF  - Science Research
    JO  - Science Research
    SP  - 135
    EP  - 149
    PB  - Science Publishing Group
    SN  - 2329-0927
    UR  - https://doi.org/10.11648/j.sr.20140205.20
    AB  - In this paper, we introduced two viewing modes, "Scene Mode" and "Picture Mode", for early visual depth perception depending on the dimensions of the object being viewed. The essential difference between these two modes of visual depth perception is still unclear. We discuss the basic methods of introducing a three-dimensional Cartesian system into a plane to express the depth information of an image, estimate the loss of depth information caused by this approach, and provide an analysis of the important role of providing depth information based on size constancy and vanishing point in the two viewing modes. We studied the problem of how the retina and visual cortex separate the plenoptic (all-optical) function, which is the input representation of vision, by neural computing in scene mode. We also studied the problem of how to extract information about the position and angle of light beams in the light field, and then determined the output representation of the visual depth perception. In the absence of any stereoscopic cues, such as texture, gradient, shade, shadow, color, occlusion, and binocular disparity, we compare the main differences of visual depth perception between scene mode and picture mode using a cube being viewed and its line drawing, which respectively represent the two modes.
    VL  - 2
    IS  - 5
    ER  - 

