Computational photography

Computational photography provides many new capabilities. One example combines HDR (high-dynamic-range) imaging with panoramic image stitching, optimally combining information from multiple differently exposed pictures of overlapping subject matter.[1][2][3][4][5]
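
As a rough illustration of the compositing idea, the Python sketch below merges differently exposed pictures of the same scene into a single radiance estimate. It assumes the images are already registered to a common frame (the stitching/alignment step is omitted), a linear camera response, and known relative exposures; the hat-shaped weighting that favors well-exposed pixels is one common choice, not the specific method of the cited papers.

```python
# Minimal sketch: merge differently exposed, pre-registered images into one
# high-dynamic-range estimate. Assumes a linear camera response and known
# exposure times; real pipelines also estimate the response curve and the
# inter-image homographies (the "stitching" step), both omitted here.
import numpy as np

def merge_exposures(images, exposure_times):
    """images: list of float arrays in [0, 1], all registered to one frame.
    exposure_times: relative exposure of each image (e.g. seconds).
    Returns a per-pixel estimate of scene radiance."""
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat weighting: trust mid-tones, down-weight clipped shadows/highlights.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        numerator += w * (img / t)   # each image votes for radiance = value / exposure
        denominator += w
    return numerator / np.maximum(denominator, 1e-8)

# Usage: three simulated exposures of the same (already aligned) scene.
radiance = np.random.rand(4, 4)
shots = [np.clip(radiance * t, 0.0, 1.0) for t in (0.25, 1.0, 4.0)]
recovered = merge_exposures(shots, [0.25, 1.0, 4.0])
```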

Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. It can improve the capabilities of a camera, introduce features that were not possible at all with film-based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas,[6] high-dynamic-range images, and light-field cameras. Light-field cameras use novel optical elements to capture three-dimensional scene information, which can then be used to produce 3D images, enhanced depth of field, and selective defocusing (or "post focus"). Enhanced depth of field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.
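
To make "post focus" concrete, the following sketch refocuses a light field by shift-and-add: each sub-aperture view is translated in proportion to its offset within the aperture, and the shifted views are averaged. The (U, V, H, W) array layout, the refocus function, and the alpha parameter are illustrative assumptions for this sketch, not any particular camera's API or file format.

```python
# Minimal sketch of synthetic refocusing ("post focus") from a light field,
# assumed here to be a grid of sub-aperture views L[u, v] (a hypothetical
# layout; real light-field files need decoding first).
import numpy as np

def refocus(light_field, alpha):
    """light_field: array of shape (U, V, H, W), sub-aperture views.
    alpha: per-view shift in pixels per unit of aperture offset; varying
    alpha moves the synthetic focal plane."""
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))  # integer shifts keep the sketch
            dx = int(round(alpha * (v - cv)))  # simple; production code interpolates
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Usage: refocus a toy 5x5 grid of 64x64 views at two synthetic depths.
lf = np.random.rand(5, 5, 64, 64)
near, far = refocus(lf, alpha=1.5), refocus(lf, alpha=-1.5)
```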

The definition of computational photography has evolved to cover a number of subject areas in computer graphics, computer vision, and applied optics. These areas are given below, organized according to a taxonomy proposed by Shree K. Nayar. Within each area is a list of techniques, and for each technique one or two representative papers or books are cited. Deliberately omitted from the taxonomy are image-processing techniques (see also digital image processing) applied to traditionally captured images in order to produce better images. Examples of such techniques are image scaling, dynamic-range compression (i.e. tone mapping), color management, image completion (a.k.a. inpainting or hole filling), image compression, digital watermarking, and artistic image effects. Also omitted are techniques that produce range data, volume data, 3D models, 4D light fields, 4D, 6D, or 8D BRDFs, or other high-dimensional image-based representations. Epsilon photography is a sub-field of computational photography.

  1. ^ Steve Mann, "Compositing Multiple Pictures of the Same Scene", Proceedings of the 46th Annual Imaging Science & Technology Conference, Cambridge, Massachusetts, May 9–14, 1993.
  2. ^ S. Mann, C. Manders, and J. Fung, "The Lightspace Change Constraint Equation (LCCE) with practical application to estimation of the projectivity+gain transformation between multiple pictures of the same subject matter", IEEE International Conference on Acoustics, Speech, and Signal Processing, April 6–10, 2003, Vol. 3, pp. III-481–484.
  3. ^ S. Mann, "Joint parameter estimation in both domain and range of functions in same orbit of the projective-Wyckoff group", IEEE International Conference on Image Processing, Vol. 3, pp. 193–196, September 16–19, 1996.
  4. ^ Frank M. Candocia, "Jointly registering images in domain and range by piecewise linear comparametric analysis", IEEE Transactions on Image Processing 12(4): 409–419 (2003).
  5. ^ Frank M. Candocia, "Simultaneous homographic and comparametric alignment of multiple exposure-adjusted pictures of the same scene", IEEE Transactions on Image Processing 12(12): 1485–1494 (2003).
  6. ^ Steve Mann and R. W. Picard, "Virtual bellows: constructing high-quality stills from video", Proceedings of the IEEE First International Conference on Image Processing, Austin, Texas, November 13–16, 1994.
