Compressive Sensing

History of Compressive Sensing (CS)

Compressive Sensing (CS) is an innovative process for acquiring and reconstructing a signal that is sparse or compressible. Around 2004, Emmanuel Candès, Terence Tao and David Donoho established important results on the minimum amount of data needed to reconstruct an image, even when that amount would be deemed insufficient by the Nyquist–Shannon criterion. Starting in 2004, Professors Rich Baraniuk and Kevin Kelly of Rice University pioneered the application of CS to the creation of actual cameras, developing the “single-pixel” camera technique with over $10M in government funding.

InView’s Application to the Real World

In recent years, InView has moved this exciting new technology into the commercial world by developing a line of SWIR cameras with significantly reduced costs. The company has accomplished this by replacing the camera’s expensive InGaAs Focal Plane Array with low-cost components and signal-processing algorithms based upon groundbreaking CS imaging techniques.

InView has an exclusive license to Rice University’s foundational Compressive Sensing IP and has filed additional patent applications. InView’s products are protected by US Patents 8,199,244, 7,271,747, and 7,511,643, with 20 additional applications pending.

The Basic Concept of Compressive Sensing Imaging

Compressive Sensing imaging performs sub-Nyquist sampling of sparse images, collecting only as much information at the sampling (sensing) stage as is required to construct an output image. An image is said to be sparse and compressible when many of its adjacent pixels share the same color and intensity, so it can be described with far fewer values than its total number of pixels. Natural scenes are inherently sparse.
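
As a rough illustration of sparsity and compressibility (a minimal sketch, not InView code; the piecewise-constant test image and the use of the discrete cosine transform are assumptions chosen for demonstration), the snippet below shows that an image with large regions of identical pixels concentrates almost all of its energy in a small fraction of transform coefficients:

    # Minimal sketch: a synthetic piecewise-constant image stands in for a
    # "sparse" natural scene. Because large blocks of adjacent pixels share
    # the same value, a 2-D discrete cosine transform packs almost all of the
    # image's energy into a small fraction of its coefficients.
    import numpy as np
    from scipy.fft import dctn

    image = np.zeros((64, 64))
    image[16:48, 16:48] = 1.0            # one bright square on a dark background

    coeffs = dctn(image, norm="ortho")   # 2-D DCT of the image
    energy = np.sort(np.abs(coeffs).ravel())[::-1] ** 2
    top = int(0.01 * energy.size)        # keep only the top 1% of coefficients
    print("energy captured by top 1% of coefficients:",
          energy[:top].sum() / energy.sum())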

The image being captured passes through a spatial light modulator, which allows the camera to measure the total light energy in a selected half of the image. This measurement step is repeated a number of times with a different modulation pattern each time, and the series of measurements is used by the camera to reconstruct the image. The steps below describe this measurement model, in which X is the image, Φ is the set of modulation patterns and Y is the resulting set of measurements (a code sketch follows the list):

  1. X represents the input image (with the two-dimensional image information reshaped into a one-dimensional vector, and with sparse portions of the image shown as white squares).
  2. Each row of Φ represents a unique and different spatial-light-modulation pattern.
  3. Multiplying one row of Φ by the vector X gives a single value, represented by one square of Y.
  4. A time-series of multiplications using a different row of Φ each time results in the vector Y.
  5. The reconstruction algorithm then uses Y and Φ to recover the image.
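
A minimal numerical sketch of this measurement model is shown below. It is illustrative only and is not InView's algorithm: the 0/1 pattern matrix Phi stands in for the spatial-light-modulator patterns, x is a synthetic sparse vector playing the role of the flattened image X, and the reconstruction step uses a basic iterative soft-thresholding (ISTA) loop, one of many possible sparse-recovery methods. The sizes n, m and k, the step size and the threshold lam are assumptions made for the demonstration.

    # Illustrative sketch only (not InView's algorithm): recover a sparse
    # vector x from m measurements y = Phi @ x, with m far smaller than n.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 8               # pixels, measurements, nonzeros (assumed sizes)

    x = np.zeros(n)                    # the flattened sparse image X
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

    Phi = rng.integers(0, 2, size=(m, n)).astype(float)  # each row = one 0/1 modulator pattern
    Phi -= Phi.mean(axis=1, keepdims=True)               # zero-mean rows for better conditioning
    y = Phi @ x                                          # the time series of measurements Y

    # Iterative soft-thresholding (ISTA) for l1-regularized least squares
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2             # 1 / Lipschitz constant of the gradient
    lam = 0.05                                           # sparsity-promoting threshold (assumed)
    x_hat = np.zeros(n)
    for _ in range(3000):
        r = x_hat + step * Phi.T @ (y - Phi @ x_hat)     # gradient step toward the measurements
        x_hat = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft threshold

    print("relative reconstruction error:",
          np.linalg.norm(x_hat - x) / np.linalg.norm(x))

With m well below n, ordinary least squares would be underdetermined; it is the sparsity of x, enforced here by the soft-thresholding step, that makes recovery from so few measurements possible.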

InView Co-Founder Dr. Richard Baraniuk describes Compressive Sensing