Event Camera and Audio Sensors
Basic research in neuromorphic sensors led to the development of devices that output an asynchronous, variable data-rate stream of events; each event signifies the location and time of a significant occurrence, such as a local brightness change or activity in a frequency channel:
Dynamic Vision Sensor Event Camera
The first practical event camera, the dynamic vision sensor (DVS), was invented by Patrick Lichtsteiner and Tobi Delbruck. It is a silicon retina with an asynchronous output that encodes brightness changes. The sensor is widely used in the computer vision and robotics communities; the 2008 paper below is the 4th most cited paper in the IEEE Journal of Solid-State Circuits over the past decade.
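To make the operating principle concrete, here is a minimal single-pixel sketch of the DVS idea: an event is emitted whenever the log intensity has changed by more than a contrast threshold since the last event. The `Event` fields, the `dvs_pixel` helper, and the threshold value are illustrative assumptions, not the chip's actual circuit or any particular driver API:

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    x: int            # pixel column
    y: int            # pixel row
    timestamp: float  # seconds (the real chip timestamps events in microseconds)
    polarity: int     # +1 = ON (got brighter), -1 = OFF (got darker)

def dvs_pixel(x, y, samples, threshold=0.15):
    """Emit events from (time, intensity) samples for one pixel.

    `threshold` is a hypothetical contrast threshold in log-intensity units.
    """
    events = []
    _, i0 = samples[0]
    ref = math.log(i0)                 # memorized log intensity at the last event
    for t, intensity in samples[1:]:
        diff = math.log(intensity) - ref
        while abs(diff) >= threshold:  # a large change emits several events
            polarity = 1 if diff > 0 else -1
            events.append(Event(x, y, t, polarity))
            ref += polarity * threshold
            diff = math.log(intensity) - ref
    return events

# A pixel watching a brightening edge produces a burst of ON events:
for ev in dvs_pixel(10, 20, [(0.000, 100.0), (0.001, 100.0), (0.002, 180.0)]):
    print(ev)
```

Because the comparison is made on log intensity, the pixel responds to relative (contrast) changes, which is what gives this style of sensor its wide dynamic range.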
The sensor is produced and distributed by iniVation; detailed information can be found there and at the original siliconretina.ini.uzh.ch site. Software for processing the sensor output is available on our software page.
A key (but not the first) publication on this sensor is this 2008 IEEE J. Solid-State Circuits article:
- P. Lichtsteiner, C. Posch, and T. Delbruck, “A 128x128 120dB 15us Latency Asynchronous Temporal Contrast Vision Sensor,” IEEE J. Solid-State Circuits, vol. 43, no. 2, pp. 566-576, 2008.
The seminal DVS publication is this one from the 2005 IISW meeting:
- P. Lichtsteiner and T. Delbruck, “64x64 Event-Driven Logarithmic Temporal Derivative Silicon Retina,” in 2005 IEEE Workshop on Charge-Coupled Devices and Advanced Image Sensors, Nagano, Japan, 2005, pp. 157-160.
The concept of the DVS is nicely explained in this video from the Robotics and Perception Group:
DAVIS
The Dynamic and Active Pixel Vision Sensor (DAVIS) combines active pixel technology with the DVS temporal contrast pixel; the frame and event streams are output concurrently. That way, the DAVIS produces conventional frames, which are the basis of nearly all existing machine vision, together with an event stream that allows quick responses with sparse data and high dynamic range.
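The sketch below shows one way a consumer might interleave the two concurrent DAVIS outputs into a single time-ordered pass; the frame and event data here are made up for illustration, and a real application would obtain them from a driver library such as the software on our software page:

```python
import heapq

# Hypothetical data: frames arrive at a fixed rate, events asynchronously.
frames = [(10_000, "frame #0"), (40_000, "frame #1")]   # (timestamp in us, payload)
events = [(12_500, "ON event at (5, 7)"),
          (12_700, "OFF event at (5, 8)"),
          (41_000, "ON event at (9, 3)")]

# heapq.merge interleaves the two already-sorted streams by timestamp,
# so frames and events can be processed in a single time-ordered pass.
for timestamp, item in heapq.merge(frames, events):
    print(f"{timestamp:>8} us  {item}")
```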
A key publication on the DAVIS is:
- C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240x180 130dB 3us Latency Global Shutter Spatiotemporal Vision Sensor,” IEEE J. Solid-State Circuits, vol. 49, no. 10, pp. 2333-2341, 2014.
The seminal publications are:
- R. Berner, C. Brandli, M. Yang, S.-C. Liu, and T. Delbruck, “A 240x180 120dB 10mW 12us-latency Sparse Output Vision Sensor for Mobile Applications,” in Proceedings of the 2013 International Image Sensor Workshop, pp. 41-44, 2013.
- R. Berner, C. Brandli, M. Yang, S.-C. Liu, and T. Delbruck, “A 240x180 120dB 10mW 12us-latency Sparse Output Vision Sensor for Mobile Applications,” in 2013 IEEE Symposium on VLSI Circuits, pp. 186-187, 2013.
AER-EAR (aka DAS)
The AER-EAR, developed mainly by Shih-Chii Liu in collaboration with Andre van Schaik, is a neuromorphic audio sensor that encodes the frequencies of auditory input as asynchronous events in specific frequency channels. This binaural artificial ear is good at estimating the temporal difference between two auditory inputs and can therefore be used, for example, for sound localization. Prototypes of the AER-EAR, which has been renamed the DAS (Dynamic Audio Sensor), are sold by iniLabs.
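As a rough illustration of how the binaural output can be used for sound localization, the sketch below estimates the interaural time difference (ITD) by cross-correlating binned event timestamps from the left and right ears of one frequency channel. The function name, data, and parameter values are hypothetical and do not reflect the sensor's actual API:

```python
import numpy as np

def estimate_itd(left_ts, right_ts, bin_us=10, max_lag_us=800):
    """Return the lag (us) that best aligns the two spike trains.

    Positive means the right ear lags, i.e. the sound source is to the left.
    """
    t_max = max(left_ts.max(), right_ts.max())
    edges = np.arange(0, t_max + 2 * bin_us, bin_us)
    left, _ = np.histogram(left_ts, edges)   # binned spike counts per ear
    right, _ = np.histogram(right_ts, edges)
    max_lag = int(max_lag_us // bin_us)
    lags = np.arange(-max_lag, max_lag + 1)
    # np.roll wraps around at the array ends, which is acceptable for a sketch.
    corr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
    return int(lags[int(np.argmax(corr))]) * bin_us

# Fake spike trains: a source on the left reaches the left ear ~300 us earlier.
rng = np.random.default_rng(0)
left_ts = np.sort(rng.integers(0, 100_000, 400))
right_ts = left_ts + 300
print("estimated ITD:", estimate_itd(left_ts, right_ts), "us")
```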
More information on this chip and the system around it can be found at aer-ear.ini.uzh.ch.
Key publications on this sensor are:
- V. Chan, S.-C. Liu, and A. van Schaik, “AER EAR: A Matched Silicon Cochlea Pair With Address Event Representation Interface,” IEEE Trans. Circuits and Systems I, vol. 54, no. 1, pp. 48-59, 2007.
- S.-C. Liu, A. van Schaik, B. A. Minch, and T. Delbruck, “Asynchronous Binaural Spatial Audition Sensor with 2x64x4 Channel Output,” IEEE Trans. Biomedical Circuits and Systems, vol. 8, no. 4, pp. 453-464, 2014.
A later sensor called the COCHLP greatly improved the channel matching and increased the maximum possible resonance quality factor (Q) while burning only a trickle of power from a 0.5V supply. The COCHLP was developed mainly by M. Yang and S.-C. Liu. The COCHLP papers are:
- M. Yang, C.-H. Chien, T. Delbruck, and S.-C. Liu, “A 0.5V 55uW 64x2-channel binaural silicon cochlea for event-driven stereo-audio sensing,” in 2016 IEEE International Solid-State Circuits Conference (ISSCC), 2016, pp. 388-389.
This video shows output from the AER-EAR2 sensor in response to speech and song:
Physiologist's Friend
The Physiologist's Friend Chip is a neuromorphic analog VLSI chip that models the early visual system; it produces audible spiking cell responses to visual stimuli. It can be used in the lab or the lecture hall. In the lab it acts as a fake animal: you can use it to train students and to test data collection and analysis software, and it sits in your toolbox like any other tool. In the lecture hall, you can use it with an overhead projector to give live demonstrations of how physiologists plot receptive fields. We have now open-sourced the complete design.
More information on this chip and the system around it can be found at www.ini.uzh.ch/~tobi/friend.
Key publications on this sensor are: