Datasets from the Sensors group

This page lists DVS event camera datasets and DAS silicon cochlea audio datasets from the Sensors Group.

Creative Commons License
All Sensors Group datasets, unless otherwise noted, are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Name Title and Description
DVSD22 Dynamic Vision Sensor Disdrometer 2022

Brief Description:
Data and code for measuring raindrop size and speed with a DVS event camera.

Citation:
Steiner, Jan, Kire Micev, Asude Aydin, Jörg Rieckermann, and Tobi Delbruck. 2022.
“Measuring Diameters and Velocities of Artificial Raindrops with a Neuromorphic Dynamic Vision Sensor Disdrometer.”
arXiv [physics.ao-ph]. http://arxiv.org/abs/2211.09893.

DND21 DeNoising Dynamic Vision Sensors 2021
Contributors: Shasha Guo, Tobi Delbruck

Brief Description:
Data and code for denoising background activity.

Citation:
S. Guo and T. Delbruck, “Low Cost and Latency Event Camera Background Activity Denoising,” IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2022.
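As a rough illustration of what background-activity denoising does, the sketch below implements the classic spatiotemporal-correlation filter: an event is kept only if a neighboring pixel fired recently. This is a minimal illustrative version, not necessarily the algorithm evaluated in the paper; the function name, event tuple layout, and the dt value are assumptions.

```python
import numpy as np

def ba_filter(events, width, height, dt=1000):
    """Keep an event only if one of its 8 neighboring pixels produced an
    event within the last dt microseconds (background-activity filter).
    events: iterable of (timestamp_us, x, y, polarity) tuples."""
    last_ts = np.full((height + 2, width + 2), -np.inf)  # padded per-pixel timestamp map
    kept = []
    for t, x, y, p in events:
        xi, yi = x + 1, y + 1  # coordinates in the padded map
        patch = last_ts[yi - 1:yi + 2, xi - 1:xi + 2].copy()
        patch[1, 1] = -np.inf  # ignore this pixel's own history
        if t - patch.max() <= dt:  # a neighbor was recently active
            kept.append((t, x, y, p))
        last_ts[yi, xi] = t  # update the timestamp map regardless
    return kept
```

Correlated events at adjacent pixels survive, while isolated noise events are dropped.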

MVSEC-NIGHTL21 MVSEC Nighttime Driving Labeled Cars
Contributors: Yuhuang Hu

Brief Description:
Labeled cars from MVSEC nighttime driving recordings.

Citation:
Y. Hu, S. C. Liu, and T. Delbruck,
“v2e: From video frames to realistic DVS event camera streams,”
in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),
2021 [Online]. Available: http://arxiv.org/abs/2006.07722

DASDIGITS Dynamic Audio Sensor N-TIDIGITS18

Contributors: SC Liu, T Delbruck

Brief Description: Recordings of the complete TIDIGITS audio dataset from the
DAS1 binaural 64x2-channel silicon cochlea.


Citation: "Feature representations for neuromorphic spike streams",
J. Anumula, D. Neil, T. Delbruck, and S-C. Liu,
Frontiers in Neuroscience, 2018.

DHP19 DVS Human Pose Estimation Dataset and Reference CNN

Contributors: S. Skriabine, G. Taverni, F. Corradi, L. Longinotti, K. Eng, and T. Delbruck;
Chris Schmidt, Marc Bolliger, Balgrist University Hospital, Zurich, Switzerland

Brief Description: Dataset contains synchronized recordings from 4 DAVIS346 cameras with Vicon marker ground truth from 17 subjects doing repeated motions.

ROSHAMBO17 RoShamBo Rock Scissors Paper Game DVS Dataset
Brief description: Dataset is recorded from ~20 persons, each showing the rock, scissors, and paper symbols for about 2 minutes each, with a variety of poses, distances, positions, and left/right hands. Background data consisting of sensor noise, bodies, the room, etc. is also included. Altogether, 5M 64x64 DVS images of constant event count (0.5k, 1k, or 2k events) with left-right flipping augmentation are included.

Citation: "Live Demonstration: Convolutional Neural Network Driven by Dynamic Vision Sensor Playing RoShamBo",
I-A. Lungu, F. Corradi, and T. Delbruck,
in 2017 IEEE International Symposium on Circuits and Systems (ISCAS 2017),
Baltimore, MD, USA, 2017.
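The constant-event-count frames in the description above can be sketched roughly as follows. This is illustrative only; the function name, event layout, and uint8 count format are assumptions, not the dataset's actual preprocessing code.

```python
import numpy as np

def constant_count_frames(events, n_events=1000, size=64):
    """Slice an event stream into frames that each accumulate exactly
    n_events events, as in the ROSHAMBO17 constant-event-count images.
    events: iterable of (x, y) pixel addresses."""
    frames = []
    frame = np.zeros((size, size), dtype=np.uint8)
    count = 0
    for x, y in events:
        frame[y, x] += 1  # accumulate per-pixel event counts
        count += 1
        if count == n_events:  # frame is full: emit it and start a new one
            frames.append(frame)
            frame = np.zeros((size, size), dtype=np.uint8)
            count = 0
    return frames  # a trailing partial frame is discarded
```

The left-right flipping augmentation mentioned above amounts to adding np.fliplr(frame) for each frame.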

DDD17 DAVIS Driving Dataset 2017
Contributors: J. Binas, D. Neil, S-C. Liu,
and T. Delbruck 

Brief Description: Dataset contains recordings from a DAVIS346 camera in driving scenarios, primarily on highways, along with ground truth car data such as speed, steering, GPS, etc.

Citation: “DDD17: End-To-End DAVIS Driving Dataset,” J. Binas, D. Neil, S-C. Liu, and T. Delbruck, in ICML’17 Workshop on Machine Learning for Autonomous Vehicles, Sydney, Australia, 2017.

See the DDD17 dataset.

DDD20 DAVIS Driving Dataset 2020
An additional 41 h of DAVIS end-to-end driving data has been collected and organized. It includes mountain, highway, and freeway driving, both day and night, including difficult glare conditions.

See the DDD20 website.

PRED18 (was PRED16) VISUALISE Predator/Prey Dataset

Contributors: D. P. Moeys and T. Delbruck

Brief Description: Dataset contains
recordings from a DAVIS240
camera mounted on a computer-controlled
robot (the predator)
that chases and attempts to capture
another human-controlled robot (the prey).

Citation: "Steering a predator robot using a mixed frame/event-driven convolutional neural network",
D. Moeys and T. Delbruck, EBCCSP, 2016.

DVSACT16 DVS Datasets for Object Tracking, Action Recognition and Object Recognition
Contributors: Y. Hu, H. Liu,
M. Pfeiffer, and T. Delbruck 

Brief Description: Dataset contains DVS recordings of benchmark datasets for object tracking, action recognition, and object recognition.

Citation: "DVS Benchmark Datasets for Object Tracking, Action Recognition, and Object Recognition,"
Y. Hu, H. Liu, M. Pfeiffer, and T. Delbruck, Frontiers in Neuroscience, 2016.
(Original gdocs web page if the previous dataset link is not working.)

DVSFLOW16 DVS/DAVIS Optical Flow Dataset
Contributors:  B. Rueckauer and T. Delbruck    

Brief Description: DVS optical flow
dataset contains samples
of a scene with boxes, moving
sinusoidal gratings, and a rotating disk.
The ground truth comes from
the camera's IMU rate gyro. 

Citation: "Evaluation of Algorithms for Normal Optical Flow from Dynamic Vision Sensors,"
B. Rueckauer and T. Delbruck, Frontiers in Neuroscience, 2015.
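For pure camera rotation, ground-truth image motion can in principle be derived from the rate-gyro readings via the standard rotational optical-flow equations. A hedged sketch under a pinhole model follows; the intrinsic parameters here are illustrative placeholders, not the actual DAVIS calibration, and the sign convention depends on the chosen camera axes.

```python
import numpy as np

def rotational_flow(px, py, omega, fx=115.0, fy=115.0, cx=64.0, cy=64.0):
    """Image-plane motion field induced by pure camera rotation.
    omega = (wx, wy, wz) angular rates in rad/s (from the IMU rate gyro).
    Returns the flow (u, v) in pixels per second at pixel (px, py)."""
    x = (px - cx) / fx  # normalized image coordinates
    y = (py - cy) / fy
    wx, wy, wz = omega
    # rotational component of the standard optical-flow (motion field) equations
    u = x * y * wx - (1.0 + x * x) * wy + y * wz
    v = (1.0 + y * y) * wx - x * y * wy - x * wz
    return u * fx, v * fy  # convert back to pixels per second
```

At the principal point the flow reduces to (-wy*fx, wx*fy), which is a quick sanity check for the sign convention.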

DVS09 DVS128 Dynamic Vision Sensor Silicon Retina Data

Contributors: T. Delbruck

Brief Description: DVS recordings. This data can be downloaded directly from the browser.

Citation: Delbruck, T. (2008). Frame-free dynamic digital vision, in Proceedings of Intl. Symp. on Secure-Life Electronics (Tokyo, Japan: University of Tokyo), 21–26. Available at: https://drive.google.com/open?id=0BzvXOhBHjRheTS1rSVlZN0l2MDg

WHISPER SET01 WASN WHISPER Dataset

Contributors: E. Ceolini, I. Kiselev, S. Liu

Brief Description: Recordings from an ad-hoc wireless acoustic sensor network using 4 modules of the WHISPER platform.

Citation: "Evaluating multi-channel multi-device speech separation algorithms in the wild: a hardware-software solution",
E. Ceolini, I. Kiselev, and S. Liu, to be published, 2020.

Datasets from collaborators with Sensors Group:

  1. Event Camera Dataset and Simulator for Pose Estimation, Visual Odometry, and SLAM from the RPG group at UZH: http://rpg.ifi.uzh.ch/davis_data.html
     Paper: https://arxiv.org/pdf/1610.08336v1
  2. MVSEC: The Multi Vehicle Stereo Event Camera Dataset from our collaborators at UPenn: https://daniilidis-group.github.io/mvsec/ As described in Zhu, A. Z., Thakur, D., Ozaslan, T., Pfrommer, B., Kumar, V., and Daniilidis, K. (2018). The Multi Vehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception. IEEE Robotics and Automation Letters, 1–1. doi:10.1109/LRA.2018.2800793

     See also this excellent resource: https://github.com/uzh-rpg/event-based_vision_resources#datasets