DataBases 4 Kinect

TST Fall detection dataset v1
TST Fall detection dataset v2
TST TUG dataset
TST Intake Monitoring dataset v1
TST Intake Monitoring dataset v2

Complete Viewer – C++ program to create your dataset

Complete Viewer is a C++ program used to save all the streams provided by the Kinect V2.

Compared to other solutions available on the web, it allows storing all the raw streams at different frame rates; a minimal sketch of the underlying SDK acquisition calls follows the list of streams below.

The saved streams are:

  • depth
  • RGB (bmp or png format)
  • mapping matrix
  • Infrared
  • Skeleton
  • Timestamp information
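
For reference, the sketch below shows roughly what the acquisition of a single depth frame looks like with the Kinect for Windows SDK v2.0. It is only a minimal sketch, not the Complete Viewer source: error handling is reduced to the bare minimum and only the depth stream is grabbed.

    // Minimal sketch (not the Complete Viewer source): open the sensor and
    // copy one depth frame into a buffer with the Kinect for Windows SDK v2.0.
    #include <Windows.h>
    #include <Kinect.h>
    #include <vector>

    int main()
    {
        IKinectSensor* sensor = nullptr;
        if (FAILED(GetDefaultKinectSensor(&sensor)) || sensor == nullptr) return 1;
        sensor->Open();

        IDepthFrameSource* source = nullptr;
        sensor->get_DepthFrameSource(&source);
        IDepthFrameReader* reader = nullptr;
        source->OpenReader(&reader);

        std::vector<UINT16> depth(512 * 424);       // Kinect v2 depth resolution
        IDepthFrame* frame = nullptr;
        while (FAILED(reader->AcquireLatestFrame(&frame)))
            ;                                       // poll until a frame is ready
        frame->CopyFrameDataToArray(static_cast<UINT>(depth.size()), depth.data());
        frame->Release();

        // ...write 'depth' to disk, then repeat for RGB, infrared, skeleton
        // and the mapping matrix at the desired frame rate...

        reader->Release();
        source->Release();
        sensor->Close();
        sensor->Release();
        return 0;
    }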

RecordingTool_KinectV2

You can download the latest version of the program at this link:
Complete Viewer v2.0 OpenCV

Hardware/Software requirements are:

  • 64-bit (x64) processor
  • Physical dual-core 3.1 GHz (2 logical cores per physical) or faster processor
  • USB 3.0 controller dedicated to the Kinect for Windows v2 sensor or the Kinect Adapter for Windows for use with the Kinect for Xbox One sensor
  • 4 GB of RAM
  • Graphics card that supports DirectX 11
  • Windows 8 or higher, Windows Embedded 8
  • Microsoft Kinect for Windows SDK v2.0
  • OpenCV

We have tested this program on the following machines:

  1. Windows 8.1 Pro 64-bit i7-2700K CPU @ 3.50GHz (8 CPUs), 16 GB RAM
  2. Windows 8.1 Pro 64-bit i7-5500U CPU @ 2.40GHz (4 CPUs), 16 GB RAM

If you use the program, please cite the following paper:

E. Cippitelli, S. Gasparrini, S. Spinsante, E. Gambi, “Kinect as a Tool for Gait Analysis: Validation of a Real Time Joints Extraction Algorithm Working in Side View,” Sensors 2015, 15, 1417-1434. Open Access, available here.

BibTeX

You can also download other versions:

  • Complete Viewer v2.0, which can store RGB data only in bitmap format (OpenCV not required).
  • Complete Viewer v1.0, which can store RGB data only in bitmap format (OpenCV not required) and does not support different frame rates.

Let us know what type of project you will use this program for! 🙂

Write to e[dot]cippitelli[at]univpm[dot]it or s[dot]gasparrini[at]univpm[dot]it

Contributions from:

Matteo Zamporlini – “Studio e implementazione di strumenti per la gestione di dati forniti dal sensore Kinect V2” (Study and implementation of tools for managing the data provided by the Kinect V2 sensor)
Valerio Saverino – “Analisi delle prestazioni del sensore Kinect V2” (Performance analysis of the Kinect V2 sensor)
Pierpaolo Pignelli – “Analisi e ottimizzazione del software di acquisizione dati per il sensore Kinect v2” (Analysis and optimization of the data acquisition software for the Kinect v2 sensor)

_

TST Fall detection dataset v1

The dataset stores depth frames (320×240) collected using Microsoft Kinect v1 in a top-view configuration. Four volunteers, aged between 26 and 27 years and between 1.62 and 1.78 m tall, were recruited for a total of 20 tests. The dataset is divided into two main groups:

  • Group A (test 1-10): two or more people walk in the monitored area;
  • Group B (test 11-20): a person performs some falls in the covered area.

Fall

Depth frames:

Test 1-10                      Test 11-20

Use this Matlab code to open the dataset.

If you use the dataset, please cite the following paper:

S. Gasparrini, E. Cippitelli, S. Spinsante, E. Gambi, “A Depth-Based Fall Detection System Using a Kinect® Sensor,” Sensors 2014, 14(2), 2756-2775; doi:10.3390/s140202756. Available at: http://www.mdpi.com/1424-8220/14/2/2756
BibTeX

_

TST Fall detection dataset v2

The dataset has been collected using a Microsoft Kinect v2 and an IMU (Inertial Measurement Unit) manufactured by Shimmer Research. It is composed of ADL (Activities of Daily Living) and fall actions simulated by 11 volunteers. The people involved in the tests are aged between 22 and 39, with different heights (1.62-1.97 m) and builds. The actions performed by each person are divided into two main groups, ADL and Fall, and each activity is repeated three times by each subject involved.

The stored data are:

  • depth frames
  • two raw acceleration streams, provided by SHIMMER devices attached to the waist and right wrist of the volunteer
  • skeleton joints in depth and skeleton space (in the order of the SDK's JointType enumeration, reproduced after this list)
  • time information useful for synchronization
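
For convenience, the order of the 25 skeleton joints is the one defined by the SDK 2.0 JointType enumeration in Kinect.h; a condensed listing is reproduced below (the names in the SDK header carry the JointType_ prefix).

    // Order of the 25 skeleton joints per frame, following the Kinect SDK 2.0
    // JointType enumeration (names in Kinect.h carry the JointType_ prefix).
    enum SkeletonJointOrder {
        SpineBase = 0, SpineMid,    Neck,        Head,
        ShoulderLeft,  ElbowLeft,   WristLeft,   HandLeft,
        ShoulderRight, ElbowRight,  WristRight,  HandRight,
        HipLeft,       KneeLeft,    AnkleLeft,   FootLeft,
        HipRight,      KneeRight,   AnkleRight,  FootRight,
        SpineShoulder, HandTipLeft, ThumbLeft,
        HandTipRight,  ThumbRight                // last index: 24
    };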

The database contains 264 different actions, for a total of 46k skeleton samples and 230k acceleration values.

Each person performs the following movements:

ADL

  • DBFall_sit – Sit down on a chair
  • DBFall_grasp – Walk and grasp an object from the floor
  • DBFall_walk – Walk back and forth
  • DBFall_lay – Lie down on the mattress

Fall

  • DBFall_front – Fall forward and end up lying
  • DBFall_back – Fall backward and end up lying
  • DBFall_side – Fall to the side and end up lying
  • DBFall_EndUpSit – Fall backward and end up sitting

Use this Matlab code to open the dataset.

Data Set 1                      Data Set 2                      Data Set 3                       Data Set 4                      Data Set 5

Data Set 6                      Data Set 7                      Data Set 8                      Data Set 9                      Data Set 10

Data Set 11

If you use the dataset, please cite the following paper:

S. Gasparrini, E. Cippitelli, E. Gambi, S. Spinsante, J. Wahslen, I. Orhan and T. Lindh, “Proposal and Experimental Evaluation of Fall Detection Solution Based on Wearable and Depth Data Fusion”, ICT Innovations 2015, Springer International Publishing, 2016, pp. 99-108, doi:10.1007/978-3-319-25733-4_11.

BibTeX

_

TST TUG dataset

The dataset has been collected using a Microsoft Kinect v2 and an IMU (Inertial Measurement Unit) manufactured by Shimmer Research. It is composed of TUG (Timed Up and Go) test actions performed three times by 20 volunteers. The people involved in the tests are aged between 22 and 39, with different heights (1.62-1.97 m) and builds.

The stored data are:

  • depth frames
  • raw acceleration stream, provided by a SHIMMER device attached to the chest of the volunteer
  • skeleton joints in depth and skeleton space (see JointType enumeration for the order)
  • time information useful for synchronization (a minimal alignment sketch follows this list)
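
The time information is what allows aligning the depth/skeleton stream with the IMU stream. The sketch below shows a simple nearest-neighbour alignment; it assumes both timestamp vectors refer to a common clock, and the names and types are illustrative rather than the dataset's own (the paper cited below describes the actual synchronization procedure).

    // Minimal sketch: for every skeleton sample, find the index of the
    // closest-in-time IMU acceleration sample (timestamps on a common clock).
    #include <cmath>
    #include <cstddef>
    #include <vector>

    std::vector<std::size_t> nearestAccIndex(const std::vector<double>& skelTime,
                                             const std::vector<double>& accTime)
    {
        std::vector<std::size_t> idx(skelTime.size());
        for (std::size_t k = 0; k < skelTime.size(); ++k) {
            std::size_t best = 0;
            for (std::size_t j = 1; j < accTime.size(); ++j)
                if (std::fabs(accTime[j] - skelTime[k]) <
                    std::fabs(accTime[best] - skelTime[k]))
                    best = j;
            idx[k] = best;   // IMU sample matched to skeleton frame k
        }
        return idx;
    }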

Use this Matlab code to open the dataset.

Data Set 1-10                      Data Set 11-20                      Data Set 21-30

Data Set 31-40                   Data Set 41-50                      Data Set 51-60

If you use the dataset, please cite the following paper:

E. Cippitelli, S. Gasparrini, E. Gambi, S. Spinsante, J. Wahslen, I. Orhan, and T. Lindh, “Time Synchronization and Data Fusion for RGB-Depth Cameras and Wearable Inertial Sensors in AAL Applications,” IEEE ICC2015 – Workshop on ICT-Enabled Services and Technologies for eHealth and Ambient Assisted Living, London (UK), 8-12 June 2015.

BibTeX

_

TST Intake Monitoring dataset v1

The dataset is composed of food intake movements, recorded with a Kinect V1 (320×240 depth frame resolution) and simulated by 35 volunteers for a total of 48 tests. The device is mounted on the ceiling, 3 m above the floor. The people involved in the tests are aged between 22 and 39, with different heights (1.62-1.97 m) and builds.

Depth frames:

Test 1-10                      Test 11-20                     Test 21-30                     Test 31-40                      Test 41-48

Use this Matlab code to open the dataset and to compute the point cloud from the depth frames.
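
The back-projection behind the point cloud computation is the standard pinhole model, sketched below. The intrinsic parameters (fx, fy, cx, cy) are only placeholders and the depth values are assumed to be in millimetres; the linked Matlab code remains the reference implementation.

    // Minimal sketch: back-project a 320x240 Kinect v1 depth frame (assumed
    // to be in millimetres) into a 3D point cloud. fx, fy, cx, cy are
    // placeholder intrinsics, not the dataset's calibrated values.
    #include <vector>

    struct Point3 { float x, y, z; };

    std::vector<Point3> depthToPointCloud(const std::vector<unsigned short>& depthMm)
    {
        const int W = 320, H = 240;
        const float fx = 292.5f, fy = 292.5f;   // assumed focal lengths (pixels)
        const float cx = 160.0f, cy = 120.0f;   // assumed principal point

        std::vector<Point3> cloud;
        for (int v = 0; v < H; ++v) {
            for (int u = 0; u < W; ++u) {
                const unsigned short d = depthMm[v * W + u];
                if (d == 0) continue;           // no depth reading at this pixel
                const float z = d / 1000.0f;    // millimetres -> metres
                cloud.push_back({ (u - cx) * z / fx, (v - cy) * z / fy, z });
            }
        }
        return cloud;
    }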

Three different unsupervised machine learning algorithms (SOM, SOM_Ex and GNG) have been used to track the movements.

For each frame we have also published:

  • the network provided by each of the above algorithms
  • the ground truth positions of the head and of the left/right hands

The networks and the ground truth are available here.

Finally, this is the Matlab code to create the video shown above. It allows comparing the depth frames and the networks at the same time.

If you use the dataset, please cite the following paper:

S. Gasparrini, E. Cippitelli, E. Gambi, S. Spinsante and F. Florez-Revuelta, “Performance Analysis of Self-Organising Neural Networks Tracking Algorithms for Intake Monitoring Using Kinect,” 1st IET International Conference on Technologies for Active and Assisted Living (TechAAL), 6th November 2015, Kingston upon Thames (UK).
BibTeX

_

TST Intake Monitoring dataset v2

The dataset is similar to the previous one, but now 20 people, aged between 23 and 41 years and between 1.62 and 1.93 m tall, have been recruited for a total of 60 tests, with 3 repetitions per person:

  • repetition 1: eat a snack with the hand and drink water from a glass (Tests 1, 4, 7, 10, ..., 58);
  • repetition 2: eat soup with a spoon and pour/drink water (Tests 2, 5, 8, 11, ..., 59);
  • repetition 3: use a knife and fork for the main meal and finally wipe the mouth with a napkin (Tests 3, 6, 9, 12, ..., 60).

Depth frames:

Test 1-10                             Test 11-20                           Test 21-30

Test 31-40                           Test 41-50                           Test 51-60

Use this Matlab code to open the dataset and to compute the point cloud from the depth frames.

Three different unsupervised machine learning algorithms (SOM, SOM_Ex and GNG) have been used to track the movements.

For each frame we have also published:

  • the network provided by each of the above algorithms
  • the ground truth positions of the head and of the left/right hands

The networks and the ground truth are available here.

If you use the dataset, please cite the following paper:

S. Gasparrini, E. Cippitelli, E. Gambi, S. Spinsante and F. Florez-Revuelta, “Performance Analysis of Self-Organising Neural Networks Tracking Algorithms for Intake Monitoring Using Kinect,” 1st IET International Conference on Technologies for Active and Assisted Living (TechAAL), 6th November 2015, Kingston upon Thames (UK).

BibTeX
 

 

This page is under construction, and it will be updated with new datasets.

Last update: 21 November 2016

If you find bugs or failures, please contact us:

Write to e[dot]cippitelli[at]univpm[dot]it or s[dot]gasparrini[at]univpm[dot]it
