Sheffield Kinect Gesture (SKIG) Dataset


Introduction:

This dataset contains 2160 hand gesture sequences (1080 RGB sequences and 1080 depth sequences) collected from 6 subjects. All sequences were captured synchronously with a Kinect sensor (which includes an RGB camera and a depth camera). The dataset covers 10 categories of hand gestures in total: circle (clockwise), triangle (anti-clockwise), up-down, right-left, wave, “Z”, cross, comehere, turn-around, and pat. During collection, each of these ten categories was performed with three hand postures: fist, index, and flat. To increase diversity, we recorded the sequences under 3 different backgrounds (i.e., wooden board, white plain paper, and paper with characters) and 2 illumination conditions (i.e., strong light and poor light). Consequently, for each subject we recorded 10 (categories) * 3 (postures) * 3 (backgrounds) * 2 (illumination conditions) * 2 (RGB and depth) = 360 gesture sequences. (More details can be found in Readme.txt.)
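
The per-subject count follows directly from enumerating all recording conditions. As a minimal illustrative sketch (not part of the dataset's official tooling), the Python snippet below reproduces this arithmetic; the label strings are taken from the description above and are not necessarily the names used in the dataset's files or folders:

from itertools import product

# Labels as listed in the description above; the dataset's own file naming
# may differ, so treat these strings as illustrative only.
categories = ["circle", "triangle", "up-down", "right-left", "wave",
              "Z", "cross", "comehere", "turn-around", "pat"]
postures = ["fist", "index", "flat"]
backgrounds = ["wooden board", "white plain paper", "paper with characters"]
illuminations = ["strong light", "poor light"]
modalities = ["RGB", "depth"]

# Every combination is recorded once per subject.
per_subject = list(product(categories, postures, backgrounds, illuminations, modalities))
print(len(per_subject))      # 10 * 3 * 3 * 2 * 2 = 360 sequences per subject
print(6 * len(per_subject))  # 6 subjects -> 2160 sequences in total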


Download:

Subject_1_RGB

Subject_1_Depth

Subject_2_RGB

Subject_2_Depth

Subject_3_RGB

Subject_3_Depth

Subject_4_RGB

Subject_4_Depth

Subject_5_RGB

Subject_5_Depth

Subject_6_RGB

Subject_6_Depth


If you use our dataset in published results, please cite the following paper:

L. Liu and L. Shao, “Learning Discriminative Representations from RGB-D Video Data,” in Proc. International Joint Conference on Artificial Intelligence (IJCAI), Beijing, China, 2013.


CONTACT:

Li Liu: liuli1213@gmail.com

Ling Shao


ACKNOWLEDGMENT:

This work is supported by the University of Sheffield. If you have any questions, feel free to contact us.