Sheffield Kinect Gesture (SKIG) Dataset
This dataset contains 2160 hand gesture sequences (1080 RGB sequences and 1080 depth sequences) collected from 6 subjects. All sequences were captured synchronously with a Kinect sensor (which includes an RGB camera and a depth camera). The dataset covers 10 categories of hand gestures: circle (clockwise), triangle (anti-clockwise), up-down, right-left, wave, "Z", cross, come-here, turn-around, and pat. During collection, each of the ten categories was performed with three hand postures: fist, index, and flat. To increase diversity, we recorded the sequences under 3 different backgrounds (wooden board, white plain paper, and paper with characters) and 2 illumination conditions (strong light and poor light). Consequently, for each subject we recorded 10 (categories) * 3 (poses) * 3 (backgrounds) * 2 (illuminations) * 2 (RGB and depth) = 360 gesture sequences. (More details can be found in Readme.txt.)
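The per-subject and overall counts above can be verified with a few lines of Python; the factor names below are taken directly from the description, and the variable names are illustrative only:

```python
# Sanity-check the SKIG dataset counts stated in the description.
categories = 10     # circle, triangle, up-down, right-left, wave, "Z", cross, come-here, turn-around, pat
poses = 3           # fist, index, flat
backgrounds = 3     # wooden board, white plain paper, paper with characters
illuminations = 2   # strong light, poor light
modalities = 2      # RGB and depth
subjects = 6

# 360 sequences per subject, 2160 in total (1080 RGB + 1080 depth)
per_subject = categories * poses * backgrounds * illuminations * modalities
total = per_subject * subjects
print(per_subject)  # 360
print(total)        # 2160
```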
If you use our dataset for publishing results, please cite the following paper:
L. Liu and L. Shao, “Learning Discriminative Representations from RGB-D Video Data”, In Proc. International Joint Conference on Artificial Intelligence (IJCAI), Beijing, China, 2013.
Li Liu: firstname.lastname@example.org
This work is supported by the University of Sheffield. If you have any questions, feel free to contact us.