Thai Sign Language Recognizer with Kinect
Depth Image
The Kinect device outputs a depth image, which is well suited for background subtraction and for extracting regions and connected components. The Kinect also reports hand positions relative to screen space, which cuts down the time spent searching for hands in the color image. A minimal segmentation step based on this idea is sketched below.
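The following is a rough sketch of how a hand region could be cut out of a Kinect depth frame using the tracked hand position, assuming Python with OpenCV and NumPy; the function name `segment_hand` and the 120 mm depth band are illustrative choices, not taken from our code.

```python
import numpy as np
import cv2

def segment_hand(depth_frame, hand_xy, band_mm=120):
    """Segment the hand from a Kinect depth frame (uint16, millimetres)
    by keeping only pixels within a small depth band around the tracked
    hand point (hand_xy = (x, y) in pixel coordinates)."""
    x, y = hand_xy
    hand_depth = int(depth_frame[y, x])                 # depth at the tracked point
    near, far = hand_depth - band_mm, hand_depth + band_mm
    # binary mask of everything inside the depth band (background drops out)
    mask = ((depth_frame > near) & (depth_frame < far)).astype(np.uint8) * 255
    # keep only the connected component that actually contains the hand point
    _, labels = cv2.connectedComponents(mask)
    hand_label = labels[y, x]
    return np.where(labels == hand_label, 255, 0).astype(np.uint8)
```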
Convexity Defects
This is the data we use to classify images into groups of gestures, since different hand gestures produce different sets of convexity defects.
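As an illustration of this step, here is a small sketch of counting deep convexity defects on a binary hand mask with OpenCV; the helper name and the 20-pixel depth cutoff are assumptions for the example, not values from our implementation.

```python
import numpy as np
import cv2

def count_convexity_defects(hand_mask, min_depth_px=20):
    """Count deep convexity defects of the largest contour in a binary mask.
    Deep defects roughly correspond to the gaps between extended fingers."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    contour = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(contour, returnPoints=False)   # hull as contour indices
    defects = cv2.convexityDefects(contour, hull)
    if defects is None:
        return 0
    # defects[:, 0, 3] holds the defect depth in fixed point (1/256 pixel units)
    depths = defects[:, 0, 3] / 256.0
    return int(np.sum(depths > min_depth_px))
```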
Hu's moments
Each tracked hand image region has its own image moments, which we compute using Hu's algorithm as one of several possible choices.
These moments can be viewed as a characteristic signature of a gesture, which we use to measure how similar an instance is to each class.
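A minimal sketch of turning a hand mask into a Hu moment feature vector, again assuming OpenCV and NumPy; the log scaling shown here is a common normalization trick and is an assumption, not necessarily what our pipeline does.

```python
import numpy as np
import cv2

def hu_feature_vector(hand_mask):
    """Compute the seven Hu moment invariants of a binary hand mask."""
    m = cv2.moments(hand_mask, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    # log transform keeps the sign while compressing the huge dynamic range,
    # so the seven values are comparable in magnitude
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```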
K-Nearest Neighbor
k-NN is a way of classifying an instance by finding similar labeled data. It looks for the K nearest data points (those with the smallest feature differences) and takes a consensus vote among them to decide which class the instance belongs to.
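A small illustrative k-NN classifier over such feature vectors might look like the following; the Euclidean distance and majority vote are standard choices assumed for the sketch, and `train_features`/`train_labels` are hypothetical NumPy arrays.

```python
import numpy as np

def knn_classify(sample, train_features, train_labels, k=5):
    """Label a feature vector by majority vote among its k nearest
    training samples, using Euclidean distance.
    train_features: (N, D) array, train_labels: (N,) array."""
    dists = np.linalg.norm(train_features - sample, axis=1)
    nearest = train_labels[np.argsort(dists)[:k]]       # labels of the k closest
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]                     # consensus label
```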
Visualization
Example recognitions, panels (a)–(f):
(a) recognizes the sign for the number 0
(b) recognizes the sign for the number 3 (with 2 convexity defects)
(c) recognizes the sign for the Thai letter ก (k)
(d) recognizes the sign for the Thai letter บ (b)
(e) recognizes the sign for the word "good" (thumbs up)
(f) recognizes the sign for the Thai letter ย (y)
The program can take more than one hand position as input and recognize each hand's gesture simultaneously.
Video demonstration of our application running in real time.
In the future we hope to extend this application to handle both static and moving hand gestures, expanding the range of signs it can recognize, and to contribute this work toward easing communication between sign language users and people who may not understand any sign language.