Abstract. Gesture recognition in support of human–computer interaction (HCI) systems has become a popular mode of interaction, allowing communicative information to flow naturally between human and machine. Vision-based gesture recognition in particular has the potential to provide intuitive and effective interaction, yet adequate tools and techniques for developing, detecting, and executing such gestures are still lacking. In this paper we implement a prototype that records data while users perform action-based activities captured by the Kinect sensor. We analyze the recorded clips and visualize user interactions by recognizing gesture objects from depth, infrared (IR), and skeletal data. The Kinect tools include an analysis feature, a timeline-based approach that can mark recorded clip sequences either manually or automatically. We implement both discrete and continuous gestures, using the AdaBoost machine learning approach to detect hand activities. Our results suggest that the learning mechanism can achieve more than 98% confidence for the given gestures.

Keywords: Gesture recognition, Kinect, HCI, Machine learning, AdaBoost, Computer vision
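The abstract names AdaBoost as the learning approach for discrete gesture detection. As a minimal illustrative sketch only (not the paper's actual pipeline), the following trains AdaBoost with decision stumps on a single hypothetical skeletal feature — say, hand height above the shoulder from Kinect joint data — where the feature values, labels, and the "hand raised" gesture itself are invented for illustration:

```python
import math

# Hypothetical training data: one scalar feature per sample (e.g. hand height
# above the shoulder from Kinect skeletal data); label +1 = "hand raised".
X = [0.9, 0.8, 0.75, 0.7, 0.2, 0.15, 0.1, 0.05]
y = [1, 1, 1, 1, -1, -1, -1, -1]

def train_adaboost(X, y, rounds=5):
    """AdaBoost over decision stumps on a single feature."""
    n = len(X)
    w = [1.0 / n] * n                      # start with uniform sample weights
    ensemble = []                          # list of (alpha, threshold, polarity)
    thresholds = sorted(set(X))
    for _ in range(rounds):
        best = None
        for t in thresholds:
            for pol in (1, -1):
                # stump predicts pol if x > t, else -pol
                preds = [pol if x > t else -pol for x in X]
                err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol, preds)
        err, t, pol, preds = best
        err = max(err, 1e-10)              # guard against a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # re-weight: misclassified samples gain weight for the next round
        w = [wi * math.exp(-alpha * yi * p) for wi, p, yi in zip(w, preds, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps; sign gives the class."""
    score = sum(a * (pol if x > t else -pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

model = train_adaboost(X, y)
print(predict(model, 0.85))   # high hand position -> 1 ("hand raised")
print(predict(model, 0.05))   # low hand position  -> -1
```

In the continuous-gesture case described in the paper, such a classifier would be applied per frame and its confidence tracked over time; the per-frame confidence is what the reported 98% figure refers to.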