Human Intent Prediction in Human-Robot Collaboration – A Pipe Maintenance Example
Abstract: Human-robot collaboration has gained popularity in various civil engineering applications. The key to successful human-robot collaboration is the design of an intelligent robot system that is aware of human intents and can predict human motions. Despite advances in human intent prediction in the context of human-robot collaboration, challenges remain. Most intelligent systems can only predict human motions over relatively short horizons, based on the repetitive patterns of human behaviors in well-defined tasks. A method is needed that captures the changing characteristics of human motions and predicts them in dynamic, open workplaces. This paper proposes a novel analytical method that predicts human motions using a Long Short-Term Memory (LSTM) network with incremental learning. A virtual reality-based human subject experiment (n=120) was performed to collect gaze tracking data and the corresponding two-hand motion data in a pipe maintenance task. First, the relationship between gaze focus and hand motion is explored via symbolic aggregate approximation (SAX) to identify the latency between a person's gaze focus direction and their hand motions. Then the continuous time series of gaze focus is used to predict the motions of both hands, with eye-hand delay adjustments incorporated. The proposed method significantly improves the accuracy of human motion prediction in a complex pipe maintenance task and thus supports better design of collaborative robotic systems.
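The abstract mentions symbolizing gaze and hand time series with SAX before estimating the eye-hand latency. As a minimal sketch of that preprocessing step (not the paper's implementation; the function name, segment count, and alphabet size are illustrative assumptions), a 1-D series can be z-normalized, reduced by piecewise aggregate approximation (PAA), and mapped to letters via equiprobable Gaussian breakpoints:

```python
import numpy as np
from statistics import NormalDist

def sax_transform(series, n_segments=8, alphabet_size=4):
    """Symbolic Aggregate approXimation (SAX) of a 1-D time series.

    Z-normalizes the series, reduces it to n_segments via piecewise
    aggregate approximation (PAA), then maps each segment mean to a
    letter using breakpoints that split N(0, 1) into equiprobable bins.
    """
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)  # z-normalize
    # PAA: mean of each (roughly) equal-length segment
    paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
    # Gaussian breakpoints for equiprobable symbols
    breakpoints = [NormalDist().inv_cdf(i / alphabet_size)
                   for i in range(1, alphabet_size)]
    letters = "abcdefghijklmnopqrstuvwxyz"[:alphabet_size]
    return "".join(letters[np.searchsorted(breakpoints, v)] for v in paa)

# Example: a rising signal maps to an ascending symbol string
print(sax_transform(np.linspace(0, 1, 64), n_segments=4, alphabet_size=4))  # → abcd
```

Once gaze and hand trajectories are both symbolized this way, the eye-hand latency could be estimated, for instance, by finding the shift that maximizes agreement between the two symbol strings.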