A mixed perception-based human-robot collaborative maintenance approach driven by augmented reality and online deep reinforcement learning

With the increasing adoption of Cyber-Physical Production Systems (CPPS) and the Internet of Things (IoT), modern manufacturing systems contain increasingly complex machines, which raises the uncertainty of fault occurrence as usage time grows [1,2]. Although some modern systems achieve an extended Remaining Useful Lifetime (RUL) and a degree of self-healing capability, most faults (e.g., a tool holder fault) still require manual maintenance. If such a fault occurs and the machine cannot be maintained in a timely and proper manner, its condition tends to deteriorate over time, causing not only production stoppages across the entire system but also tremendous economic losses and even casualties. In the traditional manual maintenance mode, improper maintenance accounts for 10%–20% of a company's total cost, and for over 50% of the purchase cost of the machines [3]. Moreover, maintaining complex machines is time-consuming and labor-intensive, so the traditional manual approach is inefficient and can hardly meet rapidly changing maintenance requirements [4]. Accordingly, efficient maintenance is critical for an enterprise to achieve a sustainable competitive advantage.

The collaborative robot (cobot) is an important part of the physical assistance system and can perform flexible tasks. In 2021, 7.5% (39,000 out of more than 517,000) of the robots installed worldwide were cobots deployed in productive activities (e.g., maintenance), an increase of 50% over 2020 [5]. With growing investment in cobots for maintenance, more repetitive tasks (e.g., screwing) can be taken over from maintenance personnel, enabling them to focus on core activities (e.g., analyzing the root cause of a fault and initiating effective measures) and thereby ensuring higher availability and productivity. In modern manufacturing, the individualized and customized production of many complex machine tools relies not only on the nimble and insightful operations of personnel but also on the reliable and repeatable manipulation of cobots [6]. Human-robot collaboration (HRC) integrates human intelligence into cobots, boosting the capacities of both humans and cobots to cover more industrial scenarios [7,8]. Therefore, operating production equipment and performing maintenance tasks collaboratively between workers and cobots has become key to improving the overall performance of the manufacturing system.

One of the crucial challenges in human-robot collaborative maintenance is recognizing the working intention of personnel so that the cobot can actively adjust its tasks and provide timely assistance during the maintenance process. Because intention information is implicit in human actions, researchers have focused on using human gestures to identify working intentions [9,10]. Traditional machine learning methods, such as Hidden Markov Models (HMMs) [11], the Multilayer Perceptron (MLP) [12], and Layered HMMs (LHMMs) [13], have been applied to recognize and understand human actions. With recognized human motion as model input, however, intention recognition accuracy reaches only around 70%, indicating that further improving recognition accuracy is essential for efficient human-robot collaborative maintenance. Recently, deep learning (e.g., BiFPN-enabled feature extraction [14,15], EfficientNet-based image recognition [16,17]) has achieved great success in human intention recognition. Researchers have applied deep learning to the recognition of human working intention [18], [19], [20] and achieved superior accuracy. Thus, it is of great significance to develop a more efficient deep learning model with high recognition accuracy.
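To make the intention-recognition pipeline concrete, the sketch below shows the general shape of an MLP-style classifier of the kind cited above: a flattened vector of skeleton keypoints is mapped to a probability distribution over a few intention classes. It is a minimal illustration only; the layer sizes, keypoint count, and class count are invented here and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class IntentionMLP:
    """Two-layer MLP mapping a flattened pose-keypoint vector to intention classes."""
    def __init__(self, n_keypoints=17, n_classes=4, hidden=64):
        d = n_keypoints * 3                       # (x, y, z) per keypoint
        self.W1 = rng.normal(0, 0.1, (d, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, n_classes))
        self.b2 = np.zeros(n_classes)

    def forward(self, x):
        h = np.maximum(0, x @ self.W1 + self.b1)  # ReLU hidden layer
        return softmax(h @ self.W2 + self.b2)     # class probabilities

model = IntentionMLP()
pose = rng.normal(size=17 * 3)   # one skeleton frame (dummy data)
probs = model.forward(pose)
print(probs.shape)
```

In practice the weights would be trained on labeled gesture sequences, and the deep-learning models cited in the text replace this flat MLP with convolutional or feature-pyramid backbones; the input/output contract, however, stays the same.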

In addition, the complex maintenance process requires cobots to make effective decisions based on maintenance demands so as to accomplish collaborative tasks. Most decision-making methods for cobots in industrial scenarios assume that the control task is known in advance [21,22]. According to specific task requirements, experienced and skillful operators program the controller beforehand, and the cobot only needs to execute the preset commands in sequence to complete the prescribed task [23,24]. Traditional decision-making algorithms usually need the cobot's parameters in order to build a mathematical model of the controlled object [25]. Although an optimal trajectory can then be solved in the coordinate-system space, an accurate mathematical model of the cobot and its environment can hardly be established in practical industrial applications. Thus, traditional decision-making algorithms suffer from poor robustness, weak generalization, and related problems. Moreover, unpredictable changes or disturbances in the external environment further restrict decision-making performance. Fortunately, with the continuous development of reinforcement learning theory, a learning-capable cobot has become feasible [26,27]. A multi-axis cobot controlled by a reinforcement learning algorithm can complete self-learning tasks through interaction with its environment.
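The core idea of learning through interaction, rather than from a hand-built model of the cobot, can be illustrated with tabular Q-learning on a toy task. The one-dimensional "reach the goal" environment below is a stand-in invented for this sketch, not the paper's maintenance environment; it only shows how an agent improves its policy from rewards alone.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "reach the goal" task standing in for a cobot positioning decision:
# states 0..4, actions {0: move left, 1: move right}, reward 1 on reaching state 4.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, float(s2 == GOAL)

for _ in range(500):                       # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy exploration: random action with probability eps
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # Q-learning update: bootstrap on the greedy value of the next state
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

greedy = [int(Q[s].argmax()) for s in range(GOAL)]
print(greedy)   # learned greedy policy per non-goal state
```

No kinematic model of the "robot" appears anywhere: the update rule needs only observed transitions and rewards, which is precisely the property that makes reinforcement learning attractive when an accurate mathematical model of the cobot and environment is unavailable.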

Despite this progress, existing approaches are insufficient to address the current challenges owing to the rising process complexity and unpredictable anomalies in maintenance scenarios. These urgent challenges are summarized as follows.

Existing research has not fully utilized the three-dimensional spatial and visual features of the various elements in a complex maintenance environment. How can the intention recognition accuracy for maintenance personnel be improved in complicated maintenance scenarios, so that the cobot understands the personnel's intention well?

In most existing decision-making methods, a large share of decisions is still made by operators, and the cobots involved in collaboration hardly have enough intelligence to deal with unknown control tasks. Hence, how can an adaptive method be designed to help the cobot self-learn and interact with an unknown maintenance environment in a collaborative pattern?

How can a user-friendly interface be designed to help maintenance personnel interact with the cobot and execute auxiliary maintenance tasks without the limitations imposed by spatial and human factors?

The aforementioned challenges motivate this article, in which a mixed perception-based human-robot collaborative maintenance approach driven by augmented reality (AR) and online deep reinforcement learning (DRL) is proposed. In the first stage, a mixed perception module is designed to recognize human safety and maintenance requests from human actions and gestures, respectively. The perception results are then transmitted to the DRL-enabled decision-making model. In the second stage, an improved online DRL model with an asynchronous structure and an anti-disturbance function is proposed, which assigns tasks to multiple threads that interact with the environment independently and simultaneously. The improved DRL can therefore learn efficiently and quickly in a multi-continuous action space (e.g., human-robot collaboration decision-making) while, thanks to its asynchronous structure, consuming little memory, which is exactly what HRC maintenance decision-making requires. In the third stage, augmented reality helps operators observe and teleoperate the cobot accurately and intuitively. The motion-planning trajectories produced by the decision-making model are projected into the physical space through AR glasses to avoid potential safety issues, benefiting from the user-friendly interaction interface offered by AR. Finally, the maintenance task is accomplished through visible guidance on the AR glasses. Based on these contributions, the gaps in existing research are filled with a precise understanding of maintenance intention, efficient decision-making, and a convenient interaction interface.
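The asynchronous multi-thread structure described in the second stage can be sketched in miniature: several worker threads each interact with a private copy of the environment and push gradient updates into shared parameters. Everything concrete here is invented for illustration; a noisy quadratic loss stands in for the environment return, and the three-element parameter vector and its target values are arbitrary, not the paper's policy network.

```python
import threading
import numpy as np

# Shared policy parameters (illustrative 3-element linear policy).
theta = np.zeros(3)
lock = threading.Lock()

def rollout_gradient(local_theta, seed):
    """Dummy per-thread interaction: a noisy gradient from a private env copy.

    We pretend the optimal parameters are [1, 2, 3] and return the gradient
    of a noisy quadratic loss around that optimum.
    """
    r = np.random.default_rng(seed)
    target = np.array([1.0, 2.0, 3.0])
    return (local_theta - target) + r.normal(0, 0.01, 3)

def worker(wid, steps=2000, lr=0.01):
    for t in range(steps):
        with lock:
            local = theta.copy()            # snapshot shared parameters
        g = rollout_gradient(local, seed=wid * steps + t)
        with lock:
            theta[:] = theta - lr * g       # asynchronous shared update

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(np.round(theta, 2))
```

Because each thread holds only a small snapshot of the parameters plus its own environment state, the memory footprint stays low while the shared parameters receive updates from all workers, which is the property the asynchronous structure exploits.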

The remainder of this paper is organized as follows. Section 2 reviews the related work and identifies the research gaps. Section 3 presents a three-hierarchy approach integrating maintenance intention recognition, adaptive HRC decision-making, and AR-assisted maintenance. Section 4 implements a real-world case in a typical machining workshop to validate the superiority and capability of the proposed HRC maintenance. Section 5 discusses the overall performance advantages and the training program for maintenance personnel. Finally, Section 6 presents the conclusions of this study and valuable directions for future research. For readability, the notations used in this research are summarized in Table 1.
