Collaborative Operation of Artificial Intelligence and Mechanical Arm Based on Transformer-XL
Abstract
Traditional mechanical arm collaborative operations suffer from insufficient flexibility, slow response, and poor handling of long-term dependencies. To address these problems, this article combines Transformer-XL with deep Q-learning (DQL) to improve the adaptability and decision-making efficiency of mechanical arms in complex dynamic environments, thereby achieving more efficient and accurate collaborative operation. First, the collected sensor data are cleaned and normalized to improve the effectiveness of model training. Second, a deep learning model built on the Transformer-XL architecture strengthens the ability to capture long-term dependency relationships. Then, the DQL algorithm optimizes the decision-making process of the mechanical arm in dynamic environments. Finally, the output of Transformer-XL is combined with DQL to form an efficient collaborative control strategy. Experimental results show that, compared with traditional control methods, DQL combined with Transformer-XL exhibits significant advantages across a variety of task environments. In the more complex dynamic obstacle-avoidance task, the decision time is only 220 milliseconds while the success rate remains 88%, demonstrating faster response and higher success in complex dynamic environments. Even under a delay of 90 seconds, the method still maintains a success rate of 60%, showing stronger adaptability in dynamic environments than traditional control strategies. Its outstanding performance in highly dynamic environments further validates the reliability of the proposed approach.
Keywords: Mechanical Arm; Transformer-XL Architecture; Deep Q-learning; Normalization Processing; Control Strategy
Cite As
W. Zhang, M. Xia, B. Yang, "Collaborative Operation of Artificial Intelligence and Mechanical Arm Based on Transformer-XL",
Engineering Intelligent Systems, vol. 33, no. 5, pp. 499-511, 2025.