Smart Adaptation: The Fusion of AI and Robotics for Dynamic Environments
The advancement of artificial intelligence (AI) has ushered in a new era of automated robots that adapt to their environments.
Study conducted by Prof. Dan ZHANG and his research team
The field of robotics has made remarkable strides over the past few decades, yet it continues to face challenges that hinder the full utilisation of its potential. Traditional robots often rely on pre-programmed instructions and restricted configurations, limiting their ability to respond to unforeseen circumstances. AI technologies, which encompass cognition, analysis, inference, and decision-making, enable robots to operate intelligently, significantly enhancing their capabilities to assist and support humans. As a primary carrier of AI technologies, robotics serves as a key application for AI to demonstrate its robust capabilities.

The integration of AI presents a transformative solution to these challenges, offering innovative approaches that empower robots to learn from their surroundings, adapt to new situations, and make decisions based on real-time data. By augmenting robots with AI technologies within engineering systems, we can expect increasingly pervasive applications in industry, agriculture, logistics, medicine, and beyond, allowing robots to perform complex tasks with greater autonomy and efficiency. This technological enhancement unleashes the potential of robotics in real-world applications, offering solutions to pressing medical and environmental problems and facilitating a paradigm shift towards intelligent manufacturing in the context of the Fourth Industrial Revolution [1].
With the application of AI, a research team led by Dan ZHANG, Director of PolyU-Nanjing Technology and Innovation Research Institute, and Chair Professor of Intelligent Robotics and Automation in the Department of Mechanical Engineering at the Hong Kong Polytechnic University, has fabricated a number of novel robotic systems with high dynamic performance.
Figure 1. Architecture of the grasp pose propagation algorithm, with blue and red points representing grasps before and after processing, respectively. The right part visualises the grasp parameter matrices before and after processing, where colours represent the relative values of the grasp parameters.
Recently, the research team has proposed a grasp pose detection framework that applies deep neural networks to generate a rich set of omnidirectional (in six degrees of freedom, “6-DoF” [i]) grasp poses with high precision [2]. To detect the objects to be grasped, convolutional neural networks (CNNs) are applied within multi-scale cylinders of varying radii, providing detailed geometric information about each object’s location and estimated size (Figure 1). Multiple multi-layer perceptrons (MLPs) then predict the precise parameters with which the robotic manipulator grasps objects, including the gripper width, the grasp score (for specific in-plane rotation angles and gripper depths), and collision detection. These parameters are fed into a pose propagation algorithm within the framework, extending grasps from pre-set configurations to generate comprehensive grasp poses tailored to the scene (Figure 2). Experiments reveal that the proposed method consistently outperforms the benchmark method in laboratory simulations [3], achieving an average success rate of 84.46% in real-world experiments, compared to 78.31% for the benchmark method [4].
Figure 2. Real robot experiment system.
A: UR5 robot.
B: Intel RealSense RGBD camera.
C: Robotic two-finger gripper.
D: Cluttered scene.
E: Box.
Snapshots 1 and 2 are easy-to-grasp objects and hard-to-grasp objects, respectively.
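The multi-scale cylinder step described above can be illustrated with a short sketch: for each grasp candidate, points are cropped from the scene inside cylinders of increasing radius around the approach axis. This is a minimal, hypothetical NumPy example, not the team’s implementation; the point cloud, radii, half-height, and axis are all placeholder assumptions.

```python
import numpy as np

def crop_multiscale_cylinders(points, center, axis, radii, half_height=0.05):
    """Return point subsets inside cylinders of varying radii around a grasp candidate.

    points: (N, 3) scene point cloud; center: cylinder centre point;
    axis: approach direction; radii: iterable of cylinder radii (metres).
    """
    axis = axis / np.linalg.norm(axis)
    rel = points - center                      # vectors from centre to each point
    along = rel @ axis                         # signed distance along the axis
    radial = np.linalg.norm(rel - np.outer(along, axis), axis=1)  # distance to axis
    in_height = np.abs(along) <= half_height
    return [points[(radial <= r) & in_height] for r in radii]

# Toy usage: a random cloud cropped at three cylinder scales; larger radii
# capture more context around the candidate grasp point.
rng = np.random.default_rng(0)
cloud = rng.uniform(-0.2, 0.2, size=(1000, 3))
crops = crop_multiscale_cylinders(cloud, np.zeros(3), np.array([0.0, 0.0, 1.0]),
                                  radii=[0.02, 0.05, 0.1])
print([len(c) for c in crops])  # point counts grow with radius
```

Each crop would then be passed to the corresponding network branch to estimate the object’s local geometry at that scale.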
In addition, the research team leverages AI technologies to enhance the functionality and user experience of a novel robotic knee exoskeleton for the gait rehabilitation of patients with knee joint impairment [5]. The structure of the exoskeleton includes an actuator powered by an electric motor to assist knee flexion/extension actively, an ankle joint that transfers the weight of the exoskeleton to the ground, and a stiffness adjustment mechanism powered by another electric motor (Figure 3). A Long Short-Term Memory (LSTM) network [ii] within a machine learning algorithm provides real-time nonlinear stiffness and torque adjustments, mimicking the biomechanical characteristics of the human knee joint. The network is trained on a large dataset of electromyography (EMG) signals and knee joint movement data [iii], enabling real-time adjustments of the exoskeleton’s stiffness and torque based on the user’s physiological signals and movement conditions. By predicting the necessary adjustments, the system adapts to various gait requirements, enhancing the user’s walking stability and comfort. The integration of an adaptive admittance control algorithm based on Radial Basis Function (RBF) networks enables the robotic knee exoskeleton to adjust joint angles and stiffness parameters automatically, without the need for force or torque sensors. This enhances the accuracy of position control and improves the exoskeleton’s responsiveness to different walking postures, while the data-driven approach refines the model’s predictions and improves overall performance over time. Experimental results demonstrate that the model outperforms traditional fixed control methods in accuracy and real-time responsiveness, generating the desired reference joint trajectory for users at different walking speeds.
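To make the LSTM stage more concrete, the sketch below runs a single LSTM layer over a window of features (standing in for EMG channels and knee-angle samples) and maps the final hidden state to stiffness and torque adjustments. The weights are random placeholders, and the dimensions and linear output head are illustrative assumptions, not the trained network from the study.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x_seq, Wx, Wh, b):
    """Run one LSTM layer over a sequence and return the final hidden state.

    x_seq: (T, D) sequence of feature vectors (e.g. EMG channels + knee angle);
    Wx: (4H, D), Wh: (4H, H), b: (4H,) stacked gate parameters (i, f, g, o).
    """
    H = Wh.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    for x in x_seq:
        z = Wx @ x + Wh @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g                 # update the cell memory
        h = o * np.tanh(c)                # gated output becomes new hidden state
    return h

# Toy usage: 50 time steps of 3 features (2 EMG channels + knee angle).
rng = np.random.default_rng(1)
D, H = 3, 8
Wx = rng.standard_normal((4 * H, D)) * 0.1
Wh = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
W_out = rng.standard_normal((2, H)) * 0.1   # head mapping h -> [stiffness, torque]
window = rng.standard_normal((50, D))
stiffness_adj, torque_adj = W_out @ lstm_forward(window, Wx, Wh, b)
print(stiffness_adj, torque_adj)
```

In the real system the window would slide over streaming sensor data, with the predicted adjustments fed to the admittance controller at each control cycle.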
Figure 3. Mechanical structure of the robotic knee exoskeleton
The research reveals that AI techniques, particularly deep learning, have improved the ability of robots to perceive and understand their environments. This advancement contributes to more effective and flexible solutions for handling tasks beyond fixed configurations in standard settings. The melding of AI and robotics not only enhances precision and accuracy but also introduces new capabilities for robotic automation, enabling real-time decision-making and continuous learning. As a result, robots can improve their performance over time, leading to extended utilisation of robotics in society for future endeavours.
i. This omnidirectional grasp capability refers to the gripper's ability to move left/right, up/down, and forward/backward, as well as to tilt up/down, turn left/right, and tilt from side to side.
ii. A type of recurrent neural network (RNN) architecture designed to learn effectively from sequences of data.
iii. Including knee joint angles and electromyography (EMG) data from the rectus femoris (RF) and semitendinosus (SE) muscles.
References
1. World Economic Forum. (n.d.). Fourth Industrial Revolution. Retrieved October 15, 2024, from https://www.weforum.org/focus/fourth-industrial-revolution/
2. Tang, W., Tang, K., Zi, B., Qian, S., & Zhang, D. (2024). High Precision 6-DoF Grasp Detection in Cluttered Scenes Based on Network Optimization and Pose Propagation. IEEE Robotics and Automation Letters, 9(5), 4407–4414. https://doi.org/10.1109/LRA.2024.3377951
3. Fang, H.S., Wang, C., Gou, M., & Lu, C. (2020). GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping. Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 11441-11450. https://doi.org/10.1109/CVPR42600.2020.01146
4. Mousavian, A., Eppner, C., & Fox, D. (2019). 6-DOF graspnet: Variational grasp generation for object manipulation. Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2901–2910.
5. Chen, B., Zhou, L., Zi, B., Li, E., & Zhang, D. (2024). Adaptive Admittance Control Strategy for a Robotic Knee Exoskeleton With a Nonlinear Variable Stiffness Actuator. IEEE/ASME Transactions on Mechatronics, 1–12. https://doi.org/10.1109/TMECH.2024.3422478
Prof. Dan ZHANG, Director of the PolyU-Nanjing Technology and Innovation Research Institute, and Chair Professor of Intelligent Robotics and Automation in the Department of Mechanical Engineering