Design usually relies on human ingenuity, but the past decade has seen the field's toolbox expand to Artificial Intelligence (AI) and its adjacent methods, making room for hybrid, algorithmic creations.

Collaborative robots are increasingly widely used in our lives, and at the same time the skill learning ability of robots is becoming more and more important. For this reason, this paper proposes a robot skill learning framework based on compliant movement primitives. The framework consists of four modules: kinesthetic teaching, task learning, a compliant movement primitive library, and task generalization. Specifically, trajectories are collected from the robot's kinematics, and stiffness profiles are collected from the designed variable stiffness interface based on stiffness optimization; the collected data are then optimized, segmented, and learned to build the robot's compliant movement primitive library; primitives from the library are adjusted and combined to generate the robot's desired trajectory and desired stiffness, which are fed into a dynamics-based variable impedance controller; the controller then drives the robot to perform the desired compliant motion and complete various tasks. The framework covers the entire process of robot skill learning and application, and the proposed compliant movement primitives achieve the robot's trajectory learning and interactive compliance learning simultaneously. An experiment in which the robot learns to press buttons was carried out on a universal 6-DOF collaborative robot. The experimental results demonstrate the effectiveness and safety of the framework and show its application value.

In this paper, we present how the iterative learning of tasks can be accelerated by a learning from demonstration (LfD) method based on the extraction of via-points. The paper provides an evaluation of the approach on two different primitive motion tasks.
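The variable impedance controller mentioned above can be illustrated with a standard joint-space impedance law, where the learned stiffness profile modulates how aggressively the robot corrects tracking errors. This is only a sketch under common assumptions (diagonal stiffness, damping chosen for critical damping); all names are illustrative, not taken from the paper.

```python
import numpy as np

def impedance_torque(q, dq, q_des, dq_des, K, zeta=1.0):
    """Joint-space variable impedance law: tau = K (q_d - q) + D (dq_d - dq).

    K is the (time-varying) diagonal stiffness, here standing in for the
    learned stiffness profile; damping D is derived from K so each joint
    behaves as a critically damped spring when zeta = 1.
    """
    K = np.asarray(K, dtype=float)
    D = 2.0 * zeta * np.sqrt(K)  # critical damping for unit inertia
    return K * (q_des - q) + D * (dq_des - dq)

# A stiff joint corrects the same position error more aggressively than a
# compliant one, which is exactly what a stiffness profile modulates over
# the course of a task (stiff for precision, soft for safe interaction).
err = np.array([0.1, 0.1])
tau_stiff = impedance_torque(np.zeros(2), np.zeros(2), err, np.zeros(2),
                             K=np.array([400.0, 400.0]))
tau_soft = impedance_torque(np.zeros(2), np.zeros(2), err, np.zeros(2),
                            K=np.array([20.0, 20.0]))
```

Deriving the damping from the stiffness is a common convenience choice; a full implementation would also account for the robot's inertia and gravity terms from its dynamics model.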
Learning from demonstration provides ways to transfer knowledge and skills from humans to robots. Models based solely on learning from demonstration often have very good generalization capabilities but are not completely accurate when adapting to new scenarios. This happens especially when learning stochastic tasks, because of the correspondence problem and unmodeled physical properties of the tasks. On the other hand, reinforcement learning (RL) methods such as policy search have the capability to refine an initial skill through exploration; in policy search, however, the learning process is often very dependent on the initialization of the RL process and is efficient in finding only local solutions. These two approaches are, therefore, frequently combined.

Learning from demonstration involves the extraction of important information from demonstrations and the reproduction of robot action sequences or trajectories with generalization capabilities. Task parameters represent certain dependencies observed in demonstrations that are used to constrain and define a robot action, given the infinite nature of the state-space environment. We present a methodology for learning from demonstration based on a classification of task parameters. The classified task parameters are used to construct a cost function responsible for describing the demonstration data. For reproduction we propose a novel trajectory optimization that is able to generate a simplified version of the trajectory for different configurations of the task parameters. As the last step before reproduction on a real robotic arm, we approximate this trajectory with a Dynamic Movement Primitive (DMP)-based system to retrieve a smooth trajectory. Results obtained for trajectories with three degrees of freedom (two translations and one rotation) show that the system is able to encode multiple task parameters from a low number of demonstrations and generate collision-free trajectories.
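The DMP-based smoothing step can be illustrated with a minimal one-dimensional discrete DMP. Without a learned forcing term, the transformation system reduces to a critically damped spring that converges smoothly to the goal; a forcing term fit to a demonstration shapes the transient into the demonstrated path. This is a sketch with conventional DMP gains, not the authors' implementation.

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, dt=0.01, steps=200,
                alpha=25.0, beta=25.0 / 4.0, forcing=None):
    """Integrate a minimal one-dimensional discrete DMP with Euler steps.

    Transformation system: tau*dz = alpha*(beta*(g - y) - z) + f(x)
                           tau*dy = z
    Canonical system:      tau*dx = -(alpha/3)*x   (phase decays to 0)
    With forcing=None, f = 0 and y converges smoothly to the goal g.
    """
    y, z, x = float(y0), 0.0, 1.0
    path = [y]
    for _ in range(steps):
        f = forcing(x) * x * (g - y0) if forcing else 0.0
        dz = (alpha * (beta * (g - y) - z) + f) / tau
        z += dz * dt
        y += (z / tau) * dt
        x += (-alpha / 3.0 * x / tau) * dt  # phase variable decay
        path.append(y)
    return np.array(path)

# Roll out from 0 toward goal 1 over 2 seconds of simulated time; the
# unforced DMP approaches the goal without overshoot (critical damping).
traj = dmp_rollout(y0=0.0, g=1.0)
```

In practice, one DMP is fit per degree of freedom, and the forcing term is learned (e.g. by locally weighted regression over radial basis functions of the phase) so that the rollout reproduces the optimized trajectory smoothly.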