# Xuefeng Zhou · Zhihao Xu · Shuai Li · Hongmin Wu · Taobo Cheng · Xiaojing Lv

# AI based Robot Safe Learning and Control


Xuefeng Zhou Robotic Team Guangdong Institute of Intelligent Manufacturing Guangzhou, Guangdong, China

Shuai Li School of Engineering Swansea University Swansea, UK

Taobo Cheng Robotic Team Guangdong Institute of Intelligent Manufacturing Guangzhou, Guangdong, China

Zhihao Xu Robotic Team Guangdong Institute of Intelligent Manufacturing Guangzhou, Guangdong, China

Hongmin Wu Robotic Team Guangdong Institute of Intelligent Manufacturing Guangzhou, Guangdong, China

Xiaojing Lv School of Aircraft Maintenance Engineering Guangzhou Civil Aviation College Guangzhou, Guangdong, China

ISBN 978-981-15-5502-2 ISBN 978-981-15-5503-9 (eBook) https://doi.org/10.1007/978-981-15-5503-9

© The Editor(s) (if applicable) and The Author(s) 2020. This book is an open access publication. Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

To our ancestors and parents, as always

## Preface

Robots are hailed as "the pearl at the top of the manufacturing crown" and are an important carrier of the new round of technological revolution and of innovation in manufacturing integration. As robot intelligence continues to improve, human–robot integration has become the inevitable direction for the new generation of robots. Humans and robots each play to their own strengths, completing complex work through natural interaction between operators and robots across various environments. Robots capable of human–robot integration will play an important role in intelligent manufacturing, household service, medical care and education, space exploration and other fields, with very broad research and application prospects.

To achieve human–robot integration, robots must be capable of both human–robot cooperation and autonomous operation. In tasks such as human–robot interaction and flexible assembly, the robot interacts with the external environment directly, which requires the ability to accurately control the interaction force so as to ensure friendliness to the human and the ability to execute the task. On the other hand, because the environment is unstructured, the robot's workspace is affected by humans and other objects. To keep the system safe, the robot must adapt to time-varying space constraints: for instance, when a human enters the robot's workspace, the robot must be able to move away so as to avoid collision; in more extreme cases, after a collision has occurred, the robot needs to respond to it compliantly. As for the robot itself, restricted by its mechanical structure, drive system and other factors, it must satisfy its own behavioral constraints, such as limits on joint angles and angular velocities. In addition, to improve the flexibility of the system, new-generation robots usually have redundant degrees of freedom, and their structural characteristics can be exploited to optimize specific performance indices. Therefore, the study of robot compliance control under complex space and behavior constraints is one of the key technologies for achieving human–robot integration.

Neural networks can simulate the working mechanism of biological neural systems, learning from the environment to realize information processing. Among them, dynamic neural networks have the characteristics of adaptability, nonlinearity, parallelism, distributed storage and so on, and can be used to deal with complex problems that are difficult to solve with traditional methods. At present, dynamic neural networks have made great progress in deep learning, estimation and prediction, image processing, control of complex systems and other fields.

In this book, focusing mainly on the safe control of robot manipulators, we design dynamic neural network based control schemes for robots with redundant Degrees of Freedom (DOFs). The control strategies include adaptive tracking control for robots with model uncertainties, compliance control in uncertain environments, and obstacle avoidance in dynamic workspaces. The idea for this book on the safe control of robot arms was conceived during industrial applications and research discussions in the laboratory. Most of the materials in this book are derived from the authors' papers published in journals such as IEEE Transactions on Industrial Electronics. The robots considered in this book include SCARA and collaborative robots (such as the Kinova JACO2 and the LBR iiwa). Therefore, the control methods developed in this book can be used in real applications after proper modification. To make the contents clear and easy to follow, each part (and even each chapter) of this book is written in a relatively self-contained manner.

This book is divided into the following 6 chapters.

Chapter 1 In this chapter, an adaptive tracking controller is designed for redundant manipulators. Model uncertainties and repeatability are considered. The control scheme requires neither joint accelerations nor Cartesian velocity, which makes it more suitable for practical engineering. By using the pseudo-inverse method, repeatability is optimized in the null space of the Jacobian, and the continuity of the joint velocity is also guaranteed. Future studies will concentrate on the experimental validation of the proposed controller.

Chapter 2 An adaptive kinematic identifier is used to learn kinematic parameters online, and a dynamic neural network is presented to solve the redundancy resolution problem. The interplay of the adaptive online identifier and the neural controller makes the whole a coupled nonlinear system. Using Lyapunov theory, the global convergence of the tracking error is theoretically proved. Numerical experiments and comparisons based on a JACO2 robot arm illustrate the effectiveness of the proposed algorithm and demonstrate its advantages over existing ones. The Jacobian adaptation strategy, together with the recurrent neural network (RNN), achieves task-space tracking in both static and dynamic situations. Pseudo-inverse calculation of the Jacobian matrix is avoided, so the real-time performance of the controller is guaranteed. The boundedness of the joint velocity also protects the robot and enhances safety. Before ending this chapter, it is worth pointing out that this is the first kinematic-regression-based dynamic neural model for self-adaptive redundant manipulator motion control with provable convergence and guaranteed performance bounds.

Chapter 3 In this chapter, we propose an adaptive admittance control method for redundant manipulators based on an RNN, in which model uncertainties in both the interaction model and the physical parameters are taken into consideration. Theoretical derivation using the Lyapunov technique shows the convergence of the proposed adaptive RNN, and numerical results on a 7-DOF iiwa robot demonstrate the effectiveness of the proposed control strategy. Compared with existing control methods, the proposed controller performs well not only in handling physical constraints but also in eliminating the calculation of the pseudo-inverse. Finally, it is remarkable that this is the first time an RNN based method has been extended to force control of redundant manipulators, especially those with model uncertainties. This study will be of great significance in industrial applications such as grinding robots, assembling robots, etc.

Chapter 4 In this chapter, a novel obstacle avoidance strategy is proposed based on a deep recurrent neural network. The robots and obstacles are represented by sets of critical points, so the distance between the robot and an obstacle can be approximately described by point-to-point distances. By examining the nature of escape-velocity methods, a more general description of the obstacle avoidance strategy is proposed. Using the Minimum-Velocity-Norm (MVN) scheme, obstacle avoidance together with path tracking is formulated as a Quadratic Programming (QP) problem, in which physical limits are also considered. By introducing model information, a deep RNN with a simple structure is established to solve the QP problem online. Simulation results show that the proposed method can avoid both static and dynamic obstacles.

Chapter 5 In this chapter, a novel collision-free compliance controller is constructed based on QP and neural networks. Different from existing methods, in this chapter the control problem is described from an optimization perspective, and compliance control and collision avoidance are formulated as equality or inequality constraints. Physical constraints such as limits on joint angles and velocities are also taken into consideration. Before ending this chapter, it is worth pointing out that this is the first RNN based compliance control method that considers the collision avoidance problem in real time, and it also shows great potential in handling physical limitations. In this chapter, simple numerical simulations in MATLAB are carried out to verify the efficiency of the proposed controller. In the future, we will evaluate the control framework with different impedance models in physically realistic simulation environments, and then consider machine vision technology and the system delay problem on physical experimental platforms.

Chapter 6 This chapter focuses on the motion–force control problem for redundant manipulators, while physical constraints and torque optimization are taken into consideration. Firstly, the tracking error and the contact force are modelled in orthogonal spaces, respectively, and the control problem is turned into a QP problem, which is further rewritten at the velocity level by reformulating the objective function and constraints. To handle multiple physical constraints, an RNN based scheme is designed to solve the redundancy resolution online. Numerical experiment results show the validity of the proposed control scheme. Before ending this chapter, it is noteworthy that this is the first work to deal with motion–force control of redundant manipulators in the framework of RNNs; redundant manipulators with force sensitivity, e.g., grinding robots, can be readily controlled with the proposed RNN model but not with existing RNN models in this field.

At the end of this preface, it is worth pointing out that this book also provides some distributed methods for the cooperative control of multiple robot arms and their applications. The ideas in this book may trigger further research in neural networks and robotics, especially neural network based cooperative control of multiple robot arms. There is no doubt that this book can be extended. Any comments or suggestions are welcome, and the authors can be contacted via e-mail: shuaili@ieee.org (Shuai Li).

Xuefeng Zhou, Guangzhou, China
Zhihao Xu, Guangzhou, China
Shuai Li, Swansea, UK
Hongmin Wu, Guangzhou, China
Taobo Cheng, Guangzhou, China
Xiaojing Lv, Guangzhou, China

Feb 2020

## Acknowledgements

During the work on this book, we have had the pleasure of discussing its various aspects and results with many collaborators and students. We highly appreciate their contributions, which helped us improve the manuscript.

We gratefully acknowledge the continuous support of our research by the Natural Science Foundation of Guangdong Province (Grant No. 2020A1515010631), the Guangdong Province Key Areas R&D Program (Grant Nos. 2019B090919002 and 2020B090925001), the Foshan Key Technology Research Project (Grant No. 1920001001148), the Foshan Innovation and Entrepreneurship Team Project (Grant No. 2018IT100173), GDAS' Project of Science and Technology Development (2017GDASCX-0115), the Guangzhou Science Research Plan Major Project (Grant No. 201804020095), the Guangdong Province Science and Technology Major Projects (Grant No. 2017B010110010), and the Guangdong Innovative Talent Project of Young College (Grant No. 2016TQ03X463).

# **Chapter 1 Adaptive Jacobian Based Trajectory Tracking for Redundant Manipulators with Model Uncertainties in Repetitive Tasks**

**Abstract** Tracking control of manipulators, also called kinematic control, has always been a fundamental problem in robot control, especially for redundant robots with higher degrees of freedom. The problem becomes more difficult for systems with model uncertainties. This chapter presents an adaptive tracking controller that considers uncertain physical parameters. Based on real-time feedback of the task-space coordinates and online updating of the motion parameters, a Jacobian adaptive control strategy that requires neither Cartesian velocity nor joint acceleration is established, which makes the controller much simpler. The Jacobian pseudo-inverse method is then used to obtain the optimal repetitive solution as a secondary task. Lyapunov theory is used to prove that the tracking error of the end-effector asymptotically converges to zero. Numerical simulations verify the effectiveness of the proposed method.

#### **1.1 Introduction**

Robots have been widely used in industrial, agricultural, aerospace and other fields. Therefore, research on robotics, especially robot control technology, has been a hot issue in recent decades [1–5]. In order to improve the operation accuracy of the robot, tracking control has always been a fundamental problem in robot control, which has attracted wide attention from researchers.

The tracking control of manipulators can be divided into two categories: joint-space tracking and task-space tracking. The target of joint-space tracking is to design a controller that drives each joint of the robot to track a predetermined trajectory (see, for example, [6, 7] and references therein). The other direction is task-space tracking, in which the desired trajectory is specified in Cartesian space. Since the control commands do not match the target (control commands are sent to the actuators of each joint, whereas the end-effector executes the task in Cartesian space, and the mapping between the two spaces is highly nonlinear), task-space tracking is more difficult than joint-space tracking. Therefore, we should first solve the inverse kinematics problem, that is, obtain the required joint-space positions or velocities, and thereby realize task-space tracking. This can be done offline or online. In the offline approach, the desired path in Cartesian space is discretized into a set of key points, the corresponding joint configurations are determined in turn, and the desired joint velocities and accelerations are obtained by interpolation [8]. Similar studies can be seen in [9, 10]. This method is currently widely used in industrial applications, but it has a certain impact on the real-time performance of the system. For redundant manipulators, there are infinitely many joint configurations corresponding to a particular Cartesian pose. Therefore, a secondary task, such as avoiding obstacles or optimizing energy consumption, can be accomplished by adjusting the joints.
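The offline discretize-then-interpolate approach can be sketched for a hypothetical 2-link planar arm, whose inverse kinematics has a closed form; the link lengths and the circular path below are illustrative values, not taken from this chapter:

```python
import numpy as np

# Hypothetical 2-link planar arm (link lengths are illustrative).
l1, l2 = 1.0, 0.8

def ik_2link(x, y, elbow=1.0):
    """Closed-form inverse kinematics of a 2-link planar arm."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = np.clip(c2, -1.0, 1.0)  # guard against rounding at the reach boundary
    q2 = elbow * np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return np.array([q1, q2])

# Discretize a desired Cartesian path into key points ...
ts = np.linspace(0.0, 1.0, 50)
path = np.stack([1.2 + 0.3 * np.cos(2 * np.pi * ts),
                 0.2 + 0.3 * np.sin(2 * np.pi * ts)], axis=1)
q_key = np.array([ik_2link(px, py) for px, py in path])

# ... and recover the joint velocities by finite differences (interpolation).
dq = np.gradient(q_key, ts, axis=0)
```

Running the forward kinematics through `q_key` reproduces `path`, which is exactly the offline consistency check one would perform before sending the interpolated joint trajectory to the robot.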

With full knowledge of the physical parameters, a series of studies on real-time controllers can be found in [11–14]. In fact, robots usually have model uncertainties, including kinematic uncertainties caused by machining and measurement errors. On the other hand, robots may work with different tools, which also leads to model uncertainties. Parametric drift may make the Jacobian inaccurate, resulting in performance degradation or unpredictable responses, and should therefore be compensated. Before controller design, several calibration methods have been proposed to determine the exact parameters [15, 16]. With the development of optical technology, researchers can measure the exact position and orientation of the end-effector online, and a series of real-time tracking controllers have been proposed. Liu et al. propose an adaptive tracking scheme based on online learning of the Jacobian matrix; by discussing the selection of the control gain in detail, the authors prove the stability of the closed-loop system [18]. In [19], a robust controller considering actuator saturation is designed, and Lyapunov theory establishes the semi-global stability of the system. In [20], a dynamic regulation controller is established, which consists of a transposed Jacobian operator and a gravity compensator. When the desired path is time-varying, Cheah et al. propose a passivity-based tracking controller [21] and prove the global convergence of the tracking error. Liu et al. use a fuzzy logic system to learn the uncertainty of the robot model and design a tracking control scheme based on sliding mode control. However, these studies require Cartesian velocity or joint acceleration, which is difficult to obtain in practice due to hardware constraints. Therefore, Wang et al. propose a tracking controller based on a low-pass filter, which omits the Cartesian velocity measurement [22]. Similar studies can be seen in [23–25].
The research above mainly focuses on the general position control problem under physical uncertainty of robots, and ignores secondary tasks.

Based on the above research, this chapter studies the motion control of redundant manipulators, taking uncertain kinematic parameters into account. In practice, robots are usually scheduled to perform periodic tasks; therefore, we choose repeatability as the secondary task. To avoid measuring task-space velocity and joint acceleration, a new adaptive controller is designed, which achieves the secondary task by optimizing a function in the null space of the Jacobian matrix. Stability analysis and numerical simulations are also provided.

The remainder of this chapter is organized as follows. In the second part, we will introduce the basic kinematics of redundant robots and give several important properties that will be used in the following sections. In the third part, the proposed adaptive controller is discussed in detail, including an adaptive method and repeatable optimization of model parameters. The convergence analysis of tracking error is discussed. In Sect. 1.4, we provide examples and numerical simulations to verify the effectiveness of the proposed tracking method. Finally, Sect. 1.5 concludes the chapter. Before concluding this section, we emphasize the main contributions of this chapter as follows:


#### **1.2 Problem Formulation**

Without loss of generality, the robot manipulator studied in this chapter is selected as a serial robot, which is most commonly used in industrial applications. The kinematic model of a serial robot manipulator is

$$f(\theta(t)) = \mathbf{x}(t),\tag{1.1}$$

where $\theta(t) \in \mathbb{R}^n$ is the vector of joint angles, and $x(t) \in \mathbb{R}^m$ is the vector describing the position and orientation of the end-effector in Cartesian space. $f(\bullet): \mathbb{R}^n \to \mathbb{R}^m$ is the nonlinear forward kinematic mapping of the robot. Differentiating $x(t)$ with respect to time $t$, the Cartesian velocity $\dot{x}(t)$ is formulated as

$$
\dot{\mathbf{x}}(t) = J(\theta(t), a_k)\dot{\theta}(t), \tag{1.2}
$$

where $J(\theta(t), a_k) = \partial f(\theta(t), a_k)/\partial\theta(t) \in \mathbb{R}^{m\times n}$ is the Jacobian matrix. For a redundant manipulator, $n > m$. $a_k \in \mathbb{R}^l$ denotes the vector of kinematic parameters, also called physical parameters, which in this chapter mainly refers to the link lengths; therefore, $a_k$ is considered a constant vector.

The end-effector velocity $J(\theta(t), a_k)\dot{\theta}(t)$ consists of two parts, a physical-parameter-dependent term and a joint angle/speed-dependent term, and can be described in the linear-in-parameters form [21]:

$$J(\theta(t), a\_k)\dot{\theta}(t) = Y\_k(\theta(t), \dot{\theta}(t))a\_k,\tag{1.3}$$

where $Y_k(\theta(t), \dot{\theta}(t)) \in \mathbb{R}^{m\times l}$ is called the kinematic regressor matrix.
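As a concrete instance, consider a hypothetical planar arm whose end-effector position is $\sum_i a_i(\cos\phi_i, \sin\phi_i)$ with absolute link angles $\phi_i = \sum_{j\le i}\theta_j$; the Cartesian velocity is then linear in the link lengths $a_k$, which gives the regressor form (1.3). A minimal numerical check (the 3-link geometry is illustrative):

```python
import numpy as np

def fk(theta, a):
    """End-effector position of an n-link planar arm with link lengths a."""
    phi = np.cumsum(theta)  # absolute link angles
    return np.array([np.sum(a * np.cos(phi)), np.sum(a * np.sin(phi))])

def jacobian(theta, a):
    """J(theta, a_k) = d f / d theta  (2 x n)."""
    phi = np.cumsum(theta)
    n = len(theta)
    J = np.zeros((2, n))
    for j in range(n):
        # joint j moves every link i >= j
        J[0, j] = -np.sum(a[j:] * np.sin(phi[j:]))
        J[1, j] = np.sum(a[j:] * np.cos(phi[j:]))
    return J

def regressor(theta, dtheta):
    """Y_k(theta, dtheta) such that J(theta, a) @ dtheta == Y_k @ a  (2 x n)."""
    phi = np.cumsum(theta)
    dphi = np.cumsum(dtheta)  # phi_i depends on theta_1 .. theta_i
    return np.stack([-np.sin(phi) * dphi, np.cos(phi) * dphi])
```

For any $\theta$, $\dot\theta$, the identity $J(\theta, a_k)\dot\theta = Y_k(\theta, \dot\theta)a_k$ holds exactly, and the Jacobian agrees with a finite-difference derivative of `fk`.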

In this chapter, to avoid measuring the task-space velocity, a low-pass filter is used as follows

$$
\dot{\mathbf{y}} + \lambda\_1 \mathbf{y} = \lambda\_1 \dot{\mathbf{x}},
\tag{1.4}
$$

where $\lambda_1$ is a positive constant and $y$ is the filtered output of the task-space velocity with initial value $y(0) = 0$. Rewriting (1.4) leads to

$$\mathbf{y} = \lambda\_1 \dot{\mathbf{x}} / (p + \lambda\_1),\tag{1.5}$$

where *p* is the Laplace variable.

Combining (1.3) and (1.5), we have

$$\mathbf{y} = W_k(t)a_k,\ \ W_k(t) = \lambda_1 Y_k(\theta, \dot{\theta})/(\lambda_1 + p),\tag{1.6}$$

where $W_k(0) = 0$. For simplicity, we write $J(\theta, a_k)$ and $Y_k(\theta, \dot{\theta})$ as $J$ and $Y_k$, respectively.
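In discrete time, the filter (1.4) (and likewise the regressor filter defining $W_k$) can be realized by a forward-Euler recursion. A minimal sketch, with an illustrative $\lambda_1$ and a piecewise-constant $\dot{x}$ as input:

```python
import numpy as np

lam1, dt = 20.0, 1e-3
t = np.arange(0.0, 1.0, dt)
xdot = np.where(t < 0.5, 1.0, -0.5)  # a piecewise-constant task-space velocity
y = np.zeros_like(t)
for k in range(1, len(t)):
    # forward-Euler step of  y' + lam1 * y = lam1 * xdot
    y[k] = y[k - 1] + dt * lam1 * (xdot[k - 1] - y[k - 1])
```

With time constant $1/\lambda_1 = 0.05\,$s, the filter output settles to each constant value of $\dot{x}$ well within each 0.5 s segment, showing why $y$ can stand in for the unmeasured task-space velocity.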

**Remark 1.1** In real applications, $a_k$ appears in two different forms with different meanings: the actual value $a_k$ and the nominal value $a_k^n$. $a_k^n$ is usually a non-calibrated measurement of $a_k$, provided by the manufacturer or obtained by manual measurement. The real value of $a_k$ is difficult to obtain in practice, and it generally differs from the nominal value $a_k^n$ due to assembly errors and long-term operation (friction, wear, etc.); besides, robots may operate different tools to perform tasks, which also introduces kinematic uncertainty. In this case, a control method that directly uses $a_k^n$ can lead to control errors, which is unacceptable in high-precision tracking control.

#### **1.3 Main Results**

In this section, we show the detailed process of the controller design. Firstly, the ideal case where all parameters are known is considered; then a real-time parameter-updating law is designed for the case where the parameters are unknown, and is further extended to repeatable optimization in the null space. Finally, the stability of the closed-loop system is discussed.

#### *1.3.1 Adaptive Tracking Method*

Define the tracking error in Cartesian space as

$$e(t) = \mathbf{x}(t) - \mathbf{x}\_{\mathbf{d}}(t),\tag{1.7}$$

*(1) Known parameter case*

When the kinematic parameter vector $a_k$ is perfectly known, the accurate Jacobian matrix $J$ can be obtained; therefore, the reference trajectory can be designed as

$$
\ddot{\mathbf{x}}(t) = \ddot{\mathbf{x}}_\mathrm{d}(t) + k_1\dot{\mathbf{x}}_\mathrm{d}(t) - k_2 e(t) - k_1 J\dot{\theta}(t),\tag{1.8}
$$

where $k_1$ and $k_2$ are positive control gains. According to Eq. (1.2), Eq. (1.8) can be reformulated as $\ddot{x}(t) = \ddot{x}_\mathrm{d}(t) + k_1\dot{x}_\mathrm{d}(t) - k_2 e(t) - k_1\dot{x}(t)$. By calculating the second derivative of Eq. (1.7) and substituting Eq. (1.8), we have

$$\begin{split} \ddot{e}(t) &= \ddot{\mathbf{x}}(t) - \ddot{\mathbf{x}}\_{\mathbf{d}}(t) \\ &= k\_1 \dot{\mathbf{x}}\_{\mathbf{d}}(t) - k\_2 e(t) - k\_1 \dot{\mathbf{x}}(t). \end{split} \tag{1.9}$$

Eq. (1.9) can be rewritten as

$$
\ddot{e}(t) + k\_1 \dot{e}(t) + k\_2 e(t) = 0,\tag{1.10}
$$

it is obvious that $e(t)$ will eventually converge to zero if $k_1 > 0$ and $k_2 > 0$, i.e., if the characteristic polynomial $s^2 + k_1 s + k_2$ is Hurwitz. Combining Eqs. (1.8) and (1.2), and letting the initial joint velocity $\dot{\theta}(0)$ be 0, one can easily derive the corresponding joint-speed control signals as below
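The exponential decay predicted by (1.10) can be checked numerically; a forward-Euler integration with illustrative gains $k_1 = 6$, $k_2 = 9$ (characteristic polynomial $(s+3)^2$, critically damped) drives the error essentially to zero within a few seconds:

```python
import numpy as np

k1, k2 = 6.0, 9.0          # s^2 + 6 s + 9 = (s + 3)^2 is Hurwitz
dt = 1e-3
e, edot = 1.0, 0.0         # initial tracking error and its rate
for _ in range(int(5.0 / dt)):  # integrate (1.10) for 5 s
    eddot = -k1 * edot - k2 * e
    e, edot = e + dt * edot, edot + dt * eddot
```

The analytic solution is $e(t) = (1 + 3t)e^{-3t}$, so after 5 s the error is on the order of $10^{-6}$, matching the simulation.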

$$
\dot{\theta} = \dot{\theta}\_j + \dot{\theta}\_n \tag{1.11a}
$$

$$\dot{\theta}_j = \int_0^t J^\dagger\left[(\ddot{\mathbf{x}}_\mathrm{d} + k_1\dot{\mathbf{x}}_\mathrm{d} - k_2 e - k_1 J\dot{\theta}) - \dot{J}\dot{\theta}\right]\mathrm{d}t,\tag{1.11b}$$

$$
\dot{\theta}\_n = (I - J^\dagger J)\alpha,\tag{1.11c}
$$

where $I$ is the $n$-dimensional identity matrix, $J^\dagger = J^\mathrm{T}(JJ^\mathrm{T})^{-1}$ is the pseudo-inverse of $J$, $\dot{\theta}_n$ is a speed component in the null space of the Jacobian, and $\alpha$ can be selected arbitrarily. It is notable that $J\dot{\theta}_n = 0$, indicating that the null-space speed component has no influence on the movement of the end-effector. By taking the time derivative of Eq. (1.2) and substituting Eqs. (1.11), (1.2) and (1.7), one can easily verify that the error dynamics under the kinematic controller Eq. (1.11) is the same as Eq. (1.10), so the tracking error gradually converges to 0.
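The pseudo-inverse and null-space projector used in (1.11) are easy to verify numerically; here a generic random full-row-rank matrix stands in for a robot Jacobian ($m = 3$, $n = 6$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(3, 6))              # stand-in Jacobian, full row rank
J_pinv = J.T @ np.linalg.inv(J @ J.T)    # J† = Jᵀ (J Jᵀ)⁻¹
P = np.eye(6) - J_pinv @ J               # projector onto the null space of J

alpha = rng.normal(size=6)               # arbitrary secondary-task velocity
dtheta_n = P @ alpha                     # null-space component θ̇_n of (1.11c)
```

By construction $J\dot{\theta}_n = 0$ and $JJ^\dagger = I_m$, confirming that the null-space component leaves the end-effector motion untouched.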

**Remark 1.2** Equation (1.8) gives a fundamental description of the reference trajectory in Cartesian space. It is notable that all the required information on the right side of the equation, except $J(\theta, a_k)$, is easy to obtain. This inspires us to design a similar control strategy in the presence of kinematic uncertainties.

*(2) Unknown parameter case*

In this situation, since the exact value of $a_k$ is unknown, $J$ is unknown. Therefore, we use $\hat{J}$ instead of $J$ by replacing $a_k$ with its estimate $\hat{a}_k$, and let $\hat{a}_k(0) = a_k^n$; the estimated Cartesian velocity is then $\hat{\dot{x}}(t) = \hat{J}\dot{\theta}$. Replacing $a_k$ with $\hat{a}_k$ in (1.3), the estimated Cartesian speed $\hat{\dot{x}}$ satisfies

$$
\hat{\dot{\mathbf{x}}} = \hat{J}\dot{\theta} = Y_k(\theta, \dot{\theta})\hat{a}_k.\tag{1.12}
$$

The modified reference trajectory is thus designed as

$$\dot{\theta}(t) = \int_0^t\{\hat{J}^\dagger[\ddot{\mathbf{x}}_\mathrm{d} + (k_1 + k_2)\dot{\mathbf{x}}_\mathrm{d} - k_1 k_2 e - \dot{\hat{J}}\dot{\theta} - k_3 e] - (k_1 + k_2)\dot{\theta}\}\,\mathrm{d}t.\tag{1.13}$$

Since accurate feedback of the Cartesian velocity $\dot{x}$ is unavailable, the derivative of the tracking error $\dot{e} = \dot{x} - \dot{x}_\mathrm{d}$ is also unknown; therefore, we define a surrogate for $\dot{e}$ using the estimated Cartesian speed $\hat{\dot{x}}$:

$$
\Delta\hat{\dot{\mathbf{x}}} = \hat{\dot{\mathbf{x}}} - \dot{\mathbf{x}}_\mathrm{d} = \hat{J}\dot{\theta} - \dot{\mathbf{x}}_\mathrm{d},\tag{1.14}
$$

then the updating law of kinematic parameters is designed as

$$\dot{\hat{a}}_k = k_1 Y_k^\mathrm{T}(\Delta\hat{\dot{\mathbf{x}}} + k_1 e) + k_3 Y_k^\mathrm{T} e - W_k^\mathrm{T}(t)\Gamma_1(W_k(t)\hat{a}_k - \mathbf{y}),\tag{1.15}$$

where $\Gamma_1$ is a positive definite diagonal matrix, and $k_1$, $k_2$ and $k_3$ are positive control gains.

**Remark 1.3** Without loss of generality, the initial value of the estimated kinematic parameters can be set to the nominal value, obtained from a handbook or by manual measurement. The choice of $\hat{a}_k(0)$ does affect the tracking process, as verified in the next section: the greater the error between $\hat{a}_k(0)$ and $a_k$, the greater the initial tracking error in simulation. However, under (1.15), whatever the value of $\hat{a}_k(0)$, the estimate $\hat{a}_k$ will eventually converge to $a_k$, which can be verified by the stability analysis and numerical experiments.
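A much-simplified illustration of the adaptation idea behind (1.15): here only a plain gradient step on the regressor residual $\dot{x} - Y_k\hat{a}_k$ is kept, while the filtered terms ($W_k$, $y$) and the tracking-error terms are dropped, and the arm is the hypothetical planar 3-link model with illustrative link lengths. Under persistently exciting random motions, the estimate converges to the true parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
a_true = np.array([1.0, 0.8, 0.6])   # actual link lengths (illustrative)
a_hat = np.array([1.1, 0.7, 0.5])    # nominal values used as â_k(0)
gamma = 0.02                         # adaptation gain

def regressor(theta, dtheta):
    """Y_k of a planar arm: Cartesian velocity is Y_k @ (link lengths)."""
    phi, dphi = np.cumsum(theta), np.cumsum(dtheta)
    return np.stack([-np.sin(phi) * dphi, np.cos(phi) * dphi])

for _ in range(5000):
    th = rng.uniform(-np.pi, np.pi, 3)       # random excitation
    dth = rng.uniform(-1.0, 1.0, 3)
    Y = regressor(th, dth)
    xdot = Y @ a_true                        # measured Cartesian velocity (noise-free)
    a_hat += gamma * Y.T @ (xdot - Y @ a_hat)  # gradient step on the residual
```

This is only a sketch of the mechanism; the full law (1.15) additionally uses the filtered regressor $W_k$ and the tracking error so that no Cartesian velocity measurement is needed.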

Now, we are ready to offer a theorem about the task-space tracking problem for robots with uncertain physical parameters using the proposed adaptive controller as below.

**Theorem 1.1** The tracking error $e(t)$ of a redundant manipulator, defined in (1.7), globally converges to 0 under the joint-speed controller (1.13) together with the kinematic adaptation law (1.15).

*Proof* Differentiating (1.7) and substituting (1.3) and (1.12), we have

$$\begin{split}\dot{e} &= \dot{\mathbf{x}} - \hat{\dot{\mathbf{x}}} + \hat{\dot{\mathbf{x}}} - \dot{\mathbf{x}}_\mathrm{d}\\ &= Y_k a_k - Y_k\hat{a}_k + \hat{\dot{\mathbf{x}}} - \dot{\mathbf{x}}_\mathrm{d}\\ &= -Y_k\tilde{a}_k + \Delta\hat{\dot{\mathbf{x}}}.\end{split}\tag{1.16}$$

Taking the time derivative of $\Delta\hat{\dot{x}}$ and combining Eqs. (1.14) and (1.16) yields

$$\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}(\Delta\hat{\dot{\mathbf{x}}}) &= \dot{\hat{J}}\dot{\theta} + \hat{J}\ddot{\theta} - \ddot{\mathbf{x}}_\mathrm{d}\\ &= (k_1 + k_2)\dot{\mathbf{x}}_\mathrm{d} - k_1 k_2 e - k_3 e - k_2\hat{J}\dot{\theta} - k_1\hat{J}\dot{\theta}\\ &= k_2\dot{\mathbf{x}}_\mathrm{d} - k_2\hat{J}\dot{\theta} - k_1 k_2 e - k_3 e + k_1\dot{\mathbf{x}}_\mathrm{d} - k_1(\dot{\mathbf{x}}_\mathrm{d} + \dot{e} + Y_k\tilde{a}_k)\\ &= -k_2\Delta\hat{\dot{\mathbf{x}}} - k_1 k_2 e - k_3 e - k_1 Y_k\tilde{a}_k - k_1\dot{e},\end{split}\tag{1.17}$$

where $\tilde{a}_k = \hat{a}_k - a_k$ denotes the estimation error between the estimated physical parameters $\hat{a}_k$ and the real ones $a_k$ (note that $\dot{\tilde{a}}_k = \dot{\hat{a}}_k$, since $a_k$ is constant). Eq. (1.17) can be written as

$$\frac{\mathrm{d}}{\mathrm{d}t}(\Delta\hat{\dot{\mathbf{x}}} + k_1 e) = -k_2(\Delta\hat{\dot{\mathbf{x}}} + k_1 e) - k_3 e - k_1 Y_k\tilde{a}_k.\tag{1.18}$$

Select a Lyapunov function candidate as follows

$$V = (\Delta\hat{\dot{\mathbf{x}}} + k_1 e)^\mathrm{T}(\Delta\hat{\dot{\mathbf{x}}} + k_1 e)/2 + k_3 e^\mathrm{T} e/2 + \tilde{a}_k^\mathrm{T}\tilde{a}_k/2.\tag{1.19}$$

By taking the time derivative of (1.19) and combining (1.15), (1.16) and (1.18), we have

$$\begin{split}\dot{V} &= (\Delta\hat{\dot{\mathbf{x}}} + k_1 e)^\mathrm{T}\,\mathrm{d}(\Delta\hat{\dot{\mathbf{x}}} + k_1 e)/\mathrm{d}t + k_3 e^\mathrm{T}\dot{e} + \tilde{a}_k^\mathrm{T}\dot{\tilde{a}}_k\\ &= (\Delta\hat{\dot{\mathbf{x}}} + k_1 e)^\mathrm{T}(-k_2(\Delta\hat{\dot{\mathbf{x}}} + k_1 e) - k_3 e - k_1 Y_k\tilde{a}_k) + \tilde{a}_k^\mathrm{T}(k_1 Y_k^\mathrm{T}(\Delta\hat{\dot{\mathbf{x}}} + k_1 e) + k_3 Y_k^\mathrm{T} e\\ &\quad - W_k^\mathrm{T}(t)\Gamma_1(W_k(t)\hat{a}_k - \mathbf{y})) + k_3 e^\mathrm{T}(-Y_k\tilde{a}_k + \Delta\hat{\dot{\mathbf{x}}})\\ &= -k_2(\Delta\hat{\dot{\mathbf{x}}} + k_1 e)^\mathrm{T}(\Delta\hat{\dot{\mathbf{x}}} + k_1 e) - k_1 k_3 e^\mathrm{T} e - \tilde{a}_k^\mathrm{T} W_k^\mathrm{T}(t)\Gamma_1 W_k(t)\tilde{a}_k\\ &\leq 0.\end{split}\tag{1.20}$$

Then we can obtain that $\Delta \hat{\dot{x}}$, $e$ and $\tilde{a}\_k$ are all bounded. Based on Eqs. (1.14) and (1.3), $\hat{J}\dot{\theta}$, $\hat{a}\_k$ and $Y\_k\tilde{a}\_k$ are also bounded. Noting that $W\_k(t)\tilde{a}\_k$ is the output of a stable system with bounded input $Y\_k(t)\tilde{a}\_k$, $W\_k(t)\tilde{a}\_k$ is also bounded. Then, according to Eq. (1.15), $\dot{\hat{a}}\_k$ is bounded. Differentiating $W\_k(t)\tilde{a}\_k$ with respect to time, we have

$$\frac{d}{dt}(W\_k(t)\tilde{a}\_k) = \lambda\_1(Y\_k - W\_k(t))\tilde{a}\_k + W\_k(t)\dot{\tilde{a}}\_k. \tag{1.21}$$

Since $\dot{\hat{a}}\_k$ is bounded, $\mathrm{d}(W\_k(t)\tilde{a}\_k)/\mathrm{d}t$ is also bounded. Then $\dot{e}$, $\mathrm{d}(\Delta \hat{\dot{x}})/\mathrm{d}t$ and $\mathrm{d}(W\_k(t)\tilde{a}\_k)/\mathrm{d}t$ are all bounded, which means that $\ddot{V}$, the time derivative of (1.20), is bounded. Using Barbalat's lemma, we have $\Delta \hat{\dot{x}} + k\_1 e \to 0$ and $e \to 0$ as $t \to \infty$.
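Equations (1.15) and (1.21) treat $W\_k(t)$ as a first-order low-pass-filtered copy of $Y\_k(t)$, i.e. $\dot{W}\_k = \lambda\_1(Y\_k - W\_k)$. A minimal scalar sketch of such a filter (the function name and the constant-input test are illustrative, not part of the chapter's simulation):

```python
def lowpass_step(w, y, lam, dt):
    """One forward-Euler step of the filter dW/dt = lam * (Y - W)."""
    return w + lam * (y - w) * dt

# Filtering a constant input y = 1.0 from w(0) = 0 gives w(t) = 1 - exp(-lam*t),
# so the filter output stays bounded whenever its input is bounded.
lam, dt = 40.0, 1e-4
w = 0.0
for _ in range(2000):          # simulate 0.2 s
    w = lowpass_step(w, 1.0, lam, dt)
print(w)                        # close to 1
```

This boundedness is exactly what the proof uses: a stable filter driven by the bounded signal $Y\_k(t)\tilde{a}\_k$ produces a bounded output $W\_k(t)\tilde{a}\_k$.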

**Remark 1.4** We have proved the convergence of the tracking error under kinematic uncertainties. In fact, when $a\_k$ is perfectly known, Eq. (1.13) degenerates into

$$\dot{\theta}(t) = \int\_0^t [J^\dagger(\ddot{\mathbf{x}}\_{\mathrm{d}} + (k\_1 + k\_2)\dot{\mathbf{x}}\_{\mathrm{d}} - k\_1k\_2e - \dot{J}\dot{\theta} - k\_3e) - (k\_1 + k\_2)\dot{\theta}]dt, \tag{1.22}$$

which has a similar form to Eq. (1.11). Therefore, the known-parameter case described in Eq. (1.11) can be considered as a special case of Eq. (1.13).

**Remark 1.5** The control velocity $\dot{\theta}$ in Eq. (1.13) is not the final result of this chapter, since the velocity component in the null space is ignored. Although this component has no effect on the movement of the end-effector or on the stability proof, it cannot be neglected, because the redundancy is of great engineering significance to the manipulator.

#### **Algorithm 1** The proposed tracking method

**Input:** Parameters $k\_1$, $k\_2$, $k\_3$, $K$, $\Gamma\_1$, initial states $\dot{\theta}(0) = 0$, $\theta(0)$, nominal kinematic parameter $\hat{a}\_k(0)$, desired path $x\_{\mathrm{d}}(t)$, $\dot{x}\_{\mathrm{d}}(t)$ and $\ddot{x}\_{\mathrm{d}}(t)$, task duration $T\_e$, feedback of the end-effector $x(t)$, analytical expressions of the estimated Jacobian matrix $\hat{J}$ and the kinematic regressor matrix $Y\_k$.

**Output:** To achieve task-space tracking of the redundant manipulator

```
1. Initialize â_k(0) ← a_k^n.
```

**Until** ($t > T\_e$)

#### *(3) Repeatability optimization*

In this subsection, in order to make full use of the redundant design of the manipulator, a repeatability optimization scheme is developed in the null space of the Jacobian matrix, which helps to improve the stability and reliability of robots in periodic tasks.

Define the following function to describe a robot's repeatability:

$$F(\theta) = -K \left(\theta - \theta\_{ini}\right)^{\mathrm{T}} (\theta - \theta\_{ini}) / 2,\tag{1.23}$$

where $K$ is a positive parameter scaling the weight of the repeatability optimization, and $\theta\_{ini}$ is the initial value of the joint angles. By using the gradient descent method, a velocity component in the null space can be calculated as

$$\alpha = [\partial F(\theta)/\partial \theta\_1, \dots, \partial F(\theta)/\partial \theta\_n]^{\mathrm{T}}.\tag{1.24}$$

Combining Eqs. (1.24) and (1.23), we have

$$\boldsymbol{\alpha} = \left[ \theta\_{ini}(1) - \theta(1), \dots, \theta\_{ini}(i) - \theta(i), \dots, \theta\_{ini}(n) - \theta(n) \right]^{\mathrm{T}},\tag{1.25}$$

where $\theta\_{ini}(i)$ and $\theta(i)$ represent the $i$th elements of $\theta\_{ini}$ and $\theta$, respectively, $i = 1, \cdots, n$.

Then the complete form of the proposed adaptive controller is

$$
\dot{\theta} = \dot{\theta}\_j + \dot{\theta}\_n \tag{1.26a}
$$

$$\dot{\theta}\_j = \int\_0^t \left[ J^\dagger (\ddot{\mathbf{x}}\_d + (k\_1 + k\_2)\dot{\mathbf{x}}\_d - k\_1 k\_2 e - \dot{J}\dot{\theta} - k\_3 e) - (k\_1 + k\_2)\dot{\theta} \right] dt \tag{1.26b}$$

$$\dot{\theta}\_n = (I - J^\dagger J)[\theta\_{ini}(1) - \theta(1), \dots, \theta\_{ini}(i) - \theta(i), \dots, \theta\_{ini}(n) - \theta(n)]^{\mathrm{T}} \tag{1.26c}$$

$$\dot{\hat{a}}\_{k} = k\_1 Y\_k^{\mathrm{T}} (\Delta \hat{\dot{x}} + k\_1 e) + k\_3 Y\_k^{\mathrm{T}} e - W\_k^{\mathrm{T}}(t) \Gamma\_1 (W\_k(t) \hat{a}\_k - \mathbf{y}) \tag{1.26d}$$
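The defining property of the null-space term (1.26c) — it produces no end-effector motion — can be checked numerically. The sketch below uses the planar 4-link Jacobian from the Appendix; the joint values are illustrative, and the link lengths are the nominal ones from Sect. 1.4.1:

```python
import numpy as np

def jacobian(theta, a):
    """2x4 Jacobian of the planar 4-link arm (Appendix expressions)."""
    c = np.cumsum(theta)                      # theta1, theta1+theta2, ...
    J = np.zeros((2, len(theta)))
    for i in range(len(theta)):
        J[0, i] = -np.sum(a[i:] * np.sin(c[i:]))
        J[1, i] =  np.sum(a[i:] * np.cos(c[i:]))
    return J

theta     = np.array([0.3, -0.5, 0.2, 0.4])   # illustrative configuration
theta_ini = np.array([np.pi / 2, -np.pi / 2, 0.0, 0.0])
a         = np.array([0.25, 0.25, 0.12, 0.18])

J  = jacobian(theta, a)
Jp = np.linalg.pinv(J)
alpha    = theta_ini - theta                  # gradient direction, Eq. (1.25)
dtheta_n = (np.eye(4) - Jp @ J) @ alpha       # null-space velocity, Eq. (1.26c)

print(np.linalg.norm(J @ dtheta_n))           # ~0: the end-effector is undisturbed
```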

**Fig. 1.1** Change curve of $K$ with time $t$

Note that at the beginning of each tracking cycle, repeatability is less important, and its weight then rises as the task continues. To this end, we set $K$ as a time-varying parameter:

$$K = \begin{cases} 0 & NT \le t < NT + T/2, \\ K^{\*} & NT + T/2 \le t < (N+1)T, \end{cases} \tag{1.27}$$

where $K^{\*} = K\_{max}(1 - \cos(\pi(t - NT - T/2)/T))$, $N = 0, 1, 2, \ldots$ are natural numbers, and $T$ is the period of the cyclic motion. If $t < NT + T/2$, the robot has just left the initial state to perform a task, thus we let $K = 0$; this makes $\alpha = 0$, and the joint control velocity is the same as in (1.13). When $t > NT + T/2$, $K$ increases from 0 to its maximum value $K\_{max}$ with time, forcing the robot to return to the initial state. The change curve of $K$ with time is shown in Fig. 1.1.
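The schedule (1.27) is straightforward to transcribe; a minimal sketch (the period and gain values below are illustrative, the half-cosine rise is from the text):

```python
import math

def K_gain(t, T, K_max):
    """Time-varying repeatability weight K(t) of Eq. (1.27)."""
    tau = t % T                               # position within the current cycle
    if tau < T / 2:
        return 0.0                            # first half-cycle: pure tracking
    return K_max * (1.0 - math.cos(math.pi * (tau - T / 2) / T))

T, K_max = math.pi, 10.0
print(K_gain(0.25 * T, T, K_max))             # 0.0
print(K_gain(0.50 * T, T, K_max))             # 0.0 (K rises from zero, continuously)
print(K_gain(0.999 * T, T, K_max))            # close to K_max
```

As Remark 1.6 observes, $K$ drops back to 0 at $t = NT$; the jump is harmless only because $\theta - \theta\_{ini}$ has already been driven to zero by then.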

**Remark 1.6** The main reason for this choice of $K$ is to ensure the continuity of the joint speed signals during a motion cycle. Note that discontinuities of $K$ still appear at the moments $t = NT$. However, if the robot can repeat the initial joint state, $\theta - \theta\_{ini}$ converges to 0, so $\alpha$ can also be regarded as continuous. Therefore, the definition of $K$ in (1.27) is acceptable.

#### **1.4 Numerical Simulations**

In this section, several groups of numerical experiments are carried out to show the effectiveness of the designed controller. Firstly, a comparative simulation is given to show that the adaptive tracking law achieves satisfying performance in the presence of kinematic uncertainties. Secondly, we check the performance in periodic tasks. Finally, more general trajectories are discussed to show the adaptiveness and robustness of the control algorithm.

**Fig. 1.2** The 4-DOF redundant manipulator to be simulated in this chapter. Left: Physical structure of the 4-link robot manipulator. Right: D-H parameters

#### *1.4.1 Simulation Settings*

The vector of initial joint angles is selected as $\theta\_{ini} = [\pi/2, -\pi/2, 0, 0]^{\mathrm{T}}$ rad, and the corresponding Cartesian position is $x\_{ini} = [0.6, 0.3]^{\mathrm{T}}$ m. Since the exact values of the kinematic parameters (see $d\_i$ in Fig. 1.2) are unknown, we assume the nominal values to be $a\_k^n = [0.25, 0.25, 0.12, 0.18]^{\mathrm{T}}$ m, and let $\hat{a}\_k(0) = a\_k^n$. The control gains are set to $k\_1 = 50$, $k\_2 = 50$, $k\_3 = 50$, and $\Gamma\_1 = 10$. As to the repeatable tasks, the parameter scaling the velocity component in the null space is selected as $K\_{max} = 10$. The time constant of the low-pass filter is $\lambda = 40$. Note that the matrix $\hat{J}$, which estimates the actual Jacobian matrix $J(\theta, a\_k)$, is essential in the proposed tracking controller. To further show the details of the proposed controller, the analytical expression of $\hat{J}$ is given in the Appendix.

#### *1.4.2 Verification of Parameter Estimation*

A comparative simulation is first carried out to show the effectiveness of the proposed updating law (1.15). The desired path to be tracked is defined as $x\_{\mathrm{d}}(t) = 0.4 + 0.2\cos(2t)$, $y\_{\mathrm{d}}(t) = 0.3 + 0.2\sin(2t)$. In the first simulation, the nominal values are used directly in the tracking control according to Eq. (1.13). By contrast, in the comparative simulation $\hat{a}\_k$ is updated using (1.15), and $\alpha$ is set to zero (i.e., repeatability optimization is not used in this part). Simulation results are shown in Fig. 1.3. Both controllers ensure the boundedness of the tracking error. When $a\_k$ is known, benefiting from the closed-loop control mechanism, the tracking errors along the two axes are much less than 5 mm. The tracking errors with parameter estimation are less

**Fig. 1.3** Error profile with and without parameter estimation when tracking a circle. **a** Tracking errors without parameter estimation. **b** Tracking errors with parameter estimation. **c** Norm of tracking errors with and without parameter estimation

than 1 mm. Figure 1.3c shows the comparative tracking error norms corresponding to known and unknown $a\_k$, intuitively showing the effectiveness of the proposed controller under the condition of unknown models.

#### *1.4.3 Verification of Repeatability Optimization*

Then we check the effectiveness of the repeatability optimization. Based on the simulation of the previous part, we introduce the proposed repeatability optimization scheme (i.e., the controller is the same as the adaptive tracking controller in the previous part except that $\alpha$ is no longer set to zero). Simulation results are shown in Fig. 1.4. The curve of the tracking error $e$ is the same as the one when $\alpha = 0$, showing that the velocity component in the null space has no influence on the Cartesian movement (Fig. 1.4a). The estimated kinematic parameters $\hat{a}\_k$ are shown in Fig. 1.4b, which slowly converge to $a\_k$ with time. The error norm $||Y\_k\hat{a}\_k - \dot{x}||\_2$ of the estimated Cartesian speed reduces to zero quickly, as shown in Fig. 1.4c. The curve of the repeatability function is shown in Fig. 1.4d; we can observe that when $t = T, 2T, \cdots$, $||q - q\_{ini}||\_2$ equals zero. That is, when repeatability optimization is used, $||q - q\_{ini}||\_2$ changes periodically. Figure 1.4e shows the motion trajectory traced by the end-effector of the robot manipulator, illustrating precise tracking of the desired circular trajectory.

#### *1.4.4 Cardioid Tracking*

To further verify the effectiveness of the proposed control scheme, the manipulator is required to track a cardioid trajectory in the 2-D workspace. The desired path is defined as $x\_{\mathrm{d}}(t) = 0.1(2\sin(2t) - \sin(4t)) + 0.6$ m, $y\_{\mathrm{d}}(t) = 0.1(2\cos(2t) - \cos(4t)) + 0.2$ m. Simulation results are shown in Fig. 1.5. The motion trajectory traced by the end-effector of the manipulator is shown in Fig. 1.5a. The corresponding tracking

**Fig. 1.4** Simulation results with parameter estimation when tracking a circle. **a** Tracking error profile. **b** Estimated parameter *a*ˆ*<sup>k</sup>* . **c** Difference between the estimated value *Yka*ˆ*<sup>k</sup>* and the real one. **d** ||*q* − *qini*||<sup>2</sup> with repeatability optimization. **e** Motion trajectory

errors are given in Fig. 1.5b, showing that the robot successfully tracks the given trajectory. $||q - q\_{ini}||\_2$ is guaranteed to be 0 when $t = T, 2T, 3T$ (Fig. 1.5e), and the estimated kinematic parameters are shown in Fig. 1.5c. All in all, the proposed controller ensures stable tracking under the condition of model uncertainties, and repeatability is also achieved.

#### **1.5 Summary**

In this chapter, an adaptive tracking controller is designed for redundant manipulators, considering both model uncertainties and repeatability. The control scheme requires neither joint accelerations nor Cartesian velocity measurements, which makes it more suitable for practical engineering. By using the pseudo-inverse method, repeatability is optimized in the null space of the Jacobian, and the continuity of the joint speed is also guaranteed. Future studies will concentrate on the experimental validation of the proposed controller.

**Fig. 1.5** Simulation results when tracking a cardioid curve. **a** Motion trajectory of the manipulator. **b** Tracking error. **c** Estimated parameter *a*ˆ*<sup>k</sup>* . **d** Difference between the estimated value *Yka*ˆ*<sup>k</sup>* and the real one. **e** ||*q* − *qini*||<sup>2</sup> with repeatability optimization

#### **Appendix**

Given the joint angles $\theta = [\theta\_1, \theta\_2, \theta\_3, \theta\_4]^{\mathrm{T}}$ and the estimate $\hat{a}\_k = [\hat{a}\_k(1), \hat{a}\_k(2), \hat{a}\_k(3), \hat{a}\_k(4)]^{\mathrm{T}}$, and abbreviating $\cos(\theta\_i) = c\_i$, $\sin(\theta\_i) = s\_i$, $\hat{a}\_k(i) = a\_i$, the analytical expression of $\hat{J}$ is given below.

$$\begin{array}{l} \hat{J}(1,1) = -a\_1s\_1 - a\_2s\_{12} - a\_3s\_{123} - a\_4s\_{1234} \\ \hat{J}(1,2) = -a\_2s\_{12} - a\_3s\_{123} - a\_4s\_{1234} \\ \hat{J}(1,3) = -a\_3s\_{123} - a\_4s\_{1234} \\ \hat{J}(1,4) = -a\_4s\_{1234} \\ \hat{J}(2,1) = a\_1c\_1 + a\_2c\_{12} + a\_3c\_{123} + a\_4c\_{1234} \\ \hat{J}(2,2) = a\_2c\_{12} + a\_3c\_{123} + a\_4c\_{1234} \\ \hat{J}(2,3) = a\_3c\_{123} + a\_4c\_{1234} \\ \hat{J}(2,4) = a\_4c\_{1234}. \end{array}$$

Based on the analytical expression of $\hat{J}$ given above, $\dot{\hat{J}}$ can be formulated as follows.

$$\begin{array}{l} \dot{\hat{J}}(1,1) = -a\_1 c\_1 \dot{\theta}\_1 - a\_2 c\_{12} (\dot{\theta}\_1 + \dot{\theta}\_2) - a\_3 c\_{123} (\dot{\theta}\_1 + \dot{\theta}\_2 + \dot{\theta}\_3) \\ \qquad\qquad - a\_4 c\_{1234} (\dot{\theta}\_1 + \dot{\theta}\_2 + \dot{\theta}\_3 + \dot{\theta}\_4) \\ \dot{\hat{J}}(1,2) = -a\_2 c\_{12} (\dot{\theta}\_1 + \dot{\theta}\_2) - a\_3 c\_{123} (\dot{\theta}\_1 + \dot{\theta}\_2 + \dot{\theta}\_3) - a\_4 c\_{1234} (\dot{\theta}\_1 + \dot{\theta}\_2 + \dot{\theta}\_3 + \dot{\theta}\_4) \\ \dot{\hat{J}}(1,3) = -a\_3 c\_{123} (\dot{\theta}\_1 + \dot{\theta}\_2 + \dot{\theta}\_3) - a\_4 c\_{1234} (\dot{\theta}\_1 + \dot{\theta}\_2 + \dot{\theta}\_3 + \dot{\theta}\_4) \\ \dot{\hat{J}}(1,4) = -a\_4 c\_{1234} (\dot{\theta}\_1 + \dot{\theta}\_2 + \dot{\theta}\_3 + \dot{\theta}\_4) \end{array}$$

$$\begin{array}{l} \dot{\hat{J}}(2,1) = -a\_{1}s\_{1}\dot{\theta}\_{1} - a\_{2}s\_{12}(\dot{\theta}\_{1} + \dot{\theta}\_{2}) - a\_{3}s\_{123}(\dot{\theta}\_{1} + \dot{\theta}\_{2} + \dot{\theta}\_{3}) \\ \qquad\qquad - a\_{4}s\_{1234}(\dot{\theta}\_{1} + \dot{\theta}\_{2} + \dot{\theta}\_{3} + \dot{\theta}\_{4}) \\ \dot{\hat{J}}(2,2) = -a\_{2}s\_{12}(\dot{\theta}\_{1} + \dot{\theta}\_{2}) - a\_{3}s\_{123}(\dot{\theta}\_{1} + \dot{\theta}\_{2} + \dot{\theta}\_{3}) - a\_{4}s\_{1234}(\dot{\theta}\_{1} + \dot{\theta}\_{2} + \dot{\theta}\_{3} + \dot{\theta}\_{4}) \\ \dot{\hat{J}}(2,3) = -a\_{3}s\_{123}(\dot{\theta}\_{1} + \dot{\theta}\_{2} + \dot{\theta}\_{3}) - a\_{4}s\_{1234}(\dot{\theta}\_{1} + \dot{\theta}\_{2} + \dot{\theta}\_{3} + \dot{\theta}\_{4}) \\ \dot{\hat{J}}(2,4) = -a\_{4}s\_{1234}(\dot{\theta}\_{1} + \dot{\theta}\_{2} + \dot{\theta}\_{3} + \dot{\theta}\_{4}). \end{array}$$
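The Appendix expressions can be transcribed compactly by noting that column $i$ of $\hat{J}$ sums links $i$ through 4 over the cumulative angles. The sketch below (with the nominal link values from Sect. 1.4.1 and illustrative joint states) cross-checks $\dot{\hat{J}}$ against a finite difference of $\hat{J}$:

```python
import numpy as np

def J_hat(theta, a):
    """Planar 4-link Jacobian, matching the Appendix term by term."""
    c = np.cumsum(theta)                      # cumulative angles theta_1..i
    return np.array([
        [-np.sum(a[i:] * np.sin(c[i:])) for i in range(4)],
        [ np.sum(a[i:] * np.cos(c[i:])) for i in range(4)],
    ])

def J_hat_dot(theta, dtheta, a):
    """Time derivative of J_hat, matching the Appendix term by term."""
    c, cd = np.cumsum(theta), np.cumsum(dtheta)
    return np.array([
        [-np.sum(a[i:] * np.cos(c[i:]) * cd[i:]) for i in range(4)],
        [-np.sum(a[i:] * np.sin(c[i:]) * cd[i:]) for i in range(4)],
    ])

theta  = np.array([0.3, -0.5, 0.2, 0.4])      # illustrative joint state
dtheta = np.array([0.1, 0.2, -0.3, 0.05])
a      = np.array([0.25, 0.25, 0.12, 0.18])

h  = 1e-6                                     # finite-difference step in time
fd = (J_hat(theta + h * dtheta, a) - J_hat(theta, a)) / h
print(np.max(np.abs(fd - J_hat_dot(theta, dtheta, a))))   # ~0: analytic and numeric agree
```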

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 2 RNN Based Trajectory Control for Manipulators with Uncertain Kinematic Parameters**

**Abstract** In the tracking control of redundant manipulators, practical issues such as model uncertainties and physical limitations may arise. Conventional solutions may fail when model parameters differ from the nominal ones. In this chapter, a novel kinematic controller with the capability of self-adaptation is proposed to address this challenging issue. Based on coordinate feedback, a Jacobian adaption strategy is first built by updating kinematic parameters online. Using the Karush–Kuhn–Tucker conditions, the redundancy resolution problem is then turned into a quadratic optimization one, and a recurrent neural network based controller is designed to derive the optimal solution recurrently. Theoretical analysis demonstrates the global convergence of the tracking error. Compared with existing methods, kinematic model uncertainty of the robot is allowed in this chapter, the pseudoinverse of the Jacobian matrix is avoided, and physical limitations are considered in a unified framework. Numerical experiments based on the Kinova JACO2 show the effectiveness of the proposed controller.

#### **2.1 Introduction**

With the development of mechanics, electronics, and computer technology, robot manipulators are becoming popular in modern manufacturing tasks such as welding, painting, and assembly [1–4]. Among these applications, tracking control of manipulators, which focuses on calculating control actions that drive the robot along a user-defined trajectory in Cartesian space, has always been a core problem in robot control, and has been studied intensively in recent decades.

Redundant manipulators have more degrees of freedom (DOFs) than those required to accomplish a given task [5], and have shown great potentials in enhancing robot flexibility, dexterity, and versatility, avoiding obstacles [6–9], and optimizing energy consumption [10]. However, the nonlinear function description from the joint to Cartesian space, as well as the redundancy in DOFs, makes it a challenging problem to achieve precise tracking control of redundant manipulators.

In recent decades, some results on resolving the redundancy of manipulators have been reported. In most approaches, the problem is solved at the velocity or acceleration level, namely, deriving the corresponding joint velocity or acceleration according to the trajectory description in Cartesian space. Masayuki et al. [12] propose a redundancy resolution method for an S-R-S redundant manipulator at the angle level, in which analytic solutions are first derived, and analytical methods for joint-limit avoidance are then considered. However, this method is effective only for robots with a specific configuration and is not scalable to manipulators with a general mechanical structure. To solve the kinematic control problem for general configurations, various controllers have been proposed, including adaptive control methods [13, 14], barrier-Lyapunov-function based methods [15, 16], and Jacobian-matrix-pseudo-inverse (JMPI) methods [17–19]. In [17], an asymmetric barrier Lyapunov function based method is introduced to handle the output limitation. This method consists of a full state feedback controller and an output feedback controller. Using the JMPI method, one can obtain the control signals in joint space from the desired path and the pseudo-inverse of the Jacobian matrix. For a redundant manipulator, the Jacobian matrix has a null space [20], which is helpful for designing controllers that consider a secondary task. Therefore, JMPI based methods have been widely used in redundancy resolution. Galicki [21] proposes a JMPI based tracking controller, and an alternative method around the singular point is discussed. In [22], a weighted damped least-squares method is developed to calculate the pseudo-inverse around singularities, and an appropriate damping factor is derived according to the minimum singular value. In [23], the pseudo-inverse of the Jacobian is calculated online by a Taylor-type discrete-time neural network, which is composed of T-ZNN-K and T-ZNN-U models.
In [24], a special type of nonlinear function based neural network is designed for tracking control of a PA10 manipulator, and the finite-time convergence of tracking error is also analyzed.

Although the above-mentioned methods have achieved great success, they suffer from several major limitations in scenarios that demand higher real-time performance, accuracy, and self-adaptation. Firstly, precise kinematic parameters are required in existing works. Describing the mapping from movement in joint space to movement of the end-effector in Cartesian space, the Jacobian matrix contains kinematic characteristics, such as the configuration and the kinematic parameters. For a specified robot, the configuration can be derived, but it is usually difficult to obtain accurate kinematic parameters. For example, because of manufacturing errors, different operation tools, etc., the DH parameters may differ from the reference ones in official guidebooks [25]. In this case, a Jacobian matrix based on the inaccurate parameters would cause errors in the pseudo-inverse calculation and even instability of the system [26]. On the other hand, the pseudo-inverse operation is time-consuming, which leads to a large cost in real applications that require a pseudo-inverse calculation in every control cycle. Additionally, due to mechanical reasons, the robot manipulator suffers from physical constraints, such as joint angle and speed limitations.

In terms of kinematic control in the presence of model uncertainties, real-time feedback of the end-effector enables researchers to build closed-loop controllers. This can be done by high performance measuring devices such as high precision cameras and laser trackers [27]. In [28], based on the parameter linearization property, a robust controller is proposed, which shows semi-global stability in fixed-point regulation control. As to the tracking control of manipulators, Hou proposes a neural network based control strategy, in which the position/orientation of the robot is described by a unit quaternion, and the network is used to learn the unknown nonlinear part of the system. One main contribution of this research is that the singularities associated with three-parameter representations can be avoided. Cheah et al. propose several adaptive controllers for manipulators in different industrial applications, such as visual tracking, force tracking and trajectory tracking [30–34]. In [35], Chen and Zhang design a new adaptive controller at the acceleration level; the basic idea is that the Jacobian matrix is updated in real time rather than the kinematic parameters. One major drawback of the strategy is that the controller requires the actual values of the end-effector velocity and acceleration, which may contain noise in real applications. In order to reduce the influence of noise in the sensor feedback, Wang introduces a low-pass filter, and an adaptive torque controller is then designed in the inner loop [36]. Xu develops a modified controller [37], in which the joint command is designed at the acceleration level. It is verified that the controller does not require measurement of the end-effector velocity or the joint acceleration. The influence of the control parameters on the tracking errors and the convergence rate is also discussed. The above methods mainly focus on uncertain model parameters, and the redundancy of the manipulators is not considered.
Although the pseudo-inverse can be used instead of the traditional inverse of the Jacobian matrix, the disadvantages of JMPI methods remain unresolved. In order to overcome these limitations, researchers transform the problem into a quadratic programming one, with the aim of obtaining an optimal solution with a specified evaluation index under the physical constraints. Physical constraints can be formulated as equality constraints or inequality constraints. Zhang et al. [38] develop a dual neural network to solve quadratic programming problems, and it is shown that this strategy is suitable for redundancy resolution. Based on this idea, a series of results has been reported on eliminating position error accumulation [39], nonconvex optimization [40], acceleration-level compliance [41], parallel robots [42] and multiple robot systems [43].

Inspired by the above literature, in this chapter we focus on the adaptive tracking problem for redundant manipulators. The remainder of this chapter is arranged as below. In Sect. 2.2, fundamental robot kinematics together with useful properties are given, and the control objective is stated. In Sect. 2.3, an adaptive Jacobian method is designed by updating the kinematic parameters online, an RNN is used to achieve redundancy resolution based on the estimated Jacobian matrix, and the convergence of the tracking error in Cartesian space is analyzed. In Sect. 2.4, numerical results and comparisons are conducted on the 6-DOF robot JACO2. Finally, conclusions are drawn in Sect. 2.5. Before ending the introductory section, we highlight the main contributions of this chapter as below:


#### **2.2 Problem Formulation and Existing Results**

#### *2.2.1 Robot Kinematics*

Without loss of generality, we consider serial robot manipulators in this chapter. The kinematic model for robot manipulators is described as follows:

$$f(\theta(t)) = x(t),\tag{2.1}$$

where $\theta(t) \in \mathbb{R}^n$ represents the vector of joint angles at time $t$, and $x(t) \in \mathbb{R}^m$ represents the Cartesian coordinate vector of the end-effector. For a specific robot manipulator, $f(\cdot): \mathbb{R}^n \to \mathbb{R}^m$ describes the forward kinematics from joint space to Cartesian space, which is a continuous nonlinear mapping containing kinematic parameters and structure information.

By differentiating $x(t)$ with respect to time $t$, we can get the relationship between the Cartesian velocity $\dot{x}(t) \in \mathbb{R}^m$ and the joint velocity (or joint control signal) $\dot{\theta}(t) \in \mathbb{R}^n$ as follows:

$$J(\theta(t), a\_k)\dot{\theta}(t) = \dot{x}(t),\tag{2.2}$$

with $J(\theta(t), a\_k) = \partial f(\theta(t), a\_k)/\partial \theta(t)$ being the Jacobian matrix, and $a\_k \in \mathbb{R}^l$ the vector of kinematic parameters.

Once the physical structure of the manipulator is determined, its kinematic equation (2.2) satisfies the following linearization property [43], which relates the robot's end-effector velocity to its kinematic parameters:

$$J(\theta(t), a\_k)\dot{\theta}(t) = Y\_k(\theta(t), \dot{\theta}(t))a\_k,\tag{2.3}$$

where $Y\_k(\theta(t), \dot{\theta}(t)) \in \mathbb{R}^{m \times l}$ is called the kinematic regressor matrix. Note that $Y\_k(\theta(t), \dot{\theta}(t))$ is a function of $\theta(t)$ and $\dot{\theta}(t)$ only, and does not depend on $a\_k$.
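Property (2.3) can be illustrated on a small planar arm, for which both $J$ and $Y\_k$ have closed forms. The sketch below (link lengths and joint states are arbitrary illustration values) verifies $J(\theta, a\_k)\dot{\theta} = Y\_k(\theta, \dot{\theta})a\_k$, with $Y\_k$ never touching $a\_k$:

```python
import numpy as np

def J(theta, a):
    """Jacobian of a planar serial arm with link lengths a."""
    c = np.cumsum(theta)
    n = len(theta)
    return np.array([
        [-np.sum(a[i:] * np.sin(c[i:])) for i in range(n)],
        [ np.sum(a[i:] * np.cos(c[i:])) for i in range(n)],
    ])

def Y_k(theta, dtheta):
    """Kinematic regressor: built from theta and dtheta only, never from a."""
    c, cd = np.cumsum(theta), np.cumsum(dtheta)
    return np.array([-np.sin(c) * cd,
                      np.cos(c) * cd])

theta  = np.array([0.4, -0.2, 0.7])
dtheta = np.array([0.3, 0.1, -0.5])
a      = np.array([0.30, 0.25, 0.15])
print(np.allclose(J(theta, a) @ dtheta, Y_k(theta, dtheta) @ a))   # True
```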

#### *2.2.2 Control Objective*

In this chapter, we consider the task-space tracking problem for redundant manipulators, where the precise values of the kinematic parameters are unavailable. The measurable states are the joint angles $\theta(t)$ and the end-effector coordinates $x(t)$. The desired Cartesian path $x\_{\mathrm{d}}(t) \in \mathbb{R}^m$ and its time derivative $\dot{x}\_{\mathrm{d}}(t)$ are accessible, and both are bounded. The nominal value of the kinematic parameter vector, denoted $a\_k^n$, is also available.

The control objective is to generate a joint velocity command in real time, i.e., to design $\dot{\theta}(t)$ so as to drive the end-effector of the redundant robot to track $x\_{\mathrm{d}}(t)$, in the sense that $f(\theta(t)) = x(t) \to x\_{\mathrm{d}}(t)$. During the whole tracking process, the velocity of every joint $\dot{\theta}\_i(t)$ should not exceed its limits $[\dot{\theta}\_{i\min}, \dot{\theta}\_{i\max}]$.

#### **2.3 Main Results**

In this section, we show the detailed process of the controller design. When controlling a redundant robot, one important problem is to make use of its flexibility, for example by avoiding obstacles, optimizing energy consumption, and avoiding singularities. In this chapter, when the kinematic controller is designed to achieve task-space tracking in the presence of model uncertainties, we consider the energy-saving problem at the speed level. Therefore, the secondary task is set to minimize the velocity norm $u^{\mathrm{T}}u$. The control strategy consists of three parts: a position controller in the outer loop, a Jacobian adaption part capable of handling kinematic uncertainties online, and an RNN used to solve the redundancy resolution problem. The stability of the closed-loop system is also discussed.

#### *2.3.1 Position Controller*

Firstly, a precise measurement of the actual coordinate $x$ at time $t$ is taken to build the closed-loop system. The difference between the desired path and the corresponding feedback is defined as

$$e(t) = \mathbf{x}\_{\mathbf{d}}(t) - \mathbf{x}(t). \tag{2.4}$$

In order to make *e*(*t*) converge to 0, by using the zeroing dynamics [53], the derivative of *e*(*t*) is designed as

$$
\dot{e}(t) = -ke(t),\tag{2.5}
$$

with *k* > 0 being a positive constant scaling the convergence rate of *e*(*t*). Combining (2.4) and (2.5) yields

$$
\dot{\mathbf{x}}(t) = \dot{\mathbf{x}}\_{\mathsf{d}}(t) + k(\mathbf{x}\_{\mathsf{d}}(t) - \mathbf{x}(t)). \tag{2.6}
$$

Let $\dot{\theta}(t) = u(t)$. According to (2.6), if $u(t)$ is properly designed to make the robot's end-effector move at the speed $\dot{x}(t)$, in the sense that $\dot{x}(t) = J(\theta(t), a\_k)u(t)$, the tracking error $e(t)$ in task space converges to zero exponentially.
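A minimal numerical sketch of (2.5)–(2.6) on a scalar task (the gain, step size, and sinusoidal reference are illustrative): commanding $\dot{x} = \dot{x}\_{\mathrm{d}} + k(x\_{\mathrm{d}} - x)$ makes the error obey $\dot{e} = -ke$, so it decays exponentially.

```python
import math

k, dt = 20.0, 1e-4
x = 0.5                                       # start off the desired path
for i in range(5000):                         # simulate 0.5 s
    t = i * dt
    xd, xd_dot = math.sin(t), math.cos(t)     # desired path and its derivative
    x += (xd_dot + k * (xd - x)) * dt         # forward-Euler step of Eq. (2.6)

print(abs(math.sin(0.5) - x))                 # tiny: the error has decayed
```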

When $a\_k$ is unknown, the precise Jacobian matrix described in (2.2) is unavailable, and the redundancy resolution cannot be achieved using $J(\theta, a\_k)$. Therefore, we use the estimated Jacobian $J(\theta(t), \hat{a}\_k)$, obtained by replacing the unknown parameters $a\_k$ in $J(\theta(t), a\_k)$ with their estimate $\hat{a}\_k$; the initial value of $\hat{a}\_k$ is set as $\hat{a}\_k(0) = a\_k^n$, and the estimation error is defined as $\tilde{a}\_k = \hat{a}\_k - a\_k$. Using $J(\theta(t), \hat{a}\_k)$ and the control signal $\dot{\theta}(t)$, we can estimate the velocity of the end-effector as

$$
\hat{\dot{x}}(t) = J(\theta(t), \hat{a}\_k(t))u(t). \tag{2.7}
$$

Note that the linearization property described in (2.3) still holds for the estimate $\hat{a}\_k$:

$$J(\theta(t), \hat{a}\_k(t))u(t) = Y\_k(\theta(t), u(t))\hat{a}\_k(t),\tag{2.8}$$

This property will be used in the following stability proof. The adaptive Jacobian method, which updates the kinematic parameters $\hat{a}\_k$, is thus developed as

$$
\dot{\hat{a}}\_k(t) = -\Gamma\_1 Y\_k^{\mathrm{T}}(\theta(t), u(t))e(t), \tag{2.9}
$$

where $\Gamma\_1 \in \mathbb{R}^{l \times l}$ is a diagonal positive definite matrix, $e(t)$ is the tracking error in Cartesian space as defined in (2.4), and $u(t)$ is the bounded joint speed vector satisfying $J(\theta(t), \hat{a}\_k)u(t) = \dot{x}(t)$, which will be designed later. Unless otherwise specified, $J(\theta(t), \hat{a}\_k)$ is abbreviated as $\hat{J}$.
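One forward-Euler step of the update law (2.9) looks as follows; the regressor, error, and gain values below are arbitrary placeholders, only the formula itself is from the text:

```python
import numpy as np

def update_a_hat(a_hat, Yk, e, Gamma, dt):
    """Euler step of Eq. (2.9): d(a_hat)/dt = -Gamma_1 * Yk^T * e."""
    return a_hat - (Gamma @ Yk.T @ e) * dt

Yk    = np.array([[0.2, -0.1,  0.4],          # 2x3 regressor (placeholder values)
                  [0.3,  0.5, -0.2]])
e     = np.array([0.05, -0.02])               # Cartesian tracking error
Gamma = np.diag([10.0, 10.0, 10.0])           # diagonal positive definite gain
a_hat = np.array([0.30, 0.25, 0.15])

a_next = update_a_hat(a_hat, Yk, e, Gamma, dt=1e-3)
print(a_next - a_hat)                         # small correction along -Gamma*Yk^T*e
```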

**Remark 2.1** Figure 2.1 gives a brief framework of the proposed control scheme for redundant manipulators with uncertain kinematic parameters. The desired trajectory of the end-effector is specified by *x*d(*t*) and *x*˙d(*t*). The desired trajectory together

**Fig. 2.1** Framework of the proposed scheme for redundant manipulators with uncertain kinematics, in which the neural control algorithm includes three interactive modules, i.e., position control module, parameter identification module, and redundancy resolution module

with the feedback $x(t)$ are fed into the position controller (2.6). The tracking error $e(t)$ and the joint speed $\dot{\theta}(t)$ are used to learn the kinematic parameters online by the identifier (2.9). According to the output of the position controller, the identified parameter $\hat{a}\_k$, the feedback of the manipulator and the physical limits, an RNN based controller is used to solve the redundancy resolution problem.

#### *2.3.2 Redundant Solution Using RNN*

In this subsection, we focus on the redundancy resolution problem based on the Jacobian adaption method introduced in Sect. 2.3.1. The main purpose of redundancy resolution is to find an optimal joint speed $u(t)$ that makes the equation $J(\theta(t), a\_k)u(t) = \dot{x}\_{\mathrm{d}}(t) + ke(t)$ hold while a secondary task is also achieved. The redundancy resolution problem can be converted into a quadratic optimization one with specified constraints. To minimize the kinetic energy of the robot, we select the velocity norm $u^{\mathrm{T}}u = \dot{\theta}^{\mathrm{T}}\dot{\theta}$ as the objective function to be optimized; the joint range $\theta\_{i\min} \le \theta\_i \le \theta\_{i\max}$ and the joint speed limits $\dot{\theta}\_{i\min} \le \dot{\theta}\_i \le \dot{\theta}\_{i\max}$ are regarded as inequality constraints. Because $J(\theta, a\_k)$ is unavailable, we use $\hat{J}$ instead of $J(\theta, a\_k)$, and rewrite $\dot{x} = b\_0$. Then the redundancy resolution problem is reformulated as the following quadratic optimization:

$$\min \ u^{\mathsf{T}} u,\tag{2.10a}$$

$$\text{s.t.}\ \ b\_0 = \hat{J}u,\tag{2.10b}$$

$$u \in \Omega,\tag{2.10c}$$

where Ω = {*u* ∈ R<sup>*n*</sup> | *u*<sub>*i* min</sub> ≤ *u*<sub>*i*</sub> ≤ *u*<sub>*i* max</sub>} is a convex set describing the physical constraints, with *u*<sub>*i* min</sub> = max{α(θ<sub>*i* min</sub> − θ<sub>*i*</sub>), θ˙<sub>*i* min</sub>}, *u*<sub>*i* max</sub> = min{α(θ<sub>*i* max</sub> − θ<sub>*i*</sub>), θ˙<sub>*i* max</sub>}, and α > 0 a positive constant. This convex set ensures the boundedness of both the joint angles and speeds [44]. According to the Karush−Kuhn−Tucker condition [45], an equivalent description of the optimal solution to the quadratic optimization (2.10) is described as
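As a concrete illustration, the velocity-level bounds and the projection onto Ω can be sketched as follows (a minimal numpy sketch; the 3-joint dimensions, limit values, and α are illustrative choices, not the JACO2 values):

```python
import numpy as np

# Illustrative limits for a 3-joint example (not the JACO2 values).
theta_min, theta_max = np.full(3, -2.0), np.full(3, 2.0)    # joint ranges [rad]
dtheta_min, dtheta_max = np.full(3, -1.0), np.full(3, 1.0)  # speed limits [rad/s]
alpha = 5.0                                                 # positive scaling constant

def speed_bounds(theta):
    """Velocity-level bounds u_i_min / u_i_max enforcing both limit types at once."""
    u_min = np.maximum(alpha * (theta_min - theta), dtheta_min)
    u_max = np.minimum(alpha * (theta_max - theta), dtheta_max)
    return u_min, u_max

def project(u, u_min, u_max):
    """Projection P_Omega onto the box constraint set (elementwise clipping)."""
    return np.clip(u, u_min, u_max)
```

Far from the joint limits the speed limits dominate; as θ<sub>*i*</sub> approaches θ<sub>*i* max</sub> the admissible speed shrinks toward zero, which is how the single set Ω bounds both angles and speeds.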

$$
u = P\_{\Omega}(u - \partial L/\partial u), \tag{2.11a}
$$

$$b\_0 = \hat{J}u,\tag{2.11b}$$

where *P*<sub>Ω</sub>(•) is the projection operation onto the set Ω, *P*<sub>Ω</sub>(*x*) = argmin<sub>*y*∈Ω</sub> ||*y* − *x*||, and *L* = *L*(*u*, λ) is the Lagrange function defined as *L*(*u*, λ) = *u*<sup>T</sup>*u*/2 + λ<sup>T</sup>(*J*ˆ*u* − *b*<sub>0</sub>), where λ ∈ R<sup>*m*</sup> is the Lagrange multiplier corresponding to the equality constraint.
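For intuition, when the box constraints are inactive the KKT conditions (2.11) reduce to the minimum-norm solution *u*\* = *J*ˆ<sup>T</sup>(*J*ˆ*J*ˆ<sup>T</sup>)<sup>−1</sup>*b*<sub>0</sub>. The short check below, with a made-up full-rank Jacobian estimate, verifies that any other solution of the equality constraint has a larger norm; the RNN solves the same problem online without forming this inverse explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 6))   # made-up full-rank 3x6 Jacobian estimate
b0 = rng.standard_normal(3)

# KKT solution of min u^T u s.t. J u = b0 when no box constraint is active
u_star = J.T @ np.linalg.solve(J @ J.T, b0)

# any other feasible u = u_star + null-space component has a larger norm
N = np.eye(6) - np.linalg.pinv(J) @ J          # null-space projector of J
u_other = u_star + N @ rng.standard_normal(6)
```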

Note that the difference between *J*ˆ and *J* would introduce extra error, which may result in tracking failure. To solve the quadratic optimization problem (2.11), we present the RNN based controller together with an online kinematic parameter updating law:

$$
\varepsilon \dot{u} = -u + P\_{\Omega}(-\hat{J}^{\mathrm{T}}\lambda),
\tag{2.12a}
$$

$$
\varepsilon \dot{\lambda} = \hat{J}u - b\_0,\tag{2.12b}
$$

$$\dot{\hat{a}}\_k = -\Gamma\_1 Y\_k^{\mathrm{T}}(\theta, u)e,\tag{2.12c}$$

where ε is a positive factor scaling the convergence of the RNN. The proposed control scheme is summarized in Algorithm 1.

#### **Algorithm 1** The proposed tracking method

**Input:** Parameters *k*, α, Γ<sub>1</sub>, ε, joint angle limits θ<sub>*i* max</sub>, θ<sub>*i* min</sub>, joint speed limits θ˙<sub>*i* max</sub>, θ˙<sub>*i* min</sub>, initial states *u*(0), θ(0), nominal kinematic parameter *a*ˆ<sub>*k*</sub>(0), desired path *x*<sub>d</sub>(*t*), *x*˙<sub>d</sub>(*t*), task duration *T*, feedback of the end effector *x*(*t*), analytical expressions of the estimated Jacobian matrix *J*ˆ and the kinematic regressor matrix *Y*<sub>*k*</sub>.

**Output:** Task-space tracking of the redundant manipulator

1: Initialize λ(0), *a*ˆ<sub>*k*</sub>(0) ← *a*<sup>n</sup><sub>*k*</sub>.

2: **Repeat**

3: *x*, θ ← Sensor readings

4: Calculate tracking error *e* ← Equation (2.4)

5: Calculate *b*<sub>0</sub> ← Equation (2.6)

6: Update *u*, λ ← Equations (2.12a), (2.12b)

7: Update *a*ˆ<sub>*k*</sub> ← Equation (2.12c)

8: Send the speed command *u* to the manipulator

9: **Until** (*t* > *T*)
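To make the interplay of (2.12a)–(2.12c) concrete, the following minimal simulation discretizes the RNN and the adaptation law with a forward-Euler step on a 3-link planar arm. This is an illustrative kinematics with made-up link lengths, not the JACO2 model; ρ = 1, the gains, and the fixed target are arbitrary choices, and only the speed part of Ω is kept as a box for brevity:

```python
import numpy as np

a_true = np.array([0.30, 0.25, 0.20])   # true link lengths [m] (assumed)
a_nominal = np.array([0.25, 0.30, 0.15])  # wrong initial estimate a_hat_k(0)

def fk(theta, a):
    """Planar forward kinematics: end-effector position."""
    phi = np.cumsum(theta)
    return np.array([np.sum(a * np.cos(phi)), np.sum(a * np.sin(phi))])

def jacobian(theta, a):
    """Analytical Jacobian J(theta, a) of the planar arm."""
    phi = np.cumsum(theta)
    J = np.zeros((2, 3))
    for j in range(3):
        J[0, j] = -np.sum(a[j:] * np.sin(phi[j:]))
        J[1, j] = np.sum(a[j:] * np.cos(phi[j:]))
    return J

def regressor(theta, u):
    """Kinematic regressor Y_k with x_dot = Y_k(theta, u) a_k."""
    phi, phidot = np.cumsum(theta), np.cumsum(u)
    return np.vstack([-np.sin(phi) * phidot, np.cos(phi) * phidot])

def simulate(T=10.0, dt=2e-4, eps=1e-3, k=2.0, gamma=0.5, u_lim=2.0):
    theta = np.array([0.3, 1.5, 1.0])
    u, lam = np.zeros(3), np.zeros(2)
    a_hat = a_nominal.copy()
    xd = np.array([0.15, 0.45])          # fixed target, so x_dot_d = 0
    for _ in range(int(T / dt)):
        e = xd - fk(theta, a_true)       # tracking error (2.4)
        b0 = k * e                       # position controller (2.6) with rho = 1
        Jh = jacobian(theta, a_hat)      # estimated Jacobian
        # RNN dynamics (2.12a), (2.12b), Euler step
        u = u + dt / eps * (-u + np.clip(-Jh.T @ lam, -u_lim, u_lim))
        lam = lam + dt / eps * (Jh @ u - b0)
        # kinematic parameter adaptation (2.12c)
        a_hat = a_hat - dt * gamma * (regressor(theta, u).T @ e)
        theta = theta + dt * u           # robot integrates the speed command
    return np.linalg.norm(xd - fk(theta, a_true))
```

In this sketch the controller never sees the true link lengths; only the regressor structure of the kinematics is used, mirroring how the scheme handles kinematic uncertainty.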

**Remark 2.2** It is worth pointing out that although the proposed RNN in (2.12a) and (2.12b) looks similar to existing ones (*e.g.*, [46, 47]), the modification is meaningful: the proposed RNN is capable of handling kinematic uncertainties. When the kinematic parameters *a*<sub>*k*</sub> are known, *J*ˆ is equal to *J*, and (2.12a), (2.12b) take the same form as the traditional ones, which shows that the known-parameter case is only a special case of our control scheme; the proposed RNN is thus more general. The proposed control scheme offers an important extension to model uncertainties, which is of universal significance in engineering applications.

**Remark 2.3** Using the proposed RNN based controller, the control command *u*(*t*) is derived from (2.12a), which optimizes *u*<sup>T</sup>*u* while the projection operation *P*<sub>Ω</sub>(•) handles the inequality constraints; (2.12b) plays an important role in task-space tracking. By referring to (2.12c), we update the Jacobian indirectly by renewing its kinematic parameters online based on the property (2.3), which differs from other Jacobian adaptation methods (e.g., [48]), where joint acceleration is required. The only values needed by our updating law are the joint angle θ, the joint speed *u* and the tracking error *e*; therefore, the proposed control strategy can be realized easily.
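The linear-in-parameters property behind (2.12c), *J*(θ, *a*<sub>*k*</sub>)*u* = *Y*<sub>*k*</sub>(θ, *u*)*a*<sub>*k*</sub>, can be checked numerically; the sketch below uses a 3-link planar arm as an illustrative kinematics (not the JACO2 regressor of the appendix):

```python
import numpy as np

def jacobian(theta, a):
    """Planar-arm Jacobian J(theta, a), linear in the link lengths a."""
    phi = np.cumsum(theta)
    J = np.zeros((2, len(a)))
    for j in range(len(a)):
        J[0, j] = -np.sum(a[j:] * np.sin(phi[j:]))
        J[1, j] = np.sum(a[j:] * np.cos(phi[j:]))
    return J

def regressor(theta, u):
    """Kinematic regressor Y_k(theta, u) with J(theta, a) u = Y_k(theta, u) a."""
    phi, phidot = np.cumsum(theta), np.cumsum(u)
    return np.vstack([-np.sin(phi) * phidot, np.cos(phi) * phidot])
```

Because *Y*<sub>*k*</sub> depends only on measurable quantities (θ and the commanded speed *u*), the update (2.12c) needs no joint accelerations, consistent with the remark above.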

#### *2.3.3 Convergence Analysis*

In this part, we conduct theoretical analysis on the convergence of tracking error under the RNN based tracking controller (2.12a) and (2.12b) along with the kinematic parameter updating law described in (2.12c).

Firstly, two lemmas are offered below, which will be used in the convergence proof.

**Lemma 2.1** *For any closed convex set* Ω ⊂ R<sup>*p*</sup>, (*x* − *P*<sub>Ω</sub>(*x*))<sup>T</sup>(*P*<sub>Ω</sub>(*x*) − *y*) ≥ 0, ∀*y* ∈ Ω, ∀*x* ∈ R<sup>*p*</sup>, *and the equality holds only if x* ∈ Ω *[49].*

**Lemma 2.2** *For any closed convex set* Ω ⊂ R<sup>*p*</sup>, (*x* − *P*<sub>Ω</sub>(*x*))<sup>T</sup>(*x* − *y*) ≥ 0, ∀*y* ∈ Ω, ∀*x* ∈ R<sup>*p*</sup>, *and the equality holds only if x* ∈ Ω *[47].*

Based on Lemmas 2.1 and 2.2, we can obtain the following theorem on the convergence of the tracking error under the proposed redundancy resolution scheme (2.12).

**Theorem 2.1** The control error *e*(*t*) defined in (2.4) for a redundant manipulator globally converges to 0 under the RNN based redundancy resolution scheme (2.12a), (2.12b) along with the kinematic adaptation law (2.12c).

*Proof:* The convergence analysis includes two parts. Firstly, we prove that the output *u* of the proposed RNN (2.12a), (2.12b) reaches the optimal solution of (2.11). Secondly, we show the convergence of the tracking error *e* along with the adaptation law (2.12c). Note that the proof bears similarity to that with known parameters, but the extra dynamics of the parameter adaptation makes it necessary to analyze the joint stability, which constitutes the main difference between this proof and existing works.

Part I. By defining ξ = [*u*<sup>T</sup>, λ<sup>T</sup>]<sup>T</sup>, the controller (2.12a), (2.12b) can be reformulated as

$$
\varepsilon \dot{\xi} = -\xi + P\_{\bar{\Omega}}(\xi - R(\xi)),
\tag{2.13}
$$

where Ω¯ = {(*u*, λ) | *u* ∈ Ω, λ ∈ R<sup>*m*</sup>}, and *R*(ξ) = [(*u* + *J*ˆ<sup>T</sup>λ)<sup>T</sup>, (*b*<sub>0</sub> − *J*ˆ*u*)<sup>T</sup>]<sup>T</sup>. Defining ∇*R* = ∂*R*(ξ)/∂ξ, we have

$$
\nabla R = \begin{bmatrix} I & \hat{J}^{\mathrm{T}} \\ -\hat{J} & 0 \end{bmatrix},
$$

where *I* is the *n*-dimensional identity matrix, and the transpose of ∇*R* is denoted by ∇<sup>T</sup>*R*. Although ∇*R* ∈ R<sup>(*m*+*n*)×(*m*+*n*)</sup> is not symmetric (its off-diagonal blocks form a skew-symmetric part), it satisfies the following positive semi-definite property:

$$y^{\mathrm{T}} \nabla R\, y = y^{\mathrm{T}} (\nabla R + \nabla^{\mathrm{T}} R)\, y / 2 \ge 0, \ \forall y \in \mathbb{R}^{m+n}. \tag{2.14}$$

This property will be used later. Define the following Lyapunov function candidate as

$$V\_1 = ||\xi - P\_{\bar{\Omega}}(\xi)||\_2^2 / 2. \tag{2.15}$$

It is obvious that *V*<sub>1</sub> = 0 if and only if ξ ∈ Ω¯. Noting that ∂||ξ − *P*<sub>Ω¯</sub>(ξ)||<sup>2</sup><sub>2</sub>/∂ξ = 2(ξ − *P*<sub>Ω¯</sub>(ξ)), differentiating *V*<sub>1</sub> with respect to time and substituting (2.13) yields:

$$\begin{split} \dot{V}\_1 &= \left(\xi - P\_{\bar{\Omega}}(\xi)\right)^{\mathrm{T}} \dot{\xi} \\ &= -\left(\xi - P\_{\bar{\Omega}}(\xi)\right)^{\mathrm{T}} (\xi - P\_{\bar{\Omega}}(\xi - R(\xi)))/\varepsilon. \end{split} \tag{2.16}$$

Since *P*<sub>Ω¯</sub>(ξ − *R*(ξ)) ∈ Ω¯, according to Lemma 2.2 the inequality (ξ − *P*<sub>Ω¯</sub>(ξ))<sup>T</sup>(ξ − *P*<sub>Ω¯</sub>(ξ − *R*(ξ))) ≥ 0 holds for all ξ ∈ R<sup>*m*+*n*</sup>. Then *V*˙<sub>1</sub> ≤ 0 because ε > 0, and *V*˙<sub>1</sub> = 0 only if ξ ∈ Ω¯. Based on LaSalle's invariance principle [50], ξ gradually converges into Ω¯, which indicates that *u* converges into Ω; the boundedness of the joint angles and speeds is thus guaranteed. Note that the equilibrium point ξ<sup>∗</sup> satisfies

$$
\xi^\* = P\_{\bar{\Omega}}(\xi^\* - R(\xi^\*)). \tag{2.17}
$$

According to definition 1 and Lemma 1 in [51], ξ <sup>∗</sup> satisfies the following property

$$(\mathbf{y} - \boldsymbol{\xi}^\*)^T \boldsymbol{R}(\boldsymbol{\xi}^\*) \ge 0, \quad \forall \mathbf{y} \in \bar{\Omega}. \tag{2.18}$$

Define function *V*<sup>2</sup> as

$$V\_2 = \left(\xi - P\_{\bar{\Omega}}(\xi - R(\xi))\right)^{\mathrm{T}} R(\xi) + \|\xi - \xi^\*\|\_2^2/2 - \|\xi - P\_{\bar{\Omega}}(\xi - R(\xi))\|\_2^2/2 + V\_1. \tag{2.19}$$

Some mathematical calculations on the first and third terms of the definition (2.19) give

$$\begin{aligned} & \left(\xi - P\_{\bar{\Omega}}(\xi - R(\xi))\right)^{\mathrm{T}} R(\xi) - \|\xi - P\_{\bar{\Omega}}(\xi - R(\xi))\|\_{2}^{2}/2 \\ & \geq \left(\xi - P\_{\bar{\Omega}}(\xi - R(\xi))\right)^{\mathrm{T}} R(\xi) - \|\xi - P\_{\bar{\Omega}}(\xi - R(\xi))\|\_{2}^{2} \\ & = \left(\xi - R(\xi) - P\_{\bar{\Omega}}(\xi - R(\xi))\right)^{\mathrm{T}} (P\_{\bar{\Omega}}(\xi - R(\xi)) - \xi). \end{aligned} \tag{2.20}$$

Noticing that ξ gradually converges into the convex set Ω¯, we have ξ ∈ Ω¯. According to Lemma 2.1, the inequality (ξ − *R*(ξ) − *P*<sub>Ω¯</sub>(ξ − *R*(ξ)))<sup>T</sup>(*P*<sub>Ω¯</sub>(ξ − *R*(ξ)) − ξ) ≥ 0 holds for any ξ − *R*(ξ) ∈ R<sup>*m*+*n*</sup>. Recalling the definition of *V*<sub>2</sub>, we have

$$V\_2 \ge ||\xi - \xi^\*||\_2^2 / 2 + V\_1. \tag{2.21}$$

Thus *V*<sup>2</sup> is a Lyapunov function candidate. Differentiating *V*<sup>2</sup> with respect to time and combining (2.13) yields:

$$\begin{split} \dot{V}\_2 &= \left(\xi - P\_{\bar{\Omega}}(\xi - R(\xi))\right)^{\mathrm{T}} \nabla R\, \dot{\xi} + \dot{\xi}^{\mathrm{T}} R(\xi) + (\xi - \xi^\*)^{\mathrm{T}} \dot{\xi} - \left(\xi - P\_{\bar{\Omega}}(\xi - R(\xi))\right)^{\mathrm{T}} \dot{\xi} + \dot{V}\_1 \\ &= -\left(\xi - P\_{\bar{\Omega}}(\xi - R(\xi))\right)^{\mathrm{T}} \nabla R \left(\xi - P\_{\bar{\Omega}}(\xi - R(\xi))\right)/\varepsilon \\ &\quad - \left(\xi - R(\xi) - P\_{\bar{\Omega}}(\xi - R(\xi))\right)^{\mathrm{T}} (P\_{\bar{\Omega}}(\xi - R(\xi)) - \xi^\*)/\varepsilon \\ &\quad - (\xi - \xi^\*)^{\mathrm{T}} R(\xi)/\varepsilon + \dot{V}\_1. \end{split} \tag{2.22}$$

Note that ξ<sup>∗</sup> ∈ Ω¯; according to Lemma 2.1, the inequality (ξ − *R*(ξ) − *P*<sub>Ω¯</sub>(ξ − *R*(ξ)))<sup>T</sup>(*P*<sub>Ω¯</sub>(ξ − *R*(ξ)) − ξ<sup>∗</sup>) ≥ 0 holds for any ξ − *R*(ξ) ∈ R<sup>*m*+*n*</sup>. Using (2.14), we have −(ξ − *P*<sub>Ω¯</sub>(ξ − *R*(ξ)))<sup>T</sup>∇*R*(ξ − *P*<sub>Ω¯</sub>(ξ − *R*(ξ)))/ε ≤ 0 since ε > 0. According to the mean value theorem, we have

$$R(\xi) - R(\xi^\*) = \nabla R(\xi') (\xi - \xi^\*),\tag{2.23}$$

where ξ′ ∈ [ξ, ξ<sup>∗</sup>]. After some mathematical calculations and substituting (2.23), we have

$$(\xi - \xi^\*)^\mathrm{T} R(\xi)$$

$$= (\xi - \xi^\*)^\mathrm{T} \nabla R(\xi') (\xi - \xi^\*) + (\xi - \xi^\*)^\mathrm{T} R(\xi^\*). \tag{2.24}$$

Using the properties (2.14) and (2.18), we have (ξ − ξ<sup>∗</sup>)<sup>T</sup>∇*R*(ξ′)(ξ − ξ<sup>∗</sup>) ≥ 0 and (ξ − ξ<sup>∗</sup>)<sup>T</sup>*R*(ξ<sup>∗</sup>) ≥ 0, then

$$(\xi - \xi^\*)^T R(\xi) \ge 0. \tag{2.25}$$

Combining (2.16), (2.24) and (2.25) yields *V*˙<sub>2</sub> ≤ 0, and *V*˙<sub>2</sub> = 0 only if ξ ∈ Ω¯, which indicates (ξ − *P*<sub>Ω¯</sub>(ξ − *R*(ξ)))<sup>T</sup>∇*R*(ξ − *P*<sub>Ω¯</sub>(ξ − *R*(ξ))) = 0, (ξ − *R*(ξ) − *P*<sub>Ω¯</sub>(ξ − *R*(ξ)))<sup>T</sup>(*P*<sub>Ω¯</sub>(ξ − *R*(ξ)) − ξ<sup>∗</sup>) = 0 and (ξ − ξ<sup>∗</sup>)<sup>T</sup>*R*(ξ) = 0. From (2.24), we get (ξ − ξ<sup>∗</sup>)<sup>T</sup>∇*R*(ξ′)(ξ − ξ<sup>∗</sup>) = 0 and (ξ − ξ<sup>∗</sup>)<sup>T</sup>*R*(ξ<sup>∗</sup>) = 0. Note that ξ = ξ<sup>∗</sup> is a solution of the above equations. Based on LaSalle's invariance principle, we conclude that ξ gradually reaches its equilibrium point ξ<sup>∗</sup>, *i.e.*, *u*(*t*) converges to the optimal solution of the redundancy resolution problem (2.10).

Part II. Consider the Lyapunov function candidate

$$V\_3 = e^{\mathbf{T}} e/2 + \tilde{a}\_k^{\mathbf{T}} \Gamma\_1^{-1} \tilde{a}\_k / 2. \tag{2.26}$$

Differentiating *V*<sup>3</sup> with respect to time and substituting (2.4), (2.8) and (2.9), we have

$$\begin{split} \dot{V}\_3 &= e^{\mathrm{T}}(\dot{x}\_{\mathrm{d}} - \dot{x}) + \tilde{a}\_k^{\mathrm{T}} \Gamma\_1^{-1} \dot{\tilde{a}}\_k \\ &= e^{\mathrm{T}}(b\_0 - k|e|^{\rho}\mathrm{sgn}(e) - Y\_k(\theta, u)(\hat{a}\_k - \tilde{a}\_k)) - \tilde{a}\_k^{\mathrm{T}} Y\_k^{\mathrm{T}}(\theta, u)e \\ &= e^{\mathrm{T}}(b\_0 - \hat{J}u + Y\_k(\theta, u)\tilde{a}\_k) - k|e|^{\rho+1} - \tilde{a}\_k^{\mathrm{T}} Y\_k^{\mathrm{T}}(\theta, u)e. \end{split} \tag{2.27}$$

As proved above, using the neural network (2.12), *u*<sup>T</sup>*u* is minimized under the constraints *b*<sub>0</sub> = *J*ˆ*u* and *u* ∈ Ω. Noting that *a*˜<sup>T</sup><sub>*k*</sub>*Y*<sup>T</sup><sub>*k*</sub>(θ, *u*)*e* is a scalar, we have *a*˜<sup>T</sup><sub>*k*</sub>*Y*<sup>T</sup><sub>*k*</sub>(θ, *u*)*e* = *e*<sup>T</sup>*Y*<sub>*k*</sub>(θ, *u*)*a*˜<sub>*k*</sub>. Then (2.27) can be rewritten as

$$\begin{split} \dot{V}\_3 &= e^{\mathrm{T}} Y\_k(\theta, u) \tilde{a}\_k - k|e|^{\rho+1} - \tilde{a}\_k^{\mathrm{T}} Y\_k^{\mathrm{T}}(\theta, u)e \\ &= -k|e|^{\rho+1} \le 0. \end{split} \tag{2.28}$$

Hence *e* = *x*<sub>d</sub> − *x* is bounded. Taking the time derivative of *V*˙<sub>3</sub>, we have:

$$\begin{split} \ddot{V}\_3 &= -k(\rho+1)|e|^{\rho}\mathrm{sgn}(e)\dot{e} \\ &= -k(\rho+1)|e|^{\rho}\mathrm{sgn}(e)(\dot{x}\_{\mathrm{d}} - J(\theta, a\_k)u). \end{split} \tag{2.29}$$

Since *J*(θ, *a*<sub>*k*</sub>) is composed of trigonometric functions of θ and the kinematic parameters *a*<sub>*k*</sub>, *J*(θ, *a*<sub>*k*</sub>) is bounded, and *x*˙<sub>d</sub> is also bounded. As illustrated in Part I, *u* is bounded, thus *V*¨<sub>3</sub> is guaranteed to be bounded. Using Barbalat's lemma [52], we have *V*˙<sub>3</sub> → 0 as *t* → ∞. Then *e* → 0 as *t* → ∞. This completes the proof.

**Remark 2.4** The convergence analysis shows the stability of the proposed control strategy: the tracking error globally converges to 0. The proof also shows that the control command satisfies *u*(*t*) ∈ Ω, ∀*t* ≥ 0, provided *u*(0) ∈ Ω; the boundedness of the joint speed is thus guaranteed at all times.

#### **2.4 Illustrative Examples**

#### *2.4.1 Numerical Setup*

We consider the position tracking problem in task space, so the JACO2 can be regarded as a functionally redundant manipulator. The architecture of the JACO2 is shown in Fig. 2.2, and the DH parameters are listed in Table 2.1. Noticing that the axes of the last 3 joints of the JACO2 do not intersect at a single point, these joints cannot be simplified as a spherical joint; therefore the configuration of the JACO2 is more general than that of other 6-DOF manipulators, e.g., the PUMA 560. The initial state of the joint position vector



**Table 2.1** DH parameters of the Kinova JACO2 robot manipulator

**Fig. 2.3** Results of regulation control on JACO2 to a fixed point [0.3, 0.4, 0.4]m in the Cartesian space. **a** Motion trajectory of end effector (red curve) and the corresponding incremental configurations of JACO2. **b** Error-time curve along three directions. **c** Angle-time curve of 6 joints. **d** Command-time curve of joint velocity *u*

**Fig. 2.4** Results when JACO2 tracks a given circle in Cartesian space. **a** Motion trajectory of end effector (red curve) and the corresponding incremental configurations of JACO2. **b** Error-time curve along three directions. **c** Angle-time curve of 6 joints. **d** Command-time curve of joint velocity *u*. **e** The first Cartesian velocity input *b*0(x-axis direction) described by (2.6) and the corresponding output *J u*ˆ (1). **f** The second Cartesian velocity input *b*0(y-axis direction) described by (2.6) and the corresponding output *J u*ˆ (2). **g** The third Cartesian velocity input *b*0(z-axis direction) described by (2.6) and the corresponding output *J u*ˆ (3). **h** The Euclidean norm of the manipulator's joint velocity

θ(0) is randomly set as [0.5, 0, 1.5, 0, 0, 0]<sup>T</sup> rad, and the initial joint speed *u*(0) is selected to be zero. The nominal values of the kinematic parameters are selected as *a*ˆ<sub>*k*</sub>(0) = [0.25, 0.2, 0, −0.2, −0.1, −0.2]<sup>T</sup> m. The set Ω describing the joint speed limits is set to [−2, 2]<sup>6</sup> rad/s. The control gain *k* is set to 8, and the gain matrix Γ<sub>1</sub> is selected as 0.5*I*, where *I* is the 6-dimensional identity matrix (Fig. 2.4).

#### *2.4.2 Fixed-point Regulation*

In order to verify the proposed tracking strategy, a fixed-point regulation experiment is carried out. The desired fixed position of the end-effector is set to [0.3, 0.4, 0.4]<sup>T</sup> m, and the scaling factor is selected as ε = 0.008. The simulation results are shown in Fig. 2.3. The regulation error converges to zero with a convergence time of about 0.5 s, as shown in Fig. 2.3b, and the joint angle θ accordingly reaches a set of constant values, as shown in Fig. 2.3c. The commanded velocity *u*(*t*) reaches its limit at the beginning of the simulation, making the end-effector move toward the target as fast as possible, and slows down rapidly when the end-effector approaches the target. During the whole simulation, *u* is guaranteed not to exceed its limits, as shown in Fig. 2.3d. Finally, as shown in Fig. 2.3a, the robot successfully reaches the fixed point under the proposed control scheme.

#### *2.4.3 Circular Trajectory*

In this section, tracking of a smooth circular trajectory using the proposed control scheme is carried out. The end effector of the JACO2 is expected to move along a circular trajectory at an angular speed of 0.5 rad/s. The desired circle is centered at [0.3, 0.3, 0.3]<sup>T</sup> m with a radius of 0.1732 m, and is rotated by 45° around the x-axis. The scaling coefficient is selected as ε = 0.008. The convergence time is about 0.5 s, which is similar to the regulation case. As shown in Fig. 2.4d, when the simulation begins, the robot moves at the maximum speed while the tracking error is large, which makes the end effector approach the desired circle. Then the robot moves at a low speed periodically, and correspondingly, the joint angle θ changes with the same frequency (Fig. 2.4c); meanwhile, the tracking error stays close to zero (Fig. 2.4b), which means that the robot successfully tracks the desired circular trajectory over time. According to (2.6), the reference speed vector of the end-effector *b*<sub>0</sub> can be derived, and its components along the *x*-, *y*-, and *z*-directions are shown as blue lines in Fig. 2.4e–g, in which red lines represent the corresponding values of *J*ˆ*u*. The red lines quickly converge to the blue ones, demonstrating that the proposed control strategy is able to track the given trajectory under kinematic uncertainties. The joint velocity norm ||*u*||<sup>2</sup><sub>2</sub> is shown in Fig. 2.4h.
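The desired circle of this example can be parametrized explicitly; the sketch below builds *x*<sub>d</sub>(*t*) and *x*˙<sub>d</sub>(*t*) from the stated center, radius, tilt, and angular speed (the phase convention at *t* = 0 is an arbitrary choice):

```python
import numpy as np

c, r, w = np.array([0.3, 0.3, 0.3]), 0.1732, 0.5   # center [m], radius [m], rad/s
tilt = np.pi / 4                                   # 45 deg rotation about the x-axis
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(tilt), -np.sin(tilt)],
               [0.0, np.sin(tilt), np.cos(tilt)]])

def xd(t):
    """Desired position on the tilted circle at time t."""
    return c + Rx @ np.array([r * np.cos(w * t), r * np.sin(w * t), 0.0])

def xd_dot(t):
    """Desired velocity (time derivative of xd)."""
    return Rx @ np.array([-r * w * np.sin(w * t), r * w * np.cos(w * t), 0.0])
```

Since the rotation preserves norms, every point stays at distance *r* from the center and the desired speed magnitude is constant at *rω*.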

#### *2.4.4 Square Trajectory*

In this section, the JACO2 is used to track a square trajectory. The corners of the desired square in Cartesian space are set to [0.3, 0.4, 0.4]<sup>T</sup>, [0.4, 0.3, 0.4]<sup>T</sup>, [0.3, 0.2, 0.2]<sup>T</sup>, and [0.2, 0.3, 0.3]<sup>T</sup>. The motion period is 12.56 s. The velocity norm of the desired path ||*x*˙<sub>d</sub>(*t*)|| is kept constant, which means that the expected velocity of the end effector *x*˙<sub>d</sub>(*t*) remains constant between two adjacent vertices, while *x*˙<sub>d</sub>(*t*) changes discontinuously at the four corners. The scaling coefficient is selected as ε = 0.008. Numerical results are shown in Fig. 2.5. In the initial stage, the tracking error approaches zero over time after a short transient, and the joint speed remains within the set Ω at all times (Fig. 2.5d). The output of the position controller (2.6) and the resulting responses under the proposed control scheme along the *x*-, *y*-, and *z*-directions are shown in Fig. 2.5e–g. The red lines converge to the blue ones quickly both at the beginning of the simulation and after each discontinuous change of the desired velocity, and the joint speed also switches at these moments, as shown in Fig. 2.5d. As a result, there exist vibrations on the error curve at times *t* = 3.14, 6.28, 9.24, 12.56, 15.7, 18.8 s, with maximum values of [4 × 10<sup>−3</sup>, 2 × 10<sup>−3</sup>, 1.5 × 10<sup>−3</sup>]<sup>T</sup> m. The joint velocity norm ||*u*||<sup>2</sup><sub>2</sub> is shown in Fig. 2.5h (Table 2.2).

**Fig. 2.5** Results when JACO2 tracks a square trajectory in Cartesian space. **a** Motion trajectory of end effector (red curve) and the corresponding incremental configurations of JACO2. **b** Error-time curve along three directions. **c** Angle-time curve of 6 joints. **d** Command-time curve of joint velocity *u*. **e** The first Cartesian velocity input *b*<sub>0</sub> (x-axis direction) described by (2.6) and the corresponding output *J*ˆ*u*(1). **f** The second Cartesian velocity input *b*<sub>0</sub> (y-axis direction) described by (2.6) and the corresponding output *J*ˆ*u*(2). **g** The third Cartesian velocity input *b*<sub>0</sub> (z-axis direction) described by (2.6) and the corresponding output *J*ˆ*u*(3). **h** The Euclidean norm of the manipulator's joint velocity


**Table 2.2** Comparisons among different tracking controllers on manipulators

#### *2.4.5 Comparison*

In this section, we compare the proposed method with existing tracking controllers for redundant manipulators, as summarized in Table 2.2, including JMPI based controllers [28, 35] and RNN based tracking controllers [39, 40, 61, 62]. In [28, 35] and our study, the exact kinematic model of the robot is not needed, so kinematic uncertainty can be handled. The controllers in [28, 35, 40, 61] are designed at the velocity level, while those in [39, 62] are designed at the acceleration level; in this chapter, we develop a velocity-level controller. The controllers in [28, 35] obtain the control command by computing the pseudo-inverse of the Jacobian, so these strategies cannot be used when the robot is in a singular configuration; although DLS [22] and other improved methods have been introduced, the convergence of the tracking error near singular configurations cannot be guaranteed, and physical limits are not considered. In addition, except for [62], the initial position of the end-effector can be set arbitrarily, whereas in [62] it needs to lie on the desired path. The RNN based controllers in [39, 40, 62] can guarantee the boundedness of the control command; all three can track time-varying trajectories, but fixed-point regulation fails in [62]. In summary, our controller achieves stable tracking under both regulation and path tracking, and it needs neither an accurate kinematic model nor pseudo-inverse computation of the Jacobian matrix, so it has good flexibility and adaptability to uncertain environments.

#### **2.5 Summary**

This chapter studies the kinematic control of redundant manipulators with uncertain kinematics. A dynamic neural network is proposed to solve the redundancy resolution problem, with an adaptive identifier learning the kinematic parameters online. The interaction between the adaptive online identifier and the neural controller makes the closed loop a nonlinearly coupled system, and the global convergence of the tracking error is proved by Lyapunov theory. Numerical experiments and comparisons based on the JACO2 robot arm demonstrate the effectiveness of the algorithm and its superiority over existing algorithms. The method realizes both static and dynamic task-space tracking with an RNN, avoids the pseudo-inverse computation of the Jacobian matrix, and thus preserves the real-time performance of the controller. The boundedness of the joint speed also protects the robot and improves safety. Before concluding this chapter, it is worth pointing out that this is the first dynamic neural model for kinematic control of manipulators with adaptive redundancy resolution based on kinematic regression, with provable convergence and guaranteed performance limits.

#### **Appendix**

According to (2.3), the analytical expression of the JACO2's kinematic regressor matrix *Y*<sub>*k*</sub> ∈ R<sup>3×6</sup> is given as follows.

*Y*<sup>11</sup> = 0, *Y*<sup>12</sup> = −θ˙ <sup>2</sup>*c*1*c*<sup>2</sup> + θ˙ <sup>1</sup>*s*1*s*2, *Y*<sup>13</sup> = −θ˙ <sup>1</sup>*c*1, *Y*<sup>14</sup> = θ˙ <sup>2</sup>(*c*1*c*2*c*<sup>3</sup> + *c*1*s*2*s*3) + θ˙ <sup>1</sup>(*c*2*s*1*s*<sup>3</sup> − *c*3*s*1*s*2) − θ˙ <sup>3</sup>(*c*1*c*2*c*<sup>3</sup> + *c*1*s*2*s*3), *Y*<sup>15</sup> = (θ˙ <sup>1</sup>((√3*c*1*c*4)/<sup>2</sup> <sup>+</sup> ( <sup>√</sup>3*s*4(*s*1*s*2*s*<sup>3</sup> <sup>+</sup> *<sup>c</sup>*2*c*3*s*1))/<sup>2</sup> <sup>+</sup> (*c*2*s*1*s*3)/<sup>2</sup> <sup>−</sup> (*c*3*s*1*s*2)/2) <sup>−</sup> θ˙ <sup>4</sup>((√3*s*1*s*4)/2+( <sup>√</sup>3*c*4(*c*1*c*2*c*3+*c*1*s*2*s*3))/2)+θ˙ <sup>2</sup>((*c*1*c*2*c*<sup>3</sup> + *c*1*s*2*s*3)/2)−( <sup>√</sup>3*s*4(*c*1*c*2*s*<sup>3</sup> <sup>−</sup> *c*1*c*3*s*2))/2 + (*c*1*s*2*s*3)/2) − θ˙ <sup>3</sup>((*c*1*c*2*c*3)/2 − ( <sup>√</sup>3*s*4(*c*1*c*2*s*<sup>3</sup> <sup>−</sup> *<sup>c</sup>*1*c*3*s*2))/2), *Y*<sup>16</sup> = (θ˙ <sup>5</sup>((√3*c*5(*s*1*s*<sup>4</sup> <sup>+</sup> *<sup>c</sup>*4(*c*1*c*2*c*<sup>3</sup> <sup>+</sup> *<sup>c</sup>*1*s*2*s*3)))/<sup>2</sup> <sup>+</sup> ( <sup>√</sup>3*s*5((*c*4*s*1)/<sup>2</sup> <sup>−</sup> (*s*4(*c*1*c*2*c*<sup>3</sup> + *c*1*s*2*s*3))/2 + ( <sup>√</sup>3(*c*1*c*2*s*<sup>3</sup> <sup>−</sup> *<sup>c</sup>*1*c*3*s*2))/2))/2) <sup>−</sup> <sup>θ</sup>˙ <sup>4</sup>((√3*s*1*s*4)/<sup>4</sup> <sup>−</sup> ( <sup>√</sup>3*c*5((*s*1*s*4) /2 + (*c*4(*c*1*c*2*c*<sup>3</sup> + *c*1*s*2*s*3))/2))/2 − ( <sup>√</sup>3*s*5(*c*4*s*<sup>1</sup> <sup>−</sup> *<sup>s</sup>*4(*c*1*c*2*c*<sup>3</sup> <sup>+</sup> *<sup>c</sup>*1*s*2*s*3)))/<sup>2</sup> <sup>+</sup> ( <sup>√</sup>3*c*4(*c*1*c*2*c*<sup>3</sup> <sup>+</sup> *<sup>c</sup>*1*s*2*s*3))/4) <sup>+</sup> <sup>θ</sup>˙ <sup>1</sup>((√3*c*1*c*4)/<sup>4</sup> <sup>+</sup> ( <sup>√</sup>3*s*5(*c*1*s*<sup>4</sup> <sup>−</sup> *<sup>c</sup>*4(*s*1*s*2*s*<sup>3</sup> <sup>+</sup> *<sup>c</sup>*2*c*<sup>3</sup> *s*1)))/2 − ( 
<sup>√</sup>3*c*5((*c*1*c*4)/<sup>2</sup> <sup>−</sup> ( <sup>√</sup>3(*c*2*s*1*s*<sup>3</sup> <sup>−</sup> *<sup>c</sup>*3*s*1*s*2))/<sup>2</sup> <sup>+</sup> (*s*4(*s*1*s*2*s*<sup>3</sup> <sup>+</sup> *<sup>c</sup>*2*c*3*s*1))/2)) /2 + ( <sup>√</sup>3*s*4(*s*1*s*2*s*<sup>3</sup> <sup>+</sup> *<sup>c</sup>*2*c*3*s*1))/<sup>4</sup> <sup>+</sup> (*c*2*s*1*s*3)/<sup>4</sup> <sup>−</sup> (*c*3*s*1*s*2)/4) <sup>+</sup> <sup>θ</sup>˙ <sup>2</sup>(+( <sup>√</sup>3(*c*1*c*2*c*<sup>3</sup> + *c*1*s*2*s*3))/2))/2( <sup>√</sup>3*c*5((*s*<sup>4</sup> · (*c*1*c*2*s*<sup>3</sup> <sup>−</sup> *<sup>c</sup>*1*c*3*s*2))/<sup>2</sup> <sup>−</sup> ( <sup>√</sup>3*s*4(*c*1*c*2*s*<sup>3</sup> <sup>−</sup> *<sup>c</sup>*1*c*3*s*2))/<sup>4</sup> <sup>+</sup> (*c*1*c*2*c*3)/4 + (*c*1*s*2*s*3)/4 + ( <sup>√</sup>3*c*4*s*5(*c*1*c*2*s*<sup>3</sup> <sup>−</sup> *<sup>c</sup>*1*c*3*s*2))/2) <sup>−</sup> <sup>θ</sup>˙ <sup>3</sup>((√3*c*5((*s*4(*c*1*c*2*s*<sup>3</sup> − *c*1*c*3*s*2))/2 + ( <sup>√</sup>3(*c*1*c*2*c*<sup>3</sup> <sup>+</sup> *<sup>c</sup>*1*s*2*s*3))/2))/<sup>2</sup> <sup>−</sup> ( <sup>√</sup>3*s*4(*c*1*c*2*s*<sup>3</sup> <sup>−</sup> *<sup>c</sup>*1*c*3*s*2))/<sup>4</sup> <sup>+</sup> (*c*1*c*2*c*3)/4 + (*c*1*s*2*s*3)/4 + ( <sup>√</sup>3*c*4*s*5(*c*1*c*2*s*<sup>3</sup> <sup>−</sup> *<sup>c</sup>*1*c*3*s*2))/2)), *Y*<sup>21</sup> = 0, *Y*<sup>22</sup> = −θ˙ <sup>1</sup>*c*1*s*<sup>2</sup> − θ˙ <sup>2</sup>*c*2*s*1, *Y*<sup>23</sup> = −θ˙ <sup>1</sup>*s*1, *Y*<sup>24</sup> = θ˙ <sup>2</sup>(*s*1*s*2*s*<sup>3</sup> + *c*2*c*3*s*1) − θ˙ <sup>3</sup>(*s*1*s*2*s*<sup>3</sup> + *c*2*c*3*s*1) − θ˙ <sup>1</sup>(*c*1*c*2*s*<sup>3</sup> − *c*1*c*3*s*2),

$$
\begin{aligned}
Y_{25} ={}& \dot{\theta}_1\!\left(\frac{\sqrt{3}c_4s_1}{2} - \frac{\sqrt{3}s_4(c_1c_2c_3 + c_1s_2s_3)}{2} - \frac{c_1c_2s_3}{2} + \frac{c_1c_3s_2}{2}\right) \\
&+ (\dot{\theta}_2 - \dot{\theta}_3)\!\left(\frac{s_1s_2s_3}{2} + \frac{c_2c_3s_1}{2} - \frac{\sqrt{3}s_4(c_2s_1s_3 - c_3s_1s_2)}{2}\right) \\
&+ \dot{\theta}_4\!\left(\frac{\sqrt{3}c_1s_4}{2} - \frac{\sqrt{3}c_4(s_1s_2s_3 + c_2c_3s_1)}{2}\right),
\end{aligned}
$$

$$
\begin{aligned}
Y_{26} ={}& \dot{\theta}_1\!\left(\frac{\sqrt{3}c_4s_1}{4} - \frac{\sqrt{3}c_5}{2}\!\left(\frac{c_4s_1}{2} - \frac{s_4(c_1c_2c_3 + c_1s_2s_3)}{2} + \frac{\sqrt{3}(c_1c_2s_3 - c_1c_3s_2)}{2}\right) + \frac{\sqrt{3}s_5\big(s_1s_4 + c_4(c_1c_2c_3 + c_1s_2s_3)\big)}{2}\right. \\
&\left.\qquad - \frac{\sqrt{3}s_4(c_1c_2c_3 + c_1s_2s_3)}{4} - \frac{c_1c_2s_3}{4} + \frac{c_1c_3s_2}{4}\right) \\
&- \dot{\theta}_5\!\left(\frac{\sqrt{3}c_5\big(c_1s_4 - c_4(s_1s_2s_3 + c_2c_3s_1)\big)}{2} + \frac{\sqrt{3}s_5}{2}\!\left(\frac{c_1c_4}{2} - \frac{\sqrt{3}(c_2s_1s_3 - c_3s_1s_2)}{2} + \frac{s_4(s_1s_2s_3 + c_2c_3s_1)}{2}\right)\right) \\
&+ (\dot{\theta}_2 - \dot{\theta}_3)\!\left(\frac{s_1s_2s_3}{4} + \frac{c_2c_3s_1}{4} + \frac{\sqrt{3}c_5}{2}\!\left(\frac{\sqrt{3}(s_1s_2s_3 + c_2c_3s_1)}{2} + \frac{s_4(c_2s_1s_3 - c_3s_1s_2)}{2}\right)\right. \\
&\left.\qquad - \frac{\sqrt{3}s_4(c_2s_1s_3 - c_3s_1s_2)}{4} + \frac{\sqrt{3}c_4s_5(c_2s_1s_3 - c_3s_1s_2)}{2}\right) \\
&- \dot{\theta}_4\!\left(\frac{\sqrt{3}c_5}{2}\!\left(\frac{c_1s_4}{2} - \frac{c_4(s_1s_2s_3 + c_2c_3s_1)}{2}\right) - \frac{\sqrt{3}c_1s_4}{4} + \frac{\sqrt{3}s_5\big(c_1c_4 + s_4(s_1s_2s_3 + c_2c_3s_1)\big)}{2} + \frac{\sqrt{3}c_4(s_1s_2s_3 + c_2c_3s_1)}{4}\right),
\end{aligned}
$$

$$
Y_{31} = 0, \quad Y_{32} = -\dot{\theta}_2 s_2, \quad Y_{33} = 0, \quad Y_{34} = (\dot{\theta}_3 - \dot{\theta}_2)(c_2s_3 - c_3s_2),
$$

$$
Y_{35} = (\dot{\theta}_3 - \dot{\theta}_2)\!\left(\frac{c_2s_3}{2} - \frac{c_3s_2}{2} + \frac{\sqrt{3}s_4(c_2c_3 + s_2s_3)}{2}\right) + \frac{\sqrt{3}\dot{\theta}_4 c_4(c_2s_3 - c_3s_2)}{2},
$$

$$
\begin{aligned}
Y_{36} ={}& \dot{\theta}_5\!\left(\frac{\sqrt{3}s_5}{2}\!\left(\frac{\sqrt{3}(c_2c_3 + s_2s_3)}{2} + \frac{s_4(c_2s_3 - c_3s_2)}{2}\right) - \frac{\sqrt{3}c_4c_5(c_2s_3 - c_3s_2)}{2}\right) \\
&+ \dot{\theta}_4\!\left(\frac{\sqrt{3}c_4(c_2s_3 - c_3s_2)}{4} + \frac{\sqrt{3}s_4s_5(c_2s_3 - c_3s_2)}{2} - \frac{\sqrt{3}c_4c_5(c_2s_3 - c_3s_2)}{4}\right) \\
&+ (\dot{\theta}_3 - \dot{\theta}_2)\!\left(\frac{c_2s_3}{4} - \frac{c_3s_2}{4} + \frac{\sqrt{3}c_5}{2}\!\left(\frac{\sqrt{3}(c_2s_3 - c_3s_2)}{2} - \frac{s_4(c_2c_3 + s_2s_3)}{2}\right) + \frac{\sqrt{3}s_4(c_2c_3 + s_2s_3)}{4} - \frac{\sqrt{3}c_4s_5(c_2c_3 + s_2s_3)}{2}\right).
\end{aligned}
$$


# **Chapter 3 RNN Based Adaptive Compliance Control for Robots with Model Uncertainties**

**Abstract** Position-force control is challenging for redundant manipulators, especially when both joint physical limits and model uncertainties are considered. In this chapter, we consider adaptive motion-force control of redundant manipulators with uncertainties in both the interaction model and the physical parameters. The whole control problem is formulated as a QP problem with a set of equality and inequality constraints: based on the admittance control strategy, the desired motion-force task is combined with the kinematic property of redundant manipulators, corresponding to an equality constraint in the QP problem, while the uncertainties of the system model and physical parameters, together with the complicated joint physical structure constraints, are formulated as a set of inequality constraints. An adaptive recurrent neural network is then designed to solve the QP problem online. This control scheme generalizes recurrent neural network based kinematic control of manipulators to position-force control, which opens a new avenue for shifting position-force control of manipulators from a pure control perspective to a cross design with both convergence and optimality considered. Numerical results on the 7-DOF manipulator LBR iiwa and comparisons with existing methods show the validity of the proposed control method.

#### **3.1 Introduction**

A manipulator is called redundant if its DOFs exceed those required to complete a task. The redundant DOFs enable the robot to maintain the position and orientation of the end-effector for a given task while adjusting its joint configuration to complete a secondary task. Taking advantage of this feature, typical manipulator systems such as collaborative robots, space robotic arms, and dexterous hands [1, 2] are all designed as redundant ones.

In Chaps. 1 and 2, we mainly focused on kinematic problems, in which we assumed the end-effector of a manipulator could move freely in Cartesian space. In industrial applications, however, the interaction between the robot and the external environment must be considered; for example, in tasks such as grinding and human-robot interaction, not only must high-precision motion control along a given trajectory be guaranteed, but the contact force exerted on the external environment must also be regulated.

There are several approaches to achieving force control for robot manipulators. By introducing series elastic actuators as flexible units, force control can be realized by adjusting the compliance of joint angles. In [3], in order to overcome the discontinuous-friction and complexity problems of traditional back-stepping based methods, a modified command filter is introduced and an adaptive back-stepping controller is designed; experimental results show the effectiveness of the method. Other control schemes realize force control in Cartesian space. The most widely used method is impedance control [4], where the robot and environment are regarded as impedance and admittance, respectively. The interaction model (also called the impedance model) can be a spring-mass-damper system, a spring-damper system, etc. Besides, a series of hybrid position-force controllers are designed in [5, 6], which consist of two independent loops, namely a position loop and a force loop. By designing the control schemes separately, the final control effort is formulated as the sum of the outputs of the two loops. Similar research can be found in [7–9].

In industrial applications, the accurate value of the impedance model can hardly be obtained; for example, the stiffness parameter may be sensitive to environmental factors such as temperature, humidity, *etc.* Besides, uncertainties in the physical parameters also affect control performance. In order to deal with the uncertainties in the interaction model, an adaptive impedance controller is designed in [10], in which a neural network is used to learn the nonlinear dynamics of the interaction part. In [11], considering the influence of the unknown dynamics of the external environment, a radial basis function based controller is proposed, in which an objective function is used to regulate the torque and an adaptive admittance technique is used to minimize path tracking errors. In [12], a human-like-learning based controller is designed for interaction under environmental uncertainties; it is proved that the controller can handle unstable situations such as tool switching and achieve an expected stability margin, and contact force sensors are not required. Using the approximation ability of artificial neural networks, some intelligent controllers are reported in [13–17]. As to physical uncertainties, a fixed point controller is proposed in [18] based on robust adaptive control theory, which also ensures the boundedness of the control torque. Cheng et al. propose a unit-quaternion based controller using neural networks [19], which shows good performance in eliminating singularities, with semi-global stability proved theoretically. In [20], a Jacobian adaptation method based on zeroing dynamics is proposed, in which the Jacobian matrix is updated according to the desired and actual accelerations. Other feasible adaptive strategies are reported in [21–24], in which the Jacobian is estimated by updating the physical parameters online.
As to physical constraints, an adaptive neural network control scheme is designed in [42] for systems with non-symmetric input dead-zones, output constraints, and model uncertainties, where the output constraints are guaranteed by a barrier Lyapunov function. In [43], a boundary adaptive robust controller is established for flexible riser systems, in which an auxiliary system is introduced to suppress vibrational offset and an estimator is constructed to observe the upper bound of disturbances; the controller achieves global convergence of the control errors. Although the above-mentioned controllers can handle uncertainties in the interaction model or the physical parameters, few studies have considered both uncertainties at the same time. More importantly, these controllers rarely consider the secondary task, let alone the redundancy resolution problem. Besides, the bounds on the joint states are ignored, which are essential in protecting the robot.

In order to accomplish the secondary task within reliable physical ranges, QP based kinematic control methods for redundant manipulators have been proposed [25–28], where the objective function encodes the secondary task and the constraints describe the basic properties and physical limits of the system [29]. Because of the high efficiency of parallel computing, recurrent neural networks are often used to solve the QP based redundancy resolution problem. In recent years, research has shown that RNN based controllers perform well in motion control of redundant manipulators [30]. In [31], in order to achieve task-space tracking, the joint velocity command is designed to ensure the bounds on joint angles, velocities, and accelerations. In [32], an optimization scheme that maximizes manipulability is proposed, and numerical experiments show an average improvement of about 40%. In [33], different levels of redundancy resolution are discussed. Recently, RNN based methods have been extended to the control of flexible robots, multi-robot systems, and related settings [34–40]. However, to the best of our knowledge, there is no existing dynamic neural network (including RNN and DNN) scheme for force control of redundant manipulators, which must consider not only trajectory tracking along the free-motion directions but also precise control of the contact force, especially for systems with model uncertainty. In addition, the literature suggests that one promising research direction for dynamic neural networks is to extend schemes from pure motion control of redundant manipulators to tasks that require precise control of both tracking and contact force.

Based on the above observations, we propose the first RNN based position-force controller for redundant manipulators that considers the uncertainties of both the interaction model and the physical parameters. The ideal case with known model parameters is discussed first, and then an adaptive admittance control scheme based on RNNs is established, which ensures the bounds on joint angles and velocities. The effectiveness of the proposed controller is verified by theoretical derivation and by numerical results on the LBR iiwa. Before concluding this chapter, the main contributions compared with existing work are summarized as follows.


#### **3.2 Preliminaries**

#### *3.2.1 Problem Formulation*

When a robot is controlled to perform a given operational task, the forward kinematics of a serial manipulator is formulated as

$$\mathbf{x}(t) = f(\theta(t)),\tag{3.1}$$

with $\theta(t) \in \mathbb{R}^n$ being the generalized coordinate of the robot and $x(t) \in \mathbb{R}^m$ being the end-effector's coordinate in task space. Without loss of generality, we assume in this chapter that all joints are rotational, so $\theta$ represents the vector of joint angles. At the velocity level, the Jacobian matrix $J(\theta, a_k) = \partial f(\theta(t), a_k)/\partial \theta(t) \in \mathbb{R}^{m \times n}$ describes the relationship between $\dot{x}(t)$ and $\dot{\theta}(t)$ as

$$
\dot{x}(t) = J(\theta(t), a_k)\dot{\theta}(t), \tag{3.2}
$$

where $a_k \in \mathbb{R}^l$ is a vector of physical parameters. Associated with (3.2), an important property that will be used in the controller design is given below:

$$J(\theta(t), a\_k)\dot{\theta}(t) = Y(\theta, \dot{\theta})a\_k,\tag{3.3}$$

with $Y(\theta, \dot{\theta}) \in \mathbb{R}^{m \times l}$ being the kinematic regressor matrix.

The physical parameters are essential in describing the robot's kinematic model; for example, as the most common physical parameters, the lengths of the robot links directly determine the DH parameters, which are fundamental in the controller design. In this chapter, the physical parameters refer to the lengths of the robot links.
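The kinematics (3.1)–(3.3) can be illustrated on a toy model. The following sketch (not the 7-DOF iiwa; a hypothetical planar 2-link arm with assumed link lengths) computes the forward kinematics, the analytic Jacobian, and the kinematic regressor, and checks numerically that $J(\theta, a_k)\dot{\theta} = Y(\theta, \dot{\theta})a_k$ holds, i.e., that the task-space velocity is linear in the link lengths:

```python
import numpy as np

def fk(theta, a):
    """Forward kinematics x = f(theta), Eq. (3.1), for a planar 2-link arm.
    a = [l1, l2] are the link lengths (the physical parameters a_k)."""
    l1, l2 = a
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([l1*np.cos(t1) + l2*np.cos(t12),
                     l1*np.sin(t1) + l2*np.sin(t12)])

def jacobian(theta, a):
    """Analytic Jacobian J(theta, a_k) of Eq. (3.2)."""
    l1, l2 = a
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-l1*np.sin(t1) - l2*np.sin(t12), -l2*np.sin(t12)],
                     [ l1*np.cos(t1) + l2*np.cos(t12),  l2*np.cos(t12)]])

def regressor(theta, dtheta):
    """Kinematic regressor Y(theta, dtheta) of Eq. (3.3):
    J(theta, a_k) dtheta = Y(theta, dtheta) a_k, linear in the link lengths."""
    t1, t12 = theta[0], theta[0] + theta[1]
    d1, d12 = dtheta[0], dtheta[0] + dtheta[1]
    return np.array([[-np.sin(t1)*d1, -np.sin(t12)*d12],
                     [ np.cos(t1)*d1,  np.cos(t12)*d12]])

theta  = np.array([0.3, 0.8])      # assumed joint angles (rad)
dtheta = np.array([0.5, -0.2])     # assumed joint velocities (rad/s)
a      = np.array([0.36, 0.42])    # assumed link lengths (m)
assert np.allclose(jacobian(theta, a) @ dtheta,
                   regressor(theta, dtheta) @ a)   # property (3.3) holds
```

The same linearity in $a_k$ is what the adaptive law of Sect. 3.3.2 exploits to estimate the link lengths online.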

Figure 3.1 shows the interaction between the robot and the environment, where the contact force between the robot and the workpiece must be precisely controlled. When the fixed contact surface is known, following the idea of admittance control, the interaction model can be described as a spring-damper system:

$$F = K\_p(\mathbf{x} - \mathbf{x}\_d) + K\_d(\dot{\mathbf{x}} - \dot{\mathbf{x}}\_d),\tag{3.4}$$

where $K_p \in \mathbb{R}^{3\times3}$ and $K_d \in \mathbb{R}^{3\times3}$ are the corresponding stiffness and damping coefficients, and $x_d$ is the desired trajectory. If $K_p$ and $K_d$ are known, the desired contact

**Fig. 3.1** Spring-damper model of interaction

force $F_d$ can be obtained by designing the reference velocity of the end-effector $\dot{x}_r$ based on $\dot{x}_d$, $x_d$, $x$ and $F_d$ according to Eq. (3.4):

$$
\dot{x}_r = K_d^{-1} F_d - K_d^{-1} K_p (x - x_d) + \dot{x}_d.\tag{3.5}
$$
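A quick numerical sanity check of the admittance model, restricted to the scalar case along the surface normal (Remark 3.1): the reference velocity is solved from (3.4) with $F = F_d$, and substituting it back into (3.4) recovers the desired force. The position and velocity values below are illustrative assumptions; the gains follow the simulation setup of Sect. 3.4.

```python
import numpy as np

# Scalar admittance along the surface normal; K_p, K_d reduce to single
# parameters (Remark 3.1). Gains as in Sect. 3.4.1.
Kp, Kd = 5000.0, 20.0
x, xd  = 0.092, 0.094      # assumed actual / desired normal position (m)
xdot_d = 0.0               # desired normal velocity (m/s)
Fd     = 20.0              # desired contact force (N)

# Reference velocity solved from F = Kp(x - xd) + Kd(xdot - xdot_d) with F = Fd:
xdot_r = (Fd - Kp*(x - xd)) / Kd + xdot_d

# Substituting xdot_r back into the interaction model (3.4) recovers Fd:
F = Kp*(x - xd) + Kd*(xdot_r - xdot_d)
assert abs(F - Fd) < 1e-9
```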

**Remark 3.1** In this chapter, we only consider the contact force along the normal direction of the contact surface and ignore friction, so $F$ is aligned with the surface normal. When the surface is known a priori, by defining a rotation matrix $R$ between the tool coordinate system and the base coordinate system, $K_p$ and $K_d$ can be formulated as $K_p = \mathrm{diag}(0, 0, k_p)R$ and $K_d = \mathrm{diag}(0, 0, k_d)R$, respectively. Then $K_p$ and $K_d$ can be described by single parameters.

In practical applications, the real values of system parameters such as $a_k$, $K_p$ and $K_d$ are usually unavailable. In terms of $a_k$, due to machining and installation errors, the lengths of the robot's links may differ from their nominal values, and the robot may hold uncertain tools, which leads to uncertainties in $a_k$. As to $K_p$ and $K_d$, their real values are even more difficult to obtain: they are related to the material and structure of the workpiece and, furthermore, change under different environmental conditions. Therefore, achieving precise force control in the presence of parameter uncertainties is a challenging issue.

For a redundant manipulator, the redundant DOFs enhance the flexibility of the robot, and this property can be used to achieve a secondary task. In industrial applications, by minimizing the norm of the joint velocity, the kinetic energy can be optimized. Therefore, in this chapter, the objective function is selected as

$$H(\dot{\theta}) = \dot{\theta}^{\mathrm{T}} \dot{\theta}.\tag{3.6}$$

In order to reduce energy consumption during the control process, a smaller value of $H(\dot{\theta})$ is preferred.

**Remark 3.2** The objective function $H(\dot{\theta})$ is a typical choice for describing the secondary task in redundancy resolution problems, as reported in [21, 25]. In actual implementations, this function can be defined according to the designer's preferences or the actual requirements. In this chapter, we propose a generalized RNN based force control strategy with simultaneous optimization ability; based on it, similar controllers can easily be designed by defining different objective functions.

#### *3.2.2 Control Objective and QP Problem Formulation*

Before stating the control objective, it is noteworthy that the robot must satisfy certain constraints. For example, due to its physical structure, every joint angle $\theta_i$ must not exceed its limits, i.e., the lower bound $\theta_i^-$ and upper bound $\theta_i^+$. Furthermore, limited by the actual performance of the actuators, the joint velocity $\dot{\theta}$ is also restricted, i.e., $\dot{\theta}^- \le \dot{\theta} \le \dot{\theta}^+$.

When the actual parameters of the interaction model are unknown, the control objective of this chapter is to design a force controller with adaptation ability, *i.e.,* to realize accurate force control along the predefined contact surface, in the sense that $F \to F_d$, while the physical constraints on joint angles and velocities are respected. According to (3.2), (3.5) and (3.6), the control objective can be described as the optimization problem

$$\min \ H(\theta) = \dot{\theta}^{\text{T}} \dot{\theta},\tag{3.7a}$$

$$\text{s.t. } \dot{x}_r = J(\theta, a_k)\dot{\theta},\tag{3.7b}$$

$$
\dot{\mathbf{x}}\_r = K\_d^{-1} F\_d - K\_d^{-1} K\_p (\mathbf{x} - \mathbf{x}\_d) + \dot{\mathbf{x}}\_d,\tag{3.7c}
$$

$$
\theta^- \le \theta \le \theta^+,
\tag{3.7d}
$$

$$
\dot{\theta}^- \le \dot{\theta} \le \dot{\theta}^+.\tag{3.7e}
$$

**Remark 3.3** So far, we have arrived at a generalized description of admittance control for redundant manipulators as a QP problem. Apparently, there exist parameter uncertainties in $J(\theta, a_k)$, $k_p$ and $k_d$ as formulated in (3.7b) and (3.7c). In the next section, we will solve problem (3.7) with the aid of RNNs.
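Without the box constraints (3.7d)–(3.7e), the equality-constrained part of (3.7) has the classic minimum-norm closed form $\dot{\theta} = J^{+}\dot{x}_r$. The sketch below solves this reduced problem for an assumed toy Jacobian (1 task dimension, 2 joints, standing in for $m = 6$, $n = 7$); it is the inequality constraints that make the RNN solver of the next section necessary.

```python
import numpy as np

# Redundant toy case: 1 task dimension, 2 joints (assumed values).
J  = np.array([[1.0, 0.5]])       # Jacobian at the current configuration
xr = np.array([0.3])              # reference task velocity from (3.7c)

# Minimizer of H = ||w||^2 subject to J w = xr, ignoring (3.7d)-(3.7e):
# w = J^T (J J^T)^{-1} xr, the minimum-norm (pseudoinverse) solution.
w = J.T @ np.linalg.solve(J @ J.T, xr)

assert np.allclose(J @ w, xr)     # equality constraint (3.7b) satisfied
```

Once a joint approaches a bound, this closed form is no longer valid, which motivates the projection-based RNN of Sect. 3.3.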

#### **3.3 Main Results**

In this section, a recurrent neural network based adaptive admittance controller is proposed to solve (3.7). An ideal situation in which the real values of the system model are perfectly known is considered first, which lays the foundation for the later discussion. Then an adaptive RNN is proposed to achieve force control in the presence of model uncertainties. We also prove the stability of the control method.

#### *3.3.1 Nominal Design*

In order to explain the proposed adaptive control scheme more clearly, the ideal case in which all parameters are perfectly known is discussed first; it can be regarded as a special case of the uncertain-parameter one. In this case, both $K_d$ and $K_p$ are available, so the real value of $\dot{x}_r$ is available according to (3.5).

Let $\omega = \dot{\theta}$ and define a Lagrange function $L_1 = \omega^{\mathrm{T}}\omega + \lambda^{\mathrm{T}}(J\omega - K_d^{-1}F_d + K_d^{-1}K_p(x - x_d) - \dot{x}_d)$, with $\lambda$ being the Lagrange multiplier. Similar to [25], an RNN with provable convergence can be designed as

$$
\varepsilon \dot{\omega} = -\omega + P\_{\Omega}(-J^{\mathrm{T}}\lambda),
\tag{3.8a}
$$

$$
\varepsilon \dot{\lambda} = J\omega - K_d^{-1} F_d + K_d^{-1} K_p (x - x_d) - \dot{x}_d,\tag{3.8b}
$$

where $\varepsilon$ is a positive constant and $P_\Omega(\bullet)$ is the projection operator onto the set $\Omega$, defined as $P_\Omega(x) = \operatorname{argmin}_{y \in \Omega} \|y - x\|$. The set $\Omega = \{\omega \in \mathbb{R}^n \mid \omega_i^{\min} \le \omega_i \le \omega_i^{\max}\}$ is a convex set describing the modified velocity constraints based on the escape velocity method [29], with $\omega^{\min} = \max\{\alpha(\theta^- - \theta), \dot{\theta}^-\}$, $\omega^{\max} = \min\{\alpha(\theta^+ - \theta), \dot{\theta}^+\}$ and $\alpha > 0$. The stability of the system can be readily proved in a way similar to [25] and is omitted here.
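An Euler discretization of (3.8) makes the nominal RNN concrete. The sketch below freezes $J$ and $\dot{x}_r$ at assumed toy values (1 task dimension, 2 joints) and implements the projection as elementwise clipping onto the box $\Omega$; at equilibrium the network output satisfies $J\omega = \dot{x}_r$ with minimum norm, without ever inverting $J$.

```python
import numpy as np

J    = np.array([[1.0, 0.5]])      # assumed 1x2 Jacobian (redundant toy case)
xr   = np.array([0.3])             # assumed reference velocity (RHS of 3.8b)
lo, hi = -2.0, 2.0                 # box set Omega on the joint speeds
eps, dt = 0.002, 1e-4              # RNN time constant and Euler step (assumed)

w   = np.zeros(2)                  # neuron state: joint speed omega
lam = np.zeros(1)                  # neuron state: Lagrange multiplier

for _ in range(50000):
    # Eq. (3.8a): eps * dw/dt = -w + P_Omega(-J^T lam), projection as clipping
    w   += dt/eps * (-w + np.clip(-J.T @ lam, lo, hi))
    # Eq. (3.8b): eps * dlam/dt = J w - x_r (constraint residual drives lam)
    lam += dt/eps * (J @ w - xr)

# Equilibrium: J w = xr with minimum-norm w = J^+ xr
assert np.allclose(J @ w, xr, atol=1e-3)
```

The same primal-dual structure, augmented with the learning laws (3.11c)–(3.11d), yields the adaptive RNN of the next subsection.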

#### *3.3.2 Adaptive Control Method Based on RNN*

Based on the previous description, in this subsection an adaptive RNN is established, by learning the uncertain parameters online, to solve the force control problem with joint-speed optimization under model uncertainties; the stability of the system is also proved.

#### **3.3.2.1 Adaptive RNN Design**

In order to handle the uncertain interaction parameters $K_p$ and $K_d$, let $\hat{K}_p$ and $\hat{K}_d$ be their estimates. Although $K_p$ and $K_d$ are unknown, they are considered constant. The estimated reference velocity $\hat{\dot{x}}_r$ can then be derived by replacing $K_p$ and $K_d$ with $\hat{K}_p$ and $\hat{K}_d$ in (3.5):

$$
\hat{\dot{x}}_r = \hat{K}_d^{-1} F_d - \hat{K}_d^{-1} \hat{K}_p (x - x_d) + \dot{x}_d. \tag{3.9}
$$

Let $\eta = [x - x_d,\ \hat{\dot{x}}_r - \dot{x}_d]^{\mathrm{T}}$, $W = [K_p, K_d]^{\mathrm{T}}$ and $\hat{W} = [\hat{K}_p, \hat{K}_d]^{\mathrm{T}}$. Then (3.9) can be rewritten as $F_d = \hat{W}^{\mathrm{T}}\eta$. However, due to the uncertainties in $K_d$ and $K_p$, the contact force that actually results from applying $\hat{\dot{x}}_r$ directly is $F = W^{\mathrm{T}}\eta$; it is noteworthy that the contact force $F$ can be measured by force/torque sensors.

As to the uncertain $a_k$, the alternative Jacobian matrix $J(\theta, \hat{a}_k)$ is used by substituting $a_k$ with its estimate $\hat{a}_k$, and $J(\theta, a_k)$ in the equality constraint (3.7b) is replaced by $J(\theta, \hat{a}_k)$. Therefore, the force control problem with joint-speed optimization under model uncertainties can be formulated as

$$\min \ H(\omega) = \boldsymbol{\omega}^{\mathsf{T}} \boldsymbol{\omega},\tag{3.10a}$$

$$\text{s.t.}\quad J(\theta,\hat{a}\_k)\omega = \hat{K}\_d^{-1}F\_d - \hat{K}\_d^{-1}\hat{K}\_p(\mathbf{x} - \mathbf{x}\_d) + \dot{\mathbf{x}}\_d,\tag{3.10b}$$

$$
\theta^- \le \theta \le \theta^+,
\tag{3.10c}
$$

$$
\omega^- \le \omega \le \omega^+.\tag{3.10d}
$$

To solve (3.10), by defining a Lagrange function $L = \omega^{\mathrm{T}}\omega + \lambda^{\mathrm{T}}(J(\theta, \hat{a}_k)\omega - \hat{\dot{x}}_r)$, the adaptive RNN is designed as

$$
\varepsilon \dot{\omega} = -\omega + P\_{\Omega}(-\hat{J}^{\text{T}}\lambda),
\tag{3.11a}
$$

$$
\varepsilon \dot{\lambda} = J(\theta, \hat{a}\_k) \omega - \hat{\ddot{\chi}}\_r,\tag{3.11b}
$$

$$
\dot{\hat{W}} = -\Gamma_1 \eta (F_d - F)^{\mathrm{T}},\tag{3.11c}
$$

$$\dot{\hat{a}}_k = -\Gamma_2 Y^{\mathrm{T}}(J(\theta, \hat{a}_k)\omega - \dot{x}),\tag{3.11d}$$

where $\varepsilon$, $\Gamma_1$ and $\Gamma_2$ are positive gains. Figure 3.2 shows the framework of the proposed adaptive RNN for real-time force control with uncertain parameters. In order to learn the uncertain parameters, the neurons $\hat{W}$ and $\hat{a}_k$ update their values based on the desired signals $x_d$, $\dot{x}_d$ and $F_d$ and the feedback of $x$, $\dot{x}$ and $F$. The output of the RNN is exactly the joint velocity command $\omega$. By designing proper updating laws, $\lambda$ and $\omega$ achieve both the stability of the inner loop and the optimization of $H(\omega)$.
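The interaction-parameter law (3.11c) can be exercised in isolation on synthetic signals. In the toy-scaled sketch below (illustrative magnitudes, not the Sect. 3.4 values), the true $W = [K_p, K_d]^{\mathrm{T}}$ is hidden from the estimator, $\eta$ is chosen persistently exciting, and the estimate $\hat{W}$ is driven only by the measurable force error $F_d - F$, so the estimation error $\tilde{W}$ decays as in Step 1 of Theorem 3.1:

```python
import numpy as np

# Toy-scaled demo of the update law (3.11c); values are illustrative.
W_true = np.array([5.0, 2.0])      # true [K_p, K_d] (hidden from the estimator)
W_hat  = np.array([4.0, 1.0])      # nominal initial estimate W_hat(0)
G1, dt = 2.0, 1e-3                 # assumed gain Gamma_1 and Euler step

for k in range(20000):
    t   = k * dt
    # Persistently exciting regressor eta = [x - x_d, xhat_r' - x_d'] (synthetic)
    eta = np.array([np.sin(t), np.cos(0.7*t)])
    Fd  = W_hat @ eta     # desired force F_d = W_hat^T eta          (Eq. 3.9)
    F   = W_true @ eta    # measured force F  = W^T eta
    W_hat += -dt * G1 * eta * (Fd - F)   # update law (3.11c)

# W_tilde = W_hat - W decays under persistent excitation
assert np.allclose(W_hat, W_true, atol=0.05)
```

In the closed loop of Fig. 3.2, $\eta$ is generated by the tracking task itself rather than prescribed as here.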

**Remark 3.4** In this chapter, we consider the case where $m = 6$, $n = 7$ (where $m$ is the dimension of the Cartesian space and $n$ is the number of joints). Since only the contact force along the normal direction of the surface is considered, the dimensions of $K_d$ and $K_p$ are both 1 (the contact surface is known). As illustrated in Fig. 3.2, the proposed adaptive RNN has a typical one-layer architecture, and the total number of neurons is $n + l + m + 2$.

**Remark 3.5** The proposed adaptive RNN (3.11) can be regarded as a generalized form of the nominal RNN (3.8): when $\hat{W} = W$ and $\hat{a}_k = a_k$, it follows from (3.3) and (3.9) that $\dot{\hat{W}} = 0$ and $\dot{\hat{a}}_k = 0$, and (3.11) reduces to (3.8). It is remarkable, however, that the adaptive RNN is capable of dealing with model uncertainties. On the other hand, unlike the adaptive RNN based kinematic control strategies in [21, 22], the proposed controller achieves precise control of both position and contact force.

**Fig. 3.2** A schematic framework of the proposed RNN based force controller

#### **3.3.2.2 Stability Analysis**

So far, a theorem about the convergence of the proposed adaptive RNN for the force control problem in the presence of model uncertainties can be summarized as below.

**Theorem 3.1** Consider the force control problem for the class of redundant manipulators described in (3.1)–(3.4) with model uncertainties. The state variable $\omega$ of the proposed adaptive RNN will converge to the optimal solution of (3.7), i.e., the force control error will converge to 0, and the norm of the joint velocity will be optimized simultaneously.

*Proof* The proof consists of three steps. First, we prove that $\hat{W}$ and $\hat{a}_k$ learn the model parameters online (Steps 1 and 2); then the stability of the inner loop is analyzed (Step 3).

*Step 1.* Define the estimation error of the concatenated form of $W$ as $\tilde{W} = \hat{W} - W$, and let $e_f = F_d - F$ be the error between the desired and actual contact forces. From (3.9), $e_f$ can be formulated as $e_f = \hat{W}^{\mathrm{T}}\eta - W^{\mathrm{T}}\eta = \tilde{W}^{\mathrm{T}}\eta$. Consider the Lyapunov function $V_1 = \mathrm{tr}(\tilde{W}^{\mathrm{T}}\tilde{W})/2$, where $\mathrm{tr}(\bullet)$ is the trace of a matrix. Calculating the time derivative of $V_1$ yields

$$\begin{split} \dot{V}_1 &= tr(\tilde{W}^{\mathrm{T}} \dot{\tilde{W}}) = tr(-\Gamma_1 \tilde{W}^{\mathrm{T}} \eta (F_{\mathrm{d}} - F)^{\mathrm{T}}) \\ &= tr(-\Gamma_1 e_f e_f^{\mathsf{T}}) = -\Gamma_1 e_f^{\mathsf{T}} e_f \le 0. \end{split} \tag{3.12}$$

From (3.12) and (3.10), and using LaSalle's invariance principle [41], we have $e_f^{\mathrm{T}} e_f \to 0$ as $t \to \infty$. In other words, the state variable $\hat{W}$ ensures the convergence of the force error $e_f$ by modifying the end-effector's reference velocity $\hat{\dot{x}}_r$ according to (3.5).

*Step 2.* Define the estimation error of $a_k$ as $\tilde{a}_k = \hat{a}_k - a_k$, and let $V_2 = \tilde{a}_k^{\mathrm{T}} \tilde{a}_k / 2$. It is notable that during the control process $a_k$ can be regarded as constant, so $\dot{\tilde{a}}_k = \dot{\hat{a}}_k$. Actually, using the property described in Eq. (3.3), based on linearized

#### **Algorithm 3** The proposed adaptive RNN based force controller

**Input:** Nominal values of the interaction model $K_p^n$, $K_d^n$ and of the physical parameters $a_k^n$. Physical ranges of the joint angles and velocities $\theta_i^{\max}$, $\theta_i^{\min}$, $\dot{\theta}_i^{\max}$, $\dot{\theta}_i^{\min}$. The desired trajectory $x_d$, $\dot{x}_d$ and the desired contact force $F_d$. Positive control gains $\alpha$, $\Gamma_1$, $\Gamma_2$, $\varepsilon$. Sensor readings of the contact force $F$ and of the end-effector motion $x$, $\dot{x}$. Task duration $T$.

**Output:** Joint commands achieving position-force control in the presence of model uncertainties


1: Initialize $\lambda(0)$, $\hat{a}_k(0) \leftarrow a_k^n$, $\hat{W}(0) \leftarrow [K_p^n; K_d^n]$


4: Obtain the Jacobian matrix $J(\theta, \hat{a}_k)$ and the kinematic regressor matrix $Y(\theta, \dot{\theta})$


**Until** ($t > T$)

descriptions of $\dot{\theta}$ and $a_k$, respectively, the task-space velocity $\dot{x}$ has two equivalent descriptions, namely $J(\theta, a_k)\dot{\theta}$ and $Y(\theta, \dot{\theta})a_k$. As a result, the estimated value $\hat{\dot{x}}$ also has two similar descriptions, depending on the estimated kinematic parameter $\hat{a}_k$. Therefore, the updating law (3.11d) is equivalent to $-\Gamma_2 Y^{\mathrm{T}}(Y(\theta, \omega)\hat{a}_k - \dot{x})$. It then follows from (3.2) and (3.3) that

$$\begin{split} \dot{\hat{a}}\_{k} &= -\Gamma\_{2} Y^{\mathsf{T}}(J(\theta, \hat{a}\_{k})\omega - J(\theta, a\_{k})\omega) \\ &= -\Gamma\_{2} Y(\theta, \omega)^{\mathsf{T}} Y(\theta, \omega) \tilde{a}\_{k} . \end{split} \tag{3.13}$$

In light of (3.13), *V*˙ <sup>2</sup> can be rewritten as

$$\begin{split} \dot{V}_2 &= \tilde{a}_k^{\mathrm{T}} \dot{\tilde{a}}_k \\ &= -\Gamma_2 \tilde{a}_k^{\mathrm{T}} Y(\theta, \omega)^{\mathrm{T}} Y(\theta, \omega) \tilde{a}_k \le 0. \end{split} \tag{3.14}$$

It can then be readily obtained that $Y(\theta, \omega)\tilde{a}_k \to 0$ as $t \to \infty$. From (3.3) and the definition of $\tilde{a}_k$, $J(\theta, \hat{a}_k)\omega$ eventually converges to $J(\theta, a_k)\omega$, i.e., the equality constraint (3.10b) eventually becomes equivalent to (3.7b).
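The kinematic-parameter law (3.11d), in its regressor form (3.13), can likewise be checked numerically. The sketch below uses the hypothetical planar 2-link regressor (an assumption; the chapter's robot is the 7-DOF iiwa), drives the joints along an exciting trajectory, and shows that the link-length estimate $\hat{a}_k$ converges to the true value using only the measured task-space velocity:

```python
import numpy as np

def regressor(theta, dtheta):
    """Y(theta, dtheta) for a planar 2-link arm: xdot = Y a, a = [l1, l2]."""
    t1, t12 = theta[0], theta[0] + theta[1]
    d1, d12 = dtheta[0], dtheta[0] + dtheta[1]
    return np.array([[-np.sin(t1)*d1, -np.sin(t12)*d12],
                     [ np.cos(t1)*d1,  np.cos(t12)*d12]])

a_true = np.array([0.40, 0.30])    # true link lengths (hidden from the estimator)
a_hat  = np.array([0.36, 0.42])    # nominal initial estimate a_hat(0)
G2, dt = 10.0, 1e-3                # assumed gain Gamma_2 and Euler step

for k in range(30000):
    t      = k * dt
    theta  = np.array([np.sin(t), np.cos(0.7*t)])        # exciting joint motion
    dtheta = np.array([np.cos(t), -0.7*np.sin(0.7*t)])
    Y      = regressor(theta, dtheta)
    xdot   = Y @ a_true                                  # measured task velocity
    a_hat += -dt * G2 * Y.T @ (Y @ a_hat - xdot)         # law (3.11d) = (3.13)

# a_tilde = a_hat - a_k decays, so J(theta, a_hat) converges to J(theta, a_k)
assert np.allclose(a_hat, a_true, atol=0.05)
```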

*Step 3.* We now prove the stability of the inner-loop system. According to (3.11), the dynamics of $\omega$ and $\lambda$ can be reformulated as

$$
\varepsilon \dot{\xi} = -\xi + P_{\bar{\Omega}}[\xi - F(\xi)],\tag{3.15}
$$

with $\xi = [\omega^{\mathrm{T}}, \lambda^{\mathrm{T}}]^{\mathrm{T}}$, $\bar{\Omega} = \{(\omega, \lambda) \mid \omega \in \Omega, \lambda \in \mathbb{R}^m\}$, and

$$F(\xi) = \begin{bmatrix} \omega + \hat{J}^{\mathrm{T}} \lambda \\ -\hat{J}\omega + \hat{K}_d^{-1}F_d - \hat{K}_d^{-1}\hat{K}_p(x - x_{\mathrm{d}}) + \dot{x}_{\mathrm{d}} \end{bmatrix}.$$

Defining $\nabla F = \partial F(\xi)/\partial \xi$, we have

$$
\nabla F = \begin{bmatrix} I & \hat{J}^{\mathrm{T}} \\ -\hat{J} & 0 \end{bmatrix},
$$

with $I$ being the identity matrix. It can then be readily obtained that $\nabla F + (\nabla F)^{\mathrm{T}}$ is positive semi-definite; according to the definition in [32], $F$ is a monotone function of $\xi$. From the description of (3.11) and (3.15), $P_{\bar{\Omega}}$ can be formulated as $P_{\bar{\Omega}} = [P_\Omega; P_R]$, where $P_R \in \mathbb{R}^m$ is the projection operator of $\lambda$ onto the set $R$, whose upper and lower bounds are $\pm\infty$; therefore $P_{\bar{\Omega}}$ is a projection operator onto the closed set $\bar{\Omega}$. Based on Lemma 1 in [32], the adaptive RNN (3.11) is stable, and its output will ultimately be equivalent to the solution of (3.7). This completes the proof. $\blacksquare$

**Remark 3.6** Till now, we have shown the stability of the proposed RNN based adaptive admittance control strategy in the presence of uncertain model parameters. The established adaptive RNN is capable of maintaining the boundedness of system states and avoiding calculating the pseudo-inversion of the Jacobian matrix.

#### **3.4 Illustrative Examples**

In this section, numerical results on the 7-DOF robot manipulator LBR iiwa are presented. The physical structure and D-H parameters of the iiwa are shown in Fig. 3.3. As is well known, up to 6 DOFs (3 for position and 3 for orientation) are required to fulfill a given task in engineering applications; therefore, the iiwa is a typical redundant manipulator for force control when both the position and orientation of the end-effector are considered. As to the contact force, the contact surface is selected as a plane in the workspace, as shown in Fig. 3.3a. The end-effector is controlled to exert a desired contact force on the contact surface while tracking a given path on it, and the orientation of the end-effector is required to remain constant during the control process.

This section consists of three parts: first, a comparative simulation between the proposed controller and a pseudo-inverse of the Jacobian matrix (PJMI) based method is discussed; then the effectiveness of the proposed adaptive controller is checked on further cases; finally, additional discussion highlights the superiority of the proposed method and the contribution of this chapter.

#### *3.4.1 Simulation Setup*

In this section, the initial joint angles are set as $\theta_0 = [0, \pi/3, 0, \pi/3, 0, \pi/3, 0]^{\mathrm{T}}$ rad, and the corresponding coordinate of the end-effector is denoted $P_0$. The initial joint velocity is set as $\dot{\theta}_0 = [0, 0, 0, 0, 0, 0, 0]^{\mathrm{T}}$ rad/s. The contact surface is defined as a horizontal plane at $z = 0.094$ m, and the physical parameters of the interaction model are set as $K_p = 5000$ and $K_d = 20$, respectively. The limitations of

**Fig. 3.3** The architecture of 7-DOF manipulator iiwa. **a** Physical structure. **b** Table of D-H parameters

joint angles and velocities are selected as $\theta^- = [-2, -2, -2, -2, -2, -2, -2]^{\mathrm{T}}$ rad, $\theta^+ = [2, 2, 2, 2, 2, 2, 2]^{\mathrm{T}}$ rad, $\dot{\theta}^- = [-2, -2, -2, -2, -2, -2, -2]^{\mathrm{T}}$ rad/s and $\dot{\theta}^+ = [2, 2, 2, 2, 2, 2, 2]^{\mathrm{T}}$ rad/s, respectively. The control gains of the proposed adaptive RNN are set as $\varepsilon = 0.002$, $\Gamma_1 = \mathrm{diag}(5000, 3000)$, $\Gamma_2 = 100I$ and $\alpha = 8$, respectively.
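With these values, the escape-velocity bounds on $\Omega$ from Sect. 3.3.1, $\omega^{\min} = \max\{\alpha(\theta^- - \theta), \dot{\theta}^-\}$ and $\omega^{\max} = \min\{\alpha(\theta^+ - \theta), \dot{\theta}^+\}$, can be evaluated directly (a one-joint illustration; the configurations chosen below are assumptions):

```python
import numpy as np

# Escape-velocity bounds on Omega with the values of this section.
alpha = 8.0
th_lo, th_hi   = -2.0, 2.0          # joint-angle limits (rad)
dth_lo, dth_hi = -2.0, 2.0          # joint-speed limits (rad/s)

def speed_bounds(theta):
    """omega_min/omega_max per the escape velocity method of Sect. 3.3.1."""
    w_min = np.maximum(alpha * (th_lo - theta), dth_lo)
    w_max = np.minimum(alpha * (th_hi - theta), dth_hi)
    return w_min, w_max

# Far from the angle limits, the plain speed limits dominate:
w_min, w_max = speed_bounds(np.array([0.0]))
assert w_min[0] == -2.0 and w_max[0] == 2.0

# Near the upper angle limit, the admissible speed shrinks toward zero:
w_min, w_max = speed_bounds(np.array([1.9]))
assert abs(w_max[0] - 0.8) < 1e-9 and w_min[0] == -2.0
```

This shrinking box is exactly what lets the RNN stop $\theta_6$ at its limit in the comparison of Sect. 3.4.2.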

#### *3.4.2 Comparative Simulation with the PJMI Method*

Firstly, a comparative simulation between the proposed control strategy and the traditional Jacobian-pseudo-inverse based method is carried out to show the superiority of the RNN based controller. The robot is expected to provide a contact force of 20 N at a fixed point *P*1 = [0.2, 0.6, 0.094]<sup>T</sup> m, without considering the orientation control of the end-effector. In traditional PJMI based methods, the joint commands are obtained by directly calculating the pseudo-inverse of the Jacobian matrix online, and only the particular solution is considered. Simulation results are shown in Fig. 3.4. Both controllers guarantee the convergence of position and force errors. Using the same control gain in the outer loop, the PJMI based controller achieves faster convergence of the control errors, but its output is large at the beginning of the simulation (the Euclidean norm of the joint velocity is about 20 rad/s); moreover, as shown in Fig. 3.4c, the joint angle θ<sub>6</sub> exceeds its upper bound during 0.2–1 s. In contrast, using the RNN based controller, both joint angles and velocities are ensured not to exceed their limits. It is worth noting that at about *t* = 0.6 s, θ<sub>6</sub> reaches its upper limit (Fig. 3.4e); correspondingly, the joints move over a relatively large range, as shown in Fig. 3.4f, and as a result θ<sub>6</sub> stops increasing and then converges to a certain value via self-motion.

**Fig. 3.4** Numerical results of comparative simulation between the proposed scheme and PJMI based methods. **a** Profile of position and force errors. **b** Euclidean norm of joint velocities. **c** Profile of joint angles using PJMI method. **d** Profile of joint velocities using PJMI method. **e** Profile of joint angles using the proposed method. **f** Profile of joint velocities using the proposed method

#### *3.4.3 Force Control Along Predefined Trajectories with Model Uncertainties*

In this subsection, we carry out a group of numerical tests to further verify the validity of the RNN based admittance controller (3.11). In terms of the interaction parameters, we assume the nominal values of *Kp* and *Kd* are 4500 and 15, respectively. As to the kinematic parameters, the nominal value of *a<sub>k</sub>* is set to *a*ˆ*<sub>k</sub>*(0) = [*D*ˆ<sub>1</sub>(0), *D*ˆ<sub>3</sub>(0), *D*ˆ<sub>5</sub>(0), *D*ˆ<sub>7</sub>(0)]<sup>T</sup> = [0.36, 0.4, 0.42, 0.25]<sup>T</sup> m.

#### *(1) Force Control On Fixed Points*

Similar to Sect. 3.4.2, motion-force control at fixed points is studied first. When the simulation begins, the target point is set as *P*1 = [0.2, −0.6, 0.094]<sup>T</sup> m; at *t* = 5 s, the target point is reset to *P*2 = [0.2, −0.4, 0.094]<sup>T</sup> m. During the simulation, the desired contact force between the end-effector and the contact surface is selected as *F*<sub>d</sub> = 20 N. Numerical results are shown in Figs. 3.5 and 3.6.

The position error when the simulation begins is about 0.2 m; accordingly, the proposed RNN based controller generates a large output, which ensures the quick convergence of both motion and force errors. The stabilization time is about 0.5 s. At *t* = 5 s, the target point is switched to *P*2, leading to an instantaneous change of the position error. Using the adaptive admittance controller (3.11), the robot adjusts its joint configuration quickly and then slows down as the errors converge. It is remarkable that the second joint reaches its maximum value, and during the whole process the joint velocities are guaranteed not to exceed the predefined limits. The estimated values *K*ˆ*<sub>p</sub>* and *K*ˆ*<sub>d</sub>* are shown in Fig. 3.5f; although the exact values of *Kp* and *Kd* are unknown, by updating *K*ˆ*<sub>p</sub>* and *K*ˆ*<sub>d</sub>* online according to (3.11), precise control of the contact force is achieved. The difference between the task-space speed *x*˙ and its estimate *Ya*ˆ*<sub>k</sub>* is shown in Fig. 3.5g; correspondingly, *D*ˆ<sub>1</sub>, *D*ˆ<sub>3</sub>, *D*ˆ<sub>5</sub> and *D*ˆ<sub>7</sub> converge to a group of constant values.

#### *(2) Force Control Along A Circular Path*

In this example, the end-effector is controlled to exert a constant contact force *F*<sub>d</sub> = 20 N while tracking a circular trajectory on the contact surface. The trajectory is defined as *x*<sub>d</sub> = [−0.1 + 0.1*cos*(0.5*t*), −0.6 − 0.1*sin*(0.5*t*), 0.094]<sup>T</sup> m, and the orientation is required to remain the same as in the initial state. Numerical results are shown in Figs. 3.7 and 3.8. As shown in Fig. 3.7a, the robot tracks the desired path successfully, both position and orientation errors converge to zero in less than 1 s, and the expected contact force is also obtained. Because of the periodicity of the desired commands, the robot's joint angles and angular velocities change periodically; at the same time, the boundedness of θ and θ˙ is also guaranteed. On the other hand, the smooth evolution of θ and θ˙ shows that the proposed controller is very stable. Based on the adaptive strategy (3.11d), the system shows great robustness against uncertain system parameters.

#### *(3) Force Control Along A Rhodonea Path*

In this example, we consider the case where the robot exerts a time-varying contact force while tracking a Rhodonea path. The desired contact force is set to *F*<sub>d</sub> = 20 + 5*sin*(0.2*t*) N, and the Rhodonea path is defined as

$$\begin{aligned} x\_{dX}(t) &= 0.1 \sin(0.4t) \cos(0.2t), \\ x\_{dY}(t) &= 0.15 \sin(0.4t) \sin(0.2t) - 0.6, \\ x\_{dZ}(t) &= 0.094. \end{aligned}$$
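The reference force and path above can be sampled directly; a minimal sketch (the function name `rhodonea_reference` is ours, not from the book):

```python
import numpy as np

def rhodonea_reference(t):
    """Desired task-space point x_d(t) [m] and contact force F_d(t) [N]
    for the Rhodonea test case of Sect. 3.4.3."""
    xd = np.array([0.1  * np.sin(0.4 * t) * np.cos(0.2 * t),
                   0.15 * np.sin(0.4 * t) * np.sin(0.2 * t) - 0.6,
                   0.094])
    Fd = 20.0 + 5.0 * np.sin(0.2 * t)
    return xd, Fd

xd0, Fd0 = rhodonea_reference(0.0)
print(xd0, Fd0)   # [0. -0.6  0.094] 20.0  -- the path starts above the surface center
```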

**Fig. 3.5** Numerical results of force control at fixed points with uncertain model parameters. **a** Profile of positional error. **b** Profile of orientational error. **c** Profile of contact force. **d** Profile of joint angles. **e** Profile of joint speed. **f** Profile of the estimated interaction coefficients. **g** Profile of ||*Ya*ˆ*<sup>k</sup>* − ˙*x*||<sup>2</sup> <sup>2</sup>. **h** Profile of the estimated physical parameters

**Fig. 3.6** Snapshots when iiwa offers a constant contact force at fixed points. **a** Snapshot when *t* = 2s. **b** Snapshot when *t* = 7s

Numerical results are shown in Figs. 3.9 and 3.10. Figure 3.9a, b show the position and orientation errors of the end-effector with respect to the desired path, respectively. In the steady state, accurate motion control is realized using the proposed controller, and the contact force between the end-effector and the contact surface is shown in Fig. 3.9c. At the beginning, the joint speed is high, which enables the end-effector to move toward the desired path rapidly. As the end-effector approaches the expected path, the robot moves at a low speed, periodically and smoothly; correspondingly, the joint angles change at the same frequency. The Euclidean norm of *Ya*ˆ*<sub>k</sub>* − ˙*x* is illustrated in Fig. 3.9g; the proposed RNN based control strategy is able to calculate the control command ω online subject to model uncertainties. The estimated model parameters are given in Fig. 3.9f, h.

#### *3.4.4 Comparison*

To further illustrate the contribution of the proposed force control strategy, we provide a comparison between the proposed method and existing related methods, as shown in Table 3.1. In [11], an adaptive admittance control scheme based on the approximation capability of neural networks is proposed, and the admittance tracking error is optimized; however, no physical constraints are considered. In [10], although the established impedance controller can guarantee input saturation, the controller still needs to calculate the pseudo-inverse of the Jacobian. In [25, 32], pseudo-inverse-free controllers based on RNNs are designed to realize task-space tracking of redundant robots, and their convergence is proved; both convex and non-convex optimization formulations are obtained. Considering physical uncertainties,

**Fig. 3.7** Numerical results of force control along a circular curve with uncertain model parameters. **a** Profile of positional error. **b** Profile of orientational error. **c** Profile of contact force. **d** Profile of joint angles. **e** Profile of joint speed. **f** Profile of the estimated interaction coefficients. **g** Profile of the estimated physical parameters. **<sup>h</sup>** Profile of the objective function ||ω||<sup>2</sup> 2

**Fig. 3.8** Snapshots when iiwa offers a constant contact force along a circular curve. **a** Snapshot when *t* = 8s. **b** Snapshot when *t* = 15s


**Table 3.1** Comparisons among different tracking controllers on manipulators

*<sup>a</sup>* In [10], only the input saturation is considered *<sup>b</sup>* In [21, 22], the authors only consider the uncertainties of physical parameters, while the contact force is ignored

two different adaptive strategies are proposed in [21, 22]. The controller proposed in this chapter is, to the best of our knowledge, the first RNN-based force controller. Moreover, this control scheme remains applicable in the presence of model uncertainty and no longer needs to calculate the pseudo-inverse of the Jacobian matrix, so it has great application potential in force control.

#### **3.5 Summary**

In this chapter, we propose an adaptive admittance control method for redundant robots based on a recurrent neural network. The convergence of the adaptive RNN is proved via a theoretical derivation based on the Lyapunov technique, and the effectiveness of the control strategy is verified by numerical simulation on the 7-DOF robot

**Fig. 3.9** Numerical results of force control along a Rhodonea curve with uncertain model parameters. **a** Profile of positional error. **b** Profile of orientational error. **c** Profile of contact force. **d** Profile of joint angles. **e** Profile of joint speed. **f** Profile of the estimated interaction coefficients. **g** Profile of ||*Ya*ˆ*<sup>k</sup>* − ˙*x*||<sup>2</sup> <sup>2</sup>. **h** Profile of the estimated physical parameters

**Fig. 3.10** Snapshots when iiwa offers a constant contact force along a Rhodonea curve. **a** Snapshot when *t* = 4s. **b** Snapshot when *t* = 19s. **c** Snapshot when *t* = 27s

iiwa. Compared with existing control methods, the proposed controller not only performs better in handling physical constraints but also eliminates the pseudo-inverse calculation. Finally, it is worth noting that this is the first time an RNN-based approach has been extended to force control of redundant manipulators, especially manipulators with model uncertainties. This research is of great significance for grinding robots, assembly robots, and other industrial applications.

#### **References**



**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 4 Deep RNN Based Obstacle Avoidance Control for Redundant Manipulators**

**Abstract** In this chapter, we consider the obstacle avoidance problem of redundant robot manipulators with physical constraint compliance, where both static and dynamic obstacles are investigated. The robot and the obstacles are abstracted as two critical point sets, respectively; relying on general class-K functions, the obstacle avoidance problem is formulated as an inequality at the speed level. The minimal-velocity-norm (MVN) criterion is regarded as the cost function, converting the kinematic control problem of redundant manipulators with obstacle avoidance into a constrained quadratic-programming problem, in which the joint angle and joint velocity constraints are built at the velocity level in the form of inequalities. To solve it, a novel deep recurrent neural network based controller is proposed. Theoretical analyses and the corresponding simulation experiments are given successively, showing that the proposed neural controller not only avoids collisions with obstacles but also tracks the desired trajectory correctly.

#### **4.1 Introduction**

With the development of intelligent manufacturing and automation, research on robot manipulators is attracting increasing attention from a large number of scholars, and numerous results have been reported on painting, welding, assembly [1, 2] and so on. With the popularization of robots, higher requirements such as flexibility and execution ability are imposed on robots, especially when working in complicated environments [3]. Consequently, more and more scholars focus on redundant robots, which show better flexibility and responsiveness [4, 5].

Stemming from the consideration of human-machine collaboration, robots are no longer arranged in separate areas [6–8], which makes obstacle avoidance an important part of the kinematic control of robot manipulators. Many obstacle avoidance methods applicable to robot manipulators have been reported. A modified RRT based method, namely Smoothly RRT, was proposed in [9]; this work established a maximum curvature constraint to obtain a smooth curve when avoiding obstacles. Compared to the traditional RRT based method, the proposed method achieves faster convergence. In [10], Hsu investigated the probabilistic foundations of PRM based methods, concluding that the visibility properties have a heavy impact on the success probability, and that convergence would be faster if exact partial knowledge could be introduced. However, due to their heavy computational burdens, these methods can hardly be used online.

Apart from the stochastic sampling based algorithms mentioned above, the artificial potential field method is also a viable method for obstacle avoidance and has found application in [11–15]. Taking advantage of the redundant DOFs, obstacles can be avoided by self-motion in the null space. Using the pseudo-inverse of the Jacobian matrix, the solution can be built as the sum of a minimum-norm particular solution and homogeneous solutions [16–18].

With their parallelism and ease of hardware implementation, neural networks have become a powerful tool in robot control. Artificial intelligence algorithms based on neural networks provide a new view of robotic control; these methods are very promising due to neural networks' excellent learning ability [19]. For example, in [20], a neural network based learning scheme was proposed to handle functional uncertainties. In [21], a bio-mimetic hybrid controller was designed, in which the control strategy consists of an RBF neural network based feed-forward predictive machine and a feedback servo machine based on a proportional-derivative controller. In [22], a fuzzy logic controller is proposed for long-term navigation of quad-rotor UAV systems with input uncertainties; experimental results show that the controller achieves better control performance than its singleton counterparts. In [23], an online learning mechanism is built for visual tracking systems; the controller uses both positive and negative sample importance as input, and it is shown that the proposed weighted multiple instance learning scheme achieves excellent tracking performance in challenging environments. The system model of robot manipulators is highly nonlinear; however, if prior information about the model is known in advance, the neural network can be optimized. That is to say, on the one hand, the number of nodes in the neural network can be reduced; on the other hand, excellent learning efficiency is maintained simultaneously [24]. Therefore, to achieve real-time control of robot manipulators, a series of dynamic neural networks have been proposed, such as [25–27]. For kinematic control of redundant manipulators, such a time-varying problem is transformed into a quadratic program from the perspective of optimization, where the nonlinear mapping from joint space to Cartesian space is abstracted as a linear equation.
Dynamic neural networks can be used to solve the quadratic-programming problem online; therefore, the kinematic control of manipulators is achieved once the formulated linear equation is ensured. More importantly, these methods can also handle inequality constraints arising from joint physical limits, as well as model uncertainties [28–32]. There are few works on obstacle avoidance using dynamic neural networks. In [33], the obstacle avoidance problem is treated as an equality constraint; however, the parameters of the escape velocity are not easy to obtain. In [34], the distance between the robot and obstacles is formulated as a group of distances from critical points to robot links. On this basis, an improved method is proposed by Guo et al. in [35], which can suppress undesirable discontinuities in the original solutions.

Motivated by the above observations, in this chapter an RNN-based obstacle avoidance strategy is proposed for redundant robot manipulators. The robot and the obstacles are abstracted as two critical point sets, respectively; relying on class-K functions, the obstacle avoidance problem is formulated as an inequality at the speed level. The minimal-velocity-norm (MVN) criterion is regarded as the cost function, converting the kinematic control problem of redundant manipulators with obstacle avoidance into a constrained quadratic-programming problem, in which the joint angle and joint velocity constraints are built at the velocity level in the form of inequalities. To solve it, a novel deep recurrent neural network based controller is proposed. Theoretical analyses and the corresponding simulation experiments are given successively, showing that the proposed neural controller not only avoids collisions with obstacles but also tracks the desired trajectory correctly. The main contributions of this chapter are summarized as below:


#### **4.2 Problem Formulation**

#### *4.2.1 Basic Description*

When a redundant robot is controlled to track a particular trajectory in the cartesian space, the positional description of the end-effector can be formulated as

$$
x = f(\theta),
\tag{4.1}
$$

where *x* ∈ R<sup>*m*</sup> and θ ∈ R<sup>*n*</sup> are the end-effector's position vector and the joint angles, respectively. At the velocity level, the kinematic mapping between *x*˙ and θ˙ can be described as

$$
\dot{x} = J(\theta)\dot{\theta},\tag{4.2}
$$

where *<sup>J</sup>* (θ ) <sup>∈</sup> <sup>R</sup>*<sup>m</sup>*×*<sup>n</sup>* is the Jacobian matrix from the end-effector to joint space.

In engineering applications, obstacles are inevitable in the workspace of a robot manipulator. For example, robot manipulators usually work in a limited workspace restricted by fences, which are used to isolate robots from humans or other robots.

This problem could be even more acute in tasks which require the collaboration of multiple robots. Let *C*<sub>1</sub> be the set of all the points on the robot body, and *C*<sub>2</sub> be the set of all the points on the obstacles; then the purpose of obstacle avoidance of a robot manipulator is to ensure *C*<sub>1</sub> ∩ *C*<sub>2</sub> = ∅ at all times. By introducing *d* as a safety distance between the robot and obstacles, the obstacle avoidance is reformulated as

$$|O\_j A\_i| \ge d, \qquad \forall A\_i \in C\_1, \forall O\_j \in C\_2, \tag{4.3}$$

where $|O\_j A\_i| = \sqrt{(A\_i - O\_j)^{\mathrm{T}}(A\_i - O\_j)}$ is the Euclidean norm of the vector $A\_i - O\_j$.

Equation (4.3) gives a basic description of the obstacle avoidance problem in the form of inequalities. However, there are too many elements in the sets *C*<sub>1</sub> and *C*<sub>2</sub>, the vast majority of which are actually unnecessary. Therefore, by uniformly selecting points of representative significance from *C*<sub>1</sub> and *C*<sub>2</sub>, and increasing *d* properly, Eq. (4.3) can be approximately described as

$$|O\_j A\_i| \ge d,\tag{4.4}$$

with *A<sub>i</sub>*, *i* = 1, ..., *a* and *O<sub>j</sub>*, *j* = 1, ..., *b* being the representative points of the robot and the obstacles, respectively. The schematic diagram of Eq. (4.4) is shown in Fig. 4.1.
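The discretized condition (4.4) amounts to a pairwise distance check between the two representative point sets; a minimal NumPy sketch with hypothetical toy points:

```python
import numpy as np

def collision_free(A_pts, O_pts, d):
    """Check the discretized safety condition (4.4): every representative
    robot point A_i stays at least d away from every obstacle point O_j."""
    A = np.asarray(A_pts)          # shape (a, m)
    O = np.asarray(O_pts)          # shape (b, m)
    # pairwise Euclidean distances |O_j A_i|, shape (a, b), via broadcasting
    dist = np.linalg.norm(A[:, None, :] - O[None, :, :], axis=2)
    return bool(np.all(dist >= d))

A = [[0.0, 0.0], [1.0, 0.0]]       # robot critical points (toy 2-D example)
O = [[0.0, 0.5]]                   # one obstacle point
print(collision_free(A, O, d=0.3))   # True  (closest pair is 0.5 apart)
print(collision_free(A, O, d=0.6))   # False
```

This check is only a monitor; the controller derived below enforces (4.4) proactively through the speed-level inequality.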

#### *4.2.2 Reformulation of Inequality in Speed Level*

In order to guarantee the inequality (4.4), by defining *D* = |*O<sub>j</sub> A<sub>i</sub>*| − *d*, an inequality is rebuilt at the speed level as

$$\mathbf{d}(|O\_j A\_i|)/\mathbf{d}t \ge -\text{sgn}(D)\mathbf{g}(|D|),\tag{4.5}$$

in which *g*(•) belongs to class-*K*. It is remarkable that the velocities of the critical points *A<sub>i</sub>* can be described in terms of the joint velocities as


$$
\dot{A}\_i = J\_{ai}(\theta)\dot{\theta},\tag{4.6}
$$

where *J<sub>ai</sub>* ∈ R<sup>*m*×*n*</sup> is the Jacobian matrix from the critical point *A<sub>i</sub>* to the joint space. If *O<sub>j</sub>* is known a priori, the left side of Eq. (4.5) can be reformulated as

$$\begin{split} \frac{\mathrm{d}}{\mathrm{d}t}(|O\_{j}A\_{i}|) &= \frac{\mathrm{d}}{\mathrm{d}t}\left(\sqrt{(A\_{i}-O\_{j})^{\mathrm{T}}(A\_{i}-O\_{j})}\right)\\ &= \frac{1}{|O\_{j}A\_{i}|}(A\_{i}-O\_{j})^{\mathrm{T}}(\dot{A}\_{i}-\dot{O}\_{j})\\ &= \overrightarrow{|O\_{j}A\_{i}|}^{\mathrm{T}}J\_{ai}(\theta)\dot{\theta}-\overrightarrow{|O\_{j}A\_{i}|}^{\mathrm{T}}\dot{O}\_{j},\end{split} \tag{4.7}$$

where $\overrightarrow{|O\_j A\_i|} = (A\_i - O\_j)/|O\_j A\_i| \in \mathbb{R}^{m}$ is the unit vector along $A\_i - O\_j$. Therefore, the collision between the critical point *A<sub>i</sub>* and the obstacle point *O<sub>j</sub>* can be avoided by ensuring the following inequality

$$J\_{oij}\dot{\theta} \le \text{sgn}(D)g(|D|) - \overrightarrow{|O\_j A\_i|}^{\mathrm{T}}\dot{O}\_j,\tag{4.8}$$

where $J\_{oij} = -\overrightarrow{|O\_j A\_i|}^{\mathrm{T}} J\_{ai} \in \mathbb{R}^{1\times n}$. Based on the inequality description (4.8), the collision between the robot and the obstacles can be avoided by ensuring

$$J\_o \dot{\theta} \le B,\tag{4.9}$$

where $J\_o = [J\_{o11}^{\mathrm{T}}, \cdots, J\_{o1b}^{\mathrm{T}}, \cdots, J\_{oa1}^{\mathrm{T}}, \cdots, J\_{oab}^{\mathrm{T}}]^{\mathrm{T}} \in \mathbb{R}^{ab\times n}$ is the concatenated form of $J\_{oij}$ considering all pairs of $A\_i$ and $O\_j$, and $B = [B\_{11}, \cdots, B\_{1b}, \cdots, B\_{a1}, \cdots, B\_{ab}]^{\mathrm{T}} \in \mathbb{R}^{ab}$ is the vector of upper bounds, in which $B\_{ij} = \text{sgn}(D)g(|D|) - \overrightarrow{|O\_j A\_i|}^{\mathrm{T}}\dot{O}\_j$.
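The stacking of *J<sub>o</sub>* and *B* over all (*A<sub>i</sub>*, *O<sub>j</sub>*) pairs can be sketched as follows; the helper name `stack_constraints`, the toy numbers, and the linear class-K gain are illustrative assumptions, not the book's code:

```python
import numpy as np

def stack_constraints(A_pts, A_jacs, O_pts, O_vels, d, g=lambda s: 5.0 * s):
    """Build J_o and B of (4.9) from all pairs (A_i, O_j).
    g is a class-K escape function; g(s) = k*s recovers the methods of
    [34, 35] (the gain 5.0 here is a hypothetical choice)."""
    rows, bounds = [], []
    for A_i, J_ai in zip(A_pts, A_jacs):
        for O_j, O_dot in zip(O_pts, O_vels):
            diff = A_i - O_j
            dist = np.linalg.norm(diff)
            u = diff / dist                       # unit vector along A_i - O_j
            D = dist - d                          # signed safety margin
            rows.append(-u @ J_ai)                # row J_oij = -u^T J_ai
            bounds.append(np.sign(D) * g(abs(D)) - u @ O_dot)   # bound B_ij
    return np.vstack(rows), np.array(bounds)      # shapes (ab, n), (ab,)

# Toy planar example: one critical point, one static obstacle, n = 3 joints.
Jo, B = stack_constraints([np.array([1.0, 0.0])],
                          [np.array([[1.0, 0.0, 0.0],
                                     [0.0, 1.0, 0.0]])],
                          [np.array([0.0, 0.0])], [np.zeros(2)], d=0.5)
print(Jo, B)   # one row, Jo ≈ [[-1, 0, 0]], B = [2.5]
```

Each row constrains only the joint-velocity component that moves the pair closer; far pairs get loose bounds, so the constraint is effectively inactive until an obstacle is near.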

**Remark 4.1** According to Eq. (4.5) and the definition of class-K functions, the original escape-velocity based obstacle avoidance methods in [34, 35] can be regarded as a special case of (4.5), in which *g*(|*D*|) is selected as *g*(|*D*|) = *k*|*D*|. Therefore, the obstacle avoidance strategy proposed in this chapter is more general than the traditional methods.

#### *4.2.3 QP Type Problem Description*

As to redundant manipulators, in order to take full advantage of the redundant DOFs, the robot can be designed to fulfill a secondary task while tracking a desired trajectory. In this chapter, the secondary task is set to minimize the joint velocity while avoiding obstacles. In real implementations, both joint angles and velocities are limited because of physical limitations such as mechanical constraints and actuator saturation. Because rank(*J*) < *n*, there may be infinitely many solutions achieving kinematic control. In this chapter, we aim to design a kinematic controller which is capable of avoiding obstacles while tracking a pre-defined trajectory in the Cartesian space. For safety, the robot is desired to move at a low speed; in addition, low energy consumption is guaranteed. By defining an objective function scaling the joint velocity as θ˙<sup>T</sup>θ˙/2, the tracking control of a redundant manipulator with obstacle avoidance can be described as

$$\min\quad \dot{\theta}^{\mathrm{T}}\dot{\theta}/2,\tag{4.10a}$$

$$\text{s.t.}\quad x = x\_{\mathrm{d}},\tag{4.10b}$$

$$\theta^- \le \theta \le \theta^+,\tag{4.10c}$$

$$\dot{\theta}^- \le \dot{\theta} \le \dot{\theta}^+,\tag{4.10d}$$

$$J\_o \dot{\theta} \le B.\tag{4.10e}$$

It is remarkable that the constraints (4.10b)–(4.10e) and the objective function (4.10a) to be optimized are built at different levels, which makes the problem very difficult to solve directly. Therefore, we transform the original QP problem (4.10) into the velocity level. In order to realize precise tracking control of the desired trajectory *x*<sub>d</sub>, by introducing a negative feedback in the outer loop, the equality constraint can be ensured by letting the end-effector move at a velocity of *x*˙ = ˙*x*<sub>d</sub> + *k*(*x*<sub>d</sub> − *x*). In terms of (4.10c), according to the escape velocity method, it can be satisfied by limiting the joint speed as α(θ<sup>−</sup> − θ) ≤ θ˙ ≤ α(θ<sup>+</sup> − θ), where α is a positive constant. Combining the kinematic property (4.2), the reformulated QP problem is

$$\min \quad \dot{\theta}^{\mathrm{T}} \dot{\theta} / 2,\tag{4.11a}$$

$$\text{s.t.}\quad J(\theta)\dot{\theta} = \dot{\mathbf{x}}\_{\mathsf{d}} + k(\mathbf{x}\_{\mathsf{d}} - \mathbf{x}),\tag{4.11b}$$

$$\max(\alpha(\theta^- - \theta), \dot{\theta}^-) \le \dot{\theta} \le \min(\dot{\theta}^+, \alpha(\theta^+ - \theta)),\tag{4.11c}$$

$$J\_o \dot{\theta} \le B.\tag{4.11d}$$

It is noteworthy that both (4.11a) and (4.11d) are nonlinear, so the QP problem cannot be solved directly by traditional methods. Exploiting its parallel computing and learning ability, a deep RNN is established below to solve it online.
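The combined velocity bound (4.11c), which intersects the escape-velocity joint-limit terms with the actuator speed limits, can be sketched element-wise as follows (the toy values are illustrative):

```python
import numpy as np

def velocity_bounds(theta, theta_min, theta_max, dtheta_min, dtheta_max, alpha=8.0):
    """Element-wise bounds of (4.11c): joint-limit avoidance via the
    escape-velocity terms alpha*(theta^- - theta) and alpha*(theta^+ - theta),
    intersected with the actuator speed limits dtheta^-, dtheta^+."""
    lo = np.maximum(alpha * (theta_min - theta), dtheta_min)
    hi = np.minimum(alpha * (theta_max - theta), dtheta_max)
    return lo, hi

theta = np.array([1.9, 0.0])                 # joint 1 is close to its upper limit
lo, hi = velocity_bounds(theta, np.full(2, -2.0), np.full(2, 2.0),
                         np.full(2, -2.0), np.full(2, 2.0))
print(hi)   # hi = [0.8, 2.0]: joint 1 may only creep toward its nearby limit
```

Far from the limits the speed bounds dominate; near a limit the escape-velocity term shrinks the admissible velocity toward zero, which is exactly how (4.11c) encodes (4.10c) at the velocity level.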

#### **4.3 Deep RNN Based Solver Design**

In this chapter, a deep RNN is proposed to solve the obstacle avoidance problem (4.11) online. To ensure the constraints (4.11b), (4.11c), and (4.11d), a group of state variables are introduced in the deep RNN. The stability is also proved by Lyapunov theory.

#### *4.3.1 Deep RNN Design*

Firstly, by defining a group of state variables <sup>λ</sup><sup>1</sup> <sup>∈</sup> <sup>R</sup>*<sup>m</sup>*, <sup>λ</sup><sup>2</sup> <sup>∈</sup> <sup>R</sup>*ab*, a Lagrange function is selected as

$$L = \dot{\theta}^{\mathrm{T}}\dot{\theta}/2 + \lambda\_1^{\mathrm{T}}(\dot{x}\_{\mathrm{d}} + k(x\_{\mathrm{d}} - x) - J(\theta)\dot{\theta}) + \lambda\_2^{\mathrm{T}}(J\_o\dot{\theta} - B),\tag{4.12}$$

where λ<sub>1</sub> and λ<sub>2</sub> are the dual variables corresponding to the constraints (4.11b) and (4.11d), respectively. According to the Karush-Kuhn-Tucker conditions, the optimal solution of the optimization problem (4.11) can be equivalently formulated as

$$
\dot{\theta} = P\_{\Omega} (\dot{\theta} - \frac{\partial L}{\partial \dot{\theta}}),
\tag{4.13a}
$$

$$J(\theta)\dot{\theta} = \dot{\mathbf{x}}\_{\mathsf{d}} + k(\mathbf{x}\_{\mathsf{d}} - \mathbf{x}),\tag{4.13b}$$

$$\begin{cases} \lambda\_2 > 0 & \text{if}\quad J\_o\dot{\theta} = B, \\ \lambda\_2 = 0 & \text{if}\quad J\_o\dot{\theta} < B, \end{cases}\tag{4.13c}$$

where *P*<sub>Ω</sub>(*x*) = argmin<sub>*y*∈Ω</sub> ||*y* − *x*|| is the projection operator onto the restricted interval Ω, defined as Ω = {θ˙ ∈ R<sup>*n*</sup> | max(α(θ<sup>−</sup> − θ), θ˙<sup>−</sup>) ≤ θ˙ ≤ min(θ˙<sup>+</sup>, α(θ<sup>+</sup> − θ))}. Note that Eq. (4.13c) can be equivalently described as

$$
\lambda\_2 = \left(\lambda\_2 + J\_o \dot{\theta} - B\right)^+,\tag{4.14}
$$

where (•)<sup>+</sup> is the projection operation onto the non-negative space, in the sense that *y*<sup>+</sup> = max(*y*, 0).
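For the box set Ω and the non-negative orthant, both projections reduce to element-wise clipping; a minimal sketch:

```python
import numpy as np

def proj_box(x, lo, hi):
    """P_Omega for a box set: the closest point of [lo, hi] to x, element-wise."""
    return np.minimum(np.maximum(x, lo), hi)

def proj_nonneg(y):
    """(.)^+ : projection onto the non-negative orthant, y^+ = max(y, 0)."""
    return np.maximum(y, 0.0)

print(proj_box(np.array([3.0, -5.0]), -2.0, 2.0))   # [ 2. -2.]
print(proj_nonneg(np.array([-1.0, 0.5])))           # [0.  0.5]
```

These two operators are all the nonlinearity the network below needs; everything else in (4.15) is linear in the states.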

Although the solution of (4.13) is exactly the optimal solution of the constrained optimization problem (4.11), it is still challenging to solve (4.13) online because of its inherent nonlinearity. In this chapter, in order to solve (4.13), a deep recurrent neural network is designed as

$$\varepsilon \ddot{\theta} = -\dot{\theta} + P\_{\Omega} (J^{\mathrm{T}} \lambda\_1 - J\_o^{\mathrm{T}} \lambda\_2),\tag{4.15a}$$

$$
\varepsilon \dot{\lambda}\_1 = \dot{\mathbf{x}}\_{\mathsf{d}} + k(\mathbf{x}\_{\mathsf{d}} - \mathbf{x}) - J(\theta)\dot{\theta}, \tag{4.15b}
$$

$$
\varepsilon \dot{\lambda}\_2 = -\lambda\_2 + \left(\lambda\_2 + J\_o \dot{\theta} - B\right)^+,\tag{4.15c}
$$

where ε is a positive constant scaling the convergence rate of (4.15).

**Remark 4.2** As to the established deep RNN (4.15), the first dynamic equation is also the output dynamics, which combines the dynamics of the state variables λ<sub>1</sub> and λ<sub>2</sub>, as well as the physical limitations on joint angles and velocities. The state variable λ<sub>1</sub> is used to ensure the equality constraint (4.11b): as shown in (4.15b), λ<sub>1</sub> is updated according to the difference between the reference speed *x*˙<sub>d</sub> + *k*(*x*<sub>d</sub> − *x*) and the actual speed *J*(θ)θ˙. Similarly, λ<sub>2</sub> is used to ensure the inequality constraint (4.11d), which will be further discussed later. It is remarkable that ε plays an important role in the convergence of the deep RNN: the smaller ε, the faster the deep RNN converges.

**Remark 4.3** By introducing model information such as *J*, *J<sub>o</sub>*, etc., the established deep RNN consists of three classes of nodes, namely θ˙, λ<sub>1</sub> and λ<sub>2</sub>, and the total number of nodes is *n* + *m* + *ab*. Compared to traditional neural networks such as [19], the complexity of the neural network is greatly reduced.
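As a sanity check on the dynamics (4.15), the sketch below Euler-integrates the deep RNN for a single frozen control instant of a hypothetical 3-joint, 1-D task (all numbers are illustrative, not from the book). With the obstacle constraint inactive, the network should settle at the minimum-norm solution θ˙ = *J*<sup>T</sup>(*JJ*<sup>T</sup>)<sup>−1</sup>(*x*˙<sub>d</sub> + *k*(*x*<sub>d</sub> − *x*)):

```python
import numpy as np

# Toy instance of the deep RNN (4.15), frozen at one control instant:
# n = 3 joints, m = 1 task dimension, one (inactive) obstacle constraint.
J   = np.array([[1.0, 1.0, 1.0]])     # task Jacobian (hypothetical numbers)
Jo  = np.array([[1.0, 0.0, 0.0]])     # obstacle row J_o
B   = np.array([10.0])                # loose bound -> constraint stays inactive
v   = np.array([1.0])                 # reference speed  x_d' + k(x_d - x)
lo, hi = -2.0, 2.0                    # velocity box Omega
eps, dt = 0.01, 5e-4                  # RNN time constant and Euler step

dtheta = np.zeros(3)                  # network states: theta', lambda_1, lambda_2
lam1, lam2 = np.zeros(1), np.zeros(1)
for _ in range(20000):
    u = np.clip(J.T @ lam1 - Jo.T @ lam2, lo, hi)                  # P_Omega(.)
    dtheta += dt / eps * (-dtheta + u)                             # (4.15a)
    lam1   += dt / eps * (v - J @ dtheta)                          # (4.15b)
    lam2   += dt / eps * (-lam2 + np.maximum(lam2 + Jo @ dtheta - B, 0.0))  # (4.15c)

print(np.round(dtheta, 3))   # [0.333 0.333 0.333] -- the minimum-norm solution
```

The state settles at θ˙ = [1/3, 1/3, 1/3], which satisfies *J*θ˙ = 1 with minimal norm, without any pseudo-inverse computation; shrinking ε (with a proportionally smaller dt) speeds up the settling, consistent with Remark 4.2.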

#### *4.3.2 Stability Analysis*

In this part, we offer a stability analysis of the deep RNN based obstacle avoidance method. First of all, some basic definitions and lemmas are given below.

**Definition 4.1** A continuously differentiable function *F*(•) is said to be monotone, if ∇*F* + ∇*F*<sup>T</sup> is positive semi-definite, where ∇*F* is the gradient of *F*(•).

**Lemma 4.1** *A dynamic neural network converges to its equilibrium point if F*(•) *is monotone and the network satisfies*

$$
\kappa \dot{\mathbf{x}} = -\mathbf{x} + P\_S(\mathbf{x} - \rho F(\mathbf{x})),
\tag{4.16}
$$

*where* κ > 0 *and* ρ > 0 *are constant parameters, and P<sub>S</sub>*(*x*) = *argmin<sub>y∈S</sub>*||*y* − *x*|| *is the projection operator onto the closed set S.*

**Lemma 4.2** *[37] Let V* : [0,∞) × *B<sub>d</sub>* → R *be a C*<sup>1</sup> *function, and* α<sub>1</sub>*,* α<sub>2</sub> *be class-K functions defined on* [0, *d*) *which satisfy* α<sub>1</sub>(||*x*||) ≤ *V*(*t*, *x*) ≤ α<sub>2</sub>(||*x*||)*,* ∀(*t*, *x*) ∈ [0, *d*) × *B<sub>d</sub>. Then x* = 0 *is a uniformly asymptotically stable equilibrium point of x*˙ = *f*(*t*, *x*) *if there exists some class-K function g defined on* [0, *d*) *such that the following inequality holds*

$$
\frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(t, x) \le -g(||x||), \quad \forall (t, x) \in [0, \infty) \times B\_d.\tag{4.17}
$$

About the stability of the closed-loop system, we offer the following theorem.

**Theorem 4.1** *Given the obstacle avoidance problem for a redundant manipulator in kinematic control tasks, the proposed deep recurrent neural network is stable and will globally converge to the optimal solution of (4.10).*

*Proof* The stability analysis consists of two parts. First, we show that the equilibrium of the deep RNN is exactly the optimal solution of the control objective described in (4.11), which proves that the output of the deep RNN realizes obstacle avoidance while tracking a given trajectory; then we prove that the deep recurrent neural network is stable in the sense of Lyapunov.

*Part I.* As to the deep recurrent neural network (4.15), let (θ̇<sup>∗</sup>, λ<sub>1</sub><sup>∗</sup>, λ<sub>2</sub><sup>∗</sup>) be the equilibrium of the deep RNN; then (θ̇<sup>∗</sup>, λ<sub>1</sub><sup>∗</sup>, λ<sub>2</sub><sup>∗</sup>) satisfies

$$-\dot{\theta}^\* + P\_\Omega (J^\mathrm{T} \lambda\_1^\* - J\_o^\mathrm{T} \lambda\_2^\*) = 0,\tag{4.18a}$$

$$
\dot{x}\_{\mathsf{d}} + k(x\_{\mathsf{d}} - x) - J(\theta)\dot{\theta}^\* = 0,\tag{4.18b}
$$

$$-\lambda\_2^\* + (\lambda\_2^\* + J\_o \dot{\theta}^\* - B)^+ = 0,\tag{4.18c}$$

with θ̇<sup>∗</sup> being the output of the deep RNN. By comparing Eqs. (4.18) and (4.13), we can readily see that they are identical, i.e., the equilibrium point satisfies the KKT conditions of (4.10).

Then we will show that the equilibrium point (the output of the proposed deep RNN) is capable of dealing with the kinematic tracking as well as the obstacle avoidance problem. Define a Lyapunov function *V* about the tracking error *e* = *x*<sub>d</sub> − *x* as *V* = *e*<sup>T</sup>*e*/2. By differentiating *V* with respect to time and combining (4.11b), we have

$$\begin{split} \dot{V} &= e^{\mathrm{T}} \dot{e} = e^{\mathrm{T}} (\dot{\mathrm{x}}\_{\mathrm{d}} - J(\theta) \dot{\theta}^{\*}) \\ &= -ke^{\mathrm{T}} e \leq 0, \end{split} \tag{4.19}$$

in which the dynamic equation (4.18b) is also used. It can be readily obtained that the tracking error eventually converges to zero.
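The convergence implied by (4.19) is in fact exponential: since *V* = *e*<sup>T</sup>*e*/2, Eq. (4.19) is equivalent to a scalar linear differential equation whose solution can be written down directly,

$$
\dot{V} = -2kV \;\Rightarrow\; V(t) = V(0)e^{-2kt} \;\Rightarrow\; ||e(t)|| = ||e(0)||e^{-kt},
$$

so the task-space tracking error decays exponentially with rate *k*.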

It is notable that the dynamic equation (4.18c) satisfies

$$
\lambda\_2^\* + J\_o \dot{\theta}^\* - B - (\lambda\_2^\* + J\_o \dot{\theta}^\* - B)^+ = J\_o \dot{\theta}^\* - B. \tag{4.20}
$$

According to the property of the projection operator (•)<sup>+</sup>, *y* − (*y*)<sup>+</sup> ≤ 0 holds for any *y*; we therefore have *J*<sub>o</sub>θ̇<sup>∗</sup> − *B* ≤ 0, i.e., the inequality constraint (4.5) is satisfied. Note that (4.5) can be rewritten as

$$
\dot{D} \ge -\text{sgn}(D)g(|D|). \tag{4.21}
$$

As to (4.21), we first consider the situation when equality holds. Since *g*(|*D*|) is a function belonging to class K, it can be easily obtained that *D* = 0 is the only equilibrium of *Ḋ* = −sgn(*D*)*g*(|*D*|). Define a Lyapunov function as *V*<sub>2</sub>(*t*, *D*) = *D*<sup>2</sup>/2, and select the two functions α<sub>1</sub>(|*D*|) = α<sub>2</sub>(|*D*|) = *D*<sup>2</sup>/2. Obviously α<sub>1</sub> and α<sub>2</sub> belong to class K, and the following inequality always holds

$$
\alpha\_1(|D|) \le V\_2(t, D) \le \alpha\_2(|D|). \tag{4.22}
$$

Taking the time derivative of *V*2(*t*, *D*), we have

$$\frac{\partial V\_2}{\partial t} + \frac{\partial V\_2}{\partial D}\dot{D} = -|D|g(|D|) \le 0. \tag{4.23}$$

According to Lemma 4.2, the equilibrium *D* = 0 is uniformly asymptotically stable. We thus arrive at the conclusion that if the equality d(|*O*<sub>j</sub>*A*<sub>i</sub>|)/d*t* = −sgn(*D*)*g*(|*D*|) holds, |*D*| = 0 is guaranteed, i.e., |*O*<sub>j</sub>*A*<sub>i</sub>| → *d* for all *i* = 1, ··· , *a*, *j* = 1, ··· , *b*. Based on the comparison principle, we can readily obtain that |*O*<sub>j</sub>*A*<sub>i</sub>| ≥ *d* when d(|*O*<sub>j</sub>*A*<sub>i</sub>|)/d*t* ≥ −sgn(*D*)*g*(|*D*|).

*Part II.* We now show the stability of the deep RNN (4.15). Let ξ = [θ̇<sup>T</sup>, λ<sub>1</sub><sup>T</sup>, λ<sub>2</sub><sup>T</sup>]<sup>T</sup> be the concatenated vector of state variables of the proposed deep RNN; then (4.15) can be rewritten as

$$
\varepsilon \dot{\xi} = -\xi + P\_S[\xi - F(\xi)], \tag{4.24}
$$

where *P*<sub>S</sub>(•) is a projection operator onto a set *S*, and *F*(ξ) = [*F*<sub>1</sub>(ξ), *F*<sub>2</sub>(ξ), *F*<sub>3</sub>(ξ)]<sup>T</sup> ∈ R<sup>*n*+*m*+*ab*</sup>, in which

$$
\begin{bmatrix} F\_1 \\ F\_2 \\ F\_3 \end{bmatrix} = \begin{bmatrix} \dot{\theta} - J^\mathrm{T} \lambda\_1 + J\_o^\mathrm{T} \lambda\_2 \\ J\dot{\theta} - \dot{x}\_{\mathrm{d}} - k(x\_{\mathrm{d}} - x) \\ B - J\_o \dot{\theta} \end{bmatrix}.
$$

Let ∇*F* = ∂*F*/∂ξ , we have

$$
\nabla F(\xi) = \begin{bmatrix} I & -J^{\mathrm{T}} & J\_o^{\mathrm{T}} \\ J & 0 & 0 \\ -J\_o & 0 & 0 \end{bmatrix}. \tag{4.25}
$$

According to the definition of a monotone function, we can readily obtain that *F*(ξ) is monotone. From the description of (4.24), the projection operator *P*<sub>S</sub> can be formulated as *P*<sub>S</sub> = [*P*<sub>Ω</sub>; *P*<sub>R</sub>; (•)<sup>+</sup>], in which *P*<sub>Ω</sub> is defined in (4.13), *P*<sub>R</sub> can be regarded as a projection operator of λ<sub>1</sub> onto R<sup>*m*</sup>, with the upper and lower bounds being ±∞, and (•)<sup>+</sup> is a special projection operator onto the closed set R<sup>*ab*</sup><sub>+</sub>. Therefore, *P*<sub>S</sub> is a projection operator onto the closed set [Ω; R<sup>*m*</sup>; R<sup>*ab*</sup><sub>+</sub>]. Based on Lemma 4.1, the proposed neural network (4.15) is stable and will globally converge to the optimal solution of (4.10). The proof is completed.
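The blockwise projection *P*<sub>S</sub> = [*P*<sub>Ω</sub>; *P*<sub>R</sub>; (•)<sup>+</sup>] used in the proof can be sketched directly; the dimensions and numbers below are illustrative assumptions.

```python
import numpy as np

def project_state(xi, n, m, theta_dot_min, theta_dot_max):
    """Blockwise projection P_S = [P_Omega; P_R; (.)^+] from the proof.

    xi stacks [theta_dot (n entries); lambda_1 (m entries); lambda_2 (rest)].
    """
    theta_dot = np.clip(xi[:n], theta_dot_min, theta_dot_max)  # P_Omega: box limits
    lam1 = xi[n:n + m]                                         # P_R: identity (unbounded)
    lam2 = np.maximum(xi[n + m:], 0.0)                         # (.)^+: onto R^{ab}_+
    return np.concatenate([theta_dot, lam1, lam2])

xi = np.array([2.0, -2.0, 0.7, 5.0, -5.0, 0.3, -0.3])
print(project_state(xi, n=3, m=2, theta_dot_min=-1.0, theta_dot_max=1.0))
# first block clipped into the box, middle block untouched, last block nonnegative
```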

#### **4.4 Numerical Results**

In this chapter, the proposed deep RNN based controller is applied to a planar 4-DOF robot. Firstly, a basic case where the obstacle is described as a single point is discussed, and then the controller is extended to multiple obstacles and dynamic ones. Comparisons with existing methods are also listed to indicate the superiority of the deep RNN based scheme.

#### *4.4.1 Simulation Setup*

The physical structure of the 4-link planar robot to be simulated is shown in Fig. 4.2, in which the critical points of the robot are also marked. As shown in Fig. 4.2, critical points *A*<sub>2</sub>, *A*<sub>4</sub>, *A*<sub>6</sub> are selected at the joint centers, and *A*<sub>1</sub>, *A*<sub>3</sub>, *A*<sub>5</sub>, *A*<sub>7</sub> are selected at the centers of the robot links. It is notable that *A*<sub>i</sub> and the Jacobian matrix *J*<sub>oi</sub> are essential in the proposed control scheme. Based on the above description of *A*<sub>i</sub>, the D-H parameters of *A*<sub>1</sub> are *a*<sub>1</sub> = 0.15, *a*<sub>2</sub> = *a*<sub>3</sub> = 0, α<sub>1</sub> = α<sub>2</sub> = α<sub>3</sub> = 0, *d*<sub>1</sub> = *d*<sub>2</sub> = *d*<sub>3</sub> = 0; then both the position and the Jacobian matrix *J*<sub>a1</sub> of *A*<sub>1</sub> can be calculated readily. Based on the definition in Eq. (4.8), *J*<sub>o1</sub> can be obtained. *A*<sub>i</sub> and *J*<sub>oi</sub> can


**Fig. 4.2** The planar robot to be simulated in this chapter

be calculated similarly. The control parameters are set as ε = 0.001, α = 8, *k* = 8. As to the physical constraints, the limits of joint angles and velocities are selected as θ<sub>i</sub><sup>−</sup> = −3 rad, θ<sub>i</sub><sup>+</sup> = 3 rad, θ̇<sub>i</sub><sup>−</sup> = −1 rad/s, θ̇<sub>i</sub><sup>+</sup> = 1 rad/s for *i* = 1, ..., 4. The safety distance *d* is set to 0.1 m.
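For reference, the forward kinematics and Jacobian of such a planar arm can be sketched as follows; the link lengths are assumed values for illustration only (the chapter specifies the geometry through Fig. 4.2 and the D-H parameters).

```python
import numpy as np

LINKS = np.array([0.3, 0.3, 0.3, 0.3])  # assumed link lengths (m), for illustration

def fk_planar(theta, lengths=LINKS):
    """End-effector position x = f(theta) of a planar serial arm."""
    angles = np.cumsum(theta)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def jacobian_planar(theta, lengths=LINKS):
    """Analytic 2 x n Jacobian J = dx/dtheta; joint i moves links i..n-1."""
    angles = np.cumsum(theta)
    n = len(theta)
    J = np.zeros((2, n))
    for i in range(n):
        J[0, i] = -np.sum(lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(angles[i:]))
    return J

theta0 = np.array([np.pi / 2, -np.pi / 3, -np.pi / 4, 0.0])
print(fk_planar(theta0), jacobian_planar(theta0).shape)
```

The Jacobians *J*<sub>ai</sub> of the critical points follow the same pattern, truncated to the links preceding each point.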

#### *4.4.2 Single Obstacle Avoidance*

In this simulation, the obstacle is assumed to be centered at [−0.1, 0.2]<sup>T</sup> m, the desired path is set as *x*<sub>d</sub> = [0.4 + 0.1cos(0.5*t*), 0.4 + 0.1sin(0.5*t*)]<sup>T</sup> m, and the initial joint angles are set to θ<sub>0</sub> = [π/2, −π/3, −π/4, 0]<sup>T</sup> rad. The class-K function is selected as *G*(|*D*|) = *K*<sub>1</sub>|*D*| with *K*<sub>1</sub> = 200. In order to show the effectiveness of the proposed obstacle avoidance method, contrast simulations with and without the inequality constraint (4.10e) are conducted. Simulation results are shown in Fig. 4.3. When the obstacle is ignored, the end-effector trajectories and the corresponding incremental configurations are shown in Fig. 4.3a: although the robot achieves task-space tracking of *x*<sub>d</sub>, the first link of the robot would obviously collide with the obstacle. After introducing the obstacle avoidance scheme, the robot moves its other joints rather than the first joint and thus avoids the obstacle effectively (Fig. 4.3b). Simultaneously, the tracking errors when tracking the given circle are shown in Fig. 4.3c. From the initial state, the end-effector moves towards the circle quickly and smoothly; after that, the steady-state tracking error remains less than 1 × 10<sup>−4</sup> m, showing that the robot can achieve kinematic control as well as obstacle avoidance. To show more details of the proposed deep RNN based method, some important process data is given. As the obstacle is close to the first joint, critical points *A*<sub>1</sub> and *A*<sub>2</sub> are the most likely to collide with it; therefore, the distances between the obstacle *O*<sub>1</sub> and *A*<sub>1</sub>, *A*<sub>2</sub> are shown in Fig. 4.3d. From *t* = 2 s to *t* = 6.5 s, ||*A*<sub>1</sub>*O*<sub>1</sub>|| remains at the minimum value *d* = 0.1, that is to say, using the proposed obstacle

**Fig. 4.3** Numerical results of single obstacle avoidance. **a** is the motion trajectories when ignoring obstacle avoidance scheme, **b** is the motion trajectories when considering obstacle avoidance scheme, **c** is the profile of tracking errors when considering obstacle avoidance scheme, **d** is the profile of distances between critical points and obstacle, **e** is the profile of joint velocities, **f** is the profile of joint angles

avoidance method, the robot maintains the minimum distance from the obstacle. The profiles of the joint velocities are shown in Fig. 4.3e: at the beginning of the simulation, the robot moves at maximum speed, which leads to the fast convergence of the tracking errors. The curves of the joint angles over time are shown in Fig. 4.3f.

#### *4.4.3 Discussion on Class-K Functions*

In this part, we will discuss the influence of different class-K functions in the avoidance scheme (4.5). Four functions are selected as *G*<sub>1</sub>(|*D*|) = *K*|*D*|<sup>2</sup>, *G*<sub>2</sub>(|*D*|) = *K*|*D*|, *G*<sub>3</sub>(|*D*|) = *K*tanh(5|*D*|), *G*<sub>4</sub>(|*D*|) = *K*tanh(10|*D*|). Fig. 4.4a shows the comparative curves of these functions.

**Fig. 4.4** Discussions on different obstacle avoidance functions. **a** is the comparative curves of different obstacle avoidance functions. **b** is the profile of minimum distance of the robot and obstacle using different obstacle avoidance functions

Other simulation settings are the same as in the previous case. Simulation results are shown in Fig. 4.4b. When selecting the same positive gain *K*, the minimum distance is about 0.08 m, which shows the robot can avoid colliding with the obstacle using the avoidance scheme (4.5). The close-up graph of the tracking error is also shown; it is remarkable that the minimum distance decreases as the gradient of the class-K function near zero increases. Therefore, one conclusion can be drawn: the more similar the function is to the sign function, the better the obstacle avoidance that can be achieved.
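The role of the near-origin gradient discussed above can be checked numerically; the sketch below compares the slopes of the four candidate functions at the origin, with the gain *K* = 200 as in the text.

```python
import numpy as np

K = 200.0
G = {
    "G1": lambda D: K * D**2,            # K*|D|^2
    "G2": lambda D: K * D,               # K*|D|
    "G3": lambda D: K * np.tanh(5 * D),  # K*tanh(5|D|)
    "G4": lambda D: K * np.tanh(10 * D), # K*tanh(10|D|)
}

# The slope at the origin governs how strongly the avoidance term reacts
# to a small distance violation (the sign function is the steepest limit).
eps = 1e-4
slopes = {name: g(eps) / eps for name, g in G.items()}
for name, s in slopes.items():
    print(name, s)
```

Here *G*<sub>1</sub> has a vanishing slope at the origin while *G*<sub>4</sub> is the steepest, i.e., closest to the sign-function limit.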

#### *4.4.4 Multiple Obstacles Avoidance*

In this part, we consider the case where there are two obstacles in the workspace. The obstacles are set at [0.1, 0.25]<sup>T</sup> m and [0, 0.4]<sup>T</sup> m, respectively. Simulation results are shown in Fig. 4.5. The desired path is defined as *x*<sub>d</sub> = [0.45 + 0.1cos(0.5*t*), 0.4 + 0.1sin(0.5*t*)]<sup>T</sup>. The initial joint angles of the robot are selected as θ<sub>0</sub> = [1.5, −1, −1, 0]<sup>T</sup>. To further show the effectiveness of the proposed obstacle avoidance strategy (4.5), *g*(|*D*|) is selected as *g*(|*D*|) = *K*<sub>1</sub>/(1 + *e*<sup>−|*D*|</sup>) − *K*<sub>1</sub>/2 with *K*<sub>1</sub> = 200. When λ<sub>2</sub> is set to 0, as shown in Fig. 4.5a, the inequality constraint (4.11d) does not take effect; in other words, only the kinematic tracking problem is considered rather than obstacle avoidance, and in this case the robot would collide with the obstacles. After introducing online training of λ<sub>2</sub>, the simulation results are given in Fig. 4.5b–h. The tracking errors are shown in Fig. 4.5c, with the transient time being about 4 s and the steady-state error less than 1 × 10<sup>−3</sup> m. Correspondingly, the robot moves fast in the transient stage, ensuring the quick convergence of the tracking errors. It is remarkable that the distances between the critical points and obstacle points are kept larger than 0.1 m at all times, showing the effectiveness of the proposed method. At *t* = 14 s,

**Fig. 4.5** Numerical results of multiple obstacle avoidance. **a** is the motion trajectories when ignoring obstacle avoidance scheme. **b** is the motion trajectories when considering obstacle avoidance scheme. **c** is the profile of tracking errors when considering obstacle avoidance scheme. **d** is the profile of distances between critical points and obstacles. **e** is the profile of joint velocities. **f** is the profile of λ2. **g** is the profile of joint angles. **h** is the profile of λ<sup>1</sup>

from Fig. 4.5d and g, when the distance between *A*<sub>3</sub> and *O*<sub>1</sub> is close to 0.1 m, the corresponding dual variable λ<sub>2</sub> becomes positive, making the inequality constraint (4.11d) hold, and the boundary between the robot and the obstacle is thus guaranteed. After *t* = 18 s, ||*A*<sub>3</sub>*O*<sub>1</sub>|| becomes greater, and λ<sub>2</sub> converges to zero. Note that although λ<sub>1</sub> and λ<sub>2</sub> do not converge to fixed values, the dynamic change of λ<sub>1</sub> and λ<sub>2</sub> ensures the regulation of the proposed deep RNN.

#### *4.4.5 Enveloping Shape Obstacles*

In this part, we consider obstacles of more general shape. Suppose that there is a rectangular obstacle in the workspace, with vertices [0, 0.5]<sup>T</sup>, [0.4, 0.5]<sup>T</sup>, [0.4, 0.6]<sup>T</sup> and [0, 0.6]<sup>T</sup>, respectively. The safety distance is selected as *d* = 0.1 m, and the obstacle points as *O*<sub>1</sub> = [0.05, 0.55]<sup>T</sup>, *O*<sub>2</sub> = [0.15, 0.55]<sup>T</sup>, *O*<sub>3</sub> = [0.25, 0.55]<sup>T</sup> and *O*<sub>4</sub> = [0.35, 0.55]<sup>T</sup>. It can be readily verified that the rectangular obstacle lies entirely within the envelope defined by *O*<sub>i</sub> and *d*. The incremental configurations when tracking the path while avoiding the obstacle are shown in Fig. 4.6b, in which a local amplification diagram is also given, showing that the critical point *A*<sub>3</sub> is capable of avoiding *O*<sub>2</sub> and *O*<sub>3</sub>. It is worth noting that by selecting a proper point group and safety distance, the obstacle can be described by the envelope shape effectively. Figure 4.6c–h also gives important process data of the system under the proposed controller, including tracking errors, joint angles, angular velocities, and state variables of the deep RNN. We can observe that the physical constraints as well as the kinematic control task are satisfied using the controller.

#### *4.4.6 Comparisons*

To illustrate the superiority of the proposed scheme, a group of comparisons is carried out. As shown in Table 4.1, all the controllers in [12, 16, 34, 35] achieve the avoidance of obstacles. Compared to the APF methods and the JP based methods in [12, 16], the proposed controller can realize a secondary task; at the same time, we present a more general formulation of the obstacle avoidance strategy, which is helpful to gain a deeper understanding of the mechanism of obstacle avoidance. Moreover, in this chapter, both dynamic trajectories and dynamic obstacles are considered. The comparisons above also highlight the main contributions of this chapter.

**Fig. 4.6** Numerical results of enveloping shape obstacles. **a** is the motion trajectories when ignoring obstacle avoidance scheme. **b** is the motion trajectories when considering obstacle avoidance scheme. **c** is the profile of tracking errors when considering obstacle avoidance scheme. **d** is the profile of distances between critical points and obstacles. **e** is the profile of joint velocities. **f** is the profile of joint angles. **g** is the profile of λ2. **h** is the profile of λ<sup>1</sup>


**Table 4.1** Comparisons among different obstacle avoidance controllers on manipulators

\* In [34, 35] and [16], dynamic obstacles are not considered

\*\* The regular escape velocity method is used, which is only a special case of (4.5)

#### **4.5 Summary**

In this chapter, a novel obstacle avoidance strategy is proposed based on a deep recurrent neural network. The robot and obstacles are represented by sets of critical points, so the distance between the robot and an obstacle can be approximately described by point-to-point distances. By analyzing the nature of escape velocity methods, a more general description of the obstacle avoidance strategy is proposed. Using the minimum-velocity-norm (MVN) scheme, the obstacle avoidance together with the path tracking problem is formulated as a QP problem, in which physical limits are also considered. By introducing model information, a deep RNN with a simple structure is established to solve the QP problem online. Simulation results show that the proposed method can realize the avoidance of static and dynamic obstacles.

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 5 Optimization-Based Compliant Control for Manipulators Under Dynamic Obstacle Constraints**

**Abstract** The research on force control of manipulators has attracted more and more attention from scholars and researchers. In this chapter, from the perspective of optimization, we investigate the collision-free compliance control of redundant robot manipulators using a recurrent neural network. The position-force control is constructed as an equality constraint at the velocity level together with the kinematic property of robots. The joint angle and joint speed limitations imposed by the robots' physical structure are also considered and are described by a group of inequality constraints. To avoid collision between robots and obstacles, both are described as sets of points, and the requirement that the Euclidean distance between robots and obstacles remain positive is established as the collision-free condition. With minimization of joint velocities as the secondary task, a time-varying QP-type problem description is given with equality and inequality constraints, and then an RNN-based controller is designed to solve it. Based on theoretical analysis and simulation experiments, the effectiveness of the designed controller is validated.

#### **5.1 Introduction**

With the development of industrial society, robot manipulators are required to be more flexible and intelligent, to satisfy increasingly personalized and customized production requirements [1]. Compared to non-redundant manipulators, redundant ones show more flexibility due to their extra DOFs, which exceed the number required to accomplish a given task [2]. On the other hand, position control schemes show lower performance for some complicated tasks [3]. For example, control methods that consider only position usually ignore the contact force between the robot and workpieces, posing a high safety challenge, since the excessive system stiffness can bring unpredictable responses [4]. Therefore, control of the contact force between redundant robots and workpieces should be considered.

In light of different robot structures and control signals, a number of methods have been proposed to date. Imitating the muscle-tendon tissue of animal joints, compliance units such as series elastic actuators (SEA) and variable stiffness actuators are introduced into robots. In [5], Pan et al. proposed a compliance controller for SEA-based systems, achieving torque output control. As to the interaction between the robot and workpieces, Hogan proposed the basic idea of impedance control, in which the robot and environment behave as an impedance and an admittance, respectively [6]. Generally speaking, the contact force and relative movement of the robot and workpieces can be described as a combination of mass-spring-damper systems. Therefore, the contact force can be controlled indirectly by designing motion commands. Another representative approach is hybrid position-force control, where the controller is usually designed in the torque loop of the joint space, and both contact forces and movement of the robot are modelled based on dynamic analysis. The controller can then be described as a combination of control efforts which achieve position and force control, respectively [7]. Similar research can be found in literature such as [8–13].

During operation, the robot may collide with the environment because the manipulator usually needs to keep in touch with the workpiece. In addition, the working space of the robot is limited [14]. For example, in a production line with multiple manipulators, each robot is in a fixed position; in order to avoid interference, the working space of the robot is limited by hardware (fences, obstacles, etc.) or software constraints (pre-planned space). In the case of human-robot cooperation, the robot must not collide with people. Therefore, it is very important to avoid obstacles during operation. In current reports, the desired trajectory is usually obtained by offline programming, which is limited by the efficiency of programming. In order to achieve real-time obstacle avoidance control, the artificial potential field method has been widely used. The basic idea is that the obstacle repels the robot while the target acts as an attractive pole, so that the robot is controlled to converge to the target without colliding with the obstacle [15]. In [16], a modified method is proposed, which describes the obstacles by different geometrical forms; both theoretical derivation and experimental tests validate the proposed method. Considering the local minimum problem that may be caused by multi-link structures, in [17] two minima are introduced to construct the potential field, such that a dual attraction between links enables faster maneuvers compared with traditional methods. Other improvements to the artificial potential field method can be found in [18, 19]. A series of pseudo-inverse methods are constructed for redundant manipulators in [20], in which the control efforts consist of a minimum-norm particular solution and homogeneous solutions, and collision can be avoided by calculating an escape velocity as the homogeneous solution.
By exploiting the limited workspace, obstacle avoidance can be described in the form of inequalities, which opens a new way to real-time collision avoidance. In [21], the robot is regarded as the sum of several links, and the distances between the robot and obstacle are obtained by calculating distances between points and links. Guo [22] then improves the method by modifying the obstacle avoidance MVN scheme, and simulation results show that the modified control strategy can suppress the discontinuity of angular velocities effectively.

To solve the problem of robot compliance control, the controller should be designed according to the required command and system characteristics. That is to say, robots must follow equality constraints to achieve compliance control, while ensuring inequality constraints to avoid obstacles. Obviously, the control problem involves several constraints, including equality constraints and inequality constraints. By using the idea of constrained optimization, the control problem with multiple constraints can be handled well. In recent years, the application of recurrent neural networks in robot control has been studied extensively and has shown great efficiency in real-time processing [23–27]. In those works, analysis in the dual space and a convex projection are introduced to handle inequality constraints.

Recently, taking advantage of parallel computing, neural networks have been used to solve constrained optimization problems and have shown great efficiency in real-time processing. In [28, 29], controllers are established at the joint velocity/acceleration level to fulfill kinematic tracking for robot manipulators. In [30], the tracking problem with model uncertainties is considered, and an adaptive RNN based controller is proposed for the 6-DOF robot Jaco2. Discussions on multiple robot systems, parallel manipulators, and time-delay systems using RNNs can be found in [30–33].

Based on the above observations, an RNN-based collision-free compliance control strategy is proposed for redundant manipulators. The remainder of this chapter is organized as follows. In Sect. 5.2, the control objective including position-force control as well as collision avoidance is pointed out and then rewritten as a QP problem. In Sect. 5.3, the RNN based controller is proposed, and the stability of the system is analyzed. A number of numerical experiments on a 4-DOF redundant manipulator, including model uncertainties and a narrow workspace, are carried out in Sect. 5.4 to further verify the effectiveness of the proposed control strategy. Sect. 5.5 concludes the chapter. The contributions of this chapter are summarized as below


#### **5.2 Problem Formulation**

#### *5.2.1 Robot Kinematics and Impedance Control*

Without loss of generality, we consider serial robot manipulators with redundant DOFs, and the joints are assumed to be rotational. Let θ ∈ R<sup>*n*</sup> be the vector of joint angles; the description of the end-effector in Cartesian space is

$$\mathbf{x} = f(\theta), \tag{5.1}$$

where *<sup>x</sup>* <sup>∈</sup> <sup>R</sup>*<sup>m</sup>* is the coordination of the end-effector. In the velocity level, the forward kinematic model can be formulated as

$$
\dot{\chi} = J(\theta)\dot{\theta},\tag{5.2}
$$

in which *J*(θ) = ∂*x*/∂θ is the Jacobian matrix. As to redundant manipulators, *J* ∈ R<sup>*m*×*n*</sup>, *rank*(*J*) < *n*.

In industrial applications, position control based operation mode has many limitations: due to the lack of compliance, pure kinematic control methods may cause unexpected consequences, especially when the robot is in contact with the environment. To enhance the compliance and achieve precise control of contact force, according to impedance control technology, the interaction between robot and environment can be described as a damper-spring system [34].

$$F = K\_p \Delta x + K\_d \mathbf{d}(\Delta x)/\mathbf{d}t,\tag{5.3}$$

where *Kp* and *Kd* are interaction coefficients, and Δ*x* = *x* − *x*<sup>d</sup> is the difference between the actual response *x* and desired trajectory *x*d. By referring to Eqs. (5.2) and (5.3), we have

$$
\dot{\mathbf{x}} = K\_d^{-1} F - K\_p K\_d^{-1} \Delta \mathbf{x} + \dot{\mathbf{x}}\_{\mathbf{d}}.\tag{5.4}
$$

When the real values of *Kp* and *Kd* are known, *F* can be obtained by adjusting the velocity *x*˙ of the end-effector according to Eq. (5.4).
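A minimal sketch of the velocity command (5.4) with illustrative (assumed) diagonal gains; substituting the result back into the impedance relation (5.3) recovers the measured force.

```python
import numpy as np

def impedance_velocity(F, x, x_d, x_d_dot, Kp, Kd):
    """Task-space velocity command from (5.4):
    x_dot = Kd^{-1} F - Kp Kd^{-1} (x - x_d) + x_d_dot."""
    Kd_inv = np.linalg.inv(Kd)
    return Kd_inv @ F - Kp @ Kd_inv @ (x - x_d) + x_d_dot

# Illustrative (assumed) gains and states
Kp, Kd = np.diag([50.0, 50.0]), np.diag([10.0, 10.0])
F = np.array([2.0, 0.0])                          # measured contact force
x, x_d = np.array([0.31, 0.40]), np.array([0.30, 0.40])
x_d_dot = np.array([0.05, 0.0])
x_dot = impedance_velocity(F, x, x_d, x_d_dot, Kp, Kd)
# Check: Kp*(x - x_d) + Kd*(x_dot - x_d_dot) equals F, as in (5.3)
```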

#### *5.2.2 Obstacle Avoidance Scheme*

In the process of robot force control, there is a risk of collision as the robot may contact the workpieces. Besides, robot manipulators usually work in a limited workspace restricted by fences, which are used to isolate robots from humans or other robots. This problem can be even more acute in tasks which require the collaboration of multiple robots. Therefore, the obstacle avoidance problem must be taken into consideration. When collision does not happen, the distance between robot and obstacles stays positive. We describe the robot and obstacles as two separate sets, namely *S*<sub>A</sub> = {*A*<sub>1</sub>, ..., *A*<sub>a</sub>}, *S*<sub>B</sub> = {*B*<sub>1</sub>, ..., *B*<sub>b</sub>}, where *A*<sub>i</sub>, *i* = 1, ··· , *a* and *B*<sub>j</sub>, *j* = 1, ··· , *b* are points on the robot and obstacles, respectively. Then the sufficient and necessary condition for obstacle avoidance is that the intersection of *S*<sub>A</sub> and *S*<sub>B</sub> is an empty set. That is to say, for any point pair *A*<sub>i</sub> on the robot and *B*<sub>j</sub> on the obstacle, the distance between *A*<sub>i</sub> and *B*<sub>j</sub> is always positive, i.e., ||*A*<sub>i</sub>*B*<sub>j</sub>||<sub>2</sub><sup>2</sup> > 0 for all *i* = 1, ..., *a*, *j* = 1, ··· , *b*, where || • ||<sub>2</sub><sup>2</sup> is the Euclidean norm of the vector *A*<sub>i</sub>*B*<sub>j</sub>. For the sake of safety, let *d* > 0 be a proper value describing the minimum distance between robot and obstacles; the collision can then be avoided by ensuring ||*A*<sub>i</sub>*B*<sub>j</sub>||<sub>2</sub><sup>2</sup> ≥ *d*.

**Remark 5.1** In fact, both *S*<sub>A</sub> and *S*<sub>B</sub> consist of infinitely many points. However, by evenly selecting representative points from the robot links and obstacles, *S*<sub>A</sub> and *S*<sub>B</sub> can be simplified significantly. Besides, the safety distance *d* can be appropriately increased.
Despite the fact that this treatment sacrifices some workspace of the robot (the inequality ||*A*<sub>i</sub>*B*<sub>j</sub>||<sub>2</sub><sup>2</sup> ≥ *d* rules out some areas where collisions would not actually happen, because a larger *d* is considered), the sacrifice is meaningful: the number of inequality constraints can be reduced greatly, which is helpful for constraint description and solution.

In real applications, the key points of the robot manipulator are easy to select. Cylindrical envelopes are usually used to describe the robot links; the key points can then be selected uniformly on the axes of the cylinders, and the safety margin around those points can be set equal to the radius of the cylinder. As to obstacles with irregular shapes, the key points can be selected based on image processing techniques, such as edge detection, erosion, etc.
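The point-set collision test described above reduces to a pairwise distance minimum; a minimal sketch with hypothetical sampled key points:

```python
import numpy as np

def min_pairwise_distance(SA, SB):
    """Minimum Euclidean distance between robot key points SA (a x 2)
    and obstacle key points SB (b x 2)."""
    diff = SA[:, None, :] - SB[None, :, :]   # shape (a, b, 2)
    return np.linalg.norm(diff, axis=-1).min()

def collision_free(SA, SB, d=0.1):
    """Avoidance condition: every pair at least the safety distance d apart."""
    return min_pairwise_distance(SA, SB) >= d

SA = np.array([[0.0, 0.0], [0.3, 0.0], [0.6, 0.0]])   # sampled robot points
SB = np.array([[0.3, 0.25], [0.5, 0.4]])              # sampled obstacle points
print(min_pairwise_distance(SA, SB), collision_free(SA, SB, d=0.1))
```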

#### *5.2.3 Problem Reformulation in QP Type*

From the above description, the purpose of this chapter is to build a collision-free force controller for redundant manipulators, to achieve precise force control along a predefined trajectory, in the sense that *F* → *F*<sub>d</sub>, *x* → *x*<sub>d</sub>, and ||*A*<sub>i</sub>*B*<sub>j</sub>||<sub>2</sub><sup>2</sup> ≥ *d* for all *i* = 1, ··· , *a*, *j* = 1, ..., *b*.

As to a redundant manipulator, there exist redundant DOFs, which can be used to enhance the flexibility of the robot. When the robot gets close to the obstacles, it must avoid them without affecting the contact force or the tracking errors. In addition, when there is no risk of collision, the robot may work in an economical way: by minimizing the joint velocities, energy consumption can be reduced effectively. Therefore, by defining the objective function as ||θ̇||<sub>2</sub><sup>2</sup>, the control objective can be summarized as

$$\min \quad \|\dot{\theta}\|\_{2}^{2},\tag{5.5a}$$

$$\text{s.t.}\quad x = x\_{\mathrm{d}},\tag{5.5b}$$

$$F = F\_{\mathsf{d}},\tag{5.5c}$$

$$||A\_i B\_j||\_2^2 \ge d,\tag{5.5d}$$

where ||θ̇||<sub>2</sub><sup>2</sup> is the squared Euclidean norm of θ̇. It is noteworthy that in actual industrial applications, the robot is also limited by its own physical structure. For instance, the joint angles are limited to a fixed range, and the upper/lower bounds of joint velocities are also constrained due to actuator saturation. By combining Eq. (5.4), the control objective is rewritten as

$$\min \quad \|\dot{\theta}\|\_{2}^{2},\tag{5.6a}$$

$$\text{s.t.}\quad J\dot{\theta} = K\_d^{-1}F - K\_p K\_d^{-1} \Delta \text{x} + \dot{\text{x}}\_{\text{d}},\tag{5.6b}$$

$$||A\_i B\_j||\_2^2 \ge d,\tag{5.6c}$$

$$
\theta^- \le \theta \le \theta^+,
\tag{5.6d}
$$

$$
\dot{\theta}^- \le \dot{\theta} \le \dot{\theta}^+,\tag{5.6e}
$$

with θ<sup>−</sup>, θ<sup>+</sup>, θ̇<sup>−</sup>, θ̇<sup>+</sup> being the lower/upper bounds of joint angles and velocities, respectively. However, the constraints are described at different levels, i.e., the joint angle level and the joint velocity level, which makes it challenging to solve Eq. (5.6) directly. Therefore, we rewrite the formulation at the velocity level. As to a key point *A*<sub>i</sub> on the robot, let *x*<sub>Ai</sub> be the coordinates of *A*<sub>i</sub> in Cartesian space; both *x*<sub>Ai</sub> and ẋ<sub>Ai</sub> are available:

$$\mathbf{x}\_{Ai} = f\_{Ai}(\theta),\tag{5.7a}$$

$$
\dot{x}\_{Ai} = J\_{Ai}\dot{\theta},\tag{5.7b}
$$

where *f*<sub>Ai</sub>(•) is the forward kinematics of point *A*<sub>i</sub>, and *J*<sub>Ai</sub> is the corresponding Jacobian matrix mapping joint velocities to the velocity of *A*<sub>i</sub>. Let us consider the following equality

$$\frac{d}{dt}(||A\_iB\_j||\_2^2) = k(||A\_iB\_j||\_2^2 - d),\tag{5.8}$$

in which $k$ is a positive constant. It is obvious that the equilibrium point of Eq. (5.8) is $\|A_iB_j\|_2 = d$. By requiring $\frac{\mathrm{d}}{\mathrm{d}t}(\|A_iB_j\|_2) \ge -k(\|A_iB_j\|_2 - d)$, the inequality (5.5d) can be readily guaranteed. Taking the time derivative of $\|A_iB_j\|_2$ yields

$$\begin{split} \frac{\mathrm{d}}{\mathrm{d}t}(\|A_iB_j\|_2) &= \frac{\mathrm{d}}{\mathrm{d}t}\left(\sqrt{(A_i - B_j)^{\mathrm{T}}(A_i - B_j)}\right) \\ &= \frac{1}{\|A_iB_j\|_2}(A_i - B_j)^{\mathrm{T}}(\dot{A}_i - \dot{B}_j) \\ &= \overrightarrow{B_jA_i}^{\mathrm{T}} J_{Ai}(\theta)\dot{\theta} - \overrightarrow{B_jA_i}^{\mathrm{T}}\dot{B}_j, \end{split}\tag{5.9}$$

where $\overrightarrow{B_jA_i} = (A_i - B_j)/\|A_iB_j\|_2$ is the unit vector from $B_j$ to $A_i$, and $\dot{B}_j$ is the velocity of key point $B_j$ on the obstacle. By Eqs. (5.9) and (5.6c), the inequality description of the obstacle avoidance strategy is

$$\overrightarrow{B_jA_i}^{\mathrm{T}} J_{Ai}(\theta)\dot{\theta} \ge -k(\|A_iB_j\|_2 - d) + \overrightarrow{B_jA_i}^{\mathrm{T}}\dot{B}_j.\tag{5.10}$$
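As a quick sanity check on the derivative identity in Eq. (5.9), the analytic rate $\overrightarrow{B_jA_i}^{\mathrm{T}}(\dot{A}_i - \dot{B}_j)$ can be compared against a central finite difference of the distance. The sketch below does this; the point positions and velocities are made-up illustrative data.

```python
import numpy as np

# Sketch checking the identity of Eq. (5.9): the time derivative of
# ||A_i B_j||_2 equals u^T (dA/dt - dB/dt), where u is the unit vector
# from B_j to A_i. All positions and velocities are illustrative.
def dist_rate_analytic(a, b, a_dot, b_dot):
    u = (a - b) / np.linalg.norm(a - b)   # unit vector from B_j to A_i
    return u @ (a_dot - b_dot)

def dist_rate_numeric(a, b, a_dot, b_dot, h=1e-6):
    # central finite difference of the point-to-point distance
    d_plus = np.linalg.norm((a + h * a_dot) - (b + h * b_dot))
    d_minus = np.linalg.norm((a - h * a_dot) - (b - h * b_dot))
    return (d_plus - d_minus) / (2 * h)
```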

**Remark 5.2** In this part, we have shown the basic idea of the obstacle avoidance scheme at the velocity level, whose equilibrium point is described by Eq. (5.8). It is notable that the right-hand side of Eq. (5.8) is only one common form to realize obstacle avoidance. Generally speaking, the right-hand side of Eq. (5.8) may take different forms, such as $-k(\|A_iB_j\|_2 - d)$, $-k(\|A_iB_j\|_2 - d)^3$, *etc.* From Eq. (5.10), the velocity response required to avoid obstacles is related to two parts: the first is the difference between the actual distance and the safety distance, and the other depends on the movement of the obstacles.
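To see how the dynamics of Eq. (5.8) drive the distance towards the safety value, a forward-Euler integration of the scalar equation in its stabilizing form $\dot{s} = -k(s - d)$ can be sketched as follows; $k$, $d$, the step size and the horizon are illustrative choices, not values from the text.

```python
# Sketch: forward-Euler integration of the scalar distance dynamics
#   ds/dt = -k (s - d),
# the stabilizing form of Eq. (5.8), showing that the distance s converges
# to the safety distance d from either side. k, d, dt and the number of
# steps are made-up illustrative values.
def integrate_distance(s0, k=2.0, d=0.01, dt=1e-3, steps=5000):
    s = s0
    for _ in range(steps):
        s += dt * (-k * (s - d))   # one Euler step of Eq. (5.8)
    return s
```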

In terms of the physical constraints on the joint angles, according to the escape velocity method, inequalities (5.6d) and (5.6e) can be uniformly described as $\max(\alpha(\theta^- - \theta), \dot{\theta}^-) \le \dot{\theta} \le \min(\dot{\theta}^+, \alpha(\theta^+ - \theta))$, with $\alpha > 0$. So far, the position-force control problem together with the obstacle avoidance strategy at the velocity level is as below

$$\min \quad ||\dot{\theta}||\_2^2,\tag{5.11a}$$

$$\text{s.t.}\quad J\dot{\theta} = K\_d^{-1}F - K\_pK\_d^{-1}\Delta x + \dot{x}\_\text{d},\tag{5.11b}$$

$$\max(\alpha(\theta^- - \theta), \dot{\theta}^-) \le \dot{\theta} \le \min(\dot{\theta}^+, \alpha(\theta^+ - \theta)),\tag{5.11c}$$

$$J_o\dot{\theta} \le B,\tag{5.11d}$$

where (5.11c) is the rewritten form of (5.6d) and (5.6e) based on the escape velocity scheme, $J_o = [-\overrightarrow{B_1A_1}^{\mathrm{T}} J_{A_1}; \cdots; -\overrightarrow{B_bA_1}^{\mathrm{T}} J_{A_1}; \cdots; -\overrightarrow{B_1A_a}^{\mathrm{T}} J_{A_a}; \cdots; -\overrightarrow{B_bA_a}^{\mathrm{T}} J_{A_a}] \in \mathbb{R}^{ab\times n}$ is the concatenated form of $J_{Ai}$ considering all pairs between $A_i$ and $B_j$, and $B = [B_{11}, \cdots, B_{1b}, \cdots, B_{a1}, \cdots, B_{ab}]^{\mathrm{T}} \in \mathbb{R}^{ab}$ is the vector of upper bounds, in which $B_{ij} = k(\|A_iB_j\|_2 - d) - \overrightarrow{B_jA_i}^{\mathrm{T}}\dot{B}_j$. From the definitions of $J_o$ and $B$, inequality (5.11d) is equivalent to $\overrightarrow{B_jA_i}^{\mathrm{T}} J_{Ai}(\theta)\dot{\theta} \ge -k(\|A_iB_j\|_2 - d) + \overrightarrow{B_jA_i}^{\mathrm{T}}\dot{B}_j$ for all $i = 1, \cdots, a$ and $j = 1, \cdots, b$, i.e., the cascaded form of the inequality description (5.10) over all point pairs $A_iB_j$; hence, if (5.11d) holds, obstacle avoidance is achieved. It is notable that a larger number of key points does help to describe the obstacle more accurately, but it also increases the computational burden, since the number of inequality constraints grows accordingly. Therefore, the spacing of the key points on the obstacle can be selected similar to that of the key points on the manipulator.
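The stacking of $J_o$ and $B$ described above can be sketched as follows. The sign convention (rows $-\overrightarrow{B_jA_i}^{\mathrm{T}} J_{Ai}$, bounds $k(\|A_iB_j\|_2 - d) - \overrightarrow{B_jA_i}^{\mathrm{T}}\dot{B}_j$) assumes the stabilizing form of the avoidance dynamics; all numeric data in the test are invented.

```python
import numpy as np

# Sketch: assemble J_o in R^{ab x n} and B in R^{ab} of Eq. (5.11d) from key
# points A_i on the robot and B_j on the obstacle. Each row encodes one
# instance of the avoidance inequality (5.10), rewritten as
#   -u^T J_Ai dtheta <= B_ij,  u = unit vector from B_j to A_i.
# All data used here are illustrative, not values from the text.
def stack_avoidance(A_pts, B_pts, J_A, B_dots, k, d):
    rows, bounds = [], []
    for a, Ja in zip(A_pts, J_A):
        for b, b_dot in zip(B_pts, B_dots):
            diff = a - b
            dist = np.linalg.norm(diff)
            u = diff / dist                            # unit vector B_j -> A_i
            rows.append(-u @ Ja)                       # one row of J_o
            bounds.append(k * (dist - d) - u @ b_dot)  # matching entry of B
    return np.vstack(rows), np.array(bounds)
```

With $a$ key points and $b$ obstacle points, $J_o$ has $ab$ rows; a zero joint velocity then satisfies $J_o\dot{\theta} \le B$ whenever every pair is farther apart than $d$.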

#### **5.3 RNN Based Controller Design**

In the previous parts, we have transformed the compliance control and obstacle avoidance problems into a constrained-optimization one. However, the QP problem described in Eq. (5.11) contains both equality and inequality constraints, and Eqs. (5.11b) and (5.11d) are nonlinear, so it is difficult to solve directly, especially in real time in industrial applications. Exploiting its parallel computation ability, an RNN is established to solve Eq. (5.11) online, and the stability of the closed-loop system is also discussed.

#### *5.3.1 RNN Design*

For the QP problem Eq. (5.11), an analytical solution can hardly be obtained. We therefore define a Lagrange function as

$$L = ||\dot{\theta}||\_2^2 + \lambda\_1^T (K\_d^{-1} F - K\_p K\_d^{-1} \Delta x + \dot{x}\_d - J(\theta)\dot{\theta}) + \lambda\_2^T (J\_o \dot{\theta} - B), \quad (5.12)$$

where $\lambda_1$ and $\lambda_2$ are the dual state variables (Lagrange multipliers). According to the Karush-Kuhn-Tucker (KKT) conditions, the inherent solution of Eq. (5.11) satisfies

$$
\dot{\theta} = P\_{\Omega} (\dot{\theta} - \frac{\partial L}{\partial \dot{\theta}}),
\tag{5.13a}
$$

$$J\dot{\theta} = K\_d^{-1}F - K\_p K\_d^{-1} \Delta \mathbf{x} + \dot{\mathbf{x}}\_{\mathbf{d}},\tag{5.13b}$$

$$
\lambda\_2 = \left(\lambda\_2 + J\_o \dot{\theta} - B\right)^+,\tag{5.13c}
$$

where $P_{\Omega}(x) = \mathrm{argmin}_{y\in\Omega}\|y - x\|$ is the projection operator of $\dot{\theta}$ onto the convex set $\Omega$, with $\Omega = \{\dot{\theta} \in \mathbb{R}^n \mid \max(\alpha(\theta^- - \theta), \dot{\theta}^-) \le \dot{\theta} \le \min(\dot{\theta}^+, \alpha(\theta^+ - \theta))\}$. In Eq. (5.13c), the operator $(\bullet)^{+}$ is defined as the mapping onto the non-negative space. Equation (5.13c) can be rewritten as

$$\begin{cases} \lambda_2 > 0 & \text{if}\quad J_o\dot{\theta} = B, \\ \lambda_2 = 0 & \text{if}\quad J_o\dot{\theta} < B. \end{cases}\tag{5.14}$$

When $J_o\dot{\theta} < B$, the inequality Eq. (5.11d) holds strictly, and $\lambda_2$ stays at zero. Otherwise, when the inequality reaches its critical state, $\lambda_2$ becomes positive to ensure $J_o\dot{\theta} = B$. In order to obtain the inherent solution in real time, a recurrent neural network is built as follows

$$\varepsilon \ddot{\theta} = -\dot{\theta} + P\_{\Omega} (\dot{\theta} - \dot{\theta} / ||\dot{\theta}||\_2^2 + J^{\text{T}} \lambda\_1 - J\_o^{\text{T}} \lambda\_2), \tag{5.15a}$$

$$
\varepsilon \dot{\lambda}\_1 = K\_d^{-1} F - K\_p K\_d^{-1} \Delta \mathbf{x} + \dot{\mathbf{x}}\_{\mathsf{d}} - J(\theta) \dot{\theta}, \tag{5.15b}
$$

$$
\varepsilon \dot{\lambda}\_2 = -\lambda\_2 + \left(\lambda\_2 + J\_o \dot{\theta} - B\right)^+,\tag{5.15c}
$$

with ε being a positive constant scaling the convergence of Eq. (5.15).
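The projection $P_{\Omega}$ appearing in Eq. (5.15a) reduces, for the box set built from the escape-velocity bounds, to an element-wise clip. A minimal sketch (with illustrative $\alpha$ and limits):

```python
import numpy as np

# Sketch: P_Omega of Eq. (5.15a) as an element-wise clip onto the box
#   max(alpha*(th_lo - theta), dth_lo) <= dtheta <= min(dth_hi, alpha*(th_hi - theta)).
# alpha and the limits below are illustrative, not values fixed by the text.
def project_omega(dtheta, theta, alpha, th_lo, th_hi, dth_lo, dth_hi):
    lo = np.maximum(alpha * (th_lo - theta), dth_lo)  # lower escape-velocity bound
    hi = np.minimum(alpha * (th_hi - theta), dth_hi)  # upper escape-velocity bound
    return np.clip(dtheta, lo, hi)
```

Near a joint limit the position-dependent bound dominates, so the admissible velocity towards the limit shrinks smoothly to zero.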

The proposed RNN based algorithm is summarized in Algorithm 4. Based on the escape velocity method, the convex set of joint velocities is obtained from the positive constant α and the physical constraints θ−, θ+, θ˙−, θ˙+. After initializing the state variables λ1 and λ2, the reference velocity is computed from the desired command and the actual responses according to Eq. (5.4); the output of the RNN (which is also the control command) is then calculated based on Eq. (5.15a), while λ1 and λ2 are updated according to Eqs. (5.15b) and (5.15c).
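The loop just described can be sketched on a simplified instance of Eq. (5.15): a hypothetical problem $\min \frac{1}{2}\|\dot{\theta}\|_2^2$ s.t. $J\dot{\theta} = b_0$, with the obstacle term dropped and wide box limits, so the gradient term is simply $\dot{\theta}$. At equilibrium the output equals the minimum-norm solution $J^{+}b_0$; $J$, $b_0$, the gains and the step size are all invented.

```python
import numpy as np

# Sketch: forward-Euler integration of a simplified RNN in the spirit of
# Eq. (5.15): objective 1/2 ||dtheta||^2 (gradient dtheta), equality
# constraint J dtheta = b0 only, and a wide symmetric box as Omega. At
# equilibrium the output is the minimum-norm solution pinv(J) @ b0.
# All numeric choices are illustrative.
def rnn_min_norm(J, b0, eps=0.02, dt=1e-3, steps=4000, limit=10.0):
    dtheta = np.zeros(J.shape[1])
    lam1 = np.zeros(J.shape[0])
    for _ in range(steps):
        # output dynamics with projection (cf. Eq. (5.15a))
        inner = dtheta - (dtheta - J.T @ lam1)
        d_dtheta = (-dtheta + np.clip(inner, -limit, limit)) / eps
        # dual dynamics enforcing J dtheta = b0 (cf. Eq. (5.15b))
        d_lam1 = (b0 - J @ dtheta) / eps
        dtheta += dt * d_dtheta
        lam1 += dt * d_lam1
    return dtheta
```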

In real applications, the nonlinear system can hardly be approximated exactly. Therefore, an approximation error is inevitable, which would influence the performance of the proposed controller. However, the approximation error is a small value

#### **Algorithm 4** Collision-Free position-force controller based on RNN

**Input:** Positive control gains $\alpha$, $\varepsilon$, and interaction coefficients $K_p$, $K_d$. Initial states $\dot{\theta}(0) = 0$, $\theta(0)$; desired path $x_\mathrm{d}(t)$, $\dot{x}_\mathrm{d}(t)$ and operation force $F_\mathrm{d}(t)$; task duration $T_e$; feedback of the end-effector's coordinates $x(t)$ and contact force $F$, joint angles $\theta$ and Jacobian matrix $J(\theta)$; information of the obstacles $B_j$ and $\dot{B}_j$, $j = 1, \cdots, b$; locations of the key points $A_i$, $i = 1, \cdots, a$, on the robot and the corresponding Jacobian matrices $J_{Ai}$; physical limitations $\theta^-$, $\theta^+$, $\dot{\theta}^-$, $\dot{\theta}^+$; safety distance $d$.

**Output:** Joint velocity command $\dot{\theta}$ achieving position-force control without colliding with the obstacles.

1. Initialize $\lambda_1 = 0$, $\lambda_2 = 0$.
2. **Repeat**
3. Compute the reference velocity $b_0 = K_d^{-1}F - K_pK_d^{-1}\Delta x + \dot{x}_\mathrm{d}$ according to Eq. (5.4).
4. Update the output $\dot{\theta}$ by integrating Eq. (5.15a).
5. Update $\lambda_1$ and $\lambda_2$ according to Eqs. (5.15b) and (5.15c).
6. **Until** ($t > T_e$)

of higher order, and its influence can be suppressed by the negative feedback scheme in the outer loop, as shown in Eq. (5.4).

**Remark 5.3** The output dynamics of the proposed RNN is given in Eq. (5.15a), in which the projection operator $P_{\Omega}(\bullet)$ plays an important role in handling the physical constraints Eq. (5.11c). The update of $\dot{\theta}$ depends on three parts: the first part $-\dot{\theta}/\|\dot{\theta}\|_2^2$ is used to optimize the objective function $\|\dot{\theta}\|_2^2$; the second term $J^{\mathrm{T}}\lambda_1$ guarantees the equality constraint Eq. (5.11b) by adjusting the dual state variable $\lambda_1$ according to Eq. (5.15b); and the last term $-J_o^{\mathrm{T}}\lambda_2$ ensures the inequality constraint Eq. (5.11d). The RNN consists of three kinds of nodes, namely $\dot{\theta}$, $\lambda_1$ and $\lambda_2$, with the total number of neurons being $n + m + ab$.

It is remarkable that the proposed controller is based on information of the system model, such as $J$, $J_o$ and $K_p$, which helps to reduce the computational cost. For the constrained-optimization problem Eq. (5.11), the main challenge is to solve it in real time, since the parameters in the constraints Eqs. (5.11b) and (5.11d) are time varying. From Eq. (5.15), the control effort is obtained by integrating its updating law, which is based on the historical data and the model information; i.e., it is no longer necessary to solve Eq. (5.11) from scratch at every step, and the computational cost is thus reduced. In the following section, we will also show the convergence of the RNN based controller.

In this chapter, we are mainly concerned with the obstacle avoidance problem in force control tasks. It is notable that the force control is mainly based on the idea of impedance control theory, which is similar to existing methods in [35, 36]. The main challenge of the proposed control scheme lies in the limited sampling ability of the cameras used to capture the obstacles. To handle measurement noise or disturbances, a larger safety distance $d$ can be introduced to ensure the performance of obstacle avoidance.

#### *5.3.2 Stability Analysis*

**Lemma 1** (Convergence of a class of neural networks [37]) *A dynamic neural network is stable in the sense of Lyapunov and converges to its equilibrium point if it can be written in the form*

$$
\kappa \dot{x} = -x + P_S(x - \varrho F(x)),
\tag{5.16}
$$

*where* $\kappa > 0$ *and* $\varrho > 0$ *are constant parameters,* $P_S(x) = \mathrm{argmin}_{y\in S}\|y - x\|$ *is a projection operator onto the closed convex set* $S$*, and* $F(\bullet)$ *is a monotone mapping.*

**Definition 1** For a given function *F*(•) which is continuously differentiable, with its gradient defined as ∇*F*, if ∇*F* + ∇*F*<sup>T</sup> is positive semi-definite, *F*(•) is called a monotone function.

About the stability of the closed-loop system, we offer the following theorem.

**Theorem 1** *Given the collision-free position-force controller based on a recurrent neural network, the RNN will converge to the inherent solution (optimal solution) of Eq. (5.11), and the stability of the closed-loop system is also ensured.*

*Proof* Define a vector $\xi = [\dot{\theta}; \lambda_1; \lambda_2] \in \mathbb{R}^{n+m+ab}$. According to Eq. (5.15), the time derivative of $\xi$ satisfies

$$
\varepsilon \dot{\xi} = -\xi + P_{\tilde{\Omega}}[\xi - F(\xi)], \tag{5.17}
$$

in which $\varepsilon > 0$, and $F(\xi) = [F_1(\xi); F_2(\xi); F_3(\xi)]$, where $F_1 = \dot{\theta}/\|\dot{\theta}\|_2^2 - J^{\mathrm{T}}\lambda_1 + J_o^{\mathrm{T}}\lambda_2$, $F_2 = J\dot{\theta} - K_d^{-1}F + K_pK_d^{-1}\Delta x - \dot{x}_\mathrm{d}$, and $F_3 = -J_o\dot{\theta} + B$. By calculating the gradient of $F(\xi)$, we have

$$
\nabla F(\xi) = \begin{bmatrix}
I/\|\dot{\theta}\|_2^2 & -J^{\mathrm{T}} & J_o^{\mathrm{T}} \\
J & 0 & 0 \\
-J_o & 0 & 0
\end{bmatrix}.
\tag{5.18}
$$

Since $\nabla F(\xi) + \nabla F^{\mathrm{T}}(\xi)$ is positive semi-definite, according to Definition 1, $F(\xi)$ is a monotone function. From the description of (5.17), the projection operator $P_S$ can be formulated as $P_S = [P_{\Omega}; P_R; P_{+}]$, in which $P_{\Omega}$ is defined in (5.13a), $P_R$ is the projection operator of $\lambda_1$ onto $\mathbb{R}^m$ (whose upper and lower bounds are $\pm\infty$), and $P_{+} = (\bullet)^{+}$ is a special projection operator onto the closed set $\mathbb{R}_{+}^{ab}$. Therefore, $P_S$ is a projection operator onto the closed convex set $[\Omega; \mathbb{R}^m; \mathbb{R}_{+}^{ab}]$. Based on Lemma 1, the proposed neural network (5.15) is stable and globally converges to the optimal solution of (5.11).
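The monotonicity argument can be checked numerically: the off-diagonal blocks of $\nabla F$ form skew pairs and cancel in the symmetric part, leaving a positive semi-definite matrix. A small sketch with random (illustrative) $J$ and $J_o$:

```python
import numpy as np

# Sketch: numeric check that grad F + grad F^T is positive semi-definite for
# the block structure of Eq. (5.18): the pairs (-J^T, J) and (J_o^T, -J_o)
# cancel in the symmetric part. Sizes and entries are arbitrary test data.
def sym_part_min_eig(J, Jo, dtheta_norm_sq):
    n, m, p = J.shape[1], J.shape[0], Jo.shape[0]
    gF = np.zeros((n + m + p, n + m + p))
    gF[:n, :n] = np.eye(n) / dtheta_norm_sq   # diagonal block
    gF[:n, n:n + m] = -J.T
    gF[:n, n + m:] = Jo.T
    gF[n:n + m, :n] = J
    gF[n + m:, :n] = -Jo
    sym = gF + gF.T                            # symmetric part (times 2)
    return np.linalg.eigvalsh(sym).min()
```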

It is notable that the equality constraint (5.11b) describes the impedance controller, whose convergence can be found in [38]. Similarly, the inequality constraint enables obstacle avoidance during the whole process. The proof is completed.

**Remark 5.4** The impedance controller described in (5.11b) is similar to the traditional methods in [39]. The main contribution of the proposed controller is that it realizes not only force control but also obstacle avoidance; besides, the control strategy is capable of handling inequality constraints, including limits on the joint angles and velocities.

#### **5.4 Numerical Results**

In this part, a series of numerical simulations is carried out to verify the effectiveness of the proposed control scheme. First, a pure force control experiment is carried out to show the effectiveness of the force controller; the control scheme is then further verified by testing the system response after introducing obstacles. Finally, we examine the control performance in more general cases, including model uncertainty and multiple obstacles.

#### *5.4.1 Simulation Settings*

First of all, the planar robot used in the simulation is the same as in the previous chapters. It is worth noting that in a force control task the end-effector needs to keep contact with the workpiece, so it is necessary to distinguish between necessary contact and unwanted collision. The proposed controller handles this by properly selecting the key points: the end-effector is not treated as a key point, so that it is allowed to remain in contact with the workpiece (or external environment). For obstacle avoidance, the set of key points on the robot is defined as $A_1, \cdots, A_7$, in which $A_1$, $A_3$, $A_5$ and $A_7$ are located at the centers of the links, and $A_2$, $A_4$ and $A_6$ are located at joints $J_2$, $J_3$ and $J_4$. The lower and upper bounds of the joint angles and joint velocities are defined as $\theta_i^- = -3$ rad, $\theta_i^+ = 3$ rad, $\dot{\theta}_i^- = -1$ rad/s, $\dot{\theta}_i^+ = 1$ rad/s for $i = 1, \ldots, 4$, respectively. The safety margin is selected as $d = 0.01$ m. The coefficients describing the contact force are selected as $K_d = 50$, $K_p = 5000$. For simplicity, let $b_0 = K_d^{-1}F - K_pK_d^{-1}\Delta x + \dot{x}_\mathrm{d}$.
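The key-point placement described above can be sketched for a planar four-link arm; the link lengths (all 1 m) are an illustrative assumption, not values from the text.

```python
import numpy as np

# Sketch: key-point placement on a planar 4-link arm. A2, A4, A6 sit at
# joints J2-J4 and A1, A3, A5, A7 at the link centers, as described above;
# the end-effector is dropped so it may stay in contact with the workpiece.
# Link lengths are an illustrative assumption.
def key_points(theta, lengths=(1.0, 1.0, 1.0, 1.0)):
    pts, p, phi = [], np.zeros(2), 0.0
    for th, L in zip(theta, lengths):
        phi += th                                   # accumulated link angle
        step = L * np.array([np.cos(phi), np.sin(phi)])
        pts.append(p + 0.5 * step)                  # link midpoint
        p = p + step                                # distal end (next joint)
        pts.append(p)
    return pts[:-1]                                 # drop the end-effector
```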

#### *5.4.2 Force Control Without Obstacles*

First of all, an ideal case where there are no obstacles in the workspace is considered, and the parameters $K_d$ and $K_p$ are assumed to be known. The robot is required to exert a constant contact force on a given plane. The contact force is set to 20 N, and its direction is aligned with the y-axis of the tool coordinate system, which is $[1, -1]^{\mathrm{T}}$ in the base coordinates. The pre-defined path on the contact plane is $x_\mathrm{d} = [0.4 + 0.1\cos(0.5t), 0.5 + 0.1\cos(0.5t)]$. The initial

**Fig. 5.1** Numerical results of compliance control without obstacles. **a** is the robot's tracking path and the corresponding joint configurations. **b** is the profile of position error along the free-motion direction. **c** is the profile of contact force. **d** is the profile of $\|\dot{\theta}\|_2^2$

state of the robot system is set as $\theta_0 = [1.57, -0.628, -0.524, -0.524]^{\mathrm{T}}$ rad, $\dot{\theta}_0 = [0, 0, 0, 0]^{\mathrm{T}}$ rad/s. The control gains of the proposed RNN controller are $\alpha = 8$ and $\varepsilon = 0.02$, respectively. Numerical results are shown in Fig. 5.1. The tracking error along the contact plane is given in Fig. 5.1b; the transient lasts about 1 s. At the beginning, since the end-effector is not in contact with the surface, the contact force stays zero before 0.5 s. As the end-effector approaches the surface, the contact force converges to 20 N, showing the convergence of both the position and force errors. The Euclidean norm of the joint velocities (which is also the output of the established RNN) is shown in Fig. 5.1d; $\|\dot{\theta}\|_2$ changes periodically, with the same cycle as the desired trajectory. The time history of the end-effector's motion trajectory and the corresponding joint configurations are shown in Fig. 5.1a, in which the red arrow indicates the direction of the contact force, and the blue arrow shows the direction of the end-effector's free motion. All in all, the proposed controller achieves position-force control precisely.

#### *5.4.3 Force Control with Single Obstacles*

In this section, a stick obstacle is introduced into the workspace, located at $x = -0.05$ m. The initial states and the desired values of $x_\mathrm{d}$ and $F_\mathrm{d}$ are the same as in Sect. 5.4.2.

**Fig. 5.2** Control performance of the proposed controller while avoiding a wall obstacle. **a** is the robot's tracking path and the corresponding joint configurations. **b** is the profile of position error along the free-motion direction. **c** is the profile of contact force. **d** is the profile of joint angles. **e** is the profile of joint velocities. **f** is the profile of the closest distance to the obstacle of each key point $A_i$, $i = 1, \cdots, 7$

**Remark 5.5** In Eq. (5.10), we have shown the basic idea of calculating the distance between the robot and the obstacles, i.e., by abstracting key points from the robot and the obstacles, the distance between the robot and an obstacle can be described approximately by a set of point-to-point distances. In this example, the distance can be obtained in a simpler way; however, the obstacle avoidance strategy is essentially consistent with Eq. (5.10).

Simulation results are given in Figs. 5.2 and 5.3. The output of the RNN is shown in Fig. 5.2e: when the simulation begins, $\dot{\theta}$ reaches its maximum value, driving the end-effector to move towards the desired path. The robot then slows down quickly (after $t \approx 0.5$ s) and moves smoothly; as a result, the position error successfully converges to 0, and simultaneously the contact force converges to 20 N. It is notable

**Fig. 5.3** Simulation results of the established RNN while avoiding a wall obstacle. **a** is the profile of $\lambda_1$. **b** is the profile of $\lambda_2$. **c** is the profile of $\|J\dot{\theta} - b_0\|_2^2$. **d** is the profiles of the desired and reference trajectory along the x-axis. **e** is the profiles of the desired and reference trajectory along the y-axis. **f** is the profiles of the objective function of the proposed controller and the JPMI based method

that at $t = 1.2$ s, the key point $A_2$ of the robot gets close to the obstacle, as shown in Fig. 5.2f. Based on the obstacle avoidance strategy Eq. (5.15c), the state variable $\lambda_2(2)$ becomes positive, and the output of the RNN then varies with $\lambda_2$ (Fig. 5.3b). Correspondingly, an error (about $1\times10^{-3}$ m) occurs in the positional tracking, and so does the contact force (the force error is about 2 N). However, the RNN converges to the new equilibrium point (the equilibrium point changes when the inequality constraint becomes active), and both $e_x$ and $e_f$ converge to 0. Comparing Figs. 5.2a and 5.1a, after introducing the obstacle, the robot is capable of adjusting its joint configuration to avoid the obstacle. The distances between the key points $A_1$-$A_7$ and the obstacle are shown in Fig. 5.2f; a minimum value of about 0.01 m is maintained during the whole process. Using the impedance model, the force control problem is transferred into a kinematic control one by modifying the reference speed Eq. (5.4). Consequently, the resulting trajectory $x_r$ together with $x_\mathrm{d}$ are shown in Fig. 5.3d, e. As an important index of the proposed control scheme, the norm of joint velocities $\|\dot{\theta}\|_2^2$ should be as small as possible. Therefore, we introduce a comparative simulation, in which the solution is obtained by the pseudo-inverse of the Jacobian matrix (JPMI) and the physical limitations are not considered. Comparative curves of the objective functions are shown in Fig. 5.3f. The RNN based controller can optimize the objective function; it is remarkable that a difference appears at about $t = 1.2$-$5$ s, which is mainly caused by obstacle avoidance (not considered in the JPMI based method). Since the output of the RNN $\dot{\theta}$ is used to approximate the reference speed $b_0$, the approximation error $\|J\dot{\theta} - b_0\|_2^2$ is shown in Fig. 5.3c, demonstrating the effectiveness of the established RNN.

#### *5.4.4 Force Control with Uncertain Parameters*

In this example, we check the performance of the proposed control scheme in the presence of model uncertainties. Similar to the previous simulations, the initial states of the robot are $\theta_0 = [1.57, -0.628, -0.524, -0.524]^{\mathrm{T}}$ rad and $\dot{\theta}_0 = [0, 0, 0, 0]^{\mathrm{T}}$ rad/s. In real implementations, the interaction model is usually unknown, and the nominal values of $K_d$ and $K_p$ are not accurate. Without loss of generality, we select the nominal values as $\hat{K}_d = 80$ and $\hat{K}_p = 4000$, respectively. In order to handle the model uncertainties in the interaction coefficients, an extra node is introduced into Eq. (5.15). The modified RNN can be formulated as

$$\begin{aligned} \varepsilon \ddot{\theta} &= -\dot{\theta} + P_{\Omega}(\dot{\theta} - \dot{\theta}/\|\dot{\theta}\|_2^2 + J^{\mathrm{T}}\lambda_1 - J_o^{\mathrm{T}}\lambda_2), \\ \varepsilon \dot{\lambda}_1 &= \hat{K}_d^{-1}F - \hat{K}_p\hat{K}_d^{-1}\Delta x + \dot{x}_{\mathrm{d}} - J(\theta)\dot{\theta}, \\ \varepsilon \dot{\lambda}_2 &= -\lambda_2 + (\lambda_2 + J_o\dot{\theta} - B)^{+}, \\ \dot{\hat{W}} &= -K_{in}\eta(F_{\mathrm{d}} - F)^{\mathrm{T}}, \end{aligned}$$

in which $\hat{W} = [\hat{K}_p; \hat{K}_d]$, $\eta = [x - x_\mathrm{d}; \dot{x} - \dot{x}_\mathrm{d}]$, and the positive coefficient $K_{in}$ scaling the updating rate is defined as $K_{in} = \mathrm{diag}(500, 20)$. Simulation results are shown in Figs. 5.4 and 5.5. Although the exact values of $K_d$ and $K_p$ are unknown, the closed-loop system is still stable, as shown by the convergence of the tracking error $e_x$ and the contact force $F$ in Fig. 5.4a, b. The curves of the joint angles and joint velocities with respect to time are shown in Fig. 5.4c, d, in which the boundedness of the joint angles and velocities is guaranteed. The estimated interaction coefficients $\hat{K}_d$ and $\hat{K}_p$ are shown in Fig. 5.4e, indicating that both converge to their real values. Figure 5.5a shows the distances between the key points and the obstacle; it is obvious that all key points keep a safe distance from the obstacle (the closest key point is $A_2$). The Euclidean norm of $b_0 - J\dot{\theta}$ is illustrated in Fig. 5.5c; although a fluctuation occurs at about $t = 1.5$ s, the proposed controller can handle the model uncertainties. The impedance-model based reference trajectory and the original desired trajectory are shown in Fig. 5.5d, e. Although $x_r$ and $x_\mathrm{d}$ are different, the tracking error $e_x$ along the direction of free motion and the force error $e_F$ converge to zero, as shown in Fig. 5.4a, b. The objective function $\|\dot{\theta}\|_2^2$ to

**Fig. 5.4** Control performance of the proposed controller while avoiding a wall obstacle with uncertain $K_p$ and $K_d$. **a** is the robot's tracking path and the corresponding joint configurations. **b** is the profile of position error along the free-motion direction. **c** is the profile of contact force. **d** is the profile of joint angles. **e** is the profile of joint velocities. **f** is the profile of the closest distance to the obstacle of each key point $A_i$, $i = 1, \ldots, 7$

be optimized is given in Fig. 5.5f. The convergence of the established RNN is shown in Fig. 5.5c; despite the uncertain parameters, the established RNN is capable of learning the optimal solution using the adaptive updating law. The spikes are mainly caused by the change of $\lambda_2$ when the obstacle avoidance scheme is activated.
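A single forward-Euler step of the adaptive law $\dot{\hat{W}} = -K_{in}\eta(F_\mathrm{d} - F)^{\mathrm{T}}$ can be sketched as follows; only the scalar contact direction is considered, and all numeric values in the test are invented.

```python
import numpy as np

# Sketch: one forward-Euler step of the adaptive update law
#   d/dt W_hat = -K_in * eta * (F_d - F)^T,
# where W_hat = [K_p_hat; K_d_hat] and eta = [x - x_d; dx - dx_d] along the
# (scalar) contact direction. Gains, errors and the step size are illustrative.
def adapt_step(W_hat, eta, F_d, F, K_in, dt):
    return W_hat + dt * (-K_in @ eta * (F_d - F))
```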

#### *5.4.5 Manipulation in Narrow Space*

In this part, we discuss a more general case of the motion-force control task, in which the workspace is a limited narrow space. The robot is bounded by two parallel lines, namely $y_1 = 0.15$ m and $y_2 = -0.15$ m. Considering the safety distance, all

**Fig. 5.5** Simulation results of the established RNN while avoiding a wall obstacle with uncertain *Kp* and *Kd* . **<sup>a</sup>** is the profile of <sup>λ</sup>1. **<sup>b</sup>** is the profile of <sup>λ</sup>2. **<sup>c</sup>** is the profile of ||*<sup>J</sup>* <sup>θ</sup>˙ <sup>−</sup> *<sup>b</sup>*0||<sup>2</sup> <sup>2</sup>. **d** is the profiles of the desired and reference trajectory along x-axis. **e** is the profiles of the desired and reference trajectory along y-axis. **f** is the profiles of the objective function of the proposed controller and JPMI based method

key points except $A_8$ must satisfy the workspace description $-0.14 \le y \le 0.14$ m. The initial joint angles are set to $\theta_0 = [0.393, -1.05, 1.57, -0.785]^{\mathrm{T}}$ rad, and $\dot{\theta}_0 = [0, 0, 0, 0]^{\mathrm{T}}$ rad/s. The desired path is selected as $x_\mathrm{d} = [0.8 + 0.1\cos(0.5t), -0.15]^{\mathrm{T}}$ m, and the expected contact force is $F_\mathrm{d} = 20$ N, with the direction vector being $[0, -1]^{\mathrm{T}}$. Simulation results are given in Figs. 5.6 and 5.7. When the simulation begins, the initial position error is about 0.1 m; it then converges to zero, with a transient of about 0.5 s. Simultaneously, the contact force converges to 20 N. In Fig. 5.7a, the minimum distances between the key points and $y_1$ and $y_2$ are represented by blue and red curves, respectively. The tracking trajectory and the corresponding joint configurations are shown in Fig. 5.6a. During $t = 1$-$1.5$ s and $t = 6$-$13$ s, point $A_2$ gets close to $y_1$; during $t = 4$-$7$ s, $A_4$ is close to $y_2$. It is remarkable that fluctuations appear in the positional and force errors at $t = 1$ s and $t = 4$ s (i.e., when $A_2$ and $A_4$ get close to the bounds), respectively. Similar to the previous simulations, the reference trajectories are given in Fig. 5.7c, d, and the objective functions are shown

**Fig. 5.6** Control performance of the proposed controller in a narrow workspace. **a** is the robot's tracking path and the corresponding joint configurations. **b** is the profile of position error along the free-motion direction. **c** is the profile of contact force. **d** is the profile of joint angles. **e** is the profile of joint velocities. **f** is the profile of the closest distance to the obstacle of each key point $A_i$, $i = 1, \cdots, 7$

in Fig. 5.7e. Using the proposed RNN controller, the robot can realize both position and force control in a limited narrow space.

#### *5.4.6 Comparisons*

In this part, comparisons between the proposed control scheme and existing methods are given to show the superiority of the RNN based strategy. The comparisons are summarized in Table 5.1. In [22], an RNN based controller is designed for redundant manipulators, in which both obstacle avoidance and physical constraints are considered; however, that controller focuses only on the kinematic control problem. In [40] and [16], force control together with obstacle avoidance is taken into account, but the physical constraints are ignored. Reference [23] develops an adaptive admittance control strategy, which is capable

**Fig. 5.7** Simulation results of the established RNN in a narrow workspace. **a** is the profile of λ1, **b** is the profile of λ2, **c** is the profiles of the desired and reference trajectory along x-axis. **d** is the profiles of the desired and reference trajectory along y-axis. **e** is the profiles of the objective function of the proposed controller and JPMI based method


**Table 5.1** Comparisons among the proposed controller and existing methods

of dealing with force control under model uncertainties, physical constraints and real-time optimization. It is remarkable that the proposed strategy focuses on real-time obstacle avoidance in force control tasks, and the controller is capable of ensuring the boundedness of the joint angles and velocities. At the same time, the simulations have shown its potential in optimizing the norm of the joint velocities.

#### **5.5 Summary**

This chapter constructs a new collision-free compliance controller based on QP programming and neural networks. Different from existing methods, this chapter describes the control problem from the perspective of optimization, formulating compliance control and collision avoidance as equality or inequality constraints. Physical constraints, such as limits on the joint angles and velocities, are also considered. Before concluding this chapter, it is worth noting that this is the first RNN based compliance control method that considers collision avoidance in real time, and it shows great potential in dealing with physical constraints. In this chapter, Matlab simulations are carried out to verify the efficiency of the controller. In the future, we will examine the control framework with different impedance models in realistic physical simulation environments, and then consider machine vision and system delays on a physical experiment platform.

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 6 RNN for Motion-Force Control of Redundant Manipulators with Optimal Joint Torque**

**Abstract** Precise motion-force control is a core and difficult problem in robotics, especially for robots with redundant degrees of freedom. For example, purely trajectory-based control often fails in robotic grinding because of the intolerable impact force applied to the end-effector. The main difficulties lie in the coupling of motion and contact forces, redundancy resolution and physical constraints. In this chapter, we propose a novel motion-force control strategy in the framework of recurrent neural networks. The tracking error and the contact force are described in orthogonal subspaces. By choosing minimum joint torque as the secondary task, the control problem is transformed into a QP problem under multiple constraints. In order to achieve real-time optimization of the joint torque, which is non-convex with respect to the joint angles, the original QP is reconstructed at the velocity level, and the original objective function is replaced by its time derivative. Then a convergence-provable dynamic neural network is established to solve the reformulated QP problem online. This extends RNN based robot control from pure motion control to motion-force control, and opens a new way for controller design with both convergence and optimality guarantees. Numerical results show that the proposed method achieves precise motion-force control, handles inequality constraints such as joint angle, velocity and torque limits, and reduces joint torque consumption by 16% on average.

#### **6.1 Introduction**

Redundant manipulators, which have more DOFs than required to complete a given task, are more flexible than non-redundant ones. The redundant DOFs enable manipulators to realize fault-tolerant control, improve operation performance and enhance reliability. Therefore, redundant manipulators have been widely used in industry, agriculture, military, space exploration, etc., and have been studied intensively [1–4].

Motion control and force control are the two main modes of redundant manipulator control. In motion control problems, a basic assumption is that there is no contact between the robot and the environment, that is, the robot can move freely in the workspace [5]. This setting covers coating, welding, stacking and other applications. The core problem is then to design control commands that drive the robot to follow a predetermined trajectory. The control command may be a joint angle sequence [6], velocity sequence [7], acceleration sequence [8, 9] or torque sequence [10–12]. The redundancy resolution is usually used to achieve a secondary task, such as avoiding obstacles [13] or avoiding singularities [14]. Different from motion control, force control involves direct interaction between a robot and its environment. Controlling the contact force enhances the robustness and flexibility of the robot in weakly structured environments, and thus its operation ability [15], with typical applications in polishing, grinding and assembly [16, 17]. In [18], the theoretical framework of impedance control is proposed. The basic idea is to treat the environment as an admittance and the robot as an impedance. By maintaining a dynamic relationship between force and motion, the controller behaves as a spring-mass-damper system. In [19], a hybrid position-force controller is proposed which combines position information with force information to realize simultaneous control of position constraints and force constraints. Based on these two control frameworks, a series of controllers have been proposed and verified by simulations or experiments [20–22].

Although the above work has achieved great success in motion-force control of non-redundant robots, the control of redundant robots has not attracted enough attention. It is worth noting that the redundancy of the manipulator provides an opportunity to fulfill secondary objectives, but it also introduces mathematical difficulties. In [23], a motion-force control strategy is proposed to realize compliant control in the presence of unknown obstacles. The motion of the robot is completely decoupled into two parts, namely the motion of the end-effector and the internal motion. The motion of the end-effector is controlled to achieve position-force control with respect to the environment, while the internal motion is designed to avoid obstacles by minimizing impact. In [24], a robust control strategy with the ability to adjust contact force and apparent impedance is designed; the controller is robust to dynamic and kinematic uncertainties. In [25], Patel et al. proposed a hybrid impedance control scheme based on the pseudoinverse of the Jacobian, in which joint angle limits are avoided by defining a function that scales the difference between each joint angle and its boundary. However, these methods require continuous computation of the pseudoinverse of the Jacobian matrix, which brings a huge computational burden and makes it difficult to deal with multiple constraints [26].

In order to solve the redundancy resolution problem of redundant robots, a feasible method is to transform the control problem into a constrained optimization problem [27]. The objective function is established according to the secondary task, and the constraints are established according to the primary task and physical limits. This optimization problem is often described as a QP problem [28]. Because of their efficiency in parallel computation, recurrent neural networks are often used to solve QP based redundancy resolution online [29]. In recent years, controllers based on recurrent neural networks (RNNs) have been introduced into the motion control of redundant robots. In [30], a new redundancy resolution method is proposed, which constructs a robust neural network at the velocity level to guarantee the boundedness of joint accelerations. In [31], the RNN is improved to allow projection operations on non-convex sets, avoiding the accumulation of tracking errors due to system noise. In [32], a method is proposed to optimize manipulability through indirect maximization of its time derivative. In [33], cooperative control of distributed multi-robot systems is studied. Recently, RNNs have been extended to the control of flexible robots, systems with model uncertainties and other problems [34–37]. Although RNN based motion control of redundant robots has achieved good results, to the best of our knowledge, there is no report on the application of RNNs to motion-force control of robots.

On this basis, we propose an RNN based motion-force control scheme, which is an important extension of recurrent neural networks in robot control. Table 6.1 provides a brief comparison between the proposed and existing schemes. Unlike [25] and [23], in this chapter the motion-force controller is established at the joint velocity level and allows multiple inequality constraints, and the non-convex optimization problem is studied without loss of generality. Compared with similar existing RNN based motion controllers [31, 32, 34], the proposed motion-force controller no longer needs the pseudoinverse of the Jacobian.

This chapter is organized as follows. In Sect. 6.2, the tracking error and contact force are modeled, and the control problem is written as a QP problem. In Sect. 6.3, the QP is reformulated at the velocity level by rewriting the objective function and constraints. In Sect. 6.4, an RNN is set up to solve the redundancy resolution problem, and its stability is proved. In Sect. 6.5, numerical experiments on a 4-DOF planar manipulator are carried out. Finally, the last section concludes the chapter. The main contributions are as follows:


#### **6.2 Problem Formulation**

#### *6.2.1 Problem Formulation*

In this chapter, we focus on position-force control problem for redundant manipulators. Figure 6.1 gives a brief introduction of a redundant robot and its operation on an


**Table 6.1** Comparisons between the proposed motion-force control scheme and existing ones

† In [25] and [23], no projection operations are introduced in the control strategies. ‡ In [23], only certain kinds of inequalities can be handled

workpiece. The robot is expected to exert a desired contact force along the normal direction of the contact surface while the end-effector tracks a predefined trajectory along the surface. In the base coordinate frame $R_0(O_0, x_0, y_0, z_0)$, the forward kinematics of a serial manipulator can be written as

$$f(\theta(t)) = \mathbf{x}(t),\tag{6.1}$$

where $\theta \in \mathbb{R}^n$ is the vector of joint angles, $\mathbf{x} \in \mathbb{R}^m$ represents the end-effector's coordinate vector in frame $R_0$, and $f(\cdot): \mathbb{R}^n \to \mathbb{R}^m$ is the forward kinematics operator. For a redundant manipulator, we have $n > m$.

By differentiating $\mathbf{x}(t)$ with respect to time $t$, we can get the relationship between the Cartesian velocity $\dot{\mathbf{x}}(t) \in \mathbb{R}^m$ and the joint velocity (or joint control signal) $\dot{\theta}(t) \in \mathbb{R}^n$ as follows:

$$J(\theta(t))\dot{\theta}(t) = \dot{\mathbf{x}}(t),\tag{6.2}$$

where $J(\theta(t)) = \partial f(\theta(t))/\partial \theta(t)$ is the Jacobian matrix.
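To make the kinematic model concrete, the following sketch implements (6.1) and (6.2) for a planar serial arm such as the 4-DOF manipulator used later in Sect. 6.5. The link lengths and joint values are illustrative assumptions, not taken from the book.

```python
import numpy as np

def fk_planar(theta, links):
    """Forward kinematics f(theta) of a planar serial arm (Eq. 6.1)."""
    angles = np.cumsum(theta)                 # absolute angle of each link
    x = np.sum(links * np.cos(angles))
    y = np.sum(links * np.sin(angles))
    return np.array([x, y])

def jacobian_planar(theta, links):
    """Analytic Jacobian J = df/dtheta (Eq. 6.2), here of size 2 x n."""
    angles = np.cumsum(theta)
    n = len(theta)
    J = np.zeros((2, n))
    for i in range(n):
        # joint i moves every link from i to the end of the chain
        J[0, i] = -np.sum(links[i:] * np.sin(angles[i:]))
        J[1, i] =  np.sum(links[i:] * np.cos(angles[i:]))
    return J
```

A finite-difference check of `jacobian_planar` against `fk_planar` is a quick way to validate such a model before using it in a controller.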


In position-force control tasks, the end-effector's motion is constrained by the contact surface. For simplicity, we define a tool coordinate system $R_t(x_t, y_t, z_t)$, in which the axis $z_t$ is aligned with the normal direction of the contact surface. Obviously, the motion of the end-effector can be specified along $x_t$ and $y_t$. In this chapter, the friction between the robot and the contact surface is ignored; therefore, the contact force $F$ is aligned with $z_t$.

In the tool coordinate system $R_t$, let $\delta X_t$ be the displacement between the end-effector and its position command; then the contact force $F_t$ can be formulated as

$$F\_t = k\_f \Sigma\_t \delta X\_t,\tag{6.3}$$

where $k_f > 0$ is the stiffness coefficient and $\Sigma_t = \mathrm{diag}(0, 0, 1)$. The diagonal matrix $\Sigma_t$ describes the relationship between the contact force and the relative displacement along different axes: 1 means that the displacement component along $z_t$ affects the contact force, and 0 otherwise.

Similarly, in the tool coordinate system $R_t$, the position tracking error $e_t$ can be written as

$$
e\_t = \bar{\Sigma}\_t \delta X\_t,\tag{6.4}
$$

where $\bar{\Sigma}_t = I - \Sigma_t = \mathrm{diag}(1, 1, 0)$; 1 means there is a DOF of movement along the corresponding direction, and 0 otherwise.

When the contact surface is known a priori, $R_t$ can be obtained from $R_0$ by a rotation matrix $S_t$. Let $F$, $e_0$ and $\delta X$ be the descriptions of $F_t$, $e_t$ and $\delta X_t$ in the coordinate frame $R_0$; then we have $F = S_t^{\mathrm{T}} F_t$, $e_t = S_t e_0$ and $\delta X_t = S_t \delta X$. Therefore, $F$ and $e_0$ can be rewritten as

$$\begin{cases} F = k\_f S\_t^{\mathrm{T}} \Sigma\_t S\_t \delta X, \\ e\_0 = S\_t^{\mathrm{T}} \bar{\Sigma}\_t S\_t \delta X. \end{cases} \tag{6.5}$$

Note that in frame $R_0$, the displacement $\delta X$ can be described as $\delta X = \mathbf{x} - \mathbf{x}_{\mathrm{d}}$, where $\mathbf{x}_{\mathrm{d}}$ is the desired position signal described in $R_0$. Using (6.1), (6.5) can be rewritten as

$$\begin{cases} F = k\_f S\_t^{\mathrm{T}} \Sigma\_t S\_t (f(\theta) - \mathbf{x}\_{\mathrm{d}}), \\ e\_0 = S\_t^{\mathrm{T}} \bar{\Sigma}\_t S\_t (f(\theta) - \mathbf{x}\_{\mathrm{d}}). \end{cases} \tag{6.6}$$

**Remark 6.1** Equation (6.6) gives a unified description of the relationship between the contact force $F$, the position tracking error $e_0$ and the displacement $\delta X$ in $R_0$: $\delta X$ leads to a contact force $F$ in the normal direction, and a position tracking error $e_0$ along the contact surface.

In real implementations, given the desired contact force $F_{\mathrm{d}}$ and trajectory command $\mathbf{x}_{\mathrm{d}}$, the manipulator's end-effector is expected to exert the contact force $F_{\mathrm{d}}$ while tracking $\mathbf{x}_{\mathrm{d}}$, i.e., $F \to F_{\mathrm{d}}$, $e_0 \to 0$. For convenience in the following sections, let $A = [k_f S_t^{\mathrm{T}} \Sigma_t S_t; S_t^{\mathrm{T}} \bar{\Sigma}_t S_t]$, $r = [F^{\mathrm{T}}, e_0^{\mathrm{T}}]^{\mathrm{T}}$, and $r_{\mathrm{d}} = [F_{\mathrm{d}}; 0]$. Then (6.6) can be reformulated as

$$A(f(\theta) - \mathbf{x}\_{\mathsf{d}}) = r.\tag{6.7}$$

Therefore, the objective of position-force control is to adjust the joint angles $\theta$ so as to ensure $r \to r_{\mathrm{d}}$.
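For the planar case used later in Sect. 6.5 (contact surface $y = 0$, so $S_t$ reduces to the identity and the surface normal is the $y$-axis), the maps in (6.5)–(6.7) become small constant matrices. The sketch below builds $A$ and evaluates $r = [F; e_0]$ for a displacement; the 2-D reduction of $\Sigma_t$ and the numeric values are assumptions for illustration.

```python
import numpy as np

k_f = 1000.0                        # stiffness coefficient (assumed value)
S_t = np.eye(2)                     # surface y = 0: tool frame equals base frame
Sigma = np.diag([0.0, 1.0])         # constrained (normal) direction
SigmaBar = np.eye(2) - Sigma        # free-motion (tangential) direction

# A stacks the force map and the tracking-error map of Eq. (6.7)
A = np.vstack([k_f * S_t.T @ Sigma @ S_t,
               S_t.T @ SigmaBar @ S_t])

def task_vector(x, x_d):
    """r = [F; e_0] produced by a displacement deltaX = x - x_d (Eqs. 6.6-6.7)."""
    return A @ (x - x_d)
```

A small penetration of 2 mm below the surface thus produces a 2 N contact force, while a tangential offset appears unchanged in the tracking-error block.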

#### *6.2.2 Joint Torque and Physical Constraints*

When the end-effector offers a contact force *F*, the corresponding torque is provided by motors at every joint. The relationship between contact force *F* and the joint torque τ can be formulated as

$$
\tau = J^{\mathrm{T}}(\theta)F.\tag{6.8}
$$

In the control of redundant manipulators, there are infinitely many solutions to a given control task. In order to save energy during the control process, we select an objective function measuring energy consumption as $\tau^{\mathrm{T}}\tau/2$: the smaller $\tau^{\mathrm{T}}\tau/2$, the less the energy consumption.

In real implementations, the system is limited by physical constraints. For example, the joint angles $\theta$ and velocities $\dot{\theta}$ must not exceed their limits $\theta^{\min}$, $\theta^{\max}$, $\dot{\theta}^{\min}$, $\dot{\theta}^{\max}$, since collisions or motor overheating would lead to irreversible damage. At the same time, considering the bounded torque output of the motors, the joint torque $\tau$ is limited as $\tau^{\min} \le \tau \le \tau^{\max}$.

#### *6.2.3 Optimization Problem Formulation*

According to the descriptions above, the position-force control problem for redundant manipulators considering torque optimization can be formulated as

$$\min \qquad G\_1 = \mathbf{r}^\mathsf{T} \mathbf{r} / 2,\tag{6.9a}$$

$$\mathbf{s}.\mathbf{t}.\qquad \mathbf{\tau} = J^{\mathrm{T}} F,\tag{6.9b}$$

$$r\_{\mathbb{d}} = A(f(\theta) - x\_{\mathbb{d}}),\tag{6.9c}$$

$$
\theta^{\text{min}} \le \theta \le \theta^{\text{max}}, \tag{6.9d}
$$

$$
\dot{\theta}^{\text{min}} \le \dot{\theta} \le \dot{\theta}^{\text{max}},
\tag{6.9e}
$$

$$
\tau^{\min} \le \tau \le \tau^{\max},\tag{6.9f}
$$

with $\theta$ being the decision variable. Equation (6.9a) is the cost function to be minimized, the equality constraint (6.9b) describes the relationship between the resulting joint torque $\tau$ and the contact force $F$, the force and motion tasks of the robot are described in (6.9c), and the inequality constraints (6.9d), (6.9e) and (6.9f) are the physical limits to be satisfied. By substituting (6.9b) into (6.9a), the optimization problem can be rewritten as

$$\min \quad G\_1 = F^\mathsf{T} J(\theta) J^\mathsf{T}(\theta) F / 2,\tag{6.10a}$$

$$\text{s.t.} \quad r\_{\text{d}} = A(f(\theta) - \mathbf{x}\_{\text{d}}),\tag{6.10b}$$

$$
\theta^{\text{min}} \le \theta \le \theta^{\text{max}},
\tag{6.10c}
$$

$$
\dot{\theta}^{\text{min}} \le \dot{\theta} \le \dot{\theta}^{\text{max}},\tag{6.10d}
$$

$$
\tau^{\min} \le \tau \le \tau^{\max}.\tag{6.10e}
$$

There are two main challenges in solving (6.10). Firstly, as an objective function to be minimized, $F^{\mathrm{T}} J(\theta) J^{\mathrm{T}}(\theta) F/2$ is usually non-convex with respect to $\theta$, because it is a function of $J(\theta)$. Secondly, the equality constraint (6.10b) is highly nonlinear, and at the same time it remains difficult to handle the inequality constraints, especially (6.10d) and (6.10e).

#### **6.3 Reconstruction of Optimization Problem**

In this section, in order to overcome the above difficulties, the redundancy resolution problem (6.10) is reconstructed. The objective function is first redefined, and both the equality and inequality constraints are rebuilt at the velocity level.

#### *6.3.1 Reconstruction of Objective Function*

For $F^{\mathrm{T}} J(\theta) J^{\mathrm{T}}(\theta) F/2$, we replace $F$ with its desired value $F_{\mathrm{d}}$. The objective function then becomes $G_2 = F_{\mathrm{d}}^{\mathrm{T}} J(\theta) J^{\mathrm{T}}(\theta) F_{\mathrm{d}}/2$.

**Remark 6.2** There are two main reasons for this replacement. Firstly, according to the control objective, the contact force $F$ is expected to track $F_{\mathrm{d}}$; if the controller is properly designed, $F$ will eventually converge to $F_{\mathrm{d}}$, so $F_{\mathrm{d}}^{\mathrm{T}} J(\theta) J^{\mathrm{T}}(\theta) F_{\mathrm{d}}/2$ becomes equivalent to $F^{\mathrm{T}} J(\theta) J^{\mathrm{T}}(\theta) F/2$. Secondly, $F_{\mathrm{d}}$ is independent of $\theta$, so the replacement reduces the computational complexity of the control process.

Differentiating *G*<sup>2</sup> with respect to time, we have

$$\dot{G}\_2 = \left(J^{\mathrm{T}}(\theta)F\_{\mathrm{d}}\right)^{\mathrm{T}} \frac{\mathrm{d}(J^{\mathrm{T}}(\theta)F\_{\mathrm{d}})}{\mathrm{d}t}.\tag{6.11}$$

Obviously, $\dot{G}_2$ describes the rate of change of $G_2$. By minimizing $\dot{G}_2$, the system is driven in the direction of decreasing $G_2$. Therefore, in this chapter we use $\dot{G}_2$ instead of $G_2$ as the new objective function. Note that $\mathrm{d}(J^{\mathrm{T}}(\theta)F_{\mathrm{d}})/\mathrm{d}t$ can be formulated as

$$\begin{split} \frac{\mathrm{d}}{\mathrm{d}t}(J^{\mathrm{T}}(\theta) F\_{\mathrm{d}}) &= \sum\_{i=1}^{n} \frac{\partial (J^{\mathrm{T}}(\theta) F\_{\mathrm{d}})}{\partial \theta\_{i}} \dot{\theta}\_{i} + J^{\mathrm{T}}(\theta) \dot{F}\_{\mathrm{d}} \\ &= [H\_{1}, \cdots, H\_{n}] \dot{\theta} + J^{\mathrm{T}}(\theta) \dot{F}\_{\mathrm{d}}, \end{split} \tag{6.12}$$

where $H_i \in \mathbb{R}^n$ is

$$H\_i = \frac{\partial (J^\mathrm{T}(\boldsymbol{\theta}) F\_\mathrm{d})}{\partial \theta\_i} = \begin{bmatrix} \sum\_{j=1}^m (\partial (J(j,1) F\_\mathrm{d}(j)) / \partial \theta\_i) \\ \sum\_{j=1}^m (\partial (J(j,2) F\_\mathrm{d}(j)) / \partial \theta\_i) \\ \cdots \\ \sum\_{j=1}^m (\partial (J(j,n) F\_\mathrm{d}(j)) / \partial \theta\_i) \end{bmatrix}.$$

Let $H = [H_1, \cdots, H_n]$; then (6.11) can be converted into

$$
\dot{G}\_2 = F\_\mathrm{d}^\mathrm{T} J H \dot{\theta} + F\_\mathrm{d}^\mathrm{T} J J^\mathrm{T} \dot{F}\_\mathrm{d}.\tag{6.13}
$$

It is worth pointing out that the second term of (6.13) is independent of $\dot{\theta}$; therefore, the objective function is equivalent to $F_{\mathrm{d}}^{\mathrm{T}} J H \dot{\theta}$.
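The columns $H_i$ of (6.12) are partial derivatives of $J^{\mathrm{T}}(\theta)F_{\mathrm{d}}$ and can be approximated by central differences when an analytic form is inconvenient. Since $J^{\mathrm{T}}F_{\mathrm{d}}$ is the gradient of the scalar $F_{\mathrm{d}}^{\mathrm{T}} f(\theta)$ for constant $F_{\mathrm{d}}$, $H$ is its Hessian and should come out symmetric, which gives a handy sanity check. The planar-arm Jacobian below is an assumed example model, not the book's.

```python
import numpy as np

LINKS = np.array([0.25, 0.25, 0.25, 0.25])   # assumed link lengths

def jac(theta):
    """Jacobian of an assumed planar 4-link arm."""
    a = np.cumsum(theta)
    return np.array([[-np.sum(LINKS[i:] * np.sin(a[i:])) for i in range(4)],
                     [ np.sum(LINKS[i:] * np.cos(a[i:])) for i in range(4)]])

def H_matrix(theta, F_d, eps=1e-6):
    """H = [H_1, ..., H_n] with H_i = d(J^T F_d)/d theta_i (Eq. 6.12),
    approximated column by column with central differences."""
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n); d[i] = eps
        H[:, i] = (jac(theta + d).T @ F_d - jac(theta - d).T @ F_d) / (2 * eps)
    return H
```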

#### *6.3.2 Reconstruction of Constraints*

In this part, we transform the constraints to the velocity level. First of all, we define a concatenated vector describing the force and position errors as $e = r - r_{\mathrm{d}} = [F - F_{\mathrm{d}}; e_0]$; according to (6.7), $e$ can be formulated as

$$e = A(f(\theta) - \mathbf{x}\_{\mathsf{d}}) - r\_{\mathsf{d}}.\tag{6.14}$$

Differentiating $e$ and combining (6.2) yields

$$
\dot{e} = A(J\dot{\theta} - \dot{\mathbf{x}}\_{\mathsf{d}}) - \dot{r}\_{\mathsf{d}}.\tag{6.15}
$$

To ensure that $e$ converges to zero, a simple controller can be designed as $\dot{e} = -ke$, where $k > 0$ is a positive constant. According to (6.14) and (6.15), the equality constraint can be converted to the velocity level as

$$AJ\dot{\theta} = \dot{r}\_{\mathsf{d}} + A\dot{\mathbf{x}}\_{\mathsf{d}} - k(A(f(\theta) - \mathbf{x}\_{\mathsf{d}}) - r\_{\mathsf{d}}).\tag{6.16}$$

As for the inequality constraints (6.10c) and (6.10d), following [27], let $\omega = \dot{\theta}$ and define $\alpha \ge 0$ as a constant parameter that scales the negative feedback enforcing the joint limits; these two constraints can be formulated at the velocity level as

$$
\omega^{\rm min} \le \omega \le \omega^{\rm max},\tag{6.17}
$$

where $\omega^{\min} = \max\{\alpha(\theta^{\min} - \theta), \dot{\theta}^{\min}\}$ and $\omega^{\max} = \min\{\alpha(\theta^{\max} - \theta), \dot{\theta}^{\max}\}$.
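Equation (6.17) merges the joint-angle and joint-velocity limits into a single box constraint on $\omega$. A minimal sketch, using the limits of Sect. 6.5 as assumed test values:

```python
import numpy as np

def velocity_bounds(theta, th_min, th_max, dth_min, dth_max, alpha=10.0):
    """Box bounds omega_min <= omega <= omega_max of Eq. (6.17):
    near a position limit, the term alpha*(limit - theta) tightens
    the admissible speed toward zero."""
    w_min = np.maximum(alpha * (th_min - theta), dth_min)
    w_max = np.minimum(alpha * (th_max - theta), dth_max)
    return w_min, w_max
```

Far from the position limits the bounds reduce to the raw velocity limits; close to a limit the admissible speed shrinks linearly, so the joint decelerates before reaching it.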

Similarly, (6.10e) can be enforced indirectly by limiting its derivative: $\beta(\tau^{\min} - \tau) \le \dot{\tau} \le \beta(\tau^{\max} - \tau)$, where $\beta$ is a positive constant. Combining with (6.12), the boundedness of the joint torque can be rewritten as an inequality constraint on a function $g(\omega)$ as

$$g(\omega) \le 0,\tag{6.18}$$

where $g(\omega) = [\beta(\tau^{\min} - \tau) - H\omega - J^{\mathrm{T}}\dot{F}_{\mathrm{d}};\; H\omega + J^{\mathrm{T}}\dot{F}_{\mathrm{d}} - \beta(\tau^{\max} - \tau)] \in \mathbb{R}^{2n}$.

#### *6.3.3 Reformulation and Convexification*

According to the above description, in order to achieve position-force control of redundant manipulators, instead of solving (6.10) directly, one feasible solution is to solve the optimization problem in velocity level as

$$\min \qquad F\_{\mathsf{d}}^{\mathsf{T}} J H \omega,\tag{6.19a}$$

$$\text{s.t.} \qquad r\_r = AJ\omega,\tag{6.19b}$$

$$\mathbf{g}(\omega) \le \mathbf{0},\tag{6.19c}$$

$$
\omega \in \Omega, \tag{6.19d}$$

where $r_r = \dot{r}_{\mathrm{d}} + A\dot{\mathbf{x}}_{\mathrm{d}} - k(A(f(\theta) - \mathbf{x}_{\mathrm{d}}) - r_{\mathrm{d}})$, and $\Omega = \{\omega \in \mathbb{R}^n \,|\, \omega_i^{\min} \le \omega_i \le \omega_i^{\max}\}$ is a convex set. It is remarkable that the objective function in (6.19a) is non-convex with respect to $\omega$; therefore, (6.19b) is used to convexify (6.19a). The final form of the optimization problem is

$$\min \quad F\_{\mathsf{d}}^{\mathsf{T}} J H \omega + (A J \omega - r\_r)^{\mathsf{T}} (A J \omega - r\_r), \tag{6.20a}$$

$$\text{s.t.}\qquad r\_r = AJ\,\omega,\tag{6.20b}$$

$$\mathbf{g}(\omega) \le \mathbf{0},\tag{6.20c}$$

$$
\omega \in \Omega.\tag{6.20d}
$$

So far, we have reconstructed the position-force control problem with joint torque optimization into a constrained quadratic program. However, the QP problem (6.20) still cannot be solved directly.

#### **6.4 RNN Based Redundancy Resolution**

In this section, in order to solve the optimization problem (6.20), an expanded recurrent neural network is built to obtain the optimal solution of (6.20), and its stability is discussed.

#### *6.4.1 RNN Design*

Firstly, let $\lambda_1 \in \mathbb{R}^{2m}$ and $\lambda_2 \in \mathbb{R}^{2n}$ be dual variables for the constraints (6.20b) and (6.20c); a Lagrange function is defined as

$$\begin{split} L &= F\_{\mathsf{d}}^{\mathsf{T}} J H \boldsymbol{\omega} + (A J \boldsymbol{\omega} - \boldsymbol{r}\_{r})^{\mathsf{T}} (A J \boldsymbol{\omega} - \boldsymbol{r}\_{r}) \\ &+ \boldsymbol{\lambda}\_{1}^{\mathsf{T}} (\boldsymbol{r}\_{r} - A J \boldsymbol{\omega}) + \boldsymbol{\lambda}\_{2}^{\mathsf{T}} \boldsymbol{g}(\boldsymbol{\omega}). \end{split} \tag{6.21}$$

According to the Karush–Kuhn–Tucker conditions, the optimal solution of the optimization problem (6.20) can be equivalently formulated as

$$
\omega = P\_{\mathcal{Q}}(\omega - \frac{\partial L}{\partial \omega}),
\tag{6.22a}
$$

$$r\_r = AJ\omega,\tag{6.22b}$$

$$
\lambda\_2 = \left(\lambda\_2 + \mathbf{g}(\omega)\right)^+,\tag{6.22c}
$$

where $P_{\Omega}(x) = \mathrm{argmin}_{y \in \Omega} \|y - x\|$ is the projection onto the convex set $\Omega$, and $(x)^+ = (x_1^+, \cdots, x_{2n}^+)^{\mathrm{T}}$ with $x_i^+ = \max(x_i, 0)$.
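The two projection operations appearing in (6.22) are simple componentwise maps; a minimal sketch:

```python
import numpy as np

def P_box(x, lo, hi):
    """P_Omega: projection onto the box {lo <= x <= hi} used in (6.22a)."""
    return np.clip(x, lo, hi)

def plus(x):
    """(x)^+: projection onto the nonnegative orthant, used in (6.22c)."""
    return np.maximum(x, 0.0)
```

Both are cheap elementwise operations, which is one reason the RNN dynamics below can be evaluated in real time.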

In order to solve (6.22), an expanded recurrent neural network is designed as

$$\begin{split} \varepsilon \dot{\boldsymbol{\omega}} &= -\boldsymbol{\omega} + P\_{\Omega} (\boldsymbol{\omega} - \boldsymbol{H}^{\mathrm{T}} \boldsymbol{J}^{\mathrm{T}} \boldsymbol{F}\_{\mathsf{d}} - \boldsymbol{J}^{\mathrm{T}} \boldsymbol{A}^{\mathrm{T}} (\boldsymbol{A} \boldsymbol{J} \boldsymbol{\omega} - \boldsymbol{r}\_{r}) \\ &+ \boldsymbol{J}^{\mathrm{T}} \boldsymbol{A}^{\mathrm{T}} \boldsymbol{\lambda}\_{1} - \nabla \mathsf{g} \boldsymbol{\lambda}\_{2}), \end{split} \tag{6.23a}$$

$$
\varepsilon \dot{\lambda}\_1 = r\_r - A J \omega,\tag{6.23b}
$$

$$
\varepsilon \dot{\lambda}\_2 = -\lambda\_2 + (\lambda\_2 + g(\omega))^+,
\tag{6.23c}
$$

where $\nabla g = (\partial g_1/\partial \omega, \cdots, \partial g_{2n}/\partial \omega) = [-H^{\mathrm{T}}, H^{\mathrm{T}}] \in \mathbb{R}^{n \times 2n}$, and $\varepsilon$ is a positive constant scaling the convergence rate of (6.23). The pseudo code of the RNN-based strategy is shown in Algorithm 5.

#### **Algorithm 5** The RNN based position-force controller

**Input:** Control parameters $\varepsilon$, $k$, $\alpha$, $\beta$, stiffness coefficient $k_f$, prior knowledge of the contact surface $S_t$, $\Sigma_t$ and $\bar{\Sigma}_t$. The joint limits $\theta_i^{\max}$, $\theta_i^{\min}$ and joint speed limits $\dot{\theta}_i^{\max}$, $\dot{\theta}_i^{\min}$, initial joint angles $\theta(0)$, desired tracking trajectory $r_{\mathrm{d}}(t)$, $\dot{r}_{\mathrm{d}}(t)$, and contact force $F_{\mathrm{d}}$. Feedback of the actual coordinate $\mathbf{x}(t)$, contact force $F(t)$ and joint angles $\theta$, task duration $T$.

**Output:** Joint velocity commands achieving position-force control of the redundant manipulator with optimized joint torque

1: Initialize λ1(0), λ2(0)

2: **Repeat**

3: On-line feedback of *F*, θ, *x* ← from sensors


**Until** (*t* > *T* )
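A possible discrete-time implementation of the RNN dynamics (6.23) is forward-Euler integration, matching the repeat-until loop of Algorithm 5. The sketch below is a simplified, self-contained version: the toy matrices, the constant task vector `r_r` and the deliberately inactive torque constraint are illustrative assumptions, and the dual update for $\lambda_2$ follows the standard projected form. On this toy problem the state drives $AJ\omega$ toward $r_r$.

```python
import numpy as np

def rnn_step(omega, lam1, lam2, J, H, A, F_d, r_r, g, grad_g,
             w_min, w_max, eps=0.005, dt=1e-4):
    """One forward-Euler step of the RNN dynamics (6.23)."""
    grad = (H.T @ J.T @ F_d
            + J.T @ A.T @ (A @ J @ omega - r_r)
            - J.T @ A.T @ lam1
            + grad_g @ lam2)
    d_omega = (-omega + np.clip(omega - grad, w_min, w_max)) / eps  # (6.23a)
    d_lam1 = (r_r - A @ J @ omega) / eps                            # (6.23b)
    d_lam2 = (-lam2 + np.maximum(lam2 + g(omega), 0.0)) / eps       # dual update
    return omega + dt * d_omega, lam1 + dt * d_lam1, lam2 + dt * d_lam2

# Toy problem (n = 2 joints, m = 1 task dimension); all values are assumed:
J = np.array([[1.0, 0.0]])
A = np.array([[1.0], [0.0]])
H = np.zeros((2, 2))
F_d = np.array([0.0])
r_r = np.array([0.5, 0.0])              # constant task vector
g = lambda w: -np.ones(4)               # torque constraint kept inactive
grad_g = np.zeros((2, 4))
lo, hi = -10.0 * np.ones(2), 10.0 * np.ones(2)

omega, lam1, lam2 = np.zeros(2), np.zeros(2), np.zeros(4)
for _ in range(20000):                  # integrate 2 s of virtual time
    omega, lam1, lam2 = rnn_step(omega, lam1, lam2, J, H, A,
                                 F_d, r_r, g, grad_g, lo, hi)
```

In a real controller the loop would additionally refresh $J$, $H$, $r_r$ and $g$ from sensor feedback at every step, as Algorithm 5 prescribes.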

#### *6.4.2 Stability Analysis*

In this part, we give a theoretical analysis of the stability and convergence of the closed-loop system using the proposed neural network (6.23).

First of all, several definitions and lemmas that are useful in the stability analysis are presented.

**Definition 6.1** A continuously differentiable mapping $F(\cdot)$ is said to be monotone if $\nabla F + \nabla F^{\mathrm{T}}$ is positive semi-definite, where $\nabla F$ is the gradient of $F(\cdot)$.

**Lemma 6.1** *[38] A dynamic neural network with a monotone mapping $F(\cdot)$ converges to its equilibrium point if it satisfies*

$$
\kappa \dot{\mathbf{x}} = -\mathbf{x} + P\_S(\mathbf{x} - \varrho F(\mathbf{x})),
\tag{6.24}
$$

*where $\kappa > 0$ and $\varrho > 0$ are constant parameters, and $P_S(x) = \mathrm{argmin}_{y \in S}\|y - x\|$ is the projection onto the closed set $S$.*

Based on these, a theorem about the convergence of the proposed redundancy resolution scheme can be stated as follows.

**Theorem 6.1** *Given the motion-force control problem for redundant manipulators with torque optimization under physical constraints as (6.20), the recurrent neural network (6.23) is stable and will globally converge to the optimal solution of (6.20).*

*Proof* Let $\xi = [\omega^{\mathrm{T}}, \lambda_1^{\mathrm{T}}, \lambda_2^{\mathrm{T}}]^{\mathrm{T}}$; the proposed RNN (6.23) can be written as

$$
\varepsilon \dot{\xi} = -\xi + P\_{\tilde{\Omega}}[\xi - F(\xi)],\tag{6.25}
$$

where $F(\xi) = [F_1(\xi)^{\mathrm{T}}, F_2(\xi)^{\mathrm{T}}, F_3(\xi)^{\mathrm{T}}]^{\mathrm{T}} \in \mathbb{R}^{2m+3n}$, in which

$$
\begin{bmatrix} F\_1 \\ F\_2 \\ F\_3 \end{bmatrix} = \begin{bmatrix} H^\top J^\top F\_\mathsf{d} + J^\top A^\top (A J \omega - r\_r) - J^\top A^\top \lambda\_1 + \nabla g \lambda\_2 \\ \lambda\_1 - r\_r + A J \omega \\ -\mathsf{g}(\omega) \end{bmatrix}.
$$

Letting $\nabla F(\xi) = \partial F/\partial \xi$, we have

$$
\nabla F(\xi) = \begin{bmatrix} J^\mathrm{T} A^\mathrm{T} A J & -J^\mathrm{T} A^\mathrm{T} & \nabla g \\ A J & I & 0 \\ - (\nabla g)^\mathrm{T} & 0 & 0 \end{bmatrix}. \tag{6.26}
$$

It is remarkable that

$$\left(\nabla F(\xi) + \left(\nabla F(\xi)\right)^{\mathsf{T}}\right) = \begin{bmatrix} 2J^{\mathsf{T}}A^{\mathsf{T}}AJ & 0 & 0\\ 0 & 2I & 0\\ 0 & 0 & 0 \end{bmatrix}.\tag{6.27}$$

From Definition 6.1, $F(\xi)$ is a monotone mapping of $\xi$.

According to (6.23) and (6.25), $P_{\bar{\Omega}}$ can be formulated as $P_{\bar{\Omega}} = [P_{\Omega}; P_R; P_{\Lambda}]$, where $P_R$ is the projection of $\lambda_1$ onto $\mathbb{R}^{2m}$, whose upper and lower bounds are $\pm\infty$. Furthermore, $(\cdot)^+$ is the special case $P_{\Lambda}$ with $\Lambda = \mathbb{R}^{2n}_+$, the nonnegative orthant of $\mathbb{R}^{2n}$. Therefore, $P_{\bar{\Omega}}$ is a projection onto the closed set $\bar{\Omega}$. Based on Lemma 6.1, the proposed neural network (6.23) is stable and globally converges to the optimal solution of (6.20). The proof is complete.

#### **6.5 Illustrative Examples**

In this section, taking a planar 4-DOF manipulator as an example, numerical experiments are carried out to verify the effectiveness of the proposed control scheme. First, we check the control performance without joint torque optimization by setting the term $H^{\mathrm{T}} J^{\mathrm{T}} F_{\mathrm{d}}$ in (6.23a) to zero. Secondly, a dynamic simulation example with joint torque optimization is introduced to illustrate the superiority of the control strategy. Finally, the adaptability and optimization performance of the method are verified by simulations under different initial conditions.

#### *6.5.1 Simulation Setup*

As shown in Fig. 6.2, a contact surface in the workspace can be described as *y* = 0, the end-effector can move freely along the horizontal axis, and the desired contact force *F*<sup>d</sup> is aligned with the vertical direction. The stiffness coefficient *k <sup>f</sup>* is


set to 1000 N/mm. The positive control gains are set as $\alpha = 10$, $\beta = 10$, $k = 8$ and $\varepsilon = 0.005$. The physical constraints on joint angles, velocities and torques are $\theta^{\min} = [-2, -2, -2, -2]^{\mathrm{T}}$ rad, $\theta^{\max} = [2, 2, 2, 2]^{\mathrm{T}}$ rad, $\dot{\theta}^{\min} = [-2, -2, -2, -2]^{\mathrm{T}}$ rad/s, $\dot{\theta}^{\max} = [2, 2, 2, 2]^{\mathrm{T}}$ rad/s, $\tau^{\min} = [-10, -10, -10, -10]^{\mathrm{T}}$ Nm, and $\tau^{\max} = [10, 10, 10, 10]^{\mathrm{T}}$ Nm, respectively.

#### *6.5.2 Position-Force Control Without Optimization*

In this part, the robot is controlled to exert a constant contact force on the surface while tracking a given trajectory; joint torque optimization is not yet considered. The initial joint angles are selected as $\theta_0 = [1.57, -1.26, -0.52, -0.52]^{\mathrm{T}}$ rad. The desired trajectory is defined as $\mathbf{x}_{\mathrm{d}} = [0.25 + 0.1\cos(0.5t), 0]^{\mathrm{T}}$, and the desired contact force is $F_{\mathrm{d}} = [0, -1]^{\mathrm{T}}$ N. Numerical results are shown in Fig. 6.3. When the simulation begins ($t < 0.5$ s), the position error is large and there is no contact between the robot and the surface; correspondingly, both the contact force and the resulting torque are zero. Under the RNN based controller (6.23), the joint velocities reach their maximum values, the end-effector approaches the surface rapidly from the initial position, and the tracking error converges to zero quickly; the corresponding joint configurations are shown in Fig. 6.3a. As the robot approaches the contact surface, it slows down quickly, and the contact force rises from zero and then converges smoothly to the desired value. In the steady state ($t > 2$ s), both the contact force $F$ and the end-effector position track the desired commands with zero error, which means the robot tracks

**Fig. 6.3** Numerical results when tracking a time varying force command along a trajectory without optimization. **a** Profiles of the end-effector (black dashed line) and the corresponding joint configurations. **b** Profiles of position error. **c** Profiles of contact force. **d** Profiles of joint angles. **e** Profiles of joint velocities. **f** Profiles of joint torque

both the desired trajectory and the desired force successfully. Correspondingly, the joint angles change periodically, which enables the robot to achieve dynamic tracking. This also leads to a periodic change of the resulting torque in joint space, as shown in Fig. 6.3f. During the whole process, the bounds on joint angles, velocities, and torques are all respected.
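The contact behaviour described above (zero force until touch, then a force that rises with penetration) can be sketched with a simple spring-type environment model. The function below is an illustration using the 1000 N/mm stiffness from the setup; the sign convention and the unit choice are assumptions.

```python
def contact_force(y_tip_mm, y_surface_mm=0.0, k_e=1000.0):
    """Spring-type environment: no force while the end-effector is above
    the surface; once it penetrates, the normal force grows linearly
    with penetration depth (k_e in N/mm, positions in mm)."""
    penetration = y_surface_mm - y_tip_mm  # positive below the surface
    return k_e * penetration if penetration > 0.0 else 0.0
```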

#### *6.5.3 Position-Force Control with Optimization*

In this part, joint torque optimization is introduced to make full use of the manipulator's redundancy. The proposed position-force control scheme is first validated in a fixed-point case, and then extended to dynamic cases.

#### *(1) Position-Force Control on A Fixed Point*

In this simulation, the robot is required to exert a constant contact force *F*<sub>d</sub> = [0, −10]<sup>T</sup> N at a fixed point *x*<sub>d</sub> = [0.3, 0]<sup>T</sup>. The initial joint angles are again θ<sub>0</sub> = [1.57, −1.26, −0.52, −0.52]<sup>T</sup> rad. Numerical results are shown in Fig. 6.4. At the beginning of the simulation, the robot moves at its maximum speed (2 rad/s), making the regulation error converge quickly; it then slows down as the regulation error becomes small. At *t* = 0.5 s, the robot touches the surface, which gives rise to the contact force. Using the proposed controller, the control errors of both motion and force converge to zero smoothly. Correspondingly, the Euclidean norm of the joint torque also converges to a constant value (3.7 N²·m²). From Fig. 6.4e, f, the joint angles and velocities do not exceed their limits, showing that the proposed scheme handles inequality constraints effectively. To further demonstrate the validity of the optimization scheme, comparative simulations without optimization are also carried out. The resulting Euclidean norm of the joint torque without optimization is shown as the red dashed line (4.3 N²·m² in the steady state). After introducing the joint torque optimization strategy, a 16% reduction in torque consumption is achieved.

#### *(2) Position-Force Control Along A Straight Line*

Then we examine the optimization scheme in dynamic control, where both the desired path *x*<sub>d</sub> and the desired force *F*<sub>d</sub> are time varying. The expected signals are defined as *x*<sub>d</sub> = [0.25 + 0.1cos(0.5*t*), 0]<sup>T</sup> and *F*<sub>d</sub> = [0, 20 − 2cos(0.5*t*)]<sup>T</sup> N, respectively. The initial joint angles are the same as in the previous case. Numerical results are shown in Fig. 6.5. We also define an index to measure the torque consumption: *J*<sub>τ</sub> = ∫<sub>0</sub><sup>*T*</sup> ||τ(*t*)||<sub>2</sub><sup>2</sup> d*t*.
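As an illustration, the consumption index *J*<sub>τ</sub> can be approximated from sampled torques with the trapezoidal rule; the sampling interval and array layout below are assumptions.

```python
import numpy as np

def torque_cost(tau_history, dt):
    """Discrete approximation of J_tau = integral of ||tau(t)||_2^2 dt,
    using the trapezoidal rule over sampled torques
    (rows = time steps, columns = joints)."""
    sq = np.sum(np.asarray(tau_history, dtype=float) ** 2, axis=1)
    return float(np.sum((sq[:-1] + sq[1:]) / 2.0) * dt)
```

For a constant torque vector the result reduces to ||τ||² multiplied by the total duration, which is a quick sanity check.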

When the simulation begins, the high joint speed ensures fast convergence of the tracking error, similar to the previous simulation. After *t* = 2 s, high-precision trajectory tracking is realized by the control strategy, as is the contact force tracking. A comparative simulation without joint torque optimization is carried out. Figure 6.5d compares the Euclidean norms of the joint torque with and without optimization. Correspondingly, *J*<sub>τ</sub> decreases by 16.2%, from 142 to 119, showing the validity of the proposed scheme. It is notable that all physical constraints are guaranteed. The dynamic change of the joint configurations is shown in Fig. 6.5a.

#### *(3) Position-Force Control Along An Arc Surface*

In this part, the end-effector is controlled to track a quarter-circular surface centered at [0.3, 0.3]<sup>T</sup> m with radius 0.2 m, while exerting a constant force of 10 N in the vertical direction. The initial joint angles are selected as θ<sub>0</sub> = [1.5708, −0.9851, −1.1714, 0]<sup>T</sup> rad. Numerical results are shown in Fig. 6.6. The trajectory of the end-effector is shown in Fig. 6.6a, while Fig. 6.6b shows the case without optimization. The proposed controller enables the robot to achieve precise control of both position and force while, by adjusting its joint angles, the

**Fig. 6.4** Numerical results when the robot is controlled to offer constant force at a fixed point. **a** Profiles of the end-effector (black dashed line) and the corresponding joint configurations. **b** Profiles of position error. **c** Profiles of contact force. **d** Comparison of Euclidean norm of joint torque with and without optimization. **e** Profiles of joint angles. **f** Profiles of joint velocities

joint torque consumption is reduced, i.e., *J*<sub>τ</sub> decreases by 17.6%, from 88.1 to 72.6. It is remarkable that the physical constraints are also guaranteed.

#### *(4) Adaptability to Different Initial Settings*

To further illustrate the joint torque optimization scheme, another fixed-point control simulation is presented. The desired signals are set as *x*<sub>d</sub> = [0.3, 0]<sup>T</sup> and *F*<sub>d</sub> = [0, −10]<sup>T</sup> N. The initial

**Fig. 6.5** Numerical results when tracking a time varying force command along a straight line with optimization. **a** Profiles of the end-effector (black curve) and the corresponding joint configurations. **b** Profiles of position error. **c** Profiles of contact force. **d** Comparison of Euclidean norm of joint torque with and without optimization. **e** Profiles of joint angles. **f** Profiles of joint velocities

values of the joint angles are selected as θ<sub>0</sub> = [1.8850, −1.8850, −1.2566, 0]<sup>T</sup> rad; consequently, the corresponding position of the end-effector coincides exactly with *x*<sub>d</sub>. As shown in Fig. 6.7a, the robot adjusts its posture and comes to rest in the final state, while keeping its end-effector on the fixed point. This phenomenon is similar to the null-space movement obtained with pseudo-inverse methods. However, unlike pseudo-inverse

**Fig. 6.6** Numerical results when tracking a time varying force command along an arc surface with optimization. **a** Time history of the end-effector (black curve) and the corresponding joint configurations with optimization. **b** Time history of the end-effector (black curve) and the corresponding joint configurations without optimization. **c** Profiles of contact force. **d** Profiles of position error. **e** Comparison of Euclidean norm of joint torque with and without optimization. **f** Profiles of joint angles

based methods, the RNN-based motion-force controller is capable of handling physical inequalities, and at the same time joint torque optimization is achieved, with the torque norm reduced from 4.3 to 3.7. Furthermore, there is no need to calculate the pseudo-inverse of the Jacobian matrix, which saves computing cost effectively.

**Fig. 6.7** Numerical results when the initial position of end-effector locates on the desired fixed point. **a** Profiles of joint configurations. **b** Profiles of Euclidean norm of joint torque with and without optimization. **c** Profiles of joint angles. **d** Profiles of joint velocities

Finally, a group of verifications of fixed-point position-force control with different initial joint angles is carried out; the desired signals are the same as in the previous simulation. As shown in Fig. 6.8, although the initial joint angles differ, the robot reaches the same joint angles at steady state, which shows the adaptability of the RNN-based control strategy.

#### **6.6 Question and Answer**

#### **Q1:** "*What's the complexity of the proposed RNN?*"

**Answer:** The network is organized in a one-layer architecture consisting of 2*m* + 3*n* neurons, namely ω ∈ R<sup>*n*</sup>, λ<sub>1</sub> ∈ R<sup>2*m*</sup>, and λ<sub>2</sub> ∈ R<sup>2*n*</sup>. Despite the differences between the proposed neural network and traditional recurrent neural networks, one characteristic can be seen from both the mathematical description Eq. (23) and the architecture: the network uses its historical information to calculate the output at the current moment, which is a typical feature of recurrent neural networks.
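The recurrent character described in the answer (the current output computed from the previous state) can be illustrated with an Euler discretisation of a generic projection neural dynamic. The gradient term, bounds, and step sizes below are placeholders, not the exact dynamics of Eq. (23).

```python
import numpy as np

def rnn_step(omega, grad, lo, hi, eps=0.005, dt=0.001):
    """One Euler step of a generic projection RNN
    eps * d(omega)/dt = -omega + P_Omega(omega - grad).
    The new state is computed from the previous state, which is the
    'historical information' the answer refers to."""
    projected = np.clip(omega - grad, lo, hi)  # box projection P_Omega
    return omega + (dt / eps) * (-omega + projected)
```

Iterating this update on a simple quadratic cost (grad = ω) drives the state to the unconstrained minimum inside the box, mimicking the convergence behaviour of the neural dynamic.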

**Fig. 6.8** Time history of the robot's joint configurations in fixed-point control from different initial joint angles θ<sub>0</sub>. **a** θ<sub>0</sub> = [0.9, −0.75, −1.5, −1.6]<sup>T</sup> rad. **b** θ<sub>0</sub> = [1.8, −0.3, −1.6, 0.6]<sup>T</sup> rad. **c** θ<sub>0</sub> = [1.9, −1.5, −1.6, −0.6]<sup>T</sup> rad. **d** θ<sub>0</sub> = [0.5, −0.5, −1.6, −0.6]<sup>T</sup> rad. **e** θ<sub>0</sub> = [0.7, −0.3, −2, 0.2]<sup>T</sup> rad. **f** θ<sub>0</sub> = [0.3, −1.5, −1.6, −0.6]<sup>T</sup> rad

**Q2:** "*As described in Section II, the matrices* Σ<sub>*f*</sub> *and* Σ̄<sub>*f*</sub> *are crucial in the controller design; however, the authors didn't show the details. How to obtain those matrices in actual applications requires a detailed description.*"

**Answer:** Σ<sub>*f*</sub> and Σ̄<sub>*f*</sub> are used to decouple the contact force and the tracking error of the end-effector. When the contact surface is known, the combination of Σ<sub>*f*</sub>, Σ̄<sub>*f*</sub>, and *S*<sub>*t*</sub> enables a normalized description of the control tasks.

**Q3:** "*Limited stiffness of the manipulator elements can lead to oscillations of the state variables. Have you observed such behavior in the object?*"

**Answer:** The limited stiffness of the manipulator elements can indeed lead to oscillations of the state variables. In this chapter, the QP-type formulation is obtained based on a static modeling method, and the inertial force is not taken into account. The condition for this modeling method is that the process is quasi-static; in other words, the relative motion between the end-effector and the workpiece is very slow. In the experimental tests, we also found that some oscillation can occur if some parameters are inappropriately tuned. In this case, a damping coefficient can be introduced to suppress the oscillations.

**Q4:** "*In a real manipulator, a significant issue is related to the control of electric drives. In the mentioned structures, the internal control loop related to torque control introduces some delays for the external speed controllers. Have you considered such a problem?*"

**Answer:** In this chapter, we mainly focus on projection-RNN-based controller design at the kinematic level, and the control command is the joint velocity signal. Therefore, we assume that the robot controller provides an ideal response to the joint velocity command. Although delay is unavoidable in real systems, with the control frequency set to 100 Hz, the experimental results demonstrate the effectiveness of the proposed controller. From Eq. (23), it can be observed that the force control is realized by adjusting the joint velocities based on the RNN, which is consistent with the idea of admittance control. In our experiments, the velocity control in the inner loop is handled by the robot controller. It is remarkable that uncertainties at the dynamic level, such as friction and disturbances, do affect the performance of the position-force control in the outer loop, but these uncertainties can be suppressed by the closed-loop mechanism of the controller itself.
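The admittance-control analogy in the answer can be sketched as a one-line outer loop: the force tracking error is mapped to a task-space velocity correction, which the (assumed ideal) inner velocity loop then executes. The gain value below is purely illustrative.

```python
def velocity_correction(f_measured, f_desired, adm_gain=0.005):
    """Admittance-style outer loop: the force error is converted into a
    Cartesian velocity correction handed to the robot's inner velocity
    controller (adm_gain is an assumed admittance parameter)."""
    return adm_gain * (f_desired - f_measured)
```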

**Q5:** "*Could you explain the real impact of the projection operator P<sub>Ω</sub> on the work of the control system?*"

**Answer:** The projection operator *P*<sub>Ω</sub> plays an important role in guaranteeing the boundedness of the output of the neural network, i.e., the boundedness of ω is ensured by introducing *P*<sub>Ω</sub>. As described in Eq. (17), based on the escape velocity method, the boundedness of both the joint angles and the joint velocities is guaranteed.
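When the constraint set Ω is a box of velocity bounds, as in this chapter, the projection reduces to component-wise clipping. A minimal sketch:

```python
import numpy as np

def P_Omega(x, lower, upper):
    """Projection onto a box: each component of the network output is
    clipped into [lower, upper], which is what keeps omega -- and hence
    the commanded joint velocities -- bounded."""
    return np.clip(x, lower, upper)
```

Any component outside the box is moved to the nearest bound, while components already inside pass through unchanged.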

**Q6:** "*RNN uses delays during data processing, so the calculation step size seems to be important for overall work. Have you considered such issue?*"

**Answer:** We did consider this problem. The faster the RNN is computed, the better the achievable performance. At the same time, however, this increases the computational burden, and if the computation cannot keep up with the control period, the system may become unstable. In our experiments, the control period is set to 10 ms.

#### **6.7 Summary**

This chapter focuses on the motion-force control problem for redundant manipulators, taking physical constraints and torque optimization into consideration. Firstly, the tracking error and the contact force are modeled in orthogonal spaces, and the control problem is then formulated as a QP problem, which is further rewritten at the velocity level by rewriting the objective function and constraints. To handle multiple physical constraints, an RNN-based scheme is designed to solve the redundancy resolution online. Numerical experimental results show the validity of the proposed control scheme. Before ending this chapter, it is noteworthy that this is the first chapter to deal with motion-force control of redundant manipulators in the framework of RNNs; redundant manipulators with force sensitivity, e.g., grinding robots, can be readily controlled with the proposed RNN model, but not with existing RNN models in this field.
