**Springer Series on Touch and Haptic Systems**

Thorsten A. Kern · Christian Hatzfeld · Alireza Abbasimoshaei *Editors*

# Engineering Haptic Devices

Third Edition

# **Springer Series on Touch and Haptic Systems**

## **Series Editors**

Manuel Ferre, Universidad Politécnica de Madrid, Madrid, Spain
Marc Ernst, Ulm University, Ulm, Germany
Alan Wing, University of Birmingham, Birmingham, UK

#### **Editorial Board**

Carlo A. Avizzano, Scuola Superiore Sant'Anna, Pisa, Italy
Massimo Bergamasco, Scuola Superiore Sant'Anna, Pisa, Italy
Antonio Bicchi, University of Pisa, Pisa, Italy
Jan van Erp, University of Twente, Enschede, The Netherlands
Matthias Harders, University of Innsbruck, Innsbruck, Austria
William S. Harwin, University of Reading, Reading, UK
Vincent Hayward, Sorbonne Université, Paris, France
Juan M. Ibarra, Cinvestav, Mexico City, Mexico
Astrid M. L. Kappers, Eindhoven University of Technology, Eindhoven, The Netherlands
Miguel A. Otaduy, Universidad Rey Juan Carlos, Madrid, Spain
Angelika Peer, Libera Università di Bolzano, Bolzano, Italy
Jerome Perret, Haption, Soulgé-sur-Ouette, France
Domenico Prattichizzo, University of Siena, Siena, Italy
Jee-Hwan Ryu, Korea Advanced Institute of Science and Technology, Daejeon, Korea (Republic of)
Jean-Louis Thonnard, Université Catholique de Louvain, Ottignies-Louvain-la-Neuve, Belgium
Yoshihiro Tanaka, Nagoya Institute of Technology, Nagoya, Japan
Dangxiao Wang, Beihang University, Beijing, China
Yuru Zhang, Beihang University, Beijing, China

The Springer Series on Touch and Haptic Systems is published in collaboration with the EuroHaptics Society. It is focused on publishing new advances and developments in all aspects of haptics. Haptics is a multi-disciplinary field with researchers from Psychology, Physiology, Neurology, Engineering, and Computer Science (amongst others) contributing to a better understanding of the sense of touch, and researching how to improve and reproduce haptic interaction artificially in order to simulate real scenarios. The series includes monographs focused on specific topics, edited volumes covering general topics from different perspectives, and selected Ph.D. theses. Books in this series focus on haptics and haptic interfaces.


Thorsten A. Kern · Christian Hatzfeld · Alireza Abbasimoshaei Editors

# Engineering Haptic Devices

Third Edition

*Editors*

Thorsten A. Kern
Institute for Mechatronics (M-4)
Hamburg University of Technology
Hamburg, Germany

Alireza Abbasimoshaei
Institute for Mechatronics (M-4)
Hamburg University of Technology
Hamburg, Germany

Christian Hatzfeld
Institut für Elektromechanische Konstruktionen
Technische Universität Darmstadt
Darmstadt, Hessen, Germany

ISSN 2192-2977 ISSN 2192-2985 (electronic)
Springer Series on Touch and Haptic Systems
ISBN 978-3-031-04535-6 ISBN 978-3-031-04536-3 (eBook)
https://doi.org/10.1007/978-3-031-04536-3

1st edition: Springer-Verlag Berlin Heidelberg 2009

2nd edition: Springer-Verlag London 2014

3rd edition: © The Editor(s) (if applicable) and The Author(s) 2023. This book is an open access publication.

**Open Access** This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

# **Series Editors' Foreword**

This volume of the Springer Series on Touch and Haptic Systems, published as a collaboration between Springer and the EuroHaptics Society, is significant for several reasons. *Engineering Haptic Devices* marks a milestone as the 20th volume in the series, which saw its first volumes published in 2011. The volume is also significant for being the second open-access publication in the series. This will help it to reach the wider audience it justly deserves, and the commercial sponsorship of Grewus GmbH is greatly appreciated. But most importantly, the volume is a major revision of an earlier edition. The new version is over 20% longer, with many revised and new sections, and now includes many illustrations in colour. The changes will further reinforce the volume's position as the only comprehensive textbook on haptic devices that covers both the user and the technical design of haptic systems.

The editors of *Engineering Haptic Devices* are Thorsten A. Kern, Christian Hatzfeld and Alireza Abbasimoshaei. We are saddened by the loss of Christian Hatzfeld, who passed away before the publication of this book. We suggest the book represents a fitting tribute to his work. All three editors contributed to the writing of the chapters, joined by a number of authors with a wide range of experience in haptics. The book, which comprises 15 chapters plus appendices and a glossary, is divided into two parts: Part I provides an introduction to the basics of haptics, and Part II covers most of the engineering aspects related to haptic devices. Chapter topics in Part I include the motivation for the use of haptics, haptics as an interaction modality, the user's role in haptic systems and the development of haptic systems. In Part II, topics include the identification of requirements, haptic system structures, haptic system control, kinematics, actuators, sensors, interfaces, software, evaluation and case studies.
Engineering Haptic Devices is written in a style that will be accessible to researchers, engineers and human factors practitioners already working in haptics and looking to use the work as a reference as well as to students attending advanced undergraduate and graduate courses and seeking a comprehensive grounding in this wide-ranging and important topic.

Madrid, Spain
Ulm, Germany
Birmingham, UK
March 2022

Manuel Ferre
Marc Ernst
Alan Wing

# **Note from the Book Editors**

The idea for this book was born in 2003. Originally conceived as a supplement to Thorsten A. Kern's dissertation, it was intended to fill a gap: the regrettably small number of comprehensive, summarising publications on haptics available to, for example, a technically interested person confronted for the first time with the task of designing a haptic device. In 2004, apart from a considerable number of conference proceedings, journals and dissertations, there was no document summarising the most important findings of this challenging topic.

The support of several colleagues, especially Prof. Dr.-Ing. Dr. med. Ronald Blechschmidt-Trapp and Dr.-Ing. Christoph Doerrer, helped to develop the idea further in the following years—and showed that this book had to become much more extensive than originally expected. With encouragement from Prof. Dr.-Ing. habil. Roland Werthschützky, the first edition was edited by Thorsten A. Kern during a Post-doc period. It was funded by the German Research Foundation (DFG, grant KE1456/1-1) with a special focus on consolidating the design methodology for haptic devices. Thanks to this funding, the financial basis for this task was guaranteed. The structure of the topic made it clear that the book would be significantly improved by contributions from specialists in different fields. In 2008, the German version *Entwicklung Haptischer Geräte* and in 2009 the English version *Engineering Haptic Devices* were published by Springer.

In 2010, the idea of a second edition of the book was born. With Kern's move from university to an industrial employer, attention also shifted from mainly kinaesthetic to tactile devices. This made severe gaps in the first edition evident. In parallel, science made great strides in understanding the individual tactile modalities and in blurring the boundaries between different conceptual approaches to the same perception. This now provided an opportunity to take an engineering approach to more than just vibrotactile perception. However, it took until 2013 for work to begin on the second edition. In that year, Christian Hatzfeld completed his doctoral thesis on the perception of vibrotactile forces. Also inspired by Prof. Dr.-Ing. habil. Roland Werthschützky, he took the lead in editing this second edition. Like the first edition, this work was funded by the DFG (grant HA7164/1-1), which underlines the importance of an adapted design approach for haptic systems. In a fruitful collaboration between Springer and the series editors, the book was integrated into the *Springer Series on Touch and Haptic Systems*, as we felt that the design of task-specific haptic interfaces would be well complemented by the other works in this series.

To our regret, our dear friend and editor of the second edition, Dr. Christian Hatzfeld, passed away in 2018 after losing his battle with cancer, leaving behind his wife and child. The third edition you hold in your hands still contains countless memories and influences from his work, and we are proud and honoured to have been able to continue it.

In 2020, a new opportunity arose for this book when Kern returned to academia as a full professor at Hamburg University of Technology. Despite a detour into the automotive world of visible displays, he returned to his scientific roots and resumed his work on the design of haptic devices and actuators. This also prompted him to revise some of the content of this book with some distance, as he now not only sees more clearly how the global community has evolved and professionalised, but also notices which issues have remained. Dr. Alireza Abbasimoshaei, an experienced researcher who has made his mark in the field of rehabilitation robots, agreed to help with the editorial part of the work. Fortunately, we have also found a strong supporter of haptic research in Grewus GmbH, which focuses on the development of tactile system solutions, and with their help we have succeeded in making this edition of the book an open-access publication.

With the support of several former authors of the first and second editions, as well as some new authors who have taken on key roles in the structure of the book, we have been able to revise and update all sections to make the overall content more accessible and to better represent the current state of research. However, the biggest changes and strongest updates occurred in Chap. 12, with a sophisticated introduction to haptic and tactile rendering algorithms that takes into account the dynamic properties of haptic devices, and in Chap. 8, with finally a full introduction to serial and parallel kinematics and their specifics when it comes to force rendering, explaining why haptics is so different from general robotics. Major updates have also been made to the control chapter (Chap. 7), which now explains in-depth concepts of impedance control for coupled systems along with some real application examples. In addition, we took care to update each chapter and to remove more bugs than we introduced while revising.

We thank all the authors who contributed to this book, as well as all the colleagues, students, and researchers in the haptics community who provided fruitful discussions, examples, and permission to include their work. We would also like to thank all the researchers around the world who have developed, used, and tested mechatronic devices and found amazing applications for them. This book would not have been possible without these inspirations, and although we have tried to give a good overview, we are also sure that we have overlooked excellent examples that we would have liked to include if only we had known about them. Our special thanks go to our student assistants, Konika Narendra Khatri and Nis Willy Köpke, whose work helped us with the final editing. Last but not least, we would like to single out one of the authors of this book, Fady Youssef, who was of great help to the editors with numerous discussions on content and practical actions, especially in the very last phase, when we had to obtain open-access permissions for all illustrations adopted from and inspired by publications from the haptics community. Without the technical support of these people, such a work would probably not have reached this level of maturity.

We hope that this work will facilitate the work of students and engineers in the exciting and challenging development of haptic systems, and that it will serve as a useful resource for all developers, as the first and second editions have already done. In particular, we hope that the open-access approach of this edition will allow a wider community to critically discuss our work and perhaps gain some inspiration.

Of course, we would also like to express our condolences to Christian's family and hope that we prove worthy to continue his work.

Hamburg, Germany

Thorsten A. Kern
Alireza Abbasimoshaei

# **Preface**

The term "haptics", unlike the terms "optics" or "acoustics", is not so familiar to most people, at least not in the meaning used in the scientific community: The words "haptics" and "haptic" refer to anything involving the sense of touch. "Haptic" is everything and everything is "haptic" because it describes not only the pure mechanical interaction but also includes thermal- and pain perception (nociception). The sense of touch enables humans and other living beings to perceive the "boundaries of their physical being", i.e. to recognize where their own body begins and where it ends. While we perceive our wider environment through sight and hearing, the sense of touch covers our immediate surroundings: in the heat of a basketball game, a light touch on our back immediately alerts us to an attacking player we cannot see. We notice the intensity of the contact, the direction of the movement through a shear on our skin or a breeze moving our body hair—all without catching a glimpse of the opponent.

"Haptic systems" are divided into two classes. In engineering, there are three terms that are often used but have no clear meaning: System, Device and Component. Systems are—depending on the task of the designer—either a device or a component. A motor is a component of a car, but for the designer of the motor it is a device made of components (coils, magnets, encoders, ...).1 There are the time-invariant systems (the keys on my keyboard) that produce a more or less unchanging haptic effect whether pressed today or a year from now. Structures such as surfaces, e.g. the wooden surface of my table, also belong to this group. These haptically interesting surfaces have the properties of "tactile textures" and are represented by a variety of dimensions, rough or smooth and soft or hard surfaces are just some of them. In addition to these temporally unchanging devices, there are *active, reconfigurable systems* that change their haptic properties partially or completely depending on a pre-selection—e.g. from a menu or due to an interaction with real or virtual environments.

<sup>1</sup> It can be helpful when reading a technical text to replace each of the above terms with the word "thing". This suggestion is not entirely serious, but it surprisingly increases the comprehensibility of technical texts.

The focus of this book is on the technological design criteria for active reconfigurable systems that enable haptic coupling of user and object in a mainly mechanical understanding. Thermal and nociceptive perceptions are mentioned according to their importance, but not discussed in detail. This is also the case for passive haptic systems, although it must be emphasized that a careful understanding of passive haptic dimensions can be seen as key to the development of active haptic systems. Active haptic systems have been developed by research and industry in a wide variety of forms and are used for different purposes. They cover a wide range of applications, from low-cost interaction surfaces with tactile outputs, to mid-priced devices in the consumer goods industry mainly aimed at enhancing immersion in virtual worlds, to sophisticated general-purpose devices used in professional engineering or research applications. When confronted with this topic for the first time and seeing the variety of devices in a psychophysiological field that is not so commonplace, it is easy to get lost and fail to recognize the connections between designs that are so different at first sight. Therefore, we believe, on the one hand, in the need for a structured approach to the development of task-specific haptic systems and, on the other hand, in the need to know the different approaches to the components and structures of haptic systems. We would therefore like to offer guidance and a first point of orientation to avoid the most common pitfalls in understanding and to give some hints on the individual technical topics.

The fact that you have found this book shows that you are interested in haptics and its application in human-machine interaction. It also makes it very likely that you have already recognized some complexity in your design task. Perhaps you have already attempted to design a technical system that enables haptic human-machine interaction. Perhaps you are currently planning a project as part of your studies or a commercial product that will improve a particular manual control or introduce a new control concept. Maybe you are an engineer facing the task of using haptics in medical technology and training to improve patient safety, and trying to apply current advances to other interventions. Or maybe you are in component development and just need a quick reference for using actuators and exciters in your end-user application. If you belong to one of these groups, then we definitely want to help you.

Despite or precisely because of this great diversity of projects in industry and research dealing with haptic systems, the common understanding of "haptics" and the terms directly related to it, such as "kinaesthetic" and "tactile", is by no means as clear and uncontroversial as it should be. With this book, we would like to offer you some assistance so that you can act more confidently in the design of haptic devices. We see this book both as a starting point for engineers and students who are new to haptics and the design of haptic interfaces, and as a reference for more experienced professionals. To make the book more usable and practical in this sense, we have added recommendations for further reading to most chapters.

The book begins by outlining the various areas that can benefit from the integration of haptics, including communication, interaction with virtual environments, and the most sophisticated applications of telepresence and teleoperation. Haptics as an interaction modality is discussed as a basis for the design of such systems. This includes various concepts of haptic perception and haptic interaction, as well as the main results from psychophysical studies that can and must be applied to the design of a task-specific haptic system. Please note that this book has been written by and is aimed at engineers from different disciplines. This means that psychophysical content in particular is sometimes simplified and abridged to give engineers working on a haptic device a basic insight into these topics. Again, you can find references if you want to dive deeper.

Next, the role of the user as a (mechanical) part of the haptic system is discussed in detail, as understanding the user as a very dynamic component of your technical device has a big impact on system properties such as stability and perceived haptic quality.

Part I of the book ends with an extension of the generally known development models for mechatronic systems to the specific design of haptic systems. This chapter places a special emphasis on the integration of perceptual properties and ergonomic aspects in this process. The authors believe that the systematic consideration of perceptual properties and features of the sensory apparatus based on the intended interaction can reduce critical requirements for haptic systems, which both reduces the effort and cost of development and leads to systems with higher perceived quality.

In Part II of the book, an overview of technological solutions is given, covering the design of actuators, kinematics and complete systems, including software and rendering solutions and the interfaces to simulation and virtual reality systems. This is done from two points of view. Firstly, the reader should be able to find the most important and most widely used solutions for recurring problems such as actuator or sensor technology, including the necessary technical basis for their own designs and developments. Secondly, we wanted to give an overview of the large number of different principles used in haptic systems that might be a good solution for a new task-specific haptic system—or a remarkable experience of which solution not to try.

The authors of this book consider their task accomplished once this book helps to inspire more design engineers to develop haptic devices and thus accelerate the creation of more and better haptic systems on the market.

Hamburg, Germany
February 2022

Thorsten A. Kern
Christian Hatzfeld
Alireza Abbasimoshaei

# **Contents**

## **Part I Basics**




# **Editors and Contributors**

## **About the Editors**

**Thorsten A. Kern** received his Dipl.-Ing. and Dr.-Ing. degrees from Darmstadt University of Technology (TUDA), Germany, in the fields of actuator and sensor development for medical human-machine interfaces (HMIs) in applications like minimally invasive surgery and catheterizations. He is currently the director of the Institute for Mechatronics in Mechanics at Hamburg University of Technology, Germany. He previously worked in the automotive industry at Continental as an R&D manager for interior components, leading a team of 300 engineers worldwide. He joined Continental in 2008, covering various functions with an increasing range of responsibility in actuator development, motor development and active haptic device development before shifting toward R&D management and product management for head-up displays. Between 2006 and 2008, he worked in parallel in a startup focusing on medical interventions and was the main editor of the first edition of "Engineering Haptic Devices". He joined Hamburg University of Technology in January 2019. His interests are specifically focused on all types of electromagnetic sensors and actuators and their system integration into larger motor or sensor systems in high-dynamic applications. *t.kern@hapticdevices.eu*.

**Christian Hatzfeld** † joined the Institute of Electromechanical Design of Technische Universität Darmstadt as a research and teaching assistant in 2008, working in the Measurement and Sensor Technology group of Prof. Dr.-Ing. habil. Roland Werthschützky. He received his doctoral degree in 2013 for his work on the perception of vibrotactile forces. He then led the "Haptic Systems" group until his death in 2018. His research interests included development and design methods for task-specific haptic systems and the utilization of human perception properties to alleviate the technical design. He was the main editor of the second edition of *Engineering Haptic Devices* and contributed significantly to a large number of chapters, specifically focusing on psychophysical topics.

**Alireza Abbasimoshaei** is currently a research assistant at the Institute for Mechatronics in Mechanics at Hamburg University of Technology in Germany. Before joining iMEK, he designed, built and controlled four robots, the last one at the Technical University of Braunschweig in Germany. He filed two patents for rehabilitation devices and sold a finger rehabilitation robot he developed to a hospital partner. He also developed a new control system for rehabilitation robots. He is an expert in mechatronic system design with special emphasis on mechanical and control system design. *a.abbasimoshaei@hapticdevices.eu*.

## **Contributors**

**Abdulali Arsen** received his Ph.D. degree from the Department of Computer Science and Engineering at Kyung Hee University, Republic of Korea. His research focus lies in data-driven and physics-based modeling and rendering of non-linear object deformation and haptic textures in virtual and augmented environments. He is also interested in the design and control of soft robots, human-computer interaction and human-in-the-loop systems. Currently, he is a Research Associate at the Bioinspired Robotics Laboratory, University of Cambridge. *a.abdulali@hapticdevices.eu*.

**Gölz Jacqueline** is a Professor of Sensors, Actuators and Metrology at Ulm University of Applied Sciences. Earlier, she was with Roche Diabetes Care GmbH as a test design engineer for insulin delivery systems and was a lecturer at the Computer Science Department, Technische Universität Darmstadt. There she received her Ph.D. in Electrical Engineering in 2012, having developed miniaturized piezoresistive strain-sensing elements, among others, for robotic surgical systems in the field of minimally invasive surgery. *j.goelz@hapticdevices.eu*.

**Reisinger Jörg** is an AE engineer and enabling-technology owner for haptics and haptic technologies, employed at Mercedes-Benz AG. Besides haptic specification and internal consulting, he is responsible for new haptic technologies and concepts, transferring and guiding them into serial production. Since 2008, he has introduced a new haptic quality level as well as new active haptic systems such as the Mercedes-Benz touchpads and haptic touchscreens. His doctoral thesis dealt with the objective parameters of the haptically perceived quality of control elements, for which he received his doctoral degree in mechanical engineering at TU Munich in 2009 in cooperation with Audi AG and Heilbronn University. *j.reisinger@hapticdevices.eu*.

**Fady Youssef** is currently working at the Institute for Mechatronics in Mechanics at the Hamburg University of Technology as a teaching and research assistant. His research is focused on remote haptics and robotics. He received his Master's degree in Mechatronics from the Hamburg University of Technology in 2020. He also received his Bachelor's degree from the German University in Cairo, Egypt. His special interests are in the area of haptic-enhanced telemanipulation and robotic design. *f.youssef@hapticdevices.eu*.

## **Further Contributions**

**Seokhee Jeon** received his B.S. and Ph.D. degrees in computer science and engineering from the Pohang University of Science and Technology (POSTECH), in 2003 and 2010, respectively. He was a Postdoctoral Research Associate with the Computer Vision Laboratory, ETH Zurich. He joined the Department of Computer Engineering, Kyung Hee University, as an Assistant Professor in 2012, and is now an Associate Professor. Since 2021, he has served as project director of the immersive media consortium at Kyung Hee University. His research interests include data-driven haptic modeling and rendering, soft haptic actuators for medical applications and realistic multimodal feedback in virtual and augmented reality. *s.jeon@hapticdevices.eu*.

**Sebastian Kassner** received his doctoral degree (Dr.-Ing.) from Technische Universität Darmstadt in 2013, where his research focused on haptic human-machine interfaces for robotic surgical systems in the field of minimally invasive surgery. His special interest is the application of electromechanical network theory to the design process of haptic devices. He served as an expert in ISO's committee "Tactile and Haptic Interactions" (TC159/SC4/WG9). Since 2012, Sebastian has held different positions in industry. He now works at Knorr-Bremse, where he is a Specialist for Digital Strategy in the field of rail systems and transportation technologies. *s.kassner@hapticdevices.eu*.

**Nataliya Koev** received her doctoral degree (Dr.-Ing.) from Technische Universität Darmstadt in 2021. She joined the Department of Measurement and Sensor Technology at the University of Darmstadt in 2013 as a teaching and research assistant. Her research focused on sensor integration in medical guide wires for cardiac catheterization. Her special interest is the development of micro force sensors for medical applications. In 2020, she joined Wilhelm Büchner Hochschule as a research assistant. *n.koev@hapticdevices.eu*.

**Thorsten Meiss** received his doctoral degree (Dr.-Ing.) from Technische Universität Darmstadt in 2012. He joined the university's Institute of Electromechanical Design in 2004 as a teaching and research assistant. His research is focused on micro-electro-mechanical sensors, microfabrication and their application in medical and industrial systems. He founded the company EvoSense in 2013, supported and led research projects for medical microsensors, and developed and commercialized products from research to market. In 2018, he joined Mecatronix GmbH and Applied Materials Inc. and is a manager in the field of new display manufacturing lines. *t.meiss@hapticdevices.eu*.

**Dongkill Yu** received his Master's degree (M.Sc.) from Korea University, Seoul, Korea, in 2001, having entered Korea University in 1999 as a mechanical engineering bachelor's student. Since 2015, he has focused on haptic function development for central information display products for vehicles. He is currently a professional engineer in the Vehicle Component Solutions company of LG Electronics. For over 10 years, he developed mechanical systems for various mass-produced optical disk drives. *d.yu@hapticdevices.eu*.

**Wenliang Zhou** is an engineer and project manager in user-interface software development, employed at Mercedes-Benz AG. From 2011 to 2015, he developed and introduced a novel measurement technology and framework for the characterization of haptic displays. Using this new haptic measurement system, the new Mercedes-Benz Touchpad and Touchscreen were introduced to the market for the first time with technically ensured high haptic feedback quality. *w.zhou@hapticdevices.eu*.

## **Authors of Editions 1 and 2**

Former contributions to this book were made by

Dr.-Ing. Henry Haus
Dr.-Ing. Markus Jungmann
Dr.-Ing. Peter Lotz
Dr.-Ing. Marc Matysek
Dipl.-Ing. Oliver Meckel
Dr.-Ing. Carsten Neupert
Dr.-Ing. Thomas Opiz
Dr.-Ing. Alexander Rettig
Dr.-Ing. Tim Rossner
Dr.-Ing. Stephanie Sindlinger
Prof. Dr. rer. nat. Gerhard Weber
Dr.-Ing. Limin Zeng
Ingo Zoller, Ph.D.

# **Symbols**

This list includes the most relevant symbols used throughout the book.







# **Indices and Distinctions**

The usage of the most relevant indices and distinctions throughout the book is shown using a placeholder character.


**Part I Basics**

# **Chapter 1 Motivation and Application of Haptic Systems**

**Thorsten A. Kern and Christian Hatzfeld**

**Abstract** This chapter serves as an introduction and motivation for the field of haptic research. It provides an overview of the technical domains covered, but also introduces the philosophical and social aspects of human haptic sense. Various definitions of haptics as a perceptual and interaction modality are discussed to serve as a common ground for the rest of the book. Typical application areas such as telepresence, training, interaction with virtual environments and communication are introduced and typical haptic systems from these areas are discussed.

## **1.1 Research Disciplines**

*Haptics*, in a non-scientific understanding, refers to the sense of touch and everything connected with it. If you think about it more carefully, you will realise that touch always requires interaction. The perception of touch cannot take place without contact, and consequently without *something* touching or being touched. Following this basic concept, it is obvious that haptics requires interaction. This statement sounds simple, but in terms of research and technical tasks it adds complexity to the subject. This is because, in contrast to vision and sound, haptics always has an impact on the touched object itself due to the interaction, and the classification of interactions varies depending on the physical properties of body and object. If one is also aware that the sense of touch is relevant to every mechanical part of the body that interacts with the environment, and in particular to

T. A. Kern (B)

Christian Hatzfeld deceased before the publication of this book

Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany e-mail: t.a.kern@tuhh.de

C. Hatzfeld Technische Universität Darmstadt, Darmstadt, Germany

**Fig. 1.1** Concept-Map on Haptic Disciplines, own visualization

every area covered with skin, each of them having different sensory capabilities, the challenges in this field should become clear.

Consequently, with haptic research still growing, the field is restructured frequently. A snapshot of the core disciplines is given in Fig. 1.1. Whereas 20 years ago there were maybe eight or ten haptic research areas, the diversification of research changed drastically in the last decade due to an increased understanding of interdependencies, but also due to more specialization and specific needs of industry. One main direction can be found in the group of perception-based research covering psychophysical and neuroscience-related topics. This field has a strong influence on all the application-based research areas shown in Fig. 1.1, which themselves need several components and subsystems and are used in different applications.

The topic of this book is engineering haptic devices. With regard to Fig. 1.1, this places us in the bluish device and yellow application areas. Of course, the book does not ignore the interlinked areas, and it gives those details required to understand the influences from those interfaces.

## **1.2 Some Broad Scope on Haptics**

But what is *haptics* in the first place? A common and general definition is given as

**Definition** *Haptics* Haptics describes the sense of touch and movement and the (mechanical) interactions involving these.

but this will probably not suffice for the purpose of this book. This chapter will give some more detailed insight into the definition of haptics (Sect. 1.4) and will introduce four general classes of applications for haptic systems (Sect. 1.5) as the motivation for the design of haptic systems and, ultimately, for this book. Before that, we will give a short summary of the philosophical and social aspects of this human sense (Sect. 1.3). These topics will not be addressed any further in this book, but should be kept in mind by every engineer working on haptics.

## **1.3 Philosophical and Social Aspects**

An engineer tends to describe haptics primarily in terms of forces, elongations, frequencies, mechanical tensions and shear forces. This of course makes sense and is important for the technical design process. However, haptics starts before that. Haptic perception ranges from minor interactions in everyday life, e.g. drinking from a glass or writing this text, to a means of social communication, e.g. shaking hands or giving someone a pat on the shoulder, and to very personal and private interpersonal experiences. Touch has a conscious, but also a very relevant unconscious component, as demonstrated e.g. by a study by Crusco et al. [1] showing that tips to a waitress were on average 10% higher when the customer was touched lightly. This effect is known as the *Midas Touch* and is surprisingly independent of gender and age on both sides. This section looks at the spectrum and influence of haptics on humans beyond technological descriptions. It also serves as a reminder for the development engineer to deal responsibly and consciously with the possibilities of outwitting the haptic sense.

## *1.3.1 Haptics as a Physical Being's Boundary*

Haptics is derived from the Greek term "haptios" and describes "something which can be touched". In fact, the consciousness about and understanding of the haptic sense has changed many times in the history of humanity. Aristotle puts the sense of touch in the last place when naming the five senses:


Nevertheless, as early as 350 B.C., he attests a high importance to this sense concerning its indispensability [2]:

*Some classes of animals have all the senses, some only certain of them, others only one, the most indispensable, touch.*

The social estimation of the sense of touch has experienced all imaginable phases. Frequently it was afflicted with the blemish of squalor, as lust is transmitted by it [3]:

*Sight differs from touch by its virginity, such as hearing differs from smell and taste: and in the same way their lust-sensation differs*

It was also called the sense of excess [4]. In a general subdivision between lower and higher senses, touch was almost constantly ranged within the lower class. In western civilization the church once stigmatized this sense as forbidden due to the pleasure which can be gained by it. However, in the 18th century the public opinion changed and Kant is cited with the following statement [5]:

*This sense is the only one with an immediate exterior perception; due to this it is the most important and the most teaching one, but also the roughest. Without this sensing organ we would not be able to grasp our physical shape, to whose perception the other two first-class senses (sight and hearing) have to be referred in order to generate some knowledge from experience.*

Kant thus emphasizes the central function of the sense of touch. It is capable of teaching the spatial perception of our environment. Only touch enables us to feel and classify impressions collected with the help of other senses, put them into context and understand spatial concepts. Although stereoscopic vision and hearing develop early, the first-time interpretation of what we see and hear requires the connection between both independently perceived impressions and information about distances between objects. This can only be provided by a sense which can bridge the space between a being and an object. Such a sense is the sense of touch. The skin, being a part of this sense, covers a human's complete surface and defines his or her physical boundary, the physical being.

## *1.3.2 Formation of the Sense of Touch*

As shown in the prior section, the sense of touch has numerous functions. Knowledge of these functions enables the engineer to formulate demands on the technical system. It is helpful to consider the whole range of purposes the haptic sense serves. However, at this point we do not yet choose an approach of measuring its characteristics, but observe the properties of objects discriminated by it.

The sense of touch is not only specialized in the perception of the physical boundaries of the body, as said before, but also in the analysis of the immediate surroundings, including the contained objects and their properties. Human beings and their predecessors had to be able to discriminate e.g. the structure of fruits and leaves by touch in order to identify their ripeness or whether they were edible, like e.g. a furry berry among smooth ones. The haptic sense enables us to identify a potentially harming structure, like e.g. a spiny seed, and to be careful when touching it in order to obtain its content despite its dangerous needles.

For this reason, the sense of touch has been optimized for the perception and discrimination of surface properties like e.g. roughness. Surface properties may range from smooth ceramic-like or lacquered surfaces with structural widths in the area of some µm, to somewhat structured surfaces like coated tables, and rough surfaces like coarsely woven cord textiles with mesh apertures in the range of several millimeters. Humans have developed a very typical way of interacting with these surfaces, enabling them to draw conclusions based on the underlying perception mechanism. A human being moves his or her finger along the surface (Fig. 1.2), allowing shear forces to be coupled into the skin. The level of the shear forces depends on the quality of the frictional coupling between the object surface and the skin. It is determined by the tangential elasticity of the skin, the normal pre-load *F*<sub>norm</sub> resulting from the touch, the velocity *v*<sub>explr</sub> of the exploratory movement, and the coupling factor μ.

Everyone who has ever designed a technical frictional coupling mechanism knows that, without additional structures or adhesive materials, viscous friction between two surfaces can hardly reach a coefficient of μ<sub>*r*</sub> ≥ 0.1. Nevertheless, in order to couple shear forces more efficiently into the skin, nature has "invented" a special structure at the most important body part for touching and exploration: the fingerprint. The epidermal ridges couple shear forces efficiently into the skin, as the ridges transmit a bending moment into its upper layers. Additionally, these ridges allow form closure with structures of similar width, which means nothing other than canting between the handled object and the skin of the hand. At first glance this is a surprising function of this structure. On a second look, it simply reminds us that nature does not introduce any structure without a deeper purpose.
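The role of the coupling factor can be illustrated with a minimal Coulomb-friction sketch. This is a deliberate simplification, and the pre-load and coefficient values below are illustrative assumptions, not measured data: the transmittable shear force grows linearly with both the normal pre-load and μ, which is why a ridge-enhanced coupling transmits markedly more shear into the upper skin layers than plain viscous contact.

```python
def max_shear_force(mu: float, f_norm: float) -> float:
    """Coulomb estimate of the maximum transmittable shear force:
    F_shear = mu * F_norm (a deliberately simplified model)."""
    return mu * f_norm

# Illustrative comparison for an assumed fingertip pre-load of 1 N:
f_norm = 1.0                              # normal pre-load in N (assumed)
plain = max_shear_force(0.1, f_norm)      # viscous coupling, mu_r ~ 0.1
ridged = max_shear_force(1.0, f_norm)     # ridge-enhanced coupling (assumed value)
print(plain, ridged)
```

Under these assumed values, the ridged contact transmits ten times the shear force of the plain one for the same pre-load, matching the qualitative argument above.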

Two practical facts result from this knowledge: First, the understanding of the coupling of shear forces into the skin has come into the focus of current research [6] and has resulted in an improvement of the design process of tactile devices. Second, this knowledge can be applied to improve the measuring accuracy of commercial force sensors by building ridge-like structures [7].

Another aspect of the haptic sense, and probably an evolutionary advantage, is the ability to use tools. Certain mechanoreceptors in the skin (see Sect. 2.1 for more details) detect high-frequency vibrations that occur when handling a (stiff) tool. Detection of these high-frequency vibrations makes it possible to identify different surface properties and to detect contact situations and collisions [8].

## *1.3.3 Touchable Art and Haptic Aesthetics*

Especially in the 20th century, art deals with the sense of touch and plays with its meaning. The furry cup (Fig. 1.3) drastically makes one aware of the significance of haptic texture for the perception of surfaces and surface structures: whereas the general form of the cup remains visible and recognizable, the originally plain ceramic surface is covered by fur.

In 1968, the "Pad- and Touch-Cinema" (Fig. 1.4) allowed visitors to touch Valie Export's naked skin for 12 s through a box covered by a curtain. According to the artist, this was the only valid approach to experiencing sexuality without the aspect of voyeurism [9]. These are just a few examples of how art and artists played with the various aspects of haptic perception.

As with virtual worlds and surroundings, haptic interaction also has characteristics of artistry. In 2004, Ishii from the MIT Media Laboratory and Iwata from the University of Tsukuba demonstrated startling exhibits of "tangible user interfaces" based on bottles opened to "release" music.

Meanwhile, human-triggered touch has been extended to devices touching back, with Marc Teyssier very actively exploring the limits of what is socially acceptable in the unexplored field between art and robotics (Fig. 1.5).

Beyond the artistic aspect of such installations, recent research evaluates new interaction possibilities for -→ Human-Computer-Interaction (HCI)<sup>1</sup> based on such concepts:

**Fig. 1.3** Meret Oppenheim: furry cup, 1936 [9, 10], DIGITAL IMAGE © 2022, The Museum of Modern Art/Scala, Florence

<sup>1</sup> Please note that entries in the glossary and abbreviations are denoted by a -→ throughout the book.

**Fig. 1.4** Valie Export, TAPP und TASTKINO, 1968, b/w photography © Valie Export, Bildrecht Wien, 2022, photo © Werner Schulz, courtesy Valie Export, http://80.64.129.152:8080/share.cgi?ssid=0vdjJr7

**Fig. 1.5** *MobiLimb* project with a device touching back [11], © 2022 Marc Teyssier, used with permission


**Fig. 1.6** Playtronica products *playtron* and *Touch ME* with capacitive measurement and MIDI sound generation based on touch intensity, © 2022 Daria Malysheva, used with permission

In technical applications, the personal feeling of haptic aesthetics is a distinguishing factor. Car manufacturers work on objective quality schemes for the perceived quality of interfaces [14, 15] with the target of creating a touchable brand identity; there are whole companies claiming to "*make percepts measurable*" [16]; designers provide toolkits to evaluate characteristics of knobs and switches [17, 18]; and meanwhile even design packages are proposed and commercialized to evaluate typical vibrational feedback [19]. However, the underlying mechanisms of the assessment of haptic aesthetics are not fully understood. While the general approach of all studies is basically the same, using multidimensional scaling and regression algorithms to combine subjective assessments and objective measurements [20], details on perceptual dimensions are subject to ongoing research [21] and sophisticated data models [22].

Carbon and Jakesch published a comprehensive approach based on object properties and the assessment of familiarities [23]. This topic still remains a fascinating field of research for interdisciplinary teams from engineering and psychology and is applied to regular product design [24].

## **1.4 Technical Definitions of Haptics**

To use the haptic sense in a technical manner, some agreements about terms and concepts have to be made. This section deals with some general definitions and classifications of haptic interactions and haptic perception and is the basis for the following Chap. 2, which will dig deeper into topics of perception and interaction.

## *1.4.1 Definitions of Haptic Interactions*

The haptic system empowers humans to interact with real or virtual environments by means of mechanical, sensory, motor and cognitive abilities [25]. An interaction consists of one or more operations that can be generally classified into *motion control* and *perception* [26]. The operations in these classes are called *primitives*, since they cannot be divided and classified further.

The perception class includes the primitives *detection*, *discrimination*, *identification* and *scaling* of haptic information [27]. The analysis of these primitives is conducted by the scientific discipline called -→ psychophysics. To further describe the primitives of the perception class, the term -→ stimulus has to be defined:

**Definition** *Stimulus (pl. stimuli)* Excitation or signal that is used in a psychophysical procedure. It is normally denoted with the symbol Φ. The term is also used in other contexts, when a (haptic) signal without further specification is presented to a user.

Typical stimuli in haptics are forces, vibrations, stiffnesses, or objects with specific properties. With this definition, we can have a closer look at the perception primitives, since each single primitive can only be applied to certain haptic stimuli, as explained below.


The motor control class can be divided into different operations as well. In this class, the primitives *travel*, *selection* and *modification* exist [29]. They can be explained better if they are linked to general interaction tasks [29, 30]:


When using motor control primitives, not only the operation itself but also the aim of the operation has to be considered for an accurate description of an interaction. If, for example, a computer is operated with a mouse as an input device and an icon on the screen is selected, this interaction could be described as a travel primitive or as a selection primitive. A closer look will probably reveal that the travel primitive is used to reach an object on the screen, which is then selected in a following step. If this interaction is to be executed with a new kind of haptic device, the travel primitive is probably considered subordinate to the selection primitive.

Based on these two classes of interaction primitives, Samur introduced a -→ taxonomy of haptic interaction [31]. It is given in Fig. 1.7 and allows the classification of haptic interactions. Such a classification is useful for the design of new haptic systems: requirements can be derived more easily (see Chap. 5), analogies can be identified and used in the design of system components, and the evaluation is simplified (see Chap. 13).

Next to the analysis of haptic interaction based on interaction primitives, some more psychophysically motivated approaches exist:


The application of the taxonomy of haptic interactions as given in Fig. 1.7 to the development of task-specific haptic systems seems to be much more straightforward than the application of the approaches by Lederman and Klatzky and by Hollins stated in the above listing. Therefore, these are not pursued any further in this book.

**Fig. 1.7** Taxonomy of haptic interaction. Figure based on [27, 31]

## *1.4.2 Taxonomy of Haptic Perception*

Until now, one of the main taxonomies in haptic literature has not been addressed: the classification based on -→ kinaesthetic and -→ tactile perception properties. It is physiologically based and defines perception solely by the location of the sensory receptors. It is defined in the standard ISO 9241-910 [30] and given in Fig. 1.8.

With this definition, tactile perception is based on all -→ cutaneous receptors. These include not only mechanical receptors, but also receptors for temperature, chemicals (i.e. taste) and pain. Compared to the perception of temperature and pain, mechanical interaction is on the one hand much more feasible for task-specific haptic systems in terms of usability and generality; on the other hand, it is technically much more demanding because of the complexity of the mechanoreceptors and their inherent dynamics. Therefore, this book will lay its focus on mechanical perception and interaction.

For processes leading to the perception of pain, the authors point to the special literature [34] dealing with that topic, since an application of pain stimuli in a haptic system for everyday use seems unlikely. The perception of temperature and possible applications are given for example in [35, 36]. Whereas some technical applications of thermal displays are known [37–39], these seem inferior to mechanical interaction in terms of information transfer and dynamics. Therefore, temperature is primarily considered as an influencing factor on the mechanical perception capabilities and is discussed in more detail in Sect. 2.1.2.

With this confinement to mechanical stimuli, we can define kinaesthetic and tactile perception as follows:

**Definition** *kinaesthetic* Kinaesthetic perception describes the perception of the operational state of the human locomotor system, particularly joint positions, limb alignment, body orientation and muscle tension. For kinaesthetic perception, there are dedicated sensory receptors in muscles, tendons and joints, as detailed in Sect. 2.1. Regarding the taxonomy of haptic interactions, kinaesthetic sensing is primarily involved in the motion control primitives, since signals from kinaesthetic receptors are needed in the biological control loop for the positioning of limbs.

**Definition** *tactile* Tactile perception describes the perception based on sensory receptors located in the human skin. Compared to kinaesthetic receptors, they exhibit much larger dynamics and are primarily involved in the perception primitives of haptic interaction.

While the terms *tactile* and *kinaesthetic* were originally strictly defined by the location and the functions of the sensory receptors, they have recently been used in a more general way. While the root of the word *kinesthesia* is linked to the description of movement, the term *kinaesthetic* is nowadays also used to describe static conditions [40]. Sometimes, kinaesthetic is only used for the perception of properties of limbs, while the term *proprioception* is used for properties regarding the whole body [41]. This differentiation is neglected in the remainder of this book because of its minor technical importance. The term *tactile* often describes any kind of sensor or actuator with a spatial resolution, regardless of whether it is used in an application addressing tactile perception as defined above. While these examples are only of minor importance for the design of haptic systems, the following usage of the terms is an important adaptation of the definitions: primarily based on the dynamic properties of tactile and kinaesthetic perception, the definition of the terms is nowadays extended to haptic interactions in general. The reader may note that the following description is not accurate in terms of the temporal sequence of the cited works, but focuses on the works with relevant contributions to the present use of the terms *kinaesthetic* and *tactile*.

Based on the works of Shimoga, the dynamics of kinaesthetic perception are set equal to the motion capabilities of the locomotor system [42]. The dynamics of tactile perception are bounded at about 1...2 kHz for practical reasons. Higher frequencies can be perceived [43, 44], but it is questioned whether they make a significant contribution to perception [45, p. 3]. As further explained in Sect. 2.4.3, this limitation is technically reasonable and necessary for the design of the electromechanical parts of haptic systems. Figure 1.9 shows this dynamic consideration of haptic interaction based on characteristic values from [44, 46, 47].

To extend this dynamic model of perception to a more general definition of interactions, Daniel and McAree propose a bidirectional, asymmetric model with a low-frequency (<30 Hz) channel for the exchange of energy and a high-frequency channel for the exchange of information [48], with general implications for the design of haptic interfaces. The mapping based on dynamic properties is meaningful to a greater extent, since users can be considered as mechanically passive systems for frequencies above the dynamics of the active movement capabilities of the locomotion system [49]. This will be explained in more detail in Chap. 3. Altogether, these aspects (dynamics of perception and movement capabilities, exchange paths of energy and information, and the modelling of the user as an active and passive load to a system) lead to the nowadays widely accepted partition of haptic interaction into low-frequency kinaesthetic interaction and high-frequency tactile perception.
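The partition into a low-frequency energy channel and a high-frequency information channel can be sketched with a simple complementary split. A first-order low-pass is used here as a rough stand-in for a proper crossover filter, and the 1 kHz sample rate and 30 Hz cutoff are assumed values for illustration:

```python
import math

def split_channels(signal, fs=1000.0, fc=30.0):
    """Split a sampled force signal into a low-frequency (kinaesthetic)
    and a high-frequency (tactile) component using a first-order
    low-pass; by construction the two channels sum to the original."""
    rc = 1.0 / (2.0 * math.pi * fc)   # filter time constant for cutoff fc
    dt = 1.0 / fs                     # sample period
    alpha = dt / (rc + dt)            # smoothing factor of the IIR low-pass
    low, y = [], signal[0]
    for x in signal:
        y += alpha * (x - y)          # one low-pass update step
        low.append(y)
    high = [x - l for x, l in zip(signal, low)]
    return low, high

# A constant (DC) force ends up entirely in the low (kinaesthetic) channel.
low, high = split_channels([1.0] * 500)
print(round(low[-1], 3), round(high[-1], 3))  # → 1.0 0.0
```

Because the high channel is defined as the residual, nothing is lost in the split; a real haptic rendering pipeline would of course use a sharper filter with a well-defined crossover.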

Both the taxonomy of haptic interaction as seen in Fig. 1.7 and the taxonomy of haptic perception as seen in Fig. 1.8 and extended in Fig. 1.9 are relevant sources for standard vocabulary in haptic system design. This vocabulary is needed in the design of haptic systems, since it simplifies and standardizes descriptions of haptic interactions, which are necessary to describe the intended functions of a task-specific haptic system and are treated in more detail in Sect. 5.2. Further definitions and concepts concerning haptic interaction and perception are given in Chap. 2. In the next part of this chapter, possible applications for haptic systems that will become part of the human haptic interaction with systems and environments are presented.

**Fig. 1.9** Kinaesthetic and tactile haptic interaction. Figure is based on data from [44, 46, 47]

## **1.5 Application Areas of Haptic Systems**

Haptic systems can be found in a multitude of applications. In this section, four general application areas are identified, and the benefits and technical challenges of haptic systems in these areas are given. In Sect. 2.3, these application areas are combined with a general model of human-system-environment interaction, leading to an interaction-based definition of basic system structures.

## *1.5.1 Telepresence, Teleaction and Assistive Systems*

Did you ever think about touching a lion in a zoo's cage?

With a -→ telepresence and teleaction (TPTA)-system you could do just that without exposing yourself to risks, since such systems provide the possibility to interact mechanically with remote environments (we neglect the case of the lion feeling disturbed by the fondling...).

In a strict definition of TPTA-systems, there is no direct mechanical coupling between operator and manipulated environment, but only a coupling via the TPTA-system. This makes the transmission of haptic signals possible in the first place, since the mechanical interaction is converted into other domains (mainly electrical) and can be transmitted more easily. TPTA-systems are often equipped with additional multimodal features, mainly a one-directional visual channel displaying the environment to the operator.

Examples include systems for underwater assembly, when visual cues are useless because of dispersed particles in the water [50], scaled support of micro- and nanopositioning [51, 52], and surgical applications [53, 54]. The use of TPTA-systems shortens task completion time and minimizes errors and handling forces compared to systems without haptic feedback [55]. In surgical applications, new combinations of hitherto incompatible techniques become possible, for example palpation in minimally invasive surgery. Studies also show an increase in safety for patients [56]. In recent years, especially the strong increase in bandwidth in networked applications has been driving the imagination on what could be done; Antonakoglou et al. [57] give a good overview in the context of the availability of 5G. But beyond aerial or space applications, the input device stays in focus for an efficient operation [58].

Most known TPTA-systems are used for research applications. Figure 1.10 shows an approach by *Quanser*, supplying a haptic interface and a robot manipulator arm. Based on this combination, versatile bilateral teleoperation scenarios can be designed, as for example neuroArm, a teleoperation system for neurological interventions [59]. Example interventions include the removal of brain tumors, which requires high position accuracy and real-time integration of -→ Magnetic Resonance Imaging (MRI) images.

The development of TPTA-systems is among the most technically challenging tasks. This is caused by the unknown properties of the environment, which influence the system stability, by the high accuracy of sensors and actuators required to present artifact-free haptic impressions, and by the data transmission over long distances with the additional aspects of packeted transmission, (packet) losses and latency.

A special type of TPTA-systems are the so-called -→ comanipulators, which are mainly used in medical applications [53]. Besides the mechanical interaction via the TPTA-system, additional environment manipulation (and feedback) can be exerted through parts of the system (a detailed definition based on the description of the interaction can be found in Sect. 2.3). Examples of such comanipulators are INKOMAN and HapCath, developed at the *Institute for Electromechanical Design*.

**Fig. 1.10** Versatile teleoperation by *Quanser*: HD<sup>2</sup> haptic interface with 7 DoF of haptic feedback and *Denso* Open Architecture robot with 6 DoF. Image courtesy of *Quanser*, Markham, Ontario, CA, used with permission

The HapCath-system, which adds haptic feedback to cardiovascular interventions, is presented in detail as an example in Sect. 14.2. Figure 1.11 displays the INKOMAN instrument, which is the result of the joint research project SOMIT-FUSION funded by the German Ministry of Education and Research. It is an extension of a laparoscopic instrument with a parallel kinematic structure [60] that provides additional -→ degrees of freedom (DOF) for a universal tool platform [61]. This allows minimally invasive interventions at previously unreachable regions of the liver. By integrating a multi-component force sensor in the tool platform [62], interaction forces between instrument and liver can be displayed to the user [63]. This enables techniques like palpation to identify vessels or cancerous tissue. As the device has the general form of a laparoscopic instrument, additional interaction forces can be exerted by the surgeon by moving the complete instrument; it is therefore classified as a comanipulation system.

TPTA-systems are mainly the focus of research activities, probably since there are only small markets with a high potential for this kind of system. An exception are medical applications, where non-directly coupled instruments promise higher safety and more efficient usage, for example by avoiding collisions between different instruments or lowering contact and grip forces [56, 64]. Automated procedures like knot tying can also be accelerated and conducted more reliably [65]. However, the distinction between a haptic TPTA-system and a robotic system for medical use is quite a thin line: the aforementioned functions do not require haptic feedback. This explains the large number of existing medical robotic systems in research and industry [66, 67], dominated by the well-known Da Vinci by *Intuitive Surgical Operations Inc.* This system was developed for urological and gynecological interventions and incorporates a handling console with a three-dimensional view of the operation area and a considerable number of instruments that are directed by the surgeon on the console and actuated with cable drives [68]. There is no haptic feedback for this

**Fig. 1.11** INKOMAN—intracorporal manipulator for minimally invasive abdominal interventions with increased flexibility. The figure shows the handheld instrument with a haptic display based on a delta kinematic structure. The parallel kinematic structure used to move the tool platform is driven by ultrasonic traveling wave motors. Figure adapted from [63]

**Fig. 1.12** Da Vinci SP surgical system for single-port access, © 2022 *Intuitive Surgical Operations, Inc.*, used with permission

system preinstalled, although there are promising extensions available, as discussed in Sect. 2.4.4. Just recently, the system has been extended to single-port entry, which further reduces the lesions caused by the intervention and allows a quick exchange of the tools used during the procedure (Fig. 1.12).

For consumer applications, *Holland Haptics* sold a product called Frebble, intended to convey the feeling of holding someone's hand over the internet. This was an interesting hardware concept as well as a low-cost teleoperation device.

Practical magnetic resonance imaging studies of neural hand control have also made significant progress, but harsh MRI environments are a challenge for devices capable of delivering a large variety of stimuli. One such work presented an fMRI-compatible haptic interface to investigate the neural mechanisms of precision grasp control. The interface is placed at the scanner bore and controlled through a shielded electromagnetic actuation system, which is located at the end of the scanner bed and uses a high-stiffness cable. Performance evaluation showed renderable forces of up to 94 N, a structural stiffness of 3.3 N/mm, and a position control bandwidth of at least 19 Hz.

In this system, two closed-loop cable transmissions actuate the two DOF of each finger. The frame consists of aluminum profiles that hold redirection modules, and the cables pass through a length and tension adjustment mechanism. The guiding pulleys are combined with low-friction polymer/glass ball bearings and are fixed on an aluminum bar rigidly attached to the side of the scanner bed. Fixing the cables to the capstan prevents slippage. Due to transmission friction, cable wear is significant; to minimize disruption for the operators, the cables should be easily exchangeable in case of a breakdown during an fMRI study.
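The anti-slip role of the capstan mentioned above follows from the capstan (Euler-Eytelwein) relation: the tension ratio a wrapped cable can sustain grows exponentially with the friction coefficient and the wrap angle. A minimal sketch, with an assumed friction coefficient and wrap count chosen purely for illustration:

```python
import math

def capstan_ratio(mu: float, wrap_angle_rad: float) -> float:
    """Maximum tension ratio T_load / T_hold = e^(mu * phi)
    for a cable wrapped around a capstan (Euler-Eytelwein)."""
    return math.exp(mu * wrap_angle_rad)

# Example: assumed friction coefficient mu = 0.2 and three full wraps.
ratio = capstan_ratio(0.2, 3 * 2 * math.pi)
print(f"tension ratio: {ratio:.1f}")  # → tension ratio: 43.4
```

Even a modest friction coefficient thus lets a few wraps hold a large load-side tension, which is why additionally clamping the cable to the capstan suffices to rule out slip entirely.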

## *1.5.2 Virtual Environments*

The second main application area for haptic systems is interaction with virtual environments. Since this is quite a large field of applications, we will take a closer look at the different areas where interaction with generated environments is used to a wider extent.


Another example of multimodal display of information was recently presented by *Microsoft Research* [76]. The TouchMover is an actuated screen with haptic feedback that can be used to display object and material properties or to intuitively access volumetric data like → MRI scans. Figure 1.15 shows this application of the system. Annotations are marked visually and haptically with a detent, allowing for intuitive access and collaboration.

**Consumer Electronics** For the integration of haptic feedback in computer games, *Novint Technologies, Inc.* presented the Falcon haptic interface in 2006. It is based on a delta parallel kinematic structure and distinguished itself through a very competitive price tag of around \$500. This device is also used in several research projects, for example [77], because of the low price and the support in several → application programming interfaces (APIs). Seen from the perspective of the 2020s, complex haptically enhanced input devices did not perform well in consumer electronics. The main area where they still persist is gamepad and game-controller applications, but reduced to a function of pure *vibrotactile* feedback. *Sony*'s DualSense technology recently increased the complexity again and combined a vibration actuator with a motor-actuated, adaptable trigger. The future will show whether this is a revival of kinaesthetic feedback in consumer electronics.

But there are other areas. To provide a more intense gaming experience, haptic systems conveying low-frequency acoustic signals exist, such as the ButtKicker by *The Guitammer Company* (Fig. 1.16). The system delivers low-frequency signals that increase immersion. To allow for the touch of fabric over the internet, the HAPTEX project developed rendering algorithms as well as interface hardware [78].

**Fig. 1.14** The Haptic Strip system. The strip is mounted on two HapticMaster admittance-type interfaces. Capacitive sensors on the strip surface sense the user's touch. Figure is based on [73] © Springer Nature, all rights reserved

**Fig. 1.15** TouchMover with user exploring MRI data. Picture courtesy of *Microsoft Research*, Redmond, WA, USA., used with permission

Compared to the design of TPTA systems, the development of haptic interfaces for interaction with virtual environments seems to be slightly less complex, since more knowledge about the interaction environment is available in the design process. However, new aspects like the derivation and allocation of the environment data arise with these applications. Because of the wider distribution of such systems, cost efficiency also has to be taken into account.

**Fig. 1.16** Electrodynamic actuator ButtKicker for generating low-frequency oscillations on a gaming seat, © 2022 *The Guitammer Company*, used with permission

## *1.5.3 Non-invasive Medical Applications*

Based on specific values of haptic perception, certain illnesses and dysfunctions can be diagnosed. Certain types of eating disorders [79, 80] and diabetic neuropathy [38] are accompanied by diminished haptic perception capabilities. They can therefore be diagnosed by measuring perception or motor exertion parameters and comparing them with the population mean. Next to diagnosis, haptic perception parameters can also be used as a progress indicator in stroke [81] and limb [82] rehabilitation.

For these purposes, cost-efficient systems with robust and efficient measurement protocols are needed. Because feedback from the user can be obtained by any means, development is easier than that of TPTA or VR systems. Such systems are the focus of several research groups, but up to now there is no system for comprehensive use on the market.

## *1.5.4 Communication*

The fourth and, by numbers, largest application area of haptic systems is basic communication. The most prominent example is probably on your desk or in your pocket: the vibration function of your phone. Compared to communication based on visual and acoustic signals, haptics gives the opportunity to convey information in a discreet way and offers the possibility of spatial resolution. Communication via the haptic sense tends to be very intuitive, since feedback arises at the point where the user is interacting. A simple example is a switch that gives haptic feedback when pressed.

Haptics is therefore an attractive communication channel in demanding environments, for example when driving a car. Several studies show that haptic communication tends to distract users less from critical operations than other channels like vision or audition [83, 84]. Applications include assistive systems for navigation in military contexts [85]; a practical example of an adaptive haptic user interface for automotive use is given in Sect. 14.1. With the increasing number of steer-by-wire applications and the vision of autonomously driving vehicles, the haptic channel has been identified as a possibility to raise the driver's awareness in potentially dangerous situations, as investigated in [86].

More recently, the increasing use of consumer electronics with touch screens has triggered a demand for technologies that add haptic feedback, intended to facilitate use without recurring visual status inspection. Solutions for these applications include quite a variety of actuation principles, which will be the focus of Chap. 9.

Another application area is tactile interfaces for blind and visually impaired users [87, 88]. Besides displaying Braille characters, tactile interfaces offer navigation support (see for example the HaptiMap project providing toolkits for standard mobile terminals [89], the tactile You-Are-Here maps, or interactions with graphical interfaces [90, 91]). Newer studies even show advantages of vibrotactile actuation for finger rehabilitation of stroke patients [92]. Figure 1.17 gives some examples of haptic systems used for communication applications.

Another type of haptic interface is the shape-changing interface, which communicates information by altering its form. One usage of this interface type is navigation assistance: by changing its shape, the device guides the user to a target point (Fig. 1.18). This shape change is felt via the fingers of visually or hearing impaired, deafblind, and sighted pedestrians.

This shape-changing device implements navigation guidance via a bi-directional expanding mechanism. Two similar parts move away from the central section of the device; this shape change generates a sensation of variable volume. Inside the system, one motor provides the rotational movement, and a rack and pinion provides the translational movement. The top and bottom faces are designed so that the device rests easily on the palm without pinching the user's skin.

Besides the analysis of energy-efficient actuation principles for mobile usage, scientific research in this area addresses the design of haptic icons for information transfer. Sometimes also called *tactons*, *hapticons* or *tactile icons*, the influence of rhythm, signal form, frequency and localization on these icons is investigated [94, 95]. Up to now, information transfer rates of 2 ... 12 bit per second have been reported [96, 97], although the latter requires a special haptic interface called the Tactuator, designed for communication applications [98]. The exact bandwidth is still unclear; one application-related study by Seo and Choi [99] reported 3.7 bit.
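Such information transfer rates are typically estimated from identification experiments: the mutual information between presented and identified icons is computed from a confusion matrix and divided by the stimulus duration. A sketch of this standard estimate (the matrix values below are made up purely for illustration):

```python
import math

def information_transfer_bits(confusion):
    """Estimate transmitted information (bit per stimulus) from a confusion
    matrix, where confusion[i][j] counts stimulus i identified as response j."""
    n = sum(sum(row) for row in confusion)
    it = 0.0
    for i, row in enumerate(confusion):
        p_i = sum(row) / n                      # stimulus probability
        for j, count in enumerate(row):
            if count == 0:
                continue
            p_ij = count / n                    # joint probability
            p_j = sum(r[j] for r in confusion) / n  # response probability
            it += p_ij * math.log2(p_ij / (p_i * p_j))
    return it

# Perfect identification of 4 equally likely tactons yields log2(4) = 2 bit.
perfect = [[10, 0, 0, 0], [0, 10, 0, 0], [0, 0, 10, 0], [0, 0, 0, 10]]
print(information_transfer_bits(perfect))  # → 2.0
```

Dividing the result by the duration of one stimulus gives the rate in bit per second that the cited studies report.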

**Fig. 1.17** Components and systems for communication via the haptic sense. **a** Exciter for touchpads and mobile devices (*Grewus* Exciter EXR4403L-01A). **b** Hyperbraille system for displaying graphic information to visually impaired users, image courtesy of *metec AG*, Stuttgart, Germany. **c** Lormer system as human-machine interface conveying text information using the Lorm alphabet on the palm and hand of the user, image courtesy of Thomas Rupp. **d** Tactile Torso Display, a vest for displaying flight information on the pilot's torso, image courtesy of *TNO*, Soesterberg, The Netherlands. All images used with permission

## *1.5.5 Completing the Picture*

For completeness, passive systems like computer keyboards, trackballs and mice are also part of this application area, since they convey information given in the form of a motion control operation to a (computer) system. Although some kind of haptic feedback exists in these devices, it does not depend on the interaction, but solely on the physical characteristics of the haptic system like inertia, damping or friction.

Another area inspired by haptic research, and sometimes even used in haptic telepresence and telemanipulation scenarios, is the area of robotic hands or limbs equipped with perception-inspired sensors. The whole area of tactile sensors was and is part of haptic research and is referred to in Chap. 10. Its main and fascinating application domain, however, is robotics, especially when it comes to bionic-inspired systems [100]. A preliminary peak is reached by the micromechanical design of a fully dexterous robotic hand in combination with high-end combined capacitive pressure sensors (Fig. 1.19). But there is more to come, not even limited to humanoid shapes.

**Fig. 1.18** Different shapes of the haptic interface for sending different commands [93], figures by Ad Spiers, used with permission

**Fig. 1.19** Fully actuated robotic hand *Shadow Dexterous Hand* by the *Shadow Robot Company* with integrated BioTacs by *SynTouch*, allowing manipulation with direct contact force and direction measurement for each fingertip, © 2022 *Shadow Robot Company*, used with permission

## *1.5.6 Why Use a Haptic System?*

The reasons one might want to use a haptic system are quite numerous: perhaps you want to improve task performance or lower the error rate in a manipulation scenario, address a previously unused sensory channel to convey additional information, or gain an advantage over a competitor in an innovation-driven market. This book will not answer the question of whether haptics is able to fulfil the wishes and intentions connected to these reasons, but will focus on the design of a specific haptic system for the intended application.

Although there are many guidelines on how to implement haptic and multimodal feedback for optimal task performance (they will be addressed in Sect. 5.1.2), there are only limited sources on how to decide whether haptic feedback is suitable for an application. Acker provides some criteria for telepresence technologies in industrial applications [51]; Jones gives guidelines on the usage of tactile systems [101].

## **1.6 Conclusions**

Technical systems addressing the haptic sense cover a wide range of applications. Since this book focuses on the design process of task-specific haptic interfaces, the following chapters will first focus on a deeper analysis of haptic interaction in Chap. 2 and the role of the user in a haptic system in Chap. 3, before a detailed analysis of the development and structure of haptic systems is presented in Chaps. 4 and 6. This provides the basis for the second part of the book, which deals with the actual design of a task-specific haptic system.

## **Recommended Background Reading**


*General model about the development of haptic aesthetics and the implications for the design of products.*

[4] Grunwald, M.: **Human Haptic Perception: Basics and Applications**. Birkhäuser, Basel, CH, 2008. *General collection about the haptic sense with chapters about theory and history of haptics, neuro-physiological basics and psychological aspects of haptics.*

## **References**



in bioinformatics). LNCS, vol 11786, pp 217–232. ISSN: 16113349. https://doi.org/10.1007/978-3-030-30033-3_17. https://link.springer.com/chapter/10.1007/978-3-030-30033-3


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 2 Haptics as an Interaction Modality**

**Christian Hatzfeld and Thorsten A. Kern**

**Abstract** This chapter focuses on the biological and behavioural basics of the haptic modality. On the one hand, several concepts for describing interaction are presented in Sect. 2.2; on the other hand, the physiological and psychophysical basis of haptic perception is discussed in Sect. 2.1. The goal of this chapter is to provide a common basis to describe interactions and to convey a basic understanding of perception and its description by psychophysical parameters. Both aspects are relevant for the formal description of the purpose of a haptic system and the derivation of requirements, further explained in Chap. 5. Several conclusions arising from the description of perception and interaction are given in Sect. 2.4.

## **2.1 Haptic Perception**

This section gives a short summary of relevant topics from the scientific disciplines dealing with haptic perception. It is intended to reflect the current state of the art to the extent necessary for an engineer designing a haptic system. Physiologists and psychophysicists are therefore asked to forgive simplifications and imprecision. For all engineers, Fig. 2.1 gives a general block diagram of haptic perception that forms a conscious → percept from a → stimulus.

Analysing each block of this diagram, the mechanical properties of the skin as the stimulus-transmitting apparatus are dealt with in Chap. 3. Section 2.1.1 deals with the characteristics of mechanoreceptors in the skin and locomotion system, while

C. Hatzfeld (✉)
Technische Universität Darmstadt, Darmstadt, Germany

T. A. Kern (✉)
Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany
e-mail: t.a.kern@tuhh.de

Christian Hatzfeld deceased before the publication of this book.

**Fig. 2.1** Block diagram of haptic perception and the corresponding scientific areas investigating the relationships between single parts as defined in [1, 2] c Springer Nature, all rights reserved

Sect. 2.1.2 introduces the psychophysical methods that are used to evaluate these characteristics. In Sects. 2.1.3 and 2.1.4, thresholds and supra-threshold parameters of human haptic perception are presented.

## *2.1.1 Physiological Basis*

This section deals with the physiological properties of the tactile and kinaesthetic receptors as defined in the previous chapter (Sect. 1.4.2). We will not cover neural activity in detail, but only look at a general model that is useful for a closer look at multimodal systems.

#### **2.1.1.1 Tactile Receptors and Their Functions**

From a histological view, there are four different sensory cell types in glabrous skin and two additional sensory cell types in hairy skin. They are located in the top 2 mm of the skin, as shown in Fig. 2.2. The sensory cells in glabrous skin are named after their discoverers, while the additional cells in hairy skin have functional names [3]. Because of the complex mechanical transmission properties of the skin and other body parts like vessels and bone, compression and shear forces, and, for high-frequency stimuli, surface waves are expected in the skin as a reaction to external mechanical stimuli. These lead to various pressure and tension distributions in the skin that are detected differently by the individual sensory cells. In general, sensory cells near the skin surface will react only to adjacently applied stimuli, while

**Fig. 2.2** Histology of tactile receptors in **a** glabrous and **b** hairy skin. Figure adapted from [4] c Springer Nature, all rights reserved

cells localized more deeply, like the Ruffini endings and Pacinian corpuscles, will also react to stimuli applied farther away. These differences are presented in the following for the well-researched receptors in glabrous skin. For hairy skin, less information is available. While tactile disks are assumed to exhibit properties similar to Merkel cells because of the same histology of the receptor, hair follicle receptors are attributed with detecting movements on the skin surface. The following sections concentrate on tactile receptor cells in glabrous skin because of the higher technical relevance of these areas.

To investigate the behavior of an individual sensory cell, a single nerve fiber is contacted with an electrode and the electrical impulses in the fiber are recorded, as shown in Fig. 2.3 [5]. The results in the following paragraphs are based on such measurements, which are very complicated to conduct on a living organism or a human test subject.

#### **From Sensory Cells to Mechanoreceptors and Channels**

When reviewing the literature about the physiology of haptic perception, several terms are used seemingly interchangeably. Since microneurography often does not allow a distinct mapping of a sensory cell to the contacted nerve fiber, a formal separation between a single sensory cell as given in Fig. 2.2 and the term mechanoreceptor has been established [4]. A → mechanoreceptor is defined as

**Fig. 2.3** Recording of electrical impulses of a single mechanoreceptor with microneurography. Figure adapted from [5] c Springer Nature, all rights reserved


**Table 2.1** Receptive fields of the tactile mechanoreceptors in glabrous skin. The table gives size and form of the receptive fields based on data from [4, 6–9]

**Definition** *Mechanoreceptor* An entity consisting of one or more sensory cells, the corresponding nerve fibers and the connection to the central nervous system.

The classification of a mechanoreceptor is based on the size of the → receptive fields and the adaptation behavior of the receptor when a constant pressure stimulus is applied. The receptive field denotes the area on the skin in which an external mechanical stimulus will evoke a nervous impulse on a single nerve fiber. The size of the receptive field depends on the number of sensory cells that are connected to the investigated nerve fiber. Tactile mechanoreceptors exhibit either small (normally indicated with *I*) or large receptive fields (indicated with *II*). The adaptation behavior is classified as *slowly adapting (SA)* or *rapidly adapting (RA, sometimes also called fast adapting (FA))*. With these declarations, four mechanoreceptors can be defined, as shown in Table 2.1. This nomenclature is based on a biological view. Next to these biologically motivated terms, you will find the term *channel* in the psychophysical literature to describe the connection between sensory cells and brain. A channel is defined as

**Fig. 2.4** Relation of the terms sensory cell, mechanoreceptor and channel using the NP-I channel as an example. The NP-I channel consists out of RA-I mechanoreceptors, that are based on Meissner corpuscles as sensory cells. NP-I and NP-III channels process signals of multiple sensory cells, while NP-II and PC channels are based on the signals of single sensory cells [11]

**Definition** *Channel* "Functional/structural pathway in which specific information about the external world is passed to some location in the brain where the perception of a particular sensory event occurs" *(Quote from* [10, p. 49]*)*

The difference from the definition of mechanoreceptors is the integration of functional processes like masking into the channel model. In general, the terms for channels and mechanoreceptors are used synonymously. The channels are named NP-I (RA-I receptor, NP standing for Non-Pacinian), NP-II (SA-II receptor), NP-III (SA-I receptor) and PC (RA-II receptor) [11]. There is experimental evidence for the presence and involvement of four channels in haptic perception in glabrous skin, but only three channels in hairy skin [12]. When using the channel model to describe haptic interaction, one has to be aware that certain aspects of interaction, like surface properties and reactions to static stimuli, cannot be fully explained by it [13, Chap. 4]. In this book, this discrepancy is not discussed in detail in favor of a primitive-based description of interactions that involves perception and motion control as well. An overview of the different terms for the description of tactile perception is given in Fig. 2.4.
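The mapping between the two-axis receptor nomenclature (adaptation behavior and receptive field size) and the channel names given above can be written down as a simple lookup. This sketch merely restates Table 2.1 and the channel assignments from the text:

```python
# Two-axis classification: adaptation behavior (SA/RA) and receptive
# field size (I = small, II = large) determine the mechanoreceptor
# type, its sensory cell, and the corresponding psychophysical channel.
RECEPTOR_TYPES = {
    ("SA", "I"): "SA-I (Merkel disk, NP-III channel)",
    ("SA", "II"): "SA-II (Ruffini ending, NP-II channel)",
    ("RA", "I"): "RA-I (Meissner corpuscle, NP-I channel)",
    ("RA", "II"): "RA-II (Pacinian corpuscle, PC channel)",
}

def classify(adaptation: str, field_size: str) -> str:
    """Look up the receptor type for a given adaptation/field combination."""
    return RECEPTOR_TYPES[(adaptation, field_size)]

print(classify("RA", "II"))  # → RA-II (Pacinian corpuscle, PC channel)
```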

#### **Spatial Distribution of Mechanoreceptors**

The spatial distribution of the different mechanoreceptors depends on the skin region considered. For the skin of the hand, there is a varying distribution depending on the depth of the mechanoreceptors in the skin: near-surface receptors (RA-I, SA-I) show a higher density in the fingertips than in the palm, while deeper localized receptors show only a slight dependency on the skin region. This is shown in Fig. 2.5.

**Fig. 2.5** Innervation density of mechanoreceptors near and far from the skin surface. The greater innervation density of Merkel and Meissner receptors leads to a higher spatial resolution of quasi-static stimuli. Figure based on [4] © Springer Nature, all rights reserved

The highest density of receptors is found at the fingertips and adds up to 250 receptors/cm<sup>2</sup> [14] (primary source [15]). Thereof, 60% are Meissner corpuscles, 30% are Merkel disks, and 5% each are Ruffini endings and Pacinian corpuscles [16]. Because of the high spatial density, it can be assumed that a mechanical stimulus will always stimulate several receptors of different types. However, it is not the density but the absolute number of mechanoreceptors that is approximately the same across different users [17]. Because of that, small hands are more sensitive than large hands. An inverse study was done by Miller et al. [18], in which a simulated population of receptors was exposed to a virtual stress and the nervous signals were calculated and qualitatively compared to known neurological processes. The study was able to confirm a multitude of hypotheses from neurobiology and is a fascinating read.

#### **Functions of Receptors and Channels**

Next to the physiological and histological differences of the mechanoreceptors described above, channels differ in additional functional properties [10, 11]:


channels [20]. Recent studies find evidence for a linear behavior of the channels and the aggregation process (Sect. 2.1.4).


Table 2.2 gives an overview of the discussed properties for each channel. It also includes information about the coding of the channels, referring to kinematic measures like deflection, velocity (change of deflection) and acceleration (change of velocity) [5, 6].
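The kinematic coding referred to in Table 2.2 can be illustrated numerically: a sampled deflection signal and its first and second finite differences correspond to the deflection-, velocity- and acceleration-type quantities the different channels respond to. A toy sketch; the sampling interval and deflection values are arbitrary:

```python
def derivative(signal, dt):
    """First-order finite-difference derivative of a sampled signal."""
    return [(b - a) / dt for a, b in zip(signal, signal[1:])]

dt = 0.01                                  # assumed 10 ms sampling interval
deflection = [0.0, 0.1, 0.3, 0.6, 1.0]     # skin deflection in mm
velocity = derivative(deflection, dt)      # velocity-type coding (RA-like)
acceleration = derivative(velocity, dt)    # acceleration-type coding (PC-like)
print(velocity, acceleration)
```

A receptor coding velocity responds to how fast the deflection changes, one coding acceleration to how fast the velocity changes; a constant deflection produces no output in either.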

Based on these properties, functions of the different channels in perception and interaction can be identified [30–33].

**NP-I (RA-I, Meissner corpuscle)** Most sensory cells in human skin belong to the NP-I channel. They exhibit a lower spatial resolution than SA-I receptors, but have a higher sensitivity and a slightly larger bandwidth. The corresponding sensory cells are called Meissner corpuscles and exhibit a biomechanical structure that makes them insensitive to quasi-static stimuli.

The RA-I receptors are sensitive to stimuli acting tangentially to the skin surface. They are important for the detection of slip of hand-held objects and the associated sensorimotor control of grip forces. Together with the PC channel, they are relevant for the detection of the frequency of vibrations [34, 35].

The NP-I channel can detect bumps with a height of just 2 µm on an otherwise flat surface if there is a relative movement between surface and skin. This movement leads to a deformation of the papillae of the skin by the object. The reaction forces and deformations lie in a frequency bandwidth that activates the RA-I receptors [36]. Similarly, the filter properties of surface structures are used in the design of force sensors [37].

**NP-II (SA-II, Ruffini ending)** The SA-II receptors in this channel are sensitive to lateral elongation of the skin. They detect the direction of an external force, for example while holding a tool. The NP-II channel is more sensitive than the NP-III channel, but has a much lower spatial resolution.

This channel also transmits information about the position of limbs when joint flexion induces skin elongation. The SA-II receptors are therefore also relevant




> Used abbreviation: → quasi-static (QS)

for kinaesthetic perception. With specific stimulation of the NP-II channel, an illusion about the position of limbs can be generated [38, 39].

**NP-III (SA-I, Merkel disk)** The NP-III channel, with Merkel disks right under the skin surface, is sensitive to strains in normal and shear directions. Because of the slow adaptation of the channel, the high density of sensory cells and the high spatial resolution (less than 0.5 mm, although the receptive field is larger than that), it is used to detect elevations, corners and curvatures. It is therefore the basis for the detection of object properties like shape and texture.

Because of the coding of intensity and intensity changes, this channel is responsible (together with the RA-I channel) for reading → Braille. Studies also show an effect of the channel when wrist and elbow forces are to be controlled [40].

**PC (RA-II, Pacinian corpuscle)** The PC channel with rapidly adapting RA-II receptors exhibits the largest receptive fields and the largest sensitivity bandwidth. It is mainly used to detect vibrations arising from the usage of tools. These vibrations originate in the contact of the tool with the environment and are transmitted by the tool itself. They allow, for example, the identification of surface properties with a stiff tool [41]. Because of the very high sensitivity (vibration amplitudes of just a few nanometers can be detected by the PC channel [11]), sensory cells located farther away from the point of stimulus application also contribute to perception by reacting to surface wave propagation [3, 42]. To suppress the influence of dynamic forces arising from the movement of limbs on the perception of the PC channel, the Pacinian corpuscles exhibit a strong high-pass characteristic with slopes of up to 60 dB per decade. This is realized by the biomechanical structure of the sensory cells.

In interactions, RA-II receptors signal that something *is* happening, but do not necessarily contribute to the actual interaction. In addition, contributions of the PC channel to the detection of surface roughness and texture are assumed [7].
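The slope of 60 dB per decade quoted above corresponds to a third-order high-pass characteristic, since each filter order contributes 20 dB per decade. A minimal sketch of such an asymptote; the corner frequency is an assumed value chosen only for illustration, not a physiological constant:

```python
import math

def highpass_gain_db(f: float, fc: float, order: int = 3) -> float:
    """Magnitude of an n-th order high-pass |H| = x / sqrt(1 + x^2)
    with x = (f/fc)^n, returned in dB. A 3rd-order filter reproduces
    the ~60 dB/decade slope attributed to the PC channel."""
    x = (f / fc) ** order
    return 20 * math.log10(x / math.sqrt(1 + x * x))

fc = 250.0  # assumed corner frequency near the PC sensitivity maximum
slope = highpass_gain_db(10.0, fc) - highpass_gain_db(1.0, fc)
print(f"{slope:.1f} dB per decade")  # → 60.0 dB per decade (well below fc)
```

Slow limb movements in the single-hertz range are thus attenuated by more than 100 dB relative to vibrations near the corner frequency, which matches the suppression function described in the text.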

#### **2.1.1.2 Kinaesthetic Receptors and Their Functions**

For kinaesthetic perception, there are two known receptor groups [29, 43, 44]. The so-called *neuromuscular spindles* consist of muscle fibers with nerve fibers wound around them, placed parallel to the skeletal muscles. Because of this placement, strain of the skeletal muscles can be detected. Histologically, they consist of two systems, the nuclear bag fibers and the nuclear chain fibers, which react to intensity change and intensity, respectively [45].

The second group of receptors are the *Golgi tendon organs*. These are located mechanically in series with the skeletal muscles and detect mechanical tension. They are used to control the force a muscle exerts as well as the maximum muscle tension. Special forms of the Golgi organs exist in joints, where extreme joint positions and ligament tension are detected [29]. They react mostly to intensity. Figure 2.6 shows these three types of receptors.

**Fig. 2.6** Kinaesthetic Sensors for muscular tension (Golgi Tendon Organ) and strain (Bag and Chain Fibers). Figure adapted from [46] c Wiley, all rights reserved

The dynamic requirements on kinaesthetic sensors are lower than those on tactile sensors, since the extremities exhibit a low-pass behavior. The requirements with regard to relative resolution are comparable to the tactile system. Proximal joints exhibit a higher absolute resolution than distal joints: the hip joint can detect angle changes as small as 0.22°, while the smallest detectable angle change of the finger joints increases to 4.4° [43]. This is because of the greater influence of proximal joints on the position error of an extremity. The positioning accuracy increases with increasing movement velocity [44].
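The stated reason, namely the greater influence of proximal joints on the endpoint position, follows from the small-angle relation Δx ≈ L · Δθ. With assumed typical segment lengths (the values below are illustrative only), the very different angular resolutions translate into comparable endpoint errors:

```python
import math

def endpoint_error_mm(limb_length_mm: float, angle_deg: float) -> float:
    """Small-angle endpoint position error: dx ≈ L * dtheta (in radians)."""
    return limb_length_mm * math.radians(angle_deg)

# Assumed segment lengths: leg measured from the hip ~900 mm,
# fingertip measured from the finger joint ~40 mm.
hip = endpoint_error_mm(900.0, 0.22)
finger = endpoint_error_mm(40.0, 4.4)
print(f"hip: {hip:.1f} mm, finger: {finger:.1f} mm")  # → hip: 3.5 mm, finger: 3.1 mm
```

Despite the twenty-fold coarser angular resolution of the finger joint, both joints limit the endpoint position to roughly the same few millimeters, consistent with the argument in the text.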

Kinaesthetic perception is supported by information from the NP-II channel, from the vestibular organ responsible for body balance, and from visual control by the eye. In contrast to the tactile system, the kinaesthetic system does not code intensities or their changes, but exhibits some sort of sense of the effort needed to perform a movement [47, 48]. For applications like rotary knobs, this means that a description based on movement energy with regard to rotation angle correlates better with user ratings than the widespread description based on torque with regard to rotation angle.

#### **2.1.1.3 Other Sensory Receptors**

The skin also includes sensory receptors for thermal energy [49] as well as pain receptors. The latter are attributed a protective function, signaling pain when tissue is mechanically damaged [50]. Both aspects are not discussed in detail in this book because of their minor technical importance.

#### **2.1.1.4 Neural Processing**

Haptic information detected by the tactile and kinaesthetic mechanoreceptors is coded and transmitted by action potentials on the axons of the involved neurons to the central nervous system. The coding of the information resembles the properties given in Table 2.2 and is illustrated in Fig. 2.7.

The biochemical processes taking place in the cells are responsible for the temporal summation (when several action potentials reach a dendrite of a neuron within a short interval) and the spatial summation (when action potentials of more than one receptor arrive at the same neuron) of mechanoreceptor signals. Each individual action potential would not be strong enough to evoke a relay of the signal through the neuron [51]. For the rest of this book, further neurophysiological considerations are of minor importance; they can be found in any standard physiology or neurophysiology textbook.
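The temporal summation described above can be sketched with a minimal leaky integrate-and-fire model: each individual input is too weak to reach the firing threshold, but inputs arriving within a short interval sum up and are relayed. All parameters below are illustrative values, not physiological ones:

```python
def lif_spikes(input_times, dt=0.001, tau=0.02, w=0.6, threshold=1.0, t_end=0.2):
    """Minimal leaky integrate-and-fire sketch: every incoming action
    potential adds w to the membrane potential v, which decays with time
    constant tau; v >= threshold relays the signal and resets the neuron."""
    v, spikes = 0.0, []
    events = sorted(input_times)
    for step in range(int(t_end / dt)):
        t = step * dt
        v *= (1 - dt / tau)                  # leak between inputs
        while events and events[0] <= t + 1e-12:
            v += w                           # summation of incoming inputs
            events.pop(0)
        if v >= threshold:
            spikes.append(round(t, 3))       # signal is relayed
            v = 0.0                          # reset after firing
    return spikes

# Two inputs 2 ms apart sum above threshold; 150 ms apart they do not.
print(lif_spikes([0.010, 0.012]))  # → [0.012]
print(lif_spikes([0.010, 0.160]))  # → []
```

Spatial summation behaves identically in this sketch: two simultaneous inputs from different receptors simply contribute two increments of `w` in the same time step.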

A more interesting question for the design of haptic systems is the synthesis of the different information from the haptic, visual and acoustic senses into an unconscious or conscious → percept and a resulting action. While the neural processes have been investigated in depth [52], there is no comprehensively confirmed theory about the processing in the central nervous system (spinal cord and brain).

Current research favors a Bayesian framework that also incorporates former experiences when assessing information [53, 54]. Figure 2.8 gives a schematic description of this process, which is confirmed by current studies [55, 56].
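A widely used concrete instance of this Bayesian view is maximum-likelihood cue integration: each sensory estimate is weighted by its reliability (the inverse of its variance), and the combined estimate is more reliable than either cue alone. A sketch with made-up numbers for a visual and a haptic size estimate:

```python
def integrate_cues(estimates, variances):
    """Maximum-likelihood integration of independent sensory estimates:
    each cue is weighted by its inverse variance; returns the combined
    estimate and the (always smaller) combined variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    return combined, 1.0 / total

# Assumed example: visual size estimate 52 mm (variance 1 mm^2),
# haptic size estimate 48 mm (variance 4 mm^2).
est, var = integrate_cues([52.0, 48.0], [1.0, 4.0])
print(est, var)  # → 51.2 0.8
```

The combined estimate lies closer to the more reliable visual cue, and the combined variance (0.8) is smaller than that of either single cue, which is the hallmark prediction of such integration models.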

## *2.1.2 Psychophysical Description of Perception*

The investigation of perception processes, that is, the link between an objectively measurable → stimulus and a subjective → percept, is the task of psychophysics, a branch of experimental psychology. It was established by G. T. Fechner in the mid-19th century [58]. As shown in Fig. 2.1, there are several parts

**Fig. 2.8** Sensory integration according to Helbig [57]. Prior knowledge is combined with current sensory impressions into a percept of the situation. Based on a gain/loss analysis, a decision is made and an interaction using the effectors (limbs, speech) is initiated

of psychophysical studies. *Inner psychophysics* deals with the connection between neural activity and the formation of percepts, while *outer psychophysics* investigates the reactions to an external stimulus. These parts were already established by Fechner [1]. Nowadays, modern technologies also allow the investigation of neurophysiological problems linking outer stimuli with neural activity and the analysis of correlations between neurophysiology, inner psychophysics, and outer psychophysics.

For the design of haptic systems we will concentrate on outer psychophysics, since only the physical properties of stimuli and the corresponding subjective percepts allow the derivation of design parameters and design goals. Therefore the remainder of this chapter deals only with procedures and parameters from outer psychophysics. It describes the main principles that should be understood by every system engineer in order to interpret psychophysical studies correctly.

#### **2.1.2.1 The Psychometric Function**

Regardless of the kind of sense, the description of perception is not possible with standard engineering tools. Perception processes are neither linear nor stationary, because the perception process of inner psychophysics cannot be described that way. Looking at Fig. 2.8 this is obvious, since the weighting and decision processes and the risk assessment cannot be described in a universal way.

Because of that, perception processes in outer psychophysics are not described by specific values but by probability functions, from which specific values can be extracted. Figure 2.9 gives an example of such a → psychometric function. On the x-axis the intensity of an arbitrary stimulus Φ is plotted, while the y-axis gives the probability *p* that a test person detects a stimulus of that intensity.

According to [59] the psychometric function *p*<sup>Ψ</sup> has a general mathematical description according to Eq. (2.1).

$$p\_{\Psi}(\Phi, c\_{\theta}, c\_{\sigma}, p\_{\mathbb{G}}, p\_{\mathbb{L}}) = p\_{\mathbb{G}} + (1 - p\_{\mathbb{G}} - p\_{\mathbb{L}}) \cdot f(\Phi, c\_{\theta}, c\_{\sigma}) \tag{2.1}$$

with a stimulus Φ and the following parameters:

**Base Function** *f* The base function determines the form of the psychometric function. Different approaches for the base function can be found in the literature. Often a cumulative normal distribution (Eq. (2.2)), a sigmoid function (Eq. (2.3)), or a Weibull distribution (Eq. (2.4)) is used:

$$f\_{\rm cdf}(c\_{\theta}, c\_{\sigma}, \Phi) = \frac{1}{c\_{\sigma}\sqrt{2\pi}} \int\_{-\infty}^{\Phi} e^{\frac{-(t-c\_{\theta})^2}{2c\_{\sigma}^2}} \,\mathrm{d}t \tag{2.2}$$

$$f\_{\text{sig}}(c\_{\theta}, c\_{\sigma}, \Phi) = \frac{1}{1 + e^{-\frac{\Phi - c\_{\theta}}{c\_{\sigma}}}} \tag{2.3}$$

$$f\_{\text{wei}}(c\_{\theta}, c\_{\sigma}, \Phi) = 1 - e^{-\left(\frac{\Phi}{c\_{\theta}}\right)^{c\_{\sigma}}} \tag{2.4}$$

Nowadays there is no computational limit to the calculation of functions and the extraction of values; the choice of a base function therefore depends on the prior experience of the experimenter. When investigating the visual sense, a Weibull distribution will fit the data better [60]; when working with → Signal Detection Theory (SDT), a normal distribution is assumed [61]. Sigmoid functions are often used in early simulation studies because of the low computational effort needed to calculate psychometric functions. The current state of the art in mathematics would also allow a non-model-based description of the psychometric function. In psychophysics these approaches have rarely been used up to now, although first studies show a performance comparable to that of model-based descriptions [62].
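A minimal sketch of Eq. (2.1) with two of the base functions above may illustrate the roles of the guess rate *p*G and the lapse rate *p*L. The function names and parameter values are illustrative assumptions:

```python
import math

def psychometric(phi, c_theta, c_sigma, p_g=0.0, p_l=0.0, base="sigmoid"):
    """Evaluate Eq. (2.1): p = p_G + (1 - p_G - p_L) * f(phi).

    p_g is the guess rate, p_l the lapse rate; base selects the base
    function f (logistic sigmoid per Eq. (2.3) or Weibull per Eq. (2.4)).
    """
    if base == "sigmoid":
        f = 1.0 / (1.0 + math.exp(-(phi - c_theta) / c_sigma))
    elif base == "weibull":
        f = 1.0 - math.exp(-((phi / c_theta) ** c_sigma))
    else:
        raise ValueError(base)
    return p_g + (1.0 - p_g - p_l) * f

# At phi = c_theta the sigmoid base yields f = 0.5, so the guess and
# lapse rates simply compress the probability range:
p = psychometric(50.0, c_theta=50.0, c_sigma=5.0, p_g=0.1, p_l=0.02)
# p = 0.1 + 0.88 * 0.5 = 0.54
```

Note that *p*G and *p*L bound the function between *p*G and 1 − *p*L, which is why a fitted threshold should never be read off at a fixed probability without checking these parameters.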


To find a psychometric function, psychophysics provides a number of procedures that are addressed in the next section. Keep in mind that all of the above and the following is valid not only for a stimulus with changing intensity, but also for any other changing signal parameter such as frequency, energy, or proportions between different signals.

#### **2.1.2.2 Psychometric Procedures**

The general goal of a psychometric procedure is the determination of a psychometric function *p*<sup>Ψ</sup> as a whole or of a single point (Φ|*p*<sup>Ψ</sup>(Φ)) defined by a given probability *p*<sup>Ψ</sup>. In general, each run of a psychometric procedure consists of several trials, in which a stimulus is presented to a test person and a reaction is recorded in a predefined way. Figure 2.10 gives a general taxonomy to classify psychometric procedures.

Each procedure consists of a measuring method and an answering paradigm. The method determines the course of a psychometric experiment, particularly the start intensity of the stimulus and its changes during each run, the conditions to stop a trial, and the calculation rule to obtain a psychometric function from the measured data. The answering paradigm defines the way a stimulus is presented to a test person and the available answer options. The choice of a suitable procedure is not a topic of this book, but an interesting one nevertheless. Further information about the simulation of procedures and the definition of suitable quality criteria can be found in [64, 65].

**Fig. 2.10** Taxonomy of psychometric procedures. Figure is based on classifications in [63]

#### **Methods**

The first, nowadays called "classical", methods were developed back in the 19th century. The most familiar are the *Method of Constant Stimuli*, the *Method of Limits*, and the *Method of Adjustment*. They have barely any practical relevance in today's experiments and are therefore not detailed here any further; please refer to [66] for a more detailed explanation. Modern methods are derived from these classical methods, but are generally adaptive in the sense that the progression rule depends on the answers of the test person in the course of the experiment [67]. They can be classified into heuristic and model-based methods.

**Heuristic Based Methods** These methods are based on predetermined rules that are applied to the answers of the test person to change the stimulus intensity in the course of the psychophysical experiment (change of the progression rule). Stopping and starting rules are normally fixed beforehand for each experiment (by the total number of trials, for example). The most widespread heuristic method is the so-called *Staircase method*. It is based on the classic *Method of Limits* and tries to bracket the investigated threshold with the intensities of the test stimulus. Figure 2.11 gives two examples of a staircase method with different progression rules. It becomes clear that the name of this method originates from this kind of display of the test stimulus intensities over the trials.

The definition of progression rules is based on the number of correct answers of the test person leading to a lower test stimulus and the number of false answers leading to a higher test stimulus. The original Staircase method, also called *Simple Up-Down Staircase* (upper part of Fig. 2.11), changes the stimulus intensity after every trial, thus converging at a threshold with a detection probability of 0.5 [68]. The need for other detection probabilities led to another form of the Staircase method, the so-called *Transformed Up-Down Staircase Method*. For these methods, the progression rule is changed and requires, for example, more than one correct answer for a downward change of the test stimulus. Figure 2.11 gives an example of a 1up-3down progression rule, lowering the test stimulus after three correct answers and raising it after every false answer.

In [68], Levitt calculates the convergence probability for several progression rules. Table 2.3 gives the convergence probabilities for common progression rules. To interpret studies about haptic perception incorporating experiments with staircase methods, the convergence probability has to be taken into account. However, newer studies cast some doubt on this interpretation of the progression rule [69], arguing that the amount of intensity change is much more relevant for the convergence of a staircase than the progression rule. For system design, one therefore has to resort to larger assessment factors in the interpretation of this kind of data. The threshold is normally calculated as the mean of the last stimulus intensities leading to a reversal in the staircase direction. Typical values are, for example, 12 reversals for the calculation of the threshold and 16 reversals as a stopping criterion for the whole experiment run.
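A simple simulation of a transformed up-down staircase run against a simulated observer illustrates this convergence behavior. All names, parameter values, and the stopping rule used here (fixed trial count, mean of the last 12 reversals) are illustrative assumptions, not a specific published procedure:

```python
import math
import random

def simulate_staircase(p_detect, start=80.0, step=2.0, n_trials=400,
                       n_down=3, seed=1):
    """Simulate a transformed up-down staircase (1up-n_down rule).

    p_detect(phi) is the simulated observer's detection probability.
    The stimulus is raised after every miss and lowered after n_down
    hits in a row; the threshold estimate is the mean of the stimulus
    levels at the last 12 reversals.
    """
    rng = random.Random(seed)
    phi, run, last_dir, reversals = start, 0, None, []
    for _ in range(n_trials):
        if rng.random() < p_detect(phi):      # simulated correct answer
            run += 1
            direction = "down" if run >= n_down else None
        else:                                 # simulated false answer
            run, direction = 0, "up"
        if direction is not None:
            if last_dir is not None and direction != last_dir:
                reversals.append(phi)         # staircase reversal
            last_dir = direction
            phi += step if direction == "up" else -step
            run = 0
    last = reversals[-12:]
    return sum(last) / len(last)

# Simulated observer with a logistic psychometric function (threshold 50):
observer = lambda phi: 1.0 / (1.0 + math.exp(-(phi - 50.0) / 5.0))
threshold = simulate_staircase(observer)
# A 1up-3down rule converges near the 79.4 % point of this function.
```

Varying `step` in such a simulation also makes the objection of [69] tangible: the estimate shifts noticeably with the step size, not only with the progression rule.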

Another important heuristic method is the so-called *PEST* (Parameter Estimation by Sequential Testing) method [70]. Its heuristic keeps a given stimulus intensity constant until some assessment of the reliability of the answers can be made. The method was designed to yield a high accuracy with a small number of trials. One of its main disadvantages is the calculation rule, which only considers the very last stimulus. However, several modern adaptations like *ZEST* and *QUEST* try to overcome some of these disadvantages [71].

**Model Based Methods** A model of the psychometric function is the basis of these methods, which measure or estimate the parameters of the function as given in Eq. (2.1). Most methods incorporate some kind of prior knowledge of the psychometric function from experience or previous experiments (Bayesian approach) and use different kinds of estimators (maximum likelihood, probit estimation). An example of these methods is *ML-Test* [60], which uses a maximum likelihood estimator to determine the function parameters. The end of each experiment run is determined by the confidence interval of the estimated parameters: if the interval is smaller than a given value, the experiment run is stopped.

In 1999, Kontsevich and Tyler introduced the Ψ-*Method* [72], combining promising elements from several other methods. This method is not very prominent in haptics research, but is considered the most sophisticated method in psychophysics in general [66, 73]. It is able to estimate a threshold in as few as about 30 trials and the sensitivity in about 300 trials.

One of the general advantages of a model-based method is the calculation of a whole psychometric function, not only of a single threshold. Therefore, more than one psychometric parameter can be calculated from a single experiment. On the other hand, one has to have some confidence in the model used in the method for the investigated sense. As stated above, data show a slight advantage for Weibull-based models for the visual sense [60], while studies by the author of this chapter yielded better results for a logistic function in the assessment of force perception thresholds [74].

**Fig. 2.11** Simulated runs of common psychometric methods. The upper graph shows a *simple up-down staircase*, theoretically converging at *c*<sup>θ</sup> = 50, the middle graph shows a *transformed up-down staircase* with a *1up-3down* progression rule and a theoretical convergence level of *c*<sup>θ</sup> = 57.47. The lower graph shows a run from a Ψ-*method*, also converging at *c*<sup>θ</sup> = 50. Simulated answers of the subject are shown in green circles (correct answer) and red squares (incorrect answer), staircase reversals are circled. The dotted line indicates the calculated threshold

**Table 2.3** Probability of convergence of adaptive staircase methods with different progression rules. Table based on [68]

| Progression rule | Convergence probability *p* |
|------------------|-----------------------------|
| 1up-1down        | 0.500                       |
| 1up-2down        | 0.707                       |
| 1up-3down        | 0.794                       |
| 1up-4down        | 0.841                       |

**Fig. 2.12** Baseline model of the Signal Detection Theory (SDT). The model consists of two neural activity distributions for Noise and Signal + Noise. The noise distribution is always present and can be interpreted as sensory background noise. If an additional stimulus is presented, the whole distribution shifts upwards. The subject decides based on a decision criterion *c*λ whether a neural activity is based on a stimulus or not. The subject shown here exhibits a badly placed, i.e. conservative, decision criterion: many signals are missed (horizontally striped area *p*M), but only a few false-positive answers are recorded (vertically striped area *p*FP). The detectability *d'*, defined as the span between the midpoints of both distributions, is independent of the decision criterion of the subject and can be used as an objective measure of the difficulty of a detection experiment

#### **Paradigms**

Answering paradigms describe the way a test person gives the answer to a stimulus, such that the procedure can react according to its inherent rules. The theoretical basis for answering paradigms is given by → SDT, a statistical approach to describing arbitrary decision processes that is often applied to sensory processes. It is based on the assumption that not only stimuli, but also noise contributes to perception. In the perception continuum, this is represented by a noise distribution (mainly Gaussian). If no stimulus is present, only the noise distribution is present in neural activity and processing; if a stimulus is present, its contribution is added to the noise distribution. Figure 2.12 shows this theoretical basis of → SDT.

Near the absolute detection threshold, both distributions overlap. In this area of the perception continuum it is indistinguishable whether a neural activity comes from a stimulus or just from innate noise. To decide whether a stimulus is present or not, the test person constructs a decision criterion *c*λ. If an input signal is greater than this criterion, a stimulus is identified; smaller inputs are neglected. Unfortunately, this decision criterion varies with time and other external conditions. Therefore, one aim of → SDT is to investigate the behavior of a test person regarding this criterion. The detectability *d'* arising from signal detection theory can be used to calculate comparable sensitivity parameters for different test persons and to compare studies using different psychometric procedures.

With these definitions, one can differentiate liberal (low decision criterion) from conservative (high decision criterion) test persons. For example, studies show that the consumption of alcohol does not change the sensitivity of a person, but influences the decision criterion to become more liberal. This leads to a better detection of smaller stimuli, but also produces more false-positive answers. Based on this stochastic approach to decision theory, answering paradigms can be defined that minimize the influence of varying decision criteria. The most common paradigms are described in the following.
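The detectability and the decision criterion can be computed from hit and false-alarm rates with the standard SDT formulas. This sketch uses illustrative rates to show two observers with identical sensitivity but different criteria:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Detectability d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Decision criterion c; positive values indicate a conservative observer."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))

# Two hypothetical observers with the same sensitivity:
d_cons = d_prime(0.60, 0.05)   # conservative: few hits, few false alarms
d_lib = d_prime(0.95, 0.40)    # liberal: many hits, many false alarms
# d' is identical for both, while the criterion c differs in sign.
```

Because *d'* is invariant under criterion shifts, it allows comparing sensitivity across subjects and across studies that used different answering paradigms.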


An example experiment setup would include five reference stimuli and one unknown test stimulus given to the test person. The test person would have to decide which reference stimulus corresponds to the test stimulus. If the stimuli were miniature golf balls with different compliance, this experiment would be classified as a 5AFC (five-alternative forced choice) paradigm.

**Unforced-Choice-Paradigm** In [75], Kärnbach describes an adaptive procedure that does not require a forced choice, but also allows an "I don't know" answer. This procedure leads to more reliable results from test persons without extensive experience in such experiments. Especially in experiments incorporating a comparison of different stimuli, this answering paradigm can provide a more intuitive approach for the test person and can therefore lead to more motivation and better results. Based on Fig. 2.10, this unforced-choice option belongs to the paradigm definition, but has to be incorporated in the method rules as well. This paradigm is therefore only found in a limited number of studies, but also finds application in recent studies of haptic perception [76].

#### **2.1.2.3 Psychometric Parameters**

In most cases, characteristic values rather than the whole psychometric function are sufficient for the design of haptic systems. The most important parameters are described in this section.

#### **Absolute Thresholds**

These parameters describe the human ability to detect a stimulus at all. They are defined as the stimulus intensity Φ with a detection probability of *p*<sup>Ψ</sup> = 0.5 [77, Chap. 5]. However, since many psychometric procedures do not converge at this probability, most studies call their results thresholds regardless of the convergence probability.

For the design of haptic systems, absolute thresholds give absolute margins for noise-induced and other errors of sensors and actuators: a vibration that is "detected" by a sensor because of inherent noise in the sensor signal processing, or that is displayed by an actuator, is acceptable as long as the user of the haptic system does not feel it. A reliable assessment of these thresholds is therefore important to define suitable requirements. On the other hand, absolute thresholds define a lower limit in communication applications: each coded piece of information has to be at least as intense as the absolute threshold to be detectable, even if one will probably choose a considerably higher intensity level to ensure detection in distracting environments.

#### **Differential Thresholds**

Differential thresholds describe the human ability to differentiate between two stimuli that differ in only one property. The first differential thresholds were recorded by E. H. Weber in the first half of the 19th century [78]. He investigated the differential threshold of weight perception by placing a mass (reference stimulus Φ0) on a test person's hand and adding additional mass ΔΦ until the test person reported a higher weight. The additional mass needed to evoke this perception of a higher weight was called the *difference limen (DL)*.

Further studies showed that the quotient of ΔΦ and Φ<sup>0</sup> is constant over a wide range of reference stimulus intensities. This behavior is called *Weber's Law*, and the *Weber fraction* given in Eq. (2.5) is also called the → Just Noticeable Difference (JND).

$$\text{JND} := \frac{\Delta\Phi}{\Phi\_0 + a} \tag{2.5}$$

The → JND is generally given in percent (%) or decibel (dB) with respect to the reference stimulus Φ0. Since further studies of Weber's Law showed an increase in JNDs for low reference stimuli near the absolute threshold, the additional parameter *a* was introduced. It is generally interpreted as sensory background noise in the perception process [79, Chap. 1], a similarity to the basic assumption of → SDT. The resulting change of the JND near absolute thresholds is so large that it is advisable to consider it in the design of technical systems.

It is generally agreed that the JND denotes the amount of stimulus change that is detected as greater half the time. In the literature one can find two different approaches to measure a JND in a psychophysical experiment. It has to be noted that these approaches do not necessarily measure the 50% point of the psychometric function:


$$\text{JND} := \Phi(p\_{\Psi} = 0.75) - \Phi(p\_{\Psi} = 0.25) \tag{2.6}$$

This definition is useful if one cannot control the stimulus intensity freely during the test and has to measure a complete psychometric function with fixed stimuli (for example, real objects with defined texture, curvature, or roughness) or with long adaptation times of the test person.
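For a fitted logistic psychometric function, the JND according to Eq. (2.6) can be obtained by simply inverting the function. The parameter values below are illustrative assumptions:

```python
import math

def phi_at(p, c_theta, c_sigma):
    """Invert the logistic psychometric function: stimulus at probability p."""
    return c_theta + c_sigma * math.log(p / (1.0 - p))

def jnd_from_function(c_theta, c_sigma):
    """JND per Eq. (2.6): Phi(p = 0.75) - Phi(p = 0.25)."""
    return phi_at(0.75, c_theta, c_sigma) - phi_at(0.25, c_theta, c_sigma)

jnd = jnd_from_function(c_theta=50.0, c_sigma=5.0)
# For a logistic function this equals 2 * c_sigma * ln(3), about 10.99 here.
```

This makes the remark above concrete: the JND obtained this way characterizes the slope of the function around its midpoint and need not coincide with a value measured by a direct comparison procedure.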

It has to be noted that both approaches do not necessarily lead to the same numerical value of a JND. For certain classes of experiments, special terms for the differential thresholds have been coined. They are briefly described in the following:


**Just Tolerable Difference (JTD)** The Just Tolerable Difference denotes the difference between two stimuli that is distinguishable, but still tolerable for the test subject. It is also termed the *Quality JND* and depends more on the individual appraisal and judgment of the subjects than on the abilities of the sensory system. This measure can be used to determine system properties that are acceptable to a large number of users, as is done in various other sensory modalities like taste [80] or vision [81].

The knowledge of differential thresholds is of major importance. JNDs give the range of signals and stimuli that cannot be distinguished by the user, i.e., a limit for the required reproducibility of a system. Proper consideration of JNDs in product design yields systems with good user ratings and minimized technical requirements, as shown for example in [82].

#### **Description of Scaling Behavior**

In the mid-19th century, Fechner formulated a relation between the objectively measurable stimulus Φ and the subjective percept Ψ based on *Weber's Law* in Eq. (2.5). He set the JND equal to a non-measurable increment of the subjective percept and integrated over stimulus intensities defined by increments of the JND.<sup>1</sup> This leads to *Fechner's Law* as given in Eq. (2.7):

$$
\Psi = c \log \Phi \tag{2.7}
$$

In Eq. (2.7), *c* is a constant depending on the investigated sensory system. However, *Fechner's Law* is based on two assumptions that were rendered invalid by further studies: the assumption of a universally valid *Weber's Law* and the assumption that an increment as large as the current JND evokes a constant increment in perception [79, Chap. 1]. In the mid-20th century, S. S. Stevens proposed the *Power Law*, a new formulation of the relation between objective stimuli and subjective percepts, based on experimental data that could not be explained by *Fechner's Law*:

<sup>1</sup> An elaborate derivation can be found in [79, Chap. 1].


$$
\Psi = c\Phi^a \tag{2.8}
$$

In Eq. (2.8), *c* is again a scaling parameter, which is often neglected in further analysis. The parameter *a* denotes the coupling between the subjective percept and the objectively measurable stimulus and depends on the individual experiment. Taking the logarithm of Eq. (2.8), it can be calculated as the slope of the resulting straight line log Ψ = log *c* + *a* log Φ [79, Chap. 13]. The *Power Law* can be summarized shortly as *"a constant percentage change in the stimulus produces a constant percentage change in the sensed effect"* [83, p. 16]. To analyze these changes, particular psychometric procedures can be used, as found for example in [83]. Typical values in haptics include *a* = 3.5 for the intensity of electrical stimulation at the fingertip (a 20% increase will double the perceived intensity) and *a* = 0.95 for a vibration with a frequency of 60 Hz (i.e., a slightly compressive relation).
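These numerical claims can be checked directly against Eq. (2.8); the helper name is an illustrative choice:

```python
def stevens(phi, a, c=1.0):
    """Stevens' Power Law, Eq. (2.8): perceived magnitude for stimulus phi."""
    return c * phi ** a

# Electrical stimulation at the fingertip, a = 3.5:
ratio = stevens(1.2, a=3.5) / stevens(1.0, a=3.5)
# A 20 % stimulus increase roughly doubles the perceived intensity
# (1.2 ** 3.5 is about 1.89).

# Vibration at 60 Hz, a = 0.95 (slightly compressive):
ratio_vib = stevens(2.0, a=0.95) / stevens(1.0, a=0.95)
# Doubling the stimulus slightly less than doubles the percept.
```

Note that the scaling constant *c* cancels in such ratios, which is why it is often neglected in the analysis.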

#### **2.1.2.4 Factors Influencing Haptic Perception**

From different studies of haptic perception, external influencing factors are known that affect absolute and differential thresholds. They originate in the properties of the different blocks given in Fig. 2.1. For the design of haptic systems they have to be considered as disturbance variables, or they can be used to purposefully manipulate the usage conditions. An example may be the design of a grip or the control of a minimum or maximum grip force at the end effector of a haptic system. The following list gives the technically relevant influencing factors.


Furthermore, the absolute number, and not the density, of mechanoreceptors is approximately constant among test persons. Therefore the size of the hand is relevant for the perception capabilities: smaller hands are more sensitive [17]. Since there is a (slight) correlation between sex and hand size, this explains some contradictory studies about the dependency of haptic perception on the sex of the test person [89, 91, 93–95].

**Other Factors** Several other factors influencing haptic perception thresholds can be identified, such as the menstrual cycle [85, 88, 96], diseases like *bulimia* or *anorexia nervosa* [97], skin moisture [98], and the consumption of alcohol and tobacco. In the design of haptic systems these factors cannot be incorporated in a meaningful way, since they can neither be controlled nor influenced in system design or usage.

#### **2.1.2.5 What Do We Feel?**

To investigate perception, an exact physical representation of the stimulus must be known. In auditory perception this is the sound pressure, which affects the eardrum and is conducted via the middle ear to the nerve fibers in the cochlea and the organ of Corti. Visual perception is based on the detection of photons of a particular wavelength by the cones and rods in the retina.

In haptics, one finds different physical representations of stimuli, namely forces *F* and kinematic measures like acceleration *a*, velocity *v*, or deflection *d*. The usage of a certain representation mainly depends on the purpose of the study or the system: forces are sometimes easier to describe and measure because of their character as a flux coordinate defined at a single point. Kinematic measures exhibit the character of a differential coordinate, i.e., they can only be measured in relation to a previously defined reference. Many studies (especially of dynamic stimuli) are based on kinematic measures, since their definitions do not depend on the mechanical properties of the test person. Greenspan showed psychophysical measurements with less variation when stimuli were defined by kinematic measures rather than by force [99].

However, there is evidence that humans do not simply feel forces or kinematic measures. Perception is most likely based on the distribution of mechanical energy in the skin, where the mechanoreceptors are located. This distribution cannot be described in detail with reasonable effort (although there are some attempts at FE modeling of the human skin [100–103]); furthermore, it cannot be produced as a controlled stimulus for psychophysical experiments.

A common approach is to consider the human skin as a mechanical system whose properties are not changed by haptic interaction. This is supported by studies conducted by Hogan, who showed that a human limb can be modeled as a passive mechanical impedance for frequencies higher than the maximum frequency of human motion capabilities. In that case, forces and kinematic measures coupled into the skin are related via the mechanical impedance *z*user according to Eq. (2.9)

$$\frac{\underline{F}}{\underline{v}} = \underline{z}\_{\text{user}} \tag{2.9}$$

with *v* = d*d*/d*t* = ∫ *a* d*t*. Applied to perception this means that each force perception threshold could be calculated from other thresholds defined by kinematic measures via the mechanical impedance of the test person. This relation is used in a couple of studies [104, 105] to calculate force perception thresholds from deflection-based measurements. The author's own studies used force-based measurements to experimentally confirm the relation given in Eq. (2.9) [106].
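As a sketch of this conversion for a sinusoidal stimulus: the deflection amplitude is turned into a velocity amplitude and scaled by the impedance magnitude. The impedance value used here is a made-up placeholder, not a measured value from the cited studies:

```python
import math

def force_threshold(deflection_m, frequency_hz, impedance_mag):
    """Convert a sinusoidal deflection threshold into a force threshold.

    Per Eq. (2.9), F = z_user * v; for a sinusoid the velocity amplitude
    is v = 2 * pi * f * d. impedance_mag is the assumed magnitude of
    z_user in N s/m at the given frequency.
    """
    velocity = 2.0 * math.pi * frequency_hz * deflection_m
    return impedance_mag * velocity

# Hypothetical example: a 0.25 um deflection threshold at 160 Hz with an
# assumed fingertip impedance magnitude of 2 N s/m:
f_thresh = force_threshold(0.25e-6, 160.0, 2.0)
# yields a force threshold on the order of half a millinewton
```

In practice *z*user is complex and frequency-dependent, so such a conversion has to be evaluated per frequency with measured impedance data.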

One can therefore conclude that perception is based on the complex distribution of mechanical energy in the skin. For the design of haptic systems, a simplified consideration of the user as a source of mechanical energy with its own mechanical parameters, as given by the mechanical impedance *z*user, is applicable. Furthermore, this model is also valid for the description of perception, linking perception parameters via the mechanical impedance as well. Some important psychometric parameters are given in the next section of this chapter; a detailed view on the modeling of the user is given in Chap. 3. Meanwhile, modern and fast imaging technologies have revealed that dynamic mechanical stimulation reaches much further than the area of interaction. Shao et al. showed that oscillations induced at the fingertip result in responses reaching almost as far as the wrist [107, 108]. The contribution and importance of such afferent vibrations to the overall perception is the subject of ongoing research.

Hayward asked in [109]: *Is there a "plenhaptic" function?* A question still unanswered. But it is a tempting assumption that, with the right understanding of the perceptual dimensions, we could translate the physical domain into a perceptual domain in all the temporal, macro-, and micro-dynamics we know.

## *2.1.3 Characteristic Values of Haptic Perception*

There is a vast number of studies investigating haptic perception. For the design process of haptic systems the focus does not lie on single biological receptors, but rather on the human's combined perception resulting from the sum of all tactile and kinaesthetic mechanoreceptors. As outlined in the following chapters, a dedicated investigation of perception thresholds is probably advisable for the selected grip configuration of a haptic system. This section gives some results of the most important studies, but cannot be complete. It is ordered according to the type of psychometric parameter. To interpret the results of the different studies correctly, Fig. 2.14 explains the anatomical terms for skin locations and skeleton parts.

#### **2.1.3.1 Absolute Thresholds**

One of the most advanced studies of haptic perception was carried out by the group of Gescheider et al. Probably the most popular curve is the absolute perception threshold of vibrotactile stimuli, defined by deflections of the skin at the thenar eminence, as given in Fig. 2.15 [110]. Since the channel model arose from the work

**Fig. 2.14** Anatomical terms for skin areas and skeleton parts of the human hand

of this group, a lot of their studies deal with these channels and their properties. In Fig. 2.15 some properties of this model can be seen: the thresholds are influenced by the receptive fields, the highly sensitive RA-II receptors are only excited with large contact areas, and the most sensitive channel is responsible for the detection of a stimulus. Other work, not shown here, includes the investigation of the perception properties of the fingertip [19] and intensive studies of masking and summation properties [88].

Other relevant studies were conducted by Israr et al., investigating vibrotactile deflection thresholds of hands holding a stylus [104] and a sphere [105], two quite common grip configurations of haptic interfaces. They investigated seven frequencies in the range of 10–500 Hz with an adaptive staircase (1up-3down progression rule) and a 3IFC paradigm and found absolute thresholds of 0.2–0.3 µm at 160 Hz. The studies include the calculation of the mechanical impedance and of force perception thresholds as well. Brisben et al. investigated the perception thresholds of vibrotactile deflections tangential to the skin, a condition becoming more and more important when dealing with tactile feedback on touch screen displays. Whole-hand grasps and single digits were investigated with an adaptive staircase (different progression rules) and 2- and 3IFC paradigms. They additionally investigated perception thresholds for 40 and 300 Hz stimuli at different locations on the hand and with different contact areas. Newer studies by Gleeson et al. investigate the influence of several stimulus parameters like velocity, acceleration, and total deflection [111] on the perception of shear stimuli. They found the accuracy of direction perception to depend on both speed and total displacement of the stimulus, with accuracy rates greater than

**Fig. 2.15** Absolute threshold of tactile perception channels at the thenar eminence with respect to contact size. Measurements were conducted with closed-loop velocity control of the stimuli. To address individual channels, combinations of frequencies, intensities, contact areas and masking effects are employed. The psychometric procedure used converges at a detection probability of *p* = 0.75. Figure is adapted from [110]

95% occurring at a tangential displacement of 0.2 mm and a displacement speed of 1 mm s−1. The study further includes an analysis of priming and learning effects and the application to skin-stretch-based communication devices.
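The adaptive staircases used in these studies can be sketched compactly. The following Python sketch (all parameter values hypothetical, not taken from the cited studies) implements a 3-down/1-up transformed staircase against a simulated observer; following Levitt's transformed up-down rules, a 3-down/1-up progression converges on the level detected with probability 0.5^(1/3) ≈ 0.794, while a 2-down/1-up rule converges at 0.707, the value given for the procedures in Figs. 2.16 and 2.17.

```python
import math
import random
import statistics

def simulate_staircase(true_threshold, slope=1.0, start=10.0, step=1.0,
                       n_down=3, n_reversals=10, seed=0):
    """n-down/1-up transformed staircase against a simulated observer.

    The observer answers correctly according to a logistic psychometric
    function with a 0.5 guessing floor (a 3IFC task would use 1/3).
    All parameters are illustrative, not from the cited studies.
    """
    rng = random.Random(seed)

    def p_correct(level):
        return 0.5 + 0.5 / (1.0 + math.exp(-(level - true_threshold) / slope))

    level, correct_in_row, last_direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if rng.random() < p_correct(level):
            correct_in_row += 1
            if correct_in_row == n_down:      # n correct in a row -> step down
                correct_in_row = 0
                if last_direction == +1:      # direction change = reversal
                    reversals.append(level)
                last_direction = -1
                level -= step
        else:                                 # a single miss -> step up
            correct_in_row = 0
            if last_direction == -1:
                reversals.append(level)
            last_direction = +1
            level += step
    # threshold estimate: mean of the last reversal levels
    return statistics.mean(reversals[-6:])
```

For a simulated threshold of 5, the estimate from the last reversals settles close to the observer's 79.4 %-correct level.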

One of the most important effects on haptic perception originates in the size of the contact area. All of the above-mentioned studies show lower perception thresholds for frequencies around 200 Hz with larger contact areas. However, this effect seems to be limited by the minimum area required to arouse mechanoreceptors in the PC channel, which is probably about 3 cm<sup>2</sup>, corresponding to a contactor diameter of about 20 mm. When more than one finger is involved in the interaction, [24] did not find a summation effect of thresholds.

Regarding the perception of forces, the corresponding absolute thresholds can be calculated according to Eq. (2.9). There are only a few studies dedicated to the absolute perception of forces. Thornbury and Mistretta investigated the sensitivity to tactile forces applied by a modified version of von Frey filaments. They found a significant influence of age on the absolute threshold, most likely related to the decrease of mechanoreceptor density. Young subjects (mean age 31 years) exhibited absolute thresholds of 140 µN, while older subjects had higher thresholds of about 660 µN, measured with a staircase method, constant stimulus intensities and a 2IFC paradigm. Since the stimuli were applied manually by the experimenter, the application dynamics cannot be determined from the study, but probably contribute to the very low reported thresholds. Abbink and van der Helm investigated absolute force perception at the foot with different footwear (socks, sneaker, bowling shoe) for low-frequency stimuli (< 1 Hz) and a static preload of 25 N. They found the lowest perception threshold of 8 N in the sock condition, whereas the perception threshold is

**Fig. 2.16** Absolute force perception threshold based on experiments with 27 test persons, measured with a quasistatic preload of 1 N. Thresholds are obtained with an adaptive staircase procedure converging at a detection probability of 0.707 with a 3IFC paradigm. Data is given as boxplots, since the data are not normally distributed for all frequencies. The boxplot denotes the median (horizontal line), the interquartile range (IQR, closed box defined by the 0.25- and the 0.75-quantile), the data range (dotted line) and outliers (data points more than 1.5 IQRs from the 0.25- or the 0.75-quantile). The indentation denotes the confidence interval of the median (α = 0.05). Data taken from [74, 106] © Springer Nature, all rights reserved

defined with a detection probability greater than 0.98. Also motivated by the small number of studies, the author of this chapter measured perception thresholds for vibrotactile forces up to 1000 Hz as shown in Fig. 2.16.

In summary, one can find a large number of studies determining absolute thresholds for the perception of stimuli defined by deflections. Fewer studies have been conducted regarding the absolute perception of forces. Table 2.4 summarizes some values of absolute perception thresholds for the human hand.

#### **2.1.3.2 Differential Thresholds**

For haptics, several studies furnish evidence about the applicability of Weber's Law as stated in Eq. (2.5). Gescheider et al. [121] as well as Verrillo et al. [122] measured JNDs of 1…3 dB for deflection-defined stimuli with reference stimuli of 5…40 dB above the absolute threshold for frequencies exceeding 250 Hz. The measurements of Gescheider et al. are based on broadband and single-frequency stimulus excitation. They show an independence of channels for the JND, whereas no fully constant JND was determined for high reference levels. This is addressed


**Table 2.4** Selected absolute thresholds of the human hand

<sup>a</sup> If movement is permitted, isolated surface structures of 0.85 µm height can be perceived [36, 117]. If surface roughness is to be detected, stimuli as low as 0.06 µm are perceived [36]

<sup>b</sup> The two-point threshold decreases if the two stimuli are presented shortly after one another; a position change of a stimulus can be resolved spatially up to ten times better than the static two-point threshold [112]

<sup>c</sup> The perception threshold is strongly dependent on the vibration frequency, the location of the stimulus and the size of the contact area [88]

<sup>d</sup> Amplitudes larger than 0.1 mm are perceived as annoying at the fingertips [118]. A stimulation with constant frequency and amplitude results in a desensitization, increasing up to a numb feeling which may last several minutes after the end of the stimulation [119, 120]

<sup>e</sup> Whole hand grasping a cylinder with a diameter of 32 mm; vibrations were created along the cylinder axis

<sup>f</sup> A sphere with a diameter of 2 inches was grasped with the *phalanx distalis* of all fingers; the stylus is taken from a PHANToM haptic interface and held with three fingers

<sup>g</sup> A correct detection probability of at least 0.75 was measured for 12 frequencies ranging from 1 to 560 Hz in 22 subjects


**Table 2.5** Relevant parameters and results of studies of dynamic force JNDs. Table based on [74]

<sup>a</sup> In active conditions, test subjects were required to generate the movement on their own, while in passive conditions only the measurement setup exerts forces on the subject

<sup>b</sup> Entries are ordered according to the reference force

<sup>c</sup> JNDs are based on an experiment, where subjects could interact freely with a custom haptic interface described in [127]

as *"a near miss to Weber's Law"* by the authors [121], but this observation should not have a significant impact on the design of haptic systems.

Regarding the JND of forces, several studies were conducted with an active exertion of forces by the test person. Jones measured JNDs of about 7% in matching-force experiments with the elbow flexor muscles [123], a value that is confirmed by Pang et al. [124]. However, one cannot determine the measurement dynamics from the experimental setup; based on Fig. 1.9, a maximum bandwidth of 10…15 Hz seems likely. From other studies evaluating the perception of direction and perception-inspired compression algorithms (Sect. 2.4.4), estimates of the JND for forces can be made, as summarized in Table 2.5. All studies show JNDs over 10% for reference stimuli well above the absolute threshold and increasing JNDs for reference stimuli near the absolute threshold.
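Since some studies report JNDs as Weber fractions (ΔI/I) and others as level differences in dB, a conversion helper is useful when comparing them. A minimal sketch, assuming the amplitude level definition 20·log10 for force- and deflection-like quantities:

```python
import math

def weber_fraction_to_db(k):
    """Level difference corresponding to a Weber fraction k = dI/I:
    the increment stimulus I*(1 + k) lies 20*log10(1 + k) dB above I
    (amplitude-like quantities such as force or deflection)."""
    return 20.0 * math.log10(1.0 + k)

def db_to_weber_fraction(db):
    """Inverse conversion: a JND of `db` decibels as a Weber fraction."""
    return 10.0 ** (db / 20.0) - 1.0
```

With these definitions, the 1…3 dB deflection JNDs reported above correspond to Weber fractions of roughly 12…41 %, and a 7 % force JND to about 0.6 dB.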

The author's own studies evaluated the JND for dynamic forces in the range from 5…1000 Hz. As reference stimuli, the individual perception threshold and fixed values of 0.25 N and 0.5 N were used. The results are given in Fig. 2.17. They show no channel dependence (apart from a significantly higher value for the JND at 1000 Hz) and confirm the increasing JND for reference stimuli near the absolute threshold. However, at about 4…8 dB for frequencies below 1000 Hz, the JND in the 0.25 and 0.5 N conditions is higher than the previously reported values.

Jones and Hunter investigated the perception of stiffness and viscosity and found JNDs of 23% for stiffness [133] and 34% for viscosity [134] with a matching procedure using both forearms and stimuli generated by linear motors. The JND for stiffness is similar to the values reported in other studies [135, 136]. Further differential thresholds for the perception of haptic measures by the human hand are given in Table 2.6.

**Fig. 2.17** Just noticeable differences of dynamic forces. JNDs were calculated with an adaptive staircase procedure converging at a detection probability of 0.707 and a 3IFC paradigm from studies conducted with 29 test persons (absolute threshold reference) and 36 test persons (0.25 and 0.5 N reference conditions), respectively. The test setup is described in [131] © Elsevier, all rights reserved; a static preload of 1 N was used. Data taken from [74, 132] © Springer Nature, all rights reserved

#### **2.1.3.3 Object Properties**

The properties of arbitrary objects are closely related to the interaction primitives. Typical exploration techniques to detect object properties are dealt with in the following section. Besides the basic perception of form, size, texture, hardness and weight of an object, there are a couple of other properties relevant to the design of haptic systems. Bergmann Tiest reviews a large number of studies regarding the material properties roughness, compliance, temperature, and slipperiness. The results are relevant for the design of devices to display such properties; the representation of compliance is especially relevant for the interaction with virtual realities. Key points of the analysis are outlined in the following based on [143], while primary sources and some other references are cited as well. Klatzky et al. also review the perception of object properties and algorithms to render these properties in engineering applications [144]. The work of Samur, summarizing several studies about the perception of object properties, could be of further interest [145].

**Roughness** Roughness is one of the most studied object properties in haptic perception. The perception of roughness is based on an uneven pressure distribution on the skin surface in static touch conditions and on the vibrations arising when stroking a surface or object dynamically. It was shown that finer textures with


**Table 2.6** Selected differential thresholds of the human hand

<sup>a</sup> Experiment was made with a reference pressure of 1.8 N/cm<sup>2</sup> at the dorsal side of the wrist. The JND increased strongly with reduced contact area: 4.4% at 5.06 cm<sup>2</sup>, 18.8% at 1.27 cm<sup>2</sup> [138]

<sup>b</sup> Test subjects' limbs were positioned by the experimenter with no active movement involved

<sup>c</sup> A PHANToM haptic interface was used in both studies

<sup>d</sup> The capability to distinguish stimuli is reduced above 320 Hz [114]

<sup>e</sup> If one has to decide which of two stimuli was applied first, a minimum time of 20 ms is required between the onsets of the two stimuli [6]
particle sizes smaller than 100 µm can only be detected in dynamic touch conditions, while coarser textures can be detected in static conditions, too. Active and passive touch conditions have no effect on the perceived roughness. This is called the *duplex theory of roughness perception* [146]. However, not only the sensitive bandwidth and the touch condition have an influence on the human ability to perceive roughness: other studies found influences of the contact force, the other stimuli in the tested set and the friction between surface and skin. Regarding differential thresholds, Knowles et al. found JNDs of 10…20% for friction in rotary knobs [147], and Provancher et al. recorded JNDs of 19…28% for sliding virtual blocks against each other [148].

Scaling experiments showed that roughness can be identified as the opposite of smoothness. In similar experiments, no effect of visual cues was found and a power-function exponent (Eq. (2.8)) of 1.5 was measured. In a nutshell, the perception of roughness appears to be a complex interplay not only of material properties, but also of interaction conditions like friction and contact force. This makes the modeling of roughness challenging; on the other hand, there is a vast number of possibilities to display roughness properties in technical systems [149].
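The power function of Eq. (2.8), ψ = k·φ^a, makes the different scaling behaviors concrete; the following sketch uses the exponents reported in the cited studies (about 1.5 for roughness, about 0.8 for compliance, see below), while the gain k is arbitrary:

```python
def perceived_magnitude(phi, k=1.0, a=1.5):
    """Stevens' power law (Eq. 2.8): psi = k * phi**a.
    a > 1 (e.g. ~1.5 for roughness) means the percept grows faster than
    the stimulus; a < 1 (e.g. ~0.8 for compliance) means it grows slower."""
    return k * phi ** a

# doubling the physical stimulus with a = 1.5 nearly triples the percept,
# while with a = 0.8 the percept grows by only ~74 %
r_rough = perceived_magnitude(2.0, a=1.5) / perceived_magnitude(1.0, a=1.5)
r_soft = perceived_magnitude(2.0, a=0.8) / perceived_magnitude(1.0, a=0.8)
```

The gain k cancels in such ratios, which is why scaling experiments usually report only the exponent.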

**Compliance** This property describes the mechanical reaction of a material to an external force. It can be described by the Young's modulus or—technically more relevant—the stiffness of an object, which combines material and geometric properties of an object as shown further on in Eq. (2.10). When comparing physical stiffness with perceived compliance, a power-function exponent of 0.8 was calculated and softness and hardness were identified as opposites. For the perception of softness, cutaneous and kinaesthetic cues are used, while cutaneous information alone is both necessary and sufficient. Studies by Bergmann Tiest and Kappers determined that soft materials were mostly judged by the stiffness information, i.e. the relationship of material deformation to exerted force, while harder stimuli are judged by the surface deformation [150].

Several other studies show that the perception of the hardness of an object, i.e. the opposite of its compliance, is better modeled by the ratio of the temporal change of force to the penetration velocity than by the usually employed relation of force to penetration [151]. This has to be considered in the rendering of such objects and is therefore dealt with in Chap. 12. To render a haptic contact perceived as undeformable, necessary stiffnesses from 2.45 Nm−1 [138] to 0.2 Nm−1 [152] are reported.


radius of 204 mm could be discriminated from flat surfaces; for concave curvatures, a threshold of 185 mm was assessed with a detection probability of 0.75.

**Temperature** Although not exclusively an object property, basic properties of temperature perception are summarized here, partly based on [13]. Humans can detect intensity differences in warming pulses as low as 0.1% (base temperature of 34 °C, warming pulse with a base intensity of 6 °C) [158]. Changes of 0.16 °C for warmth and 0.12 °C for cold from a base temperature of 33 °C can be detected at the fingertip; thresholds are even lower at the thenar eminence. When skin temperature changes slowly at less than 0.1 °C s−1, changes of up to 6 °C in a zone of 30–36 °C cannot be detected. More rapid changes make even small changes perceivable. The technical use of temperature perception is limited to temperatures below 44 °C, above which damage is done to the human skin [159].

Perceptually more relevant is the thermal energy transfer from the skin into the object. Humans are able to discriminate a change in heat transfer of about 35–45%. Because of different thermal conductivities and specific heat capacities, different materials can be identified by static touch alone. Based on these heat transfer mechanisms, the modeling, rendering and display of thermal information is discussed in a number of studies cited in [143]. For technical applications, Jones and Ho discuss known models and the implications for the design of thermal displays [160].

Besides these general properties, there is a vast number of more complex object properties that arise largely in the interpretation of the user. It is difficult to find clear technological terms for these interpretations. In the literature, one approach to describe them can be found: users are asked to rate objects on different scales called semantic differentials. Based on these ratings, a multi-dimensional scaling procedure identifies similar scales [161]. Regarding surface properties, Hollins showed three general dimensions perceived by a user: rough ↔ smooth, hard ↔ soft and a third, not exactly definable dimension (probably elastic compressibility) [162]. This approach is also successfully used in the evaluation of passive haptic interfaces [163]. The accurate display of surface properties is still a relevant topic in haptic system design. Readers interested in this topic are pointed to the work of Wiertlewski [164] and the results of the HAPTEX project [165].

#### **2.1.3.4 Scaling Parameters**

Another important psychophysical measure is the interpretation of the intensity of different stimuli by the user, normally termed scaling. Especially for tactile applications, the perception of the intensity of normally and laterally applied stimuli is of importance. One of the first comparisons of the perception of tangential and normal stimuli was carried out by Biggs and Srinivasan. They found a 1.7…3.3 times higher sensitivity for tangential displacements compared to normal stimulation at both the forearm and the fingerpad. They conclude that tangential displacement is the better choice for actuators limited in peak displacement, while normal displacement should be chosen for actuators limited in peak force. One has to note that this is not caused by a higher sensitivity, but by differences in the mechanical impedance for normal and tangential stimuli [166].

Classical psychophysical evaluation of scaling behavior was reported by Hugony in the last century [118]. Figure 2.18 shows the results as curves of equal perceived intensity, denoting stimulus amplitudes at different frequencies that are perceived as equally intense by the user. Such curves can be applied to generate targeted intensity changes of complex stimuli: a slight amplitude increase of low-frequency components will evoke the same change of perceived intensity as a much larger amplitude change of mid- and high-frequency components. This behavior can be exploited to optimize the energy consumption of the actuators in a haptic system.

The results further imply a perception dynamic range as high as 50 dB (defined as the difference between absolute threshold and nuisance level), which is confirmed by newer studies like [122], stating a dynamic range of 55 dB. Other results from the study imply an amplitude JND of 10…25% and a JND of 8…10% for frequency. This agrees well with the results reported above.

#### **2.1.3.5 Some Words About the Quality of Studies About Haptic Perception**

Studies of haptic perception are conducted by scientists with various backgrounds. Depending on the formal training and customs in different disciplines, the author has experienced a large variation in the quality of haptic perception studies. Based on his own training in measurement and instrumentation and his own studies dealing with haptic perception, the following hints are given on how to assess the quality of a perception study for further use in the design of haptic systems.

**Measurement Goal and Hypothesis** A hypothesis should be stated for each experiment. Hypotheses formulated in terms of well-established psychophysical properties like the ones described above (Sect. 2.1) are preferable for the later comparability of the study results. Further, external influencing variables should be considered in the hypothesis formulation. In general, one can differentiate dependent, independent, controllable and confounding variables as shown in Table 2.7 for investigations of haptic perception.

Independent variables are addressed in the formulation of the hypothesis and are varied during the experiment. Depending on the hypothesis, known influencing variables can be considered as independent or controllable variables. Controllable variables have a known effect on the result of the experiment and should therefore be measured or closely watched. Possible means are keeping the test setup at a constant temperature, a pre-selection of test subjects based on age, body height and weight, etc. Confounding variables contribute to the measurement error and cannot be completely taken care of.

**Measurement Setup and Errors** The measurement setup of a haptic perception study should be well fit for the investigated perception parameter or intended result. This means for example, that all parts of the measurement setup should exhibit adequate frequency response, rated ranges and sampling rates for the expected values in the experiment.

The design and construction of the setup should be careful and tidy to prevent unwanted effects and errors, such as electromagnetic disturbance by other equipment in the lab. Setups should preferably be fully automated to prevent errors induced by the experimenter.

The setup should be documented, including all procedures and measurements of systematic and random errors. Based on a model of the measurement setup and its components, an analysis of systematic error propagation should be conducted, as well as a documented calibration of the setup and its components with known input signals and a null signal. Long-term stability, reproducibility, external influences and random errors should be analyzed and documented. The application of standardized methods like the Guide to the Expression of Uncertainty in Measurement (GUM) [167] is preferable. If possible, systematic errors should be corrected.
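For uncorrelated input quantities, the GUM combines standard uncertainties as a first-order root sum of squares. A minimal sketch with a purely hypothetical force-measurement budget (sensor noise, calibration, drift; values invented for illustration):

```python
import math

def combined_standard_uncertainty(contributions):
    """GUM combination for uncorrelated inputs:
    u_c = sqrt(sum((c_i * u_i)**2)) with sensitivity coefficients c_i
    and standard uncertainties u_i."""
    return math.sqrt(sum((c * u) ** 2 for c, u in contributions))

# hypothetical budget in newtons: (sensitivity, standard uncertainty)
u_c = combined_standard_uncertainty([(1.0, 0.5e-3),   # sensor noise
                                     (1.0, 1.2e-3),   # calibration
                                     (1.0, 0.3e-3)])  # drift
U = 2.0 * u_c   # expanded uncertainty, coverage factor k = 2 (~95 %)
```

Reporting the expanded uncertainty U together with the coverage factor makes perception thresholds from different setups comparable.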

**Measurement Procedure** There should be a considerable number of test persons in a study; a dedicated statistical analysis with fewer than 15 subjects seems questionable and should at least be justified in the study, explicitly addressing the type II error of the experimental design [168]. Larger numbers of 30 and more subjects are probably advisable.
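The required number of subjects can be estimated beforehand. A crude planning sketch using the normal approximation for a two-sided, two-sample comparison of means (a rough check, not a substitute for a proper power analysis):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Normal-approximation sample size for a two-sample comparison of
    means (two-sided test):
    n per group ~ 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)**2,
    where delta is the smallest difference worth detecting."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2.0 * ((z_a + z_b) * sigma / delta) ** 2)
```

For a medium effect of half a standard deviation (δ/σ = 0.5) at α = 0.05 and a power of 0.8, this yields 63 subjects per group; even a large effect (δ/σ = 1) still requires 16 per group, which illustrates why samples below 15 subjects are rarely sufficient.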

Regarding psychophysical procedures, a previously reported and preferably adaptive procedure should be used. Newer studies should only use non-adaptive procedures in the case of non-changeable stimuli (like gratings on real objects). The report of pre-tests and their impact on the design of the final study should be


**Table 2.7** Possible variables in haptic perception experiments

discussed in the documentation. Interactions with other sensory modalities like vision and audition should be kept in mind and controlled where necessary, for example by ear plugs and masking noise.

**Analysis** Data sets not included in the analysis should be identified and the criteria for this decision must be reported. All results should be analyzed statistically and the location parameters of the results should be given (i.e. mean and standard error for normally distributed results, median and IQR for non-normally distributed results). If external parameters are included in the study, an analysis of variance (ANOVA) as well as post-hoc tests for the significance of treatment group averages should be conducted and reported. If other analysis tools such as a confusion matrix are used, effort should be put into a statistical analysis of the significance of the result. Errors of the measurement setup should be addressed in the analysis.
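The robust location parameters used in Figs. 2.16 and 2.17 (median, IQR, outliers beyond 1.5 IQRs) can be computed directly with the Python standard library; note that `statistics.quantiles` defaults to the 'exclusive' method, so quartile values may differ slightly from other software conventions:

```python
import statistics

def boxplot_summary(data):
    """Median, interquartile range and outliers following the boxplot
    convention used above: outliers lie more than 1.5 IQRs outside the
    0.25- or 0.75-quantile."""
    q1, median, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return {"median": median, "iqr": iqr,
            "outliers": [x for x in data if x < low or x > high]}
```

Reporting median and IQR in this way is preferable whenever a normality test rejects the mean/standard-error description.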

If possible, results should be compared to other studies with similar setups and intentions. When large differences occur, a detailed discussion of these differences and suggestions for further studies are advisable. To enable further studies based on the experiment results, test results for all effects (not only the significant ones) should be reported, as they can be used to determine effect sizes (useful for sample size calculations, see [168]) and to conduct meta-studies [169].

If all of the above hints were strictly enforced, most conference proceedings would not report results, but only measurement setups and their characterization. However, keeping the criteria for good measurement setups in mind will improve the quality of results and broaden the usage possibilities of the study results.

## *2.1.4 Further Aspects of Haptic Perception*

Besides the classic psychophysical questions *(detection, discrimination, identification, scaling)*, there are a couple of other aspects relevant for the design of haptic systems. Some of them are discussed briefly in the following.

#### **2.1.4.1 Effects of Multiple Stimulation**

When more than one stimulus is applied in close temporal or spatial proximity to another, several effects of multiple stimulation are known. The following list is based on Verrillo [122]:


For haptics, especially masking effects were investigated, mainly by the group of Gescheider et al. [104, 171, 172, 174–177]. Studies of the other effects are not known to the author. At the moment, these multiple stimulation effects have to be considered as side effects in haptic interaction. Except for the analysis of receptor channels, no direct use of them is known to the author.

#### **2.1.4.2 Linearity of Haptic Perception**

Recent studies imply that the channels of haptic perception do not only have independent thresholds [178], but resemble a linear system. Cholewiak et al. investigated spatially displayed gratings and found that each spatial frequency harmonic has to be above the perception threshold at that frequency to be perceived by the user [179]. These results allow error margins and detection thresholds to be considered independently for each frequency in the design of haptic systems [180].
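Under this independent-channels assumption, the perceivability of a grating or texture can be checked per frequency component. A minimal sketch with hypothetical amplitudes and thresholds (arbitrary units, values invented for illustration):

```python
def perceivable_harmonics(spectrum, threshold):
    """Return the spectral components expected to be perceivable under
    the independent-channels assumption: a component is perceived only
    if its amplitude exceeds the detection threshold at its frequency.

    spectrum, threshold: dicts mapping frequency to amplitude.
    """
    return {f: a for f, a in spectrum.items()
            if a > threshold.get(f, float("inf"))}

# hypothetical grating: only the second harmonic exceeds its threshold
visible = perceivable_harmonics({1: 0.2, 2: 1.0, 3: 0.1},
                                {1: 0.5, 2: 0.4, 3: 0.3})
```

Components below their per-frequency threshold can thus be omitted or quantized coarsely without a perceivable difference.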

A first application of this property of haptic perception was presented by Allerkamp et al. in the design of a haptic system to display the surface properties of textiles: analogous to the spectral decomposition of an arbitrary color into red, green and blue, textures were analyzed to be represented by two dedicated vibration

**Fig. 2.19** Examples of haptic illusions **a** Müller-Lyer illusion, **b** Aristotle illusion

frequencies for single receptor types [181]. This approach minimizes hardware and data storage effort to present complex surface properties.

#### **2.1.4.3 Anisotropy of Haptic Perception**

Besides the above-mentioned differences in the scaling of normal and tangential stimuli on the skin, there is also an anisotropy of kinaesthetic perception and interaction capabilities [126, 182, 183]. The perception and control of proximal movements (towards the body) are worse than those of movements in the distal direction (away from the body). This property can be of relevance in the ergonomic design of workplaces with haptic interfaces and in tests and evaluations based on Cartesian coordinates.

#### **2.1.4.4 Fooling the Sense of Touch**

As in acoustics and vision, there are a couple of haptic illusions. They are generated by anatomical properties, neural processing or the misinterpretation of percepts, such as a conflict of visual and haptic perception [140]. Since many visual illusions can be found in haptics, too, and because of the similar neural processing and interpretation mechanisms, an explanation analogous to the visual system is anticipated [184]. As Hayward puts it, "Perceptual illusions have furnished considerable material for study and amusement" [185]. Two examples of basic haptic illusions are given in Fig. 2.19. The Müller-Lyer illusion on the left side is borrowed from visual perception, but can be demonstrated for haptic stimuli as well: both lines are perceived as being of different length because of the arrowheads, even though they have the same length. The Aristotle illusion can be reproduced easily by the reader: touching an object like a pencil with crossed fingers will evoke the illusion of two objects. If a wall is touched instead of an object, a straight wall will be perceived as a corner and vice versa. Further illusions can be found in the works of Hayward and Lederman [185, 186].

**Fig. 2.20** Kooboh, **a** outer form and internal components, **b** internal system model. Picture courtesy of J. Kildal, *Nokia Research Center*, Espoo, FIN, used with permission

#### *Example:* **Kooboh**

An application of haptic illusions in the design of haptic systems was presented by Kildal in 2012 [187]. Kooboh consists of a solid, non-deformable box with an integrated force sensor and a vibration actuator, as shown in Fig. 2.20. The control software simulates an internal system model consisting of a spring connected to a (massless) object sliding on a rough surface.

The user applies a force *F*<sub>a</sub> to the system, which would normally result in a deflection *d* = *F*<sub>a</sub>/*c* of the spring with stiffness *c*. When the object is moved by the applied force, a friction force *F*<sub>f</sub> is generated depending on the texture of the rough surface and the position of the object. Since the box is non-deformable, the reaction of the (virtual) spring cannot be felt. But because the applied force is measured by the force sensor, the theoretical deflection of the object and the resulting friction force *F*<sub>f</sub> can be calculated. Depending on the structure of the rough surface, *F*<sub>f</sub> will exhibit periodic, high-frequency components that can be displayed by the integrated actuator. The user interprets these two contradictory percepts as a fully functional model as shown in Fig. 2.20, efficiently neglecting that the system does not move.
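This rendering principle can be sketched in a few lines; the spring stiffness and the sinusoidal texture below are illustrative assumptions, not values or models from the original system [187]:

```python
import math

def kooboh_vibration(force, stiffness=500.0, texture_period=0.002):
    """Sketch of the Kooboh principle: the box never deforms, but the
    measured force F_a yields a virtual spring deflection d = F_a / c.
    A periodic texture at that virtual position modulates the friction
    signal rendered by the vibration actuator (normalized amplitude).
    Parameter values are hypothetical."""
    d = force / stiffness                        # virtual deflection in m
    # periodic high-frequency content derived from the virtual position
    return math.sin(2.0 * math.pi * d / texture_period)
```

As the user presses harder, the virtual object slides further across the texture, so the actuator output varies with the applied force alone, which is what sustains the illusion of internal movement.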

#### **Pseudo-Haptic Feedback**

An important technical application of another kind of haptic illusion is the use of disagreeing information on the visual and the haptic channel. Termed *pseudo-haptic feedback*, it is used in virtual environments to simulate properties like stiffness, texture or mass with limited or distorted haptic feedback and accurate visual feedback [54, 188]. A simple example is given by Kimura et al. in [189], as depicted in Fig. 2.21: a visual representation of a spring is displayed on a mobile phone equipped with a force sensor. The deformation of the visual representation depends on

**Fig. 2.21** Pseudo-haptic feedback in a mobile application [189] © Springer Nature, all rights reserved. The force exerted on the mobile device by the user is measured with pressure sensors. Based on this force, the deformation on the screen is calculated from a virtual stiffness, leading to the impression of a compliant device. Pictures courtesy of T. Nojima, University of Electro-Communications, Tokyo, JP

the force applied and the virtual stiffness of the displayed spring. Changing the virtual stiffness leads to a different visual representation and the feeling of different springs—although the user always presses the same unchanged mobile phone case.

#### **2.1.4.5 Haptic Icons and Categorized Information**

All of the above is based on continuous stimuli and their perception. Another important aspect is the perception of categorized information, which is used mainly in communication applications. Probably the most prominent example is the vibration alarm of a smartphone, which can be configured with different patterns for signaling a message or a call. Several groups have investigated basic properties of such haptic icons (sometimes also called tactons or hapticons) [190–192]. They found different combinations of waveform, frequency, pattern and spatial location suitable to create a set of distinguishable haptic icons based on multi-dimensional scaling analysis.

The use of categorized information in haptic systems introduces another measure of human perception, the information transfer (IT) [193]. This measure describes how much distinguishable information can be displayed with haptic signals defined by combinations of the above-mentioned signal properties. However, it is not a pure measure of perception, but also depends on the haptic system used. Because of that, it qualifies as an evaluation measure for haptic communication systems, as detailed in Chap. 13. Reported information transfer ranges from 1.4–1.5 bits for the differentiation of force magnitude and stiffness [135] up to 12 bits for multi-axis systems especially designed for haptic communication of deaf-blind people [171, 194].
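The IT estimate is obtained from a stimulus-response confusion matrix via the mutual-information formula underlying [193]; a minimal sketch:

```python
import math

def information_transfer(confusion):
    """Estimate information transfer (IT) in bits from a confusion
    matrix with stimuli in rows and responses in columns:
    IT = sum_ij p_ij * log2(p_ij / (p_i. * p_.j))."""
    n = sum(sum(row) for row in confusion)
    row_sums = [sum(row) for row in confusion]
    col_sums = [sum(col) for col in zip(*confusion)]
    it = 0.0
    for i, row in enumerate(confusion):
        for j, nij in enumerate(row):
            if nij:
                it += (nij / n) * math.log2(nij * n / (row_sums[i] * col_sums[j]))
    return it

# perfect identification of 4 stimuli transfers log2(4) = 2 bits
it_perfect = information_transfer([[10, 0, 0, 0], [0, 10, 0, 0],
                                   [0, 0, 10, 0], [0, 0, 0, 10]])
```

The upper bound for n perfectly distinguishable icons is log2(n) bits, so the 12 bits reported above correspond to several thousand distinguishable signals.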

## **2.2 Concepts of Interaction**

In daily life, only very few haptic interactions of humans with their environment can be classified as solely passive, i.e. as pure perception procedures. Most interactions are a combination of motion and perception to implement a previously defined intention. For the design of haptic systems, generally agreed-upon terms are needed to describe the intended functions of a system. In this section, some common approaches for this purpose are described. The section ends with a list of motion capabilities of the human locomotor system.

The taxonomy of haptic interaction by Samur as given in Sect. 1.4.2 is one of these possibilities. It was developed for the evaluation of systems interacting with virtual environments and is therefore most suitable for the description of such systems. Other interactions can be described by combinations of the taxonomy elements as well, but this lacks some intuition when describing everyday interactions. Stepping a little away from the technical basis of Samur's taxonomy and turning towards the functional meaning of haptic interaction for humans and their environment, one finds the exploration theory of Lederman and Klatzky outlined in the following section. Further concepts like active and passive touch are described, as well as gestures, which are commonly used as an input modality on touch screens and other hardware with similar functionality.

## *2.2.1 Haptic Exploration of Objects*

One of the most important tasks of haptic interaction is the exploration of unknown objects to assess their properties and usefulness. Not only tactile information, but also kinaesthetic perception contributes to these assessments. One of the most relevant sources for the evaluation of surfaces is the relative movement between the skin and the object.

In [195], Lederman and Klatzky identify different exploratory procedures that are used to investigate unknown objects. Figure 2.22 shows the six most important procedures [196]. Table 2.8 gives an insight into the costs and benefits when assessing certain object properties.

## *2.2.2 Active and Passive Touch*

The above-described combination of movement and perception is of such fundamental importance that two terms have been established to describe this type of interaction.

**Fig. 2.22** Important exploratory procedures, figure adapted from [196]

**Table 2.8** Correlation of exploratory procedures to ascertainable object properties according to [13], distinguishing properties that can be assessed optimally by an exploration technique from properties that are assessed only in a sufficient way


Used abbreviations: temp. - temperature, vol. - volume, mF - macroscopic form, eF - exact form

**Definition** *Active Touch* Active touch describes the interaction with an object, where a relative movement between user and object is controlled by the user.

**Definition** *Passive Touch* Passive touch describes the interaction with an object, when relative movement is induced by external means, for example by the experimental setup.

Both conditions can be summarized as *dynamic touch*, while the touch of objects without relative movement is defined as *static touch* [146]. This differentiation is, however, seldom used. Active touch is generally considered superior to passive touch in its performance. Lederman and Klatzky attribute this to the different focus of the observer [196, p. 1439]:

*Being passively touched tends to focus the observer's attention on his or her subjective bodily sensations, whereas contact resulting from active exploration tends to guide the observer's attention to properties of the external environment.*

Studies show that the assessment of material and system properties is independent of the exploration type (active or passive touch condition) [197, 198]. Active touch delivers a better performance for the exploration of geometric properties [196, 199]. From a technical point of view, the implementation of active exploration techniques is a challenge, since transmitted signals have to be synchronized with the relative movement.

## *2.2.3 Gestures*

Gestures are a form of non-verbal communication studied in a large number of scientific disciplines like the social sciences, history, communication and rhetoric and, quite lately, human-computer interaction. Concentrating on the latter, one can find gestures in the use of pointing devices like mice, joysticks and trackballs. More recently, gestures for touch-based devices have become more prominent. Some examples are given in Fig. 2.23; see [200] for a taxonomy of gestures in human-computer interaction. An informative list of all kinds of gestures can be found in Wikipedia under the entry "List of Gestures".

Gestures can be used as a robust input means in complex environments, for example in the car, as shown with touch-based gestures in Sect. 14.1 or based on a camera image [201]. For use in haptic interaction, gestures have further meaning when interacting with virtual environments, as discrete input options in mobile applications, and in connection with specialized haptic interfaces like AIREAL [202], which combines a 3D camera with haptic feedback through an air vortex, or the Ultra-Haptics project, which generates haptic feedback in free air by superposing the signals from a matrix of ultrasound emitters [203]. In 2017, a standard on the usage of gestures in tactile and haptic interaction (ISO 9241-960) was created, covering these among other items.

**Fig. 2.23** Gesture examples for touch input devices. **a** Horizontal flicker movement, **b** two-finger scaling, **c** input gesture for the letter *h*. Pictures by *Gestureworks*, used with permission

## *2.2.4 Human Movement Capabilities*

Since users will interact with haptic systems, the capabilities of human movement have to be taken into account.

#### **2.2.4.1 Dynamic Properties of the Locomotor System**

While anatomy will answer questions regarding possible movement ranges (see [204], for example), there are a few studies dealing with the dynamic abilities of humans. Tan et al. conducted a study to investigate the maximum controllable force and the average force control resolution [138]. They found maximum controllable forces that could be maintained for at least 5 s in the range of 16–51 N for the joints of the hand and forces in the range of 35–102 N for the wrist, elbow and shoulder joints. Forces about half as large as the maximum force could be controlled with an accuracy of 0.7–3.4%. This study is based on just three test persons, but other studies find similar values, for example when grasping a cylindrical grip with forces ranging from 7 N (proximal phalanx of the little finger) to 99 N (tip of the thumb) [205]. An et al. find women's hand strengths in the range of 60–80% of men's hand strengths [206].<sup>2</sup>

Regarding velocities, Hasser derives velocities of 60–105 cm/s for the tip of the extended index finger [205] and about 17 rad/s for the MCP and PIP joints. Brooks reports maximum velocities of 1.1 m/s and maximum accelerations of 12.2 m/s² from a survey of 12 experts on telerobotic systems [114].

#### **2.2.4.2 Properties of Interaction with Objects**

When touching a surface, users show exploration velocities of about 2 cm/s (with a range of 1–25 cm/s) and contact forces ranging from 0.3–4.5 N [207]. Other studies confirm this range for tapping with a stylus [208] and when evaluating the roughness of objects [209]. Smith et al. found average normal forces of 0.49–0.64 N for exploring raised and recessed tactile targets on surfaces with the index finger. Recessed targets were explored with slightly larger forces and lower exploration speed (7.67 cm/s compared to 8.6 cm/s for raised targets); increased friction between finger and explored surface led to higher tangential forces. While the average tangential force in the normal condition was 0.42 N, it rose to 0.65 N in the increased-friction condition (realized by a sucrose coating of the fingertip) [210].

For minimally invasive surgery procedures with tool-mediated contact, radial (with respect to the endoscopic tool axis) forces up to 6 N and axial forces up to 16.5 N were measured by Rausch et al. High forces of about 4 N on average were recorded for tasks involving holding, pressing and pulling of tissue, low forces were

<sup>2</sup> Unfortunately, the number of test subjects involved in the studies is not reported.

used for tasks like laser cutting and coagulation, all measured with a force measuring endoscope operated by medical professionals as reported in [211, 212]. Tasks were carried out with movement frequency components of up to 9.5 Hz, which is in line with the above reported values (Fig. 1.9).

Hannaford created a database with measurements of force and torque for activities of daily living such as writing and dialing a cell phone, among others [213].

## **2.3 Interaction Using Haptic Systems**

In this section, interactions using haptic systems are discussed and the nomenclature for haptic systems is derived from these interactions. The definitions are derived from general usage in the haptics community and a number of publications by different authors [214–218], as well as logically extended based on the interaction model shown in Fig. 2.24.

While used in a general way up to this point, the term *haptic systems* will be defined as follows:

**Definition** *Haptic Systems* Systems interacting with a human user by the means of haptic perception and interaction. Although modalities like temperature and pain also belong to the haptic sense, *haptic systems* refers only to purely mechanical interaction in this book. In many cases, the term *haptic device* is used synonymously for haptic systems.

In that sense, haptic systems not only cover the fundamental haptic inputs and outputs, but also the system control instances needed to drive actuators, read out sensors and take care of data processing. This is in accordance with known definitions of mechatronic systems like the one by Cellier [219]:

*A system is characterized by the fact that we can say what belongs to it and what does not, and by the fact that we can specify how it interacts with its environment. System definitions can furthermore be hierarchical. We can take the piece from before, cut out a yet smaller part of it and we have a new system.*

The terms *system*, *device* and *component* are not clearly defined on an interdisciplinary basis. Depending on one's point of view, the same object can be a "device" for a hardware designer, a "system" for a software engineer or just another "component" for another hardware engineer. These terms are therefore also used in different contexts in this book.

**Fig. 2.24** Haptic interaction between humans and environment. **a** Direct haptic interaction, **b** Utilization of haptic systems. The interaction paths are denoted as follows: I—Intention, P—Perception, M—Manipulation, S—Sensing, C—Comanipulation/other senses

Compared to other perception modalities, haptics offers the only bidirectional communication means between the human user and the environment [220, p. 94]. A *user* is defined as

**Definition** *User* A person interacting (haptically) with a (haptic) system. The user can convey intentions to the system and receives (haptic) information depending on the *application* of the system. In that sense, a *test person* or *subject* in a psychophysical experiment is a user as well, but not all users can be considered subjects.

In this book, a haptic system is always considered to have a specific *application* as for example the ones outlined in Sect. 1.5. We therefore also define this term as follows:

**Definition** *Application* Intended utilization of a haptic system.

One has to keep in mind that this definition includes -→ commercial off-the-shelf (COTS) haptic interfaces coupled to a computer with a software program to visualize biochemical components as well as the use of a specially designed haptic display as a physical interface. Especially in this section, the term *application* therefore has to be considered context-sensitive.

Figure 2.24 gives a schematic integration of an arbitrary haptic system in the interaction between a human user and a (virtual) environment. Based on this, one can identify typical classes of haptic systems.

## *2.3.1 Haptic Displays and General Input Devices*

Probably the most basic haptic system, shown in Fig. 2.25, is a

**Definition** *Haptic Display* A haptic display solely addresses the interaction path **P** with actuating functions. Mechanical reactions of the human user have no direct influence on the information displayed by the haptic display, since user actions are not recorded and cannot be provided to the application.

Haptic displays are used to convey information originating from status information of the system incorporating the display. Typical applications are -→ Braille row displays and, of course, the vibration alarm in mobile devices. Since the overlap with the next class of systems is somewhat fuzzy, for the rest of this book a haptic display is defined as a device that only incorporates actuating functions but no sensory functions (except the internal ones needed for the correct functionality of the actuating part). This type of device is mainly used in communication applications, as shown for example in Fig. 1.17, subfigures (a), (c) and (d). Often, a haptic display can be seen as a mechatronic component of a haptic system with additional functionality, for example an *assistive system* as described in the next section.

For completeness, systems addressing only the interaction path **I** can also be identified. These are basically general input devices like buttons, keyboards, switches, touch screens and mice that record intentions of the user mechanically and convey them to an application. Being mechanical components themselves, they naturally exhibit mechanical reactions felt as haptic feedback by the user, but these are normally independent of the application. For example, the haptic feedback from a computer keyboard is the same for the *F1* key and the *Return* key, while the effects of these intentions are quite different. Therefore, general input devices are defined as devices with a predominant input functionality that can be used in different applications and a subordinate haptic feedback that is independent of the application and results unintentionally from the actual mechanical design of the input device. With the focus on generality, specialized input devices like emergency stop buttons are excluded, since they exhibit a defined haptic feedback to convey the current state of the input device.

**Fig. 2.25** Interaction scheme of a haptic display

## *2.3.2 Assistive Systems*

This class of haptic systems, shown in Fig. 2.26, is based on haptic displays, but also includes an application-dependent sensory function.

**Definition** *Haptic Assistive System* A system that adds haptic information to a natural, non-technically mediated haptic interaction on path **P**, based on sensory input on path **S**.

Assistive systems are a main application area for haptic displays. The sensory input of assistive systems is not necessarily of a mechanical kind. However, compared to a haptic display as described above, an assistive system adds to existing, natural haptic interaction (i.e. interaction without any technical means).

## *2.3.3 Haptic Interfaces*

If an intention-recording function and an *intended* haptic feedback functionality are combined, another class of haptic systems can be defined, as shown in Fig. 2.27:

**Definition** *Haptic Interface* Haptic interfaces address the interaction path **P** with actuating functions, but also record the user's intentions along the interaction path **I** with dedicated sensory functions. These data are fed to the application and evoke commands to the system or visualization under control. Depending on the application, a mechanical user input can result in direct haptic feedback.

Haptic interfaces are mostly used as universal operating devices to convey interactions with different artificial or real environments. Typical applications with task-specific interfaces include stall-warning sidesticks in aircraft and force-feedback joysticks in consumer applications. Another application is the interaction with virtual environments, which is normally achieved with a large number of -→ COTS haptic interfaces. These can also be used in a variety of other interaction tasks; some applications were outlined in Sect. 1.5. Some -→ COTS haptic interfaces are shown in Fig. 2.28, as well as an example of a task-specific haptic interface for driving assistance. Other task-specific haptic interfaces are developed for use in medical training systems.

In general, -→ COTS devices support input and output at only a single point in the workspace. The position of this -→ Tool Center Point (TCP) in the workspace of the device is sent to the application, and all haptic feedback is generated with respect to this point. Since the interaction with a single point is somewhat unintuitive, most devices supply contact tools like styluses or pinch grips that mediate the feedback to the user. This grip configuration is a relevant design parameter and is further addressed in Sect. 3.1.3. Figure 2.29 shows some typical grip situations of -→ COTS devices with such *tool-mediated contact*.

#### **2.3.3.1 System Structures**

To fulfill the requirement of independent channels for input (user intention) and output (haptic feedback) of the haptic interface and the physical constraint of energy conservation, one can define exact physical representations of the input and output of haptic interfaces. This leads to two fundamental types of haptic systems that are defined by their mechanical inputs and outputs as follows:

**Fig. 2.28** Two haptic interfaces, (PHANToM Premium 1.5, © 2022 *3D Systems geomagic Solutions*, Rock Hill, SC, USA) and Accelerator Force Feedback Pedal (AFFP, © 2022 *Continental Automotive*, Hannover, Germany). Both images used with permission

**Fig. 2.29** Realizations of tool-mediated contact in commercial haptic interfaces. **a** omega.6 with a stylus interface (*Force Dimension*, Nyon, Switzerland), **b** Falcon with a pistol-like grip for gaming applications (*Novint*, Rockville Centre, NY, USA), **c** and **d** pinch and scissor grip interfaces for the PHANToM Premium (© 2022 *3D Systems geomagic Solutions*, Rock Hill, SC, USA). All images used with permission

**Definition** *Impedance-Type System* Impedance-type systems (or just *impedance systems*) exhibit a mechanical input in the form of a kinematic measure and a mechanical output in the form of a force or torque. In the case of a haptic interface, the mechanical input (in most cases the position of the device's -→ TCP) is conveyed as an electronic output to be used in other parts of the application.

**Definition** *Admittance-Type System* Admittance-type systems exhibit a mechanical input in the form of a force or a torque, which is in most cases conveyed as an electronic output as well. The mechanical output is given by a kinematic measure, for example deflection, velocity or acceleration.

The principal differentiation between impedance-type and admittance-type systems is fundamental to haptic systems. It is therefore further detailed in Chap. 6.
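As an illustration of the impedance-type structure (position in, force out), one rendering cycle of a virtual wall can be sketched as a penalty-based spring-damper model. All names and parameter values below are illustrative, not taken from a specific device:

```python
def impedance_step(x, v, wall_pos=0.0, k=1500.0, b=5.0):
    """One cycle of an impedance-type rendering loop: position/velocity in,
    force out. Renders a virtual wall occupying x < wall_pos as a
    spring-damper (penalty-based contact), a common minimal model.
    x, v : measured device position [m] and velocity [m/s]
    k, b : virtual stiffness [N/m] and damping [N s/m] (illustrative values)
    """
    penetration = wall_pos - x        # how far the probe is inside the wall
    if penetration <= 0.0:            # free space: no force is rendered
        return 0.0
    f = k * penetration - b * v       # spring pushes out, damper dissipates
    return max(f, 0.0)                # a wall only pushes, it never pulls

# in free space -> 0 N; 1 mm inside the wall, at rest -> 1.5 N
print(impedance_step(0.001, 0.0))     # -> 0.0
print(impedance_step(-0.001, 0.0))    # -> 1.5
```

An admittance-type system inverts this signal flow: it measures the applied force and commands a position or velocity in response.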

#### **2.3.3.2 Force Feedback Devices**

The term *force feedback* is often used for the description of haptic interfaces, especially in advertising force-feedback-joysticks, steering wheels and other consumer products. A more detailed analysis of these systems yields the following characteristics for the majority of such systems:


These characteristics show a quite deep level of detail. The only comparable term with a similar depth of detail is perhaps *tactile feedback*, mostly defining spatially distributed feedback in the dynamic range of passive interaction (Fig. 1.9). However, these terms are used so widely in technical and non-technical applications, with different and not agreed-upon definitions, that they will not be used in this book in favor of other, clearly defined terms. In that case, force feedback devices would probably better be described as *impedance-type interfaces with tool-mediated haptic feedback*. Since this is a scientific book, the longer term is preferred to an unclear definition.

## *2.3.4 Manipulators*

There is only a limited number of systems from outside the haptic community that can be classified as impedance systems. For admittance systems, one can find haptic interfaces (for example, the Haptic Master interface shown in Fig. 1.14 is an admittance-type interface) as well as mechanical manipulators from other fields. For example, industrial robots are normally designed as admittance systems that can be commanded to a certain position and measure reaction forces if equipped properly. In the nomenclature of haptic system design presented here, such robots can be defined as manipulators:

**Definition** *Manipulator* Technical system that uses interaction path **M** to manipulate or interact with an object or (remote) environment. Sensing capabilities (interaction path **S**) are used for the internal system control of the manipulator and/or for generating haptic feedback to a user.

Figure 2.30 shows the corresponding interaction scheme.

**Fig. 2.30** Interaction scheme of a manipulator

## *2.3.5 Teleoperators*

The combination of a haptic interface and a manipulator yields the class of teleoperation systems with the interaction scheme shown in Fig. 2.31.

**Definition** *Teleoperation Systems* A combined system recording the user's intentions on path **I**, conveying them via the manipulation path **M** to a real environment, measuring interactions on the sensing path **S** and providing haptic feedback to the user via the perception path **P**.

An extension of teleoperators is the class of -→ telepresence and teleaction (TPTA) systems, which include additional feedback from other senses like vision and/or audition. Both terms are sometimes used synonymously. Teleoperation systems allow a spatially separated interaction of the user with a remote physical environment. The simplest system is achieved by coupling an impedance-type haptic interface with an admittance-type manipulator, since inputs and outputs correspond directly. Often, impedance-impedance couplings are used because of the availability of components, which places higher demands on the system controller.
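The simplest coupling named above can be sketched as a position-forward/force-back loop: the interface position is scaled into a kinematic command for the manipulator, and the measured environment force is scaled back to the user. The class and scaling factors below are illustrative placeholders; real teleoperation controllers add filtering and stability-ensuring measures (see Chap. 7):

```python
class PositionForceTeleoperation:
    """Minimal position-forward / force-back coupling between an
    impedance-type master interface and an admittance-type slave
    manipulator (illustrative sketch, no dynamics or delay handling)."""

    def __init__(self, position_scale=1.0, force_scale=1.0):
        self.kp = position_scale   # motion scaling, master -> slave
        self.kf = force_scale      # force scaling, slave -> master

    def step(self, master_pos, slave_force):
        slave_cmd = self.kp * master_pos          # kinematic command (admittance input)
        master_feedback = self.kf * slave_force   # force displayed to the user
        return slave_cmd, master_feedback

# downscaled motion (e.g. for microsurgery), 1:1 force feedback
tele = PositionForceTeleoperation(position_scale=0.1, force_scale=1.0)
slave_cmd, user_force = tele.step(0.02, 3.0)
print(f"{slave_cmd:.3f} m, {user_force:.1f} N")
```

The scaling factors also show why such systems can exceed natural interaction: motion can be scaled down and forces scaled up independently.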

## *2.3.6 Comanipulators*

If additional mechanical interaction paths are present, telepresence systems turn into a class of systems called comanipulators [214]:

**Definition** *Comanipulation System* A telepresence system with an additional direct mechanical link between the user and the environment or object interacted with.

Comanipulation systems are often used in medical applications, since they minimize the technical effort compared to a pure teleoperator because of less moving mass, fewer active -→ DOF and smaller workspaces, but they also induce new challenges for the control and stability of a system. In an application, the user will move the reference frame of the haptic system.

**Fig. 2.31** Teleoperation interaction scheme

Compared to the above-mentioned assistive systems, comanipulators exhibit a full teleoperational interaction scheme with additional direct feedback, while assistive systems add haptic feedback to a non-technically mediated interaction between user and application. This is shown in Fig. 2.32.

## *2.3.7 Haptic System Control*

To make the above-described systems usable in an application, another definition of a more technical nature has to be introduced:

**Definition** *Haptic System Control* The haptic system control is the part of a real system that not only controls the individual mechanical and electrical components to ensure proper sensing, manipulation and display of haptic information, but also takes care of the connection to other parts of the haptic system. This may be, for example, the connection between a haptic interface and a manipulator, or the interface to virtual reality software.

While the pure control aspects are addressed in Chap. 7, one also has to consider other design tools and information structures like Event-Based Haptics (Sect. 11.3.4), Pseudo-Haptics (Sect. 2.1.4.4) and the general connection to software (Chap. 12) using a real interface (Chap. 11). In this book and in other sources, one will also find the term *haptic controller* used synonymously for the whole complex of the haptic system control described here.

## **2.4 Engineering Conclusions**

Based on the above, one can derive a general structure of the interaction with haptic systems and assign certain attributes to the different input and output channels of a haptic system. This is shown in Fig. 2.33, which extends Fig. 2.24. In the figure, the output channel of the haptic system towards the user is separated into mainly tactile and mainly kinaesthetic sensing channels. This is done with respect to the explanations given in Sect. 1.4.1 and with the knowledge that there are many haptic interfaces that

**Fig. 2.33** General input and output ports for a haptic system in interaction with the human hand. Figure based on [6, 221] © Wiley, all rights reserved; values from [221] are based on surveys among experts and are labeled with an asterisk (\*), other values are taken from the different sources stated above

will fit into this classification, which will also be used occasionally further on in this book. The parameters given in Fig. 2.33 provide an informative basis for the interaction with haptic systems.

In the remaining part of this section, several conclusions for the design of task-specific haptic systems are drawn based on the properties of haptic interaction.

**Fig. 2.34** Concept of modalities and their frequency dependency

## *2.4.1 A Frequency-Dependent Model of Haptic Properties*

Haptics, and especially tactile feedback, is a dynamic impression. There are few to no static components. Without going into scientific findings, a simple impression of the dynamic range covered by haptic and tactile feedback can be gained by taking a look at different daily interactions (Fig. 2.34).

When handling an object, the first impression to be explored is its *weight*. There is probably no one who was not caught by surprise at least once when lifting an object that turned out to be lighter than expected. The impression is usually of comparably low frequency and typically directly linked to the active touch and movement applied to the object.

Exploring an object with the finger to determine its fundamental *shape* is the next interaction type in terms of its dynamics. When touching objects like that, a global deformation of the finger and a tangential load on its surface are relevant to create such an impression. Research by Hayward has shown that indeed the pure inclination of the surface under the finger already creates an impression of shape. However, since this property is still quite global, the dynamic information coded in it is not very rich.

The dynamics increase when it becomes urgent to react. One of the most critical situations our biology of touch is well prepared for is the detection of *slippage*. Constant control of the normal forces applied to an object prevents it from slipping out of our grasp. Being highly sensitive to shear and stick-slip, this capability enables us to gently interact with our surroundings.

When it comes to *slippage*, *textured* surfaces and their dynamics must be mentioned too. Their frequency content of course depends on geometrical properties; however, their exploration during active touch typically happens in the range above 100 Hz. In this range, discrimination of *textures* is naturally most acute, as the vibrotactile sensitivity of the human finger climbs to its highest level.

Whether *gratings* differ from *textures* is something that can be discussed endlessly. The principal excitation of the tactile sensory apparatus may be identical; however, *gratings* are more like a Dirac pulse, whereas *textures* are more comparable to a continuous signal.

Last but not least, hard contacts and the properties they reveal about an object reflect the most dynamic signal processing a haptic interaction may involve. Surprisingly, a strong impact on an object reveals more about its volume and structural properties than any gentle interaction ever can. Therefore, *stiffness* is worth a set of thoughts of its own in the following section.

## *2.4.2 Stiffnesses*

Already the initial touch of a material gives us information about its haptic properties. A human is able to immediately discriminate whether he or she is touching a wooden table, a piece of rubber or a concrete wall with the fingertip. Besides the acoustic and thermal properties, especially the tactile and kinaesthetic feedback plays a large role. Based on the simplified assumption of a plate fixed on both sides, its stiffness *k* can be calculated using Young's modulus *E* according to Eq. (2.10) [222].

$$k = 2\frac{b\,h^3}{l^3} \cdot E \tag{2.10}$$

Figure 2.35a shows the calculated stiffnesses of a plate with an edge length of 1 m and a thickness of 40 mm for different materials. In comparison, the stiffnesses of commercially available haptic systems are given in Fig. 2.35b. It is obvious that the stiffnesses of haptic devices are orders of magnitude lower than the stiffnesses of concrete everyday objects like tables and walls. However, stiffness is just one criterion for the design of a good haptic system and should not be overestimated. The wide range of stiffnesses reported to be needed for the rendering of undeformable surfaces, as shown in Sect. 2.1.3, is strong evidence of the interdependency of several different parameters. The comparison above shall make us aware of the fact that a pure reproduction of solid objects can hardly be realized with a single technical system. It rather takes a combination of stiff and dynamic hardware, since especially the dynamic interaction at high frequencies dominates the quality of haptics, as extensively discussed in the last section.
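A quick evaluation of Eq. (2.10) illustrates the gap; the Young's moduli used below are typical handbook values, not taken from the text:

```python
def plate_stiffness(E, b, h, l):
    """Eq. (2.10): stiffness k = 2 * b * h^3 / l^3 * E of a plate fixed
    on both sides. Lengths in m, Young's modulus E in Pa, result in N/m."""
    return 2.0 * b * h**3 / l**3 * E

# 1 m x 1 m plate, 40 mm thick, as used for Fig. 2.35a.
# Young's moduli are typical handbook values (assumed here, not from the text):
for name, E in [("rubber", 0.05e9), ("wood", 11e9), ("steel", 210e9)]:
    k = plate_stiffness(E, b=1.0, h=0.04, l=1.0)
    print(f"{name:6s} k = {k:10.3e} N/m")
# steel gives k of roughly 2.7e7 N/m, several orders of magnitude above
# the stiffnesses commercial haptic devices reach (cf. Fig. 2.35b)
```

The cubic dependence on thickness *h* also explains why thin structures feel compliant even when made of stiff materials.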

**Fig. 2.35 a** Comparison between stiffnesses of a 1 × 1 × 0.04 m³ plate of different materials and **b** realizable stiffnesses of commercial haptic systems

## *2.4.3 One Kilohertz—Significance for the Mechanical Design?*

As stated above, haptic perception ranges up to a frequency of 10 kHz, whereby the area of highest sensitivity lies between 100 Hz and 1 kHz. This wide range of haptic perception enables us to perceive microstructures on surfaces with the same accuracy as it enables us to identify the point of impact when drumming with our fingers on a table.

For a rough calculation, the model according to Fig. 2.36 is considered: a parallel circuit of a mass *m* and a spring *k*. Assuming an identical "virtual" volume *V* of material and taking the individual density ρ for a qualitative comparison, the border frequency *fb* for a step response can be calculated according to Eq. (2.11).

$$f\_b = \frac{1}{2\pi} \sqrt{\frac{k}{m}} = \frac{1}{2\pi} \sqrt{\frac{k}{V\rho}}\tag{2.11}$$

Figure 2.36 shows the border frequencies of a selection of materials. Only in the case of rubber and soft plastics do border frequencies below 100 Hz appear. Harder plastic material (Plexiglas) and all other materials show border frequencies above 700 Hz. One obvious interpretation would state that any qualitatively good simulation

**Fig. 2.36** 3 dB border frequency *fb* of an excitation of a simple mechanical model parametrized as different materials

of such a collision demands at least this bandwidth of dynamics within the signal conditioning elements and the mechanical system.
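Equation (2.11) can be evaluated directly; since the text does not specify the virtual volume or stiffness values used for Fig. 2.36, the numbers below are placeholders chosen only to show the scaling behavior:

```python
import math

def border_frequency(k, V, rho):
    """Eq. (2.11): f_b = (1 / 2pi) * sqrt(k / (V * rho)) for a parallel
    mass-spring circuit, with the mass expressed as a virtual volume V
    of material with density rho."""
    return math.sqrt(k / (V * rho)) / (2.0 * math.pi)

# Placeholder parameters: a stiffness of 1 kN/m acting on 1 cm^3 of a
# material with density 1000 kg/m^3 (i.e. an effective mass of 1 g)
print(border_frequency(1e3, 1e-6, 1000.0))   # -> about 159 Hz
# a 100x stiffer material at equal volume and density shifts f_b up
# by a factor of sqrt(100) = 10:
print(border_frequency(1e5, 1e-6, 1000.0))   # -> about 1592 Hz
```

The square-root dependence means that reproducing hard materials pushes the required bandwidth up quickly, which motivates the 1 kHz discussion that follows.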

As a consequence, a frequent recommendation for the design of haptic systems is the transmission of a full bandwidth of 1 kHz (and in some sources even up to 10 kHz). This requirement is valid with respect to software and communications engineering, as sampling systems and algorithms can achieve such frequencies easily today. Considering the mechanical part of the design, we see that dynamics of 1 kHz are enormous, maybe even utopian. Figure 2.37 gives another rough calculation of the oscillating force amplitude according to Eq. (2.12).

$$F\_0 = \left| \underline{\mathbf{x}} \cdot (2\pi f)^2 \cdot m \right| \tag{2.12}$$

The basis of the analysis is a force source generating an output force *F*0. The load of this system is a mass (e.g. a knob) of just 10 g. The system does not have any additional load, i.e. it does not have to generate any haptically active force on a user. A periodic oscillation of frequency *f* and amplitude *x* is assumed. With an expected oscillation amplitude of 1 mm at 10 Hz, a force of approximately 10 mN is necessary. At a frequency of 100 Hz, a force of 2–3 N is already needed. At a frequency of 700 Hz, the force increases to 100 N, and this is what happens when merely moving a mass of 10 g. Of course, in combination with a user impedance as load, the amplitude of the oscillation will decrease to values below 100 µm, proportionally decreasing the necessary force. But this calculation should make us aware of the simple fact that the energetic design and power management of electromechanical systems for haptic applications need to be done very carefully.
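The quadratic growth of the required force with frequency in Eq. (2.12) can be checked directly; the fixed amplitude of 0.1 mm used below is illustrative, whereas Fig. 2.37 varies amplitude and frequency together:

```python
import math

def force_amplitude(x, f, m):
    """Eq. (2.12): peak force F0 = |x * (2*pi*f)^2 * m| needed to move a
    mass m sinusoidally with amplitude x at frequency f."""
    return abs(x * (2.0 * math.pi * f) ** 2 * m)

m = 0.01          # the 10 g knob from the text
x = 1e-4          # fixed amplitude of 0.1 mm (illustrative)
for f in (10, 100, 700):
    print(f"{f:4d} Hz: F0 = {force_amplitude(x, f, m):7.3f} N")
# the force grows with the square of the frequency:
# a factor of 100 between 10 Hz and 100 Hz at equal amplitude
```

The same function reproduces the equipotential view of Fig. 2.37: along a line of constant force, admissible amplitude falls with 1/f².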

The design of a technical haptic system is always a compromise between bandwidth, stiffness, dynamics of signal conditioning and maximum force-amplitudes. Even with simple systems the design process leads the engineer to the borders of what is physically possible. Therefore it is necessary to have a good model for the

**Fig. 2.37** Equipotential line of necessary forces in dependency of amplitude and frequency of the acceleration of a mass with 10 g

user, both as a load on the mechanical system and in terms of his or her haptic perception. This model enables the engineer to carry out an optimized design of the technical system, which is the focus of Chap. 3. However, there is also the option to use psychophysical knowledge to trick perception by technical means.

## *2.4.4 Perception-Inspired Concepts for Haptic System Design*

At the end of this chapter, two examples shall illustrate the technical importance of an understanding of perception and interaction concepts. The chosen examples present two technical applications that purposefully use unique properties of the haptic sensory channel to design innovative and better haptic systems.

#### *Example:* **Event-Based-Haptics**

Based on the bidirectional view of haptic interactions (Sect. 1.4.2), with a low-frequency kinaesthetic interaction channel and a high-frequency tactile perception channel, Kontarinis and Howe published a new combination of kinaesthetic haptic interfaces with additional sensors and actuators for higher frequencies. Tests included the use in the virtual representation and exploration of objects [223] as well as the use in teleoperation systems.

Based on this work, Niemeyer et al. proposed *Event-Based Haptics* as a concept for increasing realism in virtual-reality applications [224]. In superposing the

**Fig. 2.38** Integration of VerroTouch into the DaVinci Surgical System. Figure adapted from [226] © Springer Nature, all rights reserved

kinaesthetic reactions of a haptic interface with high-bandwidth transient signals for certain events like touching a virtual surface, the haptic quality of this contact situation can be improved considerably [225]. The superposed signals are recorded using accelerometers and played back open-loop if a predefined interaction event takes place.

This concept proved to be a valuable tool for the rendering of haptic interactions with virtual environments. Rendering quality is increased with comparatively small hardware effort in the form of additions to (existing) kinaesthetic user interfaces. Technically not an addition to an existing kinaesthetic system, but still based on the Event-Based Haptics approach, the VerroTouch system by Kuchenbecker et al. was developed as an addition to the DaVinci Surgical System. It adds tactile and auditory feedback based on vibrations measured at the end of the minimally invasive instrument attached to the robot [226]. These vibrations are processed and played back using vibratory motors attached to the DaVinci controls and additional auditory speakers.

The system shown in Fig. 2.38 is able to convey the properties of rough surfaces and contact events with manipulated objects. The augmented interaction was evaluated positively in a study with 11 surgeons [227]. Objective task metrics showed neither an improvement nor an impairment of the tested tasks.

## *Example:* **Perceptual Deadband Coding**

*Perceptual Deadband Coding (PD)* is a perception-oriented approach to minimize the amount of haptic data that has to be transmitted in real-time applications such as teleoperation [127, 228]. To achieve this data reduction, new data is only transmitted from the slave to the master side if the change compared to the preceding data point is greater than the → JND. Perceptual Deadband Coding is illustrated for the one-dimensional case in Fig. 2.39, but can easily be extended to multi-dimensional so-called dead-zones [229].
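The transmission rule can be sketched in a few lines of Python. This is only a sketch, not the actual coding scheme from [127, 228]; the Weber-type deadband ratio and the sample format are assumptions made for illustration.

```python
def deadband_filter(samples, jnd_ratio=0.1):
    """Perceptual deadband: forward a sample only if it deviates from the
    last transmitted value by more than the just-noticeable difference,
    here modeled as a Weber fraction of the last transmitted value."""
    transmitted = []   # (index, value) pairs that cross the deadband
    last = None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > jnd_ratio * abs(last):
            transmitted.append((i, x))
            last = x   # the deadband is re-centred on each sent sample
    return transmitted

events = deadband_filter([1.0, 1.05, 1.02, 1.2, 1.19, 1.5])
# → [(0, 1.0), (3, 1.2), (5, 1.5)] — only 3 of 6 samples are transmitted
```

On the receiving side, the last transmitted value is simply held (or extrapolated) until the next update arrives, which is why the scheme remains imperceptible as long as the deadband matches the JND.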

## **Recommended Background Reading**



## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 3 The User's Role in Haptic System Design**

**Thorsten A. Kern, Christian Hatzfeld, and Fady Youssef**

**Abstract** Consequently, a good mechanical design has to consider the user and his or her mechanical properties. The first part of this chapter discusses the user as a mechanical *load* on the haptic device. The corresponding model is split into two independent elements depending on the frequency range of the oscillation. Methods and measurement setups for the derivation of the mechanical impedance of the user are reviewed, and a thorough analysis of impedance for different grip configurations is presented. In the second part of the chapter, the user is considered as the ultimate measure of quality for a haptic system. The relation of psychophysical parameters like the absolute threshold or the JND to engineering quality measures like resolution, errors and reproducibility is described, and application-dependent quality measures like haptic transparency are introduced.

## **3.1 The User as Mechanical Load**

Fady Youssef and Thorsten A. Kern

## *3.1.1 Mapping of Frequency Ranges onto the User's Mechanical Model*

The area of active haptic interaction—movements made in a conscious and controlled way by the user—is of limited dynamic range. Sources concerning the dynamics of human movements differ, as outlined in the preceding chapters. The fastest conscious

F. Youssef e-mail: f.youssef@tuhh.de

Christian Hatzfeld deceased before the publication of this book.

T. A. Kern (B) · F. Youssef

Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany e-mail: t.a.kern@tuhh.de

C. Hatzfeld Technische Universität Darmstadt, Darmstadt, Germany

movement performed by humans is done with their fingers. Typing movements of up to 8 Hz can be observed.<sup>1</sup> As these values refer to a ten-finger interaction, they have to be modified slightly. However, as the border frequency of a movement lies above the pure repetition rate of the event, an assumed upper border frequency of 10 Hz for active, controlled movements covers most cases.

The major part of the spectrum of haptic perception is passive (*passive haptic interaction*, Fig. 1.9). The user does not have any active influence or feedback within this passive frequency range. In fact, the user is able to modify his or her properties as a mechanical load, e.g. by altering the force when holding a knob. But although this change influences the higher frequency range, the change itself happens with lower dynamics, within the dynamic range of active haptic interaction. A look at haptic systems addressing tactile and kinaesthetic interaction channels shows that the above modeling has slightly different impacts:


If you transfer the model of Fig. 3.1 into an abstract notation, all blocks correspond to transfer functions *G*Hn. Additionally, it has to be considered that the user's reaction is a combined result of complex habits and the perception *K*;<sup>2</sup> therefore, the necessity to simplify this branch of the model becomes evident. For the purpose of device design and requirement specification, the conscious reaction is modeled as a disturbing variable that is only limited in bandwidth, resulting in a block diagram according to Fig. 3.2c for kinaesthetic and Fig. 3.2d for tactile devices.

<sup>1</sup> 8 Hz corresponds to a typing speed of 480 keystrokes per minute. 400 keystrokes per minute are regarded as very good for a professional typist, 200–300 keystrokes are good, and 100 keystrokes can be achieved by most laymen.

<sup>2</sup> *K*, a variable chosen completely arbitrarily, is a helpful construct for the understanding of block diagrams rather than having a real neurological analogy.

**Fig. 3.1** User models as a block structure for kinaesthetic (a+c) and tactile (b+d) systems

The transfer function *G*H3 corresponds to the mechanical admittance of the grasp above the border frequency of user interaction *fg*.

With regard to the application of the presented models there are two necessary remarks to be considered:


The following sections on user impedance give a practical model for the transfer function *G*H3 used in Fig. 3.2.

**Fig. 3.2** Transformation of the user models' block structures into transfer functions, including simplifications of the model for the area of active haptic interaction, for kinaesthetic (a+c) and tactile (b+d) systems

## *3.1.2 Modeling the Mechanical Impedance*

The user's reaction as part of any haptic interaction combines a conscious, bandwidth-limited portion—the area of active haptic interaction—and a passive portion mainly resulting from the mechanical properties of fingers, skin and bones. The influence of this second part stretches across the whole frequency range but dominates at high frequencies. This section describes the passive part of haptic interaction. The transfer function *G*H3 of Fig. 3.2 is a component of the impedance coupling with force input and velocity output and is therefore a mechanical admittance of the human, *Y <sup>H</sup>*, or, in its reciprocal form, the mechanical impedance *Z <sup>H</sup>*.

$$\underline{G}\_{\rm H3} = \frac{\underline{v}\_{\rm spo}}{\underline{F}\_{\rm out}} = \frac{\underline{v}\_{\rm out} - \underline{v}\_{\rm ind}}{\underline{F}\_{\rm out}} = \underline{Y}\_{\rm H} = \frac{1}{\underline{Z}\_{\rm H}} \tag{3.1}$$

In the following, this mechanical impedance of the user will be specified. The parameter impedance combines all mechanical parameters of an object or system that can be expressed in a linear, time-invariant description, i.e. mass *m*, stiffness *k* and damping *d*. High impedance therefore means that an object has at least one of three properties:


In any case, with high impedance a small movement (velocity v) results in a large force reaction *F*. Low impedance means that the object—the mechanics—is accordingly soft and light; even high velocities then result in small counter-forces. The human's mechanical impedance depends on a number of influencing parameters:


The quantification of the human's mechanical impedance requires taking as many aspects into account as possible. The type of grasp is defined by the mechanical design of the device. Nevertheless, a selection of typical grasping situations gives a good overview of typical impedances appearing during human-machine interaction. The user-individual parameters like physiological condition and skin structure can best be covered by the analysis of a large number of people in different conditions. By choosing this approach, a span of percentiles can be acquired covering the mechanical impedances typically appearing with human users. The "free will" itself, however, is—similar to the area of active haptic interaction—hard if not impossible to model. The time-dependent and unpredictable dependency of the user impedance on the will can only be compensated if the system is designed to cover all possible impedance couplings of actively influenced touch. Another approach would be to indirectly measure the will in order to adapt the impedance model of the user within the control loop. In many typical grasping situations, such an indirect measure is the force applied between two fingers or even the whole hand holding an object or a handle. In the simplest design, the acquisition of such a force can be done by a so-called *dead-man switch*, which was proposed for use in haptic systems by Hannaford as early as 1988 [11]. A dead-man switch is pressed as long as the user holds the control handle in his or her hand. It detects the release of the handle, resulting in a change in impedance from *Z <sup>H</sup>* to 0.
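The combination of mass, stiffness and damping into one frequency-dependent impedance can be made concrete with a short numerical sketch. The parameter values below are arbitrary illustrations, not measured data:

```python
import math

def impedance_mag(f_hz: float, m: float, k: float, d: float) -> float:
    """|Z(jw)| = |F/v| of a point mass m suspended by a spring k and a
    damper d: Z = d + j*(w*m - k/w)."""
    w = 2 * math.pi * f_hz
    return math.hypot(d, w * m - k / w)

# Stiffness dominates at low frequencies, inertia at high frequencies;
# in between, the magnitude dips to the pure damping d at the resonance
# f0 = sqrt(k/m) / (2*pi)  (about 36 Hz for the values below).
for f in (10, 36, 100, 1000):
    print(f"{f:5d} Hz: {impedance_mag(f, m=0.01, k=500, d=2):7.2f} Ns/m")
```

The dip at the resonance is the one-mass analogue of the anti-resonances discussed for the grasp measurements later in this chapter.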

## *3.1.3 Grips and Grasps*

There is a nomenclature for different types of grasps, shown in Fig. 3.3. The hand is an extremity with 27 bones and 33 muscles. It combines 13 (fingers only) respectively 15 (incl. the wrist) degrees of freedom.<sup>3</sup> Accordingly, the human capability to grasp is extremely versatile.

<sup>3</sup> Thumb: 4 DoF, index finger: 3 DoF, middle finger: 2 DoF (sometimes 3 DoF), ring finger: 2 DoF, small finger: 2 DoF, wrist: 2 DoF. The rotation of the whole hand happens in the forearm and therefore does not count among the degrees of freedom of the hand itself.

**Fig. 3.3** Grip configurations, figure based on [3]

There are three classes of grasps to be distinguished:


Further discrimination of grasps is made by Feix et al. and documented online [5], with the purpose of reducing the mechanical complexity of anthropomorphic hands [6]. The reported taxonomy can be useful for very specialized, task-specific systems. For all classes of grasps, measurements of the human's impedance can be performed. Following the approach presented by Kern [20], the measurement method and the models of user impedance are presented below, including the corresponding model parameters.

## *3.1.4 Measurement Setup and Equipment*

The acquisition of mechanical impedances is a well-known problem in measurement technology. The principle of measurement is based on an excitation of the system to be measured by an actuator, simultaneously measuring force and velocity responses of the system. For this purpose combined force and acceleration sensors (e.g. the impedance sensor 8001 from *Brüel & Kjær*, Nærum, DK) exist, whereby the charge amplifier of the acceleration sensor includes an integrator to generate velocity signals.

In [28], Wiertlewski and Hayward argue, based on [2], that measurements with impedance heads are prone to measurement errors because of the mechanical construction of the sensor. However, errors induced by the construction of the measurement head appear at frequencies above 2000 Hz, values that are only seldom relevant in the design of haptic interfaces. Furthermore, interpersonal variations and a calibration of the measurement setup based on a concentrated network parameter approach are used in the following to minimize the errors even for high frequencies.

In general, the impedance of organic systems is *nonlinear* and *time-variant*. This nonlinearity is a result of the general viscoelastic behavior of tissue, resulting from a combined response of relaxation, conditioning, stretching and creeping [9]. These effects can be reproduced by mechanical models with concentrated elements. However, they depend on the time history of the excitation applied to the measured object. It can be expected that measurements based on step excitation differ from those acquired with a sinusoidal sweep. Additionally, the absolute measurement time has some influence on the measures via conditioning. Both effects are systematic measurement errors. Consequently, the models resulting from such measurements are an indication for the technical design process and should always be interpreted with awareness of their variance and errors (Fig. 3.4).

All impedance measures presented here are based on a sinusoidal sweep from upper to lower frequencies. The excitation was made with a defined force of 2 N amplitude at the sensor. The mechanical impedance of the handle was determined by calibration measurements and subtracted from the measured values. The impedance sensors are, of course, limited in their dynamic range and amplitude resolution. As a consequence, the maximum frequency up to which a model is valid depends on the type of grasp and the handle used during measurement. This limitation is a direct result of the amplitude resolution of the sensors and of the necessity, at high frequencies, of a significant difference between the user's impedance and the handle's impedance for the model to be built on. The presented model parameters are limited to the acquired frequency range and cannot be applied to lower or higher frequencies. The measurement setup is given in Fig. 3.5.

Bochereau et al. [1] introduced a device to record, reproduce and image fingertip friction. In this study, the Frustrated Total Internal Reflection (FTIR) principle

**Fig. 3.4** Measurement setup for the acquisition of user impedances according to [20] © Springer Nature, all rights reserved

**Fig. 3.5** Impedance measurement settings for different grasps

was used to image the evolution of the fingertip contact area over time. The device, shown in Fig. 3.6, consists of different parts. One part is designed to record the friction force resulting from the movement of the user's finger over a texture; three load cells are used in this recording phase, two of which compute the normal force and one the tangential component. The second part of the device is designed to reproduce the friction forces with the help of a linear electrodynamic motor. The motor is connected to a glass plate and vibrates it, so that the imaging phase can take place.

**Fig. 3.6** Device to record, replay and image finger friction according to [1]

## *3.1.5 Models*

In order to approximate the human's impedance, a number of different approaches were taken in the past (Fig. 3.7). For its description, mechanical models based on concentrated linear elements were chosen. They range from models including active user reactions represented by force sources (Fig. 3.7a) to models with just three elements (Fig. 3.7c) and combined models of different design. The advantage of a mechanical model compared to a defined transfer function with a certain degree in numerator and denominator results from the possibility of interpreting the elements of the model as a picture of physical reality. Elasticities and dampers connected directly to the exciting force can be interpreted as the coupling to the skin. Additionally, the interconnected elements of the mechanical model create a very high system order, which allows a much better fit to measurements than free transfer functions.

Kern [20] defined an eight-element model based on the models in Fig. 3.7 for the interpolation of the performed impedance measures. The model can be characterized by three impedance groups typical for many grasping situations (Fig. 3.8).

*Z*<sup>3</sup> (Eq. 3.4) models the elasticity and damping of the skin in direct contact with the handle. *Z*<sup>1</sup> (Eq. 3.2) is the central element of the model and describes the mechanical properties of the dominating body parts—frequently fingers. *Z*<sup>2</sup> (Eq. 3.3) gives insight into the mechanical properties of the limbs, frequently hands, and allows assumptions to be made about the pre-loads in the joints in a certain grasping situation.

$$\underline{Z}\_1 = \frac{s^2 m\_2 + k\_1 + d\_1 \, s}{s} \tag{3.2}$$

**Fig. 3.7** Modeling the user with concentrated elements, **a** [11], **b** [18], **c** [23], **d** [21], own illustrations

**Fig. 3.8** Eight-element model of the user's impedance [20] © Springer Nature, all rights reserved, modeling the passive mechanics for frequencies >20 Hz

$$\underline{Z}\_2 = \left(\frac{s}{d\_2 s + k\_2} + \frac{1}{s m\_1}\right)^{-1} \tag{3.3}$$

$$\underline{Z}\_3 = \frac{d\_3 \, s + k\_3}{s} \tag{3.4}$$

$$\underline{Z}\_{B} = \underline{Z}\_{1} + \underline{Z}\_{2} \tag{3.5}$$

Combined, the model's total impedance is given as

$$\underline{Z}\_{\rm H} = \underline{Z}\_{3} \| \underline{Z}\_{B} \tag{3.6}$$

$$\underline{Z}\_{\rm H} = \left( \frac{s}{d\_3 s + k\_3} + \left( \frac{s^2 m\_2 + k\_1 + d\_1 s}{s} + \left( \frac{s}{d\_2 s + k\_2} + \frac{1}{s m\_1} \right)^{-1} \right)^{-1} \right)^{-1} \tag{3.7}$$
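Equation (3.7) can be evaluated directly for a frequency sweep; the short sketch below does so in Python, with the three impedance groups built from the parameters m1, m2, k1 to k3 and d1 to d3. The numeric values are placeholders for illustration only; the identified values for each grasping situation are listed in the Appendix.

```python
import math

def Z_H(f_hz, m1, m2, k1, k2, k3, d1, d2, d3):
    """User impedance of the eight-element model, Eq. (3.7):
    Z_H = Z3 || (Z1 + Z2)."""
    s = 2j * math.pi * f_hz
    Z1 = (s**2 * m2 + k1 + d1 * s) / s            # Eq. (3.2): dominating body parts
    Z2 = 1 / (s / (d2 * s + k2) + 1 / (s * m1))   # Eq. (3.3): limbs, joint pre-load
    Z3 = (d3 * s + k3) / s                        # Eq. (3.4): skin coupling
    return 1 / (1 / Z3 + 1 / (Z1 + Z2))           # Eqs. (3.5), (3.6)

# Magnitude in dB over frequency (placeholder parameters, not from [20]):
params = dict(m1=0.1, m2=0.01, k1=500, k2=2000, k3=2000, d1=2, d2=5, d3=1)
for f in (20, 50, 100, 200, 500, 1000):
    print(f"{f:5d} Hz: {20 * math.log10(abs(Z_H(f, **params))):6.1f} dB")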

## *3.1.6 Modeling Parameters*

For the above model (Eq. 3.7), the mechanical parameters can be identified by measurement and approximation with real values. For the values presented here, between 48 and 194 measurements were made. The automated algorithm combines an evolutionary approximation procedure with a subsequent curve fit based on Newton optimization to achieve a final adjustment of the evolutionarily found starting parameters according to the measurement data. The measurements vary according to the mechanical pre-load—the grasping force—needed to hold and move the control handles. This mechanical pre-load was measured by force sensors integrated into the handles. For each measurement this pre-load could be regarded as static and was kept by the subjects within a 5% range of the nominal value. As a result, the model's parameters could be quantified not only depending on the grasping situation but also depending on the grasping force. The results are given in the following section. The mechanical impedance is displayed in decibels, whereby 6 dB equals a doubling of impedance. The list of model values for each grasping situation is given in the Appendix.
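The two-stage identification described above, a global evolutionary search followed by a local refinement, can be illustrated schematically. The sketch below is not the algorithm from [20]: it fits only a two-parameter spring-damper model to synthetic data, and it replaces both stages by a single random search whose mutation radius shrinks over the generations.

```python
import math
import random

def model_mag(f_hz, k, d):
    """|Z| of a simple spring-damper pair: Z(jw) = d + k/(jw)."""
    w = 2 * math.pi * f_hz
    return math.hypot(d, k / w)

def fit_parameters(freqs, measured, generations=300, pop=40, seed=1):
    """Random search with elitism and a shrinking mutation radius: a
    schematic stand-in for 'evolutionary search + Newton curve fit'."""
    rng = random.Random(seed)
    def cost(p):
        return sum((model_mag(f, *p) - z) ** 2 for f, z in zip(freqs, measured))
    best = (rng.uniform(1.0, 1000.0), rng.uniform(0.1, 10.0))  # (k, d) guess
    for gen in range(generations):
        radius = 0.5 ** (gen / 60)  # wide exploration -> fine refinement
        candidates = [best] + [
            (best[0] * (1 + radius * rng.uniform(-0.9, 0.9)),
             best[1] * (1 + radius * rng.uniform(-0.9, 0.9)))
            for _ in range(pop)]
        best = min(candidates, key=cost)  # elitism: cost never increases
    return best

# Synthetic "measurement" generated from known parameters k = 400, d = 3:
freqs = [20.0, 50.0, 100.0, 200.0, 500.0]
measured = [model_mag(f, 400.0, 3.0) for f in freqs]
k_fit, d_fit = fit_parameters(freqs, measured)
```

The real procedure fits the full eight-element model of Eq. (3.7) to measured impedance curves, but the principle of a coarse global search refined into a local fit is the same.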

#### **3.1.6.1 Precision Grasps**

Within the area of precision grasps, three types of grasps were analyzed. Holding a measurement cylinder like a normal pen at an angle of 30° (Fig. 3.9), we find a weak anti-resonance in the area of around 150–300 Hz. This anti-resonance depends on the grasping force and moves from high frequencies at weak forces to lower frequencies at large forces. This general dependency makes sense, as the overall system becomes stiffer (the impedance increases) and the coupling between skin and cylinder becomes more efficient, resulting in more mass being moved at higher grasping forces.

The general impedance does not change significantly if the cylinder is held in a position similar to a máobi Chinese pen (Fig. 3.10). However, the force dependency of the anti-resonance slightly diminishes compared to the above pen-holding posture.

This is completely different from the variant of a pen in a horizontal position held by a three-finger grasp (Fig. 3.11). A clear anti-resonance with frequencies between 80 and 150 Hz appears, largely dependent in shape and position on the grasping force. All observable effects in precision grasps can hardly be traced back to the change of a single parameter but are always a combination of changes in many parameters.

**Fig. 3.9** Impedance with percentiles (**a**) and at different force levels (**b**) for a two-fingered precision grasp of a pen-like object held like a pen (Ø 10 mm, defined for 20–950 Hz)

**Fig. 3.10** Impedance with percentiles (**a**) and at different force levels (**b**) for a two-fingered precision grasp of a pen-like object held like a "máobi" Chinese pen (Ø 10 mm, defined for 20–700 Hz)

**Fig. 3.11** Impedance with percentiles (**a**) and at different force levels (**b**) for a five-fingered precision grasp of a pen-like object in horizontal position (Ø 10 mm, defined for 20 Hz–2 kHz)

#### **3.1.6.2 One-Finger Contact Grasp**

All measurements were done on the index finger. The direction of touch, the size of the touched object and the touch force normal to the skin were varied within this analysis. Figure 3.12a shows the overview of the results for a touch analyzed in normal direction. The mean impedance varies between 10 and 20 dB with a resonance in the range

**Fig. 3.12** Impedance of finger touch via a cylindrical plate for different contact forces (1–6 N) and in dependency on the diameter (**a**), for the smallest plate (Ø 2 mm) and the largest plate (Ø 15 mm) (defined for 20 Hz–2 kHz)

of 100 Hz. Throughout all measured contactor diameters and forces, no significant dependency of the position of the anti-resonance on the touch force was noted. However, a global increase in impedance is clearly visible. Observing the impedance as a function of contactor size, we can recognize an increase of the anti-resonance frequency. Additionally, it is fascinating to see that the stiffness decreases with an increase of the contact area. The increase in resonance is probably a result of less material and therefore less inertia participating in generating the impedance. The increase in stiffness may be a result of smaller pins deforming the skin more deeply and therefore getting nearer to the bone as a stiff mechanical counter-bearing.

In comparison, measurements performed with a single pin of only 2 mm in diameter (Fig. 3.12b) reproduce the general characteristic of the force dependency. Looking at the largest contact element of 15 mm in diameter, we observe a movement of the resonance frequency from 150 Hz down to 80 Hz for an increase in contact force.

For excitation orthogonal to the skin's normal, the results differ slightly. Figure 3.13a shows a lateral excitation of the finger pad with an obvious increase of impedance at increased force

**Fig. 3.13** Impedance for finger touch of a plate moving in orthogonal direction to the skin at different force levels (1–6 N) (defined for 20 Hz to 2 kHz). Movement in lateral direction (**a**), distal direction (**b**)

of touch. This rise is mainly a result of an increase of damping parameters and masses. The position of the anti-resonance in the frequency domain remains constant at around 150 Hz. The picture changes significantly for the impedance in distal direction (Fig. 3.13b). The impedance still increases, but the resonance moves from high frequencies of around 300 Hz to lower frequencies. Damping increases too, resulting in the anti-resonance being diminished to the point of non-existence.

#### **3.1.6.3 Superordinate Comparison of Grasps**

It is interesting to compare the impedances of different types of touch and grasps with each other:

• Almost all raw data and the interpolated models show a decrease of impedance within the lower frequency range from 20 Hz up to the first anti-resonance. For precision grasps (Figs. 3.9, 3.10 and 3.11) and normal fingertip excitation (Fig. 3.12), the gradient equals 20 dB/decade, resembling a dominating, purely elongation-proportional force response—an elasticity—within the low frequency range. Within this low-bandwidth area, nonlinear effects of tissue, including damping, seem not very relevant. Looking at this type of interaction, we can assume that any interaction including joint rotation of a finger is almost purely elastic in the low frequency range.


If speculations are to be made about still unknown, not yet analyzed types of touch based on the given data, it is reasonable to assume the following:


of around 200 Hz. The position of the anti-resonance shifts within a range of about 100 Hz due to changes in pre-load. Above that anti-resonance, let the impedance become dominated by a damping effect. The absolute height of impedance changes by about 5 dB with the grasping force.

C. **Finger touch** The median impedance should be around 12 dB. Model the impedance with a well-balanced elasticity and damping effect up to an anti-resonance frequency of around 150 Hz. The position of the anti-resonance is quite constant, with the exception of large contact areas moving in normal and in distal direction. Above that anti-resonance, let the impedance become strongly dominated by a damping effect. The absolute height of impedance changes by up to 10 dB depending on the force during touch.

## *3.1.7 Comparison with Existing Models*

For further insight into and qualification of the results, this section presents a comparison with published mechanical properties of grasps and touches. There are two independent lines of impedance analysis in scientific focus: the measurement of mechanical impedance as a side product of psychophysical studies at threshold level, and measurements at higher impedance levels for general haptic interaction. The frequency plots of models and measurements are shown in Fig. 3.14.

In [14], the force detection thresholds for grasping a pen in normal orientation were analyzed. Figure 3.14a shows an extract of the results compared to the pen-like grasp of a cylinder from the model in Fig. 3.9a. Whereas the general level of

**Fig. 3.14** Comparison of the model from Fig. 3.8 with data from similar touches and grasps as published by Israr [14, 15], Fu [8], Yoshikawa [29], Hajian [10], Jungmann [17]

impedance does fit, the dynamic range covered by our model is not as large as described in the literature. Analyzing the published data, we can state that the minimum force measured by Israr is ≈60 µN at the point of lowest impedance. Reliably measuring forces at this extreme level of sensitivity is beyond the measurement accuracy of our setup, which may explain the difference in the covered dynamic range. In another study [15], the force detection threshold for grasping a sphere with the fingertips was analyzed. The absolute force level of interaction during these measurements was in the range of mN. A comparison (Fig. 3.14b) between our model of touching a sphere and these data shows a difference in the range of 10–20 dB. However, such small contact forces represent a large extrapolation of our model data toward low forces. The difference can therefore easily be explained by the error resulting from this extrapolation.

Fu [8] measured the impedance of the fingertip at a low force of 0.5 N, advancing an approach published by Hajian [10]. A comparison between our model and their data concerning the shape is hardly possible due to the small number of discrete frequencies in this measurement. However, the impedance is again 10 dB lower than that of our touch model of a five-millimeter cylinder under normal oscillations similar to Fig. 3.12. Once more, the literature data describe a level of touch force not covered by our measurements, and therefore the diagram in Fig. 3.14c is an extrapolation of the model to these low forces.

As a conclusion of this comparison, the model presented here cannot necessarily be applied to measurements done at lower force levels. Publications dealing with touch and grasp at reasonable interaction forces come nearer to the model parameters estimated by our research. Yoshikawa [29] published a study of a three-element mechanical model of the index finger. The study was based on a time-domain analysis of a mechanical impact generated by a kinaesthetic haptic device. The measured parameters result in a frequency plot (Fig. 3.14d) which is comparable to our model at low frequencies, but shows neither the complexity nor the variability of our model in the high frequency range above 100 Hz. A similar study in the time domain was performed by Hajian [10] with just slightly different results. Measurements available as raw data from Jungmann [17], taken in 2002, come quite close to our results, although obtained with different equipment.

Besides these frequency plots, the model's parameters allow a comparison with absolute values published in the literature. Serina [26] studied the hysteresis of the fingertip's elongation-vs.-force curve during tapping experiments. This study identified a pulp stiffness *k* ranging from 2 N/mm at a maximum tapping force of 1 N to 7 N/mm at a tapping force of 4 N. This value is about 3–8 times larger than the dominating *k*<sup>2</sup> in our eight-element model. The results of Fu [8] make us assume that there was a systematic error in the measurements of Serina, as the elongation measured at the fingernail does not exclusively correspond to the deformation of the pulp. Therefore, the difference in the values of *k* between our model and their measurements becomes plausible. Last but not least, Milner [22] carried out several studies on the mechanical properties of the fingertip in different loading directions. In the relevant loading situation, he identified a value of *k* ranging from 200 to 500 N/m. This fits almost perfectly within the range of our model's stiffness.

## *3.1.8 Modeling User's Variability*

In order to perform an optimal system and control design, a good model of the user's variability should be included. The key to good variability modeling is precise measurements. Fu et al. [7] performed a variability analysis specifically for stylus-based haptic devices. The variability of the human arm was studied in two forms: structured and unstructured variability. Structured variability was defined as the uncertainties in the parameters of the human arm model used, whereas multiplicative unstructured uncertainties were referred to as unstructured variability. Both forms of variability are modeled in such a way that they can be applied directly to a robust stability analysis.

## *3.1.9 Final Remarks on Impedances*

The impedance model as presented here will help with the modeling of haptic perception in high frequency ranges of above 20 Hz. However, it completely ignores any mechanical properties below that frequency range. This is a direct consequence of the general approach to human machine interaction presented in Chap. 2 and has to be considered when using this model.

Another aspect to consider is that the above measurements show a large inter-subject variance of impedances. In extreme cases they span 20 dB, which is nothing else but a factor of 10 between e.g. the 5th and the 95th percentile. Further research on the impedance models will reduce this variance and allow a more precise picture of impedances. But even this database, although not yet complete, allows identifying helpful trends for the human load and haptic devices.

## **3.2 The User as a Measure of Quality**

Christian Hatzfeld

Salisbury et al. postulated a very valuable hypothesis for the design of task-specific haptic systems: their 2011 paper title reads *What You Can't Feel Won't Hurt You: Evaluating Haptic Hardware Using a Haptic Contrast Sensitivity Function* [25]. In this work, they use haptic contrast sensitivity functions (the inverse of the sinusoidal grating detection threshold) to evaluate -→ COTS devices. Taking a more general view, the first part of this paper title summarizes the second role of the user and her or his properties in the design of haptic systems: as the instance that determines whether the presented haptic feedback is good enough or not. In this section, this approach is detailed for three aspects of system design, i.e. resolution, errors and the quality of the haptic interaction.

## *3.2.1 Resolution of Haptic Systems*

Resolution is mainly an issue in the selection and design of sensors and actuators, while the latter is also influenced by the kinematic structure used in interfaces and manipulators. In general, sensors on the manipulation side have to sense at least as well as the human user is able to perceive after the information is haptically displayed by the haptic interface. On the interface side, sensors have to be at least as accurate as the reproducibility of human motor capability in order to convey the user's intention correctly. For the actuating part, the attribution is vice versa: actuators on the manipulation side have to be as accurate as human motor capability, while the haptic interface has to be as accurate as human perception can resolve.

Unfortunately, this is the worst case for technical development: sensors (on the manipulation side) and actuators (on the interface side) have to be as accurate as human perception. Therefore, exact readings of *absolute thresholds* are indispensable to determine the necessary resolutions for sensors and actuators if one wants to build a high-fidelity haptic system. On the other hand, it is possible to systematically alter the perception thresholds in a favourable way by changing the contact situation (contact area, contact forces) at the primary interface. This is further detailed in Sect. 5.2.

For applications not involving teleoperation, the requirements are basically the same, but extend to other parts of the system: for the interaction with virtual realities, the software has to supply a sufficient discretization of the virtual data (a non-trivial problem, especially if small movements and hard contacts are to be simulated), and communication systems have to supply enough mechanical energy to surpass the perception threshold and thus ensure a clear transmission of information. Last, but definitely not least, all errors resulting from digital quantization and other system-inherent noise have to be lower than the absolute perception thresholds of the human user.
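The last requirement lends itself to a quick back-of-the-envelope check. The following sketch (with purely illustrative numbers for full-scale force, bit depth and absolute threshold; none are taken from the measurements cited here) compares the worst-case quantization error of an ideal D/A stage against an assumed perception threshold:

```python
# Sketch: does D/A quantization of a force signal stay below an assumed
# absolute perception threshold? All numbers are illustrative.

def quantization_step(full_scale_n: float, bits: int) -> float:
    """Smallest representable force increment of an ideal DAC (in N)."""
    return full_scale_n / (2 ** bits)

def quantization_ok(full_scale_n: float, bits: int, threshold_n: float) -> bool:
    """True if the worst-case quantization error (half a step) stays
    below the assumed absolute perception threshold."""
    return quantization_step(full_scale_n, bits) / 2 < threshold_n

# Example: 10 N full scale, assumed absolute threshold of 1 mN
print(quantization_ok(10.0, 12, 0.001))  # False: half-step error 1.22 mN > 1 mN
print(quantization_ok(10.0, 16, 0.001))  # True: half-step error 0.076 mN < 1 mN
```

The same comparison applies to position or velocity channels; only the full scale and the relevant threshold change.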

## *3.2.2 Errors and Reproducibility*

While resolutions are quite a challenge for the design of haptic systems because of the high sensitivity of human haptic perception, the handling of errors is somewhat easier. The basic assumption about the perception of haptic signals with regard to errors and reproducibility is the following: there is no error if there is no difference detectable by the user. This property is expressed by the -→ JND. *Weber's Law* as stated in Eq. (2.5) facilitates this further: for low references the acceptable error increases due to the increasing differential thresholds. This accommodates the fact that the absolute errors of technical systems and components usually increase when the reference values decrease.

For large reference values, the relative resolution of human perception is much coarser than the absolute resolution of technical systems, which is uniformly distributed along the whole nominal range. This has to be taken into account if information is to be conveyed haptically.
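This error criterion can be made concrete in a few lines. The sketch below assumes a generalized form of Weber's law with an illustrative Weber fraction and threshold floor (both hypothetical values, not from Eq. (2.5) itself); a deviation counts as an error only if it exceeds the differential threshold at the given reference:

```python
def acceptable_error(reference: float, weber_fraction: float = 0.1,
                     threshold_floor: float = 0.01) -> float:
    """Differential threshold (JND) per a generalized Weber's law.
    weber_fraction and threshold_floor are illustrative assumptions."""
    return weber_fraction * reference + threshold_floor

def error_detectable(reference: float, absolute_error: float) -> bool:
    """A deviation counts as an error only if the user can detect it."""
    return absolute_error >= acceptable_error(reference)

print(error_detectable(10.0, 0.5))  # False: JND at 10 N is 1.01 N
print(error_detectable(1.0, 0.5))   # True: JND at 1 N is only 0.11 N
```

Note how the same absolute deviation of 0.5 N is invisible at a large reference but clearly detectable at a small one, which is exactly the design margin described above.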

## *3.2.3 Quality of Haptic Interaction*

While resolution and errors are linked pretty much directly to perception parameters, the assessment of haptic quality is somewhat more difficult. It is also based on the assumption that the quality of a haptic interaction is good enough if all intended information is transmitted correctly to the user and no additional information or errors are perceived. The second part can basically be achieved by considering the above-mentioned points regarding errors and resolution. Assessing whether all information is transmitted correctly is more difficult, since the user and the perceived information have to be taken into account. In general, this is only possible if suitable evaluation methods are used; Chap. 13 gives an overview of such methods with respect to the intended application.

Another example for the evaluation of haptic quality is the concept of haptic transparency for teleoperation systems. This property describes the ability of a haptic system to convey only the intended information (normally defined as the mechanical impedance of the environment at the manipulator side, *Z*<sub>e</sub>) to the user (in terms of the displayed impedance of the haptic interface, *Z*<sub>t</sub>) without displaying the inherent properties of the haptic system itself. This definition is further detailed in Sect. 7.5.2. In contrast to the measures above, this property can be tested without a user test, but with considerable effort regarding the mechanical measurement setup.

When haptic perception properties, especially -→ Just Noticeable Differences, are taken into account, the common binary definition of transparency can be transformed into a nominal value with far fewer requirements on the technical system. This concept was developed by Hatzfeld et al. [12, 13] and is further explained in Sect. 7.5.2.
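A minimal sketch of such a perception-oriented transparency check might look as follows; the impedance JND fraction and all impedance values are illustrative assumptions, not values from the cited works:

```python
def is_transparent(z_env: complex, z_displayed: complex,
                   jnd_fraction: float = 0.2) -> bool:
    """Perception-oriented transparency check: the displayed impedance
    counts as transparent if its deviation from the environment impedance
    stays below the impedance JND (the fraction is an assumption)."""
    return abs(z_displayed - z_env) < jnd_fraction * abs(z_env)

# Illustrative stiff contact: k = 1000 N/m evaluated at omega = 100 rad/s
omega = 100.0
z_e = 1000.0 / (1j * omega)      # environment impedance Z = k / (j*omega)
z_t = z_e + 1j * omega * 0.005   # interface adds 5 g of unmodeled moving mass
print(is_transparent(z_e, z_t))  # True: deviation 0.5 Ns/m < JND of 2.0 Ns/m
```

In a real evaluation, both impedances and the JND would be frequency dependent, so the check would be repeated over the relevant frequency band.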

One should keep in mind that all of the above-mentioned thresholds generally depend on frequency and, in the best case, on the contact situation. In the worst case, they also depend on the experimental methodology used to obtain them, which necessarily requires a retest of the perception property in question.

## *3.2.4 Perceptional Dimensions*

All of the above approaches tend to describe the quality of haptic interaction by perceptional capabilities in a usually physical domain. The inherent assumption is that humans act as sensors for physical properties. Stating this explicitly makes it obvious that this cannot be true.

This is where *perceptional dimensions* should be considered. All psychophysical fields use this more user-centric approach, and the range of *perceptional dimensions* is wide, from object perception to space perception, as nicely summarized by Kappers and Bergmann Tiest in [19].

However, the discrepancy between physical and perceptional dimensions is nowhere larger than in the domain of *textures*. In a review, Okamoto et al. [24] identified five dominating tactile dimensions of textures (Fig. 3.15). This triggered systematic research on the discrimination of materials (e.g. [4]) and new quality measures for the performance evaluation of texture-rendering devices (e.g. [27]).

## **Recommended Background Reading**

- [6] Feix, T.; Pawlik, R.; Schmiedmayer, H.; Romero, J. & Kragic, D.: **A Comprehensive Grasp Taxonomy**. In: Robotics, Science and Systems Conference: Workshop on Understanding the Human Hand for Advancing Robotic Manipulation, 2009.

*Thorough analysis of human grasps, also available online at* http://grasp.xief.net/.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 4 Development of Haptic Systems**

**Thorsten A. Kern and Christian Hatzfeld**

**Abstract** This chapter deals with the general design processes for the development of task-specific haptic systems. Based on known mechatronic development processes like the V-model, a specialized variant for haptic systems is presented that incorporates a strong focus on the intended interaction and its impacts on the development process. Based on this model, a recommended order of technical decisions in the design process is derived. General design goals of haptic systems are introduced in this chapter as well. These include stability, haptic quality and usability, which have to be incorporated at several stages of the design process. A short introduction to different forms of technical descriptions for electromechanical systems, control structures and kinematics is also included in this chapter to provide a common basis for the second part of the book.

## **4.1 Application of Mechatronic Design Principles to Haptic Systems**

Obviously, haptic systems are mechatronic systems, incorporating powerful actuators, sophisticated kinematic structures, specialized sensors and demanding control structures as well as complex software. The development of these parts is normally the focus of specialized disciplines, i.e. mechanical engineers, robotics specialists, sensor and instrumentation professionals, control and automation engineers and software developers. A haptic system engineer should at least be able to understand the basic tasks and procedures of all of these professions, in addition to the required basic knowledge about psychophysics and neurobiology outlined in the previous chapters.

T. A. Kern (B)

Christian Hatzfeld deceased before the publication of this book.

Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany e-mail: t.a.kern@tuhh.de

C. Hatzfeld Technische Universität Darmstadt, Darmstadt, Germany

**Fig. 4.1** Adaption of the V-model for the design of haptic systems

All of the above-mentioned professions use different methods, but generally agree on the same concepts when developing their parts of a haptic system. These can be integrated into commonly known development design methods such as the V-model for the development of mechatronic systems [16]. The model was originally developed for pure software design by the Federal Republic of Germany, but has been adapted to other domains as well. For the design of task-specific haptic systems, the authors detailed and extended some phases of the derivation of technical requirements based on [3] (interaction analysis) and [4] (detailed modeling of mechatronic systems). This adapted model is shown in Fig. 4.1. Based on it, five general stages are derived for the design of haptic systems. These stages form the basis for the further structure of this book and are therefore detailed in the following sections.

The V-model exists in different variations depending on the actual usage and scale of the developed systems. In this case, the above-mentioned variation was chosen over existing model variations in order to include additional steps in each stage of the V-model. The resulting model is probably closest to the W-model for the design of adaptronic systems introduced by Nattermann and Anderl [8], because this model also includes an iteration in the modeling and design stage. It is further based on a comprehensive data management system that includes not only information about interfaces and dependencies of individual components, but also a simulation model of each part. Since there is no comparable database for the design of haptic systems (which probably make use of a wider range of physical actuation and sensing principles than adaptronic systems to date), the W-model approach is not directly transferable, and more iterations in the modeling and design stage have to be accepted.

## *4.1.1 Stage 1: System Requirements*

The first stage is used for the derivation of system requirements. For the design of task-specific haptic systems, a breakdown into three phases seems useful.


Another result of this phase is a set of detailed and quantified interaction goals for the application in terms of task performance and ergonomics. Possible categories of these goals are given in Chap. 13. If, for example, a communication system is designed, possible goals could be a certain amount of information transfer (IT) [5] and a decrease of cognitive load in an exemplary application scenario, measured by the NASA task-load index [2].

**Specification of Requirements** Based on the preceding steps, a detailed analysis of the technical requirements on the task-specific haptic system can be made. This should include all technically relevant parameters for the whole system and for each component (i.e. actuators, sensors, kinematic structures, interfaces, control structure and software design). Chapter 5 provides some helpful clusters, depending on the different interactions, for the derivation of precise requirement definitions.

The result of this stage is at least a detailed requirement list. The necessary steps are detailed in Chap. 5. Further tools for requirements engineering can be used as well, but are not detailed further in this book.

## *4.1.2 Stage 2: System Design*

In this stage, the general form and the principles used in the system and its components have to be decided on. In general, one can find a vast number of different principles for the components of haptic systems. During the technical development of haptic systems, the decisions on single components influence each other intensively. However, this influence is not identical between all components. Having gained knowledge of the requirements for the haptic system, the engineer has to identify solutions for each component. It is obvious that, according to a systematic development process, each solution has to be compared to the specifications concerning its advantages and disadvantages. The recommended procedure of how to deal with the components is the basis of the chapter structure of this section of the book and is summarized once again for completeness:


Nevertheless, it is vital to note that e.g. the kinematics design cannot be realized completely decoupled from the available space for the device and from the forces and torques, i.e. the actuator. Additionally, the kinematics directly influences any measurement technology, as even displacement sensors have limitations in resolution and dynamics. The order suggested above for taking decisions has to be understood as a recommendation for processing the tasks; it does not free the design engineer from the responsibility to keep an overview of the sub-components and their reciprocal influence.

A good way to keep track of these influences is the definition of clear interfaces between single components. This definition should include details about the form of energy and data exchanged between the components, and should be further refined in the course of the development process to include clear definitions of, for example, voltage levels, mechanical connections, standard interfaces and connectors used, etc.

## *4.1.3 Stage 3: Modeling and Design of Components*

#### **4.1.3.1 Modeling of Components**

Based on the decisions from the preceding stage, the individual components can be modeled and designed. For this, general domain-specific methods and description forms are normally used, which are further described in Sect. 4.3. This step will first result in a model of the component that includes all relevant design parameters influencing the performance and design of the component. Some of these parameters can be chosen almost completely freely (i.e. control and filter parameters), while others will be limited by purchased parts in the system component (one will, for example, only find sensors with different but fixed ranges, as well as actuators with fixed supply voltages, etc.).

#### **4.1.3.2 Comprehensive Model of the Haptic System**

In a second step, a more general model of the component should be developed that exhibits interfaces to adjacent components similar to the ones defined in the preceding Sect. 4.1.2. Furthermore, this model should only include the most relevant design parameters to avoid excessive parameter sets.

When the interfaces of adjacent components match, the models of all components can be combined into a comprehensive model of the haptic system with general haptic input and output definitions (Fig. 2.33) and relevant design parameters for each individual component. Normally, a large number of components is involved in these comprehensive models. For a teleoperation system, one can roughly reckon with two actuators, two kinematic structures, two positioning sensors for actuator control, one force sensor and the corresponding power and signal processing electronics for *each* -→ DOF, with the resulting modeling and simulation effort.
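The per-DOF estimate above can be tallied quickly. The sketch below follows that rough count and assumes, purely for illustration, one electronics unit per component:

```python
def component_count(dof: int) -> dict:
    """Rough component tally for a teleoperation system, following the
    per-DOF estimate in the text: two actuators, two kinematic structures,
    two position sensors and one force sensor per DOF. One electronics
    unit per component is an additional assumption for illustration."""
    per_dof = {"actuators": 2, "kinematic structures": 2,
               "position sensors": 2, "force sensors": 1}
    counts = {name: n * dof for name, n in per_dof.items()}
    counts["electronics units"] = sum(counts.values())
    return counts

# A 6-DOF system already yields dozens of coupled sub-models:
print(component_count(6))
```

Even this crude tally shows why the comprehensive model grows quickly and why interface discipline between sub-models pays off.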

Even if they are very large, such models are advisable in order to optimize the haptic system with respect to the design goals mentioned below, like stability and haptic quality. Only with a comprehensive model can one evaluate the inter-component influences on these design goals. Based on the descriptions of the system structure given in Chap. 7, the optimization of the comprehensive model will lead to additional requirements on the individual components or to modifications of the previously defined interfaces between components. These should also be documented in the requirement list.

One has to keep in mind that all parameters are prone to errors, especially variances with regard to the nominal value and differences between the real part and the (somewhat) simplified model. During the optimization of the comprehensive model, the robustness of the results with regard to these errors has to be kept in mind.

#### **4.1.3.3 Optimization of Components**

Based on the results of the optimized comprehensive model, the individual components of a haptic system can be further optimized. This step is not only needed when there is a change of interface definitions and requirements of single components; it is normally also necessary to ensure certain requirements of the system that do not depend on a single component alone. Examples are the overall stiffness of the kinematic structure, the mass of the moving parts of the system and, of course, the tuning parameters of the control loops.

For the optimization of components, typical mechatronic approaches and techniques can be used, see for example [4, 9] and Sect. 4.3. Further aspects like conformity to standards, safety, recycling, wear, and suitability for production have to be taken into account in this stage, too.

In practice, the three parts of *Stage 3: Modeling and Design of Components* will not be carried out sequentially, but with several iterations and branches. Experience and intuition of the developer will guide several aspects influencing the success and duration of this stage, especially the selection of meaningful parameters and the depth of modeling of each component. Currently, many software manufacturers work on the combination of different model abstraction levels (i.e. -→ single input, single output (SISO) systems, network parameter descriptions, finite element models) into a single CAE software with the ability not only to simulate, but also to optimize the model. While this is already possible to a certain extent in commercial software products (for example ANSYS™), the ongoing development in these areas will be very useful for the design of haptic systems.

## *4.1.4 Stage 4: Realization and Verification of Components and System*

Based on the optimization, the components can be manufactured and the haptic system can be assembled. Each manufactured component and the complete haptic system should be tested against the requirements, i.e. a verification should be made. Additionally, other design goals like control stability and transparency (if applicable) should be tested. Due to the above-mentioned interaction analysis (see Sect. 5.2 for more details), this step will ensure that the system generates perceivable haptic signals for the user without any disturbances due to errors. To compare the developed haptic system with others, objective parameters as described in Chap. 13 can be measured.

## *4.1.5 Stage 5: Validation of the Haptic System*

While stage 4 ensures that the system was developed correctly with respect to the expected functions and the requirements, this stage checks whether the correct system was developed. This is simply done by testing the evaluation criteria defined in the interaction analysis and by comparison with other systems with haptic feedback in a user test.

This development process ensures that time-intensive and costly user tests are only conducted in the first and last stages, while all other stages rely solely on models and typical engineering tools and process chains. With this detailing of the V-model, the general mechatronic design process is extended in such a way that the interaction with the human user is incorporated in a manner optimized in terms of effort and development duration.

## **4.2 General Design Goals**

There are a couple of basic goals for the design of haptic systems that can be applied to varying extents to all classes of applications. They do not lead to rigorous requirements, but it is helpful to keep all of them in mind when designing a haptic system to ensure a successful product.


a design goal. These goals are described in the ISO 9241 standard series<sup>1</sup> and demand *effectiveness* in fulfilling a given task, *efficiency* in handling the system and *user satisfaction* when working with the system.

Usability therefore has to be considered in almost all stages of the development process. This includes the selection of suitable grip configurations that prevent fatigue and allow a comfortable usage of the system, the definition of clearly distinguishable haptic icons that are not annoying when occurring repeatedly, and the integration of assistive elements like arm rests. It is advisable to provide for individual adjustment, since this contributes to the usability of a system. This applies to mechanical parts like adjustable arm rests as well as to information-carrying elements like haptic icons. Methods to assess some of the criteria mentioned are given in Chap. 13 as well as in the standard literature on usability for human-machine interaction, for example [1].

For the design of haptic systems, the following design principles, derived from Preim's principles for the design of interactive software systems, can assist in the development of haptic systems with a higher usability [10]:


## **4.3 Technical Descriptions of Parts and System Components**

Since the design of haptic systems involves several scientific disciplines, one has to deal with different description languages according to each discipline's culture. This section gives a short introduction to the different description languages used in the design of control, kinematics, sensors and actuators. It is not intended to be comprehensive, but to give an insight into the usage and the advantages of the different descriptions for components of haptic systems.

<sup>1</sup> The ISO 9241 series primarily deals with human-computer interaction in a somewhat limited view of the term "computer", with a strong focus on standard workstations. The general concepts described in the standard series can nevertheless be transferred to haptics, and the ISO 9241-9xx series deals with haptics exclusively.

## *4.3.1 Single Input—Single Output (SISO) Descriptions*

One of the simplest forms of modeling for systems and components are -→ SISO descriptions. They only consider a single input and a single output with a time dependency, i.e. a time-varying force *F*(*t*). The description also includes additional constant parameters and the derivatives of the inputs and the outputs with respect to time. Considering a DC motor, for example, a SISO description would be the relation between the output torque *M*<sub>out</sub>(*t*) and the current input *i*<sub>in</sub>(*t*) that evokes it, as shown in Eq. (4.1).

$$M_\mathrm{out}(t) = k_\mathrm{M} \cdot i_\mathrm{in}(t)$$

$$\Rightarrow h(t) = \frac{M_\mathrm{out}(t)}{i_\mathrm{in}(t)} = k_\mathrm{M} \tag{4.1}$$

The output torque is related to the input current by the transfer function *h*(*t*). In this case, the transfer function is just the motor constant *k*<sub>M</sub>, which is calculated from the strength of the magnetic field, the number of poles and windings, and geometric parameters of the rotor, amongst others. It is normally given in the data sheet of the motor.
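As a worked illustration, the sketch below extends the purely proportional relation of Eq. (4.1) by the first-order winding dynamics L·di/dt + R·i = u, which is still a SISO description (voltage in, torque out). All motor parameters are illustrative values, not taken from any specific data sheet:

```python
def simulate_torque_step(u: float = 12.0, r: float = 2.0, l: float = 0.5e-3,
                         k_m: float = 0.05, dt: float = 1e-5, t_end: float = 5e-3):
    """Explicit-Euler simulation of the current build-up in a DC motor
    winding after a voltage step (L di/dt = u - R i), returning the final
    torque M_out = k_m * i per Eq. (4.1). Parameters are illustrative."""
    i = 0.0
    for _ in range(int(t_end / dt)):
        di = (u - r * i) / l   # first-order lag of the winding
        i += dt * di
    return k_m * i

# After 5 ms (>> L/R = 0.25 ms) the current has settled near u/R = 6 A,
# so the torque approaches k_m * 6 A = 0.3 Nm:
print(round(simulate_torque_step(), 3))  # 0.3
```

In the Laplace domain this same model reads *h*(*s*) = *k*<sub>M</sub>/(*Ls* + *R*), a standard first-order SISO transfer function.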

SISO descriptions are mostly given in the Laplace domain, i.e. after a transformation of the time-domain transfer function *h*(*t*) into the frequency-domain transfer function with the complex Laplace operator *s* = σ + *j*ω. This kind of system description is widely used in control theory to assess stability and the quality of control. However, for the design of complex systems with different components, SISO descriptions have some drawbacks.


To overcome these disadvantages, one can extend the SISO description to multiple-input, multiple-output (MIMO) systems. For the description of haptic systems, a special class of MIMO descriptions is advisable: the description based on network parameters, as outlined in the following Sect. 4.3.2.

These drawbacks do not necessarily mean that SISO descriptions have no application in the modeling of haptic systems: besides their usage in control design, they are also useful to describe system parts that are not involved in an extensive exchange of energy, but primarily in the exchange of information. Consider a force sensor placed on the tip of the manipulator of a haptic system: while the sensor compliance will affect the transmission of mechanical energy from the -→ TCP to the kinematic structure of the manipulator (and should therefore be considered with a more detailed model than a SISO description), the transformation of forces into electrical signals is mainly about information. It is therefore sufficient to use a SISO description for this function of a force sensor.

## *4.3.2 Network Parameter Description*

The description of mechanical, acoustic, fluidic and electrical systems based on lumped network parameters relies on the similar topology of the differential equations in each of these domains. A system is described by several network elements, which are locally and functionally separated from each other and exchange energy via predefined terminals or ports. To describe the exchange of energy, each considered domain exhibits a flow variable in the direct connection of neighboring ports (for example current in the electrical domain and force in translational mechanics) and an effort variable (for example voltage and velocity, respectively) between two arbitrary ports of the network. Table 4.1 gives the mapping of electrical and translational mechanical elements. Historically, there are two analogies between these domains. The one used here depicts the physical conditions best; there is, however, a single incongruent point: the definition of the mechanical impedance as the quotient of flow variable and effort variable.
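The lumped-element idea can be tried out directly. The sketch below evaluates the mechanical impedance Z = F/v (the incongruent flow-over-effort definition just mentioned) of a mass, a damper and a compliance that all move with the same velocity, so their impedances simply add; the element values are illustrative:

```python
def mechanical_impedance(omega: float, m: float, d: float, n: float) -> complex:
    """Lumped-element mechanical impedance Z = F/v of a mass (m, kg),
    damper (d, Ns/m) and compliance (n, m/N) sharing one velocity:
    Z = j*omega*m + d + 1/(j*omega*n). Element values are illustrative."""
    jw = 1j * omega
    return jw * m + d + 1.0 / (jw * n)

# At the resonance omega0 = 1/sqrt(m*n) the mass and compliance terms
# cancel and only the damper remains:
m, d, n = 0.01, 0.5, 1e-4
omega0 = 1.0 / (m * n) ** 0.5        # 1000 rad/s for these values
z = mechanical_impedance(omega0, m, d, n)
print(z.real, abs(z.imag) < 1e-9)    # 0.5 True
```

This is the same bookkeeping one would do for an electrical RLC circuit, which is precisely why the network analogy is useful.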


**Table 4.1** Analogy between electrical and mechanical network descriptions


To couple different domains, loss-less transducers are used. Because they are loss-less, systems in different domains can be transformed into a single network, which can be simulated with a large number of simulation techniques known from electrical engineering, for example SPICE. The transducers can be divided into two general classes. The first class, called *transformer*, links the effort variable of domain A with the effort variable of domain B. A typical example for a transformer is an electrodynamic transducer, which can be described as shown in Eq. (4.2) with the transformer constant *X* = 1/(*B*<sub>0</sub> · *l*):

$$
\begin{pmatrix} \underline{v} \\ \underline{F} \end{pmatrix} = \begin{pmatrix} \frac{1}{B_0 \cdot l} & 0 \\ 0 & B_0 \cdot l \end{pmatrix} \cdot \begin{pmatrix} \underline{u} \\ \underline{i} \end{pmatrix} \tag{4.2}
$$

*B*<sub>0</sub> denotes the magnetic flux density in the air gap of the transducer and *l* denotes the length of the electrical conductor in this magnetic field. Further details about this kind of transducer are given in Chap. 9. If networks of different domains are transformed into each other by means of a transformer, the network topology stays the same and the transformed elements are weighted with the transformer constant. This is shown in Fig. 4.2 using the example of an electrodynamic transducer.
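Equation (4.2) is just a linear two-port and can be evaluated directly. The sketch below applies it to illustrative values; because the transformer is loss-less, the electrical power u·i equals the mechanical power v·F:

```python
def electrodynamic_transformer(u: float, i: float, b0: float, l: float):
    """Two-port transformer of Eq. (4.2): maps electrical effort/flow
    (u in V, i in A) to mechanical effort/flow (v in m/s, F in N).
    b0: flux density in T, l: conductor length in m (illustrative)."""
    v = u / (b0 * l)   # effort maps to effort, scaled by X = 1/(B0*l)
    f = b0 * l * i     # flow maps to flow, scaled by B0*l
    return v, f

v, f = electrodynamic_transformer(u=2.0, i=0.5, b0=1.0, l=5.0)
print(v, f)            # 0.4 m/s, 2.5 N
print(v * f == 2.0 * 0.5)  # loss-less: mechanical power equals electrical power
```

The same two lines, with the constant inverted, transform mechanical quantities back into the electrical domain, which is what allows a mixed-domain system to be simulated as a single network.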

The other class of transducers is called *gyrator*; it couples the flow variable of domain A with the effort variable of domain B and vice versa. The coupling is described with the gyrator constant *Y*; examples (not shown here) include electrostatic actuators and transducers that convert mechanical into fluidic energy. If networks of different domains are transformed, the network topology changes: series connections become parallel connections and vice versa. The single elements change as well; for a gyratory transformation between the mechanical and electrical domains, an inductor will become a mass and a compliance will turn into a capacitance. A common application for gyratory transformations is the modeling of piezoelectric transducers. This is shown in Chap. 9 in the course of the book.

An advantage of this method is the consideration of influences from other parts of the network, a property that cannot be provided by the representation with SISO transfer functions. On the other hand, this method only works for linear time-invariant systems. Mostly, a linearization around an operating point is made in order to use network representations of electromechanical systems. Some time dependency can be introduced with switches connecting parts of the network at predefined simulation times. Another constraint is the size of the systems and components modeled by the network parameters. If the size of the system and the wavelengths of the flow and effort variables are of similar dimensions, the basic assumption of lumped parameters no longer holds. In that case, distributed forms of lumped parameter networks can be used to incorporate transmission line properties.

In haptics, network parameters are for example used for the description of the mechanical user impedance *Z*user as shown in Chap. 3, the condensed description of kinematic structures, and the optimization of the mechanical properties of sensors

**Fig. 4.2** Network model of an electrodynamic exciter (Grewus Exciter EXR4403L-01A). **a** The system consists of an electrical subsystem, the electrodynamic transducer with transformer constant *X*, the mechanics of the moving parts, the mechanical-acoustic transducer with gyrator constant *Y*, and the properties of the acoustic system. **b** shows the corresponding network model and **c** the network model when the acoustic network elements are transformed into equivalent mechanical elements, ignoring for the time being the dynamics of the carrier this exciter is mounted on as well as any tactile functionality

and actuators as shown above. Further information about this method can be found in the works of Tilmanns [14, 15] and Lenk et al. [7], from which all information in this section was taken.

## *4.3.3 Finite Element Methods (FEM)*


**Fig. 4.3** Domain, elements, nodes and boundary conditions of a sample FEM problem formulation

The use of the Finite Element Method requires a discretization of the whole domain, thereby generating several finite elements with finite element nodes as shown in Fig. 4.3. Furthermore, boundary conditions have to be defined for the border of the domain, external loads and effects are included in these boundary conditions.

Put very simply, an FE analysis runs through the following steps: to solve the PDE on the chosen domain, the differential equations are first multiplied with a test function and integrated by parts. This step leads to the weak formulation of the partial differential equation (also called natural formulation), which incorporates the Neumann boundary conditions. Discretization is performed on this weak formulation, leading to a set of equations that has to be solved on each single element of the discretized domain. By assuming an appropriate shape or interpolation function on each element, a large but sparse system of linear equations is constructed, which can be solved with direct or iterative solvers depending on its size.
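The steps above can be illustrated with a deliberately small one-dimensional sketch (a toy example added here, not part of the original text): the Poisson problem −u″(x) = 1 on (0, 1) with u(0) = u(1) = 0, discretized with linear ("hat") shape functions on a uniform mesh. Real FE codes work on 2-D/3-D meshes with sparse storage, but the assembly-and-solve pattern is the same.

```python
import numpy as np

# 1-D FEM for -u''(x) = 1 on (0,1), u(0) = u(1) = 0.
# Exact solution: u(x) = x*(1-x)/2.

n_el = 50                       # number of finite elements
n = n_el - 1                    # number of interior nodes (unknowns)
h = 1.0 / n_el                  # element length

# element-wise assembly of the global (tridiagonal, sparse) stiffness matrix
K = np.zeros((n, n))
for e in range(n_el):           # element e spans global nodes e and e+1
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element matrix
    for a in range(2):
        for b in range(2):
            i, j = e + a - 1, e + b - 1   # interior-node indices
            if 0 <= i < n and 0 <= j < n:
                K[i, j] += ke[a, b]

f = np.full(n, h)               # consistent load vector for f(x) = 1

u = np.linalg.solve(K, f)       # direct solver (iterative/sparse for large n)

x = np.linspace(h, 1.0 - h, n)
u_exact = 0.5 * x * (1.0 - x)   # analytic solution for comparison
err = np.max(np.abs(u - u_exact))
```

For this particular 1-D problem the nodal values of the linear-element solution coincide with the analytic solution up to rounding, so `err` is at machine-precision level; in general the discretization error decreases with the mesh size.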

There are many commercial software products that perform FEM analyses in the different engineering fields. They normally include a pre-processor that takes care of discretization, material parameters and boundary conditions, a solver, and a post-processor that turns the solver's results into meaningful output. For the quality of the results, the choice of the element types, which depends on the geometry of the considered domain and the kind of analysis, and the choice of the mathematical solver are of high importance.

The advantages of the FE method are the treatment of non-linear material properties, the application to complex geometries, and the versatile analysis possibilities that include static, transient, and harmonic analysis [6]. The aspect of discretization yields a high computational effort, but also a spatial resolution of the physical value under investigation.

To overcome some disadvantages of FEM there are some extensions to the method: The *combined simulation* maps FE results onto network models that are further used in network based simulations of complex systems [7, 13]. The advantage is the high spatial resolution of the calculation on the required parts only and the resulting higher speed. The data exchange between FE and network model is made by the user. The *coupled simulation* incorporates an automated data exchange between FE and network models at run-time of the simulation. At the moment, many companies work on the integration of this functionality in the program packages for FE and network model analysis to allow for multi-domain simulation of complex systems.

The application of -→ finite element models (FEM) in haptics can be found in the design of force sensors (see Chap. 10), the evaluation of the thermal behavior of actuators, and the analysis of the structural strength of mechanical parts.

## *4.3.4 Description of Kinematic Structures*

A description of the pose, i.e. the position and orientation of a rigid body in space, is a basic requirement to deal with kinematic structures and to optimize their properties. In Euclidean space, six coordinates are required to describe the pose of a body. This is normally done by defining a fixed reference frame *i* with an origin *O*<sub>*i*</sub> and three orthogonal basis vectors (**x**<sub>*i*</sub>, **y**<sub>*i*</sub>, **z**<sub>*i*</sub>). The pose of a body with respect to the reference frame is described by the differences in position and orientation. The difference in position is also called displacement and describes the change of position of the origin *O*<sub>*j*</sub> of another coordinate frame *j* that is fixed to the body. The orientation is described by the angle differences between the two sets of basis vectors (**x**<sub>*i*</sub>, **y**<sub>*i*</sub>, **z**<sub>*i*</sub>) and (**x**<sub>*j*</sub>, **y**<sub>*j*</sub>, **z**<sub>*j*</sub>). This rotation of the coordinate frame *j* with respect to the reference frame *i* can be described by the rotation matrix <sup>*j*</sup>**R**<sub>*i*</sub> as given in Eq. (4.3).

$$^{j}\mathbf{R}\_{i} = \begin{pmatrix} \mathbf{x}\_{i}\cdot\mathbf{x}\_{j} & \mathbf{y}\_{i}\cdot\mathbf{x}\_{j} & \mathbf{z}\_{i}\cdot\mathbf{x}\_{j} \\ \mathbf{x}\_{i}\cdot\mathbf{y}\_{j} & \mathbf{y}\_{i}\cdot\mathbf{y}\_{j} & \mathbf{z}\_{i}\cdot\mathbf{y}\_{j} \\ \mathbf{x}\_{i}\cdot\mathbf{z}\_{j} & \mathbf{y}\_{i}\cdot\mathbf{z}\_{j} & \mathbf{z}\_{i}\cdot\mathbf{z}\_{j} \end{pmatrix} \tag{4.3}$$

While the rotation matrix contains nine elements, only three parameters are needed to define the orientation of a body in space. Since there are mathematical constraints on the elements of <sup>*j*</sup>**R**<sub>*i*</sub> that ensure this equivalence, several minimal representations of rotations can be used to describe the orientation with fewer parameters (and therefore less computational effort when computing kinematic structures). In this book, only three representations are discussed further: the description by *Euler Angles*, *Fixed Angles*, and *Quaternions*.

**Euler Angles** To minimize the number of elements needed to describe a rotation, the Euler angle notation uses three angles (α, β, γ) that each represent a rotation about an axis of a moving coordinate frame. Since each rotation depends on the prior rotations, the order of rotations has to be given as well. Typical orders are the Z-Y-Z and the Z-X-Z rotation shown in Fig. 4.4.

The description by Euler angles exhibits singularities when the first and last rotations occur about the same axis. This is a drawback when one has to describe several consecutive rotations and when describing motion, i.e. deriving velocities and accelerations.

**Fig. 4.4** Rotation of a coordinate frame based on Euler Angles (α, β, γ ) in Z-X-Z order
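As a numeric illustration (a sketch added here, not from the original text), the Z-X-Z composition and the singularity mentioned above can be checked directly. Rotations about the axes of the *moving* frame compose by right-multiplication:

```python
import numpy as np

# Z-X-Z Euler rotation (alpha, beta, gamma) about moving axes, as in Fig. 4.4.

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def euler_zxz(alpha, beta, gamma):
    # rotations about moving axes: right-multiply in order of application
    return rot_z(alpha) @ rot_x(beta) @ rot_z(gamma)

R = euler_zxz(0.3, 0.5, -0.2)

# a proper rotation matrix is orthonormal with determinant +1
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)

# the singularity: for beta = 0 only the sum alpha + gamma matters,
# so different angle triples yield the same rotation matrix
assert np.allclose(euler_zxz(0.1, 0.0, 0.4), euler_zxz(0.4, 0.0, 0.1))
```

The last assertion makes the drawback tangible: at β = 0 the three angles are no longer uniquely recoverable from the rotation matrix.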

**Quaternions** A further minimal representation of orientation is given by quaternions of the form

$$
\varepsilon = \varepsilon\_0 + \varepsilon\_1 i + \varepsilon\_2 j + \varepsilon\_3 k
$$

with the scalar components ε<sub>0</sub>, ε<sub>1</sub>, ε<sub>2</sub> and ε<sub>3</sub> and the operators *i*, *j*, and *k*. The operators fulfill the combination rules shown in Eq. (4.4) and therefore allow associative, commutative and distributive addition as well as associative and distributive multiplication of quaternions.

$$\begin{aligned} ii=jj=kk=-1\\ ij=k, \quad jk=i, \quad ki=j\\ ji=-k, \quad kj=-i, \quad ik=-j \end{aligned} \tag{4.4}$$

One can imagine a quaternion as the definition of a vector (ε<sub>1</sub>, ε<sub>2</sub>, ε<sub>3</sub>) that defines the axis the frame is rotated about, with the scalar part ε<sub>0</sub> defining the amount of rotation. This is shown in Fig. 4.5. By dualization, quaternions can be used to describe the complete pose of a body in space, i.e. rotation and displacement. Further forms of kinematic descriptions, such as the description based on screw theory, can be found in [17], on which this section is primarily based, and in other works like [11, 12].
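A minimal sketch (added for illustration, not from the original text) shows the quaternion as a rotation operator. The component formula of the product follows directly from the rules of Eq. (4.4); the half-angle parametrization of the unit quaternion is the standard one:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (scalar, x, y, z),
    expanded from the rules ij = k, jk = i, ki = j, ii = jj = kk = -1."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,
        p0*q1 + p1*q0 + p2*q3 - p3*q2,
        p0*q2 - p1*q3 + p2*q0 + p3*q1,
        p0*q3 + p1*q2 - p2*q1 + p3*q0,
    ])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate(q, v):
    """Rotate vector v by the unit quaternion q via q * (0, v) * q~."""
    qv = np.concatenate(([0.0], v))
    return qmul(qmul(q, qv), qconj(q))[1:]

# rotation by 90 degrees about the z-axis:
# scalar part cos(theta/2), vector part sin(theta/2) * axis
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])

v = rotate(q, np.array([1.0, 0.0, 0.0]))
assert np.allclose(v, [0.0, 1.0, 0.0])          # x-axis maps to y-axis

# check one rule of Eq. (4.4): i * j = k
i, j, k = [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]
assert np.allclose(qmul(i, j), k)
```

Note that the product is not commutative (ji = −k), which the component formula reproduces.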

## **Recommended Background Reading**


*Introduction to the network element description methodology.*

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Part II Designing Haptic Systems**

In the previous chapters, the discussion focused on haptic perception in relation to the human user. In the following chapters, the technical realization of haptic systems will come to the fore. As a result, the general view changes from a user-centered perspective to a device-specific view. More concrete technological issues are explored and more practical help is offered for common challenges in the design process. The chapters in this part are organized according to the classic list of tasks to be accomplished in any technical design process. They begin with more general issues that affect the overall system and then move to specific issues that relate to particular subcomponents. The chapters are intentionally ordered so that those dealing with issues whose range of solutions is severely limited are addressed earlier than those that provide more flexible solutions applicable to many situations. The understanding gained, as well as the methods used to quantify haptic perception, will continue to be used to analyze the quality of technological solutions.


chapter provides the necessary knowledge on kinematic design and covers specific and sometimes surprising problems of mechanical transfer functions for haptic devices in serial and parallel kinematics.


For a complete haptic interaction, each system requires a position measurement. Technological solutions for this subordinate technological challenge are discussed in this chapter, whereby different positioning and movement sensors, touch sensors, and imaging sensors are presented.


# **Chapter 5 Identification of Requirements**

**Jörg Reisinger, Thorsten A. Kern, and Christian Hatzfeld**

**Abstract** In this chapter, the process of requirement definition is described, starting with the definition of the intended application together with the customer. In particular, the derivation of technical parameters from the customer's expectations and useful tools for this step are discussed. Further, the analysis of the intended interaction and its effects on the requirement identification are discussed. To facilitate the identification of requirements, main requirement groups are derived from the intended type of interaction and presented in five technical solution clusters. A review of relevant standards and guidelines on safety serves as another source of requirements for a haptic system.

## **5.1 Definition of Application—The Right Questions to Ask**

At the beginning of a technical design process, the requirements for the product, which are usually neither clear nor unambiguous, have to be identified. Frequently, customers formulate wishes, demands, or even solutions instead of requirements. A typical example is a task of the kind: "develop a product just like product **P**, but better/cheaper/nicer". If an engineer accepts such an order without getting to the bottom of the original motivation, the project is doomed to failure. Normally, the original wish of the customer concerning the product has to fulfil two classes of requirements:

J. Reisinger (B)
Mercedes-Benz Cars Development, Daimler AG, 71059 Sindelfingen, Germany
e-mail: reisinger@haptics.eu

T. A. Kern
Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany
e-mail: t.a.kern@tuhh.de

C. Hatzfeld
Technische Universität Darmstadt, Darmstadt, Germany

Christian Hatzfeld deceased before the publication of this book.

The product shall have


Market-oriented requirements are manifold, yet not in the focus of the following analysis (for details of a general systematic product development see [12, 23]). They may be motivated by an existing product **P** to compete with, but usually they are much more comprehensive, covering questions of budget, time frame of development, personnel resources and qualifications, and the customers to address.

With regard to the technical framework, the customer typically gives only unspecific details. A statement like "a device shall provide a force on a glove" is not the definition of a requirement but already a solution based on existing knowledge on the part of the customer. The complexity of a real technological solution spans from a single actuator providing e.g. a vibration to complex kinematics addressing single fingers. Questioning the customer's original statement, it may even turn out that the actual intention is, e.g., to simulate the force impression when shifting gears in a passenger car. Knowledge about the actual application—and, following from that, about the interaction itself—allows the developer a much broader approach, leading to a more optimized technical solution.

## *5.1.1 Experiments with the Customer*

The customer formulates requirements—as mentioned before—typically in an inexact rather than a specific way. Additionally, there is the problem of a very unspecific terminology with regard to the design of haptic systems. For the description of haptic impressions there are numerous adjectives difficult to quantify, like rough, soft, smooth, gentle, mild, hard, and viscous, as well as others derived from nouns, such as furry, silky, hairy, watery, and sticky, which can be compared to real objects. So what could be more obvious than asking the customer to describe his/her haptic impressions by comparisons?

Ask the customer to describe the intended haptic impression with reference to objects and items in his/her environment. These items should be easily at hand, like e.g. vegetables and fruits which offer a wide spectrum of textures and consistencies for comparison.

Sometimes the customer first needs to develop a certain understanding of the haptic properties of objects and items. This can best be achieved by his/her directly interacting with them. Examples of haptically extreme objects have to be included in a good sample case, too. The evolving technology of 3D printing allows for a very flexible design of such samples.

Provide a sample case including weights and springs of different size, even marbles, fur, leather and silk. Depending on the project, add sandpaper of different granularity. Use these items to explain haptic parameters to the customer and help the customer to optimize the description of the product performance expected!

From practical experience, we can also recommend taking spring balances, letter balances, or electronic force sensors to customer meetings. Frequently, it is possible to attach a handle directly to the items and ask the customer to pull until a realistic force is reached. This enables customers from non-technical disciplines to quickly get an impression of the necessary torques and forces.

Take mechanical measurement instruments with you to the customer meetings and allow the customer to touch and use them! This gives him / her a good first impression of the necessary force amplitudes.

In order to give a better impression of texture, mechanical workshops may produce patterns of knurls and grooves of different roughness on metals. Alternatively, sandpaper can be used and, by its defined grade of granularity, can provide a standardized scale to a certain extent.

Use existing materials with scales to describe roughness and simulate the impression of texture.

Recently, different toolkits for haptic prototyping have become available. They are specific to certain types of applications, like cockpit knobs or texture recording, discrimination and replay. The Penn Haptic Texture Toolkit for recording and replaying haptic texture properties [5] is one of those systems, being conceptually slightly different from the approach of TU München's LMT texture recording and its database [30] at http://zeus.lmt.ei.tum.de/downloads/texture/. Further examples for lo-fi prototyping can be found in [14]. For more sophisticated setups, the usage of a -→ COTS device and a virtual environment developed with a haptics toolkit can be considered. For vibrotactile feedback, specialized prototyping environments like HapticLabs.io are available. With a focus on actuator sales, several companies provide a mixture of haptic consulting and actuator customization as a service (*Grewus*, *Nui Lab*, *Actronika*, ...).

## *Engineering Misconceptions when Asking About Haptics*

A normal customer without expertise in the area of haptics will not be able to give statements concerning resolutions or dynamics of the haptic sense. This kind of information has to be derived from the type of interaction and the study of psychophysical knowledge of comparable applications. Therefore, the experience of the developing engineer is still indispensable despite all the systematizations in a technical design process.

Do not confuse the customer by asking questions about the physical resolution! This is necessarily the domain of the haptic engineer. However, learn about the dynamics of the interaction and try to assess the application, e.g. by asking about the frame rate of a simulation or the maximum number of load changes per second of a telemanipulator.

## *5.1.2 General Design Guidelines*

Next to the ideas of the customer and/or user, there are also a number of different guidelines dealing with the design of haptic systems. These guidelines are summarized here very briefly, but a close look at the original references is advisable when they apply to the intended haptic system.


## **5.2 Interaction Analysis**

Based on the demands of a customer and the clarifications obtained in conversation and experiments, a more technical interaction analysis can be performed. The first goal of this step is a technical description of the user with regard to the intended application. Normally, this will include information about the perception thresholds in the chosen grip configuration, information about the movement capabilities, and the mechanical impedance of the user. Naturally, one will not find fixed values for these parameters, but probably only ranges in the best case. In the worst case, own perception studies and impedance measurements have to be conducted.

The second goal of this step is a definition of suitable evaluation parameters and appropriate testing setups. If a reference system (that has to be improved or equipped with haptic feedback in course of the development) is given, reference values of these parameters should be obtained in this stage of the requirement identification as well.

The following steps are advisable for an interaction analysis that will obtain meaningful information for the following requirement specification as stated in Sect. 5.5. They are based on the works of Hatzfeld et al. [10, 11].

**1. Task Analysis** Analyze the interaction task as thoroughly as possible. Interaction primitives as described in Sects. 1.4.1 and 2.2 are helpful at this point. Research possible grip configurations suitable for this kind of interaction (Sect. 3.1.3), if the hand is intended as the primary interface between user and haptic system. Depending on the intended application, other body sites like the torso, the back of the hand, or other limbs can be suitable locations for haptic interactions. For ease of reading, the rest of this section will only mention the hand as primary interface, without loss of generality with regard to other body sites.

Take the usage of tools into account (stylus, gripper, etc.) as well as possible restrictions of the manipulator in a teleoperation scenario (see example below). After this, one should have one or more possible interaction configurations that are able to convey all interaction primitives needed for the intended usage. If one plans to build a teleoperation, comanipulation, or assistive system that adds haptic feedback to interactions that do not already have it, it is probably worthwhile to discuss whether all haptic signals have to be measured, transmitted and displayed. Sometimes, the display of categorized haptic information (OK/not OK, material A/material B/material C, etc.) can be sufficient in terms of the intended usage of the system; it facilitates the technical development and lowers the cost of the final product.

It is advisable to also have a look at some multimodal aspects of the application as well as other environmental parameters: if a visual channel has to be or can be used, special concepts like pseudo-haptic feedback can be considered in the design of the system. If the system is to be used in a highly distractive environment, robust communication schemes have to be incorporated or an adjustable feedback mode has to be included. This information will help with formulating the system structure and the detailed requirement list.


**4. Perception Parameters** Research or measure relevant perception parameters for the selected grip or body site configuration. Normally, absolute and difference thresholds are needed for an estimation of sensor and actuator resolutions as well as tolerable errors. Based on the intended usage, other perception parameters or other interpretations can be meaningful as well. For example, successiveness limens (SL) and two-point thresholds will affect the design of communication interfaces on all body sites. For an energy-limited system, small JNDs could be beneficial, since they will probably allow a large amount of information to be transferred with a small amount of energy.

Keep in mind that force and deflection thresholds can be calculated from each other by using the mechanical impedance according to Eq. (2.7). If possible, obtain data in more than one dimension to facilitate the requirement definition in the intended -→ DoF. Be sure to check if there are external conditions that will influence perception favorably for the technical development. This could be a maximum contact area or a minimum contact force that leads to higher perception thresholds for the given contact situation. The system developer can influence these conditions, for example by the design of the grip or by the measurement of a minimum contact force that has to be applied by the user to make the haptic system functional.
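The threshold conversion mentioned above can be sketched numerically (an illustration added here, not from the original text). With the user impedance *Z* = *F*/*v* and a sinusoidal deflection *d* at angular frequency ω (so *v* = jω*d*), the force amplitude follows as |*F*| = |*Z*| · ω · |*d*|. The impedance magnitude and deflection threshold below are assumed placeholder values, not measured data:

```python
import math

# Converting a deflection threshold into a force threshold via the
# magnitude of the mechanical user impedance (|F| = |Z| * w * |d|).

def force_threshold(defl_threshold_m, z_abs, f_hz):
    """Force threshold in N for a sinusoidal stimulus at f_hz.

    defl_threshold_m : deflection threshold in m
    z_abs            : magnitude of the user impedance |Z| in N*s/m
    """
    w = 2 * math.pi * f_hz
    return z_abs * w * defl_threshold_m

z_abs = 5.0        # assumed |Z_user| at the chosen contact, N*s/m
d_thr = 0.2e-6     # assumed deflection threshold: 0.2 um at 200 Hz
F_thr = force_threshold(d_thr, z_abs, 200.0)
```

With these example numbers the force threshold comes out in the low-millinewton range; in practice, |*Z*| itself is frequency- and contact-dependent, as discussed in Chap. 3.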

**5. Evaluation Criteria** Define suitable evaluation criteria regarding the intended task performance. Chapter 13 gives possible criteria depending on the application class of the haptic system. Besides these measures of task performance, measurements of haptic quality (if applicable) and ergonomic measures can be taken into account. The latter will quantify the cost and benefit of a haptically enhanced system compared to a system without haptics. Defining the criteria this early in the development allows for the measurement of reference values and eases the final evaluation, since the intended testing procedure of the haptic system can be incorporated into the design process.

A final decision for a grip configuration can either be made based on the values obtained in this interaction analysis, in favor of the technically less demanding option, or by conducting user tests considering ergonomic factors like fatigue and task performance, if this is technically possible (for example with -→ COTS devices). Obviously, this can involve some iterations of the above-mentioned points. With this structured approach to interaction, a lot of purposeful information is generated for the derivation of requirements. The approach is illustrated with a short example in the following.

## **Example:** *FLEXMIN Interaction Scheme*

The surgical system FLEXMIN is developed to enhance single-port surgery procedures, such as transanal rectum resection [4], with haptic feedback, additional intracorporal mobility compared to rigid instruments, and a more ergonomic working posture of the surgeon. A task analysis as described above was conducted based on an exemplary rectum resection with commercially available, stiff instruments (TEO system, *Karl Storz*, Tuttlingen, Germany) on an anatomical model. Based on the recordings of the surgeon's movements, system constraints like workspace, dexterity, instruments, and principal manipulation tasks were identified [16]. This analysis led to the requirement of two manipulators with at least four movement -→ DoF (positioning in space and rotation along the longitudinal axis) and preferably another DoF for gripping instruments like scissors or forceps.

Based on additional aspects like the request for displaying stiff structures and elements and the available construction space, a parallel kinematic structure was chosen for the intracorporal manipulator already at this point of development [15]. In that case, the -→ TCP will be at the end of the last part of the lead chain of the parallel mechanism. The movement of this part was chosen as the general form of interaction of the haptic interface used to operate the manipulator [18]. The resulting concept for the haptic interface is shown in Fig. 5.1.

Ergonomic considerations about the surgeon handling two of these interfaces led to a passive linear bearing at one end of the main kinematic chain of the user interface. At the other end, a parallel delta kinematic structure was chosen to actuate three DoF of the haptic interface. Additional feedback for the rotatory and the grasping DoF is integrated into the grasping part of the user interface. This is shown in Fig. 5.2.

**Fig. 5.1** Derivation of the concept of the haptic user interface of FLEXMIN (lower part) from the kinematic structure of the intracorporal manipulator (upper part). Figure adapted from [18]

**Fig. 5.2 a** Realization of the haptic user interface of FLEXMIN, **b** Rendering of the intracorporal robot with two manipulator arms, working channel and visual instrumentation. Further information can be found in [15]

## **5.3 Technical Solution Clusters**

After the interaction analysis and the discussion of the customer's expectations towards the haptic system, one should have an in-depth knowledge of the intended function of the haptic system. Based on a quite basic description of this function, general types of haptic systems and the interactions therewith can be identified. Based on these, this section identifies possible technical realizations and summarizes the necessary questions in clusters of possible applications. The list does not claim to be complete, but is the essence of requirement specifications of dozens of developments from the last few years.

The core of the requirement identification is the definition of the type of haptic interaction. The first question should always refer to the type of interaction with the technical system: is it a simulation of realistic surroundings, the interaction with physically available objects in terms of telepresence, or is the focus of the interaction on the pure communication of abstract information? In the former cases the variants are less versatile than in the latter, as described below. In Fig. 5.3 a decision tree for the identification of clusters of questions is sketched. It is recommended to follow the tree from top to bottom in order to identify the correct application and the corresponding cluster of questions.

**Simulation and Telepresence of Objects** Does the interaction aim at touching virtual objects or objects available via telepresence? If this is the case, does the interaction take place directly via fingers, hands or skin, or is a mediator, e.g. a tool, the interacting object? Does the user hold a specific tool—a pen, a screwdriver, a surgical instrument, the joystick of a plane—in his or her hands and control one or more other objects with it, or does the user touch a plurality of objects during the interaction with his or her hands? In the case of a tool interaction the chosen solution can be found in cluster 1 "kinaesthetic"; in the case of a direct interaction another detail has to be considered.

**Fig. 5.3** Structure for identifying relevant clusters of questions by analyzing the intended haptic interaction

	- Does the interaction include a single event which occurs from time to time (e.g. the call of a mobile phone) or is some permanently active information (e.g. a distance to a destination) haptically communicated? These questions are one-dimensional<sup>1</sup> and covered by cluster 3 "vibro-tactile".
	- Is the interaction dominated by directional information coding an orientation in a surface (directional movement) or in a space? In this case the questions covered by cluster 4 "vibro-directional" are relevant. In such applications

<sup>1</sup> As the information includes only one parameter.

frequently, temporal or distance information is included as well, also making the questions in cluster 3 relevant.


In the following sections the questions in the clusters are further discussed and some examples are given for the range of possible solutions to the questions aimed at.

# *5.3.1 Cluster* **1** *— Kinaesthetic*

Cluster 1 has to be chosen either when an interaction between fingers and shapes happens directly or when the interaction takes place between tool and object. Both cases are technical problems of multidimensional complexity.<sup>2</sup> Each possible movement dimension corresponds to one degree of freedom of the later technical system. Therefore the questions to be asked are quite straightforward and mainly deal with the requirements for these degrees of freedom of tools and users:


<sup>2</sup> A tool interaction can be a one-dimensional task, but such an assignment concerning the technical complexity can be regarded as an exception.

<sup>3</sup> In the case of a finger movement it has to be noted that not necessarily all movement directions have to be equipped with haptic feedback to provide an adequate interaction capability. Frequently it is even sufficient to provide haptic feedback solely for the grasp movement.

<sup>4</sup> Frequently the customer will not be able to specify these values directly. In this case creative variants of the question should be asked, e.g. by identifying the moving masses, or by taking measurements with one's own tools.

("from 0 to *F*max in 0.1 s"). Usually, this question cannot be answered directly. Frequently, measurements are difficult and time-consuming, as any influence of the measurement on the interaction has to be eliminated. Therefore it is recommended to analyze the interaction itself and the objects the interaction happens with. If an object is soft—something typical of surgical simulation—simple viscoelastic models can be used for interpolating the dynamics. The most critical questions with respect to dynamics often address the initial contact with an object, the collision. In this case especially the stiffness of the object and the velocity of the contact define the necessary dynamics. But it has to be stated that the resulting high demands are frequently in conflict with the technical possibilities. In these cases, a split concept based on "events" can be considered, where kinaesthetic cues are transmitted in low frequency ranges and highly dynamic cues are coded as pure vibrations (Chaps. 11 and 12).
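The collision requirement can be estimated with a back-of-the-envelope sketch (added here for illustration; all numbers are assumed examples, not values from the original text): at initial contact with a stiff object, the force rises roughly with the slew rate *k* · *v*, from which a rise time and a crude bandwidth requirement follow.

```python
# Rough estimate of the dynamics required to render a collision:
# at first contact the force grows approximately as F(t) = k * v * t.

k = 20_000.0     # assumed contact stiffness in N/m ("stiff wall" target)
v = 0.2          # assumed approach velocity of the tool in m/s
F_max = 5.0      # assumed desired peak contact force in N

slew = k * v                 # initial force slew rate in N/s
t_rise = F_max / slew        # time to reach F_max in s
f_bw = 1.0 / (2 * t_rise)    # crude bandwidth estimate in Hz
```

For these example values the device would have to build up 5 N within about a millisecond, i.e. reproduce several hundred hertz of force bandwidth, which illustrates why split, event-based concepts are attractive for hard contacts.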

# *5.3.2 Cluster* **2** *—Surface-Tactile*

Haptic texture represents the microstructure of a surface. The lateral movement between this microstructure and the finger tip results in shear forces generating the typical haptic impression of different materials. The haptic bumps on the keyboard keys J and F are a special form of texture. Another variant of texture is Braille letters, which carry additional abstract information. But there are also more straightforward textures, such as the surfaces of all physical materials.<sup>5</sup> Cluster 2 has to be chosen when there is a need to present information on a surface via the tactile sense. This can be coded information, e.g. a geological map on a more or less plane surface, but it can also be object-specific features like the material itself. The resulting questions for the technical task are:


<sup>5</sup> Consider: the mechanical stimulus pattern is not the only dimension of haptic textures; especially the thermal conductivity of the surface contributes much to the realism of surface rendering.

e.g. texts or the influences of fluids on textures have to be displayed. The answer to this question has a significant impact on the technical system.


# *5.3.3 Cluster* **3** *—Vibro-Tactile*

Cluster 3 is a solution space for simple one-dimensional technical problems and corresponding questions. It covers independent dimensions of information (e.g. coding an event in a frequency and its importance in the amplitude). In this cluster, distributions of intensity variations and/or time-dependent distributions of single events are filed. Technological solutions are usually vibration motors or tactons, as used in mobile phones or game consoles. But even if the technical solution itself seems quite straightforward, the challenge lies in the coding of information with respect to intensity and time and in an appropriate mechanical coupling of the device to the user.
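As an illustration of this coding scheme, the sketch below maps an event type to a vibration frequency and its importance to the amplitude of a short drive signal. The function name and the sample-based signal representation are assumptions for illustration only, not an API from this text:

```python
import math

def tacton(event_freq_hz, importance, duration_s=0.2, sample_rate=1000):
    """Code an event as a vibration burst: event type -> frequency,
    importance (0..1) -> amplitude. Returns drive samples in [-1, 1]."""
    amplitude = max(0.0, min(1.0, importance))
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * event_freq_hz * t / sample_rate)
            for t in range(n)]

# a "warning" event coded at 250 Hz with high importance
burst = tacton(250, importance=0.9)
```

Such a burst would then be fed to the vibration actuator's driver; the actual interface depends on the hardware.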


# *5.3.4 Cluster* **4** *—Vibro-Directional*

Vibro-tactile systems code one-dimensional information in the form of intensities. Obviously, by combining multiple such information sources, directional information can be transmitted. This may happen two-dimensionally in a plane, but also three-dimensionally. Cluster 4 deals with such systems. One possible technical solution for directional surface information is to locate a multitude of active units in the shape of a ring around a body part, e.g. like a belt around the belly. The direction is coded in the activity of single elements. This approach can also be transferred to a volumetric vector; in this case a large number of units is located on a closed surface, e.g. the upper part of the body. The activity of single elements codes the three-dimensional direction as the origin of a normal vector on this surface. In addition to the questions of cluster 3, this cluster deals with the following questions:
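The ring-coding principle can be sketched as follows: a target direction is mapped onto intensity levels for a belt of equally spaced tactors, sharing the intensity between the two nearest units. The linear interpolation scheme and the parameter names are illustrative assumptions:

```python
def belt_activation(direction_deg, n_tactors=8):
    """Map a direction (0 deg = tactor 0, increasing clockwise) onto
    intensity levels for a ring of equally spaced vibration units."""
    spacing = 360.0 / n_tactors
    pos = (direction_deg % 360.0) / spacing
    lower = int(pos) % n_tactors          # nearest tactor below the angle
    upper = (lower + 1) % n_tactors       # its clockwise neighbour
    frac = pos - int(pos)
    levels = [0.0] * n_tactors
    levels[lower] += 1.0 - frac           # share intensity linearly
    levels[upper] += frac
    return levels
```

For a direction exactly between two units, e.g. `belt_activation(22.5)`, both neighbours receive half the intensity.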


# *5.3.5 Cluster* **5** *—Omni-Directional*

Cluster 5 deals with systems coding real volumetric information. Within such a three-dimensional space, each point includes either intensity information (scalar field) or vector information (vector field). The sources of such data are numerous and frequent, be it medical imaging data or data from fluid mechanics, atomic physics, electrodynamics, or electromagnetics. Pure systems for haptic interaction with such kinds of data are rare. Frequently, they are combinations of the clusters "kinaesthetic" and "vibro-tactile" for scalar fields, or "kinaesthetic" with six active haptic degrees of freedom for vector fields.<sup>6</sup> Consequently, the specific questions of this cluster add one single aspect to the already existing questions of the other clusters:

• Does the intended haptic interaction take place with scalar fields or with vector fields? → For pure vector fields, kinaesthetic systems with the corresponding questions for six active degrees of freedom should be considered. In the case of scalar fields, an analysis of vibro-tactile systems in combination with three-dimensional kinaesthetic systems and the corresponding questions should be considered. The scalar value then corresponds to the dynamics of the coded information.
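The decision described above can be mirrored in a small dispatch function; the dictionary-based return format and the clipping range are purely assumptions for illustration:

```python
def render_point(sample):
    """Choose the haptic coding by field type: a scalar sample is coded
    as vibro-tactile intensity (clipped to the display range), a vector
    sample as a force for a kinaesthetic device."""
    if isinstance(sample, (int, float)):
        return {"vibration": max(0.0, min(1.0, sample))}
    return {"force": tuple(sample)}
```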

## *5.3.6 General Requirement Sources*

For any development process there are several questions which always have to be asked. They often refer to the time-frame as well as to the resources available for the development. For haptic devices two specific questions have to be focused on, as they can become quite limiting for the design process due to specific properties of haptic devices:


Furthermore, safety is a relevant source of requirements for haptic systems. Because of the importance of this issue, it is dealt with separately in the next section.

## **5.4 Safety Requirements**

Since haptic systems will be in direct contact with human users, safety has to be considered in the development process. As with usability (Sect. 4.2), a consideration

<sup>6</sup> The haptic interaction with objects is, in a mathematical abstraction, always an interaction with vector fields. The vectors code the forces of surfaces, which themselves are time-dependent, e.g. due to movements and/or deformations of the objects.

of safety requirements should be made as early in the development process as possible. Furthermore, certain application areas like medicine will require a structured, documented and sometimes certified process for the design of a product which also has to include a dedicated management of risk and safety issues. In this section, some general safety standards that may be applicable for the design of haptic systems are addressed and some methods for the analysis of risks are given.

## *5.4.1 Safety Standards*

Safety standards are issued by the large standard bodies and professional societies, for example the → International Organization for Standardization (ISO), the national standard organizations, the → Institute of Electrical and Electronics Engineers (IEEE), and the → International Electrotechnical Commission (IEC). Some relevant standards for the design of haptic systems are listed in the following. Please note that this section does not supersede the study of the relevant standards. For a more detailed view of the general contents of the standards, the websites of the standardizing organizations are recommended.

**IEC 61508** This standard, termed *Functional Safety of Electrical/Electronic/ Programmable Electronic Safety-related Systems*, defines terms and methods to ensure functional safety, i.e. the ability of a system to stay in or assume a safe state when parts of the system fail. The base principle in this standard is the minimization of risk based on the likelihood of a failure occurrence and the severity of the consequences of the failure. Based on predefined values of these categories, a so-called → Safety Integrity Level (SIL) can be defined that imposes requirements on the safety measures of the system. It has to be noted that IEC 61508 does not only cover the design process of a product, but also the realization and operational phases of the life-cycle.

The requirements of functional safety impose large challenges on the whole process of designing technical products and should not be underestimated. The application of the rules is estimated to increase costs by 10 to 20% in the automotive industry, for example [27].


(FDA) have to be considered for products intended for the American market. IEC 62366 deals with the applicability of usability engineering methods to medical devices. IEC 60601 covers safety and ergonomic requirements for medical devices.

**IEEE 830** This standard deals with the requirement specifications of software in general. It can therefore be applied to haptic systems involving considerable amounts of software (as for example haptic training systems). The general principles on requirement definitions (like consistency, traceability, and unambiguousness for example) from this standard can also be applied to the design of technical systems in general.

Since a large number of haptic systems are designed for research purposes and used in closely controlled environments, safety requirements are often considered secondary. One should note, however, that industry standards such as the ones mentioned above represent the current state of the art and can therefore provide proven solutions to particular problems.

## *5.4.2 Definition of Safety Requirements from Risk Analysis*

As mentioned above, modern safety standards will not only define certain requirements (like parameters of electrical grounding or automatic shut-down of certain system parts), but also have an impact on the whole design process. To derive requirements for the haptic system, the following steps are advisable during the design process:


Figure 5.4 shows the general risk management flowchart. Based on a risk identification, a risk assessment is made to evaluate the failure occurrence and the severity of the consequences. There are two approaches to identify risks. In a bottom-up approach, possible failures of single components are identified and their possible outcomes are evaluated. This approach can be conducted intuitively, mainly based on the engineering experience of the developer, or based on a more conservative approach using checklists. On the other hand, a top-down approach can be used by incorporating a → Fault Tree Analysis (FTA). In that case, an unwanted system state or event is analyzed for its possible causes. This is done consecutively for these causes until possible failure causes on component level are reached. In practice, both approaches should be used to identify all possible risks.

For each identified risk, the failure occurrence and the severity of the consequence have to be evaluated. Especially for hardware components this is sometimes a tedious task, since some occurrence rates cannot be calculated easily. Based on these values, a risk graph can be created, as shown for example in Fig. 5.5. Acceptable risks do not require further actions, but have to be monitored in the further development process. Risks in the → As Low As Reasonably Practicable (ALARP) area are considered relevant, but cannot be dealt with without an unreasonable effort. Risks in the non-acceptable area have to be analyzed and at least transferred to the ALARP area. Please note that the definitions of the different axes in the risk graph and of the acceptable, ALARP, and non-acceptable areas have to be

defined for each project or system separately, based on the above-mentioned standards or company rules.
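The mapping from occurrence and severity onto the three regions of the risk graph can be sketched as a simple lookup. The category names and class boundaries below are hypothetical; as stated above, they have to be defined per project from the applicable standards:

```python
# Hypothetical ordinal scales; real projects define these per standard.
OCCURRENCE = ["improbable", "remote", "occasional", "probable", "frequent"]
SEVERITY = ["negligible", "marginal", "critical", "catastrophic"]

def risk_class(occurrence, severity):
    """Classify a risk by its position in the risk graph."""
    score = OCCURRENCE.index(occurrence) + SEVERITY.index(severity)
    if score <= 2:
        return "acceptable"
    if score <= 4:
        return "ALARP"
    return "non-acceptable"
```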

For each risk, there are three possibilities to deal with it, i.e. to move it into more acceptable areas of the risk graph:


After each risk is dealt with, the acceptability has to be evaluated again, i.e. the changes in the risk graph have to be analyzed. Obviously, moving risks into lower-risk regions will consume effort and costs. These considerations can lead to ethical dilemmas when severe harm to humans has to be weighed against financial risks like damage compensations. For this reason, → ALARP classifications for economic reasons are forbidden by ISO 14971 for medical devices starting with the 2013 edition.

The evolution of risks has to be monitored throughout the whole design and production process of a system. If all steps involved are considered, it is obvious that the design of safe systems will have a significant impact on the overall development costs of a haptic system, and that a thorough knowledge of all components is needed to find possible risks in the development.

## **5.5 Requirement Specifications of a Haptic System**

The defined application, together with the assumptions from the customer and the interaction analysis, allows one to derive individual requirements for the task-specific haptic system. These system requirements should be complemented with applicable safety and other standards to form a detailed requirement list. This list should not only include a clear description of the intended interactions; also the intended performance measures (Chap. 13) and as many technical details as possible about the overall system and the included components should be documented. As stated above, the technical solution clusters shown in the preceding Sect. 5.3 will also yield possible requirements depending on the intended class of applications.


**Table 5.1** Example of a system specification for a haptic device

(continued)


**Table 5.1** (continued)

Table 5.1<sup>7,8,9</sup> gives an example of such a requirement list with the most relevant technical parameters of a haptic system. However, it is meant as an orientation and has to be adapted to the specific situation by removing obsolete entries and adding application-specific aspects.

Additionally, a system specification includes references to other standards and special requirements relating to the product development process. Among others, these are the costs of the individual device, the design process itself, and the number of devices to be manufactured in a certain time frame. Additionally, the time of shipment, visual parameters for the design, and safety-related issues are usually addressed.

## **5.6 Haptic Design of Mechanical Controls**

Chapter 4 described how the use of simulation is of advantage regarding development time and effort. The following sections show basic relations between technical parameters and the subjective behavior of rotary and translatory switches. This chapter is a "how-to" guide regarding how haptic systems could or should feel, and how ideal haptic designs can be reached. Starting with rotary switches, which are used to describe and explore haptic characteristics, the transition to push buttons is made, showing the influences and differences deriving from the event-based perception that plays an important role in the haptic design of devices. The overall content relies on the dissertation [24].

## *5.6.1 Rotary Switches*

Typically, rotary devices are described by a torque-versus-angle characteristic. Due to the resulting shear forces on the finger tips, it makes sense to derive these forces as a reference level in order to get a uniform force level when dealing with different knob diameters (*F*shear = torque ÷ ½ diameter). To be clear about which devices we are talking about, and because it is the standard for rotary devices, we use torque instead of shear force. Figure 5.6 shows the structure of a mechanical rotary switch.
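The conversion in the parenthesis can be written out explicitly; SI units (Nm, m, N) are assumed:

```python
def shear_force(torque, diameter):
    """F_shear = torque / (diameter / 2): shear force at the finger tip
    for a given knob torque, making knobs of different size comparable."""
    return torque / (diameter / 2.0)

# the same 20 mNm detent torque felt on two knob sizes:
f_small = shear_force(0.020, 0.010)   # 10 mm knob
f_large = shear_force(0.020, 0.040)   # 40 mm knob
```

The smaller knob yields the higher shear force (4 N vs. 1 N here), which is why the knob diameter has to be fixed before force levels can be compared.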

A spring-driven tappet presses against the cam disc, generating torque. The shapes of the cam disc and the tappet define the torque over position, which is the major parameter defining the haptic behavior of the system. The spring itself is in general "just"

<sup>7</sup> R: requirement, W: wish.

<sup>8</sup> The combination of requirements and wishes (R and W) may be used for almost any element of the system specification. It is recommended to make use of this method, but due to clarity in the context of this book this approach of double-questions is aborted here.

<sup>9</sup> A "haptic loop" is a complete cycle including the output of the control-variable (in case of simulators this variable was calculated the time-step before) and the read operation on the measurement value.

**Fig. 5.6** Mechanical setup of a rotary switch [24]

relevant for the overall torque level. Therefore, the main haptic behavior is defined by the cam disc and the tappet, while the force level can be adjusted with the spring parameters. Additionally, friction has a strong influence and needs to be kept at a reasonable level so as not to affect the system negatively. Even the construction of the system strongly influences this particular parameter.

#### **5.6.1.1 Rest Position and Transition Point**

The most important issue is the orientation within the torque characteristics. Figure 5.7 shows a simplified characteristic curve without the influence of friction.

The torque-versus-angle characteristic is not intuitively readable. For example, the rest position of the switch is often interpreted to be in a local minimum of the curve. In fact, when looking at the details, the rest position is located in a zero-

**Fig. 5.7** Simplified characteristic curve without friction, showing the basic points for orientation as well as the directions of the user interaction and the force directions

torque position; otherwise it would move due to the remaining forces. Due to the effective direction of movement resulting from the torque's sign, a positive torque, for example, moves the knob to the left (clockwise), while a negative torque moves it to the right (counterclockwise).

Two specific types of points result from this at the zero crossings of the curve. If both torques, positive and negative, point towards a zero crossing, this point is stable and becomes a rest position, in which the switch remains until it is forced out by external (user) forces. The second type is the opposite: both torques point away from the zero crossing, so a little deviation from the zero-torque level leads to an increasing torque pulling the knob away from it. It is therefore not a stable position like the rest position; instead it typically actively leads the knob from one stable position to the next, and we call it the transition point. This is the typical "changing point" at which the user recognizes the physical barrier, also called "detent", and its change from one position to the next.

In conclusion, a zero crossing on a rising flange is typically a rest position, while a zero crossing on a falling flange is a transition point of the curve.
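This classification rule can be applied directly to a measured curve. The sketch below scans a sampled torque-angle characteristic for sign changes and labels each zero crossing according to its flange (sign convention as in Fig. 5.7); the linear interpolation of the crossing angle is an illustrative choice:

```python
def classify_zero_crossings(angles, torques):
    """Label each zero crossing of a sampled torque-angle curve:
    rising flange -> rest position, falling flange -> transition point."""
    found = []
    for i in range(len(torques) - 1):
        a, b = torques[i], torques[i + 1]
        if a * b >= 0.0:
            continue  # no sign change in this interval
        kind = "rest position" if b > a else "transition point"
        # linear interpolation of the crossing angle
        phi = angles[i] + (angles[i + 1] - angles[i]) * (-a) / (b - a)
        found.append((phi, kind))
    return found
```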

## *5.6.2 Friction*

While the shape of the curve is relevant for the overall feeling (*how* the device moves from one position to the next), friction is an additional parameter which affects that overall feeling and the operation of the device. High friction makes the device feel dull and the detents become imprecise, while low friction can cause beating and vibration of the device when snapping into the rest position. Regarding operational issues, a high amount of friction can lead to the device sticking at the transition points, because the remaining spring force becomes too low to move the device out of these positions. This situation can lead to undefined states, where the device remains stuck between two defined positions. Of course, one could cause this to happen intentionally; this may be relevant for security issues. A steep flange may avoid it altogether, unfortunately contrary to a "good feeling".

What happens to the characteristic curve: friction shifts it vertically, increasing the perceived forces, and because friction always acts against the control's movement, it causes a hysteresis of the measured curve. In short, low friction results in a small hysteresis, high friction in a big one.

The friction value can be derived from the delta of the hysteresis: *F*friction = ½ Δ*F*hysteresis.
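For a measured curve, this rule can be applied sample by sample; averaging over the sweep, as in the sketch below, is an assumption that smooths measurement noise:

```python
def friction_from_hysteresis(torque_fwd, torque_bwd):
    """Estimate the friction torque as half the mean hysteresis delta
    between the forward and backward sweeps, both sampled at the same
    angular positions."""
    deltas = [abs(a - b) for a, b in zip(torque_fwd, torque_bwd)]
    return 0.5 * sum(deltas) / len(deltas)

# two sweeps of an ideal curve, shifted by a constant friction of 2 mNm:
fwd = [t + 2.0 for t in (0.0, 5.0, 0.0, -5.0)]
bwd = [t - 2.0 for t in (0.0, 5.0, 0.0, -5.0)]
```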

The frictional effects described before can be compared to a static offset as shown in Fig. 5.8, mostly generated by the bearings and additionally, by varying amounts, by the tappet and cam disc.

Even the friction between tappet and cam disc shows some very specific behavior that can help to identify the frictional source in a component. The ratio of the diameters of the knob and of the cam disc or the bearing is quite relevant for the influence of the added friction and can be a means to influence it efficiently.

**Fig. 5.8** Friction offset of a right turn characteristic curve, without showing a hysteresis

**Fig. 5.9** Friction influence caused by slider and cam disc

So how does this friction additionally influence the curve? The friction between tappet and disc is not constant. While a flat cam disc gradient typically increases the frictional influence, the gradient of the spring also plays an important role: a more compressed spring leads to a higher spring force and thus higher friction. A steep gradient therefore has comparably low friction.

All the zero crossings, i.e. the rest positions and the transition points, typically exhibit high friction. For the rest position this is not critical, but standing still at the transition point, as previously described, is unwanted.

Figure 5.9 shows the impact of this kind of friction, leading to a flattening of the characteristic curve at the zero crossings, which we call "frictional shoulders". This effect is very practical for identifying the sources of friction by measurement during the development of devices.

#### **5.6.2.1 The Integral-Representation**

As mentioned before, the torque characteristic is not intuitively readable. The question whether there is an intuitive representation is answered in [26] and [24], which describe the integral representation. It shows that the integration of the torque-angle characteristic leads to an intuitively readable description. With this principle it is possible to describe the behavior of the device as well as a basic mechanical derivation of the cam disc. The big advantage of this description is the intuitive readability of the "shape". This helps to distinguish between important and unimportant parameters and indicates the location of problems intuitively, which makes development much more efficient. Equation (5.1) shows the basic mathematical description of the integral representation.

$$I(\varphi) = \int_{\varphi_1}^{\varphi_2} M(\varphi) \,\mathrm{d}\varphi \tag{5.1}$$
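For sampled measurement data, Eq. (5.1) can be evaluated numerically; the cumulative trapezoidal rule used here is one of several reasonable choices:

```python
import math

def integral_representation(angles, torques):
    """Cumulative trapezoidal integration of a sampled torque-angle
    curve, i.e. a numerical evaluation of Eq. (5.1)."""
    integral = [0.0]
    for i in range(1, len(angles)):
        step = (angles[i] - angles[i - 1]) * (torques[i] + torques[i - 1]) / 2.0
        integral.append(integral[-1] + step)
    return integral

# integrating one sine-shaped detent yields the cosine-like shape
# discussed for Fig. 5.10:
phis = [i * 2 * math.pi / 100 for i in range(101)]
I_vals = integral_representation(phis, [math.sin(p) for p in phis])
```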

To prove the hypothesis, [24] executed several tests. Figure 5.10 shows examples of basic characteristics displayed to the subjects by a haptic interface. The diagrams on the left show the torque representations and those on the right show the associated integral representations. The subjects had to choose the intuitively fitting representation, and they significantly selected the integral representation. As an example, against all expectations based on the torque representation, the sine (a) and triangle (b) characteristics both feel comparably smooth and "sine-like", the triangle even a little "weaker" than the sine. The triangle, expected to be crisp and sharp, did not fulfill any of those expectations.

The integral representation shows a fitting picture: the integrated sine shape results in a cosine, and the integrated triangular shape results in parabolic segments that are very similar to the sine shape. In addition, the area under the triangle is smaller than under the sine and its maximum in the integral representation is slightly lower than that of the sine, which fully fits the results derived from the pair-comparison studies.

Another example: the sawtooth shapes, expected to feel sharp on one side, fit well with the integral representation, which describes the behavior very intuitively. Finally, the square shape leads to a triangular feeling, which is also represented correctly by the integral.

#### **5.6.2.2 Identification of Parameters for Rotary Haptic Devices**

Knowing the integral representation is the basis for identifying relevant parameters, because the transformation helps in understanding the perceptional influence. For technical reasons, the torque representation is still the describing low-level representation and is used for the overall parametrization. The chosen torque parameters are shown in Fig. 5.11; they are mainly the rising and falling slopes of a rectangular shape.

**Fig. 5.10** Shapes that were presented by [24] to the subjects to identify the intuitive way to represent haptic impressions graphically

Changing their steepness independently can convert the rectangular shape into all the shapes shown in Fig. 5.10, which turned out to be the most relevant ones. The amplitude appeared to be an overall parameter not influencing the character or feeling of the shape. It influences the overall force level or resistance, which allows using it to adjust the ease of movement without changing the basic character of the effect.

To identify the parameters and their influence, the different characteristics were presented to subjects on the haptic display for rotary switches. Questionnaires as well as pair-comparison tasks helped to identify and quantify the parameters. Figure 5.12 shows the variety of the presented parameters in integral representation. Looking at the rest position, the "width" or "precision" of the device was presented quite realistically. Relating the parameters to the adjectives shows that a steep rising slope at the rest position increases precision and hardness, while a steep falling slope at the transition point reduces controllability and increases hardness. The integral representation displays this

**Fig. 5.11** Variation of torque-parameters used for the haptic representations [24]

**Fig. 5.12** Integral representation of the varied haptic representations [24]

in the width of the rest position, i.e. more precise when it is narrow and less precise when it is wide. Moreover, the transition point is controllable when a round shape leading to the next position is given, while it is not controllable when the shape forms a sharp peak.

Bringing all parameters together: the period, which describes the "length" of the detent, the force amplitude, and the slopes at the rest position and the transition point are the main parameters of the subjective impression.

Thus, the steepness of the rising slope influences the impression of a precise rest position, where a 5% slope showed the best precision.

The hardness is influenced by both flanges: the steeper both are, the harder the impression. Furthermore, the area under the integral representation seems to behave proportionally to the hardness impression. A rising flange of 5% and a falling flange of 50% were concluded to be an ideal pairing.

Combining both slope parameters means that increasing the precision via the rest position's slope automatically increases the hardness effect, whereas the falling slope at the transition point only affects the hardness.

The falling slope furthermore influences the controllability and the compliance of the device: a steep falling slope results in bad controllability of the switch. This can be explained by the hard change of the torque at that point. The device works against the user's movement until the transition point is reached; at this position the torque suddenly changes its sign, pulling the knob into the user's moving direction. The steeper the slope, the stronger the change, which is also, from a control theory point of view, a very difficult task to handle.

The amplitude of the overall signal influences the overall impression only proportionally and does not change the relations between the adjectives.

The length of the detent also strongly influences the overall impression of the signal. A reduced detent angle reduces the influence of the parameters, comparable to a reduction of the resolution, because the shortened angular ranges can no longer represent flat slopes.

#### **5.6.2.3 Asymmetry**

One very specific "trick" is the use of asymmetric characteristics. Figure 5.13 shows a torque representation in which, due to different angular ranges, the area under the curve is bigger for a left turn than for a right turn, thus requiring more energy to overcome. Figure 5.14 shows the integral representation, which clearly displays the subjective behavior of the device by a curve descending to the right side.
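The energy asymmetry can be quantified from a sampled curve by integrating the positive and negative torque lobes separately; which lobe opposes which turning direction follows the sign convention of Fig. 5.7, and the trapezoidal integration is an illustrative choice:

```python
def detent_energies(angles, torques):
    """Integrate |M| d(phi) separately over the positive and the
    negative torque lobes of one detent period. With an asymmetric
    characteristic the two energies differ."""
    e_pos, e_neg = 0.0, 0.0
    for i in range(1, len(angles)):
        dphi = angles[i] - angles[i - 1]
        m = (torques[i] + torques[i - 1]) / 2.0   # trapezoidal mean
        if m > 0.0:
            e_pos += m * dphi
        else:
            e_neg += -m * dphi
    return e_pos, e_neg
```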

An example of an active electromechanical rotary input device is described in an Audi patent [25]. The advantage of this specific asymmetric behavior is that it generates the illusion of a descending direction while using the whole bandwidth of the actuator for every detent. It does not require a higher torque bandwidth to decrease the detent torque from detent to detent in order to generate a decreasing impression, because the asymmetry in energy provides this feature. Furthermore, the angular range is without limits; it is possible to descend indefinitely. As mentioned, the classical strategies need a reduction of the torque for each detent, so that the whole range is limited by the bandwidth of the actuator and only a part of the overall torque of the motor is used to generate the torque difference between detents.

An example of passive mechanical haptics is the Mercedes-Benz light switches, which have used an asymmetric characteristic in the market since 2012. The use of asymmetry for creating a haptic barrier between the parking-light and driving-light sections, to make operation more intuitive, is described in [8].

**Fig. 5.13** Asymmetric torque characteristic with angular asymmetry [24]

**Fig. 5.14** Integral representation that intuitively shows the behavior of the asymmetrically designed control element [24]

#### **5.6.2.4 Construction of the Cam Disc**

The basic development of a haptic characteristic using the integral characteristic as an intuitive guide was described before. As previously mentioned, the integral relation also serves as a guideline in designing and constructing the mechanical shape of the cam disc and the tappet itself.

From the basic mathematical description of the integral representation it follows that d*I*(ϕ)/dϕ = *M*(ϕ), i.e. the gradient of the cam shape is proportional to the torque *M*. We are considering the following elements: the shape of the cam disc, the shape of the tappet, the frictional pairings, and the stiffness and pretension of the spring element. The radius between the rotational center and the contact point between tappet and cam disc is relevant, as well as the radius of the end effector/cap where the finger is grasping.

The gradient at the contact point is nothing more than the derivative of the cam shape, so the relation between shape and force is the required angle α of the shape resulting in the specific force *F*finger at this point. The gradient angle α of the cam at the contact point is calculated as shown in Eq. (5.2). It is a simplified version to explain the basic principle:

$$\alpha = \frac{1}{2} \arcsin\left(2 \cdot \frac{F_{\text{finger}}}{c_{\text{spring}} \cdot l_{\text{spring}} + F_{\text{spring}0}} \cdot \frac{r_{\text{finger}}}{r_{\text{cam}}}\right) \tag{5.2}$$

Equation (5.2) calculates the gradient α at position ϕ from the required finger force *F*finger(ϕ) at this position, or equivalently from its torque (*M*finger(ϕ) = *F*finger(ϕ) · *r*finger). The parameters of Eq. (5.2) are:


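Equation (5.2) translates directly into code; the parameter names follow the symbols of the equation (spring stiffness c_spring, spring compression l_spring, pretension force F_spring0, and the two radii), and consistent SI units are assumed:

```python
import math

def cam_gradient_deg(f_finger, c_spring, l_spring, f_spring0,
                     r_finger, r_cam):
    """Gradient angle alpha of the cam at the contact point, Eq. (5.2),
    returned in degrees."""
    ratio = (2.0 * f_finger / (c_spring * l_spring + f_spring0)
             * r_finger / r_cam)
    return math.degrees(0.5 * math.asin(ratio))
```

Evaluating this point by point over the required force profile F_finger(ϕ) yields the gradient, and hence the shape, of the cam disc.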
#### **5.6.2.5 Correction of the Tappet Geometry**

The calculations assume a point contact between tappet and cam, i.e. a tappet diameter of zero, which is quite unrealistic. Especially smaller-sized systems are strongly influenced by the tappet. The radius of the tappet causes a shift of the contact point, as shown in Fig. 5.15.

A very simple principle to partly consider this influence also shows how the correction might take place. Equation (5.3) describes the shift in angular direction that has to be considered. The vertical influence of the shift also has to be considered, but is not described here.

$$s_v = r_{\text{tappet}} \cdot \sin(\alpha) \tag{5.3}$$

Equation (5.3): correction of the contact point caused by the tappet's radius *r*tappet.
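The correction of Eq. (5.3) is a one-liner; the angle is expected in radians here:

```python
import math

def contact_shift(r_tappet, alpha_rad):
    """Shift of the contact point caused by the tappet radius,
    Eq. (5.3): s_v = r_tappet * sin(alpha)."""
    return r_tappet * math.sin(alpha_rad)
```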

**Fig. 5.15** Principle of the tappet's influence on the cam construction and the principle of the relation between gradient of the cam shape and the torque characteristic

## **5.7 Push Buttons**

The second large group of control elements are push buttons. The differences between translatory (push) and rotary controls are quite strong: the linear movement is dominant, the characteristic force-travel curves look different, as do the behavior, the use case (positioning vs. activating), and the mechanical principle behind it. A closer look at the details indicates that the principles and descriptions differ considerably but are still compatible. The differences appear in the technical ranges and in the type of psychophysical stimuli being useful in the two domains. They even help in understanding (vibrotactile) active haptic systems more clearly.

## *5.7.1 Characteristic Curve*

The typical characteristic curve of push buttons, describing force versus travel (Fig. 5.16), is comparable to the rotary torque versus angle/travel characteristics.

A basic characteristic of the push button is that it has a single rest position. This rest position has to be reached by the push button's mechanics on its own because, due to the cap's geometry, the user typically cannot push it back there. With a rotary switch, in contrast, the user positions the knob at a specific detent; the switch does not have to return to the same position but can move on to the next one, which is why its characteristic typically has several zero crossings and transition points. The user, however, can control the push button only in the pushing direction. The device therefore always has to provide a restoring force acting back towards the rest position. Anyone who has dealt with a sticking button knows how difficult it is to get the cap back to its rest position. For this reason, the force always has to be positive, or, because of friction, even higher. The typical characteristic curve is thus located in the first quadrant, while the rotary curve typically occupies the first and second, or even all four, quadrants.

There is a difference between the measured data and a typical specification, because the measurement probe first has to approach the cap before the measurement starts. The probe therefore does not register the relevant force until contact is made. As a result, the measured characteristic shows some travel at zero force before contact and no negative forces that would push the button back into the rest position while the cap is being pulled (compare Fig. 5.16). The specification, in contrast, also has to define this behavior, to keep the button stable and free of jiggling. The origin of Fig. 5.17 shows exactly this: the curve crosses the zero level and continues with the same steepness into negative force, generating a stable rest position. In this respect, the behavior is comparable to the rest position of rotary switches.

## *5.7.2 The Snap*

Looking at the push button's characteristic, its most important parameters describe the snap and its position. The snap is the relevant event communicating to the user that the goal, the activation of the function, has been reached.

**Fig. 5.16** Typical segments of a force-travel characteristic curve of a push button appearing during measurement

**Fig. 5.17** Basic force travel parameters of push buttons without frictional hysteresis

The snap is the falling flank seen in the force-travel depiction. Typically, its vertices, i.e. the snap's end points, are referenced to the point of origin, for reasons of measurement practice. From a perceptional point of view, however, defining the snap relatively by the steepness and height of this flank is a much better choice. Such relative tolerance ranges can even result in much higher production efficiency, because factors that affect the overall perception less can be assigned wider tolerances.

For example, even if the switching point (*F*S/*x*S) moves in the *x*- or *F*-direction, the snap may retain the same haptic quality. Therefore, Δ*F*/*F*S and Δ*x* are the parameters to prioritize. Figure 5.17 shows the parameter set.
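A minimal sketch of how the relative snap parameters Δ*F*/*F*S and Δ*x* could be extracted from measured data. The detection rule used here (switching point as the global force maximum, snap end as the following force minimum) is a simplifying assumption, not a prescription from the book:

```python
def snap_parameters(travel, force):
    """Relative snap parameters from a measured force-travel curve.

    Takes parallel lists of travel and force samples, locates the
    switching point (F_S, x_S) as the global force maximum and the snap
    end as the following force minimum, and returns the relative force
    drop dF/F_S and the snap travel dx.
    """
    i_s = max(range(len(force)), key=lambda i: force[i])   # switching point
    i_e = min(range(i_s, len(force)), key=lambda i: force[i])  # end of the drop
    F_S = force[i_s]
    dF = F_S - force[i_e]
    dx = travel[i_e] - travel[i_s]
    return dF / F_S, dx
```

Real measurement data would additionally need smoothing and hysteresis handling before such a rule becomes robust.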

When the tests on the psychophysics of rotary controls were repeated for push buttons, some further interesting effects presented themselves besides the subjective parameter estimation. While flat and longer snaps were rated comparably to the rotary controls, steep and short snaps received a very different rating from the subjects.


## *5.7.3 Event-Based Perception*

This observed difference in perception fits perfectly with the phenomenon of event-based perception described in [13] and confirms it for linear control elements.

Looking at a force-versus-time measurement, Fig. 5.18 shows two haptic events, snap and back-snap. Both show a strongly damped vibration capable of stimulating even the Pacinian mechanoreceptors that are sensitive to high vibrotactile

**Fig. 5.18** Dynamic snap and back-snap of a push button force versus time

frequencies and accelerations. They may also be able to trigger reflexes. In this sense, a sharp snap such as that of a micro switch evokes a reflex-like perception, merely signalling that something happened, while a long snap is perceived more exploratively: visually speaking, like a smooth, long "hill" in the control's behavior, it reveals more detail, such as a shape or resistance, and catches more attention through exploration.

In conclusion, the kind of snap provides either a reflex-like, quick event or a detailed, shape-like one, differing in content, speed and mental load.

## *5.7.4 Relevance of the Probes' Impedance*

In conclusion, this event-based perception mechanism shows that the classical force-versus-travel description does not capture all the necessary information. If vibrations appear within a characteristic curve, interpreting them is therefore essential. Zhou et al. [34] describe an Interaction-based Dynamic Measurement (IDM) principle that measures with a human-finger-like probe and relates the specific vibrations to subjective impressions. They found that the mechanical impedance of the finger, acting at the working point, is highly important for obtaining realistic vibrations of the event; if it deviates, the measured vibration differs from reality. The possible impedance range of a probe extends from a stiff static probe to no probe influence at all. The former suppresses nearly all vibrations, so that only zero-Hz components, such as static forces, remain in the measurement data, while the latter, contactless case, for example measuring with a laser vibrometer, does not affect the device and lets it vibrate at its natural frequency.

If a probe or human finger is in contact with the device, it will detune it. For this reason, it makes sense to use a probe with a mechanical impedance comparable to that of a human finger. More details regarding IDM are given in Chap. 14. *Syntouch* realizes a comparable approach with a different goal [1]: they mimic a human fingertip to quantify surface haptic properties, such as the identification of materials and their haptic dimensions [33]. Besides specific mechanical impedances, the probe includes the surface's fingerprint to bring the system to a realistic working point.

## **Recommended Background Reading**

[13] Kuchenbecker, K. J.; Fiene, J. & Niemeyer, G.: **Improving contact realism through event-based haptic feedback**. In: IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 2, pp. 219–230, March-April 2006, doi: 10.1109/TVCG.2006.32

*A key paper describing the origins of enhancing contact sensations by highly dynamic haptic accelerations. An inspiration for many researchers and still valid in its fundamental approach for many teleoperation systems.*

[20] Nitsch, V. & Färber, B.: **A Meta-Analysis of the Effects of Haptic Interfaces on Task Performance with Teleoperation Systems**. In: IEEE Transactions on Haptics, 2012.

*Reviews the effect measures of several evaluations of VR and teleoperation systems. Recommended reading for the design of haptic interaction in teleoperation and VR applications.*

[14] Magnusson, C. & Brewster, S.: **Guidelines for Haptic Lo-Fi Prototyping**, Workshop, 2008.

*Proceedings of a workshop conducted during the HaptiMap project with hints and examples for low-fi prototyping of haptic interfaces.*

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 6 General System Structures**

**Alireza Abbasimoshaei and Thorsten A. Kern**

**Abstract** Haptic systems exhibit several basic structures defined by the mechanical in- and outputs, commonly known as impedance or admittance system structures. This chapter describes these structures in open-loop and closed-loop variants and presents commercial realizations as well as common applications of these structures. Based on the different properties of the structures and the intended application, hints for the selection of a suitable structure are given.

When starting the design of haptic devices, the engineer has to deal with the general structures they can be composed of. Haptic devices of similar functionality can consist of very different modules. There are four big classes of possible system designs:

- open-loop impedance controlled systems,
- closed-loop impedance controlled systems,
- open-loop admittance controlled systems, and
- closed-loop admittance controlled systems.

## **6.1 Open-Loop and Closed-Loop Systems**

An open-loop system is a system without feedback. Disturbances therefore appear directly at the output, and the input does not react to them (Fig. 6.1a).

T. A. Kern e-mail: t.kern@hapticdevices.eu

Christian Hatzfeld deceased before the publication of this book.

A. Abbasimoshaei (B) · T. A. Kern

Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany e-mail: al.abbasimoshaei@tuhh.de

In closed-loop systems, by contrast, the output influences the input: a feedback signal returns the last output to the input, and the input is adjusted accordingly (Fig. 6.1b). Such a system can therefore deal with disturbances better.

**Fig. 6.1** Different loop states

## **6.2 Open-Loop and Closed-Loop Systems Comparison**

The most important difference between these two types of systems lies in the use of the error signal. Closed-loop systems are able to minimize the error and are thus more precise and less sensitive to disturbances. Open-loop systems, however, are simpler and easier to implement. They are mostly used in combination with closed-loop systems.
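The difference can be illustrated with a minimal numerical sketch, assuming a first-order plant with a constant disturbance; the plant model, gains and values are arbitrary illustration choices:

```python
def simulate(p_gain, setpoint=1.0, disturbance=0.3, steps=2000, dt=0.01):
    """First-order plant  dy/dt = u + d - y, integrated with explicit Euler.

    p_gain = 0 reproduces the open-loop case (u is fixed to the setpoint,
    no error signal is used); p_gain > 0 closes the loop with a
    proportional controller acting on the error.
    """
    y = 0.0
    for _ in range(steps):
        u = setpoint if p_gain == 0 else p_gain * (setpoint - y)
        y += dt * (u + disturbance - y)
    return y

# the open loop settles at setpoint + disturbance, the closed loop
# shrinks the steady-state error to disturbance / (1 + p_gain)
open_loop_error = abs(1.0 - simulate(0.0))
closed_loop_error = abs(1.0 - simulate(20.0))
```

The open loop cannot distinguish the disturbance from the command, while the feedback loop reacts to the resulting error, which is exactly the distinction drawn above.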

## **6.3 Impedance and Admittance Concept**

The word impedance derives from the Latin "impedire", which means "to hinder". Impedance is a kind of resistance in mechanical and electrical systems. In an electric circuit, the current corresponds to the number of electrons passing through the circuit in a certain time, and the voltage is the energy that drives them through it. The resistance of the circuit hinders this flow: if more voltage is needed to reach a certain current, or if a given voltage produces a smaller current, the resistance is larger. Resistance (impedance) is thus proportional to voltage and inversely proportional to current. This is the concept of electrical impedance. A mechanical analogy is a spring, in which force corresponds to voltage, spring stiffness to impedance, and spring displacement to current. To reach a certain displacement of the spring, pushing with more

**Fig. 6.2** Block-diagram of an open-loop impedance controlled haptic system

force indicates a larger stiffness, and vice versa. This stiffness corresponds to the mechanical impedance (Z) of mechanical systems: the resistance of the system against motion under force.

Therefore, impedance controlled systems are based on the transfer characteristics of a mechanical impedance *Z* = *F*/*v* and are typical of the structure of many kinaesthetic devices. They generate a force as output and measure a position as input. Admittance controlled systems, instead, are based on the definition of a mechanical admittance *Y* = *v*/*F*, describing transfer characteristics with force input and velocity output. These systems generate a position change as haptic feedback and take the force reaction of the user as input. In a closed-loop controlled system, this force is measured and used to correct the position. The analysis is analogous for rotary systems, with torque and angle replacing force and position. For readability, the following descriptions concentrate on translational movements and devices only.
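The two concepts can be contrasted in a small sketch: the impedance case renders a virtual spring-damper (motion in, force out), the admittance case integrates a virtual mass-damper (force in, motion out). All names, values and the explicit-Euler integration are illustrative assumptions:

```python
def impedance_output_force(x, v, k=500.0, b=2.0):
    """Impedance scheme: measure motion (x, v), command a force.

    Renders a virtual spring-damper: F = -(k*x + b*v).
    """
    return -(k * x + b * v)

def admittance_step(F, state, m=0.5, b=2.0, dt=0.001):
    """Admittance scheme: measure a force, command motion.

    Advances a virtual mass-damper  m*a + b*v = F  by one Euler step
    and returns the new (position, velocity) to be displayed.
    """
    x, v = state
    a = (F - b * v) / m
    v += a * dt
    x += v * dt
    return x, v
```

In each control cycle of a real device, one of these two mappings sits between the sensor and the actuator.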

## **6.4 Open-Loop Impedance Controlled Devices**

Open-loop impedance controlled systems are based on a quite simple structure (Fig. 6.2). A force signal *S*<sup>F</sup> is transferred via a driver *G*ED into a force-proportional energy form *E*F. This energy is then converted into the output force *F*<sup>0</sup> by an actuator *G*D1. This output force interferes with a disturbing force *F*noise, which results from the movements *x*out generated by the user and the mechanical properties of the kinematic design *G*D3. Typically, such disturbing forces are friction and inertia. The sum of both forces is the actual output force *F*out of the impedance controlled system. Usually, there is an optional part of the system, a sensor *G*D2, which measures the movements and the actual position of the haptic system.
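The superposition of actuator force and disturbance force described above can be written down directly. The purely viscous friction model and all parameter values are simplifying assumptions for illustration:

```python
def output_force(F_cmd, v_user, a_user, m_mech=0.1, b_fric=0.8):
    """Force actually felt at the handle of an open-loop impedance device.

    The commanded actuator force F_cmd is superposed with the disturbance
    F_noise caused by inertia and (viscous) friction of the kinematics
    reacting to the user's movement (velocity v_user, acceleration a_user).
    No feedback loop corrects this disturbance.
    """
    F_noise = -(m_mech * a_user + b_fric * v_user)
    return F_cmd + F_noise
```

The sketch makes the open-loop weakness explicit: the faster the user moves, the more the felt force deviates from the commanded one.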

#### **Examples**: *Universal Haptic Interfaces*

**Fig. 6.3** Example of an open-loop impedance controlled system with **a** serial-kinematic (*Geomagic Touch*, *3D Systems geomagic Solutions*) and **b** parallel-kinematic (5 DOF Haptic Wand, *Quanser*) structure. Images courtesy of *3D Systems geomagic Solutions*, Morrisville, NC, USA and *Quanser*, Markham, Ontario, Canada, used with permission

Open-loop impedance controlled systems are the most frequently available devices on the market. As a result of their simple internal design, a few standard components suffice to build a quite useful device with adequate haptic feedback, provided at least some care is taken to minimize friction and masses. Among the cheapest designs available on the market today, the *PHANTOM Omni*, recently renamed *geomagic Touch* (Fig. 6.3a) and connected via FireWire to the control unit, is the best known. It is frequently used in research projects and for the creative manipulation of 3D data during modeling and design. In the higher-price segment there are numerous products, e.g. the devices of the company *Quanser* (Markham, Ontario, Canada). These devices are usually equipped with a real-time *MatLab*™ (*The MathWorks*, Natick, MA, USA) based control station, adding some flexibility for the end customer to modify the internal data processing. The doubled pantograph kinematics of the "HapticWand" (Fig. 6.3b) allow force feedback in up to five degrees of freedom, with three translations and two rotations. Although all these devices are open-loop impedance controlled, the software usually includes simple dynamic models of the mechanical structures. This allows some compensation of inertial and frictional effects of the kinematics, based on the continuous measurement of positions and velocities.

## **6.5 Closed-Loop Impedance Controlled Devices**

Closed-loop impedance controlled systems (Fig. 6.4) differ from open-loop impedance controlled systems in that the output force *F*out is measured by a force sensor *G*FSense and used as a control variable to generate a difference value Δ*S*<sup>F</sup> with respect to the nominal value. An additional component is typically a controller *G*CD in the control path, optimizing the dynamic properties of the feedback loop. The closed loop makes it possible to compensate the force *F*noise resulting from the mechanics of the system. This has two considerable advantages: at idle, the system behaves much less frictionally and more dynamically than a similar open-loop controlled system, and, since the closed-loop design allows some compensation of inertia and friction, the whole mechanical setup can be designed stiffer. It has to be noted, however, that part of the maximum output power of the actuators is then used to compensate the frictional force, which makes these devices slightly less powerful than an open-loop design.
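The compensation effect of the force feedback loop can be sketched with a simple integral force controller acting against a constant friction force. The controller type, the static plant model and all values are illustrative assumptions, not the structure of any specific device:

```python
def force_loop(F_des=1.0, F_fric=0.4, k_i=50.0, steps=5000, dt=0.001):
    """Integral force controller compensating a constant friction force.

    Output force model: F_out = u - F_fric, with u the actuator force.
    The measured F_out is fed back and the error integrated (gain k_i),
    so the actuator learns to supply F_des + F_fric.
    """
    u = 0.0
    for _ in range(steps):
        F_out = u - F_fric        # what the force sensor would measure
        u += k_i * (F_des - F_out) * dt
    return F_out
```

The converged actuator effort exceeds the desired output force by the friction force, which is exactly the power penalty mentioned above.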

#### **Example**: **Force Dimension** *Delta Series*

Closed-loop impedance controlled systems are usually used in research projects and as special-purpose machines. The delta series of *Force Dimension* (Fig. 6.5) is one example, as it is a commercial system with the option to buy an impedance controlled version. In this variant, force sensors able to measure interaction forces in the directions of the kinematics' degrees of freedom are integrated into the handle. Closed-loop impedance controlled systems are technologically challenging: on the one hand they have to comply with a minimum of friction and inertia; on the other hand, with little friction, the closed loop tends to become unstable, as an energy exchange between user and device may build up. This is why controllers typically monitor the passive behavior of the device. Additionally, the force sensor is a cost-intensive element. In the case of the delta device, the challenge of minimizing moving masses has been met by a parallel-kinematics design.

**Fig. 6.4** Block-diagram of a closed-loop impedance controlled system with force-feedback

**Fig. 6.5** Example of a parallel-kinematic closed-loop impedance controlled system (delta3, *Force Dimension*). Image courtesy of *Force Dimension*, Nyon, Switzerland, used with permission

## **6.6 Open-Loop Admittance Controlled**

Open-loop admittance controlled systems (Fig. 6.6) provide a positional output. Proportionally to the input value *S*x, a control chain with energy converter *G*ED and kinematics *G*D1 provides a displacement *x*0. This displacement interferes with a disturbance variable *x*noise, which depends on the mechanical properties of the kinematics *G*D3 and is a direct reaction to the user's input *F*out. In practice, an open-loop admittance controlled system typically shows a design that allows the influence of the disturbance variable to be neglected. Another optional element of open-loop admittance controlled systems is the measurement of the output force with a force sensor *G*FSense without closing the control loop.

## *Example*: **Braille Devices**

Open-loop admittance controlled systems are used especially in the area of tactile displays. Many tactile displays are based on pin arrays, i.e. they generate spatially distributed information by lifting and lowering pins of a matrix. These systems originate from Braille devices (Fig. 6.7), which code letters in a tactile, readable, embossed printing. A variety of actuators is used for tactile pin-based displays: electrodynamic, electromagnetic, thermal, pneumatic, hydraulic and piezoelectric actuators, and even ultrasonic actuators with transfer media.

## **6.7 Closed-Loop Admittance Controlled Devices**

Closed-loop admittance controlled devices (Fig. 6.8) provide a positional output and a force input to the controlling element, identical to impedance controlled devices. The mandatory measurement of the output force *F*out is used as control variable *S*<sup>S</sup> to calculate the difference Δ*S*<sup>F</sup> from the commanded value *S*F. This difference is then fed through the controller *G*CD into the control circuit. As a result, the displacement *x*out is adjusted until the desired force *F*out is reached.
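The principle can be sketched as two nested steps: the measured user force is mapped through a virtual stiffness to a target position, which an inner position loop (here simplified to first order) drives the device to. The gains and the loop simplification are assumptions for illustration:

```python
def admittance_control(F_user=2.0, k_virtual=100.0, k_pos=400.0,
                       steps=4000, dt=0.001):
    """Closed-loop admittance sketch for a constant user force.

    Outer mapping: the virtual stiffness converts the measured force
    into the displacement a spring of that stiffness would allow.
    Inner loop: a first-order position controller (gain k_pos) drives
    the device position x towards that target.
    """
    x = 0.0
    for _ in range(steps):
        x_target = F_user / k_virtual       # displacement of the virtual spring
        x += k_pos * (x_target - x) * dt    # stiff inner position loop
    return x
```

A large `k_virtual` lets the device feel stiff to the user, since even large forces produce only small displacements; this is the sense in which the scheme excels at high stiffness.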

A variant of a closed-loop admittance controlled device is shown in Fig. 6.9. Closed-loop admittance controlled devices show considerable advantages for many applications requiring large stiffnesses. However, the force sensors *G*FSense are quite complex and consequently expensive components, especially when numerous degrees of freedom have to be controlled. As a variant, the system according to Fig. 6.9 does not use a sensor but only a force-proportional measure, e.g. a current, as control variable. When using a current with electrodynamic actuators, even the reaction of the user can be identified as an additional velocity-dependent value, since it generates an induced voltage.

#### *Examples*: **Universal Haptic Interfaces**

At present, closed-loop admittance controlled systems are the preferred approach to providing high stiffness with little loss in dynamic properties. The idea of haptically hiding the actual mechanical impedance from the user by closing the control loop makes it possible to build serial kinematics with a large workspace. The FCS HapticMaster (Fig. 6.10a) is one such system: one meter high, with three degrees of freedom and a force of up to 100 N. It includes a force sensor at its handle, and the axes are controlled by self-locking actuators. The device's dynamics are impressive despite its size. However, damping has to be included in the controller for safety reasons, resulting in a bandwidth limitation that depends on the actual application.

**Fig. 6.8** Block-diagram of a closed-loop admittance controlled haptic system with force-feedback loop for control

**Fig. 6.9** Block-diagram of a closed-loop admittance controlled haptic system with a feedback loop measuring an internal force-proportional value

**Fig. 6.10** Examples of closed-loop admittance controlled systems in variants with **a** direct force measurement (*HapticMaster*) and **b** measurement of the actual current (*Virtuose 6D35-45*). Images courtesy of *Moog FCS*, Nieuw-Vennep, the Netherlands and *Haption GmbH*, Aachen, Germany, used with permission

Realizations of the variant of closed-loop admittance controlled devices are the *Virtuose* systems from *Haption* (Fig. 6.10b). In these devices, the current of electronically commutated electrodynamic actuators is measured and fed back as a control value. The devices show impressive qualities in displaying hard contacts, but have limited capabilities in simulating soft interactions, e.g. with tissues. The application area of such systems is therefore mainly the professional simulation of assembly procedures in manufacturing preparation.

## **6.8 Qualitative Comparison of the Internal Structures of Haptic Systems**

As haptic human-machine interaction is based on an impedance coupling, it is always the combination of action and reaction, be it via force or position, that has to be analyzed. In fact, without any knowledge of the internal structure of a device, it is impossible to find out whether the system is open-loop impedance controlled, closed-loop impedance controlled or closed-loop admittance controlled. With experience of the technological limits of the most important parameters, such as dynamics and maximum force, an engineer can make a well-founded assumption about the internal structure simply by using the device. Concerning the abstract interface of in- and output values, however, all devices of the above three classes are identical to the user as well as to the controlling instance. Despite this, the technical realizations of haptic systems differ widely in their concrete design, and the parameters influencing this design have to be balanced against each other. Such parameters are:


These parameters and their mapping onto the technical designs are given qualitatively in Fig. 6.11. The impedance generated by a device in absolute values, and the impedance range covered, may serve as one criterion for its performance. Analyzing the systems according to this criterion shows that open-loop admittance controlled systems may have a high impedance, but with little variability within tight borders. Closed-loop admittance controlled systems extend these borders through their ability to modulate the impedance via the feedback loop; depending on the design, they vary in the width of this modulation. In the lower range of realizable impedances the open-loop impedance controlled systems follow. They stand out more by the simplicity of their design than by the size of the impedance range covered. Compared to closed-loop admittance controlled systems, they gain some impedance width at the lower border of impedances. To cover lower as well as higher impedances equally, closed-loop impedance controlled systems should be chosen.

#### **Tactile devices**

Normally, pure open-loop admittance controlled systems are suitable only for tactile devices, as with tactile devices there is usually no active feedback by the user to be measured. The haptic interaction is limited to stresses coupled into the skin of the user's hands. Such devices show a high internal impedance (*Z*D). The dynamics and the displacement resolution are very high.

**Fig. 6.11** Qualitative comparison of the application areas for different device-structures

#### **Kinaesthetic devices**

These can be built with systems allowing a modulation of the displayed impedances. Closed-loop admittance controlled systems excel through the possibility of using mechanical components with high impedances; accordingly, the dynamics of these systems are low (<100 Hz), and a good force resolution is, due to the typical friction, not trivial to realize. Open-loop impedance controlled systems show a wider dynamic range due to the missing feedback loop but, at the same time, a limited impedance range. Only closed-loop impedance controlled systems allow covering a wide impedance range from lowest to very high impedances, whereby with increasing requirements on force resolution the dynamics and the maximum velocities achieved by the control loop are limited and limitations of the measurement technology become noticeable.

The decision on the design of a haptic system has significant influence on the application range and vice versa. On one hand, it is necessary to identify the requirements to make such a decision. For this purpose, it is necessary to ask the right questions and to have an insight into possible technical realizations of single elements of the above structures. This is the general topic of the second part of this book. On the other hand, it is necessary to formulate an abstract system description of the device. An introduction of how to achieve this is given in the following section.

## **6.9 How to Choose a Suitable System Structure**

The selection of a suitable system structure is one of the first steps in the design of a task-specific haptic system. Based on the interaction analysis, one should have sufficient insight into the intended interactions between system and user and should be

**Fig. 6.12** Aid for the decision on the choice of the control structure

able to decide between a mainly tactile and a mainly kinaesthetic device structure. Based on further criteria such as input and output capabilities and the mechanical impedance to be displayed, Fig. 6.12 gives a decision tree for the control structure.

Especially when the application includes interaction in a multi-modal or virtual environment, further additions to the system structure are worth considering, since they promise a large technical simplification while maintaining haptic quality. This includes the approaches of Event-Based Haptics as well as Pseudo-Haptics (Fig. 6.12).

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 7 Control of Haptic Systems**

**Alireza Abbasimoshaei, Thomas Opitz, and Oliver Meckel**

**Abstract** Control engineering is important for making a system more precise and enabling it to reach the desired parameters. This chapter reviews some aspects of control in haptic systems, including advanced forms of technical description, system stability criteria and measures, as well as the design of different control laws in a haptic system. A focus is set on the control of bilateral teleoperation systems, including the derivation of control designs that guarantee stability as well as haptic transparency, and the handling of time delay in the control loop. The chapter also includes an example of the consideration of thermal properties and non-ideal mechanics in the control of a linear stage made from an EC motor and a ball screw, as well as a perception-oriented approach to haptic transparency intended to lower the technical requirements on control and component design.

The control of technical systems aims at safe and reliable system behavior and controllable system states. The depiction as a *system* puts the analysis on an abstract level that allows covering many different technical systems described by their fundamental physics. On this abstract level, a general analysis of closed-loop control issues is possible using several methods and techniques, and the resulting procedures are applicable to a large number of system classes. The main purpose of any depiction and analysis of control systems is to achieve high performance, safe system behavior and reliable processes. Of course, this also holds for haptic systems; here, stable system behavior and high transparency are the most important control law design goals. The abstract description that shall be used for a closed loop con-

A. Abbasimoshaei (B)

Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany e-mail: al.abbasimoshaei@tuhh.de

T. Opitz

Landgraf-Philipps-Anlage 60a, 64283 Darmstadt, Germany e-mail: thomas.opitz@opel.com

O. Meckel

Landgraf-Philipps-Anlage 60a, Edit Stein Str. 14, 97957 Wittighausen, Germany e-mail: oliver.meckel@freenet.de

trol analysis starts with the mathematical formulation of the physical principles the system follows. As mentioned above, systems with different physical principles are covered by similar mathematical methods. The depiction by differential equations, or systems of differential equations, proves widely usable for formulating various system behaviors. Analogies allow transferring such behavior into the context of a different technical system, provided that there is a definite formulation of the system states of interest for the closed-loop control analysis. The mathematical formulation of the physical principles of the system, also denoted as modeling, is followed by the system analysis, including the dynamic behavior and its characteristics. With this knowledge, a wide variety of design methods for control systems becomes applicable. Their main requirements are:


Besides the quality of the control result in tracking the demanded values, the system behavior during deviations from these demanded values is of interest, as is the control effort needed to achieve a certain control result. The major challenge of closed-loop control law design, for haptic systems as for other engineering disciplines, is dealing with goals that are often in conflict with each other. Typically, the obtained solution is not an optimal one but a tradeoff between system requirements. In the following, Sect. 7.1 provides basic knowledge of linear and non-linear system description. Section 7.2 gives a short overview of system stability analysis. A recommendation for structuring the control law design process for haptic systems is given in Sect. 7.3. Subsequently, Sect. 7.4 focuses on common system descriptions for haptic systems and shows methods for designing control laws. Section 7.5 closes with a conclusion.

## **7.1 System Description**

A variety of description methods can be applied for the mathematical formulation of systems with different physical principles. One of the main distinctions is drawn between methods for the description of linear and nonlinear systems, summarized in the following paragraphs. The description based on Single-Input-Single-Output-Systems (SISO) in the Laplace domain was already discussed in Sect. 4.3.

## *7.1.1 Linear State Space Description*

Besides the formulation of system characteristics through transfer functions, the description of systems using the state space representation in the time domain allows dealing with arbitrary linear systems, too. For Single-Input-Single-Output (SISO) systems, a description using an *n*th order ordinary differential equation can be transformed into a set of *n* first order ordinary differential equations. In addition to the simplified use of numerical algorithms for solving this set of differential equations, the major advantage is the applicability to Multi-Input-Multi-Output (MIMO) systems. A correct and systematic model of their coupled system inputs, system states, and system outputs is comparably easy to achieve. In contrast to the system description in the Laplace domain by transfer functions *G*(*s*), the state space representation formulates the system behavior in the time domain. Two sets of equations are necessary for a complete state space system representation. These are denoted as the *system equation*

$$
\dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u} \tag{7.1}
$$

and the *output equation*

$$\mathbf{y} = \mathbf{C}\mathbf{x} + \mathbf{D}\mathbf{u}.\tag{7.2}$$

The vectors **u** and **y** describe the multidimensional system input respectively system output. Vector **x** denotes the inner system states.

As an example of the state space representation, the 2nd order mechanical oscillator shown in Fig. 7.1 is examined. Assuming time invariant parameters, the description by a 2nd order differential equation is:

$$m\ddot{y} + d\dot{y} + ky = u \tag{7.3}$$

**Fig. 7.1** Second order oscillator **a** scheme, **b** block diagram

The transformation of the 2nd order differential Eq. (7.3) into a set of two 1st order differential equations is done by choosing the integrator outputs as system states:

$$\begin{aligned} x_1 &= y \Rightarrow \dot{x}_1 = x_2\\ x_2 &= \dot{y} \Rightarrow \dot{x}_2 = -\frac{k}{m}x_1 - \frac{d}{m}x_2 + \frac{1}{m}u \end{aligned} \tag{7.4}$$

Thus the system equation for the state space representation is as follows:

$$
\begin{bmatrix}
\dot{x}_1 \\
\dot{x}_2
\end{bmatrix} = \begin{bmatrix}
0 & 1 \\
-\frac{k}{m} & -\frac{d}{m}
\end{bmatrix} \begin{bmatrix}
x_1 \\
x_2
\end{bmatrix} + \begin{bmatrix}
0 \\
\frac{1}{m}
\end{bmatrix} u \tag{7.5}
$$

The general form of the system equation is:

$$
\dot{\mathbf{x}} = \mathbf{A}\,\mathbf{x} + \mathbf{B}\,\mathbf{u} \tag{7.6}
$$

This set of equations contains the *state space vector* **x**. Its components describe all inner variables of the process that are of interest and that have not been examined explicitly in a formulation by transfer functions. The system output is described by the output equation. In the given example as shown in Fig. 7.1, the system output *y* is equal to the inner state $x_1$

$$y = x_1 \tag{7.7}$$

which leads to the vector representation of

$$\mathbf{y} = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \tag{7.8}$$

The general form of the output equation is:

$$\mathbf{y} = \mathbf{C}\,\mathbf{x} + \mathbf{D}\,\mathbf{u} \tag{7.9}$$

which leads to the general state space representation that is applicable to single as well as multi input and output systems. The structure of this representation is depicted in Fig. 7.2. Although not present in this example, matrix **D** denotes a direct feedthrough, which occurs in systems whose output signals *y* are directly affected by the input signals *u* without any time delay. Such systems show a non-delayed step response. For further explanation of **A**, **B**, **C** and **D**, [38] is recommended. Note that in many teleoperation applications, where long distances between master device and slave device exist, significant time delays occur.

**Fig. 7.2** State space description
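The oscillator of Eqs. (7.3)–(7.8) can be simulated directly from its state space matrices. The following sketch, with hypothetical parameter values for *m*, *d*, and *k*, integrates the system equation with a simple explicit Euler scheme; for a constant input, the output settles at the static gain *u*/*k*:

```python
import numpy as np

# Hypothetical parameters for the oscillator m*y'' + d*y' + k*y = u
m, d, k = 1.0, 0.5, 2.0

# State space matrices from Eqs. (7.5) and (7.8); D = 0 (no feedthrough)
A = np.array([[0.0, 1.0],
              [-k / m, -d / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])

def simulate(u, dt=1e-3, t_end=30.0):
    """Integrate x' = A x + B u with explicit Euler, return the output y."""
    x = np.zeros((2, 1))
    for _ in range(int(t_end / dt)):
        x = x + dt * (A @ x + B * u)
    return float(C @ x)

# A constant input should settle at the static gain y = u / k
y_final = simulate(u=1.0)
```

With the values above, `y_final` approaches 1.0/2.0 = 0.5, which matches setting $\dot{\mathbf{x}} = \mathbf{0}$ in Eq. (7.1).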

## *7.1.2 Nonlinear System Description*

A further challenge in the formulation of system behavior is the inclusion of nonlinear effects, especially if a subsequent system analysis and classification is needed. Although a mathematical description of nonlinear system behavior might be found quickly, the applicability of certain control design methods is an additional problem. Static non-linearities can easily be described by a serial coupling of a static nonlinearity and a linear dynamic element, to be used as a combined element for closed loop analysis. Here, two different models are distinguished. Figure 7.3 shows the block diagram consisting of a linear element with arbitrary subsystem dynamics followed by a static non-linearity.

This configuration also known as Wiener-model is described by

$$
\begin{aligned}
\tilde{u}(s) &= \underline{G}(s) \cdot u(s), \\
\mathbf{y}(s) &= f(\tilde{u}(s)).
\end{aligned}
$$

In comparison, Fig. 7.4 shows the configuration of the *Hammerstein model*, changing the order of the underlying static non-linearity and the linear dynamic subsystem.

The corresponding mathematical formulation of this model is described by

$$
\begin{aligned}
\tilde{u}(s) &= f(u(s)) \\
\mathbf{y}(s) &= \underline{G}(s) \cdot \tilde{u}(s).
\end{aligned}
$$
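The two model structures can be contrasted in a short simulation sketch. The code below is illustrative only: the first-order linear element and the saturation chosen for *f*(.) are hypothetical stand-ins for the blocks in Figs. 7.3 and 7.4. Both configurations share the same steady state, but their transients differ:

```python
import numpy as np

def f(v):
    # Hypothetical static non-linearity: a symmetric saturation
    return np.clip(v, -1.0, 1.0)

def linear_step(y_prev, u, a=0.9):
    # Hypothetical first-order linear element: y[k] = a*y[k-1] + (1-a)*u[k]
    return a * y_prev + (1.0 - a) * u

def wiener(us):
    """Wiener model: linear dynamics first, then the static non-linearity."""
    v, ys = 0.0, []
    for u in us:
        v = linear_step(v, u)
        ys.append(f(v))
    return np.array(ys)

def hammerstein(us):
    """Hammerstein model: static non-linearity first, then the dynamics."""
    y, ys = 0.0, []
    for u in us:
        y = linear_step(y, f(u))
        ys.append(y)
    return np.array(ys)

us = 2.0 * np.ones(200)   # large constant input drives f into saturation
yw, yh = wiener(us), hammerstein(us)
```

Both outputs converge to the saturation limit, yet the first samples already differ, which is why the two structures have to be identified separately.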

More complex structures appear as soon as the dynamic behavior of a system is affected by non-linearities. As an example, Fig. 7.5 shows a system with an internal saturation. For this configuration, both models cannot be applied as easily as for static non-linearities, in particular if a system description is needed that is usable for certain methods of system analysis and investigation.

Typical examples for systems showing that kind of nonlinear behavior are electrical motors whose torque current characteristic is affected by saturation effects, and thus whose torque available for acceleration is limited to a maximum value.

This kind of system behavior is one example of how complicated the process of system modeling may become, as ordinary linear system description methods are not applicable to such a case. Nevertheless, it is necessary to obtain a system formulation in which the system behavior and the system stability can be investigated successfully. To achieve a system description taking various system non-linearities into account, it is recommended to set up a nonlinear state space description, which offers a wide set of tools applicable to the subsequent investigations. Derived from Eqs. (7.1) and (7.2), the nonlinear system description for single as well as multi input and output systems is as follows:

$$\begin{aligned} \dot{\mathbf{x}} &= \mathbf{f}(\mathbf{x}, \mathbf{u}, t) \\ \mathbf{y} &= \mathbf{g}(\mathbf{x}, \mathbf{u}, t) \end{aligned}$$

This state space description is the most flexible way to obtain a usable mathematical formulation of a system's behavior consisting of static, dynamic, and arbitrarily coupled non-linearities. In the following, these equations serve as a basis for the examples illustrating concepts of stability and control.

#### **7.1.2.1 Common Nonlinearities in Control Systems**

In general, a control system can be divided into four parts—plant, actuators, sensors, and controller—as shown in Fig. 7.6. Any of these units can be linear or nonlinear.

Due to centripetal and Coriolis forces, the plant or the physical robot is usually nonlinear. As this type of nonlinearity is continuous, it can be locally approximated by a linear function. In many applications, since the operation range is small, this linearized model is effective and almost accurate.

On the other hand, some nonlinearities (hard nonlinearities) are discontinuous or hard to approximate. Regardless of the operation range, the magnitude and level of their effect on the system's performance define whether they have to be considered or not. In the following, some of the common nonlinearities will be discussed.

#### **Saturation**

In linear control, it is assumed that increasing the input to a device results in a proportional increase of the output. In real systems, however, this holds only in part. For small inputs, the corresponding output is almost proportional, but when the input increases beyond a certain level, the output no longer increases proportionally or may not increase at all. In other words, the output stays around a maximum value, and the device is said to be in saturation. Saturation is normally due to the physical limits of the device. For example, the properties of the magnet in a DC motor set the limit of its output torque, the supply voltage limits the output of an operational amplifier, and the length of a spring defines its force limit. The typical real saturation nonlinearity and the ideal saturation function are depicted in Fig. 7.7.

Since the saturation nonlinearity does not change the phase of the input, one can consider it as a variable gain that decreases when saturation occurs. The exact effect of saturation on the system performance is rather complicated. For a system that is unstable in the linear range, saturation can limit the system signals, suppress their divergence, and result in a sustained oscillation. However, it can slow down a linearly stable system, since it acts as a gain that decreases as the input increases.

#### **Dead-zone**

Many practical devices do not respond to inputs below a certain level; only when the input's value exceeds a threshold is there an output. The dead-zone nonlinearity is shown in Fig. 7.8.

One common example is a diode. This electronic element does not pass any current if the input voltage is below its threshold (cut-in voltage), so the output current is almost zero; if the voltage increases, the diode behaves like an ohmic resistance. Another example is a DC motor that does not rotate until the input voltage exceeds a minimum level and the produced torque becomes bigger than the static friction on the motor's shaft.

Some possible effects of the dead-zone in a control system are reducing the positioning accuracy, introducing a limit cycle, leading to instability due to zero response in the dead-zone, and reducing chattering of an ideal relay.
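The ideal saturation and dead-zone characteristics of Figs. 7.7 and 7.8 can be stated compactly; the limit and threshold values below are arbitrary defaults chosen for illustration:

```python
import numpy as np

def saturation(u, limit=1.0):
    """Ideal saturation: proportional inside the limits, clipped outside."""
    return np.clip(u, -limit, limit)

def dead_zone(u, threshold=0.2):
    """Ideal dead-zone: zero output below the threshold, shifted linear above."""
    return np.where(np.abs(u) <= threshold,
                    0.0,
                    u - np.sign(u) * threshold)
```

Both functions also accept arrays, so they can serve as the static block *f*(.) in a Wiener or Hammerstein configuration.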

#### **Backlash**

The clearance of mechanical gears or transmission systems results in zero output over a certain range of input (the gap) when the direction of movement is reversed. Consider the gear shown in Fig. 7.9: due to several reasons, such as rapid working and unavoidable manufacturing error, there exists backlash.

#### 7 Control of Haptic Systems 211

**Fig. 7.9** Backlash in gear and the input-output relation

When the rotating direction of the driving gear changes, the driven gear does not rotate at all until the driving gear makes contact with it again. During this period, the rotation of the driven gear is zero. After the establishment of contact, the driven gear will follow the rotation of the driver. Consequently, if the driver performs a periodic rotation, the driven gear's rotation will follow a closed path as shown in Fig. 7.9.

The most important characteristic of backlash is its multi-valued nature: the output depends both on the current input value and on its past values. Due to multi-valued nonlinearities like backlash, the system stores energy, which can lead to chattering, sustained oscillation, or even instability.

#### **Relay or on-off nonlinearity**

Consider a saturation with zero linearity range and vertical slope; this is called an ideal relay, where the output can be maximum positive, off, or maximum negative (Fig. 7.10).

**Fig. 7.12** Relay output: **a** no dead-zone **b** significant dead-zone

An example is the temperature control of a domestic heating system using a thermostat. The heating system turns on whenever the temperature is below the set-point and turns off when it is above it. Because of its discontinuous nature, the system will oscillate or chatter around the set-point with high frequency. To reduce the chattering frequency, practical relays have a definite amount of dead-zone, as shown in Fig. 7.11.

Because a larger input is needed to close a relay with dead-zone, a relay can perform as shown in Fig. 7.12, depending on its dead-zone range.


#### **Friction**

When two mechanical surfaces are sliding or trying to slide, a friction force acts in the direction opposite to the motion. The special case is static or Coulomb friction. Considering the relative velocity between the two surfaces as the input, the resulting force as the output is shown in Fig. 7.13.

In practice, where stiction and viscous damping commonly exist as well, the output can be depicted as in Fig. 7.14. As shown in this figure, the stiction force is bigger than the Coulomb force, which makes the total friction a complex nonlinearity.

Dealing with these nonlinearities requires a more sophisticated controller design; two well-known and highly efficient techniques are adaptive control and Sliding Mode Control.

#### **7.1.2.2 Adaptive and Sliding Mode Control (SMC) for Controlling Nonlinearities**

Almost all modeled systems contain uncertainties, due to intended simplifications such as unmodeled high-order dynamics or the linearization of a nonlinear phenomenon, or due to inaccuracy of the system's parameters. Neglecting these uncertainties has an adverse effect on the control system; hence, they should be considered in the controller design. Since the performance of linear controllers is limited, for example by the waterbed effect, nonlinearities need to be dealt with by nonlinear controllers. Two well-known and effective approaches to handle nonlinearity and uncertainty are sliding mode control (SMC) and adaptive control. These two methods will be discussed in the following sections.

#### **Sliding Mode Control**

Sliding Mode Control (SMC) is a nonlinear control technique, which presents desirable characteristics such as accuracy, robustness, and fast dynamic response. The design of this controller is done in two parts:


This design procedure brings two main advantages: the possibility of a tailored dynamic response, and robustness to nonlinearity, uncertainty, and disturbance. In other words, SMC is capable of controlling a nonlinear process subject to external disturbance and model uncertainty. For designing the SMC, the system model can be considered as a nonlinear SISO system as follows:

$$
\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, t) + \mathbf{b}(\mathbf{x}, t)\,u \tag{7.10}
$$

$$\mathbf{y} = \mathbf{g}(\mathbf{x}, \mathbf{t})\tag{7.11}$$

where *u* is the scalar input, *y* is the scalar output, and **x** ∈ *R<sup>n</sup>* is the state vector. The ideal controller is one for which *y* tracks *y<sub>d</sub>* (the desired output) and the tracking error (*e* = *y<sub>d</sub>* − *y*) tends to a small vicinity of zero after a finite time (transient response).

To design an SMC, the first step is defining the sliding surface $\sigma(t)$ in such a way that zero error results in $\sigma(t) = 0$, and $\sigma(t)\dot{\sigma}(t) < 0$ holds for the rest of the time. A common form of $\sigma(t)$, which depends on only one parameter, is:

$$
\sigma(t) = \left(\frac{d}{dt} + \lambda\right)^{n-1} e(t) \tag{7.12}
$$

where λ > 0 is a constant and *n* is the system order. For example, in the case of *n* = 3, the sliding surface is:

$$
\sigma = \ddot{e} + 2\lambda \dot{e} + \lambda^2 e \tag{7.13}
$$

The second step is defining a control law that steers the system's states onto the sliding surface, i.e. that makes σ = 0 in finite time. There are several approaches for defining the control law; the two most common ones, the standard (first-order) control law and the second-order one, will be discussed in the next sections. Independent of the selected approach, SMC allows designing the controller based on an estimate of the original system's dynamics.

#### *First-order SMC*

The following formula is one of the simplest SMC controller models. In this model, the control input is a discontinuous function of σ:


**Fig. 7.15** Typical time response of σ variable

$$u = -U\text{sgn}(\sigma)\tag{7.14}$$

where sgn(.) is the signum function and *U* > 0 is a sufficiently large constant. Therefore, the control signal is:

$$u = \begin{cases} U & \sigma < 0\\ -U & \sigma > 0 \end{cases} \tag{7.15}$$

As a result, the σ variable typically evolves as shown in Fig. 7.15. As can be seen, the system exhibits high-frequency chattering in a small vicinity of the desired surface rather than sliding on it. This high-frequency switching can cause oscillations, especially in the control of mechanical systems.

Since this chattering phenomenon is caused by the discontinuous sign function, a smoothed continuous approximation of it can be rather effective. Two common examples are:

$$\text{sat:} \quad u = -U\frac{\sigma}{|\sigma| + \varepsilon} \quad \varepsilon > 0 \ \& \ \varepsilon \approx 0 \tag{7.16}$$

$$
\text{tanh:} \quad u = -U\tanh\left(\frac{\sigma}{\varepsilon}\right) \quad \varepsilon > 0 \ \& \ \varepsilon \approx 0 \tag{7.17}
$$

A comparison of the smoothed saturation and the sign function is depicted in Fig. 7.16.

However, smoothing the sign function will result in increasing the tracking error and decreasing the robustness. Another solution could be the usage of the higher-order SMC.

**Fig. 7.16** Comparison of sign function and its alternative, smoothed saturation
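A minimal simulation sketch of the first-order law of Eq. (7.14), applied to a hypothetical double integrator plant, illustrates the effect of replacing the sign function with its smoothed tanh approximation; the gains, target, and boundary layer width are assumed values:

```python
import numpy as np

def run_smc(switch, U=5.0, lam=2.0, dt=1e-3, t_end=5.0):
    """First-order SMC for a double integrator x'' = u, target x_d = 1.
    `switch` is the (possibly smoothed) switching function of sigma."""
    x, v, x_d = 0.0, 0.0, 1.0
    for _ in range(int(t_end / dt)):
        e, e_dot = x - x_d, v        # tracking error (sign convention e = x - x_d)
        sigma = e_dot + lam * e      # sliding surface, Eq. (7.12) with n = 2
        u = -U * switch(sigma)       # control law, Eq. (7.14)
        v += dt * u
        x += dt * v
    return x

x_sign = run_smc(np.sign)                       # discontinuous: chatters
x_tanh = run_smc(lambda s: np.tanh(s / 0.05))   # smoothed: continuous control
```

Both variants settle near the target; plotting `u` over time for the two cases would reproduce the chattering versus smooth-control contrast of Fig. 7.16.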

#### *Second-order SMC*

Second-order SMC is capable of completely eliminating the chattering phenomenon without sacrificing robustness. The first-order SMC steers the system's states such that σ(*t*) = 0 when the error is zero, while the second-order SMC also forces the derivative of σ(*t*) to zero. There exist many well-known functions to generate a second-order sliding mode law, such as the integral operation sliding surface, the PID surface, and the super-twisting algorithm. As an example, the super-twisting second-order SMC can be defined as:

$$\begin{cases} u = -V\sqrt{|\sigma|}\,\text{sgn}(\sigma) + w \\ \dot{w} = -W\,\text{sgn}(\sigma) \end{cases} \tag{7.18}$$

An effective tuning guide for the parameters are:

$$V = 1.5\sqrt{U} \quad W = 1.1U \tag{7.19}$$

where *U* > 0 is a constant that should be taken sufficiently large. Consider the comparison of the linear PI controller and the super-twisting second-order SMC depicted in Fig. 7.17; this algorithm can be seen as a nonlinear PI controller. It is obvious that the control signal produced by the second-order SMC is continuous; therefore, the system performs without chattering.
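A sketch of the super-twisting law of Eq. (7.18), applied to a hypothetical first-order sliding variable dynamics with a bounded disturbance, shows that σ is driven to zero by a continuous control signal; the disturbance and gain *U* are assumed values:

```python
import numpy as np

def super_twisting(U=1.0, dt=1e-4, t_end=10.0):
    """Super-twisting SMC on sigma' = a(t) + u with a bounded disturbance a(t);
    gains V = 1.5*sqrt(U), W = 1.1*U."""
    V, W = 1.5 * np.sqrt(U), 1.1 * U
    sigma, w, t = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = 0.5 * np.sin(t)                                  # disturbance
        u = -V * np.sqrt(abs(sigma)) * np.sign(sigma) + w    # Eq. (7.18)
        w += dt * (-W * np.sign(sigma))
        sigma += dt * (a + u)
        t += dt
    return sigma

sigma_end = super_twisting()
```

The integral term `w` learns to cancel the disturbance, so σ reaches a small vicinity of zero despite the perturbation, without the high-frequency switching of the first-order law.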

#### **Adaptive Control**

Another approach to the control of a nonlinear system with uncertainty that can improve the system output is the adaptive control method. The basis of this approach is estimating the system's parameters or uncertainties based on measured signals of the system. Therefore, adaptive control lies within the field of nonlinear control.

This method is useful for a system experiencing a wide range of parameter changes, such as a robotic manipulator designed to manipulate loads of various weights.

**Fig. 7.17** Block diagram of a linear PI controller and the super-twisting SMC

Adaptive control is mainly used in systems where nonlinearity exists or where variation and uncertainty of the parameters are inevitable. The most important requirement of adaptive control is that the parameter adaptation is performed significantly faster than the change of the system parameters. In practice, this requirement is usually fulfilled, since a rapid change of a parameter means that the modeling is incomplete and this dynamic behavior should be considered in the model.

There exists another method for handling nonlinearity and uncertainty: the robust control method. Although both methods deal with nonlinearity, there are some differences. In the case of slowly varying parameters, the performance of adaptive control is significantly better than that of robust control, because adaptive control estimates the varying parameters and redesigns the controller according to these changes. Thus, its performance improves over time, while the robust control method is conservative, with consistent performance. Moreover, robust control requires an estimate of the nonlinearity or uncertainty, while adaptive control can be designed with little or no prior estimation. On the other hand, compared to adaptive control, robust control is capable of dealing with disturbances, fast varying parameters, and unmodeled dynamics. Therefore, a combination of the two methods can be a good solution, especially when an external part such as a rehabilitation system is involved [1].

As mentioned, the superiority of adaptive control is that the controller learns and adjusts its parameters to enhance the tracking performance. There exist two main methods for this learning and adjustment process: model-reference adaptive control (MRAC) and the self-tuning controller (STC). In this book, a brief explanation of these methods is presented to provide an overview of the tools that can be used in the field of haptics.

#### *Model-Reference Adaptive Control*

In this method, it is assumed that the structure of the plant's model is known, but some parameters are unknown. A reference model defines the ideal response of the system and the adaptation law adjusts the controller parameters to respond like the reference model (Fig. 7.18).

The reference model should fulfill the expected performance of the system in both time and frequency domain characteristics. Furthermore, by considering the known structure of the plant, its order, and its relative degree, the expected performance can be achieved. In addition, the designed controller should be capable of providing the reference model's performance when the plant's model is exactly known.

**Fig. 7.18** Model-Reference Adaptive Control structure

**Fig. 7.19** Self-Tuning Controller structure
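The MRAC idea can be sketched with the classic MIT rule on a hypothetical first-order plant with one unknown gain; the plant, reference model, and adaptation gain below are illustrative assumptions, not the design from [1]:

```python
import numpy as np

def mrac(k_true=2.0, gamma=0.5, dt=1e-3, t_end=40.0):
    """MIT-rule MRAC for the plant y' = -y + k*u with unknown gain k.
    Reference model: ym' = -ym + r. Control: u = theta * r."""
    y, ym, theta = 0.0, 0.0, 0.0
    r = 1.0                                # constant reference input
    for _ in range(int(t_end / dt)):
        u = theta * r                      # adjustable feedforward gain
        e = y - ym                         # model-following error
        theta += dt * (-gamma * e * ym)    # MIT adaptation law (ym as
                                           # proxy for the error sensitivity)
        y += dt * (-y + k_true * u)
        ym += dt * (-ym + r)
    return theta, y, ym

theta, y, ym = mrac()
```

The adaptation drives `theta` toward 1/`k_true`, so the unknown plant gain is compensated and the plant output converges to the reference model output.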

#### *Self-Tuning Controller (STC)*

In the pole-placement method, the controller is designed based on the plant's parameters; these parameters can be estimated using the input-output data of the plant (Fig. 7.19). The controller parameters are then updated to control the estimated plant.

The adaptation process in this method is different from the MRAC method. MRAC tries to adjust the controller parameters to make the system's response as close as possible to the reference model. However, STC estimates the plant's parameters and adjusts the controller's parameters based on the estimated plant.

Here, the procedure of designing an adaptive controller is explained through an example. In [1], an adaptive law was designed for a sliding mode controlled wearable hand rehabilitation robot to overcome the stiffness variation of the patients' hands. Using a Lyapunov function, not only is the stability of the system guaranteed, but the adaptive law is also derived. The Lyapunov function was considered as:


$$V = \frac{1}{2}\sigma^2 + \frac{1}{2}\widetilde{F}^2\tag{7.20}$$

where σ is the sliding surface and $\widetilde{F} = F_{int} - \hat{F}_{int}$ is the estimation error of the user's interaction force. Assuming that *F<sub>int</sub>* changes slowly, the adaptive law and the adaptive controller equation were derived based on the stability criterion of the Lyapunov method, i.e. $\dot{V} < 0$.

## **7.2 System Stability**

As mentioned above, one of the most important goals of control design is the stabilization of systems or processes during their life cycle, whether operational or disabled. Due to the close coupling of haptic systems to a human user via a human-machine interface, safety becomes most relevant. Consequently, the focus of this chapter lies on system stability and its analysis using methods applicable to many systems. The system description has to represent the system's behavior correctly and has to be aligned with the applied investigation technique. For the investigation of systems, subsystems, closed loop systems, and single or multi input output systems, a wide variety of methods exists. The most important ones are introduced in this chapter.

## *7.2.1 Analysis of Linear System Stability*

The stability analysis of linear time invariant systems is easily done by investigating the system poles, i.e. the roots derived from the eigenvalue calculation of the system transfer function *G*(*s*). The decisive factor is the sign of the real part of these system poles: a negative sign indicates a stable eigenvalue; a positive sign denotes an unstable eigenvalue. The correspondence to system stability becomes obvious when looking at the homogeneous part of the solution of the ordinary differential equation describing the system behavior. As an example, a system shall be described by

$$T\dot{y}(t) + y(t) = K u(t). \tag{7.21}$$

The homogeneous part of the solution *y*(*t*) is derived using

$$\mathbf{y}\_h = e^{\lambda t} \quad \text{with } \lambda = -\frac{1}{T}. \tag{7.22}$$

As can clearly be seen, the pole $\lambda = -\frac{1}{T}$ has a negative sign only if the time constant *T* has a positive sign. In this case, the homogeneous part of *y*(*t*) disappears for *t* → ∞, while it grows exponentially beyond any limit if the pole $\lambda = -\frac{1}{T}$ is unstable. This section will not deal with the basic theoretical background of linear system stability, as this belongs to the basics of control theory. The focus of this section is the application of certain stability analysis methods. A distinction is made between methods for the direct stability analysis of a system or subsystem and techniques for closed loop stability analysis. For the direct stability analysis of a linear system, the investigation of the pole placement in the complex plane is fundamental. Besides the explicit calculation of the system poles or eigenvalues, the Routh-Hurwitz *criterion* allows determining the system stability and the system pole placement without explicit calculation, which in many cases simplifies the stability analysis. For the analysis of closed loop stability, the determination of the closed loop pole placement is also a possible approach. Additional methods leave room for further design aspects and extend the basic stability analysis. Well-known examples of such techniques are the root locus method and the Nyquist stability criterion.


The applicability of both methods will be discussed in the following, without looking at their exact derivation.
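The direct pole-based analysis can be sketched numerically: the helper below simply checks the signs of the real parts of the roots of a characteristic polynomial, here for the first-order example of Eq. (7.21):

```python
import numpy as np

def is_stable(denominator):
    """A linear time invariant system is asymptotically stable iff all poles
    (roots of the characteristic polynomial) have a negative real part."""
    poles = np.roots(denominator)
    return bool(np.all(poles.real < 0))

# First-order system from Eq. (7.21): T*y' + y = K*u  ->  pole at -1/T
stable_pos_T = is_stable([2.0, 1.0])    # T = 2:  pole -0.5, stable
stable_neg_T = is_stable([-2.0, 1.0])   # T = -2: pole +0.5, unstable
```

This reproduces the statement above: a positive time constant *T* yields a stable pole, a negative one an unstable pole.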

#### **7.2.1.1 Root Locus Method**

The root locus method offers the opportunity to investigate the pole placement in the complex plane depending on the variation of certain system parameters, for example changing time constants or variable system gains. Within the root locus method for closed loop stability analysis and control design, the gain of the open loop is often of interest. In Eq. (7.23), *G<sub>R</sub>* denotes the transfer function of the controller and *G<sub>S</sub>* describes the behavior of the system to be controlled.

$$-G\_o = G\_R G\_S \tag{7.23}$$

Using the root locus method, it is possible to apply predefined sketching rules whenever the dependency of the closed loop pole placement on the open loop gain *K* is of interest. The closed loop transfer function *G<sub>g</sub>* is given by Eq. (7.24)

$$G\_g = \frac{G\_R G\_S}{1 + G\_R G\_S} \tag{7.24}$$

As an example an integrator system with a second order delay (IT2) described by Eq. (7.25)

$$G\_S = \frac{1}{s} \cdot \frac{1}{1+s} \cdot \frac{1}{1+4s} \tag{7.25}$$

is examined. The controller transfer function is *G<sub>R</sub>* = *K<sub>R</sub>*. Thus we find the open loop transfer function

$$-G\_o = G\_R G\_S = \frac{K\_R}{s(1+s)(1+4s)}.\tag{7.26}$$

#### 7 Control of Haptic Systems 221

Using the sketching rules, which can be found in the literature [37, 48], the root locus graph shown in Fig. 7.20 is derived. The graph indicates that small gains *K<sub>R</sub>* lead to a stable closed loop system, since all roots have a negative real part. A rising *K<sub>R</sub>* leads to two of the roots crossing the imaginary axis, and the closed loop system becomes unstable. This simplified example shows that the method can easily be integrated into a control design process, as it delivers a stability analysis of the closed loop system while only requiring an examination of the open loop system. This is also one of the advantages of the Nyquist stability criterion, for which the definition of the open loop system likewise suffices to derive a stability analysis of the closed loop arrangement.
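The root locus result can be checked numerically: from Eq. (7.26), the closed loop characteristic polynomial is $s(1+s)(1+4s) + K_R = 4s^3 + 5s^2 + s + K_R$, and by the Routh-Hurwitz criterion the loop is stable for $0 < K_R < 1.25$. Sweeping the gain reveals this boundary (the gain values below are illustrative):

```python
import numpy as np

def closed_loop_poles(K):
    """Closed loop poles of G_o = K / (s (1+s)(1+4s)):
    characteristic polynomial 4 s^3 + 5 s^2 + s + K = 0."""
    return np.roots([4.0, 5.0, 1.0, K])

def is_stable(K):
    return bool(np.all(closed_loop_poles(K).real < 0))

# Small gain keeps all roots in the left half plane; a large gain pushes
# a complex pole pair across the imaginary axis, as on the root locus.
stable_small = is_stable(0.5)
stable_large = is_stable(2.0)
```

Evaluating `closed_loop_poles` over a fine grid of gains traces out the root locus of Fig. 7.20 point by point.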

#### **7.2.1.2 NYQUIST's Stability Criterion**

This section will concentrate on the simplified Nyquist stability criterion investigating the open loop frequency response described by

$$-G_o(j\omega) = G_R(j\omega)\, G_S(j\omega)\,.$$

The Nyquist stability criterion is based on the characteristic correspondence between amplitude and phase of the frequency response. As an example, we use the already introduced IT2 system controlled by a proportional controller *G<sub>R</sub>* = *K<sub>R</sub>*. The Bode plot of the frequency response is shown in Fig. 7.21. The stability condition to be met is given by the phase of the open loop frequency response: ϕ(ω) > −180° wherever the amplitude *A*(ω) of the frequency response is above 0 dB. As shown in Fig. 7.21, the choice of the controller gain *K<sub>R</sub>* shifts the amplitude graph of the open loop frequency response vertically without affecting its phase. For most applications, the specific requirement of a sufficient phase margin ϕ*<sub>R</sub>* is compulsory. The resulting phase margin is also shown

**Fig. 7.21** IT2 frequency response

in Fig. 7.21. All such requirements have to be met in the closed control loop and must be determined in order to choose the correct control design method. In this simplified example, the examined amplitude and phase of the open loop frequency response depend on the proportional controller gain *K<sub>R</sub>*, which is sufficient to establish system stability including a certain phase margin. More complex control structures such as PI, PIDT*<sub>n</sub>*, or lead-lag extend the possibilities for control design to meet further requirements.

This section showed the basic principle of the simplified Nyquist criterion, which is applicable to stable open loop systems. For the investigation of unstable open loop systems, the general form of the Nyquist criterion must be used, which is not introduced in this book. For the basic background, it is recommended to consult [37, 48].
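The phase margin reasoning can be reproduced numerically for the IT2 open loop of Eq. (7.26): the sketch below locates the gain crossover on a frequency grid and evaluates the analytic phase there (the two gain values are illustrative):

```python
import numpy as np

def mag(w, K):
    """Open loop magnitude |G_o(jw)| for G_o = K / (jw (1+jw)(1+4jw))."""
    return K / (w * np.sqrt(1 + w**2) * np.sqrt(1 + 16 * w**2))

def phase_deg(w):
    """Open loop phase in degrees: integrator plus two first-order lags."""
    return -np.degrees(np.pi / 2 + np.arctan(w) + np.arctan(4 * w))

def phase_margin_deg(K):
    """Phase margin = 180 deg + phase at the gain crossover |G_o| = 1."""
    w = np.logspace(-3, 2, 200000)
    w_c = w[np.argmin(np.abs(mag(w, K) - 1.0))]   # gain crossover frequency
    return 180.0 + phase_deg(w_c)

pm_low = phase_margin_deg(0.2)   # small K_R: positive phase margin
pm_high = phase_margin_deg(2.0)  # large K_R: negative margin, unstable loop
```

Raising the gain shifts the crossover to higher frequencies, where the two lag terms have already eaten up the phase; the margin turns negative and the closed loop becomes unstable, in agreement with the root locus result.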

## *7.2.2 Analysis of Non-linear System Stability*

The application of all previous approaches for the analysis of system stability is limited to linear time invariant systems. However, nearly all real systems show nonlinear effects or consist of nonlinear subsystems. One approach to deal with these nonlinear systems is linearization at a fixed operating point: all further investigations are focused on this point, and the application of the previously presented methods becomes possible. If these methods are not sufficient, extended techniques for the stability analysis of nonlinear systems must be applied, examples of which represent completely different approaches:


Without dealing with the mathematical background or exact proofs, the principles and the application of selected techniques shall be demonstrated. A complete treatment of this topic would be too extensive here due to the wide variety of the underlying methods. For further detailed explanation, [18–20, 34, 45, 49] are recommended.

## **7.2.2.1 POPOV criterion**

As a preliminary example, the analysis of closed loop systems can be done by applying the Popov criterion or the circle criterion. Figure 7.22 shows the block diagram of the corresponding closed loop structure of the system to be analyzed.

The block diagram consists of a linear transfer function *G*(*s*) with arbitrary dynamics and a static non-linearity *f*(.). The state space formulation of *G*(*s*) is as follows:

$$\begin{aligned} \dot{\mathbf{x}} &= \mathbf{A}\mathbf{x} + \mathbf{B}\tilde{u} \\ \mathbf{y} &= \mathbf{C}\mathbf{x} \end{aligned}$$

Thus we find for the closed loop system description:

**Fig. 7.22** Nonlinear closed loop system

#### **Fig. 7.23** Sector condition

$$\begin{aligned} \dot{\mathbf{x}} &= \mathbf{A}\mathbf{x} - \mathbf{B}f(\mathbf{y}), \\ \mathbf{y} &= \mathbf{C}\mathbf{x}. \end{aligned}$$

In case that *f* (*y*) = *k* · *y*, this nonlinear system reduces to a linear system whose stability can be examined by evaluating the system's eigenvalues. For an arbitrary nonlinear function *f* (*y*) the complexity of the problem increases. As a first constraint, *f* (*y*) must lie within a sector that is limited by a straight line through the origin with gradient *k*. Figure 7.23 shows an example of such a nonlinear function *f* (*y*). This constraint is depicted by the following equation:

$$0 \le f(\mathbf{y}) \le k \mathbf{y}.$$

The Popov criterion provides an intuitive procedure for the stability analysis of the presented example. The idle state (**x**˙ = **x** = **0**) of the system is globally asymptotically stable if a real α and a ρ > 0 exist such that:


$$\forall \omega \ge 0 \quad \text{Re}[(1 + j\alpha\omega)\underline{G}(j\omega)] + \frac{1}{k} \ge \rho \tag{7.27}$$

Equation (7.27) formulates the condition also known as the Popov *inequality*. With

$$\underline{G}(j\omega) = \text{Re}(\underline{G}(j\omega)) + j\text{Im}(\underline{G}(j\omega))\tag{7.28}$$

#### **Fig. 7.24** Popov plot

Eq. (7.27) leads to

$$\operatorname{Re}(\underline{G}(j\omega)) - \alpha \omega \operatorname{Im}(\underline{G}(j\omega)) + \frac{1}{k} \ge \rho \tag{7.29}$$

With an additional definition of a related transfer function

$$\underline{G}^\*(j\omega) = \text{Re}(\underline{G}(j\omega)) + j\omega \text{Im}(\underline{G}(j\omega)) \tag{7.30}$$

Eq. (7.29) states that the plot of *G*∗ in the complex plane, the so-called Popov *plot*, has to be located in a sector whose upper limit is described by the straight line $y = \frac{1}{\alpha}\left(x + \frac{1}{k}\right)$. Figure 7.24 shows an example of the Popov plot of a system in the complex plane constrained by the sector condition. The close relation to the Nyquist criterion for the stability analysis of linear systems becomes quite obvious here: while the Nyquist criterion examines the plot of *G*(*j*ω) with respect to the critical point (−1, 0), the location of the Popov plot is checked against a sector condition defined by a straight-line limit.

The application of the Popov criterion has the distinct advantage that a stability result can be obtained without an exact formulation of the non-linearity within the system. All constraints on the nonlinear subsystem are restricted to the sector condition and the requirement of memoryless transfer behavior. The most complicated aspect of this kind of analysis is formulating the considered system structure in such a way that the Popov criterion can be applied. For completeness the circle criterion shall be mentioned, whose sector condition is not represented by a single straight line through the origin; instead,

$$k\_1 \le \frac{f(\mathbf{y})}{\mathbf{y}} \le k\_2.$$

defines the new sector condition. For additional explanation of these constraints and the application of the circle criterion, the reader is referred to [34, 45, 49].
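The Popov inequality (7.27) can be checked numerically on a frequency grid. In the following sketch the linear part *G*(*s*) = 1/((*s*+1)(*s*+2)), the sector bound *k* = 10 and α = 1 are assumptions chosen for illustration, not taken from the text:

```python
import numpy as np

# Popov inequality check (assumed example): linear part G(s) = 1/((s+1)(s+2)),
# static nonlinearity confined to the sector [0, k], Popov parameter alpha.
k, alpha = 10.0, 1.0

w = np.logspace(-3, 3, 10000)
G = 1.0 / ((1j * w + 1) * (1j * w + 2))

# Popov inequality: Re[(1 + j*alpha*w) * G(jw)] + 1/k > 0 for all w >= 0.
popov_lhs = np.real((1 + 1j * alpha * w) * G) + 1.0 / k
print("Popov inequality satisfied:", bool(np.all(popov_lhs > 0)))
```

For this plant and α = 1 the left-hand side reduces analytically to (2 + 2ω²)/|(2 − ω²) + 3jω|² + 1/*k*, which is positive for every ω, so the check succeeds for any sector bound *k* > 0.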

#### **7.2.2.2 LYAPUNOV's Direct Method**

As a second example for the stability analysis of nonlinear systems the direct method by Lyapunov is introduced. The basic principle is that if a stable system, linear or nonlinear, tends to a stable steady state, the total system energy has to be dissipated continuously. Thus it is possible to obtain a stability result by examining the characteristics of the function representing the energy stored in the system. Lyapunov's direct method generalizes this idea: an artificial scalar function is constructed that does not necessarily describe the physically stored energy of the considered dynamic system, but behaves like the energy function of a dissipative system. These functions are called Lyapunov functions *V*(*x*). For the examination of the system stability the already mentioned state space description of a nonlinear system is used:

$$\begin{aligned} \dot{\mathbf{x}} &= \mathbf{f}(\mathbf{x}, \mathbf{u}, t) \\ \mathbf{y} &= \mathbf{g}(\mathbf{x}, \mathbf{u}, t) .\end{aligned}$$

By the definition of Lyapunov's theorem the equilibrium at the phase plane origin **x**˙ = **x** = **0** is globally asymptotically stable if

- *V*(**x**) is positive definite, i.e. *V*(**0**) = 0 and *V*(**x**) > 0 for all **x** ≠ **0**,
- its time derivative is negative definite, i.e. *V*˙(**x**) < 0 for all **x** ≠ **0**,
- and *V*(**x**) is radially unbounded, i.e. *V*(**x**) → ∞ for ‖**x**‖ → ∞.

If these conditions are met in a bounded area at the origin only, the system is locally asymptotically stable.

As a clarifying example the following nonlinear first order system

$$
\dot{x} + f(x) = 0 \tag{7.31}
$$

is evaluated. Herein *f* (*x*) denotes any continuous function with the same sign as its scalar argument *x*, so that *x* · *f* (*x*) > 0 for *x* ≠ 0 and *f* (0) = 0. Applying these constraints, a Lyapunov function candidate can be found, described by

$$V = x^2.\tag{7.32}$$

The time derivative of *V*(*x*) provides

$$
\dot{V} = 2\mathbf{x}\dot{\mathbf{x}} = -2\mathbf{x}f(\mathbf{x}).\tag{7.33}
$$

Due to the assumed characteristics of *f* (*x*) all conditions of Lyapunov's direct method are satisfied, thus the system has a globally asymptotically stable equilibrium at the origin. Although the exact function *f* (*x*) is not known, the fact that it exists in the first and third quadrant only is sufficient for *V*˙(*x*) to be negative definite. As a second example a multi-input multi-output system is examined, depicted by its state space formulation

$$\begin{aligned} \dot{\mathbf{x}}\_1 &= \mathbf{x}\_2 - \mathbf{x}\_1 (\mathbf{x}\_1^2 + \mathbf{x}\_2^2) \\ \dot{\mathbf{x}}\_2 &= -\mathbf{x}\_1 - \mathbf{x}\_2 (\mathbf{x}\_1^2 + \mathbf{x}\_2^2) .\end{aligned}$$

In this example the system has an equilibrium at the origin too. Consequently the following Lyapunov function candidate can be found

$$V(\mathbf{x}\_1, \mathbf{x}\_2) = \mathbf{x}\_1^2 + \mathbf{x}\_2^2. \tag{7.34}$$

Thus the corresponding time derivative is

$$\dot{V}(\mathbf{x}\_1, \mathbf{x}\_2) = 2\mathbf{x}\_1\dot{\mathbf{x}}\_1 + 2\mathbf{x}\_2\dot{\mathbf{x}}\_2 = -2(\mathbf{x}\_1^2 + \mathbf{x}\_2^2)^2. \tag{7.35}$$

Hence *V*(*x*1, *x*2) is positive definite and *V*˙(*x*1, *x*2) is negative definite. Thus the equilibrium at the origin is globally asymptotically stable.
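The result of the second example can be verified numerically: integrating the system from an arbitrary initial state, *V* = *x*1² + *x*2² must decrease monotonically towards zero. A minimal sketch with Euler integration (the step size and initial state are assumptions for illustration):

```python
import numpy as np

# Numerical check of the second Lyapunov example: V = x1^2 + x2^2 should
# decrease monotonically along trajectories (Euler integration, small step).
def f(x):
    r2 = x[0]**2 + x[1]**2
    return np.array([x[1] - x[0] * r2, -x[0] - x[1] * r2])

x = np.array([1.0, -0.5])    # arbitrary initial state
dt, V_prev = 1e-3, x @ x
monotone = True
for _ in range(20000):       # 20 s of simulated time
    x = x + dt * f(x)
    V = x @ x
    if V > V_prev + 1e-12:
        monotone = False
    V_prev = V

print("V decreased monotonically:", monotone)
```

Along the exact solution *V*˙ = −2*V*², so *V*(*t*) = *V*(0)/(1 + 2*V*(0)*t*); after 20 s the simulated value lies close to this slow algebraic decay.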

A quite difficult aspect of using Lyapunov's direct method is finding Lyapunov function candidates. No straightforward algorithm with a guaranteed solution exists, which is a big disadvantage of this method. Slotine [45] proposes several structured approaches to obtain Lyapunov function candidates, namely

- Krasovskii's method and
- the variable gradient method.

Besides these Slotine provides additional possibilities to involve the system's physical principles in the procedure for the determining of Lyapunov function candidates while analyzing more complex nonlinear dynamic systems.

#### **7.2.2.3 Passivity in Dynamic Systems**

As another method for the stability analysis of dynamic systems, the passivity formalism is introduced in this subsection. It extends the idea of Lyapunov's direct method, evaluating the dissipation of energy in dynamic systems, to combinations of systems. The passivity formalism is also based on nonlinear positive definite storage functions *V*(**x**) with *V*(**0**) = 0, representing the overall system energy. The time derivative of this energy determines the system's passivity. As an example the general formulation of a system

$$\begin{aligned} \dot{\mathbf{x}} &= \mathbf{f}(\mathbf{x}, \mathbf{u}, t) \\ \mathbf{y} &= \mathbf{g}(\mathbf{x}, \mathbf{u}, t) \end{aligned}$$

is considered. This system is passive with respect to the external supply rate *S* = **y***<sup>T</sup>* **u** if the inequality condition

$$\dot{V}(\mathbf{x}) \le \mathbf{y}^T \mathbf{u} \tag{7.36}$$

is satisfied. Khalil distinguishes several cases of system passivity depending on certain system characteristics (*Lossless, Input Strictly Passive, Output Strictly Passive, State Strictly Passive, Strictly Passive*) [34]. If a system is passive with respect to the *external supply rate S*, it is stable in the sense of Lyapunov.

A combination of passive systems in parallel or feedback structures inherits the passivity of its subsystems. Given the close relation of system passivity to stability in the sense of Lyapunov, the stability of the overall system can thus be examined by verifying the passivity of its subsystems: if all subsystems are passive and connected in such structures, the overall system is passive as well.

As an illustrating example the RLC circuit taken from [34] is analyzed in the following. The circuit structure is shown in Fig. 7.25.

The system's state vector is defined by

$$i\_L = x\_1$$

$$u\_C = x\_2.$$

The input *u* represents the supply voltage *U*; the current *i* is observed as output *y*. The resistors are described by their corresponding voltage–current characteristics:

$$\begin{aligned} i\_1 &= f\_1(u\_{R1}) \\ i\_3 &= f\_3(u\_{R3}) \end{aligned}$$

For the resistor which is coupled in series with the inductor the following behavior is assumed

$$U\_{R2} = f\_2(\mathbf{i}\_L) = f\_2(\mathbf{x}\_1). \tag{7.37}$$

Thus the nonlinear system is described by the differential equation:

#### 7 Control of Haptic Systems 229

$$\begin{aligned} L\dot{x}\_1 &= u - f\_2(x\_1) - x\_2\\ C\dot{x}\_2 &= x\_1 - f\_3(x\_2) \\ \mathbf{y} &= x\_1 + f\_1(u) \end{aligned}$$

The presented RLC circuit is passive as long as the condition

$$V(\mathbf{x}(t)) - V(\mathbf{x}(0)) \le \int\_0^t u(\tau)y(\tau)d\tau \tag{7.38}$$

is satisfied. In this example the energy stored in the system is described by the storage function

$$V(\mathbf{x}(t)) = \frac{1}{2}Lx\_1^2 + \frac{1}{2}Cx\_2^2. \tag{7.39}$$

Equation (7.38) leads to the condition for passivity:

$$\dot{V}(\mathbf{x}(t)) \le u(t)y(t) \tag{7.40}$$

which means that the time derivative of the energy function must not exceed the power supplied to the system. Using *V*(**x**) in the condition for passivity provides

$$\begin{aligned} \dot{V}(\mathbf{x}, u(t)) &= Lx\_1 \dot{x}\_1 + Cx\_2 \dot{x}\_2 \\ &= x\_1 \left( u - f\_2(x\_1) - x\_2 \right) + x\_2 \left( x\_1 - f\_3(x\_2) \right) \\ &= x\_1 \left( u - f\_2(x\_1) \right) - x\_2 f\_3(x\_2) \\ &= \left( x\_1 + f\_1(u) \right) u - u f\_1(u) - x\_1 f\_2(x\_1) - x\_2 f\_3(x\_2) \\ &= u y - u f\_1(u) - x\_1 f\_2(x\_1) - x\_2 f\_3(x\_2) \end{aligned}$$

and finally

$$u(t)\mathbf{y}(t) = \dot{V}(\mathbf{x}, u(t)) + uf\_1(u) + \mathbf{x}\_1 f\_2(\mathbf{x}\_1) + \mathbf{x}\_2 f\_3(\mathbf{x}\_2). \tag{7.41}$$

In case that *f*1, *f*<sup>2</sup> and *f*<sup>3</sup> are passive subsystems, i.e. all functions describing the corresponding characteristics of the resistors exist in the first and third quadrant only, then *V*˙(**x**, *u*(*t*)) ≤ *u*(*t*)*y*(*t*) holds and the RLC circuit is passive. Any coupling of this passive system to other passive systems in parallel or feedback structures again results in a passive system. This method provides a structured procedure for passivity analysis and stability evaluation and shows very high flexibility.
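The passivity statement can also be checked numerically by simulating the circuit equations and comparing the stored energy with the energy supplied at the port, i.e. inequality (7.38). The parameter values and the cubic resistor characteristic in the following sketch are assumptions for illustration; any first/third-quadrant functions would serve:

```python
import numpy as np

# Numerical passivity check of the RLC example: with a first/third-quadrant
# resistor characteristic f (chosen here as cubic, an assumption), the stored
# energy must never exceed the energy supplied at the port.
L_ind, C_cap = 0.5, 0.2
f = lambda v: v**3           # passive characteristic used for f1, f2 and f3

dt, N = 1e-4, 50000          # 5 s of simulation
x1, x2 = 0.0, 0.0            # i_L and u_C start at rest -> V(x(0)) = 0
supplied, ok = 0.0, True
for n in range(N):
    u = np.sin(n * dt)                   # port voltage
    y = x1 + f(u)                        # port current, y = x1 + f1(u)
    dx1 = (u - f(x1) - x2) / L_ind       # L*x1' = u - f2(x1) - x2
    dx2 = (x1 - f(x2)) / C_cap           # C*x2' = x1 - f3(x2)
    x1 += dt * dx1
    x2 += dt * dx2
    supplied += dt * u * y               # integral of u*y (supplied energy)
    V = 0.5 * L_ind * x1**2 + 0.5 * C_cap * x2**2
    if V > supplied + 1e-6:
        ok = False

print("stored energy <= supplied energy:", ok)
```

The small tolerance absorbs the discretization error of the Euler integration; the inequality itself holds with margin because the three resistors dissipate energy continuously.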

In conclusion it is necessary to mention that all methods for stability analysis introduced in this section show certain advantages and disadvantages concerning their applicability, information value and complexity, regardless of whether linear or nonlinear systems are considered. Before performing a stability analysis, the applicability of a specific method should be checked individually. Due to its limited scope, this section can only give a short overview of the introduced methods and techniques and explicitly does not claim to be a detailed description. For further study the reader is invited to consult the proposed literature.

## **7.3 Control of Multi-input Systems**

Systems can be classified by their numbers of inputs and outputs: SISO (single input, single output), SIMO (single input, multiple outputs), MISO (multiple inputs, single output), and MIMO (multiple inputs, multiple outputs). In multivariable systems, loop interactions cause unexpected coupling effects between the variables, which complicates their control.

Many practical systems, such as most robotic manipulators, cars, and aircraft, have multiple inputs and are often nonlinear. In these systems, designing a feedback control that fulfills the desired performance and robustness characteristics becomes more challenging. Here, two control types, position control and trajectory control, will be discussed.

Consider a simple planar robotic manipulator with only two links as depicted in Fig. 7.26. The system's dynamic model can be written as:

$$M(q)\ddot{q} + C(q, \dot{q}) + g(q) = \tau \tag{7.42}$$

where *M*(*q*) is the inertia matrix of the manipulator, *C*(*q*, *q*˙) contains the centripetal and Coriolis torques, *g*(*q*) is the vector of gravitational torques, and τ is the vector of actuator torques. As can be seen from the above equation, the system is strongly nonlinear (the Coriolis and centripetal terms are always nonlinear) with coupled dynamics, which makes it challenging to design a feedback control structure.

As a solution, using high ratio geared actuators can effectively remove the nonlinearity and coupled dynamic difficulties. However, the backlash and friction of the gears, which are hard nonlinearities, adversely affect the performance of the system such as tracking and force control accuracy.

## *7.3.1 Position Control*

Assume that the two-link manipulator moves in the horizontal plane, thus *g*(*q*) = 0, and that it is required to move to a defined stationary position *q*<sub>d</sub>. The simplest feedback control law to achieve position control is the joint PD controller, which controls each joint independently based on its position error and its time derivative:

$$
\tau\_i = -k\_{pi}\tilde{q}\_i - k\_{di}\dot{q}\_i \tag{7.43}
$$

where *k*<sub>pi</sub> > 0, *k*<sub>di</sub> > 0, *q*˜<sub>i</sub> = *q*<sub>i</sub> − *q*<sub>di</sub> is the position error and *q*˙<sub>i</sub> is the velocity of the *i*th joint. This control structure can be seen as a spring and damper connected to each joint, where the neutral position of the spring is the desired position. As a result, the system performs a damped oscillation towards the desired position. The stability can be checked by considering the total mechanical energy of the system as Lyapunov function:

$$V = \frac{1}{2} (\dot{q}^T M \dot{q} + \tilde{q}^T K\_p \tilde{q}) \tag{7.44}$$

where *K*<sub>p</sub> is the matrix of *P* controller coefficients, which is diagonal and positive definite. The derivative of the Lyapunov function can then be derived as:

$$
\dot{V} = -\dot{q}^T K\_d \dot{q} \le 0 \tag{7.45}
$$

where *K*<sub>d</sub> is the matrix of *D* controller coefficients which, like *K*<sub>p</sub>, is diagonal and positive definite. As can be seen from the above equation, *V*˙ is the energy dissipated by the *D* controller, i.e. the virtual damping. The time response of such a controlled system is almost the same as that of a damped mass-spring system. However, one should expect a significant variation of the time response characteristics of such a highly nonlinear plant under constant controller parameters. Other solutions could be sliding mode control and adaptive control, which are capable of dealing with nonlinearity more effectively.
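The behavior of the control law (7.43) can be sketched for a single joint with constant inertia (an assumption for illustration; the full manipulator adds the coupled terms *M*(*q*) and *C*(*q*, *q*˙)). The gains and inertia below are not from the text:

```python
# Single-joint sketch of the joint PD law (7.43): J*q'' = tau with
# tau = -kp*(q - qd) - kd*q'. Parameter values are illustrative assumptions.
J, kp, kd, qd = 0.1, 20.0, 2.0, 1.0
q, dq, dt = 0.0, 0.0, 1e-4

for _ in range(100000):              # 10 s of simulated time
    tau = -kp * (q - qd) - kd * dq   # virtual spring/damper around qd
    ddq = tau / J
    q += dt * dq
    dq += dt * ddq

print(f"final position ~{q:.4f} (target {qd})")
```

With this parameter choice the damping ratio is about 0.7, so the joint performs a lightly damped oscillation and settles at the desired position, mirroring the mass-spring-damper interpretation above.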

## *7.3.2 Trajectory Control*

Now consider that the desired position changes with respect to time. Due to the strong nonlinearity of the manipulator in Fig. 7.27 and its equation, the PID-SISO controller structure cannot satisfy the desired tracking performance. One solution is to use the general form of the linear PID controller for a MIMO system. In [11], a PID-MIMO control law is tuned by trial and error. The advantage of PID-MIMO over PID-SISO is that PID-MIMO uses the errors of all joints to calculate the input of each joint. In other words, the *K*<sub>p</sub> and *K*<sub>d</sub> matrices are not diagonal; they are symmetric and positive definite. As a result, the tracking performance improves significantly compared to PID-SISO (PID).

Figure 7.28 depicts the tracking error of the two-linkage robot, which shows that in a multi-input system, utilizing a single-input PID results in significant tracking error.

Other effective control methods could be robust and adaptive control. However, there is a difference in their performance. As mentioned previously, since the nonlinearity of the system is known and modeled, adaptive control can lead to better performance. In other words, robust control treats the nonlinearity as uncertainty, which is a much more conservative approach than estimating the nonlinearity as in the adaptive control method. Therefore, in this case,

**Fig. 7.27** The PID-SISO (PID) controller structure (**a**) and the PID-MIMO controller structure (**b**)

**Fig. 7.28** Tracking error comparison of PID and PID-MIMO

the adaptive control can result in superior performance. However, for haptic or rehabilitation systems, where the robot interacts with an unknown environment or a user's command, a robust adaptive controller enhances the system performance, as discussed in [1].

## **7.4 Control Law Design for Haptic Systems**

As introduced at the beginning of this chapter, control design is a fundamental and necessary aspect of the development of haptic systems. Besides the techniques for system description and stability analysis, the need for control design and applicable design rules becomes obvious. Especially for the control design of a haptic system it is necessary to deal with several aspects and conditions that must be satisfied during the design process. The following sections present several control structures and design schemes in order to establish a basic knowledge of the toolbox for the analytic control design of haptic systems. This also involves some of the already introduced methods for system formulation and stability analysis, as these form the basis for most control design methods.

## *7.4.1 Structuring of the Control Design*

As introduced in Chap. 6 various different structures of haptic systems exist. Demands on the control of these structures are derived in the following.


the closed loop behavior is more complex than it is in a closed loop impedance controlled scheme.

All of these structures can be basically implemented in a haptic interaction as shown in Fig. 2.33. From this, all necessary control loops of the overall telemanipulation system become evident:


It becomes obvious that the design of a control system for a telemanipulation system with a haptic interface is complex and versatile. Consequently a generally valid procedure for control design cannot be given. The control structures must be designed step by step involving the following controllers:


This strict separation proposed above might not be the only way of structuring the overall system. Depending on the application and functionality, the purpose of the different controller and control levels might be in conflict to each other or simply overlap. Therefore it is recommended to set up the underlying system structure and define all applied control schemes corresponding to their required functionality.

Looking at the control of haptic systems, a similar structure can be established. For both the control of the process manipulation and the haptic display or interface, the central interface module has to generate demand values for force or position, which are then followed by the underlying controllers. These demand values derive from a calculation predefined by the designed control laws. To obtain such control laws, a variety of methods and techniques for structural design and optimization can be applied, depending on certain requirements. The following subsections give an overview of typical requirements on closed control loop behavior, followed by examples for control design.

## *7.4.2 Requirement Definition*

Besides the fundamental need for system stability with sufficient stability margins, additional requirements can be set up to achieve a certain closed-loop system behavior, such as dynamics or precision. These requirements can be represented quantitatively by characteristics of the closed loop step response.

Figure 7.29 shows the general form of a typical closed loop step response and its main characteristics. As it can be seen the demanded value is reached and the basic control requirement is satisfied.

Additional characteristics are discussed and listed in Table 7.1. For all mentioned characteristics a quantitative definition of requirements is possible. For example, the number and amplitude of overshoots shall not exceed a defined limit, or the response shall have a certain frequency spectrum, which is of special interest for the control design of haptic systems. As analyzed in Chap. 3, the user's impedance shows a significant frequency range which must not be excited within the control loop of the haptic device. Nevertheless a certain cut-off frequency has to be reached to establish a good dynamic performance. All these issues also apply to the requirements for the control design of the process manipulation. In addition to the requirements from the step response due to changes of the setpoint value, it is necessary to formulate

**Fig. 7.29** Closed loop step response requirements


**Table 7.1** Parameter for control quality requirements

**Fig. 7.30** Closed loop disturbance response requirements

requirements concerning the closed loop system behavior under disturbances originating from the process. Especially when interpreting the user's reaction as a disturbance within the overall system description, a requirement set for the disturbance reaction of the control loop has to be established. As can be seen in Fig. 7.30, similar characteristics exist to determine the disturbance reaction quantitatively and qualitatively. In most cases the step response behavior and the disturbance reaction cannot satisfy all requirements simultaneously, as they often conflict with each other, which is caused by the limited flexibility of the applied optimization method. Thus it is recommended to weigh the relevance of the step response against the disturbance reaction in order to choose the most beneficial optimization approach. Although determined quantitatively, not all requirements can be used directly in a predefined optimization method. In most cases requirements must be adjusted before specific control design and optimization methods can be applied. As an example, the time *T*<sub>res</sub> as depicted above cannot be used directly and must be transferred into a requirement for the closed loop dynamics characterized by a definite pole placement.
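The quality parameters of Fig. 7.29 and Table 7.1 can be extracted directly from a simulated step response. The following sketch assumes a second-order example plant (ζ = 0.5, ω<sub>n</sub> = 4 rad/s, not from the text) and computes the percentage overshoot and the 2 % settling time:

```python
import numpy as np

# Extracting step-response quality parameters from a simulated second-order
# closed loop (assumed example: zeta = 0.5, wn = 4 rad/s, unit step input).
zeta, wn, dt = 0.5, 4.0, 1e-4
x, v = 0.0, 0.0
x_hist = []
for n in range(100000):                          # 10 s of simulated time
    a = wn**2 * (1.0 - x) - 2 * zeta * wn * v    # x'' for step input w = 1
    v += dt * a
    x += dt * v
    x_hist.append(x)

x_arr = np.array(x_hist)
overshoot = (x_arr.max() - 1.0) * 100.0          # percentage overshoot
# settling time: last instant the response leaves the +/- 2 % band
outside = np.where(np.abs(x_arr - 1.0) > 0.02)[0]
t_settle = (outside[-1] + 1) * dt
print(f"overshoot ~{overshoot:.1f} %, settling time ~{t_settle:.2f} s")
```

For ζ = 0.5 the analytic overshoot is exp(−ζπ/√(1 − ζ²)) ≈ 16 %, which the simulation reproduces; the settling time of roughly 2 s matches the decay rate ζω<sub>n</sub> = 2 s⁻¹ of the envelope.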

Furthermore, simulation techniques and tests allow iteration within the design procedure to obtain an optimal control law. However, this convenient way of analyzing system behavior and testing designed control laws may tempt the designer to abandon the analytic system and control design strategy in favor of a trial and error approach.

## *7.4.3 General Control Law Design*

This section presents some possible types of controllers and control structures that can be used in the control schemes discussed above, together with several methods for optimizing the control parameters. Depending on the underlying system description, several approaches for setting up controllers and control structures are possible. This section covers classic PID control, additional control structures, e.g. compensation, as well as state feedback controllers and observer based state space control.

#### **7.4.3.1 Classic PID-Control**

One of the most frequently used controllers is the parallel combination of a proportional (P), an integrating (I) and a derivative (D) element. This combination is used in several variants: a pure P-controller, a PI combination, a PD combination or the complete PID structure. The PID structure combines the advantages of all individual components. The corresponding controller transfer function is described by

$$\underline{G}\_{R} = K\_{R} \left( 1 + \frac{1}{T\_{N}s} + T\_{V}s \right). \tag{7.46}$$

Figure 7.31 shows the equivalent block diagram of a PID controller structure. Adjustable parameters in this controller are the proportional gain *KR*, the integrator time constant *TN* and the derivative time *TV* .

With optimized parameter adjustment a wide variety of control tasks can be handled. On the one hand, this configuration offers the high dynamics of the proportional controller; on the other hand, the integrating component guarantees a high-precision step response with a residual error *x*<sub>d</sub> = 0 for *t* → ∞. The derivative component finally provides an additional degree of freedom that can be used for a certain pole placement of the closed loop system.
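A minimal discrete-time implementation of the controller (7.46) may be sketched as follows; the gains and the first-order example plant are assumptions for illustration, not from the text:

```python
# Minimal discrete-time PID implementing (7.46):
# u = K_R * (e + (1/T_N) * integral(e) + T_V * de/dt).
class PID:
    def __init__(self, K_R, T_N, T_V, dt):
        self.K_R, self.T_N, self.T_V, self.dt = K_R, T_N, T_V, dt
        self.integral, self.e_prev = 0.0, 0.0

    def update(self, e):
        self.integral += e * self.dt            # integrating component
        de = (e - self.e_prev) / self.dt        # derivative component
        self.e_prev = e
        return self.K_R * (e + self.integral / self.T_N + self.T_V * de)

# Usage on an assumed first-order plant x' = -x + u with setpoint w = 1:
ctrl = PID(K_R=2.0, T_N=0.5, T_V=0.05, dt=1e-3)
x = 0.0
for _ in range(10000):                          # 10 s of simulated time
    u = ctrl.update(1.0 - x)
    x += 1e-3 * (-x + u)
print(f"steady state ~{x:.4f}")
```

As stated above, the integrating component drives the residual error to zero, so the plant output settles at the setpoint.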

As major design techniques the following examples shall be introduced:

- heuristic tuning rules, such as the method of Ziegler and Nichols,
- pole placement for the closed control loop,
- optimization of integral criteria, in which the control

**Fig. 7.31** PID block diagram

error *x*<sub>d</sub> due to changes of the demanded set point or a process disturbance is integrated (and possibly weighted over time). This time integral is minimized by adjusting the controller parameters. In case of convergence of this minimization, the result is a set of optimized controller parameters.

For any additional theoretical background concerning controller optimization the reader is invited to consult the literature on control theory and control design [37, 38].

#### **7.4.3.2 Additional Control Structures**

In addition to the described PID controller, additional control structures extend the influence on the control result without having an impact on the system stability. The following paragraphs present disturbance compensation and the direct feedforward of auxiliary process variables.

#### **Disturbance Compensation**

The basic principle of disturbance compensation assumes that if a disturbance on the process is measurable and its influence is known, this knowledge can be used to compensate the disturbance by corresponding evaluation and processing. Figure 7.32 shows a simplified scheme of this additional control structure.

In this scheme a disturbance signal *z* is assumed to affect the closed loop via a disturbance transfer function *G*<sub>D</sub>. Measuring the disturbance signal and processing it with the compensator transfer function *G*<sub>C</sub> results in a compensation of the disturbance interference. Assuming an optimal design of the compensator transfer function, the interference caused by the disturbance is completely erased. The optimal compensator transfer function is given by

$$
\underline{G}\_{\mathcal{C}} = -\frac{\underline{G}\_{D}}{\underline{G}\_{\mathcal{S}}}.\tag{7.47}
$$


This method assumes that a mathematically practicable inversion of *G*<sub>S</sub> exists. For those cases where this assumption is not valid, the optimal compensator *G*<sub>C</sub> must be approximated. Furthermore, Fig. 7.32 clearly shows that this additional control structure does not have any influence on the closed loop system stability and can be designed independently. Besides the practicability, the additional effort should be taken into account: it increases through the sensors needed to measure the disturbance signals and through the additional costs of realizing the compensator.

#### **Auxiliary Input Feedforward**

A structure similar to disturbance compensation is the *feedforward* of auxiliary input variables. This principle is based on the knowledge of additional process variables that are used to influence the closed loop system behavior without affecting the system stability. Figure 7.33 shows an example of the feedforward of the demanded setpoint *w* to the controller signal *u* using a feedforward filter function *G*<sub>FF</sub>.

## **7.4.3.3 State Space Control**

Corresponding to the techniques for the description of multi-input multi-output systems discussed earlier in this chapter, state space control provides additional features to cover the special characteristics of those systems. As described before, multi-input multi-output systems are preferably depicted as state space models. Using this mathematical formulation enables the developer to implement a control structure that drives the internal system states to demanded values. A big advantage is that the design methods for state space control use an overall approach for control design and optimization instead of a step-by-step control design for each system state. With this approach it becomes possible to deal with profoundly coupled multi-input multi-output systems of high complexity and to design a state space controller simultaneously. This section presents the fundamental state space control structures, covering *state feedback control* as well as *observer based state space control*. For further detailed procedures as well as design and optimization methods the reader is referred to [38, 49].

**Fig. 7.34** State feedback control

#### **State Feedback Control**

As shown in Fig. 7.34, this basic structure for state space control uses a feedback of the system states **x**. Similar to the depiction in Fig. 7.2, the considered system is presented in state space description using the matrices **A**, **B**, **C** and **D**. The system states **x** are weighted by the gain matrix **K** and fed back to the vector of demanded values, which is filtered by the matrix **V**. The result represents the system input vector **u**. Neither **V** nor **K** has to be square, since a state space description allows different dimensions for the state vector, the vector of demanded values and the system input vector.
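The effect of the feedback matrix **K** can be sketched for an assumed double-integrator plant: placing the closed loop poles of **A** − **BK** at −2 and −3 yields the characteristic polynomial *s*² + 5*s* + 6, hence **K** = [6, 5] for this controllable canonical form:

```python
import numpy as np

# State feedback sketch for an assumed double-integrator plant.
# Desired closed-loop poles at -2 and -3 give s^2 + 5s + 6, so K = [6, 5].
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[6.0, 5.0]])

closed_loop = A - B @ K
poles = np.linalg.eigvals(closed_loop)
print("closed-loop poles:", np.sort(poles.real))
```

Verifying the eigenvalues of **A** − **BK** confirms the intended pole placement; for larger systems the same computation checks any candidate gain matrix.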

#### **Observer Based State Space Control**

The state space control structure discussed above requires complete knowledge of all system states, which means they have to be measured and processed for use in the control algorithm. From a practical point of view this is not always possible due to technical limits as well as costs and effort. As a result the developer is faced with the challenge of establishing a state space control without complete knowledge of the system states. As a solution, those system states that cannot be measured due to technical difficulties or significant cost factors are estimated using a state space observer structure, as shown in Fig. 7.35.

In this structure a system model is calculated in parallel to the real system. This model is described as exactly as possible by the corresponding parameter matrices **A**∗, **B**∗, **C**<sup>∗</sup> and **D**∗. The model input is also the input vector **u**. Thus the model provides an estimation of the real system states **x**<sup>∗</sup> and an estimated system output vector **y**∗. By comparing this estimated output vector **y**<sup>∗</sup> with the real output **y**, which is assumed to be measurable, the estimation error is obtained; it is weighted by the matrix **L** and fed back, resulting in a correction of the system state estimation **x**∗. Any estimation error in the system states or the output vector due to varying initial

**Fig. 7.35** Observer based state space control

states is corrected, and the estimated states **x**<sup>∗</sup> are passed through the gain matrix **K** and fed back for control.

This structure of an observer based state space control uses the Luenberger observer. In this configuration all real system states are assumed to be non-measurable, so the state space control relies entirely on estimated values. In practice, the feedback of measurable system states is combined with the observer based estimation of the remaining system states. In [38, 49] examples of observer based state space control structures as well as methods for observer design are discussed in more detail.
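The observer structure of Fig. 7.35 can be sketched in a few lines: plant and model run in parallel, and the output error **y** − **y**<sup>∗</sup> is fed back through the observer gain **L**. The plant matrices, the gain values and the initial estimation error below are illustrative assumptions.

```python
# Sketch of a Luenberger observer (Fig. 7.35) for a 2-state plant:
# only the output y = x1 is measured, x2 is estimated.
# Plant, gains and initial conditions are illustrative assumptions.
A = [[0.0, 1.0], [-4.0, -1.0]]   # plant matrix (position/velocity states)
B = [0.0, 1.0]
L = [8.0, 20.0]                  # observer gain, places estimator poles fast

x  = [1.0, 0.0]                  # true state (unknown to the controller)
xh = [0.0, 0.0]                  # observer state x*, wrong initial guess
dt = 1e-3
for _ in range(10_000):          # 10 s, explicit Euler
    u = 0.0                      # unforced motion is enough for the demo
    y  = x[0]                    # measured output
    yh = xh[0]                   # estimated output y*
    dx  = [x[1], A[1][0]*x[0] + A[1][1]*x[1] + B[1]*u]
    # the observer copies the model and corrects with L*(y - y*)
    dxh = [xh[1] + L[0]*(y - yh),
           A[1][0]*xh[0] + A[1][1]*xh[1] + B[1]*u + L[1]*(y - yh)]
    x  = [x[0] + dt*dx[0],  x[1] + dt*dx[1]]
    xh = [xh[0] + dt*dxh[0], xh[1] + dt*dxh[1]]

print(abs(x[0] - xh[0]) < 1e-3, abs(x[1] - xh[1]) < 1e-3)
```

The error dynamics are governed by **A** − **LC**; with the gains above the estimation error decays much faster than the plant itself, so the estimate converges despite the wrong initial guess.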

#### *Example:* **Cascade Control of a Linear Drive**

As an example for the design of a controller, the cascade control of a linear drive built up of an EC motor and a ball screw is considered in this section, based on [32]. The consideration includes non-linear effects due to friction, temperature change and a non-linear efficiency of the ball screw.

A schematic representation of the EC motor is given in Fig. 7.36, in which only one phase is illustrated for simplification. The motor is supplied with the voltage *u*DC.

The resistance *R* and the inductance *L* represent the stator winding of the motor. The angular speed of the rotor ω<sup>M</sup> generates a back electromotive force (back-EMF) *u*EMF. The mechanical properties of the motor are described by the motor torque *M*e, the load torque *M*L and the moment of inertia of the rotor *J*. Mesh analysis yields the equation for the electrical part of the motor

$$
u\_{DC} = Ri + L\frac{\mathrm{d}i}{\mathrm{d}t} + u\_{EMF} \tag{7.48}
$$

which can be written in the frequency domain as

$$
\underline{U}\_{\rm DC} - \underline{U}\_{\rm EMF} = \underline{I}(R + sL) \tag{7.49}
$$

The back electromotive force *U*EMF depends on the angular speed of the rotor ω*<sup>M</sup>*, the back-EMF constant *k*e and the parameter *F*(φ*e*), which describes the dependence of the back-EMF on the electrical angle φ*e*.

$$
u\_{EMF} = k\_e \omega\_M F(\varphi\_e) \tag{7.50}
$$

The motor torque *M*e generated by the motor current *i* correlates with the mechanical load *M*L and the angular acceleration of the rotor with the moment of inertia *J*. It follows:

$$M\_e = \frac{i \cdot u\_{EMF}}{\omega\_M} = ik\_e F(\varphi\_e) = J\frac{\mathrm{d}\omega\_M}{\mathrm{d}t} + M\_L \tag{7.51}$$

In the frequency domain the mechanical properties of the motor are described by

$$M\_e - M\_L = sJ\omega\_M. \tag{7.52}$$

The model takes three different types of non-linearities into account: friction, temperature change and a non-linear efficiency of the ball screw. The friction is modeled as the sum of a static friction *KF* and a dynamic friction *kF* · ω*<sup>M</sup>* . So the equilibrium of moments of the rotor can now be written as

**Fig. 7.37 a** Equivalent thermal circuit of the EC motor, **b** efficiency of the ball screw depending on the mechanical load

$$M\_e - M\_L - K\_F = (k\_F + sJ)\,\omega\_M. \tag{7.53}$$

The influence of changes in temperature on motor parameters is modeled by the thermal equivalent circuit shown in Fig. 7.37a. The temperature change of the stator winding Δ*T*W can be determined by

$$\Delta T\_W = \frac{R\_{th1}T\_{th2}\,\mathrm{s} + R\_{th1} + R\_{th2}}{T\_{th1}T\_{th2}\,\mathrm{s}^2 + \left(T\_{th1} + T\_{th2} + R\_{th2}C\_{th1}\right)\mathrm{s} + 1}\,P\_{el} + \frac{R\_{th2}}{T\_{th1}T\_{th2}\,\mathrm{s}^2 + \left(T\_{th1} + T\_{th2} + R\_{th2}C\_{th1}\right)\mathrm{s} + 1}\,P\_{fric} \tag{7.54}$$

with

$$T\_{th1} = R\_{th1} C\_{th1} \quad \text{and} \quad T\_{th2} = R\_{th2} C\_{th2} \tag{7.55}$$

The resulting resistance of the stator winding *R*<sup>∗</sup> and back-EMF constant *k*e<sup>∗</sup> can be derived with knowledge of the temperature coefficients α*R* and α*k* from

$$R^{\*} = R(1 + \alpha\_R \Delta T\_W), \quad k\_e^{\*} = k\_e(1 + \alpha\_k \Delta T\_W). \tag{7.56}$$

The efficiency of the ball screw depends on the mechanical load of the linear drive. Its qualitative characteristics are shown in Fig. 7.37b and can be included in the model as a characteristic curve in a lookup table. The resulting model can be computed, for example, in Matlab/Simulink and used for simulation and controller design. In this example a cascade controller is chosen (Fig. 7.38). It consists of an inner loop for current control, a middle loop for velocity control and an outer loop for position control. P- or PI-controllers are used for the different control loops.

**Fig. 7.38** Structure of cascade controller of EC motor
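The cascade structure of Fig. 7.38 can be sketched with a strongly simplified, linear motor model: the friction, thermal and ball-screw non-linearities discussed above are omitted, and the inner current loop is idealized as instantaneous. All parameters and gains below are illustrative assumptions.

```python
# Rough sketch of a cascade controller (position -> velocity -> current)
# on a simplified, linear motor model. All values are illustrative
# assumptions, not parameters from the text.
J, kF = 1e-3, 5e-3      # rotor inertia [kgm^2], viscous friction [Nms]
kT    = 0.05            # assumed torque constant: M_e = kT * i

Kp_pos = 50.0               # outer loop: P position controller -> speed demand
Kp_vel, Ki_vel = 0.2, 2.0   # middle loop: PI velocity controller -> current demand

phi, omega = 0.0, 0.0   # rotor angle [rad], speed [rad/s]
e_int = 0.0             # integrator of the velocity PI
phi_dem = 1.0           # demanded position [rad]
dt = 1e-4
for _ in range(100_000):                        # 10 s, explicit Euler
    omega_dem = Kp_pos * (phi_dem - phi)        # position loop
    e_v = omega_dem - omega
    e_int += e_v * dt
    i_dem = Kp_vel * e_v + Ki_vel * e_int       # velocity loop (PI)
    # inner current loop assumed much faster than the mechanics: i = i_dem
    Me = kT * i_dem
    omega += dt * (Me - kF * omega) / J
    phi   += dt * omega

print(round(phi, 3))    # converges to the demanded position
```

The nesting follows the usual cascade rule: each inner loop is tuned to be considerably faster than the loop around it, so the outer position loop can treat the velocity loop as an almost ideal velocity source.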

## **7.5 Control of Teleoperation Systems**

In the previous sections an overview of system description and control aspects in general was given, which can be used for the design of local and global control laws. The focus of this section lies on special methods for the modeling of haptic systems and the stability analysis of bilateral telemanipulators. In contrast to Sect. 7.4, special tools for the development of control laws are presented here, which are based upon the two-port hybrid representation of bilateral telemanipulators (Sect. 7.5.1). Subsequently, in Sect. 7.5.2 a definition of transparency is introduced, which can be used to analyze the performance of a haptic system in dependence of the system characteristics and the chosen control law. In Sect. 7.5.3 the general control model for telemanipulators is introduced to close the gap between the closed loop representation, known from general control theory and used in Sects. 7.1–7.4, and the two-port hybrid representation. In Sect. 7.5.4 it is shown how a stable and safe operation of the haptic system can be achieved. Furthermore, the design of stable control laws in the presence of time delays is presented in Sect. 7.5.5.

## *7.5.1 Two-Port Representation*

In general a haptic system is a bilateral telemanipulator, where a user handles a master device to control a slave device, which is interacting with an environment. A common representation of a bilateral telemanipulator is the general two-port model as shown in Fig. 7.39.

User and environment are represented by one-ports, characterized by their mechanical impedances *Z*H and *Z*E, as they can be seen as passive elements [33], see Chap. 3. The mechanical impedance *Z* is defined by Eq. (7.57)

$$
\underline{Z} = \frac{\underline{F}}{\underline{v}} \tag{7.57}
$$

The user manipulates the master device, which controls the slave device. The slave interacts with the environment. The behavior of the telemanipulator is described by its hybrid matrix **H** [21, 43]. So the coupling of user action and interaction with the environment is described by the following hybrid matrix taking forces and velocities at the master and slave side and the properties of the haptic system into account.

$$
\begin{pmatrix}
\underline{F}\_{\rm H} \\
-\underline{v}\_{\rm E}
\end{pmatrix} = \begin{pmatrix}
\underline{h}\_{11} & \underline{h}\_{12} \\
\underline{h}\_{21} & \underline{h}\_{22}
\end{pmatrix} \cdot \begin{pmatrix}
\underline{v}\_{\rm H} \\
\underline{F}\_{\rm E}
\end{pmatrix}.\tag{7.58}
$$

In this case, the four h-parameters represent

$$
\begin{pmatrix} \underline{F}\_{\rm H} \\ -\underline{v}\_{\rm E} \end{pmatrix} = \begin{pmatrix} \text{Master Input Impedance} & \text{Backward Force Gain} \\ \text{Forward Velocity Gain} & \text{Slave Output Admittance} \end{pmatrix} \cdot \begin{pmatrix} \underline{v}\_{\rm H} \\ \underline{F}\_{\rm E} \end{pmatrix} \tag{7.59}
$$

Please note that the velocity of the slave *v*E is taken into account with a negative sign. This is done to fulfill the convention for general two-ports, where the flow always flows into a port. The hybrid two-port representation shown above is often used to determine stability criteria and to describe performance properties of bilateral telemanipulators. Besides the formulation with force as flow variable (also found in [21, 35], for example), one can also find velocity as flow variable in other two-port descriptions of bilateral telemanipulators [25]. As long as the coupling is defined by the impedance formulation given in Eq. (7.57), both variants of the two-port description are interchangeable.
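The coupling of Eq. (7.58) can be made concrete with a small numeric sketch: for one frequency, given the master velocity and the environment force, the h-parameters return the force displayed to the user and the (negated) slave velocity. All parameter values below are illustrative assumptions.

```python
# Numeric sketch of the hybrid coupling of Eq. (7.58) at one frequency.
# All h-parameter and signal values are illustrative assumptions.
h11 = 0.2 + 0.5j     # master input impedance (device mass/friction)
h12 = 1.0            # backward force gain
h21 = -1.0           # forward velocity gain (note the -v_E convention)
h22 = 0.01j          # slave output admittance (slight compliance)

v_H = 0.05           # master velocity phasor [m/s]
F_E = 2.0            # environment force phasor [N]

F_H     = h11 * v_H + h12 * F_E     # force displayed to the user
neg_v_E = h21 * v_H + h22 * F_E     # the port flow is -v_E by convention
v_E     = -neg_v_E

print(F_H, v_E)
```

Python's built-in complex numbers are sufficient here, since each h-parameter is simply a complex gain at the evaluated frequency.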

## *7.5.2 Transparency*

Besides system stability, performance is an important design criterion in the development of haptic systems. The function of a haptic system is to provide high fidelity force feedback of the contact force at the slave side to the user manipulating the master device of the telemanipulator. One parameter often used to evaluate the haptic sensation presented to the user is transparency. If the user interacts directly with the environment, he experiences a haptic sensation which is determined by the mechanical impedance *Z*E of the environment. If the user is coupled to the environment via a telemanipulator system, he experiences a force impression which is determined by the backward force gain and the mechanical input impedance of the master device. It is desirable that the haptic sensation for the user of the telemanipulator is the same as when interacting directly with the environment. Therefore the telemanipulator has to display the mechanical impedance of the environment *Z*E at the master device. Assume that there is no scaling of velocity or force, i.e. |*h*12| = |*h*21| = 1. Then the following conditions have to hold to reach full transparency.

$$
\underline{F}\_{\rm H} = \underline{F}\_{\rm E} \quad \text{and} \quad \underline{\nu}\_{\rm H} = \underline{\nu}\_{\rm E}. \tag{7.60}
$$

From this it follows that for perfect transparency [35]

$$\underline{\mathbf{Z}\_{\rm H}} = \underline{\mathbf{Z}\_{\rm E}}\tag{7.61}$$

Therefore the force experienced by the user at the master device is

$$
\underline{F}\_{\rm H} = \underline{h}\_{11}\underline{v}\_{\rm H} + \underline{h}\_{12}\underline{F}\_{\rm E}
$$

and for the velocity at the slave side holds

$$-\underline{\underline{\nu}}\_{\rm E} = \underline{h}\_{21}\underline{\underline{\nu}}\_{\rm H} + \underline{h}\_{22}\underline{F}\_{\rm E}.$$

Therefore the mechanical impedance displayed by the master and felt by the user is described by

$$\underline{Z}\_{\rm T} = \frac{\underline{F}\_{\rm H}}{\underline{v}\_{\rm H}} = \frac{\underline{h}\_{11}\underline{v}\_{\rm H} + \underline{h}\_{12}\underline{F}\_{\rm E}}{\frac{-\underline{v}\_{\rm E} - \underline{h}\_{22}\underline{F}\_{\rm E}}{\underline{h}\_{21}}} \tag{7.62}$$

By analyzing Eq. (7.62) the conditions for perfect transparency can be derived. To achieve perfect transparency, the output admittance at the slave side and the input impedance at the master side have to be zero. From this it follows that for perfect transparency, in the case of no scaling, the matrix has to be of the form

$$
\begin{pmatrix} \underline{F}\_{\rm H} \\ -\underline{v}\_{\rm E} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \cdot \begin{pmatrix} \underline{v}\_{\rm H} \\ \underline{F}\_{\rm E} \end{pmatrix}
$$

It is obvious that perfect transparency is in practice not achievable without further measures, due to the non-zero input impedance *h*11 and output admittance *h*22 of the manipulator system. If the input impedance were zero, the user would not feel the mechanical properties of the master device (mass, friction, compliance). An output admittance of zero corresponds to an ideally stiff slave device.
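The role of *h*11 and *h*22 can be made concrete by closing the slave port with an environment impedance (*F*E = *Z*E · *v*E), which turns Eq. (7.62) into Z_T = h11 − h12·h21·Z_E/(1 + h22·Z_E). The following sketch compares the ideal parameters with a device having non-zero input impedance and output admittance; all numeric values are illustrative assumptions.

```python
# Sketch: impedance Z_T felt at the master, obtained by terminating the
# two-port of Eq. (7.58) with an environment impedance Z_E.
# Numeric values are illustrative assumptions.
def transmitted_impedance(h11, h12, h21, h22, Z_E):
    # closing the slave port with F_E = Z_E * v_E gives
    # Z_T = h11 - h12*h21*Z_E / (1 + h22*Z_E)
    return h11 - h12 * h21 * Z_E / (1 + h22 * Z_E)

Z_E = 100.0 + 2.0j                  # stiff-ish environment at one frequency

ideal = transmitted_impedance(0.0, 1.0, -1.0, 0.0, Z_E)
real  = transmitted_impedance(0.3 + 1.0j, 1.0, -1.0, 0.002, Z_E)

print(ideal)        # equals Z_E: perfect transparency
print(real)         # device mass and compliance distort the felt impedance
```

With h11 = h22 = 0 the environment impedance is passed through unchanged; any residual device dynamics shift the felt impedance away from *Z*E.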

#### **7.5.2.1 A Perception-Oriented Consideration of Transparency**

To obtain a transparent system, the system engineer has two options: work on the control structure, as described in the following sections, or consider the perception capabilities of the human user in the definition of transparency. The latter is the focus of this section, which is based on the more detailed elaborations in [27]. It has to be noted that this approach still lacks some experimental evaluation.

Up to now, transparency as defined in Eqs. (7.60) and (7.61) is a binary criterion: a system is either transparent, if all conditions are fulfilled, or not transparent, if one of the equalities is violated. Beyond this formulation, one can define the absolute transparency error *e*T according to Heredia et al. as shown in Eq. (7.63) [30]

$$
\underline{\mathbf{e}}\_{\rm T} = \underline{\mathbf{Z}}\_{\rm H} - \underline{\mathbf{Z}}\_{\rm E} \tag{7.63}
$$

and the relative transparency error *e*′T as shown in Eq. (7.64)

$$
\underline{e}'\_{\rm T} = \frac{\underline{Z}\_{\rm H} - \underline{Z}\_{\rm E}}{\underline{Z}\_{\rm E}} \tag{7.64}
$$

When analyzed along the whole intended dynamic range and in all relevant → DoF of the haptic system, Eqs. (7.63) and (7.64) allow for the quantitative comparison of different haptic systems and can give insight into the relevant frequency ranges that have to be optimized for a more transparent system. They also provide the basis for the integration of perception properties into the assessment of transparency.

From the above mentioned definitions of transparency (Eqs. (7.60) and (7.61)) one can conclude that *e*T = *e*′T = 0 has to hold to fulfill the requirement of transparency. On the other hand it is obvious that a human user will not perceive all possible mechanical impedances, since the perception capabilities are limited as shown in Sect. 2.1. To obtain a quantified range for *e*T and *e*′T, a thought experiment<sup>1</sup> is conducted in the following [46].

#### **Experiment Assumptions**

The following assumptions are made for the thought experiment about the user and the teleoperation scenario:


<sup>1</sup> Thought experiments (also *gedankenexperiment*) consider the possible outcomes of a hypothesis without actually performing the experiment, but by applying theoretical considerations. They are conducted when the actual performance of an experiment is not possible or universally valid. Famous thought experiments include for example Schrödinger's Cat to illustrate quantum indeterminacy.

course of the experiment. Further, a set of frequency-dependent sensory thresholds for deflections and forces exists. They are labeled *F*θ and *d*θ, respectively. Both thresholds can be coupled using the mechanical impedance of the user and ω = 2π *f* as the angular frequency of the haptic signal, as stated in Eq. (7.65) [28].

$$|\underline{Z}\_{\text{user}}| = \left|\frac{F\_{\theta}}{j\omega d\_{\theta}}\right|\tag{7.65}$$


#### **Thought Experiment**

For the experiment, an impedance type system is assumed, i.e. the user imposes a deflection on the haptic interface of the teleoperation system and the measured interaction forces are displayed to the user. First, we assume an environment impedance *Z*E < *Z*user. Further evaluation leads to Eq. (7.66).

$$\underline{Z}\_{\rm E} = \frac{\underline{F}\_{\rm E}}{j\omega\,\underline{d}\_{\rm E}} < \frac{\underline{F}\_{\rm user}}{j\omega\,\underline{d}\_{\rm user}} = \underline{Z}\_{\rm user} \tag{7.66}$$

For an impedance type system, the user can be modeled as a source of deflection or velocity. In that case, the induced deflection of the teleoperation system equals the deflection of the environment *d*user,int = *d*H = *d*E. With Eq. (7.66) this leads to *F*H = *F*E < *F*user. Assuming that the deflection *d*user,int imposed by the user is smaller than the user's detection threshold *d*θ (assumption no. 3), the resulting force displayed to the user |*F*user| is smaller than the individual force threshold *F*θ according to Eq. (7.65).

This experiment can easily be extended to admittance type systems. Descriptively, the result can be interpreted as the environment "evading" manipulation, as for example a slowly moving hand in free air: the arm muscles serve as a deflection source moving the hand, but the interaction forces of the air molecules are too small to be detected.

For large environment impedances, the inequalities above are reversed. In that case, the forces or deflections resulting from the interaction are larger than the detection threshold and the user will feel an interaction with the environment.

#### **Experiment Analysis**

From the experiment one can reason that the user impedance limits the transparency error function from Eq. (7.64). This is done in such a way that environment impedances lower than the user impedance are neglected, as shown in Eq. (7.67).

$$\underline{e}'\_{\rm T} = \frac{\underline{Z}\_{\rm H} - \max\left(\underline{Z}\_{\rm E}, \underline{Z}\_{\rm user}\right)}{\max\left(\underline{Z}\_{\rm E}, \underline{Z}\_{\rm user}\right)}\tag{7.67}$$

If the user impedance is greater than the environment impedance, the user impedance is used, since the user will not feel any haptic stimuli generated by the lower environment impedance. If the user impedance is smaller than the environment impedance, the environment impedance is used as a reference for the transparency error.

Up to now, only absolute detection thresholds were considered, which describe the detection properties of haptic perception. In a second step, the discrimination properties shall be considered in more detail. It is assumed that a system is transparent *enough* for satisfactory usage if errors are smaller than the differences that can be detected by the user. This difference can be described in a conservative way by the → JND as defined in Sect. 2.1. With that, a limit can be imposed on Eq. (7.67) as given by Eq. (7.68)

$$\underline{e}'\_{\rm T} = \frac{\underline{Z}\_{\rm H} - \max\left(\underline{Z}\_{\rm E}, \underline{Z}\_{\rm user}\right)}{\max\left(\underline{Z}\_{\rm E}, \underline{Z}\_{\rm user}\right)} < c\_{\rm JND(Z)} \tag{7.68}$$

This limit *c*JND(Z) is defined as the JND of an arbitrary mechanical impedance. Although this value is not directly measurable, it can be bounded either by the JNDs of ideal components like springs, masses and viscous dampers (see Sect. 2.1 for values) or by the JNDs of forces and deflections (since a change in impedance can be detected if the resulting force or deflection for a fixed imposed deflection or force, respectively, exceeds the JND). With known values, this leads to a probably sufficient limit of *e*′T ≤ 3 dB.
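Equations (7.67) and (7.68) can be sketched numerically. The comparison by impedance magnitude and the concrete numbers below, including deriving the limit from the ~3 dB value, are simplifying assumptions for illustration.

```python
# Sketch of the perception-oriented error of Eqs. (7.67)/(7.68): the
# reference impedance is max(|Z_E|, |Z_user|), so environments the user
# cannot feel do not count as transparency errors. Magnitude comparison
# and all numeric values are simplifying assumptions.
def relative_transparency_error(Z_H, Z_E, Z_user):
    Z_ref = Z_E if abs(Z_E) > abs(Z_user) else Z_user
    return abs(Z_H - Z_ref) / abs(Z_ref)

C_JND = 10 ** (3 / 20) - 1       # ~0.41, derived from the ~3 dB limit

# free-air case: the tiny Z_E is masked by the user's own impedance
e1 = relative_transparency_error(Z_H=5.0, Z_E=0.01, Z_user=5.0)
# stiff contact: Z_E dominates and the error is judged against it
e2 = relative_transparency_error(Z_H=120.0, Z_E=100.0, Z_user=5.0)

print(e1 < C_JND, e2 < C_JND)
```

In the free-air case the displayed impedance only has to stay below what the user can feel, while in the stiff contact a 20 % deviation still passes the assumed discrimination limit.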

With Eq. (7.68) a perception-considering error term for the transparency of haptic teleoperation systems is given. One has to keep in mind the assumptions of the underlying thought experiment and the fact that the experimental evaluation of this approach is still the focus of current research activities by the authors.

## *7.5.3 General Control Model for Teleoperators*

In principle a telemanipulator system can be divided into three different layers as shown in Fig. 7.40. The first layer contains the mechanical, electrical and local control properties of the master device. The second layer represents the communication channels between master and slave and therefore any time delays that may occur. The third layer describes the mechanical, electrical and local control properties of the

**Fig. 7.40** Schematic illustration of a telemanipulator

slave device. As mentioned before, the dynamic behavior of a master and accordingly a slave device (first and third layer) is determined by its mechanical and electrical characteristics. Depending on the type of actuator used in the master and slave device, a distinction is made between impedance and admittance devices. Impedance devices receive a force command and apply a force to their environment. In contrast, admittance devices receive a velocity command and behave as a velocity source interacting with the environment (see Chap. 6).

Typically, the dominant parameters are the mass and friction of the device. Compliance can be minimized by a well-considered mechanical design. In addition it can be assumed that the dynamic characteristics of the electronics can be disregarded, because the mechanical design dominates the overall performance of the device. A local controller design may extend the usable frequency range of the device and can guarantee a stable operation of the device. In addition it is possible to change the characteristics of the device from impedance behavior to admittance behavior and vice versa [25].

The second layer describes the characteristics of the communication channel. The significant physical values, which have to be transmitted between master and slave manipulator, are the forces and velocities at the master and slave side. Therefore telemanipulators exhibit at least two and up to four communication channels for transmitting these values. These communication paths may be afflicted with a significant time delay *T*, which can cause instability of the whole system.

Figure 7.41 shows the system block diagram of a general four-channel architecture bilateral telemanipulator using impedance actuators for master and slave manipulator, for instance electric motors [24, 35]. In total there are four possible combinations of impedance and admittance devices, impedance-impedance, impedance-admittance, admittance-impedance and admittance-admittance.

In this section the impedance-impedance architecture is used due to its common use because of the high hardware availability. The forces of user and environment *F*H and *F*E are independent values. The mechanical impedance of user and environment is described by *Z*H and *Z*E. The communication layer consists of four transmission elements *C*1, *C*2, *C*3 and *C*4 for transmitting the contact forces and velocities *v*H, *F*E, *F*H and *v*E between master and slave side. *Z*m<sup>−1</sup> and *Z*s<sup>−1</sup> represent the mechanical admittance of master and slave manipulator. In addition *C*mP and *C*sP are local master and slave position controllers and *C*mF and *C*sF are local force controllers.

The dynamics of the four-channel architecture are described by the following equations:

$$\begin{aligned} \underline{F}\_{\rm CM} &= C\_{\rm mF}\underline{F}\_{\rm H} - C\_4 e^{-sT}\underline{v}\_{\rm E} - C\_2 e^{-sT}\underline{F}\_{\rm E} - C\_{\rm mP}\underline{v}\_{\rm H} \\ \underline{F}\_{\rm CS} &= C\_1 e^{-sT}\underline{v}\_{\rm H} + C\_3 e^{-sT}\underline{F}\_{\rm H} - C\_{\rm sF}\underline{F}\_{\rm E} - C\_{\rm sP}\underline{v}\_{\rm E} \\ \underline{Z}\_{\rm s}\underline{v}\_{\rm E} &= \underline{F}\_{\rm CS} - \underline{F}\_{\rm E} \\ \underline{Z}\_{\rm m}\underline{v}\_{\rm H} &= \underline{F}\_{\rm CM} + \underline{F}\_{\rm H} \end{aligned}$$

So the closed loop dynamics of the telemanipulator are represented by

$$\left(\underline{Z}\_{\rm m} + C\_{\rm mP}\right)\underline{v}\_{\rm H} + C\_4 e^{-sT}\underline{v}\_{\rm E} = (1 + C\_{\rm mF})\,\underline{F}\_{\rm H} - C\_2 e^{-sT}\underline{F}\_{\rm E} \tag{7.69}$$

$$-\left(\underline{\mathbf{Z}}\_{s} + \mathbf{C}\_{s\mathbf{P}}\right) \cdot \underline{\mathbf{v}}\_{\mathbf{E}} + \mathbf{C}\_{1}e^{-s\mathbf{T}}\underline{\mathbf{v}}\_{\mathbf{H}} = (1 + \mathbf{C}\_{s\mathbf{F}}) \cdot \underline{\mathbf{F}}\_{\mathbf{E}} - \mathbf{C}\_{3}e^{-s\mathbf{T}}\underline{\mathbf{F}}\_{\mathbf{H}}\tag{7.70}$$

As presented in Sect. 7.5.1 it is common to describe the dynamics of a telemanipulator by a two-port representation. In addition, several stability analysis methods can be applied to the two-port model. From Eqs. (7.69) and (7.70) with (7.58) the following *h*-parameters can be obtained:

$$\underline{h}\_{11} = \frac{(\underline{\mathbf{Z}\_m} + \mathbf{C}\_{mP}) \cdot (\underline{\mathbf{Z}\_s} + \mathbf{C}\_{sP}) + \mathbf{C}\_1 \mathbf{C}\_4 e^{-2sT}}{(1 + \mathbf{C}\_{mF}) \cdot (\underline{\mathbf{Z}\_s} + \mathbf{C}\_{sP}) - \mathbf{C}\_3 \mathbf{C}\_4 e^{-2sT}} \tag{7.71}$$

$$\underline{h}\_{12} = \frac{C\_2(\underline{Z}\_s + C\_{sP})e^{-sT} - C\_4(1 + C\_{sF})e^{-sT}}{(1 + C\_{mF}) \cdot (\underline{Z}\_s + C\_{sP}) - C\_3 C\_4 e^{-2sT}} \tag{7.72}$$

$$\underline{h}\_{21} = -\frac{C\_3(\underline{Z}\_{\rm m} + C\_{mP})e^{-sT} + C\_1(1 + C\_{mF})e^{-sT}}{(1 + C\_{mF}) \cdot (\underline{Z}\_s + C\_{sP}) - C\_3C\_4e^{-2sT}} \tag{7.73}$$

$$\underline{h}\_{22} = \frac{(1 + C\_{sF}) \cdot (1 + C\_{mF}) - C\_2 C\_3 e^{-2sT}}{(1 + C\_{mF}) \cdot (\underline{Z}\_s + C\_{sP}) - C\_3 C\_4 e^{-2sT}} \tag{7.74}$$

With Eq. (7.62) and Eqs. (7.71)–(7.74) the impedance transmitted to the user *Z*<sup>T</sup> is given by Eq. (7.75) [25].

$$\underline{Z}\_{\rm T} = \frac{(\underline{Z}\_{\rm m} + C\_{\rm mP})(\underline{Z}\_{\rm s} + C\_{\rm sP}) + C\_1C\_4e^{-2sT} + \left[(1 + C\_{\rm sF})(\underline{Z}\_{\rm m} + C\_{\rm mP}) + C\_1C\_2e^{-2sT}\right]\underline{Z}\_{\rm E}}{(1 + C\_{\rm mF})(\underline{Z}\_{\rm s} + C\_{\rm sP}) - C\_3C\_4e^{-2sT} + \left[(1 + C\_{\rm sF})(1 + C\_{\rm mF}) - C\_2C\_3e^{-2sT}\right]\underline{Z}\_{\rm E}} \tag{7.75}$$

Perfect transparency is achievable if the time delay *T* is insignificant. The controllers must satisfy the following conditions, which are known as the transparency-optimized control law [24, 35]:

$$\begin{aligned} C\_1 &= \underline{Z}\_s + C\_{s\text{P}}\\ C\_2 &= 1 + C\_{\text{mF}}\\ C\_3 &= 1 + C\_{s\text{F}}\\ C\_4 &= -\left(\underline{Z}\_m + C\_{\text{mP}}\right)\\ C\_2, C\_3 &\neq 0 \end{aligned} \tag{7.76}$$

By use of the local position and force controllers of master and slave *C*mP, *C*sP, *C*mF and *C*sF, perfect transparency can be achieved with only three communication channels. In this case the force feedback from slave to master *C*2 can be neglected [24, 26].
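A numeric sketch of this result: evaluating the h-parameters of Eqs. (7.71)–(7.74) with the transparency-optimized controllers of Eq. (7.76) and *T* = 0 yields h11 = h22 = 0 and h12 = −h21 = 1, so the impedance transmitted to the user equals *Z*E. All numeric values below are illustrative assumptions.

```python
# Numeric check (sketch): with the transparency-optimized law of
# Eq. (7.76) and negligible delay (T = 0), the h-parameters of
# Eqs. (7.71)-(7.74) collapse to the ideal transparent two-port.
# All impedance/controller values are illustrative assumptions.
Z_m, Z_s = 0.4 + 2.0j, 0.6 + 3.0j   # master/slave impedances at one freq.
C_mP, C_sP = 5.0, 8.0               # local position controllers
C_mF, C_sF = 0.5, 0.7               # local force controllers
Z_E = 50.0 + 4.0j                   # environment impedance
d = 1.0                             # e^{-sT} with T = 0

# channel controllers according to Eq. (7.76)
C1, C2, C3, C4 = Z_s + C_sP, 1 + C_mF, 1 + C_sF, -(Z_m + C_mP)

den = (1 + C_mF) * (Z_s + C_sP) - C3 * C4 * d**2
h11 = ((Z_m + C_mP) * (Z_s + C_sP) + C1 * C4 * d**2) / den
h12 = (C2 * (Z_s + C_sP) * d - C4 * (1 + C_sF) * d) / den
h21 = -(C3 * (Z_m + C_mP) * d + C1 * (1 + C_mF) * d) / den
h22 = ((1 + C_sF) * (1 + C_mF) - C2 * C3 * d**2) / den

Z_T = h11 - h12 * h21 * Z_E / (1 + h22 * Z_E)   # two-port closed with Z_E
print(abs(Z_T - Z_E) < 1e-9)
```

The check is purely algebraic: substituting Eq. (7.76) cancels the numerators of h11 and h22 exactly, independently of the particular impedance values chosen.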

The most common control architecture is the forward-flow architecture [21], also known as force feedback or position-force architecture [35], which uses the two channels *C*1 and *C*2; *C*3 and *C*4 are set to zero. The position respectively velocity *v*H at the master manipulator is transmitted to the slave. The slave manipulator feeds back the contact forces between manipulator and environment *F*E. Due to the uncompensated impedances of master and slave devices, perfect transparency is not achievable by telemanipulators built in the basic forward-flow architecture. This architecture has been described and analyzed by many authors [8, 9, 21, 22, 25, 35].

## *7.5.4 Stability Analysis of Teleoperators*

Besides the general stability analysis for dynamic systems from Sect. 7.2, several approaches for the stability analysis of haptic devices have been published. Most of them use the two-port representation introduced in Sect. 7.5.1 for stability analysis and controller design and were derived from classical network theory and communications technology. The subsequent section gives an introduction to the most important of them and also presents methods to guarantee stability of the system under time delay.

#### **7.5.4.1 Passivity**

The concept of passivity for dynamic systems has been introduced in Sect. 7.2.2. Within this subsection the focus is on the application of this concept to the stability analysis of haptic devices. Assume the two-port representation of a telemanipulator as presented in Fig. 7.40. Furthermore, it shall be assumed that the energy stored in the system at time *t* = 0 is *V*(*t* = 0) = 0. The power *P*in at the input of the system at a time *t* is given by the product of the force *F*H(*t*) applied by the user to the master and the master velocity *v*H(*t*).

$$P\_{\rm in} = F\_{\rm H}(t) \cdot \nu\_{\rm H}(t)$$

Accordingly the power *P*out at the output of the telemanipulator is given by the contact force of the slave *F*E(*t*) manipulating the environment times the velocity of the slave *v*E(*t*)

$$P\_{\rm out} = F\_{\rm E}(t) \cdot \nu\_{\rm E}(t)$$

Thus the telemanipulator is passive and therefore stable as long as the following inequality is fulfilled.

$$\int\_0^t \left(P\_{\rm in}(\tau) - P\_{\rm out}(\tau)\right)\mathrm{d}\tau = \int\_0^t \left(F\_{\rm H}(\tau)\cdot v\_{\rm H}(\tau) - F\_{\rm E}(\tau)\cdot v\_{\rm E}(\tau)\right)\mathrm{d}\tau \ge V(t) \tag{7.77}$$

Alternatively the criterion can be expressed in the form of the time derivative of Eq. (7.77)

$$F\_{\rm H}(t)\cdot v\_{\rm H}(t) - F\_{\rm E}(t)\cdot v\_{\rm E}(t) \ge \dot{V}(t) \tag{7.78}$$

From Eq. (7.77) and Eq. (7.78), respectively, it can be seen that the telemanipulator must not generate energy in order to be passive. Thus a very easy method to obtain a stable telemanipulator system is to implement higher damping, but this decreases the performance of the system.

In the frequency domain, passivity of the system can be analyzed using the immittance matrix of the transfer function [8, 9, 13–15, 40, 42, 43]. A system is passive and hence inherently stable if the immittance matrix *G*(*s*) of the n-port network is positive real. The criteria for positive realness of the immittance matrix, which have to be satisfied, are [7, 29]:


User and environment can be seen as passive [33]. Therefore, if passivity of the telemanipulator system can be proven, the whole closed loop of user, telemanipulator and environment is guaranteed to be passive and hence stable. It has been shown that a robust (passive) control law and transparency are conflicting objectives in the design of telemanipulators [35]. In many cases the haptic sensation presented to the user can be poor if a fixed damping value is used to guarantee passivity of the telemanipulator. Thus a newer approach retains a passivity-based control law while improving performance by implementing a passivity observer and a passivity controller. The passivity controller increases the damping of the system only when needed to guarantee stability. A further benefit of this concept is that no parameter estimation for the dynamic model of the telemanipulator has to be done and, if considered, uncertainties can be compensated [23, 44].
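The passivity-observer idea can be sketched in discrete time: the net energy of Eq. (7.77) is accumulated sample by sample, and the moment it becomes negative marks where a passivity controller would inject damping. The signals below are synthetic illustration data, with the output port deliberately made slightly active after *t* = 1 s.

```python
# Sketch of a passivity observer in discrete time: it accumulates the
# net energy flowing into the two-port and flags the sample at which
# the device starts generating energy (E_obs < 0). The signals are
# synthetic illustration data, not measurements.
import math

dt = 1e-3
E_obs, violation_at = 0.0, None
for k in range(2000):
    t = k * dt
    F_H, v_H = math.sin(5 * t), 0.3 * math.sin(5 * t)        # input port
    # output port: slightly "active" after t = 1 s (models, e.g., delay)
    gain = 1.0 if t < 1.0 else 1.4
    F_E, v_E = gain * math.sin(5 * t), 0.3 * math.sin(5 * t)
    E_obs += (F_H * v_H - F_E * v_E) * dt   # discretized Eq. (7.77)
    if E_obs < 0.0 and violation_at is None:
        violation_at = t   # a passivity controller would add damping here

# a violation is detected shortly after the output port turns active
print(violation_at is not None and violation_at > 0.99)
```

Because the observer only reacts when the energy balance is actually violated, the added damping stays at zero during passive operation, which is exactly the performance advantage over a fixed damping value.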

#### **7.5.4.2 Absolute Stability Criterion (Llewellyn)**

A stability criterion for linear two-ports has been derived by Llewellyn [12, 29, 36]. His motivation was the investigation of generalized transmission lines and active networks. Later, several authors used the criteria formulated by Llewellyn to analyze the stability of telemanipulators or to design control laws for bilateral teleoperation [3–5, 25]. The criterion is formulated in the frequency domain and it is assumed that the two-port is linear and time-invariant, at least locally [2]. A linear two-port is absolutely stable if and only if there exists no set of passive terminations for which the system is unstable.

The following criteria provide both necessary and sufficient conditions for absolute stability for linear two-ports.



The conditions 1 and 2 guarantee passivity of the system when there is no coupling between master and slave. This case occurs, when master or slave are free or clamped. Condition 3 guarantees stability, if master and slave are coupled.

These criteria may be applied to every type of immittance matrix, thus the impedance matrix, admittance matrix, hybrid matrix or inverse hybrid matrix. If the criteria are fulfilled for one form of immittance matrix, they are fulfilled for the other three forms as well. A network for which *h*21 = −*h*12 holds, which is the same as *z*21 = *z*12, is said to be reciprocal. In this particular case the tests for passivity and unconditional stability are the same. A passive network will always be absolutely stable, but an absolutely stable network is not necessarily passive. A two-port which is not unconditionally stable is potentially unstable, but this does not mean that it is definitely unstable, as shown in Fig. 7.42.

## *7.5.5 Effects of Time Delay*

When master and slave are far apart, communication data have to be transmitted over a long distance with significant time delays, which can lead to instabilities unless the bandwidth of the signals entering the communication block is severely limited. The reason for this is a non-passive communication block [8]: energy is generated inside the communication block.

#### **7.5.5.1 Scattering Theory**

Anderson [8–10] used scattering theory to find a stable control law for bilateral teleoperation systems with time delay. Scattering variables are well known from transmission line theory. The scattering operator *S* maps effort plus flow into effort minus flow and is defined in terms of an incident wave *F*(*t*) + *v*(*t*) and a reflected wave *F*(*t*) − *v*(*t*).

$$F(t) - \nu(t) = S(t) \left( F(t) + \nu(t) \right)$$

For LTI systems *S* can be expressed in the frequency domain as follows:

$$F(s) - \nu(s) = S(s) \left( F(s) + \nu(s) \right)$$

In the case of a two-port the scattering matrix can be related to the hybrid matrix **H**(*s*) by loop transformation, which leads to:

$$\mathbf{S}(s) = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \left(\mathbf{H}(s) - \mathbf{I}\right) \left(\mathbf{H}(s) + \mathbf{I}\right)^{-1}$$

To ensure passivity of the system the reflected wave must not carry higher energy content than the incident wave. Therefore a system is passive if and only if the norm of its scattering operator *S*(*s*) is less than or equal to one [8].

$$\|S(s)\|\_{\infty} \leq 1$$
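The mapping from the hybrid matrix to the scattering matrix and the norm test can be checked numerically. The sketch below is our illustration (function name assumed); for a constant 2×2 hybrid matrix it forms **S** and returns its largest singular value, which must not exceed one for a passive two-port.

```python
import numpy as np

def scattering_norm(H):
    """Largest singular value of the scattering matrix obtained from a
    2x2 hybrid matrix H via S = diag(1,-1) (H - I)(H + I)^{-1}.
    The two-port is passive iff this value is <= 1 at every frequency."""
    I = np.eye(2)
    S = np.diag([1.0, -1.0]) @ (H - I) @ np.linalg.inv(H + I)
    return np.linalg.svd(S, compute_uv=False)[0]
```

For the ideal (lossless) teleoperator hybrid matrix H = [[0, 1], [−1, 0]] the norm is exactly one, i.e. the system sits on the passivity boundary; a dissipative two-port yields a value below one.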

#### **7.5.5.2 Wave Variables**

Wave variables were used by Niemeyer [40, 42] to design a robust control strategy for bilateral telemanipulation with time delay. The approach separates the total power flow into two parts: one representing the power flowing into the system and the other representing the power flowing out of the system. These two parts are then associated with input and output waves. This approach is also valid for non-linear systems. Assume the two-port shown in Fig. 7.43 using *x*˙*<sup>m</sup>* and *F*<sup>e</sup> as inputs.

Therefore the power flow through the two-port can be written as

$$P(t) = \dot{\boldsymbol{x}}\_M^T \boldsymbol{F}\_M - \dot{\boldsymbol{x}}\_S^T \boldsymbol{F}\_S = \frac{1}{2} \boldsymbol{u}\_M^T \boldsymbol{u}\_M - \frac{1}{2} \boldsymbol{v}\_M^T \boldsymbol{v}\_M + \frac{1}{2} \boldsymbol{u}\_S^T \boldsymbol{u}\_S - \frac{1}{2} \boldsymbol{v}\_S^T \boldsymbol{v}\_S.$$

**Fig. 7.43** Wave based teleoperator model

#### 7 Control of Haptic Systems 257

Here the vectors **u***<sup>M</sup>* and **u***<sup>S</sup>* are input waves, which increase the power flow into the system. Analogously, **v***<sup>M</sup>* and **v***<sup>S</sup>* are output waves, which decrease the power flow into the system. Note that the velocity is denoted here as *x*˙. The transformation from the power variables to the wave variables is described by

$$\begin{aligned} \mu\_M &= \frac{1}{\sqrt{2b}} (F\_M + b\dot{x}\_M) \\ \mu\_S &= \frac{1}{\sqrt{2b}} (F\_S - b\dot{x}\_S) \\ \nu\_M &= \frac{1}{\sqrt{2b}} (F\_M - b\dot{x}\_M) \\ \nu\_S &= \frac{1}{\sqrt{2b}} (F\_S + b\dot{x}\_S) \end{aligned}$$

The wave impedance *b* relates velocity to force and represents an opportunity to tune the behavior of the system. Large values of *b* lead to increased force feedback at the cost of high inertial forces. Small values of *b* lower unwanted sensations, so fast movement is possible, but they also weaken the impression of contact forces between slave and environment [41]. The wave transformation can be inverted to provide the power variables as a function of the wave variables.

$$\begin{aligned} F\_M &= \sqrt{\frac{b}{2}} (\boldsymbol{u}\_M + \boldsymbol{\nu}\_M) \\ F\_S &= \sqrt{\frac{b}{2}} (\boldsymbol{u}\_S + \boldsymbol{\nu}\_S) \\ \dot{\boldsymbol{x}}\_M &= \frac{1}{\sqrt{2b}} (\boldsymbol{u}\_M - \boldsymbol{\nu}\_M) \\ \dot{\boldsymbol{x}}\_S &= \frac{1}{\sqrt{2b}} (\boldsymbol{\nu}\_S - \boldsymbol{u}\_S) \end{aligned}$$

By transmitting the wave variables instead of the power variables, the system remains stable even if the time delay *T* is not known [40]. Note that when the actual time delay *T* is reduced to zero, transmitting wave variables is identical to transmitting velocity and force.
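The scalar wave transformation can be sketched directly from the equations above. The encoder below is our illustration (function name and the fixed wave impedance *b* = 1 are assumptions); it lets one verify numerically that the power identity between power variables and wave variables holds.

```python
import numpy as np

b = 1.0  # wave impedance (illustrative value)

def to_waves(F, xdot, side):
    """Encode power variables (force, velocity) into wave variables.

    Master side: u_M = (F + b*xdot)/sqrt(2b), v_M = (F - b*xdot)/sqrt(2b)
    Slave side:  u_S = (F - b*xdot)/sqrt(2b), v_S = (F + b*xdot)/sqrt(2b)
    """
    if side == "master":
        u = (F + b * xdot) / np.sqrt(2 * b)
        v = (F - b * xdot) / np.sqrt(2 * b)
    else:  # slave
        u = (F - b * xdot) / np.sqrt(2 * b)
        v = (F + b * xdot) / np.sqrt(2 * b)
    return u, v

# Power identity (scalar case):
#   F_M*xdot_M - F_S*xdot_S = 0.5*(u_M^2 - v_M^2) + 0.5*(u_S^2 - v_S^2)
```

In a delayed communication channel one would transmit *u* and *v* instead of force and velocity; the identity guarantees that a pure delay cannot generate energy.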

## **7.6 Control of Rehabilitation Robots**

In this section, some control strategies are explained briefly while avoiding mathematical formulations. A rehabilitation robot needs to fulfill two requirements to be effective and comfortable. First, high-accuracy trajectory tracking is needed to precisely follow the trajectory predefined by the physiotherapist. Second, harsh interaction forces or torques during therapy must be avoided: since the patient usually is not able to control her/his muscles, unpredicted movements occur. Therefore, the robot must suppress these undesired interactions so that the patient does not experience any harsh force or torque. Some of the strategies to meet these requirements are discussed in the following sections.

## *7.6.1 Control Strategies*

The first controller choice is the well-known PID controller due to its simple structure and tuning rules. However, due to the highly nonlinear characteristics of rehabilitation robots, PID, fuzzy-PID, or adaptive-PID controllers result in significant undesired overshoot and response delay. Overshoot increases the discomfort of the patient and, if it is too large, can cause harm. Therefore, a highly robust and stable control structure such as sliding mode control (SMC) is needed. Many variations of SMC are used in this field, such as adaptive SMC, terminal SMC, and super-twisting nonsingular terminal SMC. The main drawback of SMC is the chattering phenomenon caused by the signum function and the high-frequency switching when the system reaches the sliding surface.
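The origin of chattering can be illustrated with the switching term alone. The sketch below (our illustration, not from the cited works) contrasts the discontinuous sign-based term with a common textbook remedy, a boundary-layer saturation; the gain η and layer width φ are assumed values.

```python
import numpy as np

def smc_switching(s, eta=1.0, phi=0.05):
    """Discontinuous SMC switching term vs. boundary-layer version.

    sign(s) flips abruptly near s = 0 and causes chattering; saturating
    s inside a boundary layer of width phi smooths the control at the
    cost of a small steady-state tracking error.
    """
    u_sign = -eta * np.sign(s)                    # discontinuous term
    u_sat = -eta * np.clip(s / phi, -1.0, 1.0)    # boundary-layer term
    return u_sign, u_sat
```

Close to the sliding surface (small *s*) the saturated term is proportional to *s*, while the sign term still commands full authority, which is exactly what excites chattering.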

In [6], a super-twisting nonsingular terminal sliding mode control (ST-NTSMC) is designed to guarantee the predefined trajectory tracking accuracy of a knee and ankle rehabilitation robot (KARR). As mentioned previously, the super-twisting algorithm eliminates the chattering of SMC while keeping the tracking accuracy. The nonsingular terminal SMC is used to enhance the convergence speed and steady-state tracking of linear SMC without singularity. In rehabilitation, the goal is to track the joint trajectory predefined by the physiotherapist while considering the patient's condition: post-stroke patients, for example, may move their muscles involuntarily and exert torques on the robot, which are undesirable and could result in an uncomfortable situation or even worsen the patient's condition. Using admittance control before the ST-NTSMC can suppress this problem. As depicted in Fig. 7.44, instead of feeding the reference trajectory directly to the ST-NTSMC loop, a modified trajectory is used as the input of the SMC loop. This modification is done by measuring the interaction torque and applying it to a dynamic model to calculate the resulting change of the trajectory using Eq. (7.79).

$$M\ddot{\tilde{x}} + C\dot{\tilde{x}} + K\tilde{x} = \tau\_{int} \tag{7.79}$$

where *x*˜ = *xr* − *xm*, *xr* is the predefined trajectory, and *xm* is the modified smooth trajectory that the ST-NTSMC will follow. The parameters of this dynamic model define the smoothness of the trajectory change. As a result, the predefined trajectory is adjusted in the direction of the interaction torque to eliminate uncomfortable forces/torques.

As a result, the system allows deviation from the reference trajectory when an undesired interaction torque occurs, while accurately tracking the predefined trajectory when there is no interaction torque.
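The admittance-based trajectory modification of Eq. (7.79) can be sketched with a simple explicit integration. The function name, the scalar case, and the parameter values M, C, K are our illustrative assumptions; [6] tunes these to shape the smoothness of the deviation.

```python
import numpy as np

def modify_trajectory(x_ref, tau_int, dt, M=1.0, C=10.0, K=50.0):
    """Admittance-style trajectory modification (sketch of Eq. (7.79)).

    Integrates M*x~'' + C*x~' + K*x~ = tau_int with semi-implicit Euler
    and returns the modified trajectory x_m = x_r - x~ that the inner
    SMC loop would track. Parameter values are illustrative only.
    """
    x_t, xd_t = 0.0, 0.0              # x~ and its time derivative
    x_mod = np.empty_like(x_ref)
    for i, (xr, tau) in enumerate(zip(x_ref, tau_int)):
        xdd_t = (tau - C * xd_t - K * x_t) / M
        xd_t += xdd_t * dt
        x_t += xd_t * dt
        x_mod[i] = xr - x_t
    return x_mod
```

With zero interaction torque the reference passes through unchanged; a sustained torque smoothly deflects the commanded trajectory, which is the desired compliant behavior.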

**Fig. 7.44** The control structure of the rehabilitation robot [6]

**Fig. 7.45** Fuzzy sliding mode controller structure [39]

In [39], a fuzzy SMC is used for a hand rehabilitation robot. In this structure, the fuzzy controller is utilized to reduce the chattering of the SMC. The inputs of the fuzzy controller are *S* and *S*˙ (sliding surface and its time derivative), and the output (*u<sub>fa</sub>*) is a control signal that compensates for the abrupt variation of the SMC's control signal due to the sign function and returns the sliding variables to the desired surface (Fig. 7.45). Experimental results show that the average chattering of the fuzzy SMC is about 25% of that of the original SMC.

To overcome the variation of the interaction force during therapy and create a smooth trajectory tracking performance, an adaptive law is proposed in [1] to estimate the interaction force (Fig. 7.46). The adaptive law is derived such that it fulfills the Lyapunov stability criterion and is a function of *S* and the robot's physical characteristics.

These are some examples of the control strategies used in rehabilitation robots, illustrating the importance of tracking accuracy and of the smoothness of the interaction force or torque. The latter is the more important and should be considered in the controller design.

## *7.6.2 Friction and Backlash Compensation*

Practical systems are not ideal and exhibit friction (viscous and/or Coulomb). In addition, depending on the mechanical design and transmission mechanism, they could

**Fig. 7.46** Adaptive fuzzy sliding mode controller structure [1] © Springer Nature, all rights reserved

experience backlash as well. As discussed previously, backlash and Coulomb friction are hard nonlinearities and should be taken into consideration and suppressed during the controller design.

In [6], the Coulomb friction is considered and modelled as:

$$F(\dot{\theta}) = C\dot{\theta} + F\_f \text{sign}(\dot{\theta})\tag{7.80}$$

where *C* is the viscous friction coefficient and *F<sub>f</sub>* is the Coulomb friction. Furthermore, since precise modeling of a nonlinear system is not practical, this model is considered the nominal model and the total friction is expressed as:

$$F(\dot{\theta}) = C\dot{\theta} + F\_f \text{sign}(\dot{\theta}) + \Delta F(\dot{\theta}) \tag{7.81}$$

where Δ*F*(θ˙ ) is the uncertainty of the friction model. Using a robust controller such as ST-NTSMC (or SMC in general), the system performs robustly with high tracking accuracy. It is important to mention that the accuracy of the nominal model directly affects the performance of the system.
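The nominal friction model of Eq. (7.80) is a one-liner; the sketch below uses illustrative parameter values (our assumption), with the unmodelled remainder Δ*F* left to the robust controller.

```python
import numpy as np

def friction_torque(theta_dot, C=0.1, F_f=0.5):
    """Nominal friction model of Eq. (7.80): viscous + Coulomb term.

    C   - viscous friction coefficient (illustrative value)
    F_f - Coulomb friction level (illustrative value)
    The unmodelled remainder dF of Eq. (7.81) is not computed here;
    a robust controller is expected to reject it.
    """
    return C * theta_dot + F_f * np.sign(theta_dot)
```

A feedforward compensator would simply add this value to the commanded actuator torque, sign-matched to the measured joint velocity.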

Considering backlash, the situation is worse, since the backlash nonlinearity depends not only on the current state but also on past states. In [16, 17] a cable mechanism is used as the motion transmission mechanism. The mechanism is a so-called Bowden-cable transmission, where the input-output (φ*in* − φ*out*) relation can be expressed as:

$$\dot{\phi}\_{out} = \begin{cases} c\_1 \dot{\phi}\_{in} & \dot{\phi}\_{in} > 0 \\ c\_2 \dot{\phi}\_{in} & \dot{\phi}\_{in} < 0 \end{cases} \tag{7.82}$$


**Fig. 7.47** Input-output relation of Bowden-cable transmission

or

$$\phi\_{out} = \begin{cases} c\_1(\phi\_{in} - B\_1) & \dot{\phi}\_{in} > 0 \\ c\_2(\phi\_{in} + B\_2) & \dot{\phi}\_{in} < 0 \end{cases} \tag{7.83}$$

Figure 7.47 depicts an example of the input-output relation of this mechanism and illustrates the parameters of Eq. (7.83).

The nonlinear relation (7.83) is then simplified to:

$$
\phi\_{out} = \alpha\_{\phi} \phi\_{in} + D \tag{7.84}
$$

where αφ > 0 is the slope of the backlash hysteresis and the dead-zone is treated as the model uncertainty *D*. An adaptive controller is therefore designed to estimate αφ and *D*, taking the tracking error (φ*in* − φ*out*) as input. Experimental results show that, whether the backlash configuration is constant or variable (due to the flexibility of the sheaths), the adaptive compensation significantly enhances the tracking accuracy and reduces its error by a factor of five.
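Estimating the two parameters of the simplified model (7.84) from data can be sketched with a batch least-squares fit. This is our illustration only: [16, 17] use an online adaptive law rather than a batch fit, and the function name is assumed.

```python
import numpy as np

def estimate_backlash(phi_in, phi_out):
    """Least-squares fit of the simplified backlash model of Eq. (7.84),
    phi_out = alpha * phi_in + D, from measured input/output angles.
    (Batch fit for illustration; an online adaptive law would update
    alpha and D recursively from the tracking error.)
    """
    A = np.column_stack([phi_in, np.ones_like(phi_in)])
    (alpha, D), *_ = np.linalg.lstsq(A, phi_out, rcond=None)
    return alpha, D
```

The estimated slope and offset can then be inverted in the feedforward path: commanding φ*in* = (φ*des* − *D*)/α pre-compensates the transmission.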

Compensating the backlash of this cable transmission mechanism allows placing the actuator(s) away from the joint(s), which reduces the inertia of the rehabilitation robot.

In a nutshell, a robust controller such as SMC is needed for trajectory tracking. Moreover, a control strategy such as adaptive control or admittance control is required to allow the system to perform smoothly in the case of undesired interaction forces/torques from the patient, which are common. In addition, depending on the mechanical design, friction and/or backlash should be considered and compensated effectively to ensure accurate and smooth tracking. Finally, since there are uncertainties in the environment and the patient interaction, adaptive control, if designed properly, can significantly enhance the performance of the system.

## **7.7 Conclusion**

The control design for haptic devices confronts the developing engineer with a complex, manifold challenge. Given the fundamental requirement to establish a safe, reliable, and deterministic influence on all structures, subsystems, or processes the haptic system is composed of, an analytical approach to control system design is indispensable. It provides a wide variety of methods and techniques to cover the many issues that arise during this design process. This chapter intends to introduce the fundamental theoretical background. It shows several tasks, functions, and aspects the developer will have to focus on, as well as certain methods and techniques that are useful tools for system analysis and the process of control design.

Starting with an abstracted view of the overall system, the control design process is based on an investigation and mathematical formulation of the system's behavior. To achieve this, a wide variety of methods exists for system description, depending on the degree of complexity. Besides methods for the description of linear or linearized systems, this chapter introduced techniques to represent nonlinear system behavior. Furthermore, the analysis of multi-input multi-output systems is based on the state-space description, which was presented here, too. All of these techniques aim, on the one hand, at a mathematical representation of the analyzed system that is as exact as possible; on the other hand, they need to yield a system description to which further control design procedures are applicable. These two requirements lead to a tradeoff between establishing an exact system formulation and keeping the effort for analysis and control design reasonable.

Within the system analysis of haptic systems, the overall system stability is the most important aspect; it has to be guaranteed and proven to be robust against model uncertainties. The compendium of methods for stability analysis contains techniques that are applicable to linear or nonlinear system behavior, corresponding to their underlying principles, which of course limit their usability. The more complex the mathematical formulation of the system becomes, the higher the effort for system analysis gets. This conflicts directly with the fact that a stability analysis based on a simplified system description can only provide a proof of stability for this simplified model of the real system. Therefore, the impact of all simplifying assumptions must be evaluated to guarantee the robustness of the system stability.

The actual objective in establishing a control scheme for haptic systems is the final design of controllers and control structures that have to be implemented in the system at various levels to perform various functions. Besides the design of applicable controllers or control structures, the optimization of adjustable parameters is also part of this design process. As shown in many examples in the literature on control design, a comprehensive collection of control design techniques and optimization methods exists that enables the developer to cover the emerging challenges and satisfy various requirements within the development of haptic systems as far as automatic control is concerned.

## **Recommended Background Reading**

[25] Hashtrudi-Zaad, K. & Salcudean, S.: **Analysis of Control Architectures for Teleoperation Systems with Impedance/Admittance Master and Slave Manipulators**. In: The International Journal of Robotics Research, SAGE Publications, 2001.

*Thorough analysis of different control schemes for impedance and admittance type systems.*


*Description and Design of a minimal invasive surgical robot with haptic feedback including an analysis of stability issues and the effect of time delay.*

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 8 Kinematic Design**

**Fady Youssef and Sebastian Kassner**

**Abstract** One aspect of haptic devices is the design of the kinematics. The kinematics of a mechanism is the key to accomplishing the design goals, like transmitting dynamic feedback in the form of forces or torques, or allowing a sufficient workspace for the user to interact with the environment. This chapter introduces the steps of kinematic design and consists of five main sections. The first section gives an overview of some basic definitions and the main types of mechanisms. In the second section, the first design step, defining the structure of the mechanism, is introduced and accompanied by an example. After choosing the most applicable structure for the desired application, the second step takes place, in which the kinematic equations are solved. These equations describe the relation between the operating point of the mechanism and the base at any point in time. Different approaches are used to solve these equations depending on the type of mechanism. The third and final step of the design process is introduced in the fourth section. It contains the optimization of the mechanism in order to achieve the desired operation. Last but not least, the importance of modeling and simulation is discussed in the last section.

## **8.1 Introduction**

The introduction to the topic of kinematic design begins with mentioning the major goals behind the kinematic design. Then, some basic definitions are introduced, followed by a classification of the mechanisms used in haptic interfaces. Finally an introduction to the design steps is given.

F. Youssef (B)
Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany
e-mail: f.youssef@tuhh.de

S. Kassner
Knorr-Bremse Systeme für Schienenfahrzeuge GmbH, Moosacher Str. 80, 80809 München, Germany
e-mail: sebastian.kassner@kassner-net.de

© The Author(s) 2023

T. A. Kern et al. (eds.), *Engineering Haptic Devices*, Springer Series on Touch and Haptic Systems, https://doi.org/10.1007/978-3-031-04536-3\_8

## *8.1.1 Major Design Goals*

Kinematics plays a major role in haptic devices: it is the mechanical interface between the user and the environment. Different design goals exist depending on the application. In some applications, the goal of the design is to transmit dynamic feedback, e.g. forces or torques, to the user. Other applications require a sufficient workspace to ensure the interaction between the user and the environment.

## *8.1.2 Basic Definitions*

In the area of haptic interfaces, some definitions are valid regardless of the type of mechanism used.

## **Kinematics**

Kinematics is the branch of mechanics that studies the motion of points and bodies in space in terms of position, velocity, and acceleration without taking into consideration the cause of this motion, e.g. forces or torques.

## **Dynamics**

Dynamics, sometimes referred to as kinetics, is the branch of mechanics that studies motion in space with the cause of the motion taken into account.

## **Degree of Freedom**

Degrees of freedom (DoF) are the number of independent motions a body or a mechanism is able to carry out. A free body has 3 DoF in 2D (two translations and one rotation), while in 3D it has 6 DoF (three translations and three rotations). Figure 8.1 shows the possible independent motions a body can perform in space.

## **Joints**

Joints are used to connect two or more bodies together. Depending on the type of the joint used, the number of allowed relative DoF between the connected bodies is defined. The commonly used joints allow either 1,2 or 3 DoF. Figure 8.2 shows several types of joints.

**Fig. 8.1** Independent motions of a free body in space. 3 translations and 3 rotations along and about the 3 axes (*x*, *y*,*z*)

**Fig. 8.2** Different types of joints. 1 DoF: Revolute (R), Prismatic (P), Helical (H). 2 DoF: Planar (E), Cylindrical (C), Universal (U). 3 DoF: Spherical (S)

## **Active Joints**

These are the actuated joints in a mechanism.

## **Passive Joints**

These are the non-actuated joints in a mechanism.

## **Base**

The base is the reference platform of the mechanism. All the calculations are performed relative to this platform. Positions, velocities, and accelerations of any point on the mechanism are given with respect to the base.

## **Tool Center Point**

The TCP is the point where the user, or the environment, interacts with the mechanism. Usually it is the end-point of the mechanism.

## **Workspace**

The workspace is the set of all positions in space the TCP can reach.

## **Singularity**

A singularity is a position in the mechanism's workspace where the mechanism loses control of one or more DoF. This manifests itself in non-solvable kinematic equations. Singularities are further discussed in Sect. 8.4.2.

## **Translational Parallel Machine**

A TPM is a mechanism whose TCP is only allowed to move translationally in Cartesian space (*x*, *y*,*z*).

## *8.1.3 Classification of Mechanisms*

There are three main configurations of mechanisms used in haptic interfaces: serial, parallel, and hybrid. This chapter will focus on serial and parallel mechanisms. Figures 8.3 and 8.4 show the different configurations.

## **Serial Mechanisms**

Serial mechanisms are open kinematic chains; in other words, there is only one path from the base platform to the TCP. A typical serial mechanism consists of only active joints connected by rods (links). Usually one-DoF joints are used. The number of intended DoF of the TCP defines the number of joints in the mechanism. Figure 8.5 shows the UR10e from the company *Universal Robots*. The robot has 6 active revolute joints. As can be seen from the figure, there is only one path from the base to the TCP. The joints are connected in series. The advantages of serial mechanisms are their simple design, their relatively large workspace, and their relatively easy control, especially in positioning tasks. On the other hand, the major disadvantage of serial mechanisms is that each actuator carries the load of all subsequent actuators. In

**Fig. 8.5** Example of Serial mechanism, UR10e from the company *Universal Robots*

other words, each actuator has to overcome its own inertia, the inertia of all subsequent actuators in the chain, and finally the load acting on the TCP. This drawback affects the dynamic behaviour of serial mechanisms and results in an overall low structural stiffness with respect to the mechanism's own weight.

## **Parallel Mechanisms**

Parallel mechanisms are closed-chain mechanisms: there are at least two paths from the base platform to the TCP. A typical parallel mechanism consists of both active and passive joints. The number of active joints is defined by the intended DoF of the TCP. The most famous parallel mechanism is the Stewart-Gough platform (Fig. 8.6). Another example of a parallel mechanism, the Omega6 haptic device from the company *Force Dimension*, is shown in Fig. 8.7. The Omega6 is a pen-shaped force-feedback device. From the figure, one can see that there are three paths

from the base to the TCP. The main advantage of parallel mechanisms is that the load on the TCP is distributed over multiple kinematic chains, which leads to a lightweight design with high structural stiffness. The same advantage leads to a highly transparent transmission of haptic feedback, which is the reason why this configuration is of great importance in haptic interfaces. The main disadvantages are the small workspace and the relative complexity of solving the kinematic equations compared to serial mechanisms, in addition to singular positions, which are discussed later in Sect. 8.4.2.

## **Hybrid Mechanisms**

Hybrid mechanisms are a combination of serial and parallel mechanisms; they contain both open and closed chains. The best-known example of a hybrid mechanism is the Tricept (Fig. 8.8). It is composed of a parallel part that creates the translation in the workspace, followed by a serial part that creates the orientation of the TCP in space [19]. Decoupling the serial part from the parallel part simplifies the design of the mechanism. Hybrid mechanisms stand in between serial and parallel mechanisms in terms of advantages and disadvantages; they can have a lightweight design compared to pure serial mechanisms and a larger workspace compared to pure parallel mechanisms. In this chapter we will focus on pure serial and pure parallel mechanisms.

## *8.1.4 Design Steps*

Designing the kinematic mechanism of haptic interfaces passes through three steps:

• Defining the structure (Sect. 8.2): In this step, the type of mechanism and the appropriate number of DoF of the joints are defined, based on the application.

**Fig. 8.8** Tricept T606 parallel kinematic, © 2022 *PKMtricept*, used with permission


Whether a serial, a parallel, or a hybrid mechanism is suitable for the design of a haptic interface should be decided on a case-by-case basis. All are used in haptic applications.

## **8.2 Design Step 1: Defining the Mechanism's Structure**

The first step in designing the mechanism is the definition of the structure. It leads to the basic configuration of joints, rods, and actuators. Since the basic structure of the haptic interface is defined in this step, the topological synthesis has to be carried out very thoroughly.

The topological synthesis should be based on an analysis of the specific task. At least the following issues should be addressed:


The analysis of these requirements lays the foundation for the design of an easy-touse and ergonomic haptic interface which will be accepted by the user.

## *8.2.1 Synthesis of Serial Mechanisms*

A serial mechanism is nothing more than a sequence of rods and actuators, whereby the actuators can be regarded as driven (active) joints. Whether the actuators are linear or rotary is of no importance for the complexity of the kinematic problem. For the workspace and the orientation of the TCP, however, this aspect is of highest importance.

A widely used design in serial kinematics is to split the joints into two groups: the first group is responsible for the translation of the TCP, and the second group is responsible for the orientation of the TCP. In Fig. 8.5, one can see that the base, shoulder, and elbow joints are mainly responsible for the translation, while the three joints of the wrist are responsible for the orientation.

If it is not intended to generate a torque as output to the user, the handle attached to such a serial mechanism has to be equipped with a passive universal joint. Such a realisation as a haptic device can be found in Fig. 8.9, where the torques are decoupled from the hand. The handle does not have to be placed exactly at the TCP, as the moments are eliminated by the passive joints. Force vectors can be displaced arbitrarily within space. As a result, the hand experiences the same forces as the TCP.

As human beings are equipped with many serial kinematic chains (e.g. arms, legs), the working area of a serial kinematic chain can be understood intuitively. This makes it simple to design a corresponding haptic control unit. This is however not the only criterion and will be further addressed in Sect. 8.4. The design can be done geometrically "with compass and ruler"; however, the following should be considered:


## *8.2.2 Synthesis of Parallel Mechanisms*

The synthesis of a parallel mechanism in general is a less intuitive process than the synthesis of a serial mechanism.

Since a parallel structure comprises several kinematic chains, the first step is to determine the required number of kinematic chains with respect to the desired degrees of freedom of the mechanism. This can be done using the ratio of the number of chains *k* and the degrees of freedom *F* of the mechanism, leading to the degree of parallelism [7].

$$P\_{\rm g} = \frac{k}{F} \tag{8.1}$$

A mechanism is considered fully parallel (the most common case) if *Pg* = 1. A partially parallel mechanism has *Pg* < 1, while a highly parallel mechanism has *Pg* > 1. This means that for a fully parallel mechanism, the number of chains (legs) is equal to the desired number of DoF of the mechanism.

As mentioned earlier, parallel mechanisms consist of both active and passive joints. The relation between the joints (active and passive) and the mechanism's DoF is given by the Gruebler-Kutzbach-Chebycheff mobility criterion:

$$F = \lambda \cdot (n - g - 1) + \sum\_{i=1}^{g} f\_i - f\_{id} + s \tag{8.2}$$

where:

- λ DoF of the operating space (λ = 6 for spatial, λ = 3 for planar mechanisms)
- *n* Number of links (elements), including the base
- *g* Number of joints
- *fi* DoF of joint *i*
- *fid* Sum of identical DoF
- *s* Sum of constraints

An identical DoF is given, for example, when a rod has universal joints at both of its ends. The rod is able to rotate around its own axis without violating any constraints. Another example are two coaxially oriented linear joints.

Constraints appear whenever conditions have to be fulfilled to enable the movement. If five joint axes have to be parallel to a (sixth) axis to enable a movement, then s = 5. Another example of such a constraint are two driving rods which have to be placed in parallel to enable a motion.

At this stage of the design, Eq. (8.2) cannot be applied directly, as *n* and *g* are not known yet. A correlation between the number of chains (legs) *k*, joints *g*, and elements *n* is given by:

$$n = g - k + 2 \tag{8.3}$$

Assuming a spatial mechanism (λ = 6) with no identical DoF and no constraints, substituting Eq. (8.3) into Eq. (8.2) yields the total number of joint DoF to be distributed:

$$\sum\_{i=1}^{g} f\_i = F + 6 \cdot (k - 1) \tag{8.4}$$
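The mobility bookkeeping of Eqs. (8.2) and (8.4) can be sketched in code; function names are our illustration. For a DELTA-type RUU mechanism (3 legs, each with one revolute and two universal joints: g = 9 joints, n = g − k + 2 = 8 links), the criterion returns F = 3, and the joint DoF budget is 15.

```python
def gruebler_dof(n, g, f_joint, lam=6, f_id=0, s=0):
    """Gruebler-Kutzbach-Chebycheff mobility criterion (Eq. (8.2)).

    n       - number of links including the base
    g       - number of joints
    f_joint - list of joint DoF, one entry per joint
    lam     - DoF of the operating space (6 spatial, 3 planar)
    """
    return lam * (n - g - 1) + sum(f_joint) - f_id + s

def joint_dof_budget(F, k):
    """Total joint DoF to distribute over a spatial fully parallel
    mechanism with k legs and F mechanism DoF (Eq. (8.4))."""
    return F + 6 * (k - 1)
```

For the DELTA example: `joint_dof_budget(3, 3)` gives 15, and `gruebler_dof(8, 9, [1, 2, 2] * 3)` confirms F = 3.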

As a simple rule of thumb:


#### **8.2.2.1 Special Case: Parallel Mechanisms with Pure Translation Motion**

An important task of many haptic interfaces is displaying pure three-dimensional spatial sensation. An example is the interaction with a pen-like tool where only forces in (*x*, *y*,*z*) should be displayed to the user. A special class of 3-DoF parallel mechanisms, the TPM, is used for these applications. This is achieved by kinematic


chains which block one or more rotatory DoF of the TCP while being able to perform translational motion in all directions.

According to Carricato [2, 3] two restrictions have to be fulfilled to ensure a parallel kinematic mechanism with pure translational motion:


Neglecting over-determined configurations, this results in so-called T5-mechanisms, each comprising four or five rotatory joints. Each joint constrains the rotation of the TCP about one axis. More details can be found in [2, 3].

#### **8.2.2.2 Example: DELTA Mechanism**

One of the most common topologies for displaying spatial interaction is the parallel DELTA mechanism (Fig. 8.7). Due to its relevance in the field of haptic interfaces it is used here as the example for the topological synthesis. Let us assume the design goal of a parallel kinematic haptic interface for spatial interaction in (*x*, *y*, *z*). Thus a mechanism with three degrees of freedom is required. Using Eq. (8.1) for a fully parallel mechanism with *Pg* = 1 and *F* = 3 haptic degrees of freedom leads to a mechanism with *k* = 3 kinematic chains or legs.

In a second step we have to determine the required joint degrees of freedom using Gruebler's formula (Eq. (8.2)). This leads to a sum of $\sum_{i=1}^{g} f_i = 15$ joint degrees of freedom.

To obtain equal behavior in all spatial directions, it is natural to distribute the 15 joint degrees of freedom as five degrees in each leg. This leads to the topologies in Table 8.1. The topologies are denominated according to the joints in one leg, starting from the base of the mechanism to the TCP; e.g. a UUP mechanism consists of a universal joint, followed by another universal joint, and finally one prismatic joint. The selection of an appropriate topology can then be carried out by a systematic reduction of the 3-DoF topologies in Table 8.1. The reduction is based on the following criteria:



**Table 8.1** Topologies for 3-DoF mechanisms with 5 DoF in each leg

Taking into account the above-mentioned criteria, the remaining configurations are UPU, PUU, CUR, CRU, RUU and RUC (Fig. 8.10). Table 8.2 shows the eliminated topologies sorted by criterion. Looking carefully at the topologies in Fig. 8.10, one recognizes that only RUU and RUC have a rotatory joint attached to the base platform. These are thus the only two topologies that can reasonably be driven by a rotatory electrical motor. What makes the RUU (DELTA) mechanism special is that there are only joints with rotatory degrees of freedom within the kinematic chains. All forces and torques are converted into rotatory motion, and the mechanism cannot cant. DELTA mechanisms have singular positions within the workspace; this has to be considered when dimensioning the mechanism (Sect. 8.4). The RUU/DELTA was introduced in 1988 by Clavel [4]. Besides acting as a spatial haptic interface (Fig. 8.7), the mechanism is widely used in robotic applications (e.g. pick-and-place tasks). In devices with mainly kinaesthetic feedback, a mechanical mechanism is used to link the user and the feedback-generating actuators. Furthermore, the user's input commands are often given by moving a mechanical mechanism.

## **8.3 Design Step 2: Kinematic Equations**

The second step in designing a mechanism is finding the relation between the base and the TCP at any point in time. This is done by solving the kinematic equations. There are two main types of kinematic equations: forward kinematics and inverse kinematics. Before addressing them, some basic definitions should be introduced.

**Fig. 8.10** Possible TPM mechanisms


**Table 8.2** Eliminated topologies, sorted by the distribution of the 5 DoF in each leg

#### **Forward Kinematics**

Forward kinematics is defined as giving the joints' angles/positions *q* = (*q*1, *q*2, ..., *qn*) as input and calculating the pose (position and orientation) *p* = (*p*1, *p*2, ..., *pm*) of the TCP.

$$p = f(q)$$

In serial kinematics, the forward kinematics is usually solved analytically. For parallel mechanisms, on the other hand, the direct kinematic problem can in general only be solved numerically. However, there are exceptions, as will be seen later in this chapter.

An important application of the forward kinematic problem is the calculation of the input command in impedance-controlled devices.

#### **Inverse Kinematics**

Inverse kinematics is the opposite of forward kinematics: the pose of the TCP is given, and the joints' angles/positions are calculated.

$$q = f^{-1}(p)$$

Geometric, algebraic, and numerical methods are used to solve the inverse kinematics problem; the method chosen depends on the type of mechanism. Numerical methods can be applied to any type of mechanism. Inverse kinematics of parallel mechanisms is usually easier to calculate than that of serial mechanisms.

In admittance-controlled devices, inverse kinematics is used to calculate the required evasive movement in order to regulate a desired contact force between the user and the haptic interface.

#### **Coordinate Frames**

Coordinate frame *i* (Fig. 8.11), or simply frame *i*, is composed of an origin $O_i$ and three mutually orthogonal base vectors $(\hat{x}_i, \hat{y}_i, \hat{z}_i)$ and is fixed to a particular body [22]. The pose of each body (rod) in a mechanism is always expressed relative to another body. In other words, the pose can be expressed as the relation between two frames, each frame attached to one body. The pose consists of two parts: position and rotation (orientation). The two most important frames in a mechanism are the tool and base frames. The pose of the TCP, or of any frame inside the mechanism, is usually given relative to the base frame.

#### **Position Vector**

The position vector is the vector connecting the origins of two frames. The 3 × 1 position vector of frame *j* relative to frame *i* is given as:

$$\prescript{i}{}{p\_j} = \begin{bmatrix} \prescript{i}{}{p\_j}^x\\ \prescript{i}{}{p\_j}^y\\ \prescript{i}{}{p\_j}^z \end{bmatrix}$$

The components of this vector are the Cartesian coordinates of *Oj* in frame *i*. This gives the translation between the two origins.

#### **Rotation Matrix**

The orientation of frame *j* relative to frame *i* is expressed using a rotation matrix. A rotation matrix is a 3 × 3 matrix composed as follows:

$$\prescript{i}{}{R}_{j} = \begin{bmatrix} \hat{x}_{j} \cdot \hat{x}_{i} & \hat{y}_{j} \cdot \hat{x}_{i} & \hat{z}_{j} \cdot \hat{x}_{i} \\ \hat{x}_{j} \cdot \hat{y}_{i} & \hat{y}_{j} \cdot \hat{y}_{i} & \hat{z}_{j} \cdot \hat{y}_{i} \\ \hat{x}_{j} \cdot \hat{z}_{i} & \hat{y}_{j} \cdot \hat{z}_{i} & \hat{z}_{j} \cdot \hat{z}_{i} \end{bmatrix}$$

For example, a simple rotation of frame *j* around $\hat{z}_i$ by an angle θ (Fig. 8.12) gives the following rotation matrix:

$$\begin{aligned} \;^i R\_j = \begin{bmatrix} \cos \theta & -\sin \theta & 0\\ \sin \theta & \cos \theta & 0\\ 0 & 0 & 1 \end{bmatrix} \end{aligned} $$

The different representations of multiple rotations can be found in [22].
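The elementary rotation above is easy to verify numerically; a minimal sketch in Python/NumPy, with the function name chosen here for illustration:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix for a simple rotation about z_i by the angle theta
    (the matrix given above)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

R = rot_z(np.pi / 2)
# A rotation of 90 degrees about z maps the x axis onto the y axis,
# and every rotation matrix is orthonormal: R^T R = I, det(R) = 1.
print(R @ np.array([1.0, 0.0, 0.0]))
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```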

#### **Homogeneous Transformation Matrix**

Homogeneous transformations combine position vectors and rotation matrices in a compact notation. The homogeneous transformation matrix is a 4 × 4 matrix given as follows:

$${}^{i}T_{j} = \begin{bmatrix} {}^{i}R_{j} & {}^{i}p_{j} \\ \mathbf{0}^{T} & 1 \end{bmatrix}$$

The matrix ${}^{i}T_{j}$ transforms vectors from frame *j* to coordinate frame *i*. Its inverse ${}^{j}T_{i} = \left({}^{i}T_{j}\right)^{-1}$ transforms vectors from coordinate frame *i* to frame *j*.

Remark: homogeneous transformation matrices make it possible to express compound rotations and translations with a single matrix multiplication. This increases the clarity of an implementation and may be one reason why homogeneous coordinate transformations are widespread within robotics and even virtual reality programming.
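The assembly of ${}^{i}T_{j}$ from a rotation and a translation, and its structured inverse, can be sketched as follows; a minimal illustration, with function names chosen here, not taken from any particular robotics library:

```python
import numpy as np

def homogeneous(R, p):
    """Assemble the 4x4 matrix iTj from iRj and ipj."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def invert(T):
    """jTi = (iTj)^-1, exploiting the structure instead of a general inverse:
    the inverse rotation is R^T and the inverse translation is -R^T p."""
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p
    return Ti

# Rotation of 90 degrees about z combined with a translation (1, 2, 3)
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
T = homogeneous(Rz, np.array([1.0, 2.0, 3.0]))

v = np.array([1.0, 0.0, 0.0, 1.0])  # point in frame j, homogeneous coordinates
print(np.round(T @ v, 6))           # → [1. 3. 3. 1.], the point in frame i
print(np.allclose(invert(T) @ T, np.eye(4)))
```

Note how a single matrix multiplication applies rotation and translation at once, which is exactly the remark made above.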

## *8.3.1 Solving Kinematic Equations in Serial Mechanisms*

In order to solve the forward kinematic equations in serial mechanisms, the DH convention is used. This convention was introduced by Jacques Denavit and Richard Hartenberg in 1955.

## **Denavit-Hartenberg Convention**

In [5, 10, 22] the different variants of the DH convention, proximal and distal, are well differentiated. The convention is based on attaching frames to each link in the mechanism and performing two translations and two rotations to jump from one frame to the next. Regardless of the variant used, the common steps are as follows:


$${}^{base}T_{TCP} = {}^{base}T_{1} \; {}^{1}T_{2} \; \dots \; {}^{n-1}T_{n} \; {}^{n}T_{TCP} \tag{8.5}$$

${}^{base}T_{TCP}$ gives the pose of the TCP in the base frame. The matrix is a function of the active joints' values and the dimensions of the links. Substituting the actuator angles/positions gives the TCP pose resulting from the given set of values.

## **Inverse Kinematics in Serial Mechanisms**

There are multiple approaches to solving the inverse kinematics problem in serial mechanisms. Generally, the inverse kinematics problem is nonlinear. Several questions arise when solving it, such as whether a solution exists at all, or whether multiple solutions exist. The two main approaches are closed-form and numerical solutions.

Pieper [20] introduced an approach to solve the inverse kinematics in closed form for a six-DoF serial manipulator in which three consecutive joint axes intersect at one point.

## **8.3.1.1 Example: UR10E**

The UR10e (Fig. 8.5) is one example of a serial mechanism used in haptic interfaces. Its forward kinematics will be discussed in this part. The proximal variant (modified DH convention) is used in this analysis. In the proximal variant:


$${}^{i-1}T_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & a_{i-1} \\ \sin\theta_i \cos\alpha_{i-1} & \cos\theta_i \cos\alpha_{i-1} & -\sin\alpha_{i-1} & -d_i \sin\alpha_{i-1} \\ \sin\theta_i \sin\alpha_{i-1} & \cos\theta_i \sin\alpha_{i-1} & \cos\alpha_{i-1} & d_i \cos\alpha_{i-1} \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{8.6}$$

The robotic arm has six revolute DoF, so we have a total of seven frames and six rows in the DH table (Table 8.4). The frames are given in Fig. 8.13. The next step is formulating the six transformation matrices as follows:


**Table 8.3** Definitions of Modified DH parameters


**Table 8.4** DH Table of UR10e

**Fig. 8.13** Coordinate frames of UR10e according to modified DH convention

$$\begin{aligned} \;^0T\_1 = \begin{bmatrix} \cos\theta\_1 & -\sin\theta\_1 & 0 & 0\\ \sin\theta\_1 & \cos\theta\_1 & 0 & 0\\ 0 & 0 & 1 & d\_1\\ 0 & 0 & 0 & 1 \end{bmatrix} \end{aligned} $$

$$\begin{aligned} \;^1T\_2 = \begin{bmatrix} \cos\theta\_2 & -\sin\theta\_2 & 0 & 0\\ 0 & 0 & -1 & 0\\ \sin\theta\_2 & \cos\theta\_2 & 0 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \end{aligned} $$

$$\vdots$$

$$\begin{aligned} \;^5T\_6 = \begin{bmatrix} \cos\theta\_6 & -\sin\theta\_6 & 0 & 0\\ 0 & 0 & 1 & d\_6\\ -\sin\theta\_6 & -\cos\theta\_6 & 0 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \end{aligned} $$

The total transformation matrix is given as follows:

$$\prescript{0}{}{T}\_6 = \prescript{0}{}{T}\_1. \prescript{1}{}{T}\_2. \prescript{2}{}{T}\_3. \prescript{3}{}{T}\_4. \prescript{4}{}{T}\_5. \prescript{5}{}{T}\_6 \tag{8.7}$$

The closed-form approach to solving the inverse kinematics of this robot is discussed in detail in [11]. The numerical approach is discussed later in this chapter in Sect. 8.5.3.
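The chaining of link transforms in Eqs. (8.6) and (8.7) can be sketched in a few lines; a minimal illustration using an invented two-joint planar arm rather than the real UR10e DH table:

```python
import numpy as np

def dh_modified(alpha_prev, a_prev, d, theta):
    """Link transform (i-1)T_i of the proximal (modified) DH convention,
    Eq. (8.6)."""
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct,      -st,      0.0,  a_prev],
                     [st * ca,  ct * ca, -sa, -d * sa],
                     [st * sa,  ct * sa,  ca,  d * ca],
                     [0.0,      0.0,      0.0, 1.0]])

def forward_kinematics(dh_rows, q):
    """Chain the link transforms as in Eq. (8.7); dh_rows holds one tuple
    (alpha_{i-1}, a_{i-1}, d_i) per joint, q the joint angles theta_i."""
    T = np.eye(4)
    for (alpha, a, d), theta in zip(dh_rows, q):
        T = T @ dh_modified(alpha, a, d, theta)
    return T

# Illustrative planar arm with two links of length 0.5, *not* the UR10e table;
# the tool offset of 0.5 along the last x axis is appended as a constant transform.
rows = [(0.0, 0.0, 0.0), (0.0, 0.5, 0.0)]
tool = np.array([[1, 0, 0, 0.5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])
T = forward_kinematics(rows, [np.pi / 2, 0.0]) @ tool
print(np.round(T[:3, 3], 6))  # → [0. 1. 0.]: the arm points straight up
```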

## *8.3.2 Solving Kinematic Equations in Parallel Mechanisms*

Solving kinematic equations in parallel mechanisms differs somewhat from serial mechanisms. The main goal remains the same: to obtain the relation between the pose of the TCP and the values of the joints' angles/positions. The presence of both active and passive joints adds complexity to the kinematic equations. If the joints are additionally not distributed equally over all chains (legs), the kinematics gets more complicated.

## **Forward Kinematics**

In contrast to serial mechanisms, the direct kinematic problem of parallel mechanisms can in general only be solved numerically; however, there are exceptions, as will be seen later. As mentioned earlier, the Stewart-Gough platform (Fig. 8.6) is one of the most famous parallel mechanisms. Solving the forward kinematics of this platform can yield up to 40 possible solutions [16, 21]. Many approaches have been introduced to solve the general kinematics problem, such as elimination [9], interval analysis [14], and continuation [21]. Recently, other algorithms have been introduced to cope with real-time constraints, such as neural networks [18] or exploiting the inverse kinematics together with the small changes in the motion of the TCP [24].

## **Inverse Kinematics**

The procedure of calculating the inverse kinematic problem can be split up into the following three steps:


## **8.3.2.1 Example: RUU/DELTA Mechanism**

TPM are a special case among parallel mechanisms: solving their forward and inverse kinematics is comparatively uncomplicated. One example of a TPM is the RUU/DELTA mechanism.

## **Forward Kinematics**

Figure 8.14 shows the necessary dimensions and angles to derive the kinematic equations. It is desired to express all these equations with respect to the world frame in the middle of the base platform. The *x* axis points towards the first leg. A local frame $(x_{A_i}, y_{A_i}, z_{A_i})$ with the origin $A_i$ is fixed at the first joint of the *i*-th leg. This local coordinate system is rotated by $\phi_i = (i - 1) \cdot 120$ degrees, with $i = 1, 2, 3$,

**Fig. 8.14** Coordinate frames of DELTA mechanism according to [23]

with respect to the world frame. The transformation between the base frame and the *Ai* frame is as follows:

$${}^{base}T_{A_i} = \begin{bmatrix} \cos(-\phi_i) & \sin(-\phi_i) & 0 & 0 \\ -\sin(-\phi_i) & \cos(-\phi_i) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & 0 & r_{base} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{8.8}$$

$${}^{base}T_{A_i} = \begin{bmatrix} \cos(-\phi_i) & \sin(-\phi_i) & 0 & r_{base} \cdot \cos(-\phi_i) \\ -\sin(-\phi_i) & \cos(-\phi_i) & 0 & -r_{base} \cdot \sin(-\phi_i) \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{8.9}$$

The transformation is a rotation around the *z* axis of the world frame by the angle $-\phi_i$, followed by a translation in the $x_i$ direction by the distance $r_{base}$.

Another frame is attached to the point $C_i$. This frame has the same orientation as the $A_i$ frame; its relative position depends on the angle $\theta_{1i}$. The transformation between these two frames is:

$${}^{A_i}T_{C_i} = \begin{bmatrix} 1 & 0 & 0 & a \cdot \cos\theta_{1i} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & a \cdot \sin\theta_{1i} \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{8.10}$$

As mentioned above, the forward kinematic problem cannot in general be solved analytically for parallel kinematic mechanisms. The DELTA mechanism is different: here the method of trilateration can be applied. This approach is based on the fact that, looking at one leg, all points $B_i$ lie on the surface of a sphere with radius *b* and center point $C_i$. The surface is given by the sphere equation:

$$(x - x_{C_i})^2 + (y - y_{C_i})^2 + (z - z_{C_i})^2 = b^2 \tag{8.11}$$

with the center coordinates $(x_{C_i}, y_{C_i}, z_{C_i})$ of the sphere.

In order to use the trilateration method more easily, a virtual frame $C'_i$ is placed at a distance of $-r_{TCP}$ along the *x* axis of frame $C_i$. The transformation between the two frames is:

$${}^{C_i}T_{C'_i} = \begin{bmatrix} 1 & 0 & 0 & -r_{TCP} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{8.12}$$

As mentioned before, homogeneous transformation matrices give the ability to jump from one frame to another. The same idea can be used to express compound transformations. For example, the transformation between the base frame and $C'_i$ can be expressed as a jump from the base frame to frame $A_i$, then a jump from frame $A_i$ to frame $C_i$, and finally a jump from frame $C_i$ to frame $C'_i$. This compound transformation can be expressed as follows:

$${}^{base}T\_{C\_i'} = {}^{base}T\_{A\_i} \cdot {}^{A\_i}T\_{C\_i} \cdot {}^{C\_i}T\_{C\_i'} \tag{8.13}$$

The reason for attaching the virtual frames $C'_i$ is that the three spheres with radius *b* and centers $C'_i$, one per leg, intersect in the point *P*, which is the solution of the forward kinematics; the other two angles of each leg, $\theta_{2i}$ and $\theta_{3i}$, need not be known for this. The equations of the three spheres can be formulated as follows:

$$(x_P - x_{C'_i})^2 + (y_P - y_{C'_i})^2 + (z_P - z_{C'_i})^2 = b^2, \qquad i = 1, 2, 3$$

The solution of this set of equations leads to two points of intersection of the spheres, only one of which is geometrically meaningful.
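The trilateration step can be sketched numerically; a minimal illustration assuming equal sphere radii *b*, with the function name and the symmetric test geometry invented here. The candidate below the base plate is picked as the geometrically meaningful solution:

```python
import numpy as np

def trilaterate(c1, c2, c3, b):
    """Intersect three spheres of equal radius b centred at the virtual
    points C'_i; returns both candidate intersection points."""
    ex = (c2 - c1) / np.linalg.norm(c2 - c1)
    d = np.linalg.norm(c2 - c1)
    i = ex @ (c3 - c1)
    ey = c3 - c1 - i * ex
    ey /= np.linalg.norm(ey)
    j = ey @ (c3 - c1)
    ez = np.cross(ex, ey)
    x = d / 2.0                           # equal radii: (b^2 - b^2 + d^2) / 2d
    y = (i**2 + j**2) / (2 * j) - i * x / j
    z = np.sqrt(b**2 - x**2 - y**2)       # two mirror solutions +z / -z
    base = c1 + x * ex + y * ey
    return base + z * ez, base - z * ez

# Symmetric test geometry: three centres on a unit circle in the base plane,
# each at distance b = sqrt(10) from the point (0, 0, -3).
C = [np.array([np.cos(a), np.sin(a), 0.0])
     for a in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
p_up, p_down = trilaterate(*C, b=np.sqrt(10.0))
p = min(p_up, p_down, key=lambda v: v[2])  # keep the solution below the base
print(np.round(p, 6))  # → [ 0.  0. -3.]
```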

#### **Inverse Kinematics**

The DELTA mechanism is especially known from impedance-controlled devices. In this mode of operation the inverse kinematics problem is not needed. However, it is a very useful tool in the design process to determine the available workspace, as shown later in Sect. 8.4.

A frame is attached to the TCP at point *P* with the same orientation as the base frame. Three further frames are attached to the points $B_i$ with the same orientation as the frames at $A_i$ and $C_i$. To solve the inverse kinematics of each leg, we can use the following compound transformation:

$${}^{base}T_{TCP} = {}^{base}T_{A_i} \cdot {}^{A_i}T_{C_i} \cdot {}^{C_i}T_{B_i} \cdot {}^{B_i}T_{TCP} \tag{8.14}$$

where:

$${}^{base}T_{TCP} = \begin{bmatrix} 1 & 0 & 0 & x_P \\ 0 & 1 & 0 & y_P \\ 0 & 0 & 1 & z_P \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad {}^{C_i}T_{B_i} = \begin{bmatrix} 1 & 0 & 0 & b \cdot \sin\theta_{3i} \cdot \cos(\theta_{1i} + \theta_{2i}) \\ 0 & 1 & 0 & b \cdot \cos\theta_{3i} \\ 0 & 0 & 1 & b \cdot \sin\theta_{3i} \cdot \sin(\theta_{1i} + \theta_{2i}) \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$${}^{B_i}T_{TCP} = \begin{bmatrix} \cos\phi_i & \sin\phi_i & 0 & r_{TCP} \cdot \cos\phi_i \\ -\sin\phi_i & \cos\phi_i & 0 & -r_{TCP} \cdot \sin\phi_i \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{8.15}$$

The unknown matrices are ${}^{A_i}T_{C_i}$ and ${}^{C_i}T_{B_i}$, which are functions of the three unknown angles $\theta_{1i}$, $\theta_{2i}$, and $\theta_{3i}$. In order to solve for the angles, all known matrices are brought to one side and the unknowns to the other. We get:

$$\left({}^{base}T_{A_i}\right)^{-1} \cdot {}^{base}T_{TCP} \cdot \left({}^{B_i}T_{TCP}\right)^{-1} = {}^{A_i}T_{C_i} \cdot {}^{C_i}T_{B_i} \tag{8.16}$$

Multiplying the two matrices on the right-hand side gives:

$${}^{A_i}T_{B_i} = \begin{bmatrix} 1 & 0 & 0 & a \cdot \cos\theta_{1i} + b \cdot \sin\theta_{3i} \cdot \cos(\theta_{1i} + \theta_{2i}) \\ 0 & 1 & 0 & b \cdot \cos\theta_{3i} \\ 0 & 0 & 1 & a \cdot \sin\theta_{1i} + b \cdot \sin\theta_{3i} \cdot \sin(\theta_{1i} + \theta_{2i}) \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & x_{B_i} \\ 0 & 1 & 0 & y_{B_i} \\ 0 & 0 & 1 & z_{B_i} \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{8.17}$$

This leads to, according to [23]:

$$\theta\_{3i} = \arccos \frac{\mathbf{y}\_{B\_i}}{b} \tag{8.18}$$

$$\theta\_{2i} = \arccos \frac{{x\_{B\_i}}^2 + {y\_{B\_i}}^2 + {z\_{B\_i}}^2 - a^2 - b^2}{2ab \sin \theta\_{3i}} \tag{8.19}$$

$$\theta\_{1i} = \arctan \frac{x\_{B\_i} - b \sin \theta\_{3i} \cos \left(\theta\_{1i} + \theta\_{2i}\right)}{z\_{B\_i} - b \sin \theta\_{3i} \sin \left(\theta\_{1i} + \theta\_{2i}\right)}\tag{8.20}$$

Equations (8.18)–(8.20) are the solution to the inverse kinematic equation for each leg.

## **8.4 Design Step 3: Dimensioning a Haptic Kinematic**

The last step in the design is dimensioning. Optimizing the dimensions of the mechanism, i.e. the lengths of the rods/links defined in step 1 (Sect. 8.2), affects the workspace of the mechanism, the transmission of forces/torques, and the velocities. The goal of the optimization is to reach a specific optimum performance. This may be, for example, a maximized workspace with homogeneous transfer characteristics of forces from the TCP to the actuators. The dimensioning procedure for parallel mechanisms is usually more complicated than that for serial mechanisms. According to Merlet, a parallel mechanism with well-designed dimensions can perform better than one with a better-suited topology but worse dimensions [15]. An important parameter of haptic interfaces is the impedance. In order to calculate the impedance of the system, the values of the velocities and the forces have to be known. In this part of the chapter, an introduction is given on how the dimensioning procedure is performed.

#### **Jacobian Matrix**

In both the forward and inverse kinematic problems, the vectors *q* and *p* are linked via the mechanism's gearing properties. Those properties are represented by the Jacobian matrix *J*. For the mechanism's kinematics, the Jacobian matrix is the transmission matrix of first order; it carries all information regarding dimensions and transmission properties. *J* is defined by the partial derivatives of the TCP coordinates with respect to the joints' coordinates. In general, the Jacobian can be calculated for any frame in the mechanism, but usually the TCP frame is the one of interest. The size of the matrix is *m* × *n*, where *m* is the number of TCP coordinates and *n* is the number of joint coordinates. For example, the TCP of the UR10e robot has $m = 6$ $(x_p, y_p, z_p, \alpha_p, \beta_p, \gamma_p)$ and $n = 6$ $(\theta_1, \theta_2, \theta_3, \theta_4, \theta_5, \theta_6)$. So the Jacobian matrix of this mechanism consists of *m* rows and *n* columns:

$$J \equiv \begin{bmatrix} \frac{\partial x_p}{\partial \theta_1} & \frac{\partial x_p}{\partial \theta_2} & \cdots & \frac{\partial x_p}{\partial \theta_6} \\[4pt] \frac{\partial y_p}{\partial \theta_1} & \frac{\partial y_p}{\partial \theta_2} & \cdots & \frac{\partial y_p}{\partial \theta_6} \\[4pt] \frac{\partial z_p}{\partial \theta_1} & \frac{\partial z_p}{\partial \theta_2} & \cdots & \frac{\partial z_p}{\partial \theta_6} \\[4pt] \frac{\partial \alpha_p}{\partial \theta_1} & \frac{\partial \alpha_p}{\partial \theta_2} & \cdots & \frac{\partial \alpha_p}{\partial \theta_6} \\[4pt] \frac{\partial \beta_p}{\partial \theta_1} & \frac{\partial \beta_p}{\partial \theta_2} & \cdots & \frac{\partial \beta_p}{\partial \theta_6} \\[4pt] \frac{\partial \gamma_p}{\partial \theta_1} & \frac{\partial \gamma_p}{\partial \theta_2} & \cdots & \frac{\partial \gamma_p}{\partial \theta_6} \end{bmatrix} \tag{8.21}$$

More details are found in [12].

The Jacobian matrix is used to express various relations between the inputs and outputs of a mechanism, such as the relation between the velocities of the joints compared to the velocity of the TCP, and the relation between the torques applied to the joints and the forces on the TCP. These relations are discussed later in this section.
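When no analytic expression is at hand, the partial derivatives of Eq. (8.21) can be approximated by finite differences; a minimal sketch, using an invented planar 2-link arm as a stand-in for a full 6-DoF model:

```python
import numpy as np

def numerical_jacobian(f, q, eps=1e-6):
    """Approximate J = dp/dq by central differences; f maps joint values q
    to the TCP pose p."""
    q = np.asarray(q, dtype=float)
    p0 = np.asarray(f(q))
    J = np.zeros((p0.size, q.size))
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (np.asarray(f(q + dq)) - np.asarray(f(q - dq))) / (2 * eps)
    return J

# Planar 2-link arm (illustrative link lengths)
l1, l2 = 0.4, 0.3
def tcp(q):
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

J = numerical_jacobian(tcp, [0.3, 0.7])
print(J.shape)  # (2, 2): m TCP coordinates x n joint coordinates
```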

## *8.4.1 Workspace*

The dimensions of the mechanism affect the workspace. To perform an optimization the following steps should be taken:


These steps are discussed in [8, 17]. The key challenge is the formal description of the optimum. This process should be done using computer software, which is discussed later in Sect. 8.5. Within the optimization process, the measure for an optimum has to be determined by scanning the complete workspace and adjusting relevant parameters of the mechanism between the scanning passes. In [1] several optimizations are given using the singular values of the Jacobian matrix as a measure.

## *8.4.2 Isotropy and Singular Positions*

The dimensioning process continues by taking into account the most desirable working points of the TCP inside the workspace and which positions should be avoided.

Isotropy describes the optimum working points in the workspace. These are the configurations in which the transmission behaves uniformly, so that the error between input and output is minimized.

On the other hand, singularities are the configurations that should be avoided. In singular positions, the control of one or more of the mechanism's DoF is lost. If a mechanism approaches a singular position, its transmission or gear ratio changes rapidly until the mechanism locks in the singular position. Singularities are divided into two main types [5].

#### **Workspace-Boundary Singularities**

This type of singularity occurs when the mechanism is fully stretched to the edge of the workspace. This applies to all types of mechanisms.

#### **Workspace-Interior Singularities**

This type occurs inside the workspace. In serial mechanisms, such singularities happen especially in six-DoF mechanisms where the axes of the last three joints (wrist) intersect in one point. Usually this happens when two axes become coincident.

Figure 8.15 shows examples of both types of singular positions. The key to analyzing isotropy and singularity lies in the properties of the Jacobian matrix. A key performance index derived from the Jacobian matrix is the condition number κ.

**Fig. 8.15** Different singular positions

#### **The Conditioning Number**

The kinematic transmission behavior is rated by the singular values $\sigma_i$ of the inverse Jacobian matrix $J^{-1}$. In general, the singular values of a matrix **A** are defined as:

$$
\sigma\_i(A) = \sqrt{\lambda\_i(A^T A)}\tag{8.22}
$$

The role of the singular values can be shown by Golub's method of singular value decomposition [6]. It is based on the fact that a real *m* × *n* matrix **A** with *m* ≥ *n* and rank *r* can be factorized into the following product:

$$A = U \cdot \Sigma \cdot V^T \tag{8.23}$$

where *U* consists of the orthonormalized eigenvectors belonging to the *n* largest eigenvalues of $AA^T$ and *V* consists of the orthonormalized eigenvectors of $A^T A$. Σ is an *m* × *n* diagonal matrix of the form:

$$\Sigma = \begin{pmatrix} \operatorname{diag}(\sigma_1, \ldots, \sigma_r) & 0 \\ 0 & 0 \end{pmatrix} \tag{8.24}$$

where $\sigma_1 \geq \cdots \geq \sigma_r > 0$. The conditioning number is defined as:

$$\kappa = \frac{\sigma\_{\text{max}}}{\sigma\_{\text{min}}} \tag{8.25}$$

As a function of the Jacobian matrix, κ changes with the mechanism's position. The reciprocal conditioning number can take values $1/\kappa = 0 \cdots 1$.

The goal is a highly isotropic transmission, i.e. a conditioning number of 1. Singular positions, on the other hand, should be avoided. In terms of the Jacobian matrix, the rank of the matrix decreases in a singular position. This translates into a conditioning number of ∞, i.e. $1/\kappa = 0$.
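The behavior of κ near a singularity can be observed directly on a small example; a sketch using an invented planar 2-link arm, whose workspace-boundary singularity occurs at the stretched configuration:

```python
import numpy as np

def conditioning(J):
    """kappa = sigma_max / sigma_min of the Jacobian, Eq. (8.25)."""
    s = np.linalg.svd(J, compute_uv=False)
    return s.max() / s.min()

def jacobian_2link(q1, q2, l1=0.4, l2=0.3):
    """Analytic Jacobian of a planar 2-link arm (illustrative)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

print(1 / conditioning(jacobian_2link(0.3, 1.2)))   # well away from zero
print(1 / conditioning(jacobian_2link(0.3, 1e-6)))  # stretched arm: 1/kappa -> 0
```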

For the two types of singularities introduced earlier, the loss of rank of the Jacobian matrix is characterized by:


#### **Optimization Criteria**

Besides the analysis of isotropy and singular positions, another aspect one has to take care of in the design process is the transmission of force and speed.

Recalling Eq. (8.35): to limit the maximal required force and torque, and thereby also the size of the actuators, it is important to reach a good transmission of forces and torques even for a disadvantageous $\sigma_i$. We can derive the criterion as follows:

$$
\sigma\_{\min}(J^{-1}) \to \max \tag{8.26}
$$

For maximizing the speed transmission, the criterion could be as follows:

$$
\sigma\_{\max}(J^{-1}) \to \min \tag{8.27}
$$

Table 8.5 sums up various design optimization criteria.

One major drawback of Eq. (8.25) is that it rates the mechanism for a single Jacobian matrix, i.e. a single position. A pure optimization of 1/κ would in fact lead to one single position where the mechanism reaches high isotropy; one cannot conclude from it that the workspace as a whole has an optimized transmission behavior.


**Table 8.5** Summary of optimization criteria

What is needed is a measure rating 1/κ over a whole workspace. This measure is provided by the global conditioning index [15]:

$$\upsilon = \frac{\int\_{W} \frac{1}{\kappa} dW}{\int\_{W} dW} \tag{8.28}$$

The global conditioning index can be optimized using computer algorithms.
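The integral in Eq. (8.28) can be approximated by sampling the workspace; a minimal sketch, with the function names, the planar 2-link Jacobian, and the joint-space sampling region all invented here for illustration:

```python
import numpy as np

def global_conditioning_index(jacobian, sample_workspace, n=2000, seed=0):
    """Monte-Carlo estimate of Eq. (8.28): the mean of 1/kappa over
    sampled workspace configurations."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n):
        J = jacobian(*sample_workspace(rng))
        s = np.linalg.svd(J, compute_uv=False)
        vals.append(s.min() / s.max())   # 1/kappa for this configuration
    return float(np.mean(vals))

def jacobian_2link(q1, q2, l1=0.4, l2=0.3):
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# Sample the joint space uniformly as a stand-in for scanning the workspace,
# staying clear of the stretched/folded singular configurations.
sample = lambda rng: rng.uniform([0.0, 0.2], [np.pi, np.pi - 0.2])
print(global_conditioning_index(jacobian_2link, sample))  # between 0 and 1
```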

## *8.4.3 Velocities*

The velocities of the joints and those of the TCP are related by the Jacobian matrix of the mechanism:

$$dp = J \cdot dq\tag{8.29}$$

Equation (8.29) gives the velocity of the TCP resulting from the joints' velocities. Inverting the relation gives the joint velocities required for a desired TCP velocity:

$$dq = J^{-1} \cdot dp \tag{8.30}$$

The optimization process should take into account the desired velocities of the TCP, since these determine the motors used to drive the joints.
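Equations (8.29) and (8.30) amount to a linear solve per configuration; a minimal sketch, again on an invented planar 2-link arm:

```python
import numpy as np

# Jacobian of a planar 2-link arm at a given configuration (illustrative)
l1, l2 = 0.4, 0.3
q1, q2 = 0.3, 1.2
s1, c1 = np.sin(q1), np.cos(q1)
s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
              [ l1 * c1 + l2 * c12,  l2 * c12]])

dp = np.array([0.05, 0.0])      # desired TCP velocity (Eq. 8.29)
dq = np.linalg.solve(J, dp)     # required joint velocities (Eq. 8.30)
print(np.allclose(J @ dq, dp))  # the two mappings are consistent
```

Using `np.linalg.solve` instead of forming $J^{-1}$ explicitly is the usual numerically preferable choice.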

## *8.4.4 Dynamics*

For the design and operation of haptic interfaces there is another set of equations of high importance, related to the transformation of forces and torques by a mechanism. In order to express the dynamics of a mechanism, the equations of motion of the links have to be calculated. The goal is to find the required torques/forces at the joints. Craig [5] divides the approaches for deriving the equations of motion into iterative (numerical) and closed-form (analytical) ones.

#### **Iterative Approach: NEWTON-EULER Dynamics Algorithm**

One example of an iterative method is the Newton-Euler dynamics algorithm. The algorithm is split into two parts, outward and inward iterations, as follows:

1. Outward iterations compute the velocities and accelerations (linear and rotational) of the center of mass of each link/rod in the mechanism. The iterations start with the first link and end with the last link.
2. Inward iterations then compute the forces and torques acting on each link, starting with the last link and ending with the first link.
A numerical approach of this kind can be applied to any robot. It only needs the inertia tensor of each link, the position vectors connecting the links, and the rotation matrices between each pair of links. Sometimes, however, explicit information about gravity and the non-inertial effects is important; a closed-form formulation should be used in that case.

#### **Closed Form Approach**

Closed-form approaches express the dynamics of a mechanism in more detail. Many methods can be used to express the equations of motion analytically; two of them are discussed in this chapter: the Newton-Euler equation and the Lagrangian dynamic formulation.

The general form of a Newton- Euler equation for a link is as follows:

$$
\pi = M(\theta)\ddot{\theta} + V(\theta, \dot{\theta}) + G(\theta) \tag{8.31}
$$

where $M(\theta)$ is the mass (inertia) matrix of the mechanism, $V(\theta, \dot{\theta})$ contains the centrifugal and Coriolis terms, $G(\theta)$ contains the gravity terms, and $\pi$ is the vector of joint torques/forces.
Another widely used method is the Lagrangian dynamic formulation. While the Newton-Euler equation is a force-balance approach, the Lagrangian formulation is an energy-based approach: it uses the energy of the system to express the equations of motion.

A scalar function called the Lagrangian (L) is defined as:

$$L(\theta, \dot{\theta}) = k(\theta, \dot{\theta}) - \mu(\theta) \tag{8.32}$$

where *k* is the total kinetic energy and *u* the total potential energy of the mechanism.


The equations of motion are given as follows:

$$
\tau = \frac{d}{dt} \frac{\partial L}{\partial \dot{\theta}} - \frac{\partial L}{\partial \theta} \tag{8.33}
$$

The number of equations of motion obtained using the Lagrangian dynamic formulation depends on the number of generalized coordinates. The generalized coordinates are the parameters needed to express the configuration of a mechanism; in our case, the joint values are the generalized coordinates. This means that the Lagrangian function *L* should be expressed only in terms of the generalized coordinates.
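As a small illustration of Eqs. (8.32) and (8.33), the sketch below evaluates the Lagrangian of a simple pendulum (point mass *m* on a massless link of length *l*, with θ measured from the horizontal) and recovers the joint torque by finite differences; all names and numeric values are illustrative, not from the text.

```python
import math

m, l, g = 0.5, 0.2, 9.81  # illustrative mass [kg], length [m], gravity [m/s^2]

def lagrangian(th, thd):
    k = 0.5 * m * (l * thd) ** 2   # kinetic energy
    u = m * g * l * math.sin(th)   # potential energy, theta from horizontal
    return k - u                   # Eq. (8.32)

def tau_numeric(th, thd, thdd, h=1e-5):
    # dL/d(theta_dot) as a function of (theta, theta_dot)
    def dL_dthd(a, b):
        return (lagrangian(a, b + h) - lagrangian(a, b - h)) / (2 * h)
    # total time derivative of dL/d(theta_dot) via the chain rule
    ddt = ((dL_dthd(th + h, thd) - dL_dthd(th - h, thd)) / (2 * h)) * thd \
        + ((dL_dthd(th, thd + h) - dL_dthd(th, thd - h)) / (2 * h)) * thdd
    dL_dth = (lagrangian(th + h, thd) - lagrangian(th - h, thd)) / (2 * h)
    return ddt - dL_dth            # Eq. (8.33) for one generalized coordinate

# closed-form result for this pendulum: tau = m*l^2*thdd + m*g*l*cos(th)
th, thd, thdd = 0.3, 1.0, -2.0
tau_closed = m * l ** 2 * thdd + m * g * l * math.cos(th)
tau_num = tau_numeric(th, thd, thdd)
```

The numeric and closed-form torques agree, which is exactly the kind of sanity check that becomes valuable once the expressions grow as in the 2-DoF example below.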

Equations (8.31) and (8.33) include only the forces resulting from rigid-body mechanics; the most important factor that isn't included is friction. There are multiple ways to model friction forces; the two most important models are viscous friction and Coulomb friction. An additional term *F*<sub>f</sub> is added to Eq. (8.31) to model the friction:

$$\tau = M(\theta)\ddot{\theta} + V(\theta, \dot{\theta}) + G(\theta) + F\_f(\theta, \dot{\theta}) \tag{8.34}$$

Equations (8.31) and (8.33) give the same output. Both are expressed in terms of the joint positions, velocities, and accelerations or, in other words, in the joint space. In order to express the forces on the TCP, the Jacobian matrix can be used as follows:

$$
\tau = J^{T}(\theta) \cdot F \tag{8.35}
$$

Combining Eqs. (8.31) and (8.35) results in:

$$J^{-T}\tau = J^{-T}M(\theta)\ddot{\theta} + J^{-T}V(\theta,\dot{\theta}) + J^{-T}G(\theta) \tag{8.36}$$

which yields:

$$F = J^{-T}M(\theta)\ddot{\theta} + J^{-T}V(\theta, \dot{\theta}) + J^{-T}G(\theta) \tag{8.37}$$

Other methods are available to express the equations of motion of mechanisms. Malvezzi et al. [13] made a qualitative comparison between three approaches to express the dynamics of a serial mechanism. The dynamics of a serial mechanism gets more complicated as the number of links increases; it is, however, still not as complicated as the case of parallel mechanisms.

#### **8.4.4.1 Example: Equations of Motion of 2-DoF Serial Mechanism**

To show how complicated obtaining the equations of motion can become, a simple mechanism will be discussed. Figure 8.16 shows a 2-DoF serial mechanism. The mechanism is simplified such that the masses *m*<sub>1</sub> and *m*<sub>2</sub> are considered point masses at the end of each link and the links themselves are considered massless. Friction forces aren't taken into consideration either. The equations of motion are obtained using the Lagrangian dynamic formulation. The generalized coordinates of this mechanism are θ<sub>1</sub> and θ<sub>2</sub>. Recalling Eq. (8.33), the two equations of motion are as follows:


**Fig. 8.16** 2-DoF serial mechanism

$$
\tau\_1 = \frac{d}{dt} \frac{\partial L}{\partial \dot{\theta}\_1} - \frac{\partial L}{\partial \theta\_1} \tag{8.38}
$$

$$
\tau\_2 = \frac{d}{dt} \frac{\partial L}{\partial \dot{\theta}\_2} - \frac{\partial L}{\partial \theta\_2} \tag{8.39}
$$

Each equation represents the torque on the motor that creates the motion of the corresponding angle.

The total kinetic energy of the mechanism is:

$$k\_{Total} = k\_{m\_1} + k\_{m\_2} = \frac{1}{2} \cdot m\_1 \cdot v\_{m\_1}^2 + \frac{1}{2} \cdot m\_2 \cdot v\_{m\_2}^2 \tag{8.40}$$

with

$$v\_{m\_1}^2 = \left(l\_1 \cdot \dot{\theta}\_1\right)^2$$

To get the velocity of *m*<sub>2</sub>, one can first express the position of *m*<sub>2</sub> in Cartesian space as a function of the joint variables and differentiate it:

$$\begin{aligned} x\_{m\_2} &= l\_1 \cdot \cos \theta\_1 + l\_2 \cdot \cos \left( \theta\_1 + \theta\_2 \right) \\ y\_{m\_2} &= l\_1 \cdot \sin \theta\_1 + l\_2 \cdot \sin \left( \theta\_1 + \theta\_2 \right) \\ \left| v\_{m\_2} \right|^2 &= (\dot{x}\_{m\_2})^2 + (\dot{y}\_{m\_2})^2 \end{aligned}$$

This leads to:

$$\begin{aligned} \left| v\_{m\_2} \right|^2 &= l\_1^2 \dot{\theta}\_1^2 + (\dot{\theta}\_1 + \dot{\theta}\_2)^2 l\_2^2 \\ &+ 2 \dot{\theta}\_1 l\_1 l\_2 (\dot{\theta}\_1 + \dot{\theta}\_2) [\sin \theta\_1 \sin (\theta\_1 + \theta\_2) + \cos \theta\_1 \cos (\theta\_1 + \theta\_2)] \end{aligned}$$

Using the angle-difference trigonometric identity, sin θ<sub>1</sub> sin(θ<sub>1</sub> + θ<sub>2</sub>) + cos θ<sub>1</sub> cos(θ<sub>1</sub> + θ<sub>2</sub>) = cos θ<sub>2</sub>, the speed of *m*<sub>2</sub> is given as follows:

$$\left| v\_{m\_2} \right|^2 = l\_1^2 \dot{\theta}\_1^2 + (\dot{\theta}\_1 + \dot{\theta}\_2)^2 l\_2^2 + 2 \dot{\theta}\_1 l\_1 l\_2 (\dot{\theta}\_1 + \dot{\theta}\_2) \cos \theta\_2$$

The total kinetic energy of the mechanism is:

$$k\_{Total} = \frac{1}{2} \cdot m\_1 \cdot \left(l\_1 \cdot \dot{\theta}\_1\right)^2 + \frac{1}{2} \cdot m\_2 \cdot \left[l\_1^2 \dot{\theta}\_1^2 + (\dot{\theta}\_1 + \dot{\theta}\_2)^2 l\_2^2 + 2\dot{\theta}\_1 l\_1 l\_2 (\dot{\theta}\_1 + \dot{\theta}\_2) \cos \theta\_2\right]$$
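The trigonometric simplification above can be checked numerically: the sketch below differentiates the Cartesian position of *m*<sub>2</sub> by central differences along an arbitrary joint trajectory and compares the result with the closed-form speed expression. Link lengths and trajectory coefficients are illustrative.

```python
import math

l1, l2 = 0.1, 0.2                      # illustrative link lengths [m]

def pos_m2(t):
    # arbitrary smooth joint trajectories, only used to exercise the formula
    th1, th2 = 0.8 * t, 1.3 * t
    x = l1 * math.cos(th1) + l2 * math.cos(th1 + th2)
    y = l1 * math.sin(th1) + l2 * math.sin(th1 + th2)
    return x, y

t, h = 0.5, 1e-6
(xm, ym), (xp, yp) = pos_m2(t - h), pos_m2(t + h)
# squared TCP-mass speed from numeric differentiation of the position
v2_numeric = ((xp - xm) / (2 * h)) ** 2 + ((yp - ym) / (2 * h)) ** 2

th2, th1d, th2d = 1.3 * t, 0.8, 1.3    # joint angle and the constant joint rates
# closed-form |v_m2|^2 with the bracket reduced to cos(theta_2)
v2_formula = (l1 * th1d) ** 2 + (th1d + th2d) ** 2 * l2 ** 2 \
           + 2 * th1d * l1 * l2 * (th1d + th2d) * math.cos(th2)
```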

The potential energy of the mechanism is:

$$u\_{Total} = m\_1 \cdot \mathbf{g} \cdot \mathbf{y}\_{m\_1} + m\_2 \cdot \mathbf{g} \cdot \mathbf{y}\_{m\_2} \tag{8.41}$$

$$u\_{Total} = m\_1 \cdot \mathbf{g} \cdot l\_1 \sin \theta\_1 + m\_2 \cdot \mathbf{g} \cdot [l\_1 \sin \theta\_1 + l\_2 \sin \left(\theta\_1 + \theta\_2\right)]$$

Hence, the Lagrangian function *L* is defined as:

$$L = k\_{Total} - u\_{Total} \tag{8.42}$$

The torques to be applied on the joints, recalling Eqs. (8.38) and (8.39), are as follows:

$$\begin{aligned} \tau\_1 &= (m\_1 + m\_2)l\_1^2 \ddot{\theta\_1} + m\_2 l\_2^2 (\ddot{\theta\_1} + \ddot{\theta\_2}) + m\_2 l\_1 l\_2 \cos \theta\_2 (2\ddot{\theta\_1} + \ddot{\theta\_2}) - m\_2 l\_1 l\_2 \sin \theta\_2 \dot{\theta\_2}^2 \\ &- 2m\_2 l\_1 l\_2 \sin \theta\_2 \dot{\theta\_1} \dot{\theta\_2} + m\_2 l\_2 g \cos \left(\theta\_1 + \theta\_2\right) + (m\_1 + m\_2) l\_1 g \cos \theta\_1 \end{aligned} \tag{8.43}$$
 
$$\tau\_2 = m\_2 l\_2 [(\ddot{\theta\_1} + \ddot{\theta\_2}) l\_2 + l\_1 \cos \theta\_2 \ddot{\theta\_1} + l\_1 \sin \theta\_2 \dot{\theta\_1}^2 + g \cos \left(\theta\_1 + \theta\_2\right)] \tag{8.44}$$

Or, in matrix form like Eq. (8.31):

$$
\begin{bmatrix} \tau\_1\\ \tau\_2 \end{bmatrix} = \begin{bmatrix} (m\_1 + m\_2)l\_1^2 + m\_2l\_2^2 + 2m\_2l\_1l\_2\cos\theta\_2 & m\_2l\_2^2 + m\_2l\_1l\_2\cos\theta\_2\\ m\_2l\_2^2 + m\_2l\_1l\_2\cos\theta\_2 & m\_2l\_2^2 \end{bmatrix} \begin{bmatrix} \ddot{\theta}\_1\\ \ddot{\theta}\_2 \end{bmatrix} + \begin{bmatrix} -m\_2l\_1l\_2\sin\theta\_2\dot{\theta}\_2^2 - 2m\_2l\_1l\_2\sin\theta\_2\dot{\theta}\_1\dot{\theta}\_2\\ m\_2l\_1l\_2\sin\theta\_2\dot{\theta}\_1^2 \end{bmatrix} + \begin{bmatrix} m\_2l\_2\mathbf{g}\cos\left(\theta\_1 + \theta\_2\right) + (m\_1 + m\_2)l\_1\mathbf{g}\cos\theta\_1\\ m\_2l\_2\mathbf{g}\cos\left(\theta\_1 + \theta\_2\right) \end{bmatrix} \tag{8.45}
$$
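To make the bookkeeping explicit, the following sketch implements Eqs. (8.43)/(8.44) and the *M*, *V*, *G* decomposition of Eq. (8.31) side by side and checks that both give the same joint torques. All numeric values are illustrative.

```python
import math

m1, m2, l1, l2, g = 0.1, 0.1, 0.1, 0.2, 9.81  # illustrative parameters

def tau_closed(th1, th2, th1d, th2d, th1dd, th2dd):
    # Eqs. (8.43) and (8.44) written out directly
    c2, s2 = math.cos(th2), math.sin(th2)
    t1 = ((m1 + m2) * l1**2 * th1dd + m2 * l2**2 * (th1dd + th2dd)
          + m2 * l1 * l2 * c2 * (2 * th1dd + th2dd)
          - m2 * l1 * l2 * s2 * th2d**2
          - 2 * m2 * l1 * l2 * s2 * th1d * th2d
          + m2 * l2 * g * math.cos(th1 + th2)
          + (m1 + m2) * l1 * g * math.cos(th1))
    t2 = m2 * l2 * ((th1dd + th2dd) * l2 + l1 * c2 * th1dd
                    + l1 * s2 * th1d**2 + g * math.cos(th1 + th2))
    return t1, t2

def tau_matrix(th1, th2, th1d, th2d, th1dd, th2dd):
    # tau = M(theta)*thetadd + V(theta, thetad) + G(theta), Eq. (8.31)
    c2, s2 = math.cos(th2), math.sin(th2)
    M = [[(m1 + m2) * l1**2 + m2 * l2**2 + 2 * m2 * l1 * l2 * c2,
          m2 * l2**2 + m2 * l1 * l2 * c2],
         [m2 * l2**2 + m2 * l1 * l2 * c2, m2 * l2**2]]
    V = [-m2 * l1 * l2 * s2 * th2d**2 - 2 * m2 * l1 * l2 * s2 * th1d * th2d,
         m2 * l1 * l2 * s2 * th1d**2]
    G = [m2 * l2 * g * math.cos(th1 + th2) + (m1 + m2) * l1 * g * math.cos(th1),
         m2 * l2 * g * math.cos(th1 + th2)]
    t1 = M[0][0] * th1dd + M[0][1] * th2dd + V[0] + G[0]
    t2 = M[1][0] * th1dd + M[1][1] * th2dd + V[1] + G[1]
    return t1, t2

args = (0.4, 0.9, 0.7, -0.3, 1.1, 0.6)  # th1, th2, th1d, th2d, th1dd, th2dd
a = tau_closed(*args)
b = tau_matrix(*args)
```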

As mentioned earlier, the torque equations are expressed in joint space. In order to express the forces acting on the TCP, the Jacobian matrix is to be used. The Jacobian matrix of this mechanism is:

$$J = \begin{bmatrix} \frac{\partial x}{\partial \theta\_1} & \frac{\partial x}{\partial \theta\_2} \\ \frac{\partial y}{\partial \theta\_1} & \frac{\partial y}{\partial \theta\_2} \end{bmatrix} = \begin{bmatrix} -l\_1 \sin \theta\_1 - l\_2 \sin \left(\theta\_1 + \theta\_2\right) & -l\_2 \sin \left(\theta\_1 + \theta\_2\right) \\ l\_1 \cos \theta\_1 + l\_2 \cos \left(\theta\_1 + \theta\_2\right) & l\_2 \cos \left(\theta\_1 + \theta\_2\right) \end{bmatrix} \tag{8.46}$$
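A quick numerical sanity check of Eq. (8.46): the sketch below compares the analytic Jacobian with central differences of the forward kinematics. Link lengths and the test configuration are arbitrary.

```python
import math

l1, l2 = 0.1, 0.2  # illustrative link lengths [m]

def fk(th1, th2):
    # forward kinematics of the TCP
    x = l1 * math.cos(th1) + l2 * math.cos(th1 + th2)
    y = l1 * math.sin(th1) + l2 * math.sin(th1 + th2)
    return x, y

def jacobian(th1, th2):
    # analytic Jacobian, Eq. (8.46)
    s1, c1 = math.sin(th1), math.cos(th1)
    s12, c12 = math.sin(th1 + th2), math.cos(th1 + th2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [l1 * c1 + l2 * c12, l2 * c12]]

def jacobian_fd(th1, th2, h=1e-6):
    # numerical check: central differences of the forward kinematics
    cols = []
    for dth1, dth2 in ((h, 0.0), (0.0, h)):
        xp, yp = fk(th1 + dth1, th2 + dth2)
        xm, ym = fk(th1 - dth1, th2 - dth2)
        cols.append(((xp - xm) / (2 * h), (yp - ym) / (2 * h)))
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

Ja = jacobian(0.5, 1.1)
Jn = jacobian_fd(0.5, 1.1)
err = max(abs(Ja[i][j] - Jn[i][j]) for i in range(2) for j in range(2))
```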

Recalling Eq. (8.36), *J*<sup>−*T*</sup> is expressed as:

$$J^{-T} = \frac{1}{l\_1 l\_2 \sin \theta\_2} \begin{bmatrix} l\_2 \cos \left(\theta\_1 + \theta\_2\right) & -l\_1 \cos \theta\_1 - l\_2 \cos \left(\theta\_1 + \theta\_2\right) \\ l\_2 \sin \left(\theta\_1 + \theta\_2\right) & -l\_1 \sin \theta\_1 - l\_2 \sin \left(\theta\_1 + \theta\_2\right) \end{bmatrix} \tag{8.47}$$

Recalling Eq. (8.37):

$$J^{-T}M(\theta) = \frac{1}{l\_1 l\_2 \sin \theta\_2} \begin{bmatrix} M\_{11}' & M\_{12}' \\ M\_{21}' & M\_{22}' \end{bmatrix} \tag{8.48}$$

Where:

$$\begin{split} M\_{11}^{\prime} &= (m\_1 + m\_2)l\_1^2 l\_2 \cos\left(\theta\_1 + \theta\_2\right) + m\_2 l\_1 l\_2^2 \cos\theta\_2 \cos\left(\theta\_1 + \theta\_2\right) \\ &- m\_2 l\_1 l\_2^2 \cos\theta\_1 - m\_2 l\_1^2 l\_2 \cos\theta\_1 \cos\theta\_2 \end{split} \tag{8.49}$$

$$M\_{12}' = m\_2 l\_1 l\_2^2 (\cos \theta\_2 \cos(\theta\_1 + \theta\_2) - \cos \theta\_1) \tag{8.50}$$

$$\begin{split} M\_{21}' &= (m\_1 + m\_2)l\_1^2 l\_2 \sin\left(\theta\_1 + \theta\_2\right) + m\_2 l\_1 l\_2^2 \cos\theta\_2 \sin\left(\theta\_1 + \theta\_2\right) \\ &- m\_2 l\_1 l\_2^2 \sin\theta\_1 - m\_2 l\_1^2 l\_2 \sin\theta\_1 \cos\theta\_2 \end{split} \tag{8.51}$$

$$M\_{22}' = m\_2 l\_1 l\_2^2 (\cos \theta\_2 \sin \left(\theta\_1 + \theta\_2\right) - \sin \theta\_1) \tag{8.52}$$

Accordingly:

$$J^{-T}V(\theta,\dot{\theta}) = \frac{1}{l\_1 l\_2 \sin\theta\_2} \begin{bmatrix} V\_1'\\ V\_2' \end{bmatrix} \tag{8.53}$$

Where:

$$V\_1' = -m\_2 l\_1 l\_2^2 \sin \theta\_2 \cos (\theta\_1 + \theta\_2)(\dot{\theta}\_1 + \dot{\theta}\_2)^2 - m\_2 l\_1^2 l\_2 \cos \theta\_1 \sin \theta\_2 \dot{\theta}\_1^2 \tag{8.54}$$

$$V\_2' = -m\_2 l\_1 l\_2^2 \sin \theta\_2 \sin (\theta\_1 + \theta\_2)(\dot{\theta}\_1 + \dot{\theta}\_2)^2 - m\_2 l\_1^2 l\_2 \sin \theta\_1 \sin \theta\_2 \dot{\theta}\_1^2 \tag{8.55}$$

And:

$$J^{-T}G(\theta) = \frac{1}{l\_1 l\_2 \sin \theta\_2} \begin{bmatrix} G\_1' \\ G\_2' \end{bmatrix} \tag{8.56}$$

Where:

$$G\_1' = m\_1 l\_1 l\_2 \text{g} \cos \theta\_1 \cos \left(\theta\_1 + \theta\_2\right) \tag{8.57}$$

$$G\_2' = (m\_1 + m\_2)l\_1l\_2\mathbf{g}\cos\theta\_1\sin\left(\theta\_1 + \theta\_2\right) - m\_2l\_1l\_2\mathbf{g}\sin\theta\_1\cos\left(\theta\_1 + \theta\_2\right) \tag{8.58}$$

If we assume, from Eq. (8.37), that *F* = [*F*<sub>x</sub> *F*<sub>y</sub>]<sup>*T*</sup>, this leads to the forces acting on the TCP in x- and y-direction:

$$F\_x = \frac{1}{l\_1 l\_2 \sin \theta\_2} \left[ M\_{11}' \ddot{\theta}\_1 + M\_{12}' \ddot{\theta}\_2 + V\_1' + G\_1' \right] \tag{8.59}$$

$$F\_y = \frac{1}{l\_1 l\_2 \sin \theta\_2} \left[ M\_{21}' \ddot{\theta}\_1 + M\_{22}' \ddot{\theta}\_2 + V\_2' + G\_2' \right] \tag{8.60}$$

Equations (8.59) and (8.60) express the forces with respect to the angular accelerations of the joints $\ddot{\theta}$. The same forces could be expressed with respect to the accelerations of the Cartesian variables $\ddot{X}$. The general form is found in [5]:

$$F = M\_x(\theta)\ddot{X} + V\_x(\theta, \dot{\theta}) + G\_x(\theta) \tag{8.61}$$

Where:

$$M\_x = J^{-T}(\theta)M(\theta)J^{-1}(\theta)\tag{8.62}$$

$$V\_x = J^{-T}(\theta)(V(\theta, \dot{\theta}) - M(\theta)J^{-1}(\theta)\dot{J}(\theta)\dot{\theta}) \tag{8.63}$$

$$G\_x = J^{-T}(\theta)G(\theta) \tag{8.64}$$
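In practice, the mapping of Eq. (8.35) can be evaluated numerically without forming *J*<sup>−*T*</sup> explicitly; the sketch below solves *J*<sup>*T*</sup>*F* = τ for the TCP force with Cramer's rule on the 2×2 system and checks the determinant against *l*<sub>1</sub>*l*<sub>2</sub> sin θ<sub>2</sub> from Eq. (8.47). All numeric values are illustrative.

```python
import math

l1, l2 = 0.1, 0.2    # illustrative link lengths [m]
th1, th2 = 0.4, 1.0  # test pose, away from the singularity sin(th2) = 0

s1, c1 = math.sin(th1), math.cos(th1)
s12, c12 = math.sin(th1 + th2), math.cos(th1 + th2)
J = [[-l1 * s1 - l2 * s12, -l2 * s12],     # Eq. (8.46)
     [l1 * c1 + l2 * c12, l2 * c12]]

tau1, tau2 = 0.05, 0.02  # example joint torques [Nm]

# Solve J^T * F = tau (Eq. 8.35) for F via Cramer's rule.
a, b = J[0][0], J[1][0]  # first row of J^T
c, d = J[0][1], J[1][1]  # second row of J^T
det = a * d - b * c      # equals det(J) = l1*l2*sin(th2)
Fx = (tau1 * d - b * tau2) / det
Fy = (a * tau2 - c * tau1) / det
```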

From this example:


## **8.5 Role of Simulation**

The design steps of kinematic mechanisms were introduced in the previous sections. The example in Sect. 8.4.4.1 shows how complex the equations of motion become. This complexity, not only in the equations of motion but in the whole dimensioning process, is the reason for using computer-based simulations.

Figure 8.17 shows a block diagram of the usage of kinematic mechanisms in a general application. Taking a pick-and-place application as an example, the goal of the manipulator is to follow a certain trajectory in Cartesian space. The desired trajectory is the input in our case.

**Fig. 8.17** Block diagram of the usage of a kinematic mechanism in a general application

**Fig. 8.18** Block diagram of the usage of a kinematic mechanism in a haptic application

Next comes the role of the inverse kinematics: transferring the desired trajectory into desired joint angles. Inverse kinematics has another importance as well, namely the definition of the manipulator workspace. Any point inside the workspace has a solution to the inverse kinematics equations of the mechanism.

The actual joint angles are subtracted from the desired angles and the difference is fed to the control block. The control block itself is beyond the scope of this chapter; what is important here is that its output is the torques applied to the motors.

The torques are the inputs to the manipulator dynamics block, which contains the equations of motion of the manipulator. The outputs of this block are the actual joint angles, which close the feedback loop at the summation point of the desired angles.

The actual angles can also be used to get the pose of the TCP using the forward kinematics of the manipulator.

For haptic applications, the general diagram in Fig. 8.17 doesn't match perfectly. In haptic applications the goal sometimes isn't to follow a specific trajectory, but rather to apply a desired force on the TCP, as shown in Fig. 8.18.

The actual forces sensed on the TCP are subtracted from the desired forces. The difference is fed to the control block, whose output is the torques applied to the motors.

The torques are fed to the manipulator dynamics block. This block doesn't only contain the mechanism dynamics; the dynamics of the user are modeled as well. The simplest model of the user is a mass-spring-damper system. The outputs of this block are the actual forces on the TCP.
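As a minimal sketch of such a user model, assume the TCP is a lumped mass pushed by a constant actuator force against a spring-damper representing the user, integrated with semi-implicit Euler steps. All parameter values are illustrative, not taken from the text.

```python
m, k_u, b_u = 0.05, 400.0, 2.0  # TCP mass [kg], assumed user stiffness [N/m], damping [Ns/m]
f_act = 1.0                     # constant actuator force on the TCP [N]

x, v, dt = 0.0, 0.0, 1e-4       # TCP state and time step
for _ in range(200_000):        # 20 s of simulated time, semi-implicit Euler
    f_user = k_u * x + b_u * v  # reaction force of the spring-damper user model
    v += (f_act - f_user) / m * dt
    x += v * dt

steady_force = k_u * x          # in steady state the damper term vanishes
```

After the transients decay, the force felt by the user settles at the actuator force, which is the behavior the force loop of Fig. 8.18 regulates.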

## *8.5.1 Example of Software Used in Simulation*

There are many software packages on the market that model and simulate the kinematics of haptic interfaces. For haptic interfaces the inverse kinematics is of high importance, and this limits the software that can be used. Generally, all these packages have many features in common:


Comparing different software packages is beyond the scope of this chapter. In this section we will focus on Matlab and its toolboxes to give an example of how the modeling and simulation are implemented. The example shown in Sect. 8.4.4.1 will be discussed in terms of optimizing the workspace and how the kinematic and dynamic equations are solved.

## *8.5.2 Optimizing the Workspace*

As mentioned earlier, any point inside the workspace has a solution for the inverse kinematics of the mechanism. Referring to the example in Sect. 8.4.4.1, the workspace depends on the lengths of the two links and the limits of the two revolute joints. Figure 8.19 shows the workspace of the mechanism with two different combinations of link lengths. In both combinations the limits of the joints are as follows:

$$\begin{aligned} 0^{\circ} &\leq \theta\_1 \leq 90^{\circ} \\ 0^{\circ} &\leq \theta\_2 \leq 90^{\circ} \end{aligned}$$

The lengths in the first combination are *l*<sub>1</sub> = *l*<sub>2</sub> = 10 cm, whereas in the second combination the lengths are *l*<sub>1</sub> = *l*<sub>2</sub> = 20 cm. For more complex mechanisms, more variables will be included in the optimization process. One has to keep in mind that by changing the lengths, the dynamics of the mechanism change as well. This will be discussed in the following sections.
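The workspace comparison can be reproduced with a simple sampling sketch: forward kinematics evaluated on a grid over the joint limits for both link-length combinations. The grid resolution is arbitrary.

```python
import math

def workspace_points(l1, l2, n=60):
    # Sample reachable TCP positions on a joint-space grid,
    # 0 deg <= theta1, theta2 <= 90 deg as in the example above.
    pts = []
    for i in range(n + 1):
        for j in range(n + 1):
            th1 = math.radians(90.0 * i / n)
            th2 = math.radians(90.0 * j / n)
            x = l1 * math.cos(th1) + l2 * math.cos(th1 + th2)
            y = l1 * math.sin(th1) + l2 * math.sin(th1 + th2)
            pts.append((x, y))
    return pts

small = workspace_points(0.10, 0.10)  # l1 = l2 = 10 cm
large = workspace_points(0.20, 0.20)  # l1 = l2 = 20 cm

# the outer workspace boundary is reached at theta2 = 0, radius l1 + l2
max_reach_small = max(math.hypot(x, y) for x, y in small)
max_reach_large = max(math.hypot(x, y) for x, y in large)
```

Plotting `small` and `large` as scatter plots reproduces the qualitative picture of Fig. 8.19: doubling the link lengths scales the reachable region accordingly.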

**Fig. 8.19** Workspace of 2-DoF mechanism with different combinations of link lengths

## *8.5.3 Solving Kinematic and Dynamic Equations*

In Matlab there are many ways to model the kinematic equations. One way is to model the mechanism as a rigid-body tree. This is done by defining all the links/legs and the joints. The rigid-body tree approach supports serial mechanisms; parallel mechanisms, however, aren't directly supported.

In haptic applications, another approach is more applicable. As mentioned earlier with Fig. 8.18, the goal is to maintain certain forces on the TCP. This can be modeled using Simulink and the Simscape Multibody™ toolbox.

Generally, the forward and inverse kinematics equations, the Jacobian matrix, and the equations of motion can be modeled using Matlab or Simulink, such that the optimization criteria listed in Table 8.5 can be implemented.

In the Simscape Multibody™ toolbox, the designer has the option to either import the mechanism from a CAD software or to use the predefined model blocks of the toolbox. The toolbox offers a variety of joints and sensors. Figure 8.20 shows a simple model of our 2-DoF mechanism. The two links (*l*<sub>1</sub> and *l*<sub>2</sub>) and the two masses (*m*<sub>1</sub> and *m*<sub>2</sub>) are modeled using the predefined blocks; frames are attached to each end of the links and to the masses. The revolute joints connect the frames and constrain the motion to the specified direction only, rotation about the z-axis. The solver of the toolbox solves the kinematic equations defined by the connection of the

**Fig. 8.20** Simscape Multibody™ model of 2-DoF serial mechanism

blocks, and singular positions are also detected. Modeling the mechanism with the toolbox thus solves the equations of motion. The torques applied on the joints and the joint values are sensed using predefined sensors in the joint block. Applying Eq. (8.35), the forces on the TCP are calculated.

As mentioned earlier, changing the lengths of the links affects the dynamics of the mechanism. Consider the two link-length combinations used in Sect. 8.5.2, with the masses *m*<sub>1</sub> and *m*<sub>2</sub> set to 0.1 kg.

A simplified haptic scenario is one where the user is obliged to follow a certain trajectory. An example of such a trajectory could be that θ<sub>1</sub> is fixed and the input to θ<sub>2</sub> has the form of a sine wave. To simulate the torques applied to the joints and the resulting forces on the TCP for both combinations, the trajectory of this simplified scenario is applied to the joints. Figure 8.21 shows the required torques on the joints for both combinations of lengths. It can be seen that τ<sub>1</sub> is higher than τ<sub>2</sub> in both length combinations. The reason, as mentioned earlier, is that in serial mechanisms one actuator carries the load of all the following actuators. Subsequently, the forces on the TCP can be calculated using Eq. (8.35). The forces are represented in Fig. 8.22.

The Simulink model of this simplified scenario is shown in Fig. 8.23. The model consists of two main subsystems: the first contains the model of the mechanism (links, masses, joints, and the environment); the second contains the Jacobian matrix of the mechanism in order to calculate the forces on the TCP. Generally, more subsystems are added that contain, for example, the controller. From the values represented in Figs. 8.21 and 8.22, one can select a driver (the motors in this example) that is able to apply the required torques to the joints.

**Fig. 8.21** Torques on both joints

**Fig. 8.22** Forces on the TCP

**Fig. 8.23** Simulink model of the mechanism in a simplified haptic scenario

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 9 Actuator Design**

#### **Thorsten A. Kern, Henry Haus, Marc Matysek, and Stephanie Sindlinger**

**Abstract** Actuators are the most important elements of any haptic device. Their selection or design significantly influences the quality of the haptic impression. This chapter deals with commonly used actuators, organized according to their physical principle of operation. It focuses on the electrodynamic, electromagnetic, electrostatic and piezoelectric actuator principles. Each actuator type is discussed in terms of its main physical principles, with examples of sizing and one or more applications. Other, less frequently used actuator principles are mentioned in several examples. The preceding chapters focused on the basics of control engineering and kinematic design. They covered topics of structuring and fundamental character. This and the following chapters deal with the design of technical components as parts of haptic devices. Experience teaches us that actuators for haptic applications can rarely be found "off-the-shelf". Their requirements always include some outstanding features in rotational frequency, power-density, working point, or geometry. These specialities make it necessary and helpful for users to be aware of the capabilities and possibilities for modifying existing actuators. Hence this chapter addresses both groups of readers: the users who want to choose a certain actuator and the mechanical engineer who intends to design a specific actuator for a certain device from scratch.

T. A. Kern (B)

Hamburg University of Technology, Hamburg, Germany e-mail: t.a.kern@tuhh.de

H. Haus

Darmstadt University of Technology, Institute for Electromechanical Design, Merckstr. 25, 64283 Darmstadt, Germany e-mail: henry.haus@gmx.de

M. Matysek

Continental Automotive GmbH, VDO-Str. 1, 64832 Babenhausen, Germany e-mail: mark.matysek@continental-corporation.com

S. Sindlinger

Roche Diabetes Care GmbH, Sandhofer Strasse 116, 68305 Mannheim, Germany e-mail: stephanie.sindlinger@roche.com

## **9.1 General Facts About Actuator Design**

#### Thorsten A. Kern

Before a final selection of actuators is made, the appropriate kinematics and the control-engineering structure, according to the previous chapters, should have been fixed. However, in order to handle these questions in a reasonable way, some basic understanding about actuators is mandatory. Especially the available energy densities, forces and displacements should be estimated correctly for the intended haptic application. This section provides some suggestions and guidelines to help and preselect appropriate actuators according to typical requirements.

## *9.1.1 Overview of Actuator Principles*

There are a certain number of approaches to transform an arbitrary energy source into mechanical energy. Each of these approaches is one actuation principle. The best known and most frequently used principles are:


Each of these principles is used in different embodiments. They mainly differ in the exact effective direction of e.g. a force vector<sup>1</sup> or in the building principle.<sup>2</sup> As a consequence, a wide-spread terminology exists for naming actuators. The major terms are given here:

<sup>1</sup> The electromagnet principle for instance is divided into magnetic actuators and actuators according to the reluctance principle; the piezoelectric principle is subdivided into three versions depending on the relative orientation of electrical field and movement direction.

<sup>2</sup> E.g. resonance drives versus direct drives.


coil. These actuators have very nonlinear force-displacement characteristics, but can create high forces with comparably little power required.


Each of the above actuation principles can be found in tactile and/or kinaesthetic systems. To simplify the decision process for a new design, all actuators can be grouped into classes. Most of the physical working principles can be grouped either into "self-locking" or "free-wheeling" systems.<sup>3</sup> These groups are identical to:


According to the basic structures of haptic systems (Chap. 6) it is likely that both classes are used within different haptic systems. The correlation between basic structures of haptic systems and actuators is depicted in Table 9.1. This table shows a tendency towards typical applications. However, actuators can be "impedance-matched" to a certain application. This happens by adding mechanical elements (springs, dampers) in series to the actuator. By this it is possible to use any actuator for any basic structure of haptic systems, trading in advantages for disadvantages, which may be justified by the specific application and its requirements.

## *9.1.2 Actuator Selection Aid Based on Its Dynamics*

Different actuator designs according to the same physical principle still cover wide performance ranges regarding their achievable displacements or forces. Based on the author's experience, these properties are put into relation to the dynamic ranges relevant for haptic applications. In Fig. 9.1 the most important actuation principles are visualized in squares scaled according to their achievable displacements (a)<sup>4</sup> and typical output forces and torques (b). The area covered by a specific version of an actuator is typically smaller than the area shown here. The diagram should be read in such a way that, e.g. for haptic applications, electromagnetic linear actuators exist providing a displacement of up to 5 mm at ≈50 Hz. These designs are not necessarily the same actuators which are able to provide ≈200 N, as with electromagnetic systems the effectively available force increases with smaller maximum displacement (Sect. 9.4). The diagrams in Fig. 9.1 visualize the bandwidths of realization possibilities according to a certain actuation principle and document the preferred dynamic range of their application. Using the diagrams, we have to keep in mind

<sup>3</sup> This is, of course, a simplification. An actuator can be considered to have an internal impedance *z* and a source capability, e.g. a force *F* or a velocity *v*. The combination of both defines the impedance range actuators can address in dependency of the frequency *f*. This is similar to all other sources, be it electrical with a certain wattage at a certain voltage up to a threshold of current, or hydraulic where a certain flow can be provided up to a certain pressure. However, for the sake of simplification, actuators can be considered, first of all and within a certain operational range, *ideal* sources.

<sup>4</sup> For continuously rotating principles all displacements are regarded as unlimited.


**Table 9.1** Typical application areas for actuator principles in haptic systems

X: is frequently used by many groups and even commercialized

(X): some designs, especially in research


Type: Gives an idea about which actuator design (translatory or rotatory) is used more often. If the actuator is unusual but does exist, the marker is set into brackets

#### **Annotations:**

<sup>a</sup> in the meaning of a mechanically commutated drive with a power between 10–100 W

<sup>b</sup> by high frequency vibrations of the commutation

**Fig. 9.1** Order of the actuator principles according to the achievable displacement (**a**) and forces resp. torques (**b**) in dependency from their dynamics. Further information can be found in [1]

that the borders are fluid and have to be regarded in the context of the application and the actuator's individual design.

## *9.1.3 Gears*

In general machine engineering the usage of gears is a matter of choice for adapting actuators to their load and vice versa. Gears are available in many variants. A simple lever can be a gear; complex kinematics according to Chap. 8 are a strongly nonlinear gear. For haptic applications specialized gear designs are discussed for specific actuation principles in the corresponding chapters. However, there is one general aspect of the application of gears with relevance to the design of haptic systems which has to be discussed independently: the scaling of impedances.

There is no principal objection to the use of gears for the design of haptic systems. Each gear (Fig. 9.2), be it rotatory/rotatory (gearwheel or friction wheel), translatory/translatory (lever with small displacements), or rotatory/translatory (rope/cable/capstan), has a transmission ratio "*tr*". This transmission ratio scales forces

**Fig. 9.2** Simple gear design with wheels (**a**), a lever (**b**) and a cable, rope or belt (**c**)

and torques, neglecting losses due to friction, according to

$$\frac{F\_{\text{out}}}{F\_{\text{in}}} = tr = \frac{l\_2}{l\_1} \,, \tag{9.1}$$

$$\frac{M\_{\text{out}}}{M\_{\text{in}}} = tr = \frac{r\_2}{r\_1} \,, \tag{9.2}$$

$$\frac{F\_{\text{out}}}{M\_{\text{in}}} = tr = \frac{1}{2\pi |r\_1|}\,, \tag{9.3}$$

and displacements respectively angles according to

$$\frac{x\_{\text{in}}}{x\_{\text{out}}} = tr = \frac{l\_2}{l\_1} \,, \tag{9.4}$$

$$\frac{\alpha\_{\rm in}}{\alpha\_{\rm out}} = tr = \frac{r\_2}{r\_1} \,, \tag{9.5}$$

$$\frac{\alpha\_{\rm in}}{x\_{\rm out}} = tr = \frac{1}{2\pi |r\_1|}\,. \tag{9.6}$$

The velocities and angular velocities scale with the differentials of the above equations. Assuming the impedance of the actuator $\underline{Z}\_{\text{transl}} = \frac{\underline{F}}{\underline{v}}$ resp. $\underline{Z}\_{\text{rot}} = \frac{\underline{M}}{\underline{\dot{\alpha}}}$, one consequence of the load condition of a driven impedance $\underline{Z}\_{\text{out}}$ from the perspective of the motor is:

$$\underline{Z}\_{\text{transl}} = \frac{\underline{F}\_{\text{in}}}{\underline{v}\_{\text{in}}} = \frac{\underline{F}\_{\text{out}}}{\underline{v}\_{\text{out}}} \frac{1}{tr^2} = \underline{Z}\_{\text{transl out}} \frac{1}{tr^2} \tag{9.7}$$

$$\underline{Z}\_{\rm rot} = \frac{\underline{M}\_{\rm in}}{\underline{\dot{\alpha}}\_{\rm in}} = \frac{\underline{M}\_{\rm out}}{\underline{\dot{\alpha}}\_{\rm out}} \frac{1}{tr^2} = \underline{Z}\_{\rm rot\ out} \frac{1}{tr^2} \tag{9.8}$$

The transmission ratio *tr* enters quadratically into the calculation of impedances. From the perspective of an actuator, the driven impedance of a system gets small with a gear showing transmission ratios larger than one. This is favourable for the actuating system (and the original reason for the application of gears). For haptic applications, especially impedance-controlled ones, the opposite case has to be considered. In an idle situation and with a transmission ratio larger than one,<sup>5</sup> the perceived mechanical impedance of a system *Z*<sub>out</sub> increases with the power of two of the transmission ratio. Another aspect makes this fact even more critical: the output force increases only linearly with the transmission ratio, whereas e.g. a motor's unwanted moment of inertia is felt to increase quadratically.

<sup>5</sup> Which is the normal case, as typically the fast movement of an actuator is transmitted into a slower movement of the mechanism.

**Fig. 9.3** Rope-based gear as widely used in haptic interfaces. The driven structure is connected to a lever, on which the driving rope is running. The driving rope is wound around the driving shaft of the motor. The number of revolutions around the shaft is determined by the amount of torque to be transmitted via the gear, threads are used to minimize friction and wearout between individual

This effect is obvious to anyone who has ever tried to rotate a gear-motor with a high transmission ratio (e.g. *tr* > 100) at its output. The inertia and internal frictions upscaled by the gear are identical to a self-locking of the actuator.
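The quadratic scaling can be made concrete with a few lines: a motor-side inertia reflected through a gear grows with tr², while the torque gain is only linear. The motor values below are illustrative.

```python
J_m = 1e-6  # motor rotor inertia [kg*m^2], illustrative value
M_m = 0.01  # available motor torque [Nm], illustrative value

reflected = []
for tr in (1, 5, 20, 100):
    torque_out = M_m * tr        # output torque: linear in tr, cf. Eq. (9.2)
    inertia_out = J_m * tr ** 2  # perceived inertia: quadratic in tr, cf. Eq. (9.8)
    reflected.append((tr, torque_out, inertia_out))

# at tr = 100 the torque gain is 100x, but the felt inertia gain is 10000x
torque_gain = reflected[-1][1] / reflected[0][1]
inertia_gain = reflected[-1][2] / reflected[0][2]
```

This asymmetry is why high transmission ratios make a haptic device feel sluggish or even self-locking, as described above.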

As a consequence, the usage of gears with force-controlled haptic systems makes sense only for transmission ratios of 1–20 (with emphasis on the lower transmission ratios between 3–6). For higher transmission ratios, designs according to Fig. 9.2c and Eq. (9.6) based on ropes or belts have proved valid. They are used in many commercial systems, as with the aid of the definition $tr = \frac{1}{2\pi r\_1}$ and the included factor 2π a comparably high transmission ratio can be achieved easily. In combination with rotatory actuators (typically EC drives) with low internal impedances, this design shows very impressive dynamic properties. Figure 9.3 shows an example of the application of such a gear to drive a delta mechanism [2].

Recently, a new type of gear has come to the attention of several researchers [3]. The Twisted-String-Actuator (TSA) is based on a relatively small motor with a high rotational speed that twists a string or a set of strings. Because of the twisting, the strings contract and provide pulling forces in the range of several tens of newtons, which can be transferred via Bowden cables. Applications include in particular exoskeletons, as for example presented in [4], and other weight-sensitive devices.

Some advice may be given here from practical experience: wheel-based gears are applicable to haptic systems but tend to generate uneven, rippling output torques due to their toothing. With a careful mechanical design this unsteadiness can be reduced. The mechanical backlash should be minimized (which is typically accompanied by an increase in friction), for example by material combinations with at least one soft material. At least one gear should have straight (spur) teeth, whereas the other one can keep an involute profile.

## **9.2 Electrodynamic Actuators**

#### Thorsten A. Kern

Electrodynamic actuators are the most frequently used type of drive for haptic applications. This popularity is a result of the direct proportionality between their output value (force or torque) and their input value (the electrical current). In kinaesthetic applications they are typically used as open-loop controlled force sources. In tactile applications these very dynamic actuators are frequently used as oscillators or exciters that move a mass and, through inertia and the system reaction, create a buzz feeling. They can be found equally often as rotary and as translational actuators. Depending on the design, either the electrical coil or the magnet is the moving component of the actuator, whereas the other part is fixed to the device. This section gives a short introduction to the mathematical basics of electrodynamic systems. Afterward, some design variants are discussed in more detail. The final subsection deals with the drive electronics necessary to control electrodynamic systems.

## *9.2.1 The Electrodynamic Effect and Its Influencing Variables*

Electrodynamic actuators are based on the Lorentz-force

$$\mathbf{F}\_{\text{Lorentz}} = \mathbf{i} \cdot l \times \mathbf{B},\tag{9.9}$$

acting upon moving charges in a magnetic field. The Lorentz force depends on the current **i**, the magnetic induction **B**, and the length of the conductor *l*, which is typically formed as a coil. This subsection deals with the optimization of each parameter for the maximization of the generated output force *F*Lorentz. Any electrodynamic actuator is made of three components: the coil or winding, the magnetic circuit, and the source of magnetic excitation (typically a permanent magnet).

#### 9 Actuator Design 319


At first glance, Eq. (9.9) suggests that the output force could be maximized by simply increasing the current **i** in the conductor. However, with a given and limited space for the conductor length *l* (the coil's cross section), and a flux density **B** with an upper bound (0.8 to 1.4 T), the effectiveness of this measure has to be questioned. This can be shown with a simple calculation example.

#### **9.2.1.1 Efficiency Factor of Electrodynamic Actuators**

As an example, a straightforward design of an electrodynamic actuator similar to the AVN 20–10 (Fig. 9.4) is analyzed. It contains a wound coil and a permanent magnet in a ferromagnetic core. The electrical power loss *P*el of this electrodynamic system is generated mainly in a small moving coil with a purely ohmic resistance *R*coil = 3.5 Ω at a nominal current *i* = 0.78 A:

$$P\_{\rm el} = R\_{\rm coil} i^2 = 3.5 \,\Omega \cdot (0.78 \,\text{A})^2 = 2.13 \,\text{W}. \tag{9.10}$$

With this electrical power loss, a flux density **B** = 1.2 T, an orthogonal conductor orientation, and a conductor length within the air gap of *l* = 1.58 m, the actuator generates the force

**Fig. 9.4** Moving-coil actuator and corresponding functional elements

**Fig. 9.5** Actuator as an exciter, moving mass-type actuator with fixed coil, Grewus Exciter EXS241408WA

$$F\_{\text{Lorentz}} = i \, l \, B = 0.78 \,\text{A} \cdot 1.58 \,\text{m} \cdot 1.2 \,\text{T} = 1.48 \,\text{N} \,\text{.}\tag{9.11}$$

Assuming the system is driven in idle mode (working only against the coil's own mass of *m* = 8.8 g), accelerated from rest, and performing a displacement of *x* = 10 mm, the above electrical power *P*el is needed for a period of

$$t = \sqrt{2\frac{x}{a}} = \sqrt{2\frac{x\,m}{F}} = 0.011\,\text{s}\tag{9.12}$$

The electrical energy loss sums up to

$$W\_{\rm el} = P\_{\rm el} \cdot t = 23.4 \,\text{mJ}. \tag{9.13}$$

This gives an efficiency factor of *W*mech/(*W*el + *W*mech) = 38% for idle mode and continuous acceleration. This is nevertheless a valid working point, leading to exciter-type actuators (Fig. 9.5) whose efficiency and primary use in mobile applications derive from a highly dynamic movement.

Assuming now that such an actuator shall generate a force of 1 N against a fingertip for a period of e.g. two seconds, an electrical energy of *W*el = 2.13 W · 2 s = 4.26 J is needed. This corresponds to an efficiency factor well below 1%. Indeed, the efficiency factor of electrodynamic actuators in haptic applications lies in the range of a few percent, due to the common requirement to generate quasi-static forces without much movement. This simple calculation points to one major challenge with electrodynamic actuators: the electrical power lost as heat exceeds the mechanically generated power by far. Consequently, the thermal management of energy losses is key during the design of electrodynamic actuators.
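The worked example of Eqs. (9.10)–(9.13) can be reproduced numerically; a minimal Python sketch using the values quoted above:

```python
from math import sqrt

# Values from the AVN 20-10 example above
R_coil, i = 3.5, 0.78                  # ohm, A
B, l = 1.2, 1.58                       # T, m
m, x = 8.8e-3, 10e-3                   # kg, m

P_el = R_coil * i**2                   # Eq. (9.10): power loss, ~2.13 W
F = i * l * B                          # Eq. (9.11): Lorentz force, ~1.48 N
t = sqrt(2 * x * m / F)                # Eq. (9.12): time to travel x, ~11 ms
W_el = P_el * t                        # Eq. (9.13): electrical energy, ~23 mJ
W_mech = F * x                         # mechanical energy delivered, ~14.8 mJ
eta = W_mech / (W_el + W_mech)         # efficiency ~38% while accelerating

# Holding 1 N statically for 2 s costs P_el * 2 s = 4.26 J for zero
# mechanical work, i.e. an efficiency far below 1 %.
```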

#### **9.2.1.2 Minimization of the Power Loss**

Typical designs of electrodynamic actuators either have a wound conductor which is self-supporting, or one which is wound on a coil carrier (Fig. 9.6).

**Fig. 9.6** Cross-section through a cylindrical electrodynamic actuator according to the moving coil principle

The available space for the electrical coil within the homogeneous magnetic flux is limited (*A*Coil). The number of coil turns *N*Conductor within this area is limited too, due to the cross-sectional surface *A*Conductor a single turn needs. This cross-sectional surface is always larger than the actual cross section of the conductor, as the winding has gaps between single turns (Eq. (9.15)). Additionally, the actual conducting core with the cross-sectional surface *A*Core is smaller than the cross section of the conductor itself due to its insulation. Both parameters describe geometrical losses in cross section; they are available in tables of technical handbooks [5] and are combined into a factor *k* ≥ 1 according to Eq. (9.14). The length *l* of the conductor is easily calculated by multiplying the number of turns with the mean circumference *Circum* of the coil (Eq. (9.16)).

The choice of the conductor's diameter influences the resistance of the coil via the conducting area *A*Core, as given by Eq. (9.17). Large conductor diameters with large cross sections *A*Core yield coils that conduct high currents at low voltages but, due to the limited volume available, have few windings. Small diameters require lower currents at higher voltages and allow more windings. By a careful choice of the wire diameter, the winding can be adjusted as a load to the corresponding source to drain the maximum available power.

The power loss *P*Loss (Eq. (9.18)) acceptable within a given winding is limited. This limit is defined by the ability of the generated heat to dissipate. As a rule of thumb, a standard copper winding can carry 4 A/mm² continuously (if able to dissipate heat to one side). In the case of printed circuit boards (PCBs), the current density for copper can be increased to 20–40 A/mm² due to the very good thermal coupling between copper and the environment. The actual technical limit depends on the duration of continuous operation, the thermal capacity resulting from the volume and materials of the actuator, and a potential cooling system. A calculation of heat transfer is specific to the technical solution and cannot be covered in general within this book. The dependency of the Lorentz force on the power loss, however, can be formulated:

$$A\_{\text{Conductor}} = k \cdot A\_{\text{Core}} \tag{9.14}$$

$$N\_{\text{Conductor}} = \frac{A\_{\text{Coil}}}{A\_{\text{Conductor}}} \tag{9.15}$$

$$l\_{\text{Conductor}} = N\_{\text{Conductor}} \cdot Circum \tag{9.16}$$

$$R\_{\text{spezf.}} = \frac{l\_{\text{Conductor}} \, \rho}{A\_{\text{Core}}} \tag{9.17}$$

$$P\_{\rm Loss} = i^2 \cdot R\_{\rm Coil} \tag{9.18}$$

From Eq. (9.18) it follows that

$$i = \sqrt{\frac{P\_{\text{Loss}}}{R\_{\text{Coil}}}} \tag{9.19}$$

Inserting Eq. (9.17) yields

$$i = \sqrt{\frac{P\_{\text{Loss}} \, A\_{\text{Core}}}{\rho \, l\_{\text{Conductor}}}} \tag{9.20}$$

which, put into Eq. (9.9) (keeping the direction of current flow **e***i*), gives

$$F\_{\text{Lorentz}} = \sqrt{\frac{P\_{\text{Loss}} \, A\_{\text{Core}} \, l\_{\text{Conductor}}}{\rho}} \mathbf{e}\_i \times \mathbf{B} \tag{9.21}$$

Considering Eqs. (9.15) and (9.16), the result is

$$F\_{\text{Lorentz}} = \sqrt{\frac{P\_{\text{Loss}} \, A\_{\text{Coil}} \, Circum}{\rho \, k}} \mathbf{e}\_i \times \mathbf{B} \tag{9.22}$$

Equations (9.15)–(9.18) put into Eq. (9.9) give a precise picture of the influencing values on the Lorentz force (Eq. (9.22)). The level of Lorentz force is set by the power loss *P*Loss acceptable within the coil. If there is room for modifications to the geometrical design of the actuator, the cross-sectional area of the coil and the circumference of the winding should be maximized. Additionally, a choice of alternative materials (e.g. alloy instead of copper) may minimize the electrical resistance. Furthermore, the filling factor *k* should be reduced; one approach is the use of wires with a rectangular cross section to avoid empty spaces between the single turns.
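The substitution of Eqs. (9.14)–(9.17) into Eq. (9.9) can be checked numerically. The Python sketch below uses assumed winding parameters (not data from a real actuator) and shows that the step-by-step substitution and the closed form agree; the number of turns cancels out of the closed form:

```python
from math import sqrt, isclose

# Illustrative winding parameters (assumed values)
rho = 1.7e-8      # resistivity of copper, ohm*m
k = 1.3           # geometrical loss factor, Eq. (9.14)
A_coil = 2e-5     # available winding window, m^2
circum = 0.05     # mean circumference of one turn, m
N = 200           # number of turns
P_loss = 2.0      # acceptable power loss, W
B = 1.2           # flux density in the air gap, T

# Step-by-step substitution of Eqs. (9.14)-(9.17)
A_conductor = A_coil / N           # Eq. (9.15)
A_core = A_conductor / k           # Eq. (9.14)
l = N * circum                     # Eq. (9.16)
R_coil = rho * l / A_core          # resistance of the winding
i = sqrt(P_loss / R_coil)          # Eq. (9.19)
F_direct = i * l * B               # Eq. (9.9), conductor orthogonal to B

# Closed form: N has cancelled out
F_closed = sqrt(P_loss * A_coil * circum / (rho * k)) * B
assert isclose(F_direct, F_closed, rel_tol=1e-9)
```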

The question of the maximum current itself is only relevant in combination with the available voltage, in the context of adjusting the electrical load to a specific source. In this case, for a source with *i*Source and *u*Source, the corresponding coil resistance has to be chosen according to Eq. (9.23).

$$\begin{aligned} P\_{\text{Source}} &= u\_{\text{Source}} \cdot i\_{\text{Source}} = i\_{\text{Source}}^2 \cdot R\_{\text{Coil}}\\ R\_{\text{Coil}} &= \frac{P\_{\text{Source}}}{i\_{\text{Source}}^2} \end{aligned} \tag{9.23}$$
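A minimal sketch of the load matching of Eq. (9.23), with assumed source values:

```python
def matched_coil_resistance(p_source, i_source):
    # Eq. (9.23): choose the winding resistance so that the coil
    # drains the full power of a given source at its maximum current
    return p_source / i_source**2

# Illustrative example: a 5 W source delivering at most 1.25 A
# calls for a 3.2 ohm winding
R = matched_coil_resistance(5.0, 1.25)
```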

Perhaps surprisingly, from the perspective of a realistic design an increase in current is not necessarily the preferred option for increasing the Lorentz force according to Eq. (9.22). Optimizing *P*Loss by adding cooling, or analyzing the temporal pattern of on- and off-times, is much more relevant. Additionally, the flux density **B** has, compared to all other influencing factors (which enter only under the square root), a quadratically stronger influence on the maximum force.

#### **9.2.1.3 Maximization of the Magnetic Flux-Density**

For the optimization of electrodynamic actuators, a maximization of the flux density **B** is necessary within the area where the conducting coils are located. This area is usually called the air gap and is an interruption of the otherwise closed ferromagnetic core conducting the magnetic flux. The magnetic flux density is influenced by


In the context of this book some basic design criteria for magnetic circuits are given. For an advanced discussion and optimization process source [6] is recommended.

#### **Basics for the Calculation of Magnetic Circuits**

The calculation of magnetic circuits shows several parallels to the calculation of electrical networks. As shown in Table 9.2, several analogies between electrical and magnetic variables can be defined.

The direct analogy to the magnetic flux φ is the electrical current *I*. Please note that this is an aid for thinking and not a mathematical reality, although it is very common. The actual direct analogy to the current *I* would be a time-dependent magnetic flux *d*φ/*dt*, which usually has no variable name of its own. The great exception in this model is the magnetomotive force Θ, which corresponds to the sum of all magnetic voltages *V* along a closed loop, just like a mesh equation in an electrical network. Put differently: it is the source of potential difference in a magnetic network. Nevertheless it is treated differently, as many applications require a magnetomotive force Θ to be generated by a certain number of winding turns *N* and a current *I*, often referred to as ampere-turns. The coupling between field and flux variables is given by the permittivity ε in the case of electrical values and by the permeability μ in the case


**Table 9.2** Analogies between electric and magnetic values

of magnetic values. It is obvious that the field constant ε<sup>0</sup> differs from μ<sup>0</sup> by a factor of 10<sup>6</sup>. This is the main reason why the electromagnetic effect is the preferred physical realization of actuators in macroscopic systems.<sup>6</sup>

The above dependencies, however, although valid, assume linearity. The electrical permittivity can be regarded as nearly constant (Sect. 9.5) even for complex actuator designs, and can be approximated as linear around an operating point. The permeability μ*r* of typical flux-conducting materials, however, shows a strongly nonlinear behaviour: the materials reach saturation. The level of magnetic flux has to be limited in the design of the magnetic core to prevent saturation effects.

<sup>6</sup> In micro-mechanical systems the energy-density relative to the volume becomes more important. The manufacture of miniaturized plates for capacitive actuators is much easier to realize with batch processes than the manufacture of miniaturized magnetic circuits.

## **Magnetic Circuits**

For the maximization of the magnetic flux density it is necessary to analyze the magnetic circuit analytically and/or to simulate it numerically. For the simulation of magnetic fields, common CAD and FEM products are available.<sup>7</sup> For the classification of the mathematical problem, three solution levels exist: stationary, quasi-stationary, and dynamic magnetic fields. With stationary magnetic fields there is no time-dependent change of the magnetic circuit; a steady state of flux density is assumed. With quasi-stationary fields, the induction resulting from changes in the field-generating current or from a linearized change in the geometry of the magnetic circuit (e.g. the movement of an armature) is considered. Dynamic magnetic fields additionally cover the dynamic properties of moving mechanical components, up to changes of the geometry of the magnetic circuit and the air gaps during operation. For electrodynamic actuators, the analysis of static magnetic circuits is sufficient for a first dimensioning. The relevant dynamic drawbacks of electrodynamic actuators are presented in Sect. 9.2.1.4.

There are two principal possibilities to generate the magnetic flux density within the volume of a conducting coil: a current-carrying exciter winding, or a permanent magnet.


Both approaches have specific pros and cons. With a wound conductor, the flux density *B* = μ0 (*N I* − *H*Fe *l*Fe)/ξ*G* can be raised without any theoretical limit. In practical application, the flux-conducting material reaches saturation (Fig. 9.7), actually limiting the achievable maximum flux density. Additionally, the ohmic resistance of the winding generates electrical power losses, which have to be dissipated in addition to the losses resulting from the electrodynamic principle itself (Sect. 9.2.1.1). By abandoning any flux-conducting material and using exciter windings with extremely low electrical resistance, extraordinarily high flux densities can be reached.<sup>8</sup> Until now, such a technological effort has not been made for haptic devices.

When building a magnetic circuit with a permanent magnet, the practical limit for the flux density is given by the remanence flux density *Br* of the magnetic material. Such a magnet can be compared to a source providing a certain magnetic power. The flux density, being the relevant quantity for electrodynamic actuators, is not independent of the magnetic load attached to the permanent magnet. Additionally, the relevant properties of the magnetic material are temperature-dependent, and wrong use of specific magnet materials may harm their magnetic properties.<sup>9</sup>

Nevertheless modern permanent-magnetic materials made of "rare earths" are the preferred source to generate static magnetic fields for electrodynamic actuators. The

<sup>7</sup> For the very beginning there are several free or open software-projects available for electrical and magnetic field simulation, e.g. for rotatory or planar systems a program from David Meeker named "FEMM" www.femm.info.

<sup>8</sup> MRI systems for medical imaging generate field densities of 2 T and more within air gaps of up to 1m diameter by the use of supra-conducting coils and almost no magnetic circuit at all.

<sup>9</sup> E.g. when removing AlNiCo magnets out of their magnetic circuit after magnetization, they may drop below their coercive field strength actually losing performance.

**Fig. 9.7** Saturation curve of typical magnetic materials [6] © Springer Nature, all rights reserved

following section gives some basics on the calculation of simple magnetic circuits. Beyond what is shown here, a more precise analytical calculation is possible [6]. However, it is recommended to use simulation tools early in the design process. Leakage fields in particular are a great challenge for the design of magnetic circuits, and beginners especially should develop a feeling for the shape of these fields with the aid of simulation tools.

#### *Direct Current Magnetic Field*

Figure 9.8a shows a magnetic circuit made of iron with a cross section *A* and an air gap of length ξ*G* (*G* = gap). The magnetic circuit carries a winding with *N* turns conducting a current *I*. The mean length of the magnetic circuit is *l*Fe. For calculation, the circuit can be transformed into a magnetic equivalent network (Fig. 9.8b). According to the analogies defined in Table 9.2, the winding generates a magnetomotive force Θ. In combination with the two magnetic resistances of the iron circuit *R*mFe and the air gap *R*mG, a magnetic flux φ can be identified.

For the calculation of the flux density *B* in the air gap, it is assumed that this magnetic flux φ is identical to the flux within the iron part of the circuit. Leakage fields are disregarded in this example.<sup>10</sup>

<sup>10</sup> Considering leakage fields would be identical to a parallel connection of additional magnetic resistors to the resistance of the air gap.

**Fig. 9.8** Magnetic field generation *B* via a current-conducting coil with *N* turns (**a**), and derived equivalent circuit representation (**b**)

$$B = \frac{\phi}{A}$$

The magnetic resistances of materials and surfaces depend on the geometry and can be found in special tables [6]. For the magnetic resistance of a cylinder of length *l* and diameter *d*, Eq. (9.24) applies.

$$R\_m = \frac{4l}{\mu \,\,\pi \,\,d^2} \tag{9.24}$$

For the magnetic circuit the magnetic resistances *Rm*Fe and *Rm*<sup>G</sup> can be regarded as known or at least calculable. The magnetic flux is given by

$$\phi = \frac{\Theta}{R\_{m\text{Fe}} + R\_{m\text{G}}},\tag{9.25}$$

and the flux density by

$$B = \frac{\Theta}{\left(R\_{\rm mFe} + R\_{\rm mG}\right)A}.\tag{9.26}$$

Using this procedure, a good approximation of any complex network of magnetic resistances can be made. In the specific case of a simple horseshoe-shaped magnet, an alternative approach can be chosen. Assuming that the magnetic flux density in the air gap is identical to the flux density in the iron (no leakage fields, see above), the flux density *B* is given by:

$$B = \mu\_0 \mu\_r \, H \tag{9.27}$$

Assuming that μ*r* is given either as a factor or via a characteristic curve (as in Fig. 9.7), only the magnetomotive force Θ has to be calculated. With

$$\Theta = H\_{\rm Fe} \, l\_{\rm Fe} + H\_{\rm G} \, \xi\_{\rm G} = \frac{B}{\mu\_0 \mu\_r} \, l\_{\rm Fe} + \frac{B}{\mu\_0} \, \xi\_{\rm G} \tag{9.28}$$

the flux density

$$B = \Theta \frac{1}{\frac{l\_{\text{Fe}}}{\mu\_0 \mu\_r} + \frac{\xi\_G}{\mu\_0}},\tag{9.29}$$

results and can be written down immediately. The generalized model of a coil in a magnetic circuit is that of an ideal magnetic voltage source.
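As a sketch, Eq. (9.29) can be evaluated for an assumed horseshoe circuit; all dimensions below are illustrative:

```python
from math import pi

MU0 = 4e-7 * pi  # vacuum permeability, H/m

def flux_density_gap(N, I, l_fe, xi_g, mu_r):
    """Air-gap flux density of the circuit of Fig. 9.8, Eq. (9.29).

    Assumes a constant mu_r, no leakage fields, and equal
    cross-sections in iron and air gap.
    """
    theta = N * I  # magnetomotive force in ampere-turns, Eq. (9.28)
    return theta / (l_fe / (MU0 * mu_r) + xi_g / MU0)

# Illustrative circuit: 500 turns at 1 A, 100 mm iron path,
# 1 mm air gap, mu_r = 4000 -> roughly 0.6 T
B = flux_density_gap(500, 1.0, 0.1, 1e-3, 4000)
```

Note how the air-gap term ξ*G*/μ0 dominates the denominator: widening the gap reduces the achievable flux density much more strongly than lengthening the iron path.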

#### *Permanent Magnets Generating the Magnetic Field*

As stated earlier the typical approach to generate the magnetic field within an electrodynamic actuator is by using a permanent magnet. Permanent magnets are not just some ideal flux- or field-sources. Therefore some basic understanding of magnet technology will be necessary.

As a simple approach, a magnet is a source of energy proportional to the volume of the magnet. Magnets are made of different magnetic materials (Table 9.3), differing in the maximum achievable flux density (remanence flux density *Br*), the maximum field strengths (coercive field strengths *H*cB and *H*cJ), their energy density (*B H*)max, and the temperature coefficient. Additionally, identical materials are differentiated according to being isotropic or anisotropic. With anisotropic magnets, the substance is a homogeneous material which can be magnetized in one preferred direction. With isotropic material, a magnetic powder is mixed with a binding material (e.g. epoxy) and formed via a casting or injection-molding process. The latter approach enables almost unlimited freedom for the magnet's geometry and a very large influence on the pole distribution on the magnet. However, isotropic magnets are characterized by slightly worse characteristic values in energy density as well as in maximum field strengths and flux densities.

Figure 9.9 shows the second quadrant of the *B*-*H*-characteristic curve (only this quadrant is relevant for an application of a magnet within an actuator) of different magnetic materials. The remanence flux density *Br* equals the flux density with


**Table 9.3** Magnetic properties of permanent-magnet materials [6] © Springer Nature, all rights reserved

**Fig. 9.9** Demagnetization curves of different permanent-magnet materials [6] © Springer Nature, all rights reserved

short-circuited pole shoes (a magnet surrounded by ideal iron as magnetic circuit). When there is an air gap within the magnetic circuit (or even just the magnetic resistance of the real magnetic circuit material itself), a magnetic field strength *H* appears as a load. As a reaction, an operating point is reached, shown here as an example on a curve of NdFeB at a field strength of ≈200 kA/m. The flux density actually available at the poles is decreased accordingly. As electrodynamic actuators for haptic applications face high requirements on energy density, there are almost no alternatives to the use of magnet materials based on rare earths (NdFeB, SmCo). This is very convenient for the design of the magnetic circuit, as nonlinear effects near the coercive field strength, as with AlNiCo or barium ferrite, are of no relevance.<sup>11</sup> Rare-earth magnets allow an approximation of their *B*-*H* curve with a linear equation, providing a very handy expression for their magnetic resistance (Fig. 9.10c):

$$R\_{\rm Mag} = \frac{V}{\phi} = \frac{H\_c \, l\_{\rm Mag}}{B\_r \, A} \tag{9.30}$$

Equation (9.30) and Fig. 9.10c reveal the actual mental model of a permanent magnet in a circuit: at its working point, it can be considered a linear, non-ideal magnetic voltage source *V* = *Hc l*Mag with an internal resistance *R*Mag.

<sup>11</sup> The small coercive field strength of these materials results e.g. in the effect that a magnet magnetized within a magnetic circuit does not reach its original flux density anymore once removed, even after re-assembly into the circuit. This happens due to the temporary increase of the air gap, which is identical to an increase of the magnetic load on the magnet beyond the coercive field strength. Additionally, the temperature dependency of the coercive field strength and of the remanence flux density is critical: temperatures just below the freezing point may already result in a demagnetization of the magnet.

**Fig. 9.10** Magnetic field generation *B* via permanent magnets (**a**), derived equivalent circuit (**b**), and dimensions of the magnet (**c**)

With this knowledge, the magnetic circuit of Fig. 9.10a and the corresponding equivalent circuit (Fig. 9.10b) can be calculated in the same way as an electrically excited magnetic circuit.

The flux density within the iron is once again given by

$$B = \frac{\phi}{A} \tag{9.31}$$

For the given magnetic circuit, the resistances *Rm*Fe and *Rm*G are assumed to be known or calculable. From Eq. (9.30) the magnetic resistance of the permanent magnet is known. The source within the equivalent circuit is defined by the coercive field strength and the length of the magnet, *Hc l*Mag. These considerations result in

$$\phi = \frac{H\_{\text{c}} \, l\_{\text{Mag}}}{R\_{m\text{Fe}} + R\_{m\text{G}} + R\_{\text{Mag}}},\tag{9.32}$$

and the flux density

$$B = \frac{H\_c \, l\_{\rm Mag}}{\left(R\_{m\text{Fe}} + R\_{m\text{G}} + R\_{\rm Mag}\right)A}. \tag{9.33}$$

Slightly rearranged, with *R*Mag inserted from Eq. (9.30), this gives

$$B = \frac{B\_r \, H\_c \frac{l\_{\text{Mag}}}{A}}{\left(R\_{m\text{Fe}} + R\_{m\text{G}}\right) B\_r + H\_c \frac{l\_{\text{Mag}}}{A}}.\tag{9.34}$$

Equation (9.34) shows, via the factor *Br Hc l*Mag/*A*, that for achieving a maximum flux density *B* in the air gap it is frequently helpful to increase the length of the magnet while at the same time minimizing the cross-sectional area of the magnetic circuit; this is of course limited by the working distance within the air gap and the saturation flux density of the magnetic circuit.
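Equations (9.30), (9.33) and (9.34) can be evaluated together. The magnet and circuit values in the Python sketch below are illustrative assumptions in the order of magnitude of a small NdFeB circuit:

```python
def flux_density_pm(H_c, B_r, l_mag, A, R_mfe, R_mg):
    """Air-gap flux density driven by a permanent magnet, Eq. (9.33).

    The magnet is modelled as a non-ideal magnetic voltage source
    V = H_c * l_mag with the internal resistance
    R_mag = H_c * l_mag / (B_r * A) of Eq. (9.30).
    """
    R_mag = H_c * l_mag / (B_r * A)
    return H_c * l_mag / ((R_mfe + R_mg + R_mag) * A)

# Illustrative values (assumed):
H_c, B_r = 9e5, 1.2      # A/m, T (order of magnitude of NdFeB)
l_mag, A = 5e-3, 1e-4    # magnet length in m, cross-section in m^2
R_mfe, R_mg = 1e5, 8e6   # magnetic resistances of iron and gap, 1/H

B = flux_density_pm(H_c, B_r, l_mag, A, R_mfe, R_mg)

# Eq. (9.34) is the same expression with R_mag eliminated:
B_34 = (B_r * H_c * l_mag / A) / ((R_mfe + R_mg) * B_r + H_c * l_mag / A)
assert abs(B - B_34) < 1e-9
```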

#### **9.2.1.4 Additional Effects in Electrodynamic Actuators**

To do a complete characterization of an electrodynamic actuator there are at least three more effects, whose influences will be sketched within the following paragraphs.

#### **Induction**

For a complete description of an electrodynamic actuator, the *dynamic* properties need to be considered in addition to the geometrical design of its magnetic circuit, the mechanical design of its winding, and the considerations concerning electrical power losses. For this analysis the electrodynamic actuator is regarded as a two-port transformer (Fig. 9.11).

A current *i*0 generates, via the proportionality constant *B l*, a force *F*0, which moves the mechanical load attached to the actuator. The movement results in a velocity *v*0, which is transformed via the law of induction and the same proportionality constant into an induced voltage *u*1. When driven by a current source, a measurement of *u*1 provides the rotational or translational velocity *v*; when driven by a voltage source, the measurement of *i*0 provides a force- or torque-proportional signal. This approach is taken by the variant of admittance-controlled devices as a control value (Sect. 6.7).

The induction itself is a measurable effect, but should not be overestimated. Typically, electrodynamic actuators are used as direct drives at small rotational or translational velocities in haptic systems. Typical coupling factors of rotatory drives lie, depending on the size of the actuator, in a range of 10 to 100 revolutions/(s V). At a rotational speed of 10 Hz, which is already fast for direct drives, induced voltage amplitudes |*u*1| of 0.1–1 V result. This is around 1–5% of the control voltage's amplitude.
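A quick plausibility check of these numbers, using the coupling factors and speed quoted above (the function name is illustrative):

```python
def induced_voltage(speed_rev_per_s, coupling_rev_per_s_per_V):
    # Back-EMF amplitude |u1| of a rotary drive: rotational speed
    # divided by the coupling factor in revolutions/(s*V)
    return speed_rev_per_s / coupling_rev_per_s_per_V

# 10 rev/s with coupling factors of 100 down to 10 rev/(s*V)
u_low = induced_voltage(10, 100)   # 0.1 V
u_high = induced_voltage(10, 10)   # 1.0 V
```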

**Fig. 9.11** Electrical and mechanical equivalent circuit of an electrodynamic actuator as being a transformer

#### **Electrical Time Constant**

Another aspect resulting from the model of Fig. 9.11 is the electrical transfer characteristic. Typical inductances *L* of electrodynamic actuators lie in the range of 0.1–2 mH. The ohmic resistance of the winding depends largely on the actual design, but as a rule of thumb values between 10 and 100 Ω can be assumed. The step response of the electrical transfer system *i*0/*u*0 shows a time constant τ = *L*/*R* = 10–30 µs and thus lies within a frequency range above 10 kHz, which is clearly beyond the dynamic range relevant for haptics.
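The time constant estimate as a one-liner, with assumed mid-range values:

```python
def electrical_time_constant(L, R):
    # tau = L/R of the winding; the corresponding corner frequency
    # 1/(2*pi*tau) lies far above the haptically relevant band
    return L / R

# Assumed mid-range values: 1 mH and 50 ohm give tau = 20 us
tau = electrical_time_constant(1e-3, 50.0)
```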

#### **Field Response**

A factor which cannot be neglected so easily when using electrodynamic actuators for high forces is the feedback of the magnetic field generated by the winding on the static magnetic field. In the actuator from the example at the beginning (Fig. 9.4), positive currents generate a field opposing the field of the magnet. This influence can be considered by superposition of both field sources. Depending on the direction of current, this field either reinforces or weakens the static field. With awkward dimensioning, this can result in a direction-dependent variance of the actuator properties. The problem is not potential damage to the magnet (modern magnetic materials are sufficiently stable) but a variation of the magnetic flux density available within the air gap. An intentional application of this effect within an actuator can be found in the example of Fig. 9.52.

A deeper discussion about electrodynamic actuators based on concentrated elements can be found in [7].

## *9.2.2 Actual Actuator Design*

As stated earlier, electrodynamic actuators are composed of three basic components: coil/winding, magnetic circuit, and magnetic exciter. The following section describes a procedure for the design of electrodynamic actuators based on these components. As the common principle of excitation, a permanent magnet is assumed.

#### **9.2.2.1 Actuator Topology**

The most fundamental question in the design of an electrodynamic actuator is its topology. Usually it is known whether the system shall perform rotary or translational movements. Then the magnetic circuit, the location of the magnets, the pole shoes, and the coil itself can be varied systematically. A few common structures are shown in Fig. 9.12 for translational actuators and in Fig. 9.13 for rotatory actuators. In any case, the question should be asked whether the coil or the magnetic circuit shall move; by this variation, apparently complex geometrical arrangements can often be simplified drastically. However,

**Fig. 9.12** Variants of electrodynamic actuators for translational movement with moving magnets (**a**), moving coils (**b**), as plunger-type (**c**), and as flat-coil (**d**)

**Fig. 9.13** Variants of electrodynamic actuators for rotatory movements with self-supportive winding (**a**), and with disc-winding

it has to be considered that a moving magnet has more mass and can typically be moved less dynamically than a coil. On the other hand, there is no contact or commutation problem to be solved with non-moving windings.

## **Moving Coils**

Electrodynamic actuators according to the principle of moving coils with a fixed magnetic circuit are called "moving coil" actuators in the case of linear movement and "ironless rotor" actuators in the case of rotation. They always combine small moving masses with resulting high dynamics. The translatory version shows displacements of a few millimeters and is used especially in audio applications as a loudspeaker. Actuators according to the principle of moving coils have two disadvantages:

**Fig. 9.14** Design of an electrodynamic actuator with self-supportive winding according to the Faulhaber-principle. Picture courtesy of *Dr. Fritz Faulhaber GmbH*, Schöneich, Germany, used with permission


A similar situation is found with rotatory systems. Based on the electrodynamic principle, there are two types of windings applicable to rotatory servo-systems: the *Faulhaber* and the *Maxon* winding, named after the manufacturers of the same names. These actuators are also known as "iron-less" motors. Both winding principles allow the manufacture of self-supportive coils. A diagonal placement of conductors and a baking process after winding generate a structure sufficiently stable to withstand the centrifugal forces during operation. The baked coils are connected to the rotating axis via a disk; the complete rotor (Fig. 9.14) is built from these three components. Due to the very small inertia of the rotor, such actuators show impressive dynamic properties. The geometrical design allows placing the tubular winding around a fixed, diametrally magnetized magnet. This enables a further volume reduction compared to conventional actuators, as the housing only has to close the magnetic circuit instead of providing additional space for magnets.

Within the self-supportive winding, areas of parallel-lying conductors are combined into poles.<sup>12</sup> With moving coils there is always the need for a specialized

<sup>12</sup> The *Faulhaber* and the *Maxon* windings excel by a very clever winding technique: on a rotating cylinder, respectively a flatly pressed rectangular winding, poles can be combined by contacting closely located areas of an otherwise continuous wire.

contacting scheme, either via slip rings, electronic commutation, or mechanical switching. Depending on the number of poles, all coils are contacted at several points. In the case of mechanical switching, these contacts are placed on the axis of the rotor and connected via brushes to the fixed part of the actuator, the "stator". This design enables a continuous movement of the rotor, whereby the change of the current flow is made purely mechanically by the sliding of the brushes over the contact areas of the poles on the axis. This mechanical commutation is a switching procedure with an inductance connected in parallel.

As such actuators can be connected directly to a direct-current source, they are known as "DC-drives". As stated in Sect. 9.1, the term "DC-drive" is not limited to actuators according to the electrodynamic principle, but is also frequently applied to actuators following the electromagnetic principle (Sect. 9.4).

#### **Moving Magnet**

In the case of *translatory* systems (Fig. 9.12a), actuators according to the principle of a moving magnet are designed to provide large displacements with compact windings. The moving part of the actuator is composed almost completely of magnetic material, whose polarity direction may vary in its exact orientation. Actuators according to this principle are able to provide large power, but are expensive due to the quantity of magnet material necessary. Additionally, the moving magnet is heavy; the dynamics of the actuator is therefore lower than in the case of a moving coil. Nevertheless some very successful designs exist. A special form factor can be found in the TapticEngine (Fig. 9.15), specialized for a very slim design at a still comparably large accelerated mass. The translator follows a moving-magnet design with poles facing each other, forcing the magnetic flux to exit through the air gap, with coils wound flat on a magnetic back iron. The whole translator is spring-balanced and can operate in a wide frequency range with a clear resonance defined by the spring stiffness $k$ and the moving mass $m$: $f_r = \frac{1}{2\pi}\sqrt{\frac{k}{m}}$. A related but more straightforward design, built on rotational symmetry, is the HapCoilOne shown in Fig. 9.16, manufactured and sold by the French company Actronika. Due to the large moving mass, a combined damper and spring element and a very reasonable coil design, such a system provides a wide bandwidth at an excellent power level.
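The resonance relation above is easy to evaluate numerically. A minimal sketch in Python; the spring stiffness and moving mass below are purely illustrative values, not data of the TapticEngine or HapCoilOne:

```python
import math

def resonance_frequency(k: float, m: float) -> float:
    """Resonance f_r = 1/(2*pi) * sqrt(k/m) of a spring-balanced translator,
    with spring stiffness k in N/m and moving mass m in kg."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# Hypothetical example: k = 4000 N/m, m = 2.5 g -> roughly 201 Hz,
# i.e. within the frequency band of high tactile sensitivity
f_r = resonance_frequency(4000.0, 0.0025)
```

Doubling the moving mass lowers the resonance by a factor of √2, which is why moving-magnet exciters with heavy translators tend toward lower resonance frequencies.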

In the case of a *rotatory* system, a design with a moving magnet is comparable to a design with a moving coil. Figure 9.17 shows such a drive. The windings fixed to the stator are placed around a diametrally magnetized magnet, which rotates on an axis that frequently moves parts of the magnetic circuit as well. To provide the right current feed to the coils, the orientation of the rotor has to be measured. For this purpose, sensors based on the Hall effect or optical code wheels are used.

Electrodynamic actuators with moving magnets are known as EC-drives (electronically commutated). This term is not exclusive to electrodynamic actuators, as there are electronically commutated electromagnetic drives too. EC-drives, whether electrodynamic or electromagnetic, combined with the corresponding driver electronics are frequently known as servo-drives. Typically a servo-drive is an actuator able to follow a predefined movement path. Servo-drives are rarely used for haptic

**Fig. 9.15** TapticEngine as used in mobile devices of the company Apple. Flat electrodynamic actuator with moving magnet on a translator. Figure shows principle sketch (**a**), assembled unit (**b**) and disassembled unit with lower translator with spring and upper body forming the magnetic back iron visible (**c**)

**Fig. 9.16** Exciter-concept HapCoilOne by the company Actronika with moving-magnet design for high-performance haptic applications, © 2022 *Actronika*, used with permission

devices. However, EC-drives as such are frequently used in haptic applications, equipped with specialized driver electronics instead.

#### **9.2.2.2 Commutation in the Context of Haptic Systems**

If continuous rotations are required, the direction of the current flow has to be switched periodically. This process is called *commutation*. The necessary commutation of the current in rotating actuators has a big influence on the quality of the force respectively torque output.

**Fig. 9.17** Components of an EC-drive. Pictures courtesy of *Dr. Fritz Faulhaber GmbH*, Schöneich, Germany, used with permission

## **Mechanically Commutating Actuators**

With mechanically commutating actuators, the current flow is interrupted abruptly. Two effects of the switching contacts appear: the voltage at the contact point increases, and sparks may become visible, an effect called brush sparking. Additionally, the remaining current flow induces a current within the switched-off part of the winding, which itself results in a measurable torque. Depending on the size of the motor, this torque can be felt when interacting with a haptic system and has to be considered in the design.

The current and torque changes can be reduced by the inclusion of resistors and capacitors in the winding. However, this results in a higher rotor mass and worse dynamic properties, and a full compensation is impossible anyway. Nevertheless, mechanically commutating actuators are in use for inexpensive haptic systems: the Geomagic Touch from *Geomagic* and the Falcon from *Novint* use such actuators.

#### **Electronic Commutated Electrodynamic Actuators**

Electronically commutated electrodynamic actuators differ from mechanically commutated actuators by the measurement technology used as a basis for switching the currents. There are four typical designs for this technology:

• In sensor-less designs (Fig. 9.18a) the induced voltage is measured within a coil. At its zero-crossing point, one pole is excited with a voltage after an interpolated 30° phase delay dependent on the actual revolution speed of the rotor. By combining the measurement of the induced voltage with a switched excitation, a continuous rotation with batch-wise excitation is realized. This procedure cannot be applied at low rotation speeds, as the induced voltage becomes too low and accordingly the switching point can hardly be interpolated. Additionally, the concept of using

**Fig. 9.18** Technologies for different commutation methods: sensor-less (**a**), block-commutation (**b**) and optical code-wheel (**c**)

one to two coils for torque generation results in high torque variations at the output of up to 20%, making this approach not useful for haptic systems.



• Sinusoidal commutation with digital code-wheels (Fig. 9.18c) is based on the measurement of the rotor position by the use of, usually optical, code discs. By reflective or transmissive measurement, the rotor position is sampled with high resolution. This relative position information can be used for position measurement after an initial calibration. Depending on the code-wheel's resolution, a very smooth sinusoidal commutation can be achieved with this method.

Sinusoidal commutation methods are the preferred solution for haptic applications due to their low torque ripple and their applicability at the slow revolution speeds typical of direct drives.
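The low torque ripple of sinusoidal commutation can be made plausible with a short sketch: if each of three phases contributes a torque proportional to its current times the sine of its angular position, driving the phases with sinusoidal currents derived from the measured rotor angle yields a constant total torque. The three-phase arrangement, the simple per-phase torque model and the torque constant are illustrative assumptions, not data of a specific motor:

```python
import math

TWO_THIRDS_PI = 2.0 * math.pi / 3.0

def phase_currents(theta: float, i_amp: float):
    """Three sinusoidal phase currents for electrical rotor angle theta (rad)."""
    return (i_amp * math.sin(theta),
            i_amp * math.sin(theta - TWO_THIRDS_PI),
            i_amp * math.sin(theta + TWO_THIRDS_PI))

def total_torque(theta: float, i_amp: float, k_t: float = 1.0) -> float:
    """Each phase contributes k_t * i * sin(its angular position);
    the three contributions sum to 1.5 * k_t * i_amp for every angle."""
    i_a, i_b, i_c = phase_currents(theta, i_amp)
    return k_t * (i_a * math.sin(theta)
                  + i_b * math.sin(theta - TWO_THIRDS_PI)
                  + i_c * math.sin(theta + TWO_THIRDS_PI))
```

Evaluating `total_torque` at arbitrary angles returns the same value, in contrast to block commutation, where only one or two phases conduct at a time and the torque varies over the rotation.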

## *9.2.3 Actuator Electronics*

Electrodynamic actuators require specific electrical circuits. The following section formulates the general requirements on these electronics.

## **9.2.3.1 Driver Electronics**

Driver electronics are electrical circuits transforming a signal of low power (several volts, a few milliamperes) into a voltage or current level appropriate to drive an actuator. For electrodynamic actuators in haptic applications, driver electronics have to provide a current over a dynamic range from static up to several kilohertz. This paragraph describes general concepts and approaches for such circuits.

## **Topology of Electric Sources**

Driver electronics for actuators, independent of the actuation principle they are used for, are classified according to the flow of electrical energy (Fig. 9.19). There are four classes of driver electronics:


**Fig. 9.19** Visualization of the four quadrants of an electric driver, formed by the directions of current and voltage

For haptic applications, the switched 1-quadrant controller is frequently encountered, as many haptic systems have no need to control the device near the voltage or current zero point. However, for systems with high dynamics and low impedance, the 2-quadrant and the 4-quadrant controllers are relevant, as the discontinuity near the zero point is perceivable in high-quality applications.

#### **Pulse-Width-Modulation and H-Bridges**

With the exception of some telemanipulators, the sources controlling the actuators are always digital processors. As actuators need an analogue voltage or current to generate forces and torques, a converter between digital signals and analogue control values is necessary. There are two typical realizations of these converters:


The use of D/A-converters, whether as external components or integrated within a microcontroller, is not covered further in this book: their application is simple, but requires additional effort in circuit layout, which is why they are not used much for the control of actuators.

With electrodynamic actuators, the method of choice are driver electronics based on PWM (Fig. 9.20a). With PWM, a digital output of a controller is switched at a high frequency (>10 kHz<sup>13</sup>). The period of the PWM is given by this frequency, and the program controls the duty cycle between on- and off-times. Typically one byte is available, providing a resolution of 256 steps within this period. After filtering the

<sup>13</sup> Typical frequencies lie between 20 and 50 kHz. However, especially within automotive technology for driving LEDs, PWM current drivers with frequencies below 1 kHz are in use. Frequencies in this range are not applicable to haptic devices, as the switching in the control value may be transmitted by the actuator and will therefore be perceivable, especially in static conditions. Typical device designs show mechanical low-pass characteristics already at frequencies in the area of 200 Hz. However, due to the sensitivity of tactile perception in the range of 100–200 Hz, increased attention has to be paid to any switched signal within the transmission chain.

**Fig. 9.20** Principle of pulse-width modulation (PWM) at a digital μC-output (**a**), H-bridge circuit principle (**b**), and extended H-bridge with PWM (S1) and current measurement at RSense (**c**)

PWM, either via an electrical low-pass or via the mechanical transfer-characteristics of an actuator, a smoothed output signal becomes available.
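The mapping from a desired average output voltage to the PWM compare value mentioned above (one byte, 256 steps) can be sketched as follows; the supply voltage in the example is an illustrative assumption:

```python
def pwm_compare_value(v_target: float, v_supply: float, bits: int = 8) -> int:
    """Compare value for a PWM whose duty cycle yields an average output of
    v_target from a supply of v_supply (typical 8-bit timer -> 256 steps)."""
    if not 0.0 <= v_target <= v_supply:
        raise ValueError("target voltage outside 0..v_supply")
    top = (1 << bits) - 1              # 255 for an 8-bit PWM
    return round(v_target / v_supply * top)

# e.g. 3 V average from a 12 V supply -> compare value 64 (25% duty cycle)
```

The 8-bit resolution means the smallest achievable voltage step is v_supply/255, roughly 47 mV at a 12 V supply; higher-resolution timers reduce this quantization accordingly.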

Pulse-width modulation is frequently used in combination with H-bridges (Fig. 9.20b). The term H-bridge results from the H-like shape of the motor surrounded by four switches. The H-bridge provides two operation modes for the two directions of movement and two operation modes for braking. If, according to Fig. 9.20b, the two switches S2 and S5 are closed, the current I flows through the motor in positive direction. If instead switches S3 and S4 are closed, the current I flows through the motor in negative direction. One additional digital signal acting upon the H-bridge thus changes the direction of movement of the motor; this is the typical procedure with switched 1-quadrant controllers. Additional switching states are given by closing the groups S2 and S3, respectively S4 and S5. Both states result in a short circuit of the actuator and brake its movement. Other states, like simultaneously closing S2 and S4, respectively S3 and S5, result in a short circuit of the supply voltage, typically destroying the integrated circuit of the driver.
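The switching states just described can be captured in a small lookup, together with a guard against the destructive supply short circuit. The switch names follow Fig. 9.20b; everything else is an illustrative sketch:

```python
# Valid switch combinations of the H-bridge in Fig. 9.20b
STATES = {
    "forward": {"S2", "S5"},   # current through the motor in positive direction
    "reverse": {"S3", "S4"},   # current through the motor in negative direction
    "brake_a": {"S2", "S3"},   # short-circuits the motor -> braking
    "brake_b": {"S4", "S5"},   # alternative braking state
}

def shoots_through(closed: set) -> bool:
    """True if S2+S4 or S3+S5 are closed simultaneously: this shorts the
    supply voltage and typically destroys the driver IC."""
    return {"S2", "S4"} <= closed or {"S3", "S5"} <= closed
```

Driver ICs implement exactly this kind of interlock in hardware, including dead-time between switching events, which is why discrete designs are rarely worth the effort.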

To combine the H-bridge with a PWM, either the switch groups S2 and S5 can be switched according to the timing of the PWM, or an additional switch S1 (Fig. 9.20c) can be placed in series to the H-bridge, modulating the supply voltage *U*. In practical realizations the latter is the preferred design, as the timing of the switches S2 to S5 is very critical to prevent short circuits of the supply voltage; the effort to achieve this timing is usually higher than the cost of another switch in series. The practical realization of H-bridges is done via field-effect transistors. A discrete design of H-bridges is possible, but not easy: especially the timing between switching events, the prevention of short circuits, and the protection of the electronics against induced currents are not trivial. There are numerous integrated circuits available on the market which already include appropriate protective circuitry and require only a minimum of control lines. The ICs L6205 (2 A), L293 (2.8 A) and VNH 35P30 (30 A) are some examples common in test-bed developments. For EC-drives there are specific ICs performing the timing for the field-effect transistors and reducing the number of PWMs necessary from the microcontroller; the IR213xx series, for example, switches three channels with one external half-bridge per channel, built from N-MOS transistors with secure timing of the switching events.

The PWM described above with an H-bridge equals a controlled voltage source. For electrodynamic systems such a control is frequently sufficient to generate an acceptable haptic impression. Nevertheless, for highly dynamic haptic systems a counter-induction (Sect. 9.2.1.4) due to movement has to be expected, resulting in a variation of the current within the coils and an uncontrolled change of the Lorentz force. Additionally, the power loss within the coils (Sect. 9.2.1.1) may increase the actuator's internal temperature, changing the conductivity of the conductor material. The increasing resistance at increasing conductor temperature results in less current flow at a constant voltage source; an electrodynamic actuator with copper as conductive material would therefore generate a reduced force when heated during operation. With higher requirements on the quality of the haptic output, a controlled current should be considered. In the case of a PWM, a resistor with low resistance (RSense in Fig. 9.20c) has to be integrated, generating a current-proportional voltage USense which can be measured with an A/D input of the controller. The control loop is closed within the microcontroller. However, the A/D conversion and the closing of the control loop can be challenging for state-of-the-art electronics in highly dynamic systems with corner frequencies of several kilohertz. Therefore analogue circuits should be considered for closed-loop current control too.
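The digital side of such a current control loop can be sketched in a few lines: convert the ADC reading of USense into a current, then run a discrete controller on the error. The ADC resolution, reference voltage, shunt value and controller gains below are arbitrary illustrative assumptions:

```python
def sensed_current(adc_counts: int, v_ref: float = 3.3,
                   adc_bits: int = 12, r_sense: float = 0.1) -> float:
    """Coil current from the voltage across R_Sense sampled by an ADC:
    U_Sense = counts / full_scale * V_ref, then I = U_Sense / R_Sense."""
    u_sense = adc_counts / ((1 << adc_bits) - 1) * v_ref
    return u_sense / r_sense

def pi_controller_step(i_target: float, i_measured: float, integral: float,
                       kp: float = 0.5, ki: float = 0.05):
    """One discrete PI step; returns (control output, updated integral)."""
    error = i_target - i_measured
    integral += error
    return kp * error + ki * integral, integral
```

In a real firmware this step would run at the PWM rate and its output would adjust the duty cycle; as the text notes, at corner frequencies of several kilohertz the required loop rate can exceed what the ADC and processor deliver, favoring an analogue loop.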

#### **Haptic Driver ICs**

Meanwhile, for standard applications using eccentric rotating mass (ERM) motors or linear resonant actuators (LRA) such as the designs shown in Figs. 9.15 and 9.16, integrated circuits with additional functionality exist. Texas Instruments (TI), for example, offers the *DRV2605* driver circuit with included PWM, controlled via an I2C protocol. It already includes some basic tactile patterns and thereby offers a simple extension to any microcontroller to create basic patterns without loading the main unit with the computation. Some circuits go even beyond that: with a focus on industrial applications, Maxim released the *MAX11811*, a driver combining resistive touchscreen measurement with haptic actuation. Almost all major manufacturers of integrated circuits meanwhile offer such drivers, which, for standard applications, makes it easy to create some level of haptic output, especially for touchscreen-type applications.

**Fig. 9.21** Discrete closed-loop current control [8] © Springer Nature, all rights reserved (**a**), and closed-loop current control with a power operational amplifier (**b**)

#### **Analogue Current Sources**

Analogue current sources are, to put it simply, controlled resistors within the current path of the actuator. It should be noted that with the wide and easy availability of PWMs this technology is no longer common. However, in terms of tactile performance these sources are still a gold standard, as no high-frequency component is involved in the signal generation. Their resistance is dynamically adjusted to provide the desired current flow. Identical to classical resistors, analogue current sources transform the energy not used within the actuator into heat; consequently, in comparison to switched H-bridges, they generate a lot of power loss. Using a discrete current control (Fig. 9.21a), analogue current sources for almost any output current can be built with one or two field-effect transistors (FETs). For heat dissipation they have to be attached to adequate cooling elements. There are only few requirements on the operational amplifiers themselves: they control the FET within its linear range, proportional to the current-proportional voltage generated at RSense. Depending on the quadrant used in operational mode (1 or 3), either the N-MOS or the P-MOS transistor is conductive. An alternative to such discrete designs is the use of power operational amplifiers (e.g. LM675, Fig. 9.21b). Such a circuit contains fewer components and is therefore less error-prone. Realized as a non-inverting or inverting operational amplifier with a measurement resistor RSense, it can be regarded as a voltage-controlled current source.

#### **9.2.3.2 Monitoring Temperature**

Due to the low efficiency factor and the high dissipated energy of electrodynamic actuators, it is useful to monitor the temperature near the coils. Instead of including a measuring resistor (e.g. a PT100) near the coil, another approach monitors the electrical resistance of the windings themselves. Depending on the material of the windings (e.g. copper, Cu), the resistance changes proportionally to the coil's temperature; with copper this factor is 0.39% per Kelvin of temperature change. As any driver electronics works with either a known and controlled voltage or current, measuring the other quantity immediately provides all information needed to calculate the resistance and consequently the actual coil temperature.
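This resistance-based estimate amounts to inverting the linear model R = R_ref·(1 + α·ΔT). A minimal sketch with copper's α ≈ 0.39 %/K; the reference resistance and temperature in the example are illustrative assumptions:

```python
COPPER_ALPHA = 0.0039   # resistance change of copper, ~0.39 % per Kelvin

def coil_temperature(r_hot: float, r_ref: float, t_ref: float = 25.0) -> float:
    """Estimate coil temperature from its resistance:
    R = R_ref * (1 + alpha * (T - T_ref))  ->  T = T_ref + (R/R_ref - 1)/alpha."""
    return t_ref + (r_hot / r_ref - 1.0) / COPPER_ALPHA

def coil_resistance(u_drive: float, i_measured: float) -> float:
    """With a known drive voltage and a measured current, R = U / I,
    so no extra temperature sensor near the coil is required."""
    return u_drive / i_measured
```

For example, a coil specified at 8.0 Ω at 25 °C that measures 9.56 Ω is running at roughly 75 °C, a 50 K rise worth acting on.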

## *9.2.4 Examples for Electrodynamic Actuators in Haptic Devices*

Electrodynamic actuators are most frequently used as exciters for tactile systems, also named linear resonant actuators (LRA), or as force and torque sources within kinaesthetic systems. Especially EC-drives can be found in the products of *Quanser*, *ForceDimension*, *Immersion*, and *SensAble/geomagic*. Mechanically commutated electrodynamic actuators are used within less expensive devices, like the Phantom Omni or the Novint Falcon.

#### **9.2.4.1 Cross-Coil System as Rotary Actuator**

Beside self-supportive coils, electrodynamic actuators according to the cross-coil design are one possibility to generate defined torques. Continental VDO developed a haptic rotary actuator as a central control element for automotive applications (Fig. 9.22). It contains a diametrally magnetized NdFeB magnet surrounded by a magnetic circuit; the field lines reach from the magnet to the magnetic circuit. The coils surround the magnet at an angular offset of 90°, and the electrodynamically active winding sections lie in the air gap between magnetic circuit and magnet. The rotary position measurement is made via two Hall sensors placed at a 90° offset. The actuator is able to generate a ripple-free torque of ≈25 mNm at a diameter of 50 mm, which is further increased by an attached gear to ≈100 mNm torque output.
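The ripple-free torque of the cross-coil arrangement can be made plausible with a sketch: if the two coils, offset by 90°, are fed with currents proportional to cos θ and sin θ of the Hall-sensor angle, and each coil's torque contribution varies as cos/sin of the rotor angle, the contributions sum to a constant. The sinusoidal torque model and the torque constant are illustrative assumptions, not data of the VDO actuator:

```python
import math

def crosscoil_torque(theta: float, i_amp: float, k_t: float = 1.0) -> float:
    """Two coils offset by 90 deg; driving them with i1 = I*cos(theta) and
    i2 = I*sin(theta) yields the constant torque k_t * I for every angle,
    since cos^2 + sin^2 = 1."""
    i1 = i_amp * math.cos(theta)
    i2 = i_amp * math.sin(theta)
    return k_t * (i1 * math.cos(theta) + i2 * math.sin(theta))
```

This is the two-phase analogue of the three-phase sinusoidal commutation discussed earlier: the Hall sensors at 90° deliver exactly the cos/sin angle information needed for the drive currents.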

#### **9.2.4.2 Reconfigurable Keypad—HapKeys**

Although the design shows similarities to Fig. 9.16, this design was built for kinaesthetic feedback. The electrodynamic linear actuators forming the basis of this device are equipped with friction-type bearings and moving magnets with pole-shoes inside cylindrically wound fixed coils, as shown in Fig. 9.23. The coils have an inner diameter of 5.5 mm and an outer diameter of 8 mm. The magnetic circuit is decoupled from other nearby elements within the actuator array; it is made of a tube with a wall thickness of 0.7 mm of a cobalt-iron alloy with very high saturation flux density. Each actuator is able to generate 1 N in continuous operation.

**Fig. 9.22** Electrodynamic cross-coil system with moving magnet as haptic rotary actuator

**Fig. 9.23** Electrodynamic linear actuator with moving magnet [9]

## *9.2.5 Conclusion About the Design of Electrodynamic Actuators*

Electrodynamic actuators are the preferred actuators for kinaesthetic impedance-controlled haptic devices due to the proportional correlation between the control value "current" and the output values "force" or "torque". The market of DC- and EC-drives offers a wide variety of solutions, making it possible to find a good compromise between haptic quality and price for many applications. Most suppliers of such components offer advice on how to dimension and select a specific model based on its mechanical, electrical and thermal properties, as for example shown in [10].

If special requirements have to be fulfilled, the design, development, and commissioning of special electrodynamic actuator variants is comparatively easy. The challenges of thermal and magnetic design are manageable, as long as some basic considerations are kept in mind; the examples of special haptic systems in the preceding section prove this impressively. Only driver electronics applicable to haptic systems and their requirements are still an exceptional component within the catalogs of automation-technology manufacturers: they must either be purchased at a high price or be built in-house. Therefore, commercial manufacturers of haptic devices, e.g. *Quanser*, offer their haptics-capable driver electronics for sale independently of their own systems.

For the design of low-impedance haptic systems there is currently no real alternative to electrodynamic systems. Other actuation principles discussed within this book need a closed-loop control to overcome their inner friction and nonlinear force/torque transmission, which always requires some kind of measurement technology such as additional sensors or the measurement of inner actuator states. Avoiding these efforts is still a big advantage of electrodynamic actuators, an advantage bought at the price of a low efficiency factor and, as a consequence, a relatively low energy density per actuator volume.

## **9.3 Piezoelectric Actuators**

Stephanie Sindlinger and Marc Matysek

Next to the very frequently used electrodynamic actuators, piezoelectric actuators have been used for a number of device designs in the past few years. Especially their dynamic properties in resonance mode allow applications in haptics very different from the common positioning tasks they are usually employed for. As variable impedances, a wide spectrum of stiffnesses can be realized. The following chapter gives the calculation basics for the design of piezoelectric actuators. It describes the design variants and their application in haptic systems. Beside specific designs for tactile and kinaesthetic devices, approaches for the control of the actuators and tools for their dimensioning are presented.

## *9.3.1 The Piezoelectric Effect*

The piezoelectric effect was first discovered by Jacques and Pierre Curie. The term is derived from the Greek word "piezein" = "to press" [11].

Figure 9.24 shows a scheme of a quartz crystal (chemical formula SiO2). With a force acting upon the crystal, mechanical displacements of the charge centers can be observed

**Fig. 9.24** Crystal structure of quartz in initial state and under pressure

**Fig. 9.25** Effects during applied voltage: longitudinal effect (left), transversal effect (center), shear effect (right)

within the structure, resulting in microscopic dipoles within its elementary cells. All microscopic dipoles sum up to a macroscopically measurable voltage. This effect is called the "direct piezoelectric effect". It can be reversed to the "inverse (reciprocal) piezoelectric effect": if a voltage is applied to a piezoelectric material, a mechanical deformation occurs along the crystal's orientation, which is proportional to the field strength in the material [12].

Piezoelectric materials are anisotropic, i.e. direction-dependent, in their properties. Consequently, the effect depends on the direction of the applied electrical field and on the angle between the direction of the intended movement and the plane of polarization. For the description of these anisotropic properties, the directions are labeled with indices, defined in a Cartesian space with the axes numbered 1, 2 and 3. The polarization of the piezoelectric material is typically oriented along direction 3. Shear about the axes is labeled with the indices 4, 5 and 6.

Among all possible combinations, there are three major effects (Fig. 9.25) commonly used in piezoelectric applications: the longitudinal, transversal, and shear effect.

The *longitudinal effect* acts in the same direction as the applied field with the field strength *E*<sub>3</sub>. As a consequence, the resulting mechanical tensions *T*<sub>3</sub> and strains *S*<sub>3</sub> lie along direction 3 too. With the *transversal effect*, the mechanical action appears normal to the electrical field: from a voltage *U*<sub>3</sub> with the electrical field strength *E*<sub>3</sub>, the mechanical tensions *T*<sub>1</sub> and strains *S*<sub>1</sub> result. The *shear effect* occurs with an electrical voltage *U* applied along direction 1, orthogonal to the polarization. The resulting mechanical tensions appear tangential to the polarization, in the direction of shear, and are labeled with the directional index 5.

#### **9.3.1.1 Basic Piezoelectric Equations**

The piezoelectric effect can be described most easily by state equations:

$$P = e \cdot S \tag{9.35}$$

and

with

$$S = d \cdot E \tag{9.36}$$

*P* = polarization (in C/m<sup>2</sup>)
*S* = strain (dimensionless)
*E* = electrical field strength (in V/m)
*T* = mechanical tension (in N/m<sup>2</sup>)

The piezoelectric coefficients are

• the piezoelectric coefficient of tension (also: coefficient of force) *e* (reaction of the mechanical tension on the electrical field)

$$e\_{ij,k} = \frac{\partial T\_{ij}}{\partial E\_k} \tag{9.37}$$

• and the piezoelectric coefficient of strain (also: coefficient of charge) *d* (reaction of the strain on the electrical field)

$$d\_{ij,k} = \frac{\partial S\_{ij}}{\partial E\_k} \tag{9.38}$$

The correlation of both piezoelectric coefficients is defined by the elastic constants *Cijlm*

$$e\_{ij,k} = \sum\_{lm} \left( C\_{ijlm} \cdot d\_{lm,k} \right) \tag{9.39}$$

Usually the tensors shown in the equations above are noted in matrix form. In this notation, matrices of six components corresponding to the defined axes result. The matrix describes the concatenation of the dielectric displacement *D*, the mechanical strain *S*, the mechanical tension *T*, and the electrical field strength *E*. It can be simplified for the specific cases of a longitudinal and a transversal actuator. For a transversal actuator with electrical contact in direction 3, the following equations result:

$$D\_3 = \varepsilon\_{33}^T E\_3 + d\_{31} T\_1 \tag{9.40}$$

$$S\_3 = d\_{31}E\_3 + s\_{11}^E T\_1.\tag{9.41}$$

Accordingly, for a longitudinal actuator the correlation is

$$D\_3 = \varepsilon\_{33}^T E\_3 + d\_{33} T\_3 \tag{9.42}$$

$$S\_3 = d\_{33}E\_3 + s\_{33}^E T\_3 \tag{9.43}$$


Therefore, the calculation of the piezoelectric coefficients simplifies into some handy equations: the charge constant *d* can be calculated for the electrical short-circuit case, i.e. *E* = 0, as

$$d\_{E=0} = \frac{D}{T} \tag{9.44}$$

and for the mechanically free situation, i.e. *T* = 0, as

$$d\_{T=0} = \frac{S}{E}.\tag{9.45}$$

The piezoelectric tension constant is defined as

$$\mathbf{g} = \frac{d}{\varepsilon^T}.\tag{9.46}$$
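Equation (9.46) directly yields, for example, the g constant from material data. The d value and relative permittivity below are typical PZT orders of magnitude, inserted only for illustration:

```python
EPS_0 = 8.854e-12   # vacuum permittivity in F/m

def voltage_constant(d: float, eps_r: float) -> float:
    """Piezoelectric voltage constant g = d / eps^T (Eq. 9.46),
    with the permittivity at constant stress eps^T = eps_r * eps_0."""
    return d / (eps_r * EPS_0)

# Illustrative PZT-like values: d33 = 400 pm/V, eps_r = 1750
g33 = voltage_constant(400e-12, 1750.0)   # roughly 0.026 Vm/N
```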

The coupling factor *k* is given by Eq. (9.47). It is a measure for the energy transformation and consequently for the strength of the piezoelectric effect, and is used for comparisons among different piezoelectric materials. Note, however, that it is not identical to the efficiency factor, as it does not include any energy losses.

$$k = \frac{\text{converted energy}}{\text{absorbed energy}}.\tag{9.47}$$

A complete description of the piezoelectric effect, a more extensive mathematical discussion, and values of the piezoelectric constants can be found in the literature, e.g. [7, 13, 14].

#### **9.3.1.2 Piezoelectric Materials**

Until 1944 the piezoelectric effect was observed in monocrystals only: quartz, tourmaline, lithium niobate, potassium and ammonium dihydrogen phosphate (KDP, ADP), and potassium sodium tartrate [12]. With all these materials the direction of the spontaneous polarization is given by the direction of the crystal lattice [11]. The most frequently used material was quartz.

In 1946 the development of polarization methods made it possible to polarize ceramics retroactively by the application of a constant exterior electrical field. By this approach, "piezoelectric ceramics" (also "piezoceramics") were invented. Through this development of polycrystalline materials with piezoelectric properties, the whole group of piezoelectric materials gained increased attention and technical significance. Today the most frequently used materials are barium titanate (BaTiO3) and lead zirconate titanate (PZT) [12]. C 82 is a piezoelectric ceramic suitable for actuator design due to its high *k*-factor. However, like all piezoelectric ceramic materials, it shows reduced long-term stability compared to quartz. Additionally, it shows a pyroelectric effect, i.e. a charge increase due to temperature changes of the material [7]. Since the 1960s the semi-crystalline polymer polyvinylidene fluoride (PVDF) is known. Compared to the materials mentioned before, PVDF excels by its high elasticity and small thickness (6–9 µm).

Table 9.4 shows different piezoelectric materials with their specific values.

Looking at these values, PZT is most suitable for actuator design due to its high coupling factor combined with a large piezoelectric charge modulus and a still high Curie temperature. The Curie temperature is the temperature at which the piezoelectric properties of the corresponding material are lost permanently; its value depends on the material (Table 9.4).


**Table 9.4** Selection of piezoelectric materials with characteristic values [7]

## *9.3.2 Designs and Properties of Piezoelectric Actuators*

Actuators using the piezoelectric effect are members of the group of solid-state actuators. The transformation from electrical into mechanical energy happens without any moving parts, resulting in a very fast reaction time and high dynamics compared to other actuation principles. Additionally, piezoelectric actuators have a high durability. The achievable displacements are small compared to other actuation principles, although the generated forces are much higher.

#### **9.3.2.1 Basic Piezoelectric Actuator Designs**

Depending on the application, different designs may be used. One may require a large displacement; another one may require self-locking or high stiffness. The most frequently used actuator types are bending actuators and stacked actuators. A schematic sketch of each design is given in Fig. 9.26a and c.

*Stacked actuators* are based on the longitudinal piezoelectric effect. For this purpose several ceramic layers of opposite polarity are stacked on top of each other, with contact electrodes for electrical control located between the layers. A stack is able to generate high static forces of up to several tens of kN. The achievable displacement of 200 µm is low compared to other piezoelectric designs, but it can be significantly increased by the use of levers (Fig. 9.26b). Voltages of several hundred volts are necessary to drive a piezoelectric actuator stack.
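The free (no-load) displacement of a stack follows directly from the longitudinal effect: each of the *n* layers elongates by *d*<sub>33</sub> · *U*, and the contributions add up. A minimal numeric sketch, with layer count, charge constant, and voltage chosen as illustrative assumptions:

```python
d33 = 500e-12   # charge constant of one PZT layer, C/N (assumed)
n_layers = 200  # number of ceramic layers in the stack (assumed)
U = 150.0       # driving voltage applied to each layer, V (assumed)

# Each layer elongates by d33 * U (longitudinal effect);
# the layers are mechanically in series, so the elongations sum.
x = n_layers * d33 * U
print(f"free displacement: {x * 1e6:.1f} um")  # 15.0 um
```

The result is in the tens-of-micrometers range quoted above; a lever transmission (Fig. 9.26b) trades force for additional displacement.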

**Fig. 9.26** Piezoelectric transducers separated by longitudinal and transversal effect. Longitudinal effect: **a** stack, **b** stack with lever transformation, change of length *x* = *d*<sub>33</sub> · *U*<sub>B</sub>. Transversal effect: **c** bending actuator, **d** cone, **e** band, **f** bending disk, change of length *x* = −*d*<sub>31</sub> · *U*<sub>B</sub>. Further information can be found in [11]

*Bending actuators* are based on the transversal piezoelectric effect. Designed according to the so-called bimorph principle (with two active layers), they are used in applications requiring large displacements. The transversal effect is characterized by comparably low driving voltages [11, 12]. These electrical properties and the large displacements are achieved by very thin ceramic layers in the direction of the electrical field and an appropriate geometrical design. Other geometrical designs using the transversal effect are tubular actuators, film actuators, and bending discs (Fig. 9.26d–f). Due to their geometry they resemble stacked actuators in their mechanical and electrical characteristics: the achievable displacements of 50 µm are comparably low, whereas the achievable forces exceed those of bending actuators by several orders of magnitude.

The use of the *shear effect* is uncommon in actuator design. This is somewhat surprising, as its charge modulus and coupling factor are twice those of the transversal effect. Additionally it is possible to increase the no-load elongation *x*<sub>0</sub> (displacement without any load) by optimizing the length-to-thickness (l/h) ratio. However, the clamping force *F*<sub>k</sub> of the actuator is not influenced by these parameters.

Table 9.5 summarizes the properties of different geometrical designs. Typical displacements, actuator forces and control voltages are shown.


**Table 9.5** Properties of typical piezoelectric actuator designs based on [11]

#### **9.3.2.2 Selection of Special Designs for Piezoelectric Actuators**

Besides the standard designs shown above, several other geometrical variants exist. In this section, examples of ultrasonic drives with resonators, traveling wave actuators, and piezoelectric stepper motors are discussed. Ultrasonic actuators are differentiated into resonators with bar-like geometry and rotatory ring geometry.

#### **Ultrasonic Actuators with Circular Resonators**

As mentioned before, besides actuators based on standing waves, another group of actuators based on traveling waves exists. The best-known traveling wave actuators are circular in design. The first actuator according to this principle was built in 1973 by Sashida [15]. Traveling wave actuators belong to the group of ultrasonic actuators, as their control frequencies typically lie between 20 and 100 kHz.

This section is limited to the presentation of ring-shaped traveling wave actuators with a bending wave. Design variants for linear traveling wave actuators can be found in the corresponding literature [16–18].

Figure 9.27 shows an actuator's stator made of piezoelectric elements with alternating polarization around the ring. The stator itself carries notches that enable the formation of the rotating traveling wave.

Each point on the surface of the stator performs a local elliptic movement (trajectory), sketched schematically in Fig. 9.27. These individual elliptic movements superimpose to form a continuous wave on the stator. By frictional coupling this movement is transferred to the rotor, resulting in a rotation. The contact between stator and rotor maintains the same number of contact points at all times during operation.

The movement equation of the traveling wave actuator is given by

$$u(x,t) = A\cos(kx - \omega t)\tag{9.48}$$

Rewriting it yields the following form:

$$u(x,t) = A\cos(kx)\cos(\omega t) + A\cos(kx - \pi/2)\cos(\omega t - \pi/2) \tag{9.49}$$

**Fig. 9.27** Piezoelectric traveling wave motor: left: stator disk with piezoelectric elements. right: schematic view of the functionality of a ring-shaped piezoelectric traveling wave motor [11]

Equation (9.49) contains important information for the control of traveling wave actuators: a traveling wave can be generated by two standing waves shifted in space and time. In typical realizations a spatial shift of *x*<sub>0</sub> = λ/4 is chosen, together with a temporal phase lag of ϕ<sub>0</sub> = π/2. The use of two standing waves is the only practical possibility for generating a traveling wave. The direction of the rotor movement can be switched by changing the phase lag from +π/2 to −π/2 [15, 19–21].
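The decomposition of Eq. (9.49) can be checked numerically. The sketch below (with arbitrary, assumed amplitude, wavelength, and frequency) confirms that the two phase-shifted standing waves superimpose exactly to the traveling wave of Eq. (9.48):

```python
import numpy as np

A = 1.0                       # wave amplitude (arbitrary)
k = 2 * np.pi / 0.03          # wavenumber for a 30 mm wavelength
w = 2 * np.pi * 40e3          # angular frequency for 40 kHz

x = np.linspace(0.0, 0.06, 400)  # positions along the stator circumference
t = 1.2e-5                       # an arbitrary instant in time

traveling = A * np.cos(k * x - w * t)                       # Eq. (9.48)
standing_sum = (A * np.cos(k * x) * np.cos(w * t)
                + A * np.cos(k * x - np.pi / 2)
                    * np.cos(w * t - np.pi / 2))            # Eq. (9.49)

# The two standing waves, shifted by lambda/4 in space and pi/2 in time,
# add up to the traveling wave (cosine addition theorem).
assert np.allclose(traveling, standing_sum)
```

Changing the sign of the temporal phase lag to −π/2 reverses the travel direction of the wave and hence the rotor, as stated above.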

Figure 9.28 shows the practical realization of a traveling wave motor. Big advantages of a traveling wave motor are the high torques achievable at low rotational speeds, its small overall size, and its low weight, enabling a very thin design as shown in Fig. 9.28. In passive mode the traveling wave motor has a high locking torque of several Nm.

Other advantages are the good controllability, the high dynamics, and the robustness against electromagnetic noise, as well as the silent movement [22]. Typical applications of traveling wave actuators are autofocus drives in cameras.

#### **Piezoelectric Stepper Motors**

Another interesting design can be found with the actuator PI Nexline. It combines the longitudinal effect with the piezoelectric shear effect, resulting in a piezoelectric stepper motor.

The principal design is sketched in Fig. 9.29. The movement of the motor is similar to the inchworm principle: drive and release phases of the piezoelectric elements produce a linear movement of the driven rod. The piezoelectric longitudinal elements generate the clamping force in z-direction, while the shear elements, rotated by 90°, generate the translational movement in y-direction.

The advantage of this design is its high positioning resolution: over the whole displacement range of 20 mm a resolution of 0.5 nm can be achieved. The stepping frequency reaches, depending on the control, up to 100 Hz and enables, depending on the maximum step width, velocities of up to 1 mm/s. The step width can be chosen continuously between 5 nm and 8 µm. The intended position can be reached either closed-loop or open-loop controlled. For the closed-loop control a

**Fig. 9.28** Realization of a traveling wave motor. **a** cross-section model with functional parts, **b** motor model USR30 with a maximum speed of 300 rpm and a nominal torque of 0.05 N m with driver D6030, **c** model USR60 with attached rotary encoder, maximum speed of 150 rpm and a nominal torque of 0.05 N m. All examples by *Shinsei Corporation*, Tokyo, JP, used with permission

**Fig. 9.29** Piezoelectric stepper motor using the shear effect and the longitudinal effect [23]

linear encoder has to be added to the motor. In open-loop control, the resolution can be increased to 0.03 nm in a high resolution dithering mode.

The actuator can generate push and pull forces of up to 400 N. The self-locking force reaches up to 600 N. The typical driving voltage is 250 V. The specifications given above are based on the actuator N-215.00 Nexline of the company *Physik Instrumente (PI) GmbH & Co. KG*, Karlsruhe, Germany [23]. Besides the impressive forces and positioning resolutions which can be achieved, these actuators have a high durability compared to other designs of piezoelectric actuators, as no friction happens between moving parts and stator.
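The velocity figure quoted above follows directly from step width times stepping frequency; a one-line check with the maximum values given in the text:

```python
step_width = 8e-6   # maximum step width, m (from the specification above)
f_step = 100.0      # maximum stepping frequency, Hz

# Each step advances the rod by one step width; steps repeat at f_step.
v = step_width * f_step
print(f"maximum velocity: {v * 1e3:.1f} mm/s")  # 0.8 mm/s
```

The result of 0.8 mm/s is consistent with the "up to 1 mm/s" quoted above.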

Olsson et al. presented a haptic glove to display stiffness properties based on such an actuator as shown in Fig. 9.30 [24].

**Fig. 9.30** Hand exoskeleton for the display of stiffness parameters. Forces exerted by the user are recorded with thin force sensors and the actuator position is tracked magnetically. Figure taken from [24] © Springer Nature, all rights reserved

## *9.3.3 Design of Piezoelectric Actuators for Haptic Systems*

In the preceding section the basic designs of piezoelectric actuators were discussed and special variants were shown. This section transfers this knowledge to the design of piezoelectric actuators for haptic applications.

First, the principal design approach is sketched, with hints on which designs are appropriate for which applications. Afterward three tools for practical engineering are shown: description via electromechanical networks, analytical formulations, and finite element simulations.

## *9.3.4 Procedure for the Design of Piezoelectric Actuators*

Figure 9.31 gives the general procedure for the design of piezoelectric actuators.

The choice of a general design among those shown in the prior section depends largely on the intended application. For further orientation, Fig. 9.32 shows a decision tree for classifying one's own application.

The following paragraph describes the appropriate designs for specific application classes according to this scheme. The list should be regarded as a starting point for orientation; it does not claim to be complete, and the creativity of an engineer will find and realize other, innovative solutions besides those mentioned here. Nevertheless, especially for the design of tactile devices, some basic advice can be given based on the classification in Fig. 9.32:

**Fig. 9.31** Procedure of designing piezoelectric actuators

**Fig. 9.32** Decision tree for the selection of a type of piezoelectric actuator


**Fig. 9.33** Force-amplitude-diagram for the classification of the piezoelectric actuating types

to the user. With the diagram of Fig. 9.33, especially bending disc or stacked actuators would be appropriate to fulfill these requirements, although they are overpowered concerning the achievable forces.


Figure 9.33 gives an overview of piezoelectric actuation principles and has to be interpreted according to the specific kinaesthetic problem at hand. Generally speaking, ultrasonic piezoelectric actuators are usually the actuators of choice for kinaesthetic devices, although they have to be combined with a closed-loop admittance control.

Additional reference for actuator selection can be found in Sect. 9.3.2. The designs shown there are suitable for haptic applications, but still need some care in their usage due to the high voltages applied and their sensitivity to mechanical damage. This effort is often rewarded by piezoelectric actuation principles that can be combined into completely new actuators; the only thing required is some creativity on the part of the engineer.

After choosing the general actuator type, the design process follows. For this purpose three different methods are available, which are presented in the following and discussed with their pros and cons, together with some hints on further references.

**Fig. 9.34** Piezoelectric actuator as an electromechanical schematic diagram: **a** gyratory and **b** transformatory coupling [7] © Springer Nature, all rights reserved

#### **9.3.4.1 Methods and Tools for the Design Process**

There are three different engineering tools for the design of piezoelectric actuators:


#### **Description via the Aid of Electromechanical Concentrated Networks**

The piezoelectric basic equations from Sect. 9.3.1.1 are the basis for the formulation of the electromechanical equivalent circuit of a piezoelectric converter.

The piezoelectric actuator can be visualized as an electromechanical circuit. Figure 9.34 shows the converter with a gyratory coupling (a); alternatively a transformatory coupling (b) is possible too. For the gyratory coupling, Eqs. (9.50)–(9.53) summarize the correlations for the calculation of the values of the concentrated elements. They are derived from the constants *e*, *c*, ε as well as the actuator's dimensions *l* and *A* [7].

$$C_b = \varepsilon \cdot \frac{A}{l} = (\varepsilon - d^2 \cdot c) \frac{A}{l} \qquad \text{with} \quad v = 0 \tag{9.50}$$

$$n_K = \frac{1}{c} \cdot \frac{l}{A} = s \cdot \frac{l}{A} \qquad\qquad\text{with}\quad U = 0\tag{9.51}$$

$$Y = \frac{1}{e} \cdot \frac{l}{A} = \frac{s}{d} \cdot \frac{l}{A} \tag{9.52}$$

$$k^2 = \frac{e^2}{\varepsilon \cdot c} = \frac{d^2}{\varepsilon \cdot s} \tag{9.53}$$

with the piezoelectric force constant

$$e = d \cdot c = \frac{d}{s} \tag{9.54}$$

For the transformatory coupling this yields

$$X = \frac{1}{\omega C\_b \cdot Y} \qquad \text{and} \qquad n\_C = Y^2 \cdot C\_b \tag{9.55}$$
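As a numeric sketch of Eqs. (9.50)–(9.53) and (9.55), the following computes the concentrated network elements for a hypothetical rod-shaped longitudinal transducer; all material constants and dimensions are assumed placeholder values, not data from this chapter:

```python
# Assumed material constants (soft-PZT order of magnitude):
d = 400e-12                # charge constant, C/N
s = 16e-12                 # compliance, m^2/N
eps = 1700 * 8.854e-12     # permittivity, F/m
c = 1.0 / s                # stiffness, N/m^2

# Assumed rod geometry:
A = 25e-6                  # cross-section, m^2 (5 mm x 5 mm)
l = 10e-3                  # length, m

Cb = (eps - d**2 * c) * A / l   # clamped capacitance, Eq. (9.50)
nK = s * l / A                  # short-circuit compliance, Eq. (9.51)
Y = (s / d) * (l / A)           # gyrator constant, Eq. (9.52)
k2 = d**2 / (eps * s)           # coupling factor squared, Eq. (9.53)
nC = Y**2 * Cb                  # transformatory compliance, Eq. (9.55)

print(f"Cb = {Cb * 1e12:.1f} pF, nK = {nK:.2e} m/N, k^2 = {k2:.2f}")
```

Note that *C*<sub>b</sub> stays positive only while *k*² < 1, which every passive piezoelectric material satisfies.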

**Fig. 9.35** Piezoelectric bimorph bending element in an electromechanical schematic view in the quasistatic state [7] © Springer Nature, all rights reserved

Figure 9.35 shows an element Δ*x* taken out of a piezoelectric bimorph bending actuator (dimensions Δ*l* × Δ*h* × Δ*b*) as an electromechanical equivalent circuit.

The element values are:

$$C\_b = 4\varepsilon\_{33}^T (1 - k\_L^2) \frac{b \cdot \Delta x}{h}$$

$$\Delta n\_{RK} \approx 12 \, s\_{11}^E \, \frac{(\Delta x)^3}{b \cdot h^3}$$

$$\frac{1}{Y} = \frac{1}{2} \, \frac{d\_{31}}{s\_{11}^E} \, \frac{b \cdot h}{\Delta x}$$

The lossless piezoelectric converter first couples the electrical coordinates with the mechanical rotatory coordinates, which are torque *M* and angular velocity Ω. To calculate the force *F* and the velocity *v*, an additional transformatory coupling between the rotatory and translatory mechanical networks has to be introduced. As a result, the complete description of a sub-element Δ*x* of a bimorph is given by a ten-pole equivalent circuit.

## **Analytical Calculations**

A first approach for the design of piezoelectric actuators is the application of analytical equations. The advantage of analytical equations lies in the descriptive visualization of physical interdependencies: the influence of different parameters on a target value can be derived directly from the equations. This enables a high flexibility in the variation of e.g. dimensions and material properties. Additionally, the processing power needed for the solution of the equations is negligible compared to simulations.

A disadvantage of analytical solutions results from the fact that they can only be applied to simple and frequently only symmetrical geometrical designs. Even within this limitation, simple geometries may result in very complex mathematical descriptions requiring a large theoretical background for their solution.

The following collection gives relevant literature for familiarization with specific analytical challenges faced during the design of piezoelectric actuators:


#### **Finite Element Simulation**

The application of the two approaches given before is limited to simple geometrical designs. In reality, complex geometrical structures are much more frequent and cannot be treated with analytical solutions or mechanical networks. Such structures can be analyzed with the finite element method (FEM).

For the design of piezoelectric actuators, the simulation of coupled electromechanical domains is relevant. One example of a FEM simulation for a piezoelectric traveling wave motor is shown in Fig. 9.36.

**Fig. 9.36** Example FEM simulation of the oscillation shape of the stator of a piezoelectric travelling wave motor (view highly exaggerated)

## *9.3.5 Piezoelectric Actuators in Haptic Systems*

Piezoelectric actuators are among the most frequently used actuation principles in haptic systems. The designs shown before can be optimized and adapted for a considerable number of applications. One of the most important reasons for their usage is their effectiveness in a very small space, i.e. a high power density. To classify the realized haptic systems, a division into tactile and kinaesthetic systems is made for the following paragraphs.

## **9.3.5.1 Piezoelectric Actuators for Tactile Systems**

For the design of any tactile system the application area is of major importance. The bandwidth ranges from macroscopic table-top devices, such as Braille displays placed below a PC keyboard, up to highly integrated systems for mobile applications. Especially for the latter use, the requirements on volume, reliable and silent operation, but also on low weight and energy consumption are enormous. The following examples are structured into two subgroups, each addressing one of the two directions of skin stimulation: lateral and normal.

## **Tactile Displays with Normal Stimulation**

## *Braille Devices*

A Braille character is encoded by a dot pattern formed by embossed points on a flat surface. With this pattern made of eight dots (two columns with four rows of dots each), up to 256 character combinations can be addressed. Since the seventies, reading tablets for visually impaired people have been developed which can present these characters with a 2 × 4 matrix of pins. The most important technical requirements are a maximum stroke of 0.1−1 mm and a counterforce of 200 mN. Early in this development, electromagnetic drives were replaced by piezoelectric bimorph bending actuators. These actuators enable a thinner design, are more silent during operation, and are faster. At typical operating voltages of ±100−200 V and a nominal current of 300 mA they additionally need less energy than the electromagnetic actuators used before. Figure 9.37 shows the typical design of a Braille character driven by a piezoelectric bimorph actuator. A disadvantage of this system is the high price, as for 40 characters with eight elements each, 320 bending actuators are needed altogether. Additionally they still require a large volume, as

**Fig. 9.37** Schematic setup of a Braille row with piezoelectric bending actuators

**Fig. 9.38** HyperBraille display: Whole device **a** [42], and single actuator module **b** [43]. Pictures courtesy of *metec AG*, Stuttgart, Germany, used with permission

the bending elements have to have a length of several centimeters to provide the required displacements. This group of tactile devices belongs to the shape-building devices: the statically deflected pins enable the user to detect the displayed symbol.
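The eight-dot encoding described above maps naturally onto a bit mask driving the eight bending actuators of one character. A minimal sketch: the dot numbering and the example letter follow standard Braille conventions, while the function itself is a hypothetical illustration, not part of any device described here:

```python
def cell_to_pins(dots):
    """Map raised Braille dots (numbers 1..8) to eight pin states (0/1)."""
    mask = 0
    for d in dots:
        mask |= 1 << (d - 1)  # dot n sets bit n-1
    # One entry per bending actuator: 1 = pin raised, 0 = pin lowered.
    return [(mask >> i) & 1 for i in range(8)]

# The letter 'r' in literary Braille uses dots 1, 2, 3, and 5:
print(cell_to_pins([1, 2, 3, 5]))  # [1, 1, 1, 0, 1, 0, 0, 0]
```

A 40-character row then needs 40 such masks, one per group of eight bending actuators, matching the 320-actuator count given above.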

In the "HyperBraille" project a two-dimensional graphics-enabled display for blind computer users based on piezoelectric bending actuators was realized. The pin matrix of the portable tablet display consists of 60 rows with 120 pins each to present objects such as text blocks, tables, menus, geometric drawings, and other elements of a graphical user interface. The array is an assembly of modules that integrate 10 pins each, spaced at intervals of 2.5 mm as shown in Fig. 9.38. The benders raise the pins 0.7 mm above the plate [41].

#### *Vibrotactile Devices*

With vibrotactile devices the user does not detect a static displacement of the skin's surface; instead the skin itself is set into oscillation. At small amplitudes the sensation is similar to a static elongation. The general design of vibrotactile displays resembles an extension of the Braille character to an N×N matrix which is actuated dynamically. The tactile image generated is not perceived through the penetration depth but through the amplitude of the oscillation [44]. Another impact factor is the oscillation frequency, as tactile perception depends strongly on frequency. With the knowledge of these interdependencies, optimized tactile displays can be built that generate a very well perceivable stimulation of the receptors. Important for displays according to

**Fig. 9.39** Schematic setup of the 100-pin-array [46] © AIP Publishing, all rights reserved

this approach is a large surface, as movements performed by the user's own finger disturb the perception of the patterns.

The Texture Explorer presented in [45] is designed as a vibrating 2 × 5 pin array. It is used for research on the perception of tactile stimulation, such as the overlay of tactile stimulation with force feedback. The displayed surfaces vary in geometry and roughness within the technical limits of the device. The contact pins have a size of 0.5 × 0.5 mm<sup>2</sup> with a point-to-point distance of 3 mm. Each pin is actuated separately by a bimorph bending actuator at a driving voltage of 100 V and a frequency of 250 Hz. The maximum displacement of these pins with respect to surface level is 22 µm and can be resolved down to 1 µm.

An even more elaborate system is based on 100 individually controlled pins [46]. It can be actuated dynamically in a frequency range of 20−400 Hz. Figure 9.39 shows a schematic sketch. 20 piezoelectric bimorph bending actuators (PZT-5H, *Morgan Matrinic, Inc.*), stacked in five layers one above the other, are located in a circle around the stimulation area. Each bending actuator carries one stimulation pin, which is placed 1 mm above the surface in the idle state. The pins have a diameter of 0.6 mm and are equally spaced at 1 mm distance. At a maximum voltage of ±85 V a displacement of ±50 µm is achieved. A circle of equally high passive pins is located around the touch area to mark the borders of the active display.

Another even more compelling system can be found in [47] (Fig. 9.40). The very compact 5 × 6 array is able to provide static stimulation and dynamic frequencies of up to ≈500 Hz. Once again, piezoelectric bending actuators are used to achieve a displacement of 700 µm. However, the locking force is quite low, with a maximum of 60 mN.

#### *Ubi-Pen*

The Ubi-Pen is one of the most highly integrated tactile systems. Inside a pen, both components, a vibratory motor and a tactile display, are assembled. The design of the tactile display is based on the "TULA35" ultrasonic linear drive (*Piezoelectric*

**Fig. 9.40** Schematic setup of the 5 × 6-pin-array [47] © Springer Nature, all rights reserved

*Technology Co.*, Seoul, South Korea). The schematic sketch of the design is given in Fig. 9.41. The actuator is made of a driving component, a rod, and the moving part. The two piezoelectric ceramic discs are set into oscillation, causing the rod to oscillate up and down; the resulting movement is elliptical. To move the part upwards, the following procedure is applied: during the fast downward movement of the rod, the inertia of the moving part exceeds the frictional coupling and the element remains in its upper position, whereas during the slower upward movement the moving part is carried along by the frictional coupling between moving element and central rod. The actuator discs have a diameter of 4 mm and a height of 0.5 mm. The rod has a length of 15 mm and a diameter of 1 mm and can be used directly as a contact pin. The actuator's blocking force is larger than 200 mN, and at a control frequency of 45 kHz velocities of 20 mm/s can be reached.
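The stick-slip principle described above can be illustrated with an asymmetric drive waveform: a slow flank during which friction carries the moving part along, and a fast flank during which inertia makes it slip. This is only a conceptual sketch; the actual TULA35 is driven resonantly at 45 kHz, and all waveform parameters here are assumptions:

```python
import numpy as np

def sawtooth_drive(t, f=45e3, rise_fraction=0.8):
    """Asymmetric drive signal: slow rise (stick phase), fast fall (slip).

    t: array of time instants in seconds; f: repetition frequency in Hz.
    rise_fraction: share of the period used for the slow (stick) flank.
    """
    phase = (t * f) % 1.0
    return np.where(
        phase < rise_fraction,
        phase / rise_fraction,                          # slow rise: stick
        1.0 - (phase - rise_fraction) / (1.0 - rise_fraction),  # fast fall: slip
    )

t = np.linspace(0.0, 2 / 45e3, 1000)  # two periods of the drive signal
u = sawtooth_drive(t)
```

Reversing the asymmetry (short rise, long fall) would move the part in the opposite direction, which is how such inertia drives are usually made bidirectional.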

Figure 9.42 shows the design of a 3 × 3 pin array. The very small size of the design is especially remarkable: all outer dimensions are 12 mm in length. The pins are distributed in a matrix with 3 mm pitch. On an area of 1.44 cm<sup>2</sup>, nine separate drives are located. To achieve such a high actuator density, the lengths of the rods have to differ, allowing the moving parts to be placed directly next to each other. If this

**Fig. 9.42** Tactile 3 × 3-pin-array [47] © Springer Nature, all rights reserved

**Fig. 9.43** Prototype of the "Ubi-Pen" [48] © Springer Nature, all rights reserved

unit is placed at the upper border of the display, all pins move into, respectively out of, the plane. The weight of the whole unit is 2.5 g. When the maximum displacement of 1 mm is used, a bandwidth of up to 20 Hz can be achieved.

The integration into a pen including another vibratory motor at its tip is shown in Fig. 9.43. This additional drive is used to simulate the contact of the pen with a surface. The whole pen weighs 15 g.

The Ubi-Pen can display surface structures such as roughness, barriers, or other extremely bumpy surfaces. To realize this, vibrations of the pins are superimposed with those of the vibratory motor. If the pen is in contact with a touch-sensitive surface (touch panel), the displayed graphical image may be rendered in its gray-scale values by the pins of the tactile display. The system has been used in a number of tests on the recognition of information displayed in different tactile modalities [49]. The results are very good, with a mean recognition rate of 80% for untrained users.

#### *Structural Impact Detection with Vibro-Haptic Interfaces*

In this work, a new sensing model for structural impact detection using vibro-haptic interfaces is developed. The haptic interface uses the human sense of touch to provide information about structural impacts. The interface is designed as an arm-wearable device and provides haptic stimulation to the human, helping the wearer to 'feel' structural responses and determine the structural condition. Hardware and software parts are combined to achieve the vibro-haptic-based impact

**Fig. 9.44** General view of the whole system [50] © IOP Publishing, all rights reserved

detection system. Piezoelectric sensor arrays are used to measure acoustic data from the wing. By processing the measured acoustic data, haptic signals are created. The human arm receives the haptic signals wirelessly, and the impact location, intensity, and possibility of subsequent damage initiation are identified with the vibro-haptic interface. The motors are installed at least 4 cm apart so that humans can distinguish the location of the haptic vibration (Fig. 9.44). The motors are operated only in on and off states, so their frequency and amplitude cannot be modulated directly. Therefore, pulse width modulation (PWM) is used: the pulse length conveys an intensity sensation such that the haptic signal's amplitude and frequency can be simulated.
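The PWM idea can be sketched as a mapping from impact intensity to motor on-time. The period length and the linear intensity-to-duty-cycle relation below are illustrative assumptions, not parameters taken from [50]:

```python
def pwm_on_time(intensity, period_ms=20.0):
    """Map a normalized impact intensity (0..1) to a PWM on-time in ms.

    The motors can only be switched on or off, so perceived amplitude
    is approximated by the duty cycle (hypothetical linear mapping).
    """
    intensity = min(max(intensity, 0.0), 1.0)  # clamp out-of-range values
    return intensity * period_ms

print(pwm_on_time(0.25))  # 5.0
```

A stronger impact then produces longer on-pulses on the motor nearest the impact location, which the wearer perceives as a more intense vibration.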

## **Tactile Displays with Lateral Stimulation**

## *Discrete Stimulation*

The concept of discrete stimulation is based on an excitation lateral to the skin's surface ("laterotactile display") [51]. Figure 9.45a shows the schematic sketch of a one-dimensional array of actuators. An activation of a piezoelectric element results in its elongation and a deformation of the passive contact comb (crown). If the skin of a touching finger is stretched by this movement, a contact point is perceived. Figure 9.45b shows a two-dimensional display. With the extension from a 1D to a 2D array it is necessary to consider the more complex movement patterns resulting from it. A deeper analysis of the capabilities of such a system and its application for the exploration of larger areas can be found in [52, 53], proving the applicability of a laterotactile display as a virtual 6-point Braille device and for rendering surface properties. Further tests show the generated tactile impression to be very realistic; it can hardly be distinguished from a moving pin below the finger surface.

#### *Continuous Stimulation*

The transfer from discrete points to a piezoelectric traveling wave drive is shown in [54]. The touching finger faces a closed and continuous surface. Due to this design, the performance of the tactile display becomes less sensitive to movements of the finger. With the contact surface beneath the skin excited as a standing wave, the user perceives a surface texture. With relative movement between wave and finger, even a roughness can be perceived. By modifying the shape of the traveling wave, a simulation of a touch force perceivable by the moving finger can be achieved. Figure 9.46 shows the schematic sketch of the contact between finger and traveling wave, as well as the corresponding movement direction.

In a test-bed application [54] the stator of a traveling wave actuator USR60 from *Shinsei* has been used. This actuator provides a typical exploration speed through its tangential wave speed of 15 cm/s and forces of up to ≈2 N. The system can generate continuous and braking impressions by a change of wave shapes. An additional modulation of the ultrasonic signals with a low-frequency periodic signal generates the sensation of surface roughness. Current research addresses the design of linear ultrasonic traveling wave displays.
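The roughness rendering described above amounts to amplitude-modulating the ultrasonic carrier with a low-frequency periodic signal. A minimal sketch, with carrier frequency, modulation frequency, and modulation depth all chosen as illustrative assumptions:

```python
import numpy as np

fs = 1e6                          # sample rate, Hz (assumed)
t = np.arange(0.0, 0.02, 1 / fs)  # 20 ms of signal

f_us = 40e3    # ultrasonic carrier frequency, Hz (assumed)
f_mod = 30.0   # low-frequency modulation, Hz (assumed)
m = 0.5        # modulation depth, 0..1 (assumed)

# Amplitude-modulated carrier: the slow envelope varies the vibration
# amplitude and is perceived as surface roughness during exploration.
signal = (1.0 + m * np.sin(2 * np.pi * f_mod * t)) \
         * np.sin(2 * np.pi * f_us * t)
```

The carrier itself lies far above the tactile perception range; only its slowly varying envelope is felt, which is why the low-frequency modulation determines the perceived roughness.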

## **Ultrasonic Touchscreens**

Ultrasonic actuators have received a lot of attention in the past years. They promise that tactile sensations can be generated on hard surfaces (e.g. glass) without the need for complex suspensions or moving components. Hudin et al. [55] showed that with a reverse analysis of the mechanoacoustics of a touchscreen, localized vertical deflections can be generated efficiently. Multiple demonstrators clearly showed that this approach is very impressive and promising. This led to two directions for implementing ultrasonic touchscreen solutions:


In the case of texture generation, the key challenge is overcoming the variability of users and conditions of use. Measuring the finger movement is a basic feature. A further element to improve the reproducibility of perceived textures is measuring the interaction forces and controlling the texture's intensity [56]. Recently, however, the limitations of such systems were also explored, as despite all success the textures do not feel natural. Bernard et al. compared the richness of artificially generated texture sensations with real textures [57] and identified that the stimulated frequencies of natural textures scale linearly with exploration speed, which ultrasonic textures cannot reproduce. Furthermore, ultrasonic touchscreens are competing with capacitive systems based on electroadhesion [58], discussed in Sect. 9.5. Both concepts rely on the same perceptual dynamics as fundamentally analyzed by Wiertlewski et al. [59].

## **9.3.5.2 Piezoelectric Actuators for Kinaesthetic Systems**

Piezoelectric actuators used in kinaesthetic systems are usually part of active systems (in a control-engineering sense): the user interacts with forces and torques generated by the actuator. A classic example is a rotational knob actuated by a traveling wave motor. In passive systems the actuator is used as a switching element, which is able to dissipate power from an actuator or user in either time-discrete or continuous operation. Examples are brakes and clutches.

## **Active Kinaesthetic Systems**

Piezoelectric traveling wave actuators show a high torque-to-mass ratio compared to other electrical actuation principles. They are predestined for applications requiring high torque at small rotational velocity, as they do not need additional gearing or other transmission stages. Kinaesthetic systems require exactly these properties. A very simple design can be used to build a haptic knob: a rotationally mounted plate with a handle attached for the user is pressed onto the stator of a traveling wave piezoelectric motor. A schematic sketch of the critical part is shown in Fig. 9.27. Due to the properties specific to the traveling wave motor, the rotational speed of the rotor can be adjusted easily by increasing the wave amplitude w at the stator. As this actuation principle is based on a mechanical resonance mode, it is actuated and controlled with frequencies near its resonance. At the same time, this is the most challenging part of the design, as the piezoelectric components show a very nonlinear behavior at the mechanical stator-rotor interface. Hence, the procedures for its control and its electronics have a large influence on its performance.

In such systems the torque [60] depends strongly on the actual rotational speed and the wave amplitude. By monitoring the speed and controlling the phase and the wave amplitude, the system can be closed-loop controlled to a linear torque-displacement characteristic. In this mode a maximum torque of ≈120 mNm can be achieved with this example. A deeper discussion of the phase control of a piezoelectric traveling wave motor according to this example is given in [61].
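The control idea described above can be illustrated with a minimal numerical sketch. This is not the controller from [60, 61]; all function names and parameter values are illustrative assumptions, showing only the two ingredients named in the text: a saturated linear torque-displacement target and an amplitude command that is adjusted against the measured torque.

```python
# Illustrative sketch (not the cited design): closed-loop control of a
# traveling-wave motor knob to emulate a linear torque-angle
# ("virtual spring") characteristic. All parameter values are assumed.

K_SPRING = 0.6e-3   # desired stiffness in Nm per degree (assumed)
T_MAX = 120e-3      # torque saturation, ~120 mNm as in the example above

def target_torque(angle_deg: float) -> float:
    """Linear torque-displacement characteristic, clipped at +/- T_MAX."""
    t = K_SPRING * angle_deg
    return max(-T_MAX, min(T_MAX, t))

def amplitude_command(t_target: float, t_measured: float, a_prev: float,
                      gain: float = 0.05) -> float:
    """One step of an integral-type controller: raise the normalized
    stator wave amplitude while the measured torque is below the target."""
    a = a_prev + gain * (t_target - t_measured)
    return max(0.0, min(1.0, a))   # amplitude limited to [0, 1]
```

In a real device the nonlinear stator-rotor contact makes the torque-amplitude relation far less benign than this integrator suggests, which is exactly why the phase control discussed in [61] is needed.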

A specialized design of such a device is used in neurological research for applications within magnetic resonance imaging (MRI) [62]. For closed-loop control of the device's admittance, the torque has to be measured. In this specific application near a large magnetic field, glass fibers are used, measuring the reflection at a bending polymer body. To prevent the device from disturbing the switching gradients of the MRI and vice versa, it has been designed from specific non-conductive materials. It was based on a traveling wave motor in a special MR-compatible version of the "URS60" from *Shinsei*.

#### *Hybrid Systems*

Another class of kinaesthetic systems are so-called hybrid systems. If a wide bandwidth of forces and elongations has to be generated with a single device, no single actuator fulfills all requirements alone. For this reason, several hybrid systems are designed with two (or more) components complementing each other. A typical example is the combination of a dynamic drive with a brake, the latter providing large blocking torques. As seen in the above paragraph, the closed-loop control of a traveling wave motor's impedance is a challenging task. A simple approach to avoid this problem is the combination of a traveling wave actuator with a clutch. The main difference between a traveling wave actuator and other types of actuators is its property to rotate with a load-independent velocity. To provide a certain torque, this system is complemented with a clutch. Other designs add a differential gear or a brake. Such a system is presented in [63]. If the system experiences a mechanical load, operating the brake is sufficient to increase friction or even block the system completely. Consequently, the system provides the whole dynamic range of the traveling wave motor in active operation, whereas passive operation is improved significantly by the brake. Due to the simple mechanical design and the reduced energy consumption, such systems are suitable for mobile applications as well.

#### **Passive Kinaesthetic Systems**

Objects can be levitated by standing acoustic waves. To achieve this, an ultrasonic source first has to provide a standing wave. Within the pressure nodes of the wave a potential builds up, attracting a body placed near the node. The size of the object is important for the effect to take place, as a large object may extend into the influence of the neighboring node. A system based on this principle is described in [64]. It shows a design of an exoskeleton in the form of a glove with external mechanical guides and joints. The joints are made of piezoelectric clutches. In their original state both discs are pressed together by an external spring generating a holding torque. With the vibrator actuated, the levitation mode is achieved between rotor and stator, creating a gap *h*. This reduces the friction drastically, allowing both discs to turn almost freely against each other.

#### **9.3.5.3 Summary**

Tactile systems are distinguished according to their direction of movement. With a normal movement in the direction of the skin, a further distinction is made between passive systems simulating a more or less static surface by their pins, and active systems—so-called vibrotactile systems—providing information by dynamic excitation of the skin's surface. The user integrates this information into a static perception. The advantage of this approach lies in the reduced requirements on the necessary forces and displacements, as the dynamic perception of oscillations is more sensitive than the perception of static, slowly changing movements. When the display is not fixed to the finger, however, its fast movements become a problem. With a static display in a fixed frame the user is able to touch the display repeatedly, increasing the dynamics of the haptic impression by their own movements. With a dynamic display this interaction does not work as well anymore, as periods of oscillation from the vibrating elements are lost.

Another alternative are tactile systems with a lateral movement of the skin. With appropriate control the human can be "fooled" into feeling punctual deformations analogous to the impression of a normal penetration. Systems with a closed surface are very comfortable variants of such displays, but their dynamic control is demanding for finger positions moving across larger surfaces. Today's solutions typically show smaller contact areas than other variants, as the actuator elements cannot be placed as close together as necessary.

Kinaesthetic (force-feedback) systems can be distinguished into active and passive systems in a control-engineering sense. Active systems are able to generate counteracting and supporting forces. The spectrum of movements is limited only by the degrees of freedom of the mechanical design. A stable control for active systems tends to become very elaborate due to the required measurement technology and complex control algorithms of sufficient speed. As with all active systems, a danger remains: a malfunction of the system may harm the user. This danger increases for piezoelectric actuators, as the available forces and torques are high. Passive systems with brakes and clutches enable the user to feel the resistance against their own movement as reactive forces. These designs are simpler to build and, by definition of passivity, less dangerous. General disadvantages of passive systems are their high reaction times, the change of their mechanical properties in long-term applications, and their comparably large volume. Hybrid systems combining both variants—usually including another actuation principle—may enlarge the application area of piezoelectric actuators. Although the mechanical design increases in volume and size, the requirements on control may be relaxed, and large holding forces and torques can be achieved with low power consumption. From the standpoint of haptic quality they are among the best actuator solutions for rotating knobs with variable torque/angle characteristics available today.

## **9.4 Electromagnetic Actuators**

Thorsten A. Kern

Electromagnetic actuators are the most frequently used actuator type within the general automation industry. Due to their simple manufacture and assembly they are the option of choice. Additionally, they do not necessarily need a permanent magnet, and their robustness against exterior influences is very high. They are used within coffee machines, water pumps, and for redirecting the paper flow within office printers. Nevertheless, their applicability for haptic devices, especially kinaesthetic devices, is limited; their main field of application is tactile systems. This can be explained by several special characteristics of the electromagnetic principle. Within this chapter the theoretical basics of electromagnetic actuators are given. Technical realizations are explained with examples, showing first the general topology and later the specific designs. The chapter closes with some examples of haptic applications of electromagnetic actuators.

## *9.4.1 Magnetic Energy*

The source responsible for the movement of a magnetic drive is the magnetic energy. It is stored within the flux-conducting components of the drive. These components are the magnetic core (compare Sect. 9.2.1.3) and the air gap, as well as all leakage fields—which are neglected for the following analysis. It is known from Table 9.2 that the stored magnetic energy is given by the products of fluxes and magnetic voltages in each element of the magnetic circuit:

$$W\_{\rm mag} = \frac{1}{2} \sum\_{n} H\_{n} \, l\_{n} \, \cdot \, B\_{n} \, A\_{n} \tag{9.56}$$

As every other system does, the magnetic circuit tries to minimize its inner energy.<sup>14</sup> Concentrating on electromagnetic actuators, the minimization of energy almost always refers to the reduction of an air gap's magnetic resistance *R*mG. For

<sup>14</sup> Minimizing potential energy is the basis for movements in all actuator principles. Actuators may therefore be characterized as "*assemblies aiming at the minimization of their inner energy*".

**Fig. 9.47** Electromagnetic transversal- (**a**) and longitudinal-effect (**b**)

this purpose two effects may be used, which can also be found within electrostatics for electric fields (Sect. 9.5):


The forces, respectively torques, generated by the individual effects are the derivatives of the energy with respect to the corresponding direction,

$$\mathbf{F}\_{\xi} = \frac{dW\_{\text{mag}}}{d\xi},\tag{9.57}$$

being equal to a force in the direction of the change of the air gap

$$\mathbf{F}\_{\xi} = -\frac{1}{2} \phi^2 \frac{d \, R\_{\text{mG}}}{d\xi}. \tag{9.58}$$

#### **Example: Transversal Effect**

The magnetic resistance of an arbitrary homogenous element of length *l* between two boundary surfaces (Fig. 9.48a) with the surface *A* is calculated as

**Fig. 9.48** Electromagnetic transversal effect in the air-gap (**a**) and with a qualitative force plot (**b**)

$$R\_{\rm m} = \frac{l}{\mu \, A}.\tag{9.59}$$

This gives the stored energy *W*mag within the magnetic resistance:

$$W\_{\rm mag} = \frac{1}{2} \left(B\,A\right)^2 \frac{l}{\mu\,A}.\tag{9.60}$$

The flux density *B* is dependent on the length of the material. Assuming that the magnetic core contains one material only the magnetomotive force Θ is calculated as

$$\Theta = \frac{B}{\mu} \, l = NI,\tag{9.61}$$

which gives

$$B = NI \frac{\mu}{l}.\tag{9.62}$$

Using this equation to replace the flux density in Eq. (9.60), with several variables canceling, finally results in the magnetic energy

$$W\_{\text{mag}} = \frac{1}{2} \left(NI\right)^2 A\mu \,\frac{1}{l}.\tag{9.63}$$

With the assumption that the magnetic energy concentrates within the air gap—which is identical to the assumption that the magnetic core does not have any relevant magnetic resistance—the approximation of the force for the transversal effect in the direction of *l* can be formulated as

$$F\_l = -\frac{1}{2}(NI)^2 A\mu \,\frac{1}{l^2}.\tag{9.64}$$

The force shows an inverse quadratic dependence on the distance *l* (Fig. 9.48b): the closer the poles come, the more steeply the attracting force increases.
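As a quick plausibility check, Eq. (9.64) can be evaluated numerically for an air gap (μ = μ0). The winding, current, and geometry values below are arbitrary illustrations, not taken from a specific design:

```python
import math

MU_0 = 4e-7 * math.pi  # magnetic constant in Vs/Am

def transversal_force(n_turns: int, current: float, area: float,
                      gap: float) -> float:
    """Attracting force of the transversal effect, Eq. (9.64):
    F_l = -1/2 (N*I)^2 * A * mu0 / l^2
    (air gap only; the core's magnetic resistance is neglected)."""
    return -0.5 * (n_turns * current) ** 2 * area * MU_0 / gap ** 2

# Halving the air gap quadruples the force magnitude:
f_1mm = transversal_force(500, 0.5, 1e-4, 1.0e-3)
f_05mm = transversal_force(500, 0.5, 1e-4, 0.5e-3)
```

The negative sign expresses that the force acts in the direction of a shrinking gap, i.e. attraction of the poles.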

#### **Example: Longitudinal Effect, Reluctance Effect**

The same calculation can be repeated for the longitudinal effect. Assuming that the surface *A* from Eq. (9.63) is rectangular with edge lengths *a* and *b*, and further assuming that a flux-conducting material is inserted along direction *a*, the force in longitudinal direction can be calculated as

$$F\_a = \frac{1}{2}(NI)^2 \, b\mu \, \frac{1}{l},\tag{9.65}$$

and in direction *b* as

$$F\_b = \frac{1}{2}(NI)^2 a\mu \,\frac{1}{l}.\tag{9.66}$$

**Fig. 9.49** Electromagnetic longitudinal effect in the air gap (**a**) and as qualitative force plot (**b**)

The reluctance effect is—in contrast to the transversal effect—linear (Fig. 9.49b). The force depends only on the length of the moving material's edge. Consequently, knowledge of the stored energy within the magnetic circuit is necessary for the design of an electromagnetic actuator. The above examples have the quality of rough estimations. They are sufficient to evaluate the applicability of an actuation principle—no more, no less. Magnetic networks sufficient for a complete dimensioning should include the effects of magnetic leakage fields and the core's magnetic resistance. It is therefore necessary to deal further with the design of magnetic circuits and their calculation.
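The linearity of Eq. (9.65) can be sketched in the same style as the transversal example; again the numbers are illustrative assumptions. Note that the insertion depth *a* does not appear in the force at all, so the pull stays constant while the material slides into the gap:

```python
import math

MU_0 = 4e-7 * math.pi  # magnetic constant in Vs/Am

def longitudinal_force(n_turns: int, current: float, edge_b: float,
                       gap: float) -> float:
    """Reluctance force pulling flux-conducting material into the gap
    along edge a, Eq. (9.65): F_a = 1/2 (N*I)^2 * b * mu0 / l."""
    return 0.5 * (n_turns * current) ** 2 * edge_b * MU_0 / gap

# Doubling the gap halves the force -- linear, unlike the transversal effect:
f_1mm = longitudinal_force(500, 0.5, 5e-3, 1.0e-3)
f_2mm = longitudinal_force(500, 0.5, 5e-3, 2.0e-3)
```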

## *9.4.2 Design of Magnetic Circuits*

The basic interdependencies for the design of magnetic circuits have already been discussed in Sect. 9.2.1.3 in the context of electrodynamic actuators. Starting from the longitudinal and transversal effects, several basic shapes (Fig. 9.50) applicable to electromagnetic actuators can be derived. In contrast to electrodynamic actuators, the geometrical design of the air gap within electromagnetic actuators is freer, as there is no need to guide an electrical conductor within the air gap. Beside the designs shown in Fig. 9.50 there are numerous other geometrical variants. For example, all shapes can be transferred into a rotationally symmetrical design around one axis. Additional windings and even permanent magnets can be added. There are just two limits to their design:


**Fig. 9.50** Basic shapes of electromagnetic actuators

#### **9.4.2.1 Cross Section Surface Area—Rough Estimation**

The calculation of the cross-section surface area for dimensioning the magnetic core is simple. A common, easily available material, readily used within precision engineering and prototype design, is the steel ST37. Its B/H characteristic curve with saturation is given in Fig. 9.9. For this example we choose a reasonable flux density of 1.2 T, which equals a field intensity of *H* ≈ 1000 A/m. Within the air gap a flux density of 1 T should be achieved. The magnetic flux within the air gap is given as

$$
\phi = A\_{\rm G} B\_{\rm G}.\tag{9.67}
$$

As the magnetic flux is conducted completely via the magnetic core—neglecting leakage fields and other side bypasses—the relation

$$A\_{\rm Iron} \, B\_{\rm Iron} = A\_{\rm G} \, B\_{\rm G}, \tag{9.68}$$

is given, and consequently with the concrete values from above:

$$\frac{A\_{\text{Iron}}}{A\_{\text{G}}} = \frac{B\_{\text{G}}}{B\_{\text{Iron}}} = 0.833. \tag{9.69}$$

At its tightest point the magnetic core thus needs only 83% of the cross section of the air gap. A larger cross-section results in lower field intensities, which should be aimed at if geometrically possible. Please note that *A*<sup>G</sup> ≤ *A*Iron holds for almost all technical realizations, as the boundary surface of the magnetic core is always one pole of the air gap.
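The sizing estimate of Eqs. (9.67)-(9.69) reduces to a one-line helper; the flux densities are the ones chosen in the example above:

```python
def core_area_ratio(b_gap: float, b_iron: float) -> float:
    """Minimum ratio A_iron / A_gap from flux continuity,
    Eq. (9.68): A_iron * B_iron = A_gap * B_gap."""
    return b_gap / b_iron

# Worked example from the text: B_gap = 1 T, B_iron = 1.2 T
ratio = core_area_ratio(b_gap=1.0, b_iron=1.2)  # -> 0.833...
```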

**Fig. 9.51** Qualitative change of the permeability for common flux-conducting materials

#### **9.4.2.2 Magnetic Energy in the Magnetic Core and Air Gap**

In the preceding examples the assumption was made that the energy stored within the magnetic core is clearly less than the energy within the air gap. This assumption should now be checked for validity. Calculating the magnetic resistance of an arbitrary element

$$R\_{\rm m} = \frac{l}{\mu \, A},\tag{9.70}$$

the ratio of the magnetic resistances of two elements with identical length and cross-section scales with the inverse ratio of their permeabilities μ:

$$\frac{R\_{\rm m1}}{R\_{\rm m2}} = \frac{\mu\_2}{\mu\_1} \tag{9.71}$$

The relative permeability μ*r* = *B*/(*H* μ0) is given by the ratio of flux density to field strength relative to the magnetic constant. It is nonlinear (Fig. 9.51) for all typical flux-conducting materials in the flux-density range relevant for actuator design, between 0.5 and 2 T, and is identical to the inverse gradient of the curves given in Fig. 9.9. Maximum permeability values are frequently given in tables, but refer to field strengths within the material only. They range from 6,000 for pure iron, over 10,000 for nickel alloys, up to 150,000 for special soft-magnetic materials.

Mechanical processing of flux-conducting materials and the resulting thermal changes within their microstructure result in a considerable degradation of their magnetic properties. This change can be reversed by an annealing process.

Generally speaking, however, even outside an optimum operating point for flux density, the energy stored within typical core materials is always several orders of magnitude below the energy within the air gap. This justifies neglecting this energy portion in rough estimations for actuator design, but also shows that there is potential in the optimization of electromagnetic actuators. This potential can be exploited by the application of FEM software, which is typically available as a module for CAD software.15

<sup>15</sup> Or as free software, e.g. the tool "FEMM" from David Meeker.
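The validity check above can be quantified directly: at equal flux density, the magnetic energy density scales with 1/μ, so a core with μ*r* ≈ 6,000 (pure iron, see the values quoted above) stores three to four orders of magnitude less energy per volume than the air gap:

```python
import math

MU_0 = 4e-7 * math.pi  # magnetic constant in Vs/Am

def energy_density(b: float, mu_r: float) -> float:
    """Magnetic energy density w = B^2 / (2 * mu_r * mu_0) in J/m^3."""
    return b ** 2 / (2 * mu_r * MU_0)

w_gap = energy_density(1.0, mu_r=1.0)      # air gap at 1 T
w_core = energy_density(1.0, mu_r=6000.0)  # pure iron, mu_r ~ 6000
# w_gap / w_core == 6000: the core's share of the stored energy is negligible
```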

**Fig. 9.52** Permanent magnet in the magnetic circuit in shape (**a**), field-lines with inactive coil (**b**) and field-lines with active coil (**c**), releasing the anchor

#### **9.4.2.3 Permanent Magnets in Electromagnetic Actuators**

Permanent magnets do not differ significantly in their properties from coils conducting a DC current. They generate a polarized field, which—in combination with another field—provides attraction or repulsion. For calculating combined magnetic circuits, a first approach is to substitute the sources within the magnetic equivalent circuit (neglecting saturation effects). The calculation is analogous to the methods in the chapter about electrodynamic actuators (Sect. 9.2.1.3). A permanent magnet within a circuit either allows


A good example of a currentlessly held state [6] is the calculation of a polarized magnetic clamp (Fig. 9.52). With the winding inactive, the flux is guided through the upper anchor, which is held securely. With the coil active, the magnetic flux via the upper anchor is compensated. The magnetic bypass above the coil prevents the permanent magnet from being depolarized by a counter-field beyond the kink in the B/H curve.

## *9.4.3 Examples for Electromagnetic Actuators*

Electromagnetic actuators are available in many variants. The following section presents typical designs for each principle and the corresponding commercial products. Knowledge of these designs helps to understand the freedom in the design of electromagnetic circuits more broadly.

**Fig. 9.53** Two-phase stepper motor made of stamped metal sheets and with a permanent magnet rotor in a 3D-sketch (**a**), cross-section (**b**), and with details of the claw poles (**c**). Figure based on [22] © Springer Nature, all rights reserved

#### **9.4.3.1 Claw-Pole Stepper Motor**

The electromagnetic claw-pole stepper motor (Fig. 9.53) is one of the most frequently used rotatory actuation principles. These actuators are made of two stamped metal sheets (1, 2) with the poles—the so-called claws—bent by 90° towards the interior of the motor. The metal sheets are the magnetic core conducting the flux of one coil each (3). The permanent-magnet rotor (4), with a pole subdivision matching the claw pattern, aligns with the claws in the currentless state. In stepper mode the coils are powered sequentially, resulting in a combined attraction and repulsion of the rotor. The coils' currents may be controlled either by simply switching them or by a microstep mode with different current levels interpolated between discrete steps. The latter generates stable states for the rotor not only at the positions of the claws but also in between.

Claw-pole stepper motors are available with varying numbers of poles, different numbers of phases, and for varying loads. As a result of the permanent magnet, they show a large holding torque with respect to their size. The frequency of single steps may reach up to 1 kHz for fast movements. By counting the steps, the position of the rotor can be tracked. Step losses—the fact that no mechanical step happens after a control signal—are not very likely with a carefully designed power chain. Claw-pole stepper motors are the workhorses of electrical automation technology.
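The microstep mode mentioned above can be sketched for a two-phase motor: the electrical angle is interpolated across one full step, and the resulting sine/cosine current pair creates stable rotor positions between the claw positions. This is a generic textbook scheme, not a specific driver implementation:

```python
import math

def microstep_currents(substep: int, substeps_per_step: int,
                       i_max: float) -> tuple[float, float]:
    """Phase currents of a two-phase stepper in microstep mode.
    One full step corresponds to 90 degrees electrical; interpolating
    the angle yields intermediate stable rotor positions."""
    theta = (substep / substeps_per_step) * (math.pi / 2)
    return i_max * math.cos(theta), i_max * math.sin(theta)

i_a0, i_b0 = microstep_currents(0, 8, 1.0)  # full-step position: phase A only
i_a4, i_b4 = microstep_currents(4, 8, 1.0)  # halfway: both phases equal
```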

#### **9.4.3.2 Reluctance Drives**

Rotatory reluctance drives (Fig. 9.54) are based on the electromagnetic longitudinal effect. By clever switching of the windings (2) it is possible to keep the rotor (3) in continuous movement with just minimal torque ripple. To make this possible the rotor has to have fewer poles than the stator. The rotor's pole-angle β*<sup>r</sup>*

**Fig. 9.54** Switched reluctance drive with pole- and coil-layout (**a**), in cross section (**b**), and with flux-lines of the magnetic excitation (**c**). Figure based on [22] © Springer Nature, all rights reserved

is larger than the stator's pole-angle β*s*. Reluctance drives can also be used as stepper motors by the integration of permanent magnets. Generally speaking, the principle excels through the high robustness of its components and a high efficiency factor with—for electromagnetic drives—comparably small torque ripple.

#### **9.4.3.3 Electromagnetic Brakes**

Electromagnetic brakes (Fig. 9.55) are based on the transversal effect. They make use of the steep force increase at electromagnetic attraction to generate friction on a spinning disc (1). For this purpose, usually rotationally symmetrical flux-conducting magnetic cores (2) are combined with embedded coils (3). The frontal area of the magnetic core and the braking disc (1) itself are coated with a special layer to reduce abrasion and to positively influence the reproducibility of the generated torque. The current/torque characteristic of electromagnetic brakes is strongly nonlinear. On the one hand this is the result of the quadratic relation between force and current of the electromagnetic transversal effect; on the other hand it is also a result of the friction pairing. Nevertheless they are used in haptic devices for the simulation of "hard contacts" and stop positions. A broad application in haptic devices is nevertheless not visible. This is likely a result of the limited reproducibility of the generated torque, the resulting complex control of the current, and the fact that they can only be used as a brake (passively) and not for active actuation.

#### **9.4.3.4 Plunger-Type Magnet**

Electromagnetic plunger-type magnets (Fig. 9.56) are frequently based on the electromagnetic transversal effect. Their main uses are switching and control applications requiring actuation between specific states. With a magnetic core frequently made of bent steel sheets (2), a coil-induced (3) flux is guided to a central anchor,

**Fig. 9.55** Electromagnetic brake in cross section (**a**) and as technical realization for an airplane model, by *Flight-Depot.com OHG*, used with permission (**b**)

**Fig. 9.56** Plunger-type magnet **a** with altered force-position curve (4), and realization as pulling anchor **b** with metal-sheet-made magnetic circuit (2)

which itself is attracted by a yoke (4). The geometry of the yoke significantly influences the characteristic curve of the plunger-type magnet. By varying its geometry, a linearization of the force-position curve is possible within certain limits. Even strongly nonlinear pulling-force characteristics can be achieved by such a modification. Plunger-type magnets are available with additional magnets and with multiple coils. In these more complex designs they provide mono- and bistable switching properties. By variation of the wire diameter and the number of turns they can be adapted easily to any power level.

## *9.4.4 Magnetic Actuators in Haptic Devices*

For haptic applications, electromagnetic actuators are mainly used within tactile devices. Nevertheless, admittance-controlled devices can be found providing impressive output forces of high quality even by the use of stepper motors. Beside commercial systems such as the HapticMaster of *Moog FCS* (Sect. 6.7), especially an idea of Lawrence has attracted attention within the past few years.

#### **9.4.4.1 Spring-Tendon Actuator**

In [65] Lawrence describes an inexpensive actuator for kinaesthetic haptic systems based on an electromagnetic stepper motor coupled via a tendon to a pen, with a spring mechanically connected in parallel. As in other haptic devices, the pen is the interface to the user. Between the pen and the tendon and spring there is a bending body with strain gauges (DMS) as a force sensor. To additionally minimize the torque ripple of the stepper drive resulting from the latching of the poles, a high-resolution external encoder has been attached to the motor, and a torque/angle curve was measured. A mathematical spline fit of this curve was used in the actuator's control to compensate the torque oscillations. Beside this compensation, the closed-loop control of the actuator via the force sensor near the pen also includes a compensation of frictional effects. The result of all these efforts is a force source providing a force transmission with little noise and high stiffness up to 75 kN/m with movements of limited dynamics.

#### **9.4.4.2 Electromagnetic Pin Array**

The usage of electromagnetic actuators for the control of single pins in an array design is very frequent. The earliest uses for haptic applications go back to the printer heads of dot-matrix printers used in the 1980s and early 1990s. Modern designs are a lot more specific to haptics and make use of manufacturing technologies available from microtechnology. In [66] an actuator array is shown made of coils with 430 windings each and a 0.4 mm wide iron core. Above it, a magnet is embedded in a polymer layer, being attracted by the flux induced into the core. With such an actuator of 2 mm diameter a maximum force of up to 100 mN is possible. Position measurement is realized via inductive feedback. Further realizations of tactile arrays based on electromagnetic actuators and different manufacturing techniques can be found in the work of Streque et al. [67].

#### **9.4.4.3 Electromagnetic Plunger-Type Magnet for the Tactile Transmission of Speech Information**

One fundamental motivation for the design of haptic devices is the partial substitution of lost senses. Especially methods to communicate information from the sense of sight or hearing with the aid of tactile devices have some tradition. Blume designed and tested an electromagnetic plunger-type magnet based on the reluctance effect in 1986 at the University of Technology, Darmstadt. Such actuators were attached to the forearm and stimulated up to eight points by mechanical oscillations encoded from speech signals. The actuator (Fig. 9.57) was made of two symmetrical plunger-type magnets (on the horizontal axis) acting upon a flux-conducting element integrated into the plunger. The whole anchor takes a symmetrical position within the actuator due to the integrated permanent magnet. In this symmetrical position both magnetic circuits conduct a magnetic flux resulting in identical reluctance forces. In active

**Fig. 9.57** Electromagnetic actuator according to the reluctance principle in a "counteractive plunger type" design with permanent magnet: cross-section (**a**) and design incl. driver-electronics (**b**) [68]

mode, either the flux in the upper or the lower magnetic circuit is amplified, depending on the direction of current flow. The reluctance forces on the amplified side pull the anchor into a current-proportional position, working against the magnetic pretension from the permanent magnet, the mechanical pretension from the springs, and the load of the actuator. The plunger is displaced in the direction of the weakened magnetic field. At a diameter of 20 mm this actuator covers a dynamic range of 500 Hz at an efficiency factor of 50%. The forces lie in the range of ≈4 N per ampere.

## *9.4.5 Conclusion About the Design of Magnetic Actuators*

Electromagnetic actuators are—like electrodynamic systems—mainly force sources. In rotary drives, especially the reluctance effect is used to generate a continuous movement. In linear drives, mainly plunger-type magnets based on the nonlinear transversal effect are used, although there are exceptions to both showing some surprising properties (Sect. 9.4.4.3). The translational systems are usually used either as bistable switches between two discrete states or monostably against a spring (plunger-type magnet, brake, and valve). There are applications within haptics based on either of the two effects. Whereas reluctance-based actuators can be found equally often within kinaesthetic applications as drives, in admittance-controlled applications, and in tactile systems as vibration motors, switching actuators are almost exclusively found in tactile devices with single pins or pin arrays. In contrast to the highly dynamic electrodynamic drives, electromagnetic actuators excel in less dynamic applications with higher requirements on torque and self-holding. During switching between two states, however, the acceleration and deceleration at the mechanical stop are a highly dynamic but almost uncontrollable action. The dynamic design of switching actions was not the subject of this chapter, but is usually based on modeling the electromagnet as a nonlinear force source and the moving parts as lumped masses, springs, and dampers. Due to the relatively high masses of their moving parts, the hard-to-control nonlinearities of fluxes and forces, and the low efficiency factor of the transversal effect in many designs, electromagnetic actuators occupy only niches within haptic applications. However, in those niches there is no way around their usage. Once an appropriate area has been found, they excel through an extremely high efficiency factor for the specific design and a great robustness against exterior influences.

## **9.5 Electrostatic Actuators**

Henry Haus and Marc Matysek

Electrostatic transformers belong to the group of electric transformers, as do piezoelectric actuators. *Electric transformers* show a direct coupling between the electrical and the mechanical quantities. This is contrary to electrodynamic and electromagnetic actuators, which show an intermediate transformation into magnetic quantities as part of the actuation process. In principle the transformation may be used in both directions; hence, all these actuators can be used as sensors as well.

Electrostatic field actuators are utilized due to their simple design and low power consumption. As a result of the technical progress of micro-engineering, the advantages of integrated designs are fully utilized. Especially for miniaturized systems, electrostatic field actuators gain increased importance compared to all other actuator principles. This is all the more surprising as their energy density is significantly lower in macroscopic designs. But during miniaturization, the low efficiency factor and the resulting power loss and heat become limiting factors for magnetic actuators [69].

An important subgroup of electrostatic field actuators is given by *solid-state actuators* with an elastomeric dielectric. The dielectric has a high breakdown field strength compared to air, builds the substrate of the electrodes, and can simultaneously provide an insulating housing.

Beside the classic field actuators mentioned above, *electro-rheological fluids* belong to the electrostatic actuators as well. In these actuators, an electric field from an external source changes the physical (rheological) properties of the fluid.

## *9.5.1 Definition of the Electric Field*

The following paragraphs define the electric field and relevant variables for the design of electrostatic actuators.

#### **9.5.1.1 Force on Charge**

The magnitude of the force *F* acting between two charges *Q*<sub>1</sub> and *Q*<sub>2</sub> at a distance *r* is given by Coulomb's law (Eq. (9.72)).

$$F = \frac{1}{4\pi\varepsilon\_0} \frac{\mathcal{Q}\_1 \mathcal{Q}\_2}{r^2} \tag{9.72}$$
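As a quick numerical check of Eq. (9.72), the following sketch evaluates the Coulomb force for two small example charges; the charge and distance values are assumed for illustration only.

```python
import math

EPS_0 = 8.854e-12  # electric constant in C/(V*m)

def coulomb_force(q1, q2, r):
    """Magnitude of the Coulomb force (Eq. 9.72) between charges q1, q2 (C) at distance r (m)."""
    return q1 * q2 / (4 * math.pi * EPS_0 * r**2)

# Two charges of 1 nC at 1 cm distance (illustrative values):
F = coulomb_force(1e-9, 1e-9, 0.01)
print(f"F = {F:.3e} N")  # on the order of 1e-4 N
```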

#### **9.5.1.2 Electric Field**

The electric field *E* describes the space where these forces are present. The field strength is defined as the relation of the force **F** acting on the charge in the field and the charge's magnitude *Q*.

$$\mathbf{E} = \frac{\mathbf{F}}{\mathcal{Q}}\tag{9.73}$$

The charges cause the electric field; the forces on the charges within an electric field are the effect. Cause and effect are proportional. With the electric constant ε<sub>0</sub> = 8.854 · 10<sup>−12</sup> C/Vm in vacuum and air, Eq. (9.74) results:

$$\mathbf{D} = \varepsilon\_0 \mathbf{E} \tag{9.74}$$

The electric displacement field **D** describes the ratio of the bound charges to the area of the charges. Its direction is given by the electric field, pointing from positive to negative charges. If the electric field region is filled with an insulating material (dielectric), part of the electric displacement field is bound by the polarization of the dielectric. Accordingly, the field strength drops from *E*<sub>0</sub> to *E* (with the electric displacement field unchanged). The ratio of the weakened field depends on the maximum polarization of the dielectric and is called relative permittivity ε*<sub>r</sub>* = *E*<sub>0</sub>/*E*.

#### **9.5.1.3 Capacity**

The electrical capacity is defined as the ratio of the charge *Q* on each conductor to the voltage *U* between them. A parallel-plate capacitor with oppositely charged plates of surface *A* at a fixed distance *d* shows a capacity *C* depending on the dielectric:

$$C = \frac{\mathcal{Q}}{U} = \varepsilon\_0 \varepsilon\_r \frac{A}{d} \tag{9.75}$$

#### **9.5.1.4 Energy Storage**

Work must be done by an external influence to move charges between the conductors in a capacitor. When the external influence is removed, the charge separation persists and energy is stored in the electric field. If charge is later allowed to return to its equilibrium position, the energy is released. The work done in establishing the electric field, and hence the amount of energy stored, is given by Eq. (9.76) and for the parallel-plate capacitor by the use of Eq. (9.75) according to Eq. (9.77).

$$W\_{el} = \frac{1}{2}CU^2 = \frac{1}{2}\frac{\mathcal{Q}^2}{C} \tag{9.76}$$

$$W\_{el} = \frac{1}{2} \varepsilon\_0 \varepsilon\_r \frac{A}{d} U^2 \tag{9.77}$$

This stored electric energy can be used to perform mechanical work according to Eq. (9.78).

$$W\_{mech} = F\mathbf{x} \tag{9.78}$$
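Equations (9.75) to (9.77) can be combined into a short numeric sketch; the plate area, gap, and voltage below are assumed example values, not taken from a specific device.

```python
EPS_0 = 8.854e-12  # electric constant in C/(V*m)

def capacitance(area, distance, eps_r=1.0):
    """Parallel-plate capacity C = eps_0 * eps_r * A / d (Eq. 9.75)."""
    return EPS_0 * eps_r * area / distance

def stored_energy(area, distance, voltage, eps_r=1.0):
    """Stored electric energy W_el = 1/2 * C * U^2 (Eqs. 9.76, 9.77)."""
    return 0.5 * capacitance(area, distance, eps_r) * voltage**2

# 1 cm^2 plates, 10 um air gap, 100 V (illustrative values):
A, d, U = 1e-4, 10e-6, 100.0
print(f"C = {capacitance(A, d):.3e} F")
print(f"W_el = {stored_energy(A, d, U):.3e} J")
```

The tiny stored energy (sub-microjoule here) illustrates why electrostatic air-gap drives are mainly attractive at small scales.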

## *9.5.2 Designs of Capacitive Actuators with Air-Gap*

A preferred setup of electrostatic actuators is given by parallel-plate capacitors with air-gap. In these designs one electrode is fixed to the frame, while the other one is attached to an elastic structure, enabling the almost free movement in the intended direction (DoF). All other directions are designed stiff enough to prevent a significant displacement of this electrode. To perform physical work (displacement of the plate) the energy of the electric field according to Eq. (9.77) is used. Considering the principle design of these actuators two basic variants can be distinguished: the displacement may result in a change of the distance *d*, or the overlapping area *A*. Both variants are subject of discussion in the following paragraphs.

#### **9.5.2.1 Movement Along Electric Field**

Looking at the parallel-plate capacitor from Fig. 9.58, the capacity *C*<sub>*L*</sub> can be calculated with

$$C\_L = \varepsilon\_0 \cdot \frac{A}{d} \tag{9.79}$$

As shown before, the stored energy *W*<sub>*el*</sub> can be calculated for an applied voltage *U*:

$$W\_{el} = \frac{1}{2}CU^2 = \frac{1}{2}\varepsilon\_0 \frac{A}{d}U^2\tag{9.80}$$

**Fig. 9.58** Parallel-plate capacitor with air-gap

The force between both plates in z-direction can be derived by the principle of virtual displacement:

$$F\_{z,el} = \frac{\partial W}{\partial z} = \frac{1}{2}U^2 \frac{\partial C}{\partial z} \tag{9.81}$$

$$\mathbf{F}\_{z,el} = -\varepsilon\_0 \frac{A}{2d^2} U^2 \mathbf{e}\_z \tag{9.82}$$

The inhomogeneities of the electric field at the edges of the plates are neglected in this calculation, which is an acceptable approximation for the given geometry of a large plate surface *A* and a comparably small plate distance *d*. A spring pulls the moving electrode into its idle position; consequently, the actuator has to work against this spring. The schematic sketch of this actuator is shown in Fig. 9.59. The minimum plate distance is limited by the thickness of the insulation layer *d*<sub>*I*</sub>. Analyzing the balance of forces according to Eq. (9.83), the interdependency of displacement *z* and electrical voltage *U* can be calculated:

$$F\_z(z) = F\_{spring}(z) + F\_{z,el}(U, z) = 0\tag{9.83}$$

$$-k \cdot z - \frac{1}{2} \varepsilon\_0 A \frac{U^2}{(d+z)^2} = 0\tag{9.84}$$

$$U^2 = -2\frac{k}{\varepsilon\_0 A} (d+z)^2 \cdot z \tag{9.85}$$

Analyzing the electrical voltage *U* in dependence of the displacement *z*, a maximum can be identified:

$$\frac{\mathrm{d}U^2}{\mathrm{d}z} = -2\frac{k}{\varepsilon\_0 A}(d^2 + 4dz + 3z^2) = 0\tag{9.86}$$

$$z^2 + \frac{4}{3}dz + \frac{1}{3}d^2 = 0$$

**Fig. 9.59** Schematic setup of an actuator with variable air-gap

$$z\_1 = -\frac{1}{3}d; \; z\_2 = -d \tag{9.87}$$

To use the actuator in a stable state, the force of the retaining spring has to be larger than the attractive force between the charged plates. This condition is fulfilled for displacements *z* with

$$0 > z > -\frac{1}{3}d$$

For smaller plate distances the attractive force exceeds the retaining force and the moving plate is pulled strongly onto the fixed plate ("pull-in" effect). As this would immediately result in an electrical short circuit, typical designs include an insulating layer on at least one plate. Equations (9.85) and (9.87) yield the operating voltage for the pull-in:

$$U\_{pull-in} = \sqrt{\frac{8}{27} \frac{k}{\varepsilon\_0 A} d^3} \tag{9.88}$$

The retention force needed to keep this state is much less than the force at the point of pull-in. Note that the force increases quadratically with decreasing distance; a boundary value analysis for *d* → 0 yields *F* → ∞. Consequently, the insulation layer also serves as a force limiter.
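The pull-in condition of Eqs. (9.87) and (9.88) can be sketched numerically; the spring constant, plate area, and gap below are assumed example values.

```python
import math

EPS_0 = 8.854e-12  # electric constant in C/(V*m)

def pull_in_voltage(k, area, d):
    """Pull-in voltage of an air-gap actuator with return spring (Eq. 9.88)."""
    return math.sqrt(8.0 / 27.0 * k * d**3 / (EPS_0 * area))

# Illustrative values: k = 100 N/m, A = 1 cm^2, d = 10 um
k, A, d = 100.0, 1e-4, 10e-6
U_pi = pull_in_voltage(k, A, d)
print(f"U_pull-in = {U_pi:.2f} V")
print(f"stable travel limit = {d / 3 * 1e6:.2f} um")  # stable only down to z = -d/3
```

The stable stroke is only one third of the initial gap, which is why such actuators trade displacement for force.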

#### **9.5.2.2 Moving Wedge Actuator**

A special design of air-gap actuators with varying plate distance is the moving wedge actuator. To increase the displacement, a bent flexible counter-electrode is placed on a base electrode covered with a non-conductive layer. The distance between

**Fig. 9.60** Schematic view of a moving wedge actuator

the electrodes increases wedge-like from the fixation to the free end. The resulting electric field is strongest where the flexible electrode is closest to the counter-electrode and decreases with increasing air-gap. When designing the stiffness of the flexible electrode it has to be guaranteed that it can roll along the tightest part of the wedge on the insulation. Figure 9.60 shows the underlying principle in idle state and during operation [70].

#### **9.5.2.3 Movement Perpendicular to Electric Field**

The major difference to the prior design is that the plates move parallel to each other. The plate distance *d* is kept constant, whereas the overlapping area varies. Analogous to Eq. (9.80), the forces for the displacement can be calculated in both directions of the plane:

$$F\_x = \frac{\partial W}{\partial x} = \frac{1}{2} U^2 \frac{\partial C}{\partial x} \tag{9.89}$$

$$\mathbf{F}\_x = \frac{1}{2}\varepsilon\_0 \frac{b}{d} U^2 \mathbf{e}\_x \tag{9.90}$$

$$F\_\mathbf{y} = \frac{\partial W}{\partial \mathbf{y}} = \frac{1}{2} U^2 \frac{\partial C}{\partial \mathbf{y}} \tag{9.91}$$

$$\mathbf{F}\_{\mathbf{y}} = \frac{1}{2} \varepsilon\_0 \frac{a}{d} U^2 \mathbf{e}\_{\mathbf{y}} \tag{9.92}$$

The forces are independent of the overlapping length; consequently, they are constant for every actuator position. Figure 9.61 shows the moving electrode attached to a retaining spring.

If an electrical voltage is applied to the capacitor, the overlapping surface *A* increases along the edge *a*. Hence, the spring is deflected and generates a counter force **F**<sub>*F*</sub> according to

$$\mathbf{F}\_F = -kx \,\mathbf{e}\_x \tag{9.93}$$

**Fig. 9.61** Electrostatic actuator with variable overlapping area

The equilibrium of forces acting upon the electrode is given by

$$F\_{\mathbf{x}}(\mathbf{x}) = F\_F(\mathbf{x}) + F\_{\mathbf{x},el}(U) \tag{9.94}$$

Starting from the idle position, the equilibrium (*F*<sub>*x*</sub>(*x*) = 0) yields the displacement of the electrode in x-direction:

$$x = \frac{1}{2} \varepsilon\_0 U^2 \frac{b}{d} \frac{1}{k} \tag{9.95}$$

Typically, this design is realized as a comb-like structure, with one comb of electrodes engaging a counter-electrode comb. This corresponds to an electrical parallel circuit of *n* capacitors, and thus to a parallel circuit of force sources complementing each other. Figure 9.62 shows such a design. The overlapping electrode area is given by *a* in x-direction and *b* in y-direction. With the plate distance *d*, the capacity according to Eq. (9.96) can be calculated.

$$C\_{\mathcal{Q}} = \varepsilon\_0 \cdot \frac{ab}{d} \cdot n \tag{9.96}$$

By differentiating the energy with respect to the movement direction, the electromotive force can be calculated:

$$F\_x = \frac{\partial W}{\partial x} = \frac{1}{2}U^2 \frac{\partial C}{\partial x} = \frac{1}{2}U^2 \varepsilon\_0 \frac{b}{d} \cdot n \tag{9.97}$$
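The scaling of Eqs. (9.95) and (9.97) can be illustrated with a short sketch; the finger geometry, count, and spring constant are assumed example values for a MEMS-scale comb drive.

```python
EPS_0 = 8.854e-12  # electric constant in C/(V*m)

def comb_force(U, b, d, n):
    """In-plane force of n parallel comb-finger capacitors (Eq. 9.97).

    Independent of the overlap length; b is the finger width (overlap
    in y), d the gap between fingers."""
    return 0.5 * EPS_0 * U**2 * b / d * n

# Illustrative values: 100 V, 50 um finger width, 2 um gap, 100 fingers
U, b, d, n = 100.0, 50e-6, 2e-6, 100
k = 10.0  # retaining spring constant in N/m (assumed)
F = comb_force(U, b, d, n)
x = F / k  # static deflection against the spring, analogous to Eq. (9.95)
print(f"F = {F * 1e6:.1f} uN, x = {x * 1e6:.2f} um")
```

Because the force does not depend on the overlap, the deflection is set purely by the balance of comb force and spring stiffness.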

#### **9.5.2.4 Summary and Examples**

For all actuators shown so far, the electrostatic force acts indirectly on the user and is transmitted by a moveable counter-electrode. A much simpler design of tactile displays uses the user's skin as counter electrode, which then experiences the

**Fig. 9.62** Actuator with comb-electrodes and variable overlapping area

whole electrostatic field force. Accordingly, tactile electrostatic applications can be distinguished into direct and indirect principles.

## **Direct Field Force**

The simplest design combines one electrode, or a structured electrode array, with an insulating layer. A schematic sketch is given in Fig. 9.63. The user's finger serves as the counter electrode. The attractive force between the conductive skin and the electrodes produces a locally distributed increase of friction, which hinders relative movement and can be perceived by the user. Such systems can be realized easily and miniaturized excellently. Their biggest disadvantage is their sensitivity to humidity on the surface, which is brought onto the electrodes in the form of sweat during any use. This either blocks the electric field by the conductive watery layer above the insulation, or contributes by capillary effects to the field formation and the resulting perceived impression [71]. Nevertheless, the compactness of such solutions and the possibility to combine them with existing touchscreen concepts have led to relevant industrial and research interest. In particular, the group of Colgate conducted some very fundamental studies comparing ultrasonic and electroadhesive vibrotactile effects and their perceptional basis with respect to the physical domains involved [72].

## **Indirect Field Force**

In these systems the field force is used to move an interacting surface. The user's finger interacts with these surfaces (sliders) and experiences their movements as a perceivable stimulation. A realization with a comb of actuators moving orthogonally to the field direction is given in Fig. 9.64. The structural height is 300 µm, providing 1 mN at operating voltages of up to 100 V. The same design with an actuator made of parallel electrodes can achieve displacements of 60 µm. The comb-electrodes shown here displace 100 µm.

**Fig. 9.63** Electrostatic stimulator with human finger as counter electrode [73], own visualization

**Fig. 9.64** Electrostatic comb-actuator for tangential stimulation [74], own visualization

#### **Electrostatic Brake**

Although the field of electrostatic actuators is dominated by texture simulation using direct field forces and, for kinaesthetic systems, by the application of electroactive polymers (EAP, Sect. 9.5.4), there are still surprising solutions using this concept. One was shown by Hinchet et al. with the system *DextrES* for virtual reality applications, where an electrostatic force applied between two conducting elements increases friction and thereby creates a strong sensation of contact for finger manipulations in virtual space. Forces >20 N can be realized with the system, although voltages in the range of 1.5 kV are required. The dynamics of the system proved clearly beneficial in a study conducted to explore a VR scenario [75].

## *9.5.3 Active Skin*

One of the significant trends in robotics research is how to incorporate haptic feedback into human-machine interaction. Virtual reality, neurophysiology, and biomedical engineering require a haptic interface as a primary system function. In this work a new haptic interface, the active skin, is designed using a tactile sensor and a tactile stimulator. By synchronizing the sensor and the stimulator, it generates a wide variety of haptic sensations in response to touch (Fig. 9.65).

Integration combines the tactile stimulator and the tactile sensor of this active skin into a single haptic unit. The tactile sensor layer is located on top of the tactile stimulator.

**Fig. 9.65** Configuration of the layers and working principle of the active skin [76] © (2010) Society of Photo-Optical Instrumentation Engineers (SPIE), all rights reserved

In this design, a dielectric elastomer layer is sandwiched between two conductive electrode layers. Two protective layers on the top and the bottom of the active skin protect the system from damage. Figure 9.65 shows the layer structure.

The interaction part of the active skin is shown in Fig. 9.65. The tactile and capacitance sensors detect contact with an external object as well as the contact position. The corresponding tactile stimulator is actuated according to the detected force. To represent different touch sensations, each cell of the active skin can be controlled independently. Therefore, as shown in Fig. 9.65, the position and magnitude of the force and the touch sensation are determined using different sensors.

#### **Summary**

Electrostatic drives with air gap achieve forces in the range of mN to N. As the actuators are field-driven, the compromise between plate distance and electrical operating voltage has to be evaluated for each individual application. The breakdown field strength of air (approx. 3 V/µm) is the upper limiting factor. The actuators' displacement is limited to several µm, while the operating voltages reach several hundred volts. Due to the very low achievable displacement, the application of electrostatic actuators is limited to tactile stimulation only.

For the concrete actuator design it is recommended to model such actuators, e.g. based on concentrated network parameters (see Sect. 4.3.2). This allows the analysis of the complete electromechanical system, from the applicable mechanical load situation to the electrical control, with a single methodological approach.

## *9.5.4 Dielectric Elastomer Actuators*

As in many other areas, new synthetic materials replace classic materials such as metals in actuator design. Thanks to the enormous progress in material development, their mechanical properties can be adjusted to a large spectrum of possible applications. Other big advantages are the very low material costs. Additionally, almost any geometrical shape can be manufactured with relatively small effort.

Polymers are called "active polymers" if they are able to change their shape under the influence of external parameters. The causes for these changes may be manifold: electric and magnetic fields, light, and even pH value. When used in actuators, their resulting mechanical properties such as elasticity, applicable force, and deformation at simultaneously high toughness and robustness are quite comparable to biological muscles [77].

To classify the large variety of "active polymers", they are usually distinguished according to their physical working principle: "non-conductive polymers", activated e.g. by light, pH value or temperature, and "electrical polymers", activated by an electrical source. The latter are called "electroactive polymers" (EAP) and are further distinguished into "ionic" and "electronic" EAPs. Generally speaking, electronic EAP are operated at preferably high field strengths near the breakdown field strength. Depending on the layer thickness of the dielectric, 1−20 kV are typical operating voltages. Consequently, very high energy densities at low reaction times (in the range of milliseconds) can be achieved. In contrast, ionic EAP are operated at considerably lower voltages of 1−5 V. However, an electrolyte is necessary for the transportation of the ions, frequently provided by a liquid solution. Such actuators are typically realized as bending bars, achieving large deformations at their tip with long reaction times (several seconds).

All EAP technologies are subject of current research and fundamental development. However, two actuator types are already used in robotics: "ionic polymer metal composites" (IPMC) and "dielectric elastomer actuators" (DEA). A summary and description of all EAP types is offered by Kim [78]. Their functionality is discussed further in the following paragraphs, as they belong to the group of electrostatic actuators. A comparison of characteristic values of dielectric elastomer actuators and human muscle is shown in Table 9.6. By the use of an elastomer actuator with large expansion, additional mechanical components such as gears or bearings become unnecessary. Additionally, these materials may be combined into complex designs similar to, and inspired by, nature. One application is e.g. the locomotion of insects and fish within bionic research [79].

#### **9.5.4.1 Dielectric Elastomer Actuators—Electrostatic Solid State Actuators**

The design of dielectric elastomer actuators is identical to the design of a parallel-plate capacitor, but with an elastic dielectric (a polymer or elastomer, respectively)


**Table 9.6** Comparison of human muscle and DEA according to Pei [80]

**Fig. 9.66** DEA in initial state (left) and charged state (right)

sandwiched by compliant electrodes. Hence, it is a solid state actuator. The schematic design of a dielectric elastomer actuator is visualized in Fig. 9.66, left. In the uncharged condition the capacity and the stored energy are identical to an air-gap actuator (Eqs. (9.75) and (9.76)). This condition changes with the application of a voltage *U*, as visualized in Fig. 9.66, right: the charged capacitor contains more charges (*Q* + Δ*Q*), the electrode area increases (*A* + Δ*A*), and the distance (*z* − Δ*z*) simultaneously decreases. The change of energy after an infinitesimal change d*Q*, d*A* and d*z* is calculated in Eq. (9.98):

$$\mathrm{d}W = \left(\frac{\mathcal{Q}}{C}\right)\mathrm{d}\mathcal{Q} + \left(\frac{1}{2}\frac{\mathcal{Q}^2}{C}\frac{1}{z}\right)\mathrm{d}z - \left(\frac{1}{2}\frac{\mathcal{Q}^2}{C}\frac{1}{A}\right)\mathrm{d}A\tag{9.98}$$

$$\mathbf{d}\,W = U\mathbf{d}Q + W\left[\left(\frac{1}{z}\right)\mathbf{d}z - \left(\frac{1}{A}\right)\mathbf{d}A\right] \tag{9.99}$$

The internal energy change equals the change of the electrical energy supplied by the voltage source plus the mechanical energy used. The latter depends on the geometry (parallel (d*z*) and normal (d*A*) to the field's direction). In comparison to the air-gap actuator in Sect. 9.5.2, a superposition of decreasing distance and increasing electrode area occurs. This is caused by a material property common to all elastomers and to almost all polymers: volume constancy. A body compressed in one direction will extend in the remaining two dimensions if it is incompressible. This gives a direct relation between the change of distance and the change of electrode area. As a consequence, Eq. (9.100) results

$$A\,\mathrm{d}z = -z\mathrm{d}A\tag{9.100}$$

simplifying Eq. (9.99) to

$$\mathrm{d}W = U\mathrm{d}\mathcal{Q} + 2W\left(\frac{1}{z}\right)\mathrm{d}z\tag{9.101}$$

The resulting attractive force of the electrodes can be derived from this electrical energy. With respect to the electrode surface *A*, the electrostatic pressure *p*<sub>*el*</sub> at d*Q* = 0 is given according to Eq. (9.102)

$$p\_{el} = \frac{1}{A} \frac{\text{d}W}{\text{d}z} = 2W \frac{1}{Az} \tag{9.102}$$

and by the application of Eq. (9.76)

$$p\_{el} = 2\left(\frac{1}{2}\varepsilon\_0 \varepsilon\_r A z \frac{U^2}{z^2}\right)\frac{1}{Az} = \varepsilon\_0 \varepsilon\_r E^2 \tag{9.103}$$

Comparing this result with Eq. (9.81) as a reference for a pressure of an air-gap actuator with variable plate distance, dielectric elastomer actuators are capable of generating a pressure twice as high with otherwise identical parameters [81].
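Equation (9.103) lends itself to a quick numeric sketch of the achievable actuation pressure; the film thickness, permittivity, and voltage below are assumed example values in the range reported for silicone films.

```python
EPS_0 = 8.854e-12  # electric constant in C/(V*m)

def dea_pressure(eps_r, U, z):
    """Electrostatic (Maxwell) pressure on a dielectric elastomer film
    (Eq. 9.103): p_el = eps_0 * eps_r * E^2, with field E = U / z."""
    E = U / z
    return EPS_0 * eps_r * E**2

# Illustrative values: 50 um film, eps_r = 3 (silicone-like), 2 kV
eps_r, U, z = 3.0, 2000.0, 50e-6
p = dea_pressure(eps_r, U, z)
print(f"E = {U / z / 1e6:.0f} V/um, p_el = {p / 1e3:.1f} kPa")
```

Pressures of some tens of kPa at fields well below typical breakdown strengths illustrate why DEAs reach muscle-like stress levels (cf. Table 9.6).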

Additional reasons for the clearly increased performance of dielectric elastomer actuators lie in their material. The relative permittivity is ε*<sub>r</sub>* > 1, typically ε*<sub>r</sub>* = 3−10 depending on the material. By chemical processing and the addition of fillers the relative permittivity may be increased. However, other parameters (such as the breakdown field strength and the E-modulus) may deteriorate, so the positive effect of the increased ε*<sub>r</sub>* is possibly lost. The breakdown field strength in particular is one of the most limiting factors. With many materials an increase in breakdown field strength can be observed after planar prestrain; in these cases breakdown field strengths of 100−400 V/µm are typical [82].

The pull-in effect does not happen at *z* = 1/3 · *z*<sub>0</sub> as with air-gap actuators, but at much larger deflections. With some materials, mechanical prestrain of the actuator displaces the pull-in even further, so that the breakdown field strength is reached first. The reason for this surprising property is the volume-constant dielectric layer with its viscoelastic properties: it acts as a return spring with strongly nonlinear force-displacement characteristics for large extensions, whose working point is displaced along the stress-strain curve of the material by the mechanical prestrain.

Many materials may be used in dielectric elastomer actuators. Their properties cover an extremely wide spectrum, ranging from gel-like polymers up to relatively rigid thermoplastics. Generally speaking, every dielectric material has to provide a high breakdown field strength and elasticity besides a high relative permittivity. Silicone provides the highest deformation velocities and a high temperature resistance; acrylics have a high breakdown field strength and achieve higher energy densities. The following list is a selection of the dielectric materials most frequently used today:

• silicone

– HS 3 (Dow Corning)
– CF 19-2186 (Nusil)
– Elastosil P7670 (Wacker)
– Elastosil RT625 (Wacker)

• acrylics

VHB 4910 (3M)

The most frequently used materials for the elastic electrodes are graphite powder, conductive carbon, and carbon grease.

## *9.5.5 Designs of Dielectric Elastomer Actuators*

As mentioned before, dielectric elastomer actuators achieve high deformations (compression in field direction) of 10−30%. To keep voltages within reasonable limits, layer thicknesses of 10−100 µm are used, depending on the breakdown field strength. The resulting absolute displacement in field direction is too low to be useful; consequently, there are several concepts to increase it. Two principal movement directions are distinguished for this purpose: the longitudinal effect parallel to the field (thickness change), and the transversal effect orthogonal to the field (surface area change). The importance of this distinction lies in the volume constancy of the material: a uni-axial pressure load equals a bi-axial tension load in the remaining spatial directions. Hence, two transversal tensions within the surface result in a surface change. For materials fulfilling the concept of volume constancy, Eq. (9.104) is valid, relating the longitudinal compression *S*<sub>*z*</sub> and the transversal elongation *S*<sub>*x*</sub>:


$$S\_x = \frac{1}{\sqrt{1 - S\_z}} - 1\tag{9.104}$$

The extension of the surface area *S*<sub>*A*</sub> depends on the longitudinal compression *S*<sub>*z*</sub> according to Eq. (9.105):

$$S\_A = \frac{dA}{A} = \frac{S\_z}{1 - S\_z} \tag{9.105}$$
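The strain relations of Eqs. (9.104) and (9.105) can be tabulated for a few compression values to make the area-over-thickness advantage explicit; this is a minimal sketch under the volume-constancy assumption.

```python
import math

def transversal_strain(s_z):
    """Transversal elongation S_x for longitudinal compression S_z (Eq. 9.104)."""
    return 1.0 / math.sqrt(1.0 - s_z) - 1.0

def area_strain(s_z):
    """Relative surface area change S_A for longitudinal compression S_z (Eq. 9.105)."""
    return s_z / (1.0 - s_z)

for s_z in (0.1, 0.2, 0.3):
    print(f"S_z = {s_z:.1f}: S_x = {transversal_strain(s_z):.3f}, "
          f"S_A = {area_strain(s_z):.3f}")
```

For every compression value, S_A exceeds S_z, which is why transversal (area-change) designs are the most effective.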

**Fig. 9.67** Typical designs of dielectric elastomer actuators: roll-actuator (left), stack-actuator (center) and diaphragm-actuator (right)

The increase of the area at uni-axial compression is always larger than the change of thickness; actuators built according to this principle are the most effective ones. Figure 9.67 shows three typical designs. A roll-actuator (left), built as a full or tubular cylinder, can achieve length changes of more than 10%. Kornbluh [83] describes an acrylic roll-actuator achieving a maximum force of 29 N at a weight of no more than 2.6 g and an extension of 35 mm. The manufacture of electrodes with a large area is very simple; on the other hand, the rolling of the actuators under simultaneous pre-strain (up to 500%) can be very challenging. With a stack-actuator (center), very thin dielectric layers down to 5 µm with minimized operating voltages can be achieved, depending mainly on the manufacturing technique [84]. As the longitudinal effect is used, extension is limited to approximately 10%. However, due to their design and fabrication process, actuator arrays of high density can be built [85], typically offering lifetimes of more than 100 million cycles depending on their electrical interconnection [86]. The simplest and most effective designs are based on a restrained foil whose complete surface change is transformed into an up-arching (diaphragm-actuator, right) [87]. If this actuator experiences a higher external load, such as from a finger, an additional force source, e.g. a pressure, has to be provided to support the actuator's own tension.

#### **9.5.5.1 Summary and Examples**

As with air-gap actuators, a dielectric solid state actuator's major limit is the breakdown field strength of the dielectric. However, in contrast to air-gap actuators, a carefully chosen design can easily avoid any pull-in effect. Consequently, these actuators show a larger workspace, and with the high number of different design variants a wide variety of applications can be realized, depending on the requirements on displacement, maximum force and actuator density.

#### **Tactile Displays**

The simplest application of a tactile display is a Braille device. Such devices display Braille letters as patterns of small, embossed dots. In standard Braille six dots are used, in computer-compatible Euro-Braille eight dots.

**Fig. 9.68** Presenting a Braille sign with roll-actuators, left: geometry, right: schematic setup of a Braille row [88] © Elsevier, all rights reserved

These dots are arranged in a 2×3 or 2×4 matrix, respectively (Fig. 9.68). In a display device 40 to 80 characters are displayed simultaneously. In state-of-the-art designs each dot is actuated by one piezoelectric bending actuator (Sect. 9.3.5). This technical effort is the reason for the high price of these devices. As a consequence, several functional samples exist which prove the applicability of less expensive drives with simplified mechanical designs but still sufficient performance. Each of the three variants of dielectric elastomer actuators has already been used for this application.

Figure 9.68 shows the schematic sketch of roll-actuators formed into an actuator column [88]. Each roll-actuator moves one pin, which is pushed up above the base plate when a voltage is applied. The elastomer film is coiled around a spring of 60 mm length with a diameter of 1.37 mm. With an electric field of 100 V/µm applied, the pre-tensioned spring can achieve a displacement of 1 mm at a force of 750 mN. The underlying force source is the spring with a spring constant of 225 N/m, pre-tensioned by a passive film. The maximum necessary displacement of 500 µm is achieved at field strengths of 60 V/µm.

The application of stack actuators according to Jungmann [85] is schematically sketched in Fig. 9.70, left. The biggest advantage of this variant is the extremely high actuator density combined with a simple manufacturing process. Additionally, the closed silicone elements are flexible enough to be mounted on almost arbitrarily formed surfaces. The surface, itself made of silicone, shows an adequate roughness and thermal conductivity and is perceived as "convenient" by many users. With a field strength of 30 V/µm, a stack made of 100 dielectric layers achieves displacements of 500 µm. The load of a reading finger on the soft substrate generates a typical contact pressure of 4 kPa, resulting in a compression of 25 µm. This compression is considerably less than the perception threshold of 10% of the maximum displacement. For the control of the array it has to be noted that the actuators are displaced in a negative logic: with applied voltage the individual pin is pulled downwards.

A remote control providing tactile feedback based on the same type of stack actuators is presented in [89]. The mobile user interface consists of five independent

**Fig. 9.69** Tactile feedback enhanced PC-mouse: CAD model (**a**), and demonstration device (**b**) [90] © (2014) Society of Photo-Optical Instrumentation Engineers (SPIE), all rights reserved

actuating elements. Besides presenting tactile feedback, the stack transducers allow acquiring the user's input using the transducers' intrinsic sensor functionality. The actuators are driven with a voltage of up to 1100 V generated from a primary lithium-ion battery cell. A free-form touchpad providing tactile feedback to the human palm is described in [90]. Four actuators are integrated into a PC mouse to enhance the user experience and substitute visual feedback during navigation tasks (Fig. 9.69). The stacks consist of 40 dielectric layers, each 40 µm in thickness, and are supplied with a maximum voltage of 1 kV in a frequency range from 1.5 to 1 kHz. The mouse contains all the required driving electronics and can be customized by a software configuration tool.

The design of a Braille display with diaphragm-actuators according to Heydt [91] demonstrates the distinct properties of this variant. The increase of the elastomer surface results in a notable displacement of a pin out of the device's contact area. However, a mechanical prestrain is necessary to provide a force; this can be generated either by a spring or by air pressure below the actuator. Figure 9.70, right, gives a schematic sketch of a single dot pretensioned by a spring with a diameter of 1.6 mm. At an operating voltage of 5.68 kV the actuator displaces by 450 µm in idle mode.

Carpi combines the principle of the diaphragm actuator with a fluid-based hydrostatic transmission [92]. The result is a wearable tactile display intended to provide feedback during electronic navigation in virtual environments. The actuators are based on an incompressible fluid that hydrostatically couples a dielectric elastomer membrane to a passive membrane interfaced to the user's finger. Transmitting the actuation from the active membrane to the finger without any direct contact provides suitable electrical safety. However, the actuator is driven with comparatively high voltages of up to 4 kV.

**Fig. 9.70** left: Actuator row with stack actuators [85], used with permission; right: use of diaphragm actuators [91], © 2003 John Wiley & Sons Inc, all rights reserved

## *9.5.6 Electro-Rheological Fluids*

Fluids whose rheological properties (especially the viscosity) can be influenced by an electrical field varying in direction and strength are called → Electro-Rheological Fluid (ERF). Consequently, ERF are classified as non-Newtonian fluids, as they have a variable viscosity at constant temperature. The electro-rheological effect was first observed by Willis Winslow in 1947 on a suspension of cornstarch and oil.

Electro-rheological fluids contain polarizable particles acting as dipoles, which are dispersed in a non-conducting suspension. These particles align in an applied electrical field, and an interaction between particles and free charge carriers occurs. In this process, chain-like microstructures are built between the electrodes [93–95]. However, this does not seem to be the only effect responsible for the viscosity change, as a significant viscosity increase remained even when the microstructures [96] were destroyed. The exact analysis of the mechanism responsible for this effect is a subject of current research.

The viscosity of the fluid changes depending on the strength of the applied electrical field. With an electric field of 1–10 kV/mm the viscosity may change by up to a factor of 1000 compared to the field-free state. This enormous change corresponds to the viscosity difference between water and honey. A big advantage of this method lies in the dynamics of the viscosity change: it is reversible and can be switched within one millisecond. Therefore, electro-rheological fluids are suitable for dynamic applications, too.

If large field strengths are assumed, the ERF can be modeled as a Bingham fluid. It has a threshold for linear flow characteristics: only above a minimum shear stress τ<sub>F,d</sub> (flow threshold) does the fluid actually start to flow; below this threshold it does not flow. The shear stress τ is calculated according to Eq. (9.106):

$$
\tau = \mu \dot{\gamma} + \tau_{F,d} \tag{9.106}
$$

Here, μ is the dynamic viscosity, γ̇ the shear rate, and τ<sub>F,d</sub> the dynamic flow limit. The latter changes quadratically with the electrical field strength (Eq. (9.107)). The proportionality factor *C<sub>d</sub>* is a constant provided with the material's specifications.

$$
\tau_{F,d} = C_d E^2 \tag{9.107}
$$

For complex calculations modelling the fluid's transition to and from the state of flow, the model is extended to a nonlinear system according to Eq. (9.108) (for *n* = 1 and *k* = μ this equals Eq. (9.106)):

$$
\tau = \tau_{F,d} + k \dot{\gamma}^n \tag{9.108}
$$

This general form describes the shear stress for visco-plastic fluids with a flow limit according to Vitrani [97]. For an analysis of the idle state with shear rate γ̇ = 0, the static flow limit τ<sub>F,s</sub> with τ<sub>F,s</sub> > τ<sub>F,d</sub> is introduced. When the static flow limit is exceeded, the idle fluid is deformed. With the specific material constants *C<sub>s</sub>* and *E<sub>ref</sub>*, Eq. (9.109) can be formulated:

$$
\tau_{F,s} = C_s (E - E_{ref}) \tag{9.109}
$$
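The field-dependent Bingham model of Eqs. (9.106)–(9.109) translates directly into code. The numeric constants used below (*C<sub>d</sub>*, μ, the unit choice of kV/mm for *E*) are hypothetical placeholders, since in practice the material constants come from the fluid's datasheet:

```python
def dynamic_flow_limit(e_field, c_d):
    """Eq. (9.107): dynamic flow limit, quadratic in field strength E."""
    return c_d * e_field**2

def static_flow_limit(e_field, c_s, e_ref):
    """Eq. (9.109): static flow limit, linear in (E - E_ref)."""
    return c_s * (e_field - e_ref)

def shear_stress(gamma_dot, e_field, mu, c_d, k=None, n=1.0):
    """Eqs. (9.106)/(9.108): tau = tau_F,d + k * gamma_dot**n.

    For n = 1 and k = mu this reduces to the Bingham form of Eq. (9.106).
    """
    if k is None:
        k = mu
    return dynamic_flow_limit(e_field, c_d) + k * gamma_dot**n

# Hypothetical fluid: C_d = 100 Pa/(kV/mm)^2, mu = 0.1 Pa s,
# sheared at 100 1/s in a field of 3 kV/mm.
tau = shear_stress(100.0, 3.0, 0.1, 100.0)
```

With these placeholder values the field-induced yield stress (900 Pa) dominates the viscous term (10 Pa), which is exactly the regime exploited by ERF actuators.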

The materials used for the particles are frequently metal oxides, silicon anhydride, polyurethane, and polymers with metallic ions. The diameter of the particles is 1–100 µm, and their volume fraction in the fluid is 30–50%. Oils (such as silicone oil) or specially treated hydrocarbons are typically used as carrier medium. To further improve the viscosity change, nanoscale particles are also added to electro-rheological fluids ("giant electro-rheological effect" [98, 99]). In [100, 101] further mathematical modelling of the dynamic flow behavior of ER fluids is presented.

The central property of ERF, the reversible change of viscosity, is used for force-feedback devices, haptic displays, and artificial muscles and joints. As the change in viscosity mainly changes counter-forces but not shape or direct forces, ERF actuators are counted among the "passive actuators". For the characterization of their performance, the ratio between stimulated and idle state is used. They are built in three principal design variants [102], as described in the following sections.

#### **9.5.6.1 Shear Mode**

The ER fluid is located between two parallel plates, one fixed and one moving relative to it. The only constraint is a fixed inter-plate distance *d*. If a force *F* is applied to the upper plate, it is displaced by a value *x* at a certain velocity *v*. For the configuration shown in Fig. 9.71, the mechanical control ratio λ can be calculated according to Eq. (9.112) as the ratio of the field-dependent flow-stress force (Eq. (9.111)) and the field-independent viscosity term (Eq. (9.110)) [103]. η gives the base viscosity of the ER fluid (in idle state) and τ<sub>y</sub> the flow stress depending on the electrostatic field.

**Fig. 9.71** Using ERF to vary the shear force

$$F\_{\eta} = \frac{\eta vab}{d} \tag{9.110}$$

$$F_{\tau} = \tau_y ab \tag{9.111}$$

$$
\lambda = \frac{F_{\tau}}{F_{\eta}} = \frac{\tau_y d}{\eta v} \tag{9.112}
$$
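Eqs. (9.110)–(9.112) fit in a few lines; the numbers below are arbitrary illustrative values, not a specific fluid or device:

```python
def f_eta(eta, v, a, b, d):
    """Eq. (9.110): field-independent viscous shear force."""
    return eta * v * a * b / d

def f_tau(tau_y, a, b):
    """Eq. (9.111): field-dependent flow-stress force."""
    return tau_y * a * b

def control_ratio_shear(tau_y, d, eta, v):
    """Eq. (9.112): mechanical control ratio lambda = F_tau / F_eta."""
    return tau_y * d / (eta * v)

# Illustrative values: tau_y = 2 kPa, plate 20 x 20 mm, gap d = 1 mm,
# base viscosity 0.1 Pa s, plate sliding at 0.1 m/s.
lam = control_ratio_shear(2e3, 1e-3, 0.1, 0.1)
ratio = f_tau(2e3, 0.02, 0.02) / f_eta(0.1, 0.1, 0.02, 0.02, 1e-3)
```

Note that the plate area *ab* cancels in λ, so the control ratio is set only by the gap, the flow stress, and the shear velocity.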

#### **9.5.6.2 Flow Mode**

The schematic sketch of this configuration is shown in Fig. 9.72. Both fixed plates form a channel through which the fluid flows due to an external pressure difference *p*, resulting in a volume flow *V̇*. With an electric field *E* applied between the plates, the pressure loss along the channel increases and the volume flow is reduced. Analogous to the prior design, a field-independent viscosity-based pressure loss *p*<sub>η</sub> and a field-dependent pressure loss *p*<sub>τ</sub> can be calculated [103]:

$$p_{\eta} = \frac{12\eta \dot{V} a}{d^3 b} \tag{9.113}$$

$$p\_{\tau} = \frac{c\tau\_{y}a}{d} \tag{9.114}$$

The mechanical control ratio equals

$$
\lambda = \frac{p\_\tau}{p\_\eta} = \frac{c\tau\_y d^2 b}{12\eta \dot{V}} \tag{9.115}
$$

With an adequate dimensioning of the fluid channel, the electrical field can increase the flow resistance to such a degree that the flow stops completely when a specific voltage is exceeded. The channel then acts as a valve without any moving mechanical components.
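The valve behavior follows directly from Eqs. (9.113)–(9.115): the field-dependent pressure loss adds to the viscous one, and once it exceeds the available driving pressure, the flow stalls. A sketch with arbitrary illustrative numbers (the channel geometry, fluid data, and geometry factor *c* are not taken from the text):

```python
def p_eta(eta, v_dot, a, d, b):
    """Eq. (9.113): viscous pressure loss along the channel."""
    return 12.0 * eta * v_dot * a / (d**3 * b)

def p_tau(c, tau_y, a, d):
    """Eq. (9.114): field-dependent pressure loss."""
    return c * tau_y * a / d

def control_ratio_flow(c, tau_y, d, b, eta, v_dot):
    """Eq. (9.115): lambda = p_tau / p_eta."""
    return c * tau_y * d**2 * b / (12.0 * eta * v_dot)

# Illustrative channel: a = 50 mm, b = 20 mm, d = 1 mm, c = 2,
# eta = 0.1 Pa s, V_dot = 1e-6 m^3/s, field-induced tau_y = 2 kPa.
lam = control_ratio_flow(2.0, 2e3, 1e-3, 0.02, 0.1, 1e-6)
ratio = p_tau(2.0, 2e3, 0.05, 1e-3) / p_eta(0.1, 1e-6, 0.05, 1e-3, 0.02)
```

A control ratio well above one, as in this example, means the field-dependent term dominates; if the supply pressure is below p<sub>τ</sub>, the channel blocks like a closed valve.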

**Fig. 9.72** Varying the flow channel's resistivity with ERF-actuators

**Fig. 9.73** Varying the acoustic impedance with ERF-actuators under external forces

#### **9.5.6.3 Squeeze Mode**

A design to generate pressure is schematically sketched in Fig. 9.73. In contrast to the variants shown before, the distance between the two plates is now subject to change. If a force acts on the upper plate, it moves downwards and the fluid is pressed outwards. A plate distance *d*<sub>0</sub> is assumed at the beginning, and the plate moves downwards with a relative velocity *v*. The velocity-dependent viscosity force *F*<sub>η</sub> and the field-dependent stress term *F*<sub>τ</sub> [104] are calculated according to:

$$F_{\eta} = \frac{3\pi\eta v r^4}{2(d_0 - z)^3} \tag{9.116}$$

$$F_{\tau} = \frac{4\pi\tau_y r^3}{3(d_0 - z)} \tag{9.117}$$

which gives the mechanical control ratio:

$$
\lambda = \frac{8\tau_y (d_0 - z)^2}{9\eta v r} \tag{9.118}
$$

With pressure (force on the upper plate) the fluid is pressed out of the gap. In this configuration the force-displacement characteristic is strongly influenced by the electrical field strength. An analysis of the dynamic behavior of such an actuator is described in [105].
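As a consistency check, the λ of Eq. (9.118) is exactly the ratio F<sub>τ</sub>/F<sub>η</sub> of Eqs. (9.117) and (9.116). The numeric values below are illustrative only:

```python
from math import pi

def f_eta_squeeze(eta, v, r, gap):
    """Eq. (9.116): viscous squeeze-film force; gap = d_0 - z."""
    return 3.0 * pi * eta * v * r**4 / (2.0 * gap**3)

def f_tau_squeeze(tau_y, r, gap):
    """Eq. (9.117): field-dependent stress term."""
    return 4.0 * pi * tau_y * r**3 / (3.0 * gap)

def control_ratio_squeeze(tau_y, gap, eta, v, r):
    """Eq. (9.118): lambda = 8 tau_y gap^2 / (9 eta v r)."""
    return 8.0 * tau_y * gap**2 / (9.0 * eta * v * r)

# Illustrative: plate radius r = 10 mm, gap = 1 mm, eta = 0.1 Pa s,
# closing velocity v = 1 mm/s, field-induced tau_y = 2 kPa.
lam = control_ratio_squeeze(2e3, 1e-3, 0.1, 1e-3, 0.01)
ratio = f_tau_squeeze(2e3, 0.01, 1e-3) / f_eta_squeeze(0.1, 1e-3, 0.01, 1e-3)
```

The strong gap dependence (λ ∝ (d₀ − z)², F<sub>η</sub> ∝ 1/gap³) explains why the force-displacement characteristic of a squeeze-mode element changes so markedly during the stroke.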

#### **9.5.6.4 Designing ERF-Actuators**

The maximum force *F*<sub>τ</sub> and the necessary mechanical power *P<sub>mech</sub>* are the input values for the design of ERF actuators from the perspective of an application engineer. Equations (9.110)–(9.118) can be combined to calculate the volume necessary to provide a certain power in all three actuator configurations.

$$V = k \frac{\eta}{\tau\_\text{y}^2} \lambda P\_{mech} \tag{9.119}$$

Consequently, the volume is defined by the mechanical control ratio, the fluid-specific values η and τ<sub>y</sub>, as well as a constant *k* dependent on the actual configuration. The electrical energy *W<sub>el</sub>* necessary to generate the electrostatic field of the (volume-dependent) actuator is calculated according to Eq. (9.120).

$$W\_{el} = V(\frac{1}{2}\varepsilon\_0 \varepsilon\_r E^2) \tag{9.120}$$
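Eqs. (9.119) and (9.120) give a quick sizing recipe. The configuration constant *k* and all numeric values below are hypothetical placeholders, not data from the text:

```python
EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def actuator_volume(k, eta, tau_y, lam, p_mech):
    """Eq. (9.119): fluid volume needed to deliver mechanical power P_mech."""
    return k * (eta / tau_y**2) * lam * p_mech

def field_energy(volume, eps_r, e_field):
    """Eq. (9.120): electrical field energy stored in the actuator volume."""
    return volume * 0.5 * EPS_0 * eps_r * e_field**2

# Hypothetical ERF actuator: k = 2, eta = 0.1 Pa s, tau_y = 3 kPa,
# lambda = 10, P_mech = 1 W, eps_r = 5, E = 3 kV/mm = 3e6 V/m.
v_fluid = actuator_volume(2.0, 0.1, 3e3, 10.0, 1.0)
w_el = field_energy(v_fluid, 5.0, 3e6)
```

For these assumptions the required fluid volume lands in the sub-cm³ range, illustrating why the τ<sub>y</sub>² term in the denominator makes the achievable flow stress the dominant design parameter.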

#### **9.5.6.5 Comparison to Magneto-Rheological Fluids**

→ Magneto-Rheological Fluid (MRF) are very similar to electro-rheological fluids; however, their physical properties are influenced by magnetic fields. All calculations shown before are applicable to MRF, too. Looking at the volume necessary for an actuator according to Eq. (9.119), and considering the viscosities of electro-rheological and magneto-rheological fluids to be comparable, a volume ratio proportional to the reciprocal ratio of the squared fluid stresses results (Eq. (9.121)):

$$\frac{V_{ERF}}{V_{MRF}} = \frac{\tau_{MRF}^2}{\tau_{ERF}^2} \tag{9.121}$$

In a rough but good approximation, the flow stress of a magneto-rheological fluid is one order of magnitude larger than that of an ERF, resulting in an approximately 100 times smaller volume of an MRF actuator compared to an ERF one. However, a comparison between both fluids going beyond the pure volume analysis for similar output power is hard: an ERF requires high voltages at relatively small currents, and the main power loss is caused by leakage currents through the medium (ERF) itself. With MRF actuators, smaller electrical voltages at very high currents become necessary to generate an adequate magnetic field. The energy for an MRF actuator is calculated according to Eq. (9.122) with the magnetic flux density *B* and the magnetic field strength *H*.

$$W_{el,MRF} = V_{MRF}(\frac{1}{2}BH) \tag{9.122}$$

The ratio between the energies for both fluids is calculated according to Eq. (9.123)

**Fig. 9.74** Schematic setup of a tactile actuator based on ER fluids [109] © Elsevier, all rights reserved

$$\frac{W\_{el,ERF}}{W\_{el,MRF}} = \frac{V\_{ERF}}{V\_{MRF}} \frac{\varepsilon\_0 \varepsilon\_r E^2}{BH} \tag{9.123}$$

With typical values for all parameters, the necessary electrical energy for actuator control is comparable for both fluids. A good overview of the design of actuators for both types of fluids is given in [106].
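The comparison of Eqs. (9.121)–(9.123) is easy to tabulate. The field and flux values below are typical order-of-magnitude assumptions, not measured data:

```python
EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def volume_ratio(tau_erf, tau_mrf):
    """Eq. (9.121): V_ERF / V_MRF = (tau_MRF / tau_ERF)^2."""
    return (tau_mrf / tau_erf)**2

def energy_ratio(tau_erf, tau_mrf, eps_r, e_field, b_flux, h_field):
    """Eq. (9.123): W_el,ERF / W_el,MRF."""
    return (volume_ratio(tau_erf, tau_mrf)
            * EPS_0 * eps_r * e_field**2 / (b_flux * h_field))

# Flow stress of the MRF assumed one order of magnitude above the ERF:
v_ratio = volume_ratio(3e3, 3e4)   # ERF actuator ~100x larger in volume

# Assumed operating points: E = 3 kV/mm, eps_r = 5, B = 1 T, H = 2e5 A/m.
w_ratio = energy_ratio(3e3, 3e4, 5.0, 3e6, 1.0, 2e5)
```

With these assumptions the energy ratio lands near unity (within an order of magnitude), consistent with the statement above that the control energies are comparable.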

#### **9.5.6.6 Summary and Examples**

Electro-rheological fluids are also called partly-active actuators, as they do not transform electrical values into a direct movement but change their properties due to the electrical energy provided. This change of properties covers a wide range. Naturally, their applications in haptics range from small tactile displays to larger haptic systems.

#### **Tactile Systems**

The first application of ERF as a tactile sensor in an artificial robot hand was made in 1989 by Kenaley [107]. Starting from this work, several ideas developed to use ERF in tactile arrays for improving systems for virtual reality applications. Several tactile displays were built, among them a 5 × 5 matrix by Taylor [108] and another one by Böse [109]. Figure 9.74 shows the schematic design of such a tactile element. A piston is pressed into an ERF-filled chamber by the user; varying counter-forces are generated depending on the actuation state of the ERF. Elastic foam connected to the piston acts as a spring moving it back to its resting position. With an electric field of 3 V/µm a force of 3.3 N can be achieved at a displacement of 30 mm. Switching of the electrical voltages is realized by light-emitting diodes and corresponding receivers (GaAs elements) on the backplane.

**Fig. 9.75** Haptic joystick based on pneumatic actuators and a MRF-brake as presented in [112] © Elsevier, all rights reserved

#### **Haptic Operating Controls**

Another obvious application of ERF in haptic systems is their use as a "variable brake". This is supported by the fact that typical applications besides haptic systems are variable brakes and bearings (e.g. adaptive dampers). There are several designs in which a rotary knob moves a spinning disk within an ERF or MRF, generating varying counter-torques, as shown in [110]. In this case, the measurement of the rotary angle is done with a potentiometer. Depending on the rotary angle, the intended counter-force or counter-torque is generated. With a mature system, the user can perceive a "latching" of the rotary knob, and the latching depth itself can be varied over a wide range. Via the varying friction, hard stops can be simulated, too, as well as sticking and, of course, free rotation.

An extension of the one-dimensional system is presented in [111]. Two systems based on ERF are coupled to a joystick with two DoF, so that a counter-force can be generated in each movement direction of the joystick. As ERF can generate higher torques with less energy than a normal electrical drive, they are especially suitable for mobile applications such as in cars.

Senkal et al. presented a combination of an MRF brake and pneumatic actuators for a 2D joystick as shown in Fig. 9.75. This hybrid concept uses pneumatic actuators because of the high energy density and the MR brake to increase the fidelity of rigid objects [112]. Further realizations of MRF operating controls can be found in [113].

#### **Force-Feedback Glove**

A force-feedback glove was designed as a component of a surgery simulator [114]. Surgical interventions shall be trained with the aid of haptic feedback. The system MEMICO ("Remote Mechanical Mirroring using Controlled stiffness and Actuators") shall enable a surgeon to perform the treatment with a robot in telemanipulation while the haptic perception is retained. ERF actuators are used at both ends: on the side of the end-effector, and for the haptic feedback to the user. The adjustable elasticity is based on the same principle as with the tactile systems. For generating forces, a force source is necessary; a new ECFS actuator ("Electronic Controlled Force and Stiffness") is used for this application. The schematic design

is shown in Fig. 9.76. It is an actuator according to the inchworm principle, wherein both brakes are realized by the surrounding ER fluid. The driving component for the forward and backward movement is realized by two electromagnets. Both actuators are assembled within a haptic exoskeleton. They are mounted on the rim of a glove to conserve the mobility of the hand. With the actuators between all finger joints, arbitrary forces and varying elasticities can be simulated independently. The ECFS actuators are operated at voltages of 2 kV and generate forces of up to 50 N.

## **9.6 Special Designs of Haptic Actuators**

#### Thorsten A. Kern and Christian Hatzfeld

The actuation principles discussed so far are the most common approaches to the actuation of haptic devices. Besides these principles, there are numerous research projects, singular assemblies, and special classes of devices. Knowledge of these designs is an enrichment for any engineer, yet it is impossible to completely cover the variety of all haptic designs in a single book. This section nevertheless intends to give a cross-section of alternative, quaint, and unconventional systems for generating kinaesthetic and tactile impressions. This cross-section is based on the authors' subjective observations and knowledge and does not claim to be exhaustive. The discussed systems have been selected as the examples best suited to cover one special class of systems and actuators each. They are neither the first systems of their kind nor necessarily the best ones. They are thought to be crystallization points for further research if specific requirements call for special solutions. The systems shown here are meant to be an inspiration and an encouragement not to discard creative engineering approaches to the generation of haptic impressions too early in the design process.

#### 9 Actuator Design 409

**Fig. 9.77 a** Desktop-version of the Spidar with ball-like interaction handle by Shoichi Hasegawa, used with permission, **b** room-size version INCA 6D with 3D visualization environment © *Haption*, used with permission

## *9.6.1 Haptic-Kinaesthetic Devices*

Haptic-kinaesthetic devices of this category excel primarily due to their extraordinary kinematics, not due to very special actuation principles. Nevertheless, every engineer is encouraged to be aware of the examples of this device class and to let this knowledge influence his or her own work.

#### **9.6.1.1 Rope-Based Systems**

With rope-based systems, actuators and the point of interaction are connected by ropes, i.e. mechanical elements that can only convey pulling forces. They are especially suited for lightweight systems with large working spaces, for example the simulation of assembly tasks, rehabilitation, and training. Von Zitzewitz describes the use of rope-based systems for sport simulation and training (tennis and rowing) as well as an experimental environment to investigate vestibular stimulation in sleeping subjects [115].

Another system, the Spidar (Fig. 9.77), is based on the work of Prof. Sato and has frequently been used in research projects [116, 117] as well as in commercial systems. It is composed of an interaction handle, usually a ball, held by eight strings. Each string is operated by an actuator, which is frequently (but not obligatorily) mounted in a corner of a rectangular volume. The drives generate pulling forces on the strings, enabling the generation of forces and torques in six DoF on the handle. Typically, the actuators used are electronically commutated electrodynamic units. The Spidar system can be scaled to almost any size, ranging from table-top devices to room-wide installations. It convinces by its small number of mechanical components and its very small friction. As strings can provide pulling forces only, it is worth noting that just two actuators in addition to the six DoF (eight strings in total) are sufficient to compensate this disadvantage.
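The reason string-driven devices need more actuators than DoF is that every string tension must stay non-negative. The one-DoF sketch below (two opposing strings rendering one bidirectional force; function and parameter names are hypothetical) illustrates the principle:

```python
def string_tensions(f_desired, pretension=1.0):
    """Split a bidirectional force between two opposing, pull-only strings.

    Each tension stays >= pretension > 0, and the net force on the handle
    is their difference. One DoF thus needs two actuators; analogously,
    the six-DoF Spidar uses eight strings.
    """
    t_pos = pretension + max(f_desired, 0.0)   # string pulling in +x
    t_neg = pretension + max(-f_desired, 0.0)  # string pulling in -x
    return t_pos, t_neg

t1, t2 = string_tensions(-3.0)
net = t1 - t2   # reproduces the desired force with both tensions positive
```

Raising the pretension stiffens the handle suspension without changing the net force, which is one of the tuning knobs of such designs.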

**Fig. 9.78** SCS basic principles: **a** driving unit, and **b** particular patented mounting [118]. Figure courtesy of *CEA LIST*

#### **9.6.1.2 Screw-and-Cable-System**

In several projects relating to medical rehabilitation and master-slave teleoperation, the CEA-LIST Interactive Robotics Unit uses its Screw-and-Cable System (SCS), shown in a first prototype in 2001 [118–121]. In this prototype, six screw-cable actuators are used to motorize a master arm in a teleoperation system, enabling high-fidelity force feedback. The master arm has since been commercialized by *Haption S.A.* under the name Virtuose 6D 4040 [122]. The patented SCS basic principle can be seen in Fig. 9.78.

A rotative joint is driven by a standard push-pull cable. On one side, the cable is driven by a ball-screw which translates directly in its nut (the screw is locked in rotation thanks to rollers moving in slots). The nut rotates in a fixed bearing and is driven by the motor via a belt transmission [121].

Using the SCS allows driving units for joint torque control with significantly reduced mass and volume. The low friction threshold and high backdrivability enable true linear torque control without a sensor, avoiding drift and calibration procedures. The low inertia of the structure leads to a high transparency. In the upper limb exoskeleton ABLE 4D the SCS is embedded in the moving parts of the arm, resulting in reduced cable


**Fig. 9.79** ABLE arm module: **a** optimized architecture to be integrated, **b** structure with its 2 integrated actuators [118]. Figure courtesy of *CEA LIST*

**Fig. 9.80** Magnetorheological actuation principle for full-hand interaction based on a 4 × 4 pattern [124] © Springer Nature, all rights reserved

length and simplified routing (Fig. 9.79). The two SCS integrated in the arm module perform like artificial electrical muscles. Further information about the design of such systems can be found in [123].

#### **9.6.1.3 Magnetorheological Fluids as Three-Dimensional Haptic Display**

The wish to generate an artificial haptic impression in a volume for free interaction is one of the major motivations for many developments. The rheological systems shown in Sect. 9.5 provide one option to generate such an effect. For several years the team of Bicchi has been working on the generation of spatially resolved areas of differing viscosity in a volume (Fig. 9.80) to generate force feedback on an exploring hand. Recently, the results were summarized in [124]. The optimization of such actuators largely depends on the control of the rheological fluid [125]. The psychophysical experiments performed to date show that the identification of simple geometrical structures can be achieved on the basis of a 4 × 4 pattern inside the rheological volume.

**Fig. 9.81** Principle of eddy currents damping a rotating disc (**a**) and realization as a haptic device (**b**) by [127] © Springer Nature, all rights reserved

#### **9.6.1.4 Self-induction and Eddy Currents as Damping**

An active haptic device is designed to generate forces and torques in any direction. The concept of "active" actuation covers the whole spectrum of mechanical interaction objects (e.g. masses, springs, dampers, other force sources like muscles, and moving objects). Nevertheless, only a small portion of haptic interaction actually is "active". This has the side effect (in control engineering approaches) that active systems have to be continuously monitored for passivity. An alternative approach to the design of haptic actuators is to choose technical solutions able to dissipate mechanical energy. A frictional brake would be such a device, but its properties are strongly nonlinear and hard to control; alternatives are therefore highly interesting. The team of Colgate showed in [126] how to increase the impedance of an electronically commutated electrodynamic actuator by bypassing two windings with a variable resistor. The mutual induction enabled by this bypass damped the motor significantly. In [127] the team of Hayward went even further by implementing an eddy current brake in a pantograph kinematics (Fig. 9.81). This brake is a pure damping element with almost linear properties. By this method, a controlled dynamic damping up to 250 Hz was achieved.

#### **9.6.1.5 Serial Coupled Actuators**

Serial coupled actuators include an additional mechanical coupling element between the actuator and the driven element of the system. In the majority of cases, this is an elastic element that was originally inserted to ease the force control of actuators interacting with stiff environments [128]. These so-called serial-elastic actuators allow the replacement of direct force control by position control of both sides of the series elasticity and are used in applications like rehabilitation or man-machine interaction [129]. An

**Fig. 9.82** Example for the realization of a serial elastic actuator for use in an active knee orthosis [129]. The bevel gear is needed for better integration of the actuator near the knee of the wearer. Picture courtesy of Roman Müller, Institute of Electromechanical Design, Technische Universität Darmstadt, used with permission

**Fig. 9.83** Setup with two serial actuators coupled with an eddy-current clutch as presented in [130] © SAGE Publications, all rights reserved

example is shown in Fig. 9.82. For haptic applications, this configuration is especially interesting for the display of null forces and torques, i.e. free-space movements.

Another application of serial coupled actuators was introduced by *Mohand-Ousaid et al.* To increase dynamics, lower the impact of inertia, and increase transparency, a serial arrangement of two actuators connected by a viscous coupler based on eddy currents was presented in [130] and is shown in Fig. 9.83. Using two motors extends the range of displayable forces and torques, and the low inertia of the smaller motor allows higher dynamics to be achieved. The viscous clutch couples slip velocity to transmitted torque, which is used in the control of the device. With this approach, inertia is effectively decoupled from the delivered torque.
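The core idea of a series elastic actuator, replacing direct force control by position control across a known elasticity, fits in a few lines. The spring constant and setpoints below are arbitrary illustrative values, not parameters of the devices cited above:

```python
def sea_force(k_spring, x_motor, x_load):
    """Force transmitted through the series elasticity (Hooke's law)."""
    return k_spring * (x_motor - x_load)

def motor_position_for_force(k_spring, f_desired, x_load):
    """Motor-side position setpoint that renders f_desired across the spring."""
    return x_load + f_desired / k_spring

# Illustrative: 5 N/mm series spring, load side held at 10 mm,
# desired output force 2 N.
k = 5e3                                     # spring constant, N/m
x_cmd = motor_position_for_force(k, 2.0, 0.010)
f_out = sea_force(k, x_cmd, 0.010)          # force read from spring deflection
```

Because the output force is measured as a spring deflection, no dedicated force sensor is required; the trade-off is a softer interface, which is precisely why the configuration excels at rendering free-space (null-force) movements.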

**Fig. 9.84** MagLev device **a** inner structure, **b** use of two devices in bimanual interaction. Images courtesy of *Butterfly Haptics, LLC*, Pittsburgh, PA, USA, used with permission

## **9.6.1.6 MagLev—Butterfly Haptics**

In the 1990s, the team of Hollis developed a haptic device [131] based on the electrodynamic actuation principle (Fig. 9.84). The device has recently been sold commercially by *Butterfly Haptics*. It is applied, e.g., in ongoing research projects on the psychophysical analysis of texture perception. Six flat coils are mounted in a hemisphere, each with its own magnetic circuit. The combination of the Lorentz forces of all coils allows an actuation of the hemisphere in three translational and three rotational directions. Via three optical sensors, each measuring one translation and one rotation, the total movement of the sphere is acquired. Besides the actuation within its workspace, the control additionally compensates gravity with the aid of all six actuators. This function realizes a bearing of the hemisphere by Lorentz forces only. The air gap of the coils allows a translation of 25 mm and a rotation of ±8° in each direction. Resolutions of 2 µm (1 σ) and stiffnesses of up to 50 N/mm can be reached. As a consequence of the small mass of the hemisphere, the electrodynamic actuation principle, and the abandonment of mechanical bearings, forces with a bandwidth of 1 kHz can be generated.

## *9.6.2 Haptic-Tactile Devices*

Haptic-tactile devices of this category are intelligent combinations of well-known actuator principles of haptic systems with either high position resolutions or extraordinary dynamic properties.

**Fig. 9.85** Pneumatic actuated tactile display: sketch of bidigital teletaction (**a**), and realization (**b**) [133], used with permission

#### **9.6.2.1 Pneumatic**

Due to their working principle, pneumatic systems are a smart way to realize flexible high-resolution tactile displays, but they suffer from acoustic noise, compliance, low dynamics, and the requirement of pressurized air. A one-piece pneumatically actuated tactile 5 × 5 matrix molded from silicone rubber is described in [132, 133]. The spacing is 2.5 mm with 1 mm diameter tactile elements. Instead of actuated pins, an array of pressurized chambers without chamber leakage and seal friction is used (Fig. 9.85). Twenty-five solenoid 3-way valves control the pressure in each chamber, resulting in a working frequency of 5 Hz. Instead of closed pressurized chambers, the direct contact between the fingertip and the compressed air is used for tactile stimulation in [134]; the interface to the skin consists of channels, each 2 mm in diameter. A similar display is shown in [135]: using negative air pressure, the tactile stimulus is generated by suction through 19 channels of 2.5 mm diameter at 5 mm intervals.

#### **9.6.2.2 Thermo-Pneumatic**

A classic problem of tactile pin arrays is the high density of stimulator points to be achieved: the space below each pin for control and reconfiguration of the pin's position is notoriously limited. Consequently, a large number of different designs has been tested to date. In [136] a thermo-pneumatic system is introduced (Fig. 9.86), based on tubes filled with a fluid (methyl chloride) with a low boiling point. The system allows a reconfiguration of the pins within 2 s. However, it has high power requirements, although the individual elements are very cheap.

**Fig. 9.86** Thermo-pneumatic actuation principle in a schematic sketch (**a**), and as actual realization (**b**) [136] © Springer Nature, all rights reserved

## **9.6.2.3 Shape Memory Materials**

Materials with a shape memory property are able to remember their initial shape after deformation: when the material is heated, its internal structure starts to change and the material returns to its pre-deformed state. Due to this material-intrinsic actuating effect, high-resolution tactile displays are achievable. The low driving frequencies caused by thermal inertia and the required heating and/or cooling systems are the drawbacks of this technology.

#### **Shape Memory Alloys**

In [137] a pin array with 64 elements is realized, covering an area of 20 × 40 mm². The display consists of 8 modules, each containing eight dots (Fig. 9.87). Each element comprises a 120 mm long NiTi SMA wire pre-tensioned by a spring. When an electrical current flows through the wire, it heats up and starts to shorten, resulting in a contraction of up to 5 mm. Driving frequencies of up to a few Hz can be reached when a fan is used to cool down the SMA wires.

## **Bistable Electroactive Polymers**

Bistable Electroactive Polymers (BSEP) combine the large-strain actuation of dielectric elastomers with shape memory properties. BSEP provide bistable deformation in a rigid structure. These polymers have a glass transition temperature *T<sub>g</sub>* slightly above ambient temperature; heated above *T<sub>g</sub>*, the material can be actuated like a conventional dielectric elastomer.

Using a chemically crosslinked poly(tert-butyl acrylate) (PTBA) as BSEP, a tactile display is presented in [138]. The display contains a layer of PTBA diaphragm actuators and an incorporated heater element array. Figure 9.88 shows the fabricated refreshable Braille display device with the size of a smartphone screen.


**Fig. 9.87** Sketch of tactile display using SMA: Top view (**a**), and side view of one module (**b**) [137] © The Institution of Engineering and Technology, all rights reserved

**Fig. 9.88** Bistable BSEP Braille display: actuator array (**a**) [138], and zoom of Braille dots in "OFF" and "ON" state (**b**) [139] © (2012) Society of Photo-Optical Instrumentation Engineers (SPIE), all rights reserved

#### **9.6.2.4 Texture Actuators**

Besides applications in Braille-related tasks, the design of tactile displays is relevant for texture perception, too. Instead of vibrotactile stimulation of a user's finger, the modification of the friction between a sliding finger and a touch screen surface is a promising new direction in touch screen haptics. These displays are mostly based on two basic technologies. In displays based on the electrovibration effect, a periodic electrical signal is injected into a conductive electrode coated with a thin dielectric layer. The result is an alternating electrostatic force that periodically attracts and releases the finger, producing friction-like rubbery sensations as mentioned in Sect. 9.5.2.4.

In friction displays based on the squeeze film effect a thin cushion of air under the touching finger is created by a layer which is placed on top of the screen and is vibrated at an ultrasonic frequency. The modulation of the frequency and intensity

of the vibrations allows the finger touching the surface to be put into different degrees of levitation, thus actually affecting the frictional coefficient between the surface and the sliding finger [140].

In 2007, Winfield impressively demonstrated a simple tactile texture display called TPaD based on the squeeze film effect. The actuating element is a piezoelectric bending disk driven in resonance mode [141]. With the aid of optical tracking right above the disk, and a corresponding modulation of the control signal, spatially resolved perceivable textures were generated. The 25 mm diameter piezoelectric disk is bonded to a 25 mm diameter glass disk and supported by an annular mount (Fig. 9.89).

A similar display with an increased surface area of 57 × 76 mm<sup>2</sup> is shown in [86]. The vibrations are created by piezoelectric actuators bonded along one side of a glass plate placed on top of an LCD screen.

#### **9.6.2.5 Flexural Waves**

A transparent display providing localized tactile stimuli is presented in [142]. The working principle is based on the concept of computational time reversal and is able to stimulate one or several regions, and hence several fingers, independently. According to the wave propagation equation, if the initial condition is an impulse force, the direct solution of a given propagation problem is a diverging wave front, while the time-reversed solution is a converging one. Consequently, it is possible to generate peaks of deflection localized in space and in time using constructive interference caused by multiple stimulating actuators. The quality of the focusing process increases with the number of transducers and has to be optimized. Depending on the requirements, the noise occurring at the passive areas can be reduced below the tactile perception threshold.
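A minimal numeric sketch of the time-reversal idea: if each actuator emits a short pulse advanced by its travel time to the chosen focus, the pulses interfere constructively only there. Actuator layout, wave speed and pulse width below are invented for illustration, and the dispersion of real flexural waves is ignored:

```python
import numpy as np

c = 100.0                          # assumed (non-dispersive) wave speed, m/s
acts = np.linspace(0.0, 0.3, 8)    # 8 actuator positions along one edge, m
focus = 0.17                       # point to be stimulated, m

def deflection(x, t, sigma=1e-4):
    """Superposed deflection at point x and time t. Each actuator emits a
    Gaussian pulse advanced by its distance to the focus (time reversal),
    so all wavefronts coincide at the focus at t = 0."""
    pre_delay = np.abs(acts - focus) / c       # travel time actuator -> focus
    arrival = np.abs(acts - x) / c - pre_delay # arrival time at x, rel. to 0
    return float(np.sum(np.exp(-((t - arrival) ** 2) / (2.0 * sigma ** 2))))

peak_at_focus = deflection(focus, 0.0)   # all 8 pulses add up coherently
peak_off_focus = deflection(0.05, 0.0)   # pulses arrive spread out in time
```

The ratio between the two peaks grows with the number of transducers, which is the "focusing quality" mentioned above.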

Based on its patented bending wave technology, the company Redux Laboratories offers electromagnetic transducers for medium and large applications as well as piezo exciters for small form factor applications such as mobile devices (Fig. 9.90).

**Fig. 9.90** *Redux* transducers for bending wave technology: Moving coil exciters (**a**), and multilayered piezo exciters (**b**)

**Fig. 9.91** Tactile display based on ultrasonic sound pressure as array of senders for a transmission in the air [144] © Springer Nature, all rights reserved

#### **9.6.2.6 Volume-Ultrasonic Actuator**

Iwamoto built tactile displays which are made of piezoelectric actuators and are actuated in the ultrasonic frequency range. They use sound pressure as a force transmitter: the underlying principle is to generate a displacement of the skin, and a corresponding haptic perception, by focused sound pressure. Whereas in the first realization an ultrasonic array had been used to generate tactile dots in a fluid [143], later developments used the air for energy transmission [144]. The pressures generated by the designs (Fig. 9.91) provide only a weak tactile impression. But especially the air-based principle works without any mechanical contact and could therefore become relevant for completely new operation concepts combined with gesture recognition.

The concept developed further, especially as applications in public areas came into focus, where operation of machines is preferred without direct contact to surfaces touched by others. Scientifically, the group around Shinoda explores the limits in size and sound pressure of such devices [145]. On a commercial level the concept was taken up and extended by Ultraleap, offering a wide variety of products and solutions for multiple industries. A key challenge for successful application lies in the fact that ultrasonic interfaces require unique, new tactile effects for operation [146], but at the same time they offer a holographic-like volumetric experience [147].

**Fig. 9.92** Ungrounded haptic display to convey pulling impressions as shown in [149] © Springer Nature, all rights reserved

#### **9.6.2.7 Ungrounded Haptic Displays**

In case of interaction with large virtual worlds, it is frequently necessary to design devices which are worn on the body, i.e. that do not exhibit a fixed ground connection. An interesting solution has been shown in [148], generating a tactile sensation with belts at the palm and at each finger. The underlying principle is based on two actuators for each belt, generating a shear force on the skin when operated in the same direction, and a normal force when operated in opposite directions. This makes it possible to provide tactile effects when grasping or touching objects in a virtual world, but without the corresponding kinaesthetic effects.

Other realizations of ungrounded devices include the use of gyro effects and the variation of angular momentum, as well as designs incorporating non-linear perception properties of the human user, as presented in [149]. The device shown in Fig. 9.92 is based on the display of periodic, steep inertial forces generated by a spring-mass system and an electrodynamic voice coil actuator.

#### **9.6.2.8 Electro-Tactile**

As haptic receptors can be stimulated electrically, it is not far-fetched to design haptic devices able to provide low currents to the tactile sense organs. The design of such devices can be traced back to the 1970s. One realization is presented in [150] (Fig. 9.93). Electro-tactile displays do work, no doubt; however, they have the disadvantage of also stimulating noci-receptors for pain sensation beside the mechano-receptors. Additionally, the electrical conductivity between display and skin is subject to major variations: these arise from inter-person differences in skin thickness, but also from time-dependent electro-chemical processes between sweat and electrodes. The achievable tactile patterns and the ability to distinguish tactile patterns are subject of current research.

**Fig. 9.93** Electro-tactile display worn on the forehead: Electrodes (**a**), and edge recognition and signal conditioning principle (**b**) [150] © Springer Nature, all rights reserved

## **Recommended Background Reading**


*Analysis of several performance indices for different actuation principles.*

## **References**


Eng 72(2):23–34. ISSN: 1454-2358. http://www.scientificbulletin.upb.ro/rev\_docs\_arhiva/ full9662.pdf


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 10 Sensor Design**

#### **Jacqueline Gölz and Christian Hatzfeld**

**Abstract** Multiple sensors are applied in haptic device designs. Even if these devices are not closed-loop controlled in a narrow sense of force or torque generation, sensors are used to detect movement ranges and limits, or the presence of a user and the type of interaction with an object or human-machine interface (HMI). Almost any type of technical sensor has been applied in the context of haptic devices, and the emerging market of gesture-based user interaction and the integration of haptics for ergonomic reasons extend the range of sensors potentially relevant for haptic devices. However, what exactly is a sensor? Which is the *right one* for your purpose, and is there a systematic way to choose it? To support you in answering these fundamental questions, a classification of sensors is helpful. This chapter starts with a definition and classifications according to measurand and sensing principles. Constraints you will have to focus on are discussed, and selection criteria are deduced. An introduction to technologies and design principles for mechanical sensors serves as an overview for your selection process. Common types of force/torque, position, velocity and acceleration sensors are presented. Furthermore, imaging and temperature sensors are addressed briefly in this chapter.

Christian Hatzfeld deceased before the publication of this book.

J. Gölz (B)
Technische Hochschule Ulm, Ulm, Germany

Fakultät Elektrotechnik und Informationstechnik, Institut für Automatisierungssysteme (IAS), Albert-Einstein-Allee 53, 89081 Ulm, Germany e-mail: jacqueline.goelz@thu.de; j.goelz@hapticdevices.eu

C. Hatzfeld
Technische Universität Darmstadt, Darmstadt, Germany

## **10.1 What is a Sensor?—A Definition**

What is a sensor and why is it crucial for every technical system? Let me give you an example: grasping an object is a very complex task. You need information about position and dimension as well as elasto-mechanic properties, weight and texture of the object to be able to grasp it and to avoid slipping. Your brain processes information about object location (detected with your eyes) and object properties (detected with small cells in your skin, joints and muscles) to plan and realize this task. These organs of perception are nothing more than transducers, linking non-electric parameters with electric pulses containing the information necessary for controlling the task of grasping. The transformation into an electric signal is needed so that the information can be processed by our brain.

In haptic systems, different physical domains interact, too. A transducer (sensor) connecting different physical domains is needed so that information can be processed in electric control units to control system behavior. You can interpret a sensor as a black box providing a certain transmission behaviour (Fig. 10.1). The correlation of electrical signal and associated measurand can be derived from measurement data and the resulting characteristic curves. A mathematical description models the ideal functional dependence of measurand (input signal *x*) and sensor response (output signal *y* = *f* (*x*)). The characteristic curve usually is measured under controlled reference conditions (e.g. constant environmental conditions, defined measuring procedure). Due to imperfections of every sensor (e.g. cross-sensitivity to temperature, noise or drift) and varying environmental conditions, measurement data deviates from the reference characteristics and thus from the true value. These deviations from the ideal transmission behavior are expressed as errors and are usually listed in the data sheets

**Fig. 10.1** Sensor as a black box linking input values *x* (measurands) to electrical output values *y* = *f* (*x*) (measured values). Depending on the analysis of static or dynamic behavior we distinguish a (constant) transfer factor *B*<sup>0</sup> (equivalent to sensitivity *S*) or a frequency-dependent transfer function *B*(ω) displaying the frequency characteristic. Environmental influences like temperature, vibration or humidity are disturbances that influence the transfer characteristics

of the sensor manufacturer. To describe sensor performance, we distinguish between the analysis of:


Based on the analysis of static and dynamic transmission behavior, the characteristics of a sensor are determined and listed, for example, in the manufacturer's datasheet. Your measurement task dictates the scale of these parameters. If you are choosing sensors for your haptic system, you need to take specifications like dimensions, measuring range, sensitivity, resolution, frequency range and thus accuracy into account, regardless of the measurand. Following Regtien [1], important universal sensor specifications are put together (Fig. 10.4):


**Fig. 10.2** The static transfer factor is determined under steady-state and reference conditions by (equidistantly) increasing and decreasing the measurand. *vi* refers to the measured values and *vis* to the nominal characteristic curve. Environmental parameters are tracked and kept constant. Imperfections of the sensor and the measurement setup lead to deviations from the ideal (linear) transfer function (nominal characteristic curve). Besides the transfer factor *B*0, nonlinearity and hysteresis error can be estimated as the main systematic errors
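The estimation of transfer factor and nonlinearity error from recorded calibration data can be sketched in a few lines. The sensor model and its coefficients below are made up for illustration:

```python
import numpy as np

# Simulated calibration run: measurand x (force in N, increased equidistantly)
# against raw output y of a hypothetical sensor with an offset and a small
# quadratic imperfection.
x = np.linspace(0.0, 50.0, 11)
y = 0.003 + 0.02 * x + 1e-5 * x ** 2        # output in V

# Least-squares line = estimate of the nominal characteristic y = B0*x + y0
B0, y0 = np.polyfit(x, y, 1)

# Nonlinearity error: worst-case deviation from the fitted line,
# expressed relative to the full-scale output span.
nonlinearity = np.max(np.abs(y - (B0 * x + y0))) / (y.max() - y.min())
```

In practice the same fit would be applied separately to the increasing and decreasing branches to additionally estimate the hysteresis error.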

a variation of the offset during measurement. This very slow change of the offset is called zero drift and has to be taken into account.


Nonlinearity and hysteresis are systematic errors that can be compensated. Noise and other random deviations can be reduced, e.g. by filtering, but cannot be compensated and limit the resolution of the sensor. Five basic error reduction methods are in use: compensation, feedback, filtering, modulation and correction. Regtien [1] gives a short overview of these five. All methods influence the topology of the internal signal processing; for further reading, [2, 3] are recommended. Until now, we have taken a universal look at sensors as a transmission system with unknown internals, a black box, and described the transmission behavior considering responses *y* = *f* (*x*) to input signals *x*. When it comes to quantification, it is crucial to analyze our measurement chain in detail: the measurand, its input into the measuring system, the physical operating principle and its imperfections, as well as signal processing and sensor electronics.
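Two of the five methods, correction of a known systematic cross-sensitivity and filtering of random noise, can be illustrated as follows. The sensor model, drift coefficient and noise level are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
true_force = 10.0                                  # constant measurand, N
temp = 25.0 + 5.0 * np.linspace(0.0, 1.0, n)       # drifting temperature, deg C

# Raw signal: temperature cross-sensitivity (systematic) plus noise (random)
raw = true_force + 0.05 * (temp - 25.0) + rng.normal(0.0, 0.2, n)

# Correction: subtract the modelled systematic part (needs a temperature sensor)
corrected = raw - 0.05 * (temp - 25.0)

# Filtering: a moving average reduces the random part, but could not have
# removed the systematic bias on its own
filtered = np.convolve(corrected, np.ones(50) / 50.0, mode="valid")

bias_raw = abs(raw.mean() - true_force)        # dominated by the drift term
bias_out = abs(filtered.mean() - true_force)   # close to zero after both steps
```

The example also shows why the order matters: filtering alone would smooth the noise but leave the temperature-induced bias untouched.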

In the following, we want to focus on measurand and sensing principle to perform a helpful classification of sensors. You may ask yourself: *how does a classification of sensors help me in developing haptic systems?*

**Fig. 10.3** Measuring the step response due to a step of the measurand leads us to the frequency-dependent transfer function. We assume a linear and time-invariant system and neglect static and random errors

Let us recap: you learned about haptic perception (Sect. 2.1), made your conscious choice of actuation technology (Chap. 9) and want to implement a control system (Chap. 7). You now understand (Sect. 10.1) what a sensor is. Let us assume you have to pick the appropriate sensor to measure forces up to 50 N at a handle. Prescreening the market, you find dozens of sensors. Which one would you choose? In our book, we narrow down the selection to three different sensors; Table 10.1 lists their key parameters. From a dynamics perspective, the piezoelectric sensor would of course be great; from a building-space consideration, the resistive sensor is fantastic. However,

**Fig. 10.4** The internal structure of a sensor depends on the considered input values, function blocks and thus its signal processing. We distinguish chain structure, parallel structure (open-loop) and closed-loop structure (see also [1] © Elsevier, all rights reserved). The most common one is the chain structure, consisting of several function blocks transforming or converting the measurand. Representatively, the structure of a piezoresistive pressure sensor is presented to show typical function blocks


**Table 10.1** Selection of Force Sensors: piezoelectric load cell 9217A1, *Kistler*, strain gage Micro-Force, *Forsentek*, and piezoresistive load cell TAL220, *HTC-Sensor*

in most cases the piezoresistive sensors are chosen. Why is this? Well, because of the dynamic range they can cover (from static to several hundred Hz) and because of their high accuracy. What does this example tell you? It is important to know more about sensors and their functioning to make an educated decision, because a rash and unfounded sensor choice may impair the overall performance of the haptic system. The following sections introduce typical sensing principles to enable you to understand the pros and cons of each of those sensors for your haptic and tactile application (Table 10.1).

## **10.2 Classification According to Sensing Principles**

About 5,000 physical and chemical effects are known which could be used as sensing principles; about 150 are already in use for sensors [4]. The principles differ in basic parameters like sensitivity, resolution, error rate and dynamics. Classifying can help you to narrow down the multitude of principles according to your measuring task. One way is to cluster these principles into three groups depending on the interaction of measuring object and sensing system:


Quite helpful in terms of assessing dynamic behavior and power consumption is the classification according to whether or not external energy is necessary. Most sensors used in industrial environments are active transducers based on the so-called deflection method. The measurand is converted into an intermediate non-electric quantity like stress, strain or intensity, which serves as the actual input for the sensing element. They are called active because the measurand modulates an external electric power or energy; even if the measurand is static and does not vary, an output signal is generated. Thus, active sensors are suitable for static measurands. Sensors belonging to the group of active transducers are resistive, capacitive, inductive, optical and magnetic ones. Active transducers can be clustered into five groups:


The upper cut-off frequency is influenced by the resonance frequency of the mechanical linking system transforming the measurand into the intermediate quantity. Modelling of frequency characteristics of the linking system is crucial to rate the upper cut-off frequency of the sensor. Miniaturization can shift the resonance frequency and thus enlarge the bandwidth of the sensor. Resonant sensors have the highest resonant frequency of all active transducers.

For high dynamics and low power consumption, passive transducers, especially piezoelectric sensors, are recommended. Besides piezoelectric sensors, electrodynamic and electrostatic sensors are passive transducers. Energy is taken from the interaction process itself, for example from the deformation of piezoelectric material. They are called passive because an output signal is generated only in case of a variation of the measurand. For static measurands, passive transducers are unsuitable. Table 10.2 links sensing principles to common physical measurands.

## **10.3 Classification According to Measurand and Application Field**

As we stated, a sensor performs an exchange of information or energy from one subsystem to another; it is an interface between different physical domains and the electric subsystem. Physical quantities can be classified according to different characteristics, and a comprehensive classification is given, for example, in [1, 7]. We will focus on a classification in association with the following:

**Table 10.2** Mechanical measurands and common sensing principles. *X* marks industrial application, (*X*) major application, – rare application. Beside resistive strain gages, differential pressure sensors and sensors based on Coriolis force are used for flow (mark 1) and level detection (mark 2) in process measuring

**Table 10.3** List of physical quantities according to [7]

• Physical domains: acoustic, chemical, electric, magnetic, mechanical, nuclear radiation, optical, thermal and time. Table 10.3 shows many possible measurands. For each of the mentioned quantities, several sensing principles are known.


**Table 10.4** Summary of relations between measurands in haptic systems


Our field of application limits the number of direct mechanical measurands to only a handful related to user movement and interaction with objects. Force plays a major role in haptic system control, followed by the movement-related (translational and rotational) quantities acceleration, velocity, displacement and position. As temperature is both the major disturbance and an important parameter while interacting with objects, it should be observed, too. In addition, current and voltage sensors can also be useful in haptic systems. Table 10.4 summarizes the measurands and their relationships. To understand where they come from, let us take a closer look at the constraints resulting from our field of application.
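The kinematic relations behind Table 10.4 (v = dx/dt, a = dv/dt) mean that one motion measurand can be estimated from another, at the price of noise amplification when differentiating. A minimal sketch with an assumed 2 Hz, 10 mm sinusoidal motion:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)              # 1 kHz position sampling
x = 0.01 * np.sin(2.0 * np.pi * 2.0 * t)     # position signal, m

v = np.gradient(x, t)                        # velocity estimate, m/s
a = np.gradient(v, t)                        # acceleration estimate, m/s^2

# Analytic peak velocity for comparison: 2*pi * 2 Hz * 0.01 m = 0.126 m/s
peak_v = float(np.max(np.abs(v)))
```

Going the other way, from acceleration to velocity or position, requires numerical integration, which instead accumulates offset and drift errors.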

## **10.4 Constraints in Haptic Systems**

The topology of haptic systems significantly influences our sensor design, and the application of the haptic device itself has an extraordinary relevance. All systems have in common that a user mechanically contacts objects. It has to be clarified which use of the device is intended, e.g. whether it is going to be a telemanipulator for medical purposes or a CAD tool with force feedback. The mechanical properties of the user and, in the case of telemanipulation systems, the mechanical properties of the manipulated objects have to be analyzed for sensor development or selection. Besides constraints resulting from mechanical contacts, interaction and movement have to be tracked. Thus, position, acceleration and velocity (both rotation and translation) of interactions are relevant measurands, too. Directions in space (according to active DoF) and sensor specifications like measuring range, resolution and bandwidth depend on the topology of the haptic device itself and the intended kind of interaction (kinesthetic or tactile). All these factors will be discussed within this section.

## *10.4.1 Topology of the Device*

The application itself determines the topology of the haptic device. Taking control engineering aspects into account, haptic systems can be classified into four types, which are discussed in Chap. 6. In the following, these topologies are analyzed with regard to the measured values:


In case of open-loop control, only the mechanical properties of objects have to be taken into account for sensor design, irrespective of whether the objects are physical or virtual. In haptic simulators like flight simulators, virtual objects are involved; their mechanical properties are often stored in look-up tables and force sensors are dispensable. In telemanipulation systems, the end effector of the haptic system interacts with physical objects, whose mechanical properties have to be detected with suitable force sensors.

Most telemanipulation systems are impedance controlled. In case of closed-loop control, the mechanical impedance of both the user and the manipulated object are considered. When designing closed-loop impedance-controlled systems, force sensors detecting the user force have to be integrated into the device. When designing closed-loop admittance-controlled systems, the output movements of the haptic interface have to be measured, e.g. using a velocity sensor (Chap. 6, Sect. 10.7).

Consequently, the measuring object can be both the user and a real, physical object. Besides its mechanical properties, the modality of the interaction with the haptic system has to be analyzed to identify fundamental sensor requirements like dynamic bandwidth, nominal load and resolution. The main factors influencing the sensor design are the contact situation and the objects' mechanical properties. In the following, they are analyzed by examining mechanical properties and the texture of the objects' surface separately.

## *10.4.2 Contact Situation*

It is necessary to distinguish between the user of the haptic system and the physical object when identifying mechanical properties, due to the different interaction modalities. If the user is the "measuring object", interaction forces have to be measured. Universally valid conclusions concerning amplitude, direction and frequency of the acting force cannot be drawn: the mechanical impedance depends on the manner of grasping the device and on age and gender of the user (Chap. 3). In Sect. 3.1.3 manners of grasping are classified as power-grasps, precision-grasps and touch-grasps. In case of power- and precision-grasps, finger or palm are used as counter bearing, which results in a high absolute value of force of up to 100 N [8, 9] and a stiffer contact.

Additionally, the direction of the force vector has to be taken into account. Depending on the application of the haptic device and the manner of grasping, up to six degrees of freedom result: three force components and sometimes three torques. Neglecting torques between user and device, three components of force have to be measured. If the user is in static contact with the handheld device, measuring the normal force components with respect to the orientation of the contact plane is sufficient. If the user exerts relative movements to the device, shear forces also occur and three components have to be measured.

Considering the frequency dependence of human haptic perception, both static and dynamic signal components have to be considered equally. The lower cut-off frequency of haptic devices tends to quasi-static action at almost zero Hertz, which may happen when a device is held without movement in free space. If the force signal is subject to noise or even the slightest drift, the haptic impression will quickly be disturbed (compare perception thresholds in Sect. 2.1). Manner and pre-load of grasping affect the upper cut-off frequency of the sensor. In case of power- and precision-grasps, the absolute value of force achieves higher values, so the upper cut-off frequency does not need to reach 10,000 Hz; values of about 300 Hz are sufficient (Sect. 2.1). Within touch-grasps the pre-load is much lower, enabling high-frequency components to be transmitted directly to the skin up to a range of approximately 1,000 Hz.

In case of telemanipulation systems, the end effector interacts with a real, physical object. Assumptions made for the measuring object "user" can partially be transferred to this situation. Following Newton's law *actio et reactio*, the absolute value of force depends on intensity and way of interaction. Possible examples are compression and lifting of objects with a gripper, or exploration with a stick. For telemanipulation systems in minimally invasive surgery, the absolute value of force ranges from 1 to 60 N (cf. e.g. [10]). The most promising approach is to analyze the intended application within preliminary tests and to derive a model. The mechanical impedance of the object itself, which is described in the following section, dominates the dynamics of the interaction, especially the upper cut-off frequency.

## *10.4.3 Mechanical Properties of Measuring Objects*

As stated for the user in Chap. 3, the mechanical impedance of objects can be subdivided into three physical actions: elastic compliance *n*, damping *d* and mass *m*. In the case of rigid objects made of e.g. metal or ceramics, the property of elasticity is dominant, and the interaction between haptic system and object can be considered a rigid contact. Consequently, the force signal includes high-frequency components; the upper cut-off frequency should take a value of at minimum 1,000 Hz to make sure all dynamics responsible for haptic perception are covered. Soft objects, such as silicone or viscera, show viscoelastic material behavior. Following Kelvin, viscoelastic behavior can be simulated by a network made of elastic compliances *ni* and damping elements *di* as well as masses *mi*. Using such an equivalent network, dynamic effects like relaxation and creep can be modeled (Figs. 10.5 and 10.6).

First of all, the elasticity of measuring objects has to be investigated when designing a haptic sensor. An arithmetic example in Sect. 2.4.2 compares the different cut-off frequencies of materials. For soft materials such as rubber, the upper cut-off frequency takes values below 10 Hz. During interaction with soft materials mainly low-frequency components appear; the upper cut-off frequency is defined by the interaction frequency of 10 Hz at maximum [11–13]. If the measuring object is a soft one with embedded rigid objects, like for example tumors in soft body tissue, an upper cut-off frequency of about 1,000 Hz should be realized. More precise information about frequency requirements can hardly be obtained without an analysis of the interaction object. As a first rule of thumb, the calculated cut-off frequencies derived in Sect. 2.4.2 are sufficient. In case of doubt, the frequency range of the sensor should always be oversized, so as not to lose relevant haptic information at this very first point in the processing chain.

(a) Kelvin model modeling dynamic effects

(b) Kelvin model extended by the object's weight for calculating the resonance frequency

**Fig. 10.5** Kelvin model (standard linear solid) modeling viscoelastic behavior of objects. For calculating the resonance frequency, a mass element has to be added. By adding further damping and spring elements, the dynamic behavior of every object material can be modeled

**Fig. 10.6** Visualization of the visco-elastic phenomena relaxation, creep and hysteresis
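The relaxation behavior of the Kelvin model (standard linear solid) can be sketched directly: under a constant deflection step the force decays exponentially from a stiff instantaneous response to the long-term elastic value. Parameter values below are invented for illustration, and stiffnesses are used instead of compliances (k = 1/n):

```python
import numpy as np

# Standard linear solid: spring k1 in parallel with a Maxwell arm (k2 + d).
k1, k2, d = 800.0, 1200.0, 60.0    # N/m, N/m, Ns/m (illustrative values)
x0 = 1e-3                          # constant deflection step, m
tau = d / k2                       # relaxation time constant, s

t = np.linspace(0.0, 0.5, 500)
force = x0 * (k1 + k2 * np.exp(-t / tau))

instantaneous = force[0]    # (k1 + k2) * x0: stiff, rigid-like response
relaxed = force[-1]         # decays towards k1 * x0: long-term elasticity
```

The time constant tau directly sets how fast the relaxation in Fig. 10.6 proceeds; adding further spring-damper arms yields a spectrum of such time constants.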

Besides dynamics, the required force resolution depends on a physiological value, too. The → JND lies in the range between 5 and 10% of the absolute force value (Sect. 2.1). From the JND, the required measurement uncertainty of the sensor can be derived. If realized as a constant value, which is common to many technical sensor solutions, 5% of the lowest appearing value of force should be chosen to prevent distortion of the haptic impression of the object. Nevertheless, there is no actual requirement for haptic applications to have a constant or even linear sensor resolution. With telemanipulation systems, the interaction of the haptic system with real, physical objects is the main application. Depending on the type of interaction, the surface structure of objects, the so-called texture, frequently becomes equally or even more important than the object's elastic compliance. Helpful literature for modeling the dynamics of mechanical or electromechanical systems is found in [14, 15]. The resulting challenges for sensor development are discussed within the following subsection.
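The rule of thumb for a constant-resolution force sensor follows directly from the JND; the force range values below are illustrative assumptions:

```python
def required_resolution(f_min, jnd=0.05):
    """Constant sensor resolution that keeps quantization below the 5%
    force JND even at the smallest force expected in the interaction."""
    return jnd * f_min

f_min, f_max = 0.5, 50.0            # assumed force range at the handle, N
res = required_resolution(f_min)    # 0.025 N resolution required
steps = f_max / res                 # distinguishable steps over full scale
```

Note how the smallest expected force, not the nominal load, dictates the resolution: a wide force range therefore demands a large dynamic range from the sensor.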

## *10.4.4 Texture of Measuring Objects*

Properties relevant for the human perception of texture are, on the one hand, the geometrical surface structure (e.g. the wood grain) and, on the other hand, a kind of "frequency image" generated by the geometrical structure in the (vibro-)tactile receptors when touched by skin. To detect the surface structure of an object,

**Fig. 10.7** Illustration of static and spatially resolved force measurement using a 3 × *n* array. One sensing element has the same dimension as a texture element. At position 1 the array is optimally placed. If the array is shifted by *x* to position 2, the texture is incorrectly detected

the variation of force over the contact area can be derived. For **static measurement**, sensor arrays of single-component force or pressure sensors are a common technical solution. These arrays are placed onto the object; the object's structure generates different values of contact forces, providing a force distribution on the sensor surface. The size of both the array and the individual array elements cannot be defined in general, but depends on the smallest detectable structure on the measurement object itself. In the case of static measurement as sketched above, the sensor array elements should be dimensioned slightly smaller than the minimum structure of the measuring object: the size of each element should be less than half of the size of the smallest structure to be measured. However, even when this requirement is fulfilled, aberrations will appear. Figure 10.7 shows that, in case of the width of the sensor element being larger than or identical to the smallest structure, the distance between the elements is detected as smaller than in reality. With *n* sensor elements per structure width, the width of a structure element is reproduced as (*n* + 1)/*n* and the distance as (*n* − 1)/*n* of the true value in the worst case. If the number of sensor elements per surface area increases, the aberration diminishes and the structure is approximated more realistically (Fig. 10.8). However, the effort of signal conditioning and analysis increases with the number of elements.
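The worst-case widening can be checked with a small sampling model: a texture element of unit width is scanned by an array whose elements have width 1/n, and an element counts as "touched" if it overlaps the texture element. This is a hypothetical geometric sketch of the argument, not a model of any specific array:

```python
import math

def detected_width(n, shift):
    """Detected width (in texture-element widths) of a unit texture element
    sampled by sensor elements of width 1/n, with the array shifted by
    `shift` relative to the texture."""
    p = 1.0 / n                                # element pitch, texture widths
    first = math.floor(shift / p)              # first overlapping element
    last = math.ceil((shift + 1.0) / p) - 1    # last overlapping element
    return (last - first + 1) * p

# Sweep over shifts: worst case reproduces the width as (n + 1)/n
worst = max(detected_width(3, s / 100.0) for s in range(100))   # 4/3 for n = 3
best = min(detected_width(3, s / 100.0) for s in range(100))    # 1 if aligned
```

The same model confirms that the worst case shrinks towards 1 as n grows, matching the statement that more elements per surface area approximate the structure more realistically.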

Beside the aberration described above, an additional disadvantage of static measurement is that knowledge of the texture alone is not sufficient to obtain information about the object's material. The complete haptic impression also requires frequency information depending on the elastic properties of texture and on surface friction. To gain these data, a relative movement between object and haptic system should be performed in order to measure the texture **dynamically and spatially**. Depending on the velocity of the relative movement and the speed of the signal detection algorithms, the spatial resolution can be multiplied using the same number of sensor elements as in the example shown before. Even the use of a sensor array with simultaneous detec-

**Fig. 10.8** Illustration of static and spatially resolved force measurement using a 6×*n* array. The size of one sensing element is half that of a texture element. At position 1 the array is optimally placed; in any other position an aberration occurs. The aberration decreases with an increasing number of sensing elements in an appropriate array

tion of multiple points becomes unnecessary. With knowledge of the exploration velocity and its direction, the individual measurements can be put into relation to each other. For texture analysis, multi-component force sensors should be used, as especially the combined forces in the direction of movement and normal to the surface contribute to haptic perception [16]. This dynamic measurement principle is comparable to the intuitive exploration made by humans: to perceive the texture of an object, humans gently touch and stroke over its surface. The surface structure excites the fingerprint ridges to oscillate, and the vibrotactile receptors acquire the frequency image. The absolute values of normal force reached during such explorations are in a range of 0.3–4.5 N [17]. As stated earlier, force resolution is defined by the → JND. Haptic information about texture is contained in the high-frequency components of the signal; for haptic applications the maximum frequency should be located at 1,000 Hz. The absolute value of nominal force should be chosen depending on the elastic compliance of the object. For softer objects a lower absolute value should be chosen, since otherwise surface structures will deform and cannot be detected anymore. To be able to measure equally well on soft and rigid objects, the nominal force should take values ≤ 4.5 N. Caldwell [17], for example, decided to use *F* = 0.3 N.
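The relation between exploration velocity, texture period and signal frequency sketched above can be illustrated with a short calculation (all numeric values are assumed examples, not taken from the book): a texture of spatial period λ stroked at velocity *v* excites a frequency *f* = *v*/λ in the sensor, while sampling at rate *f*<sub>s</sub> during a scan at velocity *v* yields a spatial resolution of *v*/*f*<sub>s</sub> per sensing element.

```python
# Sketch: frequencies and spatial resolution of a dynamic texture scan.

def excited_frequency(v, lam):
    """Vibration frequency (Hz) for scan velocity v (m/s), spatial period lam (m)."""
    return v / lam

def spatial_resolution(v, f_s):
    """Distance (m) travelled between two samples at velocity v and rate f_s."""
    return v / f_s

# Assumed example: a 0.1 mm texture period stroked at 5 cm/s excites about
# 500 Hz, inside the roughly 1000 Hz bandwidth named above; sampling the same
# scan at 10 kHz then resolves about 5 um per sample with a single element.
print(excited_frequency(0.05, 1e-4))
print(spatial_resolution(0.05, 10_000.0))
```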

## *10.4.5 Interaction and User Movements*

As stated earlier, users interact with haptic devices based on physical contact, or without physical contact in the case of motion capture. Beside force, position, velocity and acceleration are relevant measurands. The kind of interaction with the haptic system determines the measuring range as well as the dynamics and the sensing principle. In case of

**Fig. 10.9** Workspace of thumb and index finger and trajectories during grasping tasks [19]

physical contact, nearly every sensing principle is possible; only the task (dynamic exploration of objects, tactile, grasping or other kinesthetic interaction) has to be taken into account concerning measuring range, resolution and bandwidth. In case of non-physical contact, contactless sensing principles such as marker-based motion capture (e.g. reflecting or magnetic markers with an appropriate detection system, as well as inertial measurement units), camera-based systems and, more rarely, acoustic sensors (e.g. ultrasound) or near-field radar sensors are in use [6]; motion range and the distance between display and user limit the sensor specifications.

Let us have a closer look at patterns of movement and trajectories of fingers, hands and limbs. Concerning fingers and hands, we distinguish three different grasps: finger touch, precision grasp and power grasp (Chap. 2). Depending on the user's grasping strategy, one to five digits are involved in the reach-to-grasp task [18]. Kamper et al. found that, regardless of grasping strategy or object properties, the fingertips followed a stereotypical trajectory, which can be modelled by a logarithmic spiral [19]. The spiral scales with object size and shape. The workspace of the fingers was analyzed as well as the velocity of finger movement. When grasping a soft ball, a fingertip speed (ring finger) of ≤ 40 cm/s was detected; when grasping a mug, the fist moved with a speed of up to 60 cm/s. They found that only a fraction of the maximum workspace of the fingers is used during grasping, in case of the thumb just 4.2%. Figure 10.9 shows the typical workspace of thumb and index finger measured in the mentioned study.

In 2009, unconstrained three-dimensional hand and arm movements were analyzed [20, 21]. Movement speeds of up to 1.5 m/s for arms [20] and up to 1 m/s for hands [21] occurred, depending on the movement trajectory. The maximum speed of the hand and the influence of age on it were studied in 2001, where 20 men (aged between 25 and 70) were tested [22]. An average maximum speed of 3 m/s was observed. In 1996, the dynamic movement of elbow, wrist and forearm was modelled for simulation purposes based on measurements [23]. The peak velocity of wrist action was 45°/s (deviation of ±20%). Acceleration was not measured.

Another source of information on workspace, speed and acceleration is the field of biomechanics; there are plenty of studies analyzing the movement of limbs and hands during sports disciplines such as boxing or swimming (e.g. [24]). Movement capabilities of fingers, wrist and forearm are a focus in rehabilitation, where the progress of patients' flexibility is of interest [25, 26]. Sensorized gloves (e.g. *SenseGlove*) combined with VR headsets are used for movement analysis during grasping and positioning tasks; Ay [25], for example, provides an overview of current measurement and training systems for movement analysis in rehabilitation. In 2021, a study was performed at Hamburg University of Technology (TUHH) to observe the progress in flexibility of forearm, wrist and fingers of 24 hemiparetic patients [27]. Beside the displacement of the fingertips, hand and forearm movement was also tracked. The maximum flexion angles of the index finger, corresponding to the fist configuration in each cycle, were statically measured on the subjects as 85° for the MCP joint, 105° for the PIP joint, and 70° for the DIP joint. In addition, the average over all joints for a healthy person highly depends on both the duration of the exercise and the motivation of the person. An average nominal angular velocity of 25 rad/s and an average nominal acceleration of 38 rad/s<sup>2</sup> were derived from the position measurements. The frequency range of all analyzed movements was within a few hundred Hertz.

#### **10.4.5.1 Selection of Design Criteria**

Following the description of the most relevant constraints, the limiting factors for sensor design in haptic applications can be found in physiological values. Nominal load, resolution, covered frequency range and measurement uncertainty can be derived from human haptic perception. For a quantitative analysis of these requirements, the contact between measurement object and sensor has to be brought into focus. Measurement range and the number of detectable vector components are defined by the intended application and the structure of the device. The geometrical dimensions and other mechanical requirements depend on the point of integration into the haptic system. The diagram in Fig. 10.10 visualizes the procedure of identifying the most important requirements for sensor design.

## **10.5 Force Sensor Design**

This section deals with the selection and design of force sensors implemented in haptic systems. Approaches like measuring the current in actuators to derive the occurring force are not part of this chapter. In Sect. 10.4, fundamental problems have been discussed which are the basis of every sensor design process. A selection of factors to be taken into account was made in Sect. 10.4.5.1 and will help us during the development or selection process. After a short introduction to basic transfer properties, sensor characteristics are analyzed according to haptic aspects and complemented by application examples.

**Fig. 10.10** Tree diagram to identify the principle requirements on haptic sensors, representatively listed for force sensors. Beside mechanical characteristics of the object, also physiological parameters of human haptic perception have to be considered

**Fig. 10.11** Overview of established measurement principles for detecting forces in haptic systems. Furthermore, active sensor systems are also discussed in the following section

## *10.5.1 Sensing Principles*

Within the previous section, the most important criteria for the design and development of a haptic sensor were named and introduced. Section 10.5.2 summarizes the major requirements once again in tabular form. To help choose a suitable sensor principle, the variants according to Fig. 10.11 are presented in this section. Beside established measurement elements, such as resistive, capacitive, optical or piezoelectric ones, other less common sensor designs based on electro-luminescence or active moving coils are discussed too.

Most sensor principles are transducers using the deflection method for force measurement, which means that elasto-mechanic values such as stress or strain are detected and the corresponding force is calculated. Sensors belonging to the group of active transducers are resistive, capacitive, optical and magnetic ones, which also work according to the displacement principle. Piezoelectric, electro-dynamic or electrostatic sensors are part of the group of passive transducers. After a short introduction to elasto-mechanics, each sensing principle will be discussed according to its operating mode, and several applications will be presented. All sensor principles will be assessed concerning their applicability for kinesthetic and tactile force measurement and put into relation to the requirements known from Chap. 5. At the end of this chapter, a ranking method for the selection of suitable sensor principles will be given.

**Fig. 10.12** Voxel *dV* of an elastic object. Due to external deformation, internal stress occurs, which can be described by the components *T*<sub>ij</sub> of the stress tensor [28] © Springer Nature, all rights reserved

#### **10.5.1.1 Basics of Elasto-Mechanics**

As mentioned before, a large number of sensor principles are based upon elasto-mechanics. This section summarizes fundamental knowledge that is necessary for sensor design. If a force is exerted on an elastic body, it deforms elastically depending on the amount of force. Internal stress *T* occurs, resulting in a shape change: the strain *S*. Stress and strain are correlated by specific material parameters, the so-called elastic coefficients *s*<sub>ij</sub>.

For better comprehension, a short *Gedankenexperiment* will be performed [28]. If a volume element *V* is cut from an object under load (Fig. 10.12), substitute forces *F* must act upon the surfaces of the cuboid to maintain the state of deformation. Due to the required state of equilibrium, the sum of all forces and torques acting upon *V* must equal zero.

Subdividing the force *F* into its three components *F*<sub>1</sub>, *F*<sub>2</sub> and *F*<sub>3</sub>, these components act orthogonally or tangentially on the surface elements *A*<sub>j</sub>. The quotient of the acting force component *F*<sub>i</sub> and the corresponding surface element *A*<sub>j</sub> results in a mechanical stress *T*<sub>ij</sub>. Following the equilibrium condition *T*<sub>ij</sub> = *T*<sub>ji</sub>, six independent stress components remain, forming the stress tensor. The tensor elements can be factorized into normal stress components (stress parallel to the surface normal) and shear stress components (stress orthogonal to the surface normal). Analyzing the volume element *V* before and after loading, a displacement of the element *V* with respect to the coordinate system 123 as well as a deformation occurs. The sides of the cube change their lengths and are no longer orthogonal to each other (Fig. 10.13).

To describe this shape change, the strain *S*<sub>ij</sub> is introduced. Strain is a tensor too, consisting of nine elements (Eq. 10.1)

$$
\begin{pmatrix} d\xi\_1 \\ d\xi\_2 \\ d\xi\_3 \end{pmatrix} = \begin{pmatrix} S\_{11} & S\_{12} & S\_{13} \\ S\_{21} & S\_{22} & S\_{23} \\ S\_{31} & S\_{32} & S\_{33} \end{pmatrix} \cdot \begin{pmatrix} \Delta x\_1 \\ \Delta x\_2 \\ \Delta x\_3 \end{pmatrix} \tag{10.1}
$$

**Fig. 10.13** Displacement of point *P* to *P*′ due to the application of force visualizes the state of strain [28] © Springer Nature, all rights reserved

Due to volume constancy the following correlation can be defined as

$$S\_{ij} = S\_{ji} = \frac{1}{2} \cdot \left(\frac{\partial \xi\_i}{\partial x\_j} + \frac{\partial \xi\_j}{\partial x\_i}\right) \tag{10.2}$$

and thus the matrix can be reduced to six linearly independent elements. Normal strain components act parallel to the corresponding surface normal, which results in a volume change. Shear components, acting orthogonal to the surface normal, describe the change of the angle between the edges of the volume element. In case of isotropic materials, such as metals or Al<sub>2</sub>O<sub>3</sub> ceramics, the correlation between the shape change mentioned before and the mechanical stresses can be formulated as follows:

$$
\begin{pmatrix} S\_1 \\ S\_2 \\ S\_3 \\ S\_4 \\ S\_5 \\ S\_6 \end{pmatrix} = \begin{pmatrix} s\_{11} & s\_{12} & s\_{12} & 0 & 0 & 0 \\ s\_{12} & s\_{11} & s\_{12} & 0 & 0 & 0 \\ s\_{12} & s\_{12} & s\_{11} & 0 & 0 & 0 \\ 0 & 0 & 0 & 2(s\_{11} - s\_{12}) & 0 & 0 \\ 0 & 0 & 0 & 0 & 2(s\_{11} - s\_{12}) & 0 \\ 0 & 0 & 0 & 0 & 0 & 2(s\_{11} - s\_{12}) \end{pmatrix} \cdot \begin{pmatrix} T\_1 \\ T\_2 \\ T\_3 \\ T\_4 \\ T\_5 \\ T\_6 \end{pmatrix} \tag{10.3}
$$

For simplification, the six independent strain and stress components are each summarized in a vector. Components with indices 1, 2 and 3 mark normal components, those with indices 4, 5 and 6 shear components [28]. The parameters *s*<sub>ij</sub> are independent of direction. Taking Young's modulus *E* and shear modulus *G* into account, the parameters can be derived:

$$s\_{11} = \frac{1}{E}, \quad s\_{12} = -\frac{\nu}{E}, \quad \frac{1}{G} = 2(s\_{11} - s\_{12}) = \frac{2}{E}(1 + \nu) \tag{10.4}$$

ν marks the so-called Poisson's ratio, which is material dependent; for metals, ν takes values between 0.25 and 0.35. In case of homogeneous isotropic materials under uniaxial load, Eq. (10.3) reduces to the linear correlation *T* = *E* · *S*. For anisotropic materials such as silicon or quartz, the elasto-mechanic properties depend on the orientation of the coordinate system (comp. Sect. 10.5.1.3), resulting in a matrix of elastic coef-

**Fig. 10.14** Behavior of a bending beam, the right-hand detail shows stress distribution along the profile

ficients with up to 21 elements. For further reading on elasto-mechanics, e.g. [28, 29] are recommended.
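As a small numerical illustration of Eq. (10.4), assuming the standard isotropic relations *s*<sub>11</sub> = 1/*E* and *s*<sub>12</sub> = −ν/*E*, with example values for a structural steel (the numbers are assumptions, not from the book):

```python
# Sketch: compliance coefficients and shear modulus of an isotropic material.
E = 210e9    # Young's modulus in Pa (assumed value for steel)
nu = 0.3     # Poisson's ratio (assumed)

s11 = 1.0 / E                     # normal compliance, s11 = 1/E
s12 = -nu / E                     # coupling term, s12 = -nu/E
G = 1.0 / (2.0 * (s11 - s12))     # shear modulus from 1/G = 2(s11 - s12)

# G equals E / (2 * (1 + nu)), about 81 GPa for these values.
print(s11, s12, G)
```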

#### **Example "Beam Bending"**

If a force is exerted on the tip of a bending beam made of an isotropic material and clamped on one side (Fig. 10.14), a bending moment *M*<sub>B</sub> occurs.

The mechanical stress components *T*(*y*) are linearly distributed over the cross section and take values of *T*(*y*) = *c* · *y*, where *c* is a proportionality factor. The bending moment equals the integral of the stress *T*<sub>3</sub>(*y*) distributed over the cross section:

$$M\_B = \int\_A y \cdot T\_3(y)\, dA = c \cdot \int\_A y^2\, dA \tag{10.5}$$

As the integral of *y*<sup>2</sup> over the cross-sectional area *A* equals the axial moment of inertia *I*, *c* is calculated as

$$c = \frac{M\_B}{I}.\tag{10.6}$$

The resulting strain components *S*<sub>1</sub> and *S*<sub>2</sub> act transverse and parallel to the beam's surface. For elastic deformation, strain component *S*<sub>2</sub> and stress component *T*<sub>2</sub> are correlated via the Young's modulus *E*

$$S\_2 = \frac{T\_2}{E} = \frac{M\_B \cdot y}{I \cdot E} = \frac{F \cdot (l - z) \cdot y}{I \cdot E} \tag{10.7}$$

and therefore depends on the geometry of the cross section *A* of the beam, the position *z* along the beam, the distance *y* from the neutral axis, and the acting force *F*. For the calculation of strain component *S*<sub>1</sub>, transversal contraction has to be considered as follows

$$S\_1 = -\nu \cdot S\_2.\tag{10.8}$$

For further reading on elasto-mechanics, for example the calculation of the deformation of fiber-reinforced composites, the works of Gross [28], Werthschützky [29] and Ballas [30] are recommended.
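The beam-bending example can be turned into a short calculation. The sketch below evaluates Eq. (10.7) at the outer fibre *y* = *h*/2 of an assumed rectangular steel cantilever; all dimensions and material values are illustrative assumptions, not taken from the book.

```python
# Sketch: surface strain of a one-side clamped rectangular cantilever.

def surface_strain(F, l, z, b, h, E):
    """Strain S2 on the surface of a rectangular cantilever at position z.

    F: tip force (N), l: beam length (m), z: distance from the clamp (m),
    b, h: cross-section width and height (m), E: Young's modulus (Pa).
    """
    I = b * h**3 / 12.0            # axial moment of inertia of the rectangle
    y = h / 2.0                    # outer fibre distance from the neutral axis
    return F * (l - z) * y / (I * E)   # Eq. (10.7) evaluated at the surface

# 1 N at the tip of an assumed 30 x 5 x 1 mm steel beam, strain at the clamp:
S2 = surface_strain(1.0, 30e-3, 0.0, 5e-3, 1e-3, 210e9)
S1 = -0.3 * S2                     # Eq. (10.8) with an assumed nu = 0.3
print(S2, S1)                      # roughly 171 um/m and -51 um/m
```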

#### **10.5.1.2 Detection of Force**

According to Fig. 10.14, acting forces can be measured by evaluating the resulting strain distribution on the surface as well as the displacement of the beam. In the example above, the strain *S*<sub>2</sub> can be derived using Bernoulli's beam theory; strain components acting transverse to the surface can thus be neglected for slender, long beam geometries. Stress- or strain-sensitive elements should be placed in such a way that a maximum change of surface strain can be detected.

The correlations described above are examples for a cantilever beam. To measure more than just one force component, a suitable deformation element has to be designed considering the elasto-mechanic correlations; the works of Bray [31] and Rausch [32], for example, can help in designing such an element. The primary objective is to generate a strain distribution in the loading case which enables the force components to be deduced.

The correlation between the forces *F*<sub>i</sub> and the electric signals *v*<sub>i</sub> of the sensor element is usually given by a linear system of equations (e.g. [33]). Equation (10.9) shows an example for a three-axial sensor:

$$
\begin{pmatrix} v\_1 \\ v\_2 \\ v\_3 \end{pmatrix} = \begin{pmatrix} a\_{11} & a\_{12} & a\_{13} \\ a\_{21} & a\_{22} & a\_{23} \\ a\_{31} & a\_{32} & a\_{33} \end{pmatrix} \cdot \begin{pmatrix} F\_1 \\ F\_2 \\ F\_3 \end{pmatrix} \tag{10.9}
$$

It can be assumed that all force components contribute to each individual voltage signal *v*<sub>i</sub>. The elements *a*<sub>ij</sub> of the matrix can be found by calibrating the sensor. During the calibration process, a single independent force component is applied to the sensor for each direction, and the resulting voltage components are measured. After inverting the matrix **A** to **A**<sup>−1</sup>, the force vector can be calculated easily.
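The inversion step can be sketched numerically. The sensitivity matrix below is hypothetical (diagonal-dominant with some crosstalk); solving the linear system of Eq. (10.9) recovers the force components from a measured voltage vector. A plain Gaussian elimination stands in for any linear-algebra library so the example stays self-contained.

```python
# Sketch: recovering forces from bridge voltages via the calibration matrix.

def solve3(A, b):
    """Solve A x = b for a 3x3 system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]            # partial pivoting
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Hypothetical sensitivity matrix A in V/N, found during calibration by
# applying one independent force component per direction.
A = [[2.0, 0.1, 0.0],
     [0.1, 1.8, 0.2],
     [0.0, 0.2, 2.2]]

# A measured voltage vector; solving A F = v yields the force components.
v = [2.1, 2.1, 2.4]
F = solve3(A, v)
print([round(f, 3) for f in F])
```

For this assumed matrix and voltage vector, the recovered force is 1 N on each axis, as can be checked by multiplying **A** with (1, 1, 1)<sup>T</sup>.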

Much research is devoted to reducing the number of measuring cycles required for the calibration of multi-axial force sensors. For further information on calibration, check the above-mentioned literature.

#### **10.5.1.3 Resistive Strain Measurement**

One of the most commonly used principles for force sensing is based on the resistive detection of strain or, rather, of the stress components occurring in a (measurement) object. For resistive strain measurement, a resistor pattern is applied to the surface of the bending element; the resistors must be located in the areas of maximum strain. As a quick reminder: the electrical resistance is defined via

$$R\_0 = \rho \cdot \frac{l}{A} = \rho \cdot \frac{l}{b \cdot h},\tag{10.10}$$

where ρ marks the specific resistance and *l*, *b*, *h* (length, width, height) define the volume of the resistor itself. The total differential shown in Eq. (10.11) gives the relative resistivity change resulting from the deformation:

$$\frac{dR}{R\_0} = \underbrace{\frac{dl}{l} - \frac{db}{b} - \frac{dh}{h}}\_{\text{rel. volume change}} + \underbrace{\frac{d\rho}{\rho}}\_{\text{piezoresistive part}}. \tag{10.11}$$

Deformation causes, on the one hand, a change of the geometrical part *l*/*A*. Taking Young's modulus *E* and Poisson's ratio ν into account, the plane-stress case for isotropic material can be derived [28]:

$$\frac{dl}{l} = S\_1 = \frac{1}{E} \cdot T\_1 - \frac{\nu}{E} \cdot T\_2, \tag{10.12}$$

$$\frac{db}{b} = S\_2 = -\frac{\nu}{E} \cdot T\_1 + \frac{1}{E} \cdot T\_2, \tag{10.13}$$

$$\frac{dh}{h} = S\_3 = -\frac{\nu}{E} \cdot T\_1 - \frac{\nu}{E} \cdot T\_2. \tag{10.14}$$

The indices 1, 2 and 3 mark the direction components. Concerning the geometrical change, the resulting gage factor *k*, describing the sensitivity of the material, takes a value of about two (Eq. 10.15). On the other hand, plane stress provokes a change of the specific resistivity ρ.

Material-specific changes will be discussed later in this section. Using Eq. (10.15), the correlation between strain and relative resistivity change is formulated:

$$\frac{dR}{R\_0} = \underbrace{\left(2 - \frac{d\left(N \cdot \mu\right)}{S \cdot N \cdot \mu}\right)}\_{:=k,\ \text{gage factor}} \cdot\, S \tag{10.15}$$

where μ represents the electron mobility and *N* the number density of charge carriers. The change of resistivity can be measured using a so-called Wheatstone bridge circuit. This circuit is built of one to four active resistors connected in a bridge and fed by a constant voltage or constant current (Fig. 10.15). Equation (10.16)

**Fig. 10.15** Wheatstone bridge configurations for evaluating one up to four resistors

describes the bridge of Fig. 10.15c under the assumption that the base resistances *R*<sub>0i</sub> equal the resistance *R*<sub>0</sub>. The values of *R*<sub>0</sub> as well as the gage factors are material-specific and are listed in Table 10.5 (further information e.g. in [32, 37]).

$$
\Delta v = \frac{V\_{cc}}{4} \cdot \left\{ \frac{r\_1}{R\_{01}} - \frac{r\_2}{R\_{02}} + \frac{r\_3}{R\_{03}} - \frac{r\_4}{R\_{04}} \right\}, \quad V\_{cc} = R\_0 \cdot I\_0 \tag{10.16}
$$

Supplying the bridge with a constant current *I*<sub>0</sub> has the great advantage that a temperature-dependent drift of the measurement signal is compensated. More advanced information can be found in [38, 39].
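The bridge behaviour can be sketched with assumed values (350 Ω gages, 5 V supply, gage factor *k* = 2, strain 1,000 µm/m; none of these numbers come from the book): resistance changes of alternating sign in adjacent arms add up, while a drift that is identical in all four arms cancels, which illustrates the compensation mentioned above.

```python
# Sketch of Eq. (10.16) for the full bridge of Fig. 10.15c.

def bridge_output(v_cc, r, R0):
    """Bridge output voltage for resistance changes r = [r1, r2, r3, r4]."""
    return v_cc / 4.0 * (r[0] / R0 - r[1] / R0 + r[2] / R0 - r[3] / R0)

R0, v_cc = 350.0, 5.0          # assumed: 350 ohm gages, 5 V supply
dR = R0 * 2.0 * 1e-3           # k = 2 and S = 1000 um/m give dR/R0 = 0.2 %

signal = bridge_output(v_cc, [dR, -dR, dR, -dR], R0)       # fully active bridge
drift = bridge_output(v_cc, [0.35, 0.35, 0.35, 0.35], R0)  # equal drift in all arms

print(signal)   # about 10 mV for the fully active bridge
print(drift)    # exactly zero: a common drift cancels
```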

In case of metallic resistors, a gage factor of approximately two occurs; the material-specific component of metals is less important and affects the first decimal place only. In case of semiconductors and ceramic materials, the material-specific component is dominant. For semiconductor strain gages, the gage factor takes values of up to 150. Using resistor pastes applied in thick-film technology on substrates<sup>1</sup> and poly-silicon layers sputtered in thin-film technology, the material-specific component is dominant as well; here, gage factors achieve values of up to 18 in case of thick-film resistors and up to approximately 30 for thin-film resistors. Table 10.5 lists the gage factors of several materials usually used in strain measurement. As mentioned earlier, strain gages are manufactured in different technologies. The most commonly used

<sup>1</sup> For the substrate material, mainly (layer) ceramics are used. Metals are used less frequently, as isolating layers then have to be provided.


**Table 10.5** Gage factor, strain resolution and nominal strain of important resistive materials according to [32]

**Fig. 10.16** Assembly of conventional strain gages: measuring grid is usually made of a patterned metal foil. In case of special applications metal wires are applied

types are foil strain gages; thick- and thin-film measurement elements are found mainly in OEM sensors and in specific solutions for the automation industry, due to the necessary periphery and the manufacturing process. Relevant literature can be found in the publications of Partsch [40] and Cranny [41].
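To put the gage factors named above into perspective, a minimal sketch of Eq. (10.15), *dR*/*R*<sub>0</sub> = *k* · *S*, for an assumed nominal strain of 1,000 µm/m (the gage-factor values are the approximate figures quoted in the text):

```python
# Sketch: relative resistance change dR/R0 = k * S for different technologies.

def rel_resistance_change(k, strain):
    """Relative resistance change dR/R0 for gage factor k and strain S."""
    return k * strain

S = 1e-3   # 1000 um/m, the nominal strain recommended for foil gages above
for name, k in [("metal foil", 2), ("thick film", 18),
                ("thin film", 30), ("semiconductor", 150)]:
    print(f"{name:13s} dR/R0 = {rel_resistance_change(k, S):.4f}")
```

The two orders of magnitude between metal foil and semiconductor gages are what allows the stiffer, smaller deformation elements discussed below.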

To deposit thin-film sensing layers, other technologies like inkjet or aerosol-jet printing can be used. The inks are suspensions containing electrically conducting particles made of carbon, copper, gold, silver or even conducting polymers like PEDOT:PSS. One advantage is that, compared to conventional thick-film pastes, the finishing temperature is below 300 °C, and thus various substrates can be functionalized. For further information, see [32, 42, 43].

Foil strain gages are multilayer systems made of metallic measurement grids and organic substrates. They are applied (Fig. 10.16) and fixed on bending elements via cold-hardening cyanoacrylate adhesives (strain analysis) or via hot-hardening adhesives such as epoxy resin (transducer manufacture). These gages are long-term stable and robust and are used especially for high-precision tasks in wind-tunnel scales and balance sensors. The achievable dynamics, resolution and measurement range depend solely on the deformation element. The minimum size of individual strain

**Fig. 10.17** Compilation of possible grid configurations of strain gages

gages taken off the shelf is in the range of 3 mm width and 6 mm length. The measurement pattern itself is smaller in its dimensions; hence it is possible to shorten the organic substrate, finally achieving 1.5 mm width and 5 mm length as a typical minimum size. If foil strain gages are considered, the surface strain resulting from the nominal load should be 1,000 µm/m for optimum usage of the strain gage. Many measurement patterns are applied for force and torque sensors; Fig. 10.17 shows a selection of commercialized measuring grids ready for application on deformation elements.

Beside resistive foil strain gages, semiconductor strain gages are available. Their general design is comparable to conventional strain gages, as the semiconducting elements are assembled on organic substrates.<sup>2</sup> The measurement elements are used identically to foil strain gages and are available in different geometrical configurations such as T-rosettes.

Using measurement elements with a higher gage factor (Table 10.5), deformation elements can be designed stiffer, allowing smaller nominal strains. Such elements are especially relevant for the design of miniaturized sensors for haptic systems, as small dimensions and high cut-off frequencies have to be achieved. A commercially available example is the OEM sensor *nano17* from ATI (Fig. 10.18). Its strain elements are piezoresistive, and their gage factor takes values of approximately 150. Due to their high potential for miniaturization and manifold applications in haptic systems,

<sup>2</sup> Single semiconducting elements without an organic substrate are also available. They are highly miniaturized (width of about 230 µm, length of about 400 µm), but have to be insulated from the deformation element.

**Fig. 10.18** Miniaturized force/torque sensor nano17. The resonance frequency of the sensor takes a value of about 7.2 kHz. © 2022 *ATI Industrial Automation, Inc.*, Apex, NC, USA, all rights reserved

piezoresistive sensors, especially silicon sensors, will be discussed in a separate subsection.

## **Piezoresistive Silicon Sensors**

As first published by Charles S. Smith in 1954 [44], semiconducting materials with a symmetric crystal structure, such as silicon or germanium, change their conductivity σ due to an applied force or pressure. In the following paragraphs, this effect is discussed in more depth for mono-crystalline silicon.

## **The Piezoresistive Effect**

If a semiconducting material is deformed due to a load, stress components *T*<sub>i</sub> are generated inside the material. Note that, due to the anisotropy of the material, the elasto-mechanic properties depend on the position of the coordinate system and consequently on the orientation of the crystal lattice. These stress components affect the electron mobility μ and, as a consequence, the specific resistivity ρ. ρ is a material-specific value, characterized via the electron mobility μ and the number of charge carriers *N* (comp. Sect. 10.5.1.1). Considering these parameters, the correlation between the relative resistivity change and the resulting strain can be expressed as:

$$\frac{d\rho}{\rho} = \frac{dV}{V} - \frac{d(N \cdot \mu)}{N \cdot \mu}, \text{with } \rho = \frac{V}{N \cdot \mu \cdot |q|}, \tag{10.17}$$

where *V* is the volume of the resistive area and |*q*| is the charge of the carriers.

Following Ohm's law, the specific resistance ρ connects the vector **E** = (*E*<sub>1</sub>, *E*<sub>2</sub>, *E*<sub>3</sub>)<sup>T</sup> of the electric field and the current density **J** = (*J*<sub>1</sub>, *J*<sub>2</sub>, *J*<sub>3</sub>)<sup>T</sup>:

$$
\begin{pmatrix} E\_1 \\ E\_2 \\ E\_3 \end{pmatrix} = \begin{pmatrix} \rho\_{11} & \rho\_{12} & \rho\_{13} \\ \rho\_{21} & \rho\_{22} & \rho\_{23} \\ \rho\_{31} & \rho\_{32} & \rho\_{33} \end{pmatrix} \cdot \begin{pmatrix} J\_1 \\ J\_2 \\ J\_3 \end{pmatrix} = \begin{pmatrix} \rho\_1 & \rho\_6 & \rho\_5 \\ \rho\_6 & \rho\_2 & \rho\_4 \\ \rho\_5 & \rho\_4 & \rho\_3 \end{pmatrix} \cdot \begin{pmatrix} J\_1 \\ J\_2 \\ J\_3 \end{pmatrix} \tag{10.18}
$$


**Table 10.6** Piezoresistive coefficients of homogeneously doped silicon [45]

Due to the symmetric crystalline structure of silicon,<sup>3</sup> six independent resistivity components ρ<sub>i</sub> result, which are symmetrical to the diagonal of the tensor ρ. Taking the matrix of piezoresistive coefficients π into account, the influence of the six acting stress components *T*<sub>i</sub> can be formulated. The cubic symmetry reduces the number of piezoresistive, direction-dependent coefficients to three. By doping silicon with impurity atoms such as boron or phosphorus, areas of higher resistivity are generated; by influencing the type and concentration of the dopant, the three π-coefficients can be adjusted. Further information on doping can be found e.g. in [45, 46].

$$
\begin{pmatrix} \rho\_1 \\ \rho\_2 \\ \rho\_3 \\ \rho\_4 \\ \rho\_5 \\ \rho\_6 \end{pmatrix} = \begin{pmatrix} \rho\_0 \\ \rho\_0 \\ \rho\_0 \\ 0 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} \pi\_{11} & \pi\_{12} & \pi\_{12} & 0 & 0 & 0 \\ \pi\_{12} & \pi\_{11} & \pi\_{12} & 0 & 0 & 0 \\ \pi\_{12} & \pi\_{12} & \pi\_{11} & 0 & 0 & 0 \\ 0 & 0 & 0 & \pi\_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & \pi\_{44} & 0 \\ 0 & 0 & 0 & 0 & 0 & \pi\_{44} \end{pmatrix} \cdot \begin{pmatrix} T\_1 \\ T\_2 \\ T\_3 \\ T\_4 \\ T\_5 \\ T\_6 \end{pmatrix} \cdot \rho\_0 \tag{10.19}
$$

For homogeneous silicon with a small concentration of dopants, the values in Table 10.6 can be used.

Depending on the angle between the current density vector **J** and the stress component *T*<sub>i</sub>, three effects can be distinguished: within the so-called longitudinal effect, the current *i* flows parallel to the normal stress component; within the transversal effect, *i* flows orthogonal to the normal stress component; and within the shear effect, *i* flows parallel or orthogonal to the shear stress component. Figure 10.19 visualizes these correlations.

For the resistivity change, depending on the orientation of the resistive area in Fig. 10.19, the following equation holds:

$$\frac{dR}{R\_0} \approx \frac{d\rho}{\rho} = \pi\_L \cdot T\_L + \pi\_Q \cdot T\_Q \tag{10.20}$$

As a consequence, longitudinal and transversal stress components both influence the resistivity change. Depending on the crystallographic orientation of the resistive areas, the π-coefficient is formed by the longitudinal and transversal coefficients (Table 10.7). For a homogeneous boron concentration of *N*<sub>R</sub> ≈ 3 · 10<sup>18</sup> cm<sup>−3</sup>, the following values are achieved [45]:

<sup>3</sup> Face centered cubic.

**Fig. 10.19** Visualization of the piezoresistive effects: longitudinal, transversal and shear effect in silicon [32]. The transversal and longitudinal effects are normally used in commercial silicon sensors


**Table 10.7** Compilation of π<sub>l</sub>- and π<sub>q</sub>-coefficients for selected resistor assemblies depending on the crystallographic orientation [59]

$$\pi\_L = 71.8 \cdot 10^{-8}\,\text{MPa}^{-1}, \qquad \pi\_Q = -65.1 \cdot 10^{-8}\,\text{MPa}^{-1}.$$

More advanced information for the design of piezoresistive silicon-sensors can be found in the publications of Bao [45], Barlian [46], Meiss [57], Rausch [32] and Werthschützky [58].
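A minimal sketch of Eq. (10.20), using the π-coefficients quoted above for boron-doped silicon; the stress values in the example are assumptions for illustration only.

```python
# Sketch: relative resistance change of a piezoresistive silicon resistor.

def rel_change(pi_l, pi_q, T_l, T_q):
    """dR/R0 from longitudinal and transversal stress (MPa), Eq. (10.20)."""
    return pi_l * T_l + pi_q * T_q

pi_l = 71.8e-8    # longitudinal coefficient in MPa^-1, quoted above
pi_q = -65.1e-8   # transversal coefficient in MPa^-1, quoted above

# Assumed load case: 10 MPa longitudinal and 2 MPa transversal stress.
print(rel_change(pi_l, pi_q, 10.0, 2.0))
```

Note how the opposite signs of the two coefficients partly cancel when longitudinal and transversal stress act simultaneously, which is why the orientation of the resistive areas (Table 10.7) matters for the sensor layout.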

#### **Examples of Piezoresistive Silicon Sensors**

Piezoresistive silicon sensors for physical quantities like pressure and force are commonly integrated in silicon deformation elements. For pressure transducers, this kind of manufacture is state of the art, and sensor elements can be purchased for all pressure ranges. For example, the company Silicon Microstructures Inc. (SMI) sells chips with a glass counter-body for absolute pressure measurement with an edge length of 650 µm (Fig. 10.20a). With suitable packaging, these sensors could be arranged in an array to measure the uni-axial force or pressure distribution on a surface.

In the case of force sensors, the realization of miniaturized multi-component force sensors is a current research issue. Dimensions of single sensor elements range from 200 µm to 2 mm, and nominal forces cover a range of 300 mN to 2 N. Due to the batch manufacture of

(a) unseparated chips, edge length of about 650 µm (b) sectional drawing of the sensor

**Fig. 10.20** Example of piezoresistive silicon pressure sensors [60]

the measurement elements, both single sensor elements and array designs<sup>4</sup> can be realized. The sensitivity of these sensors corresponds to a relative resistance change of about 2% at nominal load. Figure 10.21 shows four examples of current research topics. Variants (a) [61], (b) [62] and (d) [63] were designed for force measurement in haptic systems; variant (c) [64] was built for tactile, dimensional measurement technology. Force transmission is always realized by beam- or rod-like structures.

Since 2007, a Hungarian manufacturer has been selling the *Tactologic* system. Up to 64 miniaturized sensor elements are connected in an array of 3 × 3 mm<sup>2</sup>. The sensor elements have a size of 0.3 × 0.3 mm<sup>2</sup> and are able to measure shear forces up to 1 N and normal forces up to 2.5 N at nominal load. Force transmission is realized by soft silicone dots applied to every individual sensor element (Fig. 10.22a and b). Using this array, static and dynamic loads in the kilohertz range are measurable. However, the viscoelastic material properties of the force transmission influence the dynamics due to creeping, which especially affects the measurement of normal forces [65, 66]. Another approach is to use the piezoresistivity of silicon micro-machined transistors (especially MOSFETs, [67–69]) based on the above-mentioned shear effect. As well as for strain measurement, these sensors are used to monitor the state of stress occurring in the packaging process [67, 69]. Polyimide foils containing sensor elements (strain-sensitive transistors) with a thickness of about 10 µm have been available since around the year 2000 [68].

#### **Further Resistive Sensors**

Besides the resistive transducers presented so far, other more "exotic" realizations exist, which will be introduced with three examples. All sensors are suitable for array assembly to measure position-dependent pressure and a single force component. The underlying measurement principles are based on the change of geometrical parameters of the force elements. The examples shown in Fig. 10.24 (a) [70] and (b) [71] use the load dependency of the constriction resistance: with increased pressure<sup>5</sup> the electrical contact area *A* increases and the resistance decreases.
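The load dependency of the constriction resistance can be sketched with Holm's classical model R = ρ/(2a) for a circular contact spot of radius a: a contact area growing under load lowers the resistance. All numerical values in the following sketch are hypothetical:

```python
# Constriction resistance of a circular contact spot after Holm:
# R = rho / (2 * a), with resistivity rho and contact radius a.
# As the mechanical load increases the contact radius, R decreases.
# All numerical values are illustrative only.

def constriction_resistance(rho_ohm_m: float, radius_m: float) -> float:
    """Constriction resistance in ohms for a circular contact spot."""
    return rho_ohm_m / (2.0 * radius_m)

rho = 1.0e-3  # hypothetical resistivity of the conductive layer, ohm*m
for a in (1e-6, 5e-6, 20e-6):  # contact radius growing under load
    print(f"a = {a:.0e} m -> R = {constriction_resistance(rho, a):.1f} ohm")
```

The monotonically falling resistance over load is what FSR-type foil sensors evaluate; the exact characteristic depends on the paste material and surface roughness.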

<sup>4</sup> By isolating arrays instead of single sensors in the last processing step.

<sup>5</sup> The force can be calculated taking the contact area into account.

(a) single tri-axial sensor (b) array of tri-axial sensors (c) single tri-axial sensor (d) single tri-axial sensor

**Fig. 10.22** Tactile multi-component force sensor [65] © Elsevier, all rights reserved

The companies *Interlink Electronics* and *TekScan* use this effect for their sensor arrays (also called *Force Sensing Resistors*, *FSR*). Interlink distributes polymer foils printed with resistor pastes in thick-film technology. Their basic resistance takes values in the MΩ region. The sensor foils have a height of 0.25 mm and working ranges of 0–1 N up to 0–100 N. Beside the sensitivity to force or pressure, the sensors show a temperature dependency of 0.5%/K.

The sensor foils from *TekScan* cover nominal loads from 4.4 N up to 440 N; the spatial resolution reaches up to 27.6 elements per centimeter. The available

**Fig. 10.23** Foil-sensors for compressive force detection, top: FlexiForce by *TekScan Inc.*, South Boston, MA, USA, bottom: FSR by *Interlink Electronics*, Camarillo, CA, USA. These sensors are often used to detect grasp forces. © 2022 *TekScan Inc.*, used with permission

(a) micro machined tactile array developed by Fraunhofer Institute IBMT (b) variation of the electrodes distance

**Fig. 10.24** Selected examples of foil sensors using the effect of a load-dependent constriction resistance [70, 71], own illustrations

array sizes reach from approximately 13 × 13 mm<sup>2</sup> up to 0.5 × 0.5 m<sup>2</sup>. The height of the foils is around 0.1 mm. The measurement inaccuracy takes a value of 10%. The frequency range reaches from static up to 100 Hz. Beside the application in data gloves, as described by Burdea [8], the foil sensors are used in orthopedics to detect the pressure distribution in shoes and prostheses, and within the automotive industry for ergonomic studies (Fig. 10.23).

Another approach is the variation of the distance between two electrodes (variant (b) in Fig. 10.24). The sensing element is made of flexible substrates. The electrodes are arranged in rows and columns; the gaps in between are filled with an electrically conductive fluid. In the loading case the fluid is squeezed out and the distance between the electrodes varies. A disadvantage of this principle is the necessity for very large distance variations of up to 10 mm to achieve usable output signals. Until today, this principle has remained a topic of research.

#### **10.5.1.4 Capacitive Sensors**

In every capacitive sensor, at least two electrodes are located parallel to each other. Figure 10.25 shows a design based on a single measurement capacitance. In contrast to the resistive principle, which measures the mechanical variables stress and strain, the capacitive principle measures the integral value displacement (or elongation) directly.

Concerning the working principle, three classes can be identified, which show some similarities to the electrostatic actuators discussed in Sect. 9.5. In the first class, the mechanical load changes the electrode distance *d*; in the second, it changes the active electrode area *A*; in the third class, the relative permittivity ε*<sup>r</sup>* is influenced. The change of electrode distance is usually used for measuring force, pressure, displacement, and acceleration. In these cases, the mechanical load is directly applied to the electrode and displaces it relative to the other one. The resulting capacitance change can be calculated:

$$\frac{\Delta C}{C_0} = \frac{1}{1 \pm \xi/d} \approx \pm \frac{\xi}{d}.\tag{10.21}$$

ξ marks the change of distance. Alternatively, the electrode distance can be kept constant and one electrode displaced parallel to the other (Fig. 10.26). The active electrode area varies accordingly, and the resulting capacitance change can be used to measure angle, filling level, or displacement. It is calculated according to:

$$\frac{\Delta C}{C_0} = 1 \pm \frac{\Delta A}{A_0}.\tag{10.22}$$

The third option for a capacitance change is the variation of the relative permittivity. This principle is often used for measuring a filling level, e.g. of liquids, as a proximity switch, or for layer thickness measurement. This capacitance change is calculated according to

$$\frac{\Delta C}{C_0} = 1 \pm \frac{\Delta \varepsilon_r}{\varepsilon_{r0}}.\tag{10.23}$$

#### **Characteristics of Capacitive Pressure and Force Sensors**

The main principle used for capacitive force and pressure transducers is the measurement of displacements. Consequently, the following paragraphs concentrate on this principle. As stated in Eq. (10.21) for the change of distance, the relation between capacitance change and mechanical load is nonlinear for single

**Fig. 10.26** Schematic view of capacitive sensing principle and characteristic curve of capacitance

capacitances. The displacement ξ lies in the range of 10 nm–50 µm [29]. For linearization of the characteristic curve, an operating point has to be found, e.g. by arranging three electrodes as a differential capacitor. The displacements ξ typical for the working range are ≤10% of the absolute electrode distance *d*. In this range the characteristic curve can be approximated as linear (Fig. 10.26a). With the principle varying the electrode's surface, the capacitance changes proportionally to it, resulting in a linear characteristic curve (Fig. 10.26b).
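The benefit of the differential arrangement mentioned above can be checked numerically: while a single plate pair follows the nonlinear relation of Eq. (10.21), the normalized output (C1 − C2)/(C1 + C2) of a differential capacitor is exactly linear in ξ/d. A minimal sketch with hypothetical values:

```python
# Single capacitance vs. differential capacitor readout for a
# distance-change sensor.  C(xi) = C0 / (1 + xi/d) is nonlinear in xi;
# a differential three-electrode arrangement yields
# (C1 - C2)/(C1 + C2) = xi/d, which is exactly linear.
# All numerical values are illustrative only.

def single_cap(c0, xi, d):
    """Capacitance of one plate pair when the gap d grows by xi."""
    return c0 / (1.0 + xi / d)

def differential_readout(c0, xi, d):
    """Normalized output of a differential capacitor: (C1-C2)/(C1+C2)."""
    c1 = c0 / (1.0 - xi / d)   # gap shrinks on one side
    c2 = c0 / (1.0 + xi / d)   # gap grows on the other side
    return (c1 - c2) / (c1 + c2)

C0, d = 10e-12, 100e-6         # 10 pF base capacitance, 100 um gap
xi = 10e-6                     # displacement: 10 % of the gap
print(single_cap(C0, xi, d) / C0 - 1.0)   # approx -0.091, not -0.1
print(differential_readout(C0, xi, d))    # exactly 0.1
```

The residual error of the single capacitance at 10% displacement is already about 1 percentage point, which motivates the differential operating point.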

The evaluation of the capacitance change can be made by an open- or closed-loop measuring method. In the open-loop method, the sensor is either integrated into a capacitive bridge circuit or placed in a passive oscillating circuit with a coil and a resistor. Alternatively, the impedance can be measured at a constant measurement frequency, or a pulse-width modulation (also called the re-charging method) can be applied. A closed-loop approach is characterized by the compensation of the displacement by an additional energy input. The main advantage of closed-loop signal conditioning is the high linearity achieved due to the very small displacements. Additional information can be found in [39, 58].

The advantages of capacitive sensors over resistive sensors are their low power consumption and high sensitivity. Additionally, the simple structure enables a low-cost realization of miniaturized structures in surface micro-machining (Fig. 10.27). In contrast to resistive sensors, where positions and dimensions of the resistive areas have a direct influence on the transfer characteristics, the admissible manufacturing tolerances for capacitive sensors are quite high. Mechanically induced stress due to packaging and temperature influence has almost no effect on their performance. Even a mis-positioning of the electrodes relative to each other does not change the transfer characteristics, only the basic capacitance. The manufacturing steps of silicon capacitive sensors are compatible with CMOS technology. This allows a direct integration of the sensor electronics on the chip to minimize parasitic capacitances. Especially with miniaturized sensors<sup>6</sup> a good signal-to-noise ratio can be achieved [72]. The problem of parasitic capacitances or leakage fields is one of the major design challenges, as they easily reach a level comparable to the capacitance used for measurement. An additional challenge is the constancy of the dielectric value, which is subject to changes in open air-gap designs due to humidity or other external influence factors.

#### **Examples of Capacitive Sensors**

Concerning the manufacturing technology, capacitive sensors integrated in haptic systems can be divided into three classes. Miniaturized pressure sensors realized in silicon microtechnology represent the first class. Due to their small size of a few millimeters, the moving masses of the sensor are low and thus cover a wide dynamic range (frequencies from static to several kilohertz). As shown before, micro-machined capacitive sensors may be combined into arrays for measuring spatially distributed loads. As an example, Sergio [75] reports the realization of a capacitive array in CMOS technology. A challenge is given by the capacitance changes in the range of femtofarads, which is similar to the capacitance of the wiring. Relief is provided by connecting several capacitances in parallel to form one sensor element [29]. The frequency range of the shown examples reaches from static measurement up to an upper cut-off frequency of several MHz. Consequently, it is suitable for haptic-related measurements of tactile information. Another example is an array made of poly-silicon. It has an upper cut-off frequency of 1,000 Hz and a spatial resolution of 0.01 mm<sup>2</sup> suitable for tactile measurements; it was originally designed for the acquisition of fingerprints. Rey [76] reports the use of such an array for intracorporal pressure measurement at the tip of a gripper. Once again, the leakage capacitances are a problem, as they are within the range of the measured capacitance changes.

Two examples of multi-component force sensors built in surface micro-machining are shown in Fig. 10.27 (a) [73] and (b) [74]. The two-axial sensor<sup>7</sup> is designed for atomic force microscopy; the nominal loads of this application lie in the µN range. The three-axial sensor was designed for micro-manipulation, e.g. in molecular

<sup>6</sup> Due to the small electrodes, a small basic capacitance is achieved; cf. the equation in Fig. 10.25.

<sup>7</sup> With respect to "force" component.

(b) 6-component force/torque sensor, nominal load 500 µN

**Fig. 10.27** Examples of capacitive silicon multi-component force sensors [73, 74] © IOP Publishing, all rights reserved

biology, with similar nominal values of several µN. Both sensors use the displacement change for measurement.

The second class is represented by ceramic pressure load cells. They are widely used in the automotive industry and in industrial process measurement technology. Substrate and measurement diaphragm are typically made of Al<sub>2</sub>O<sub>3</sub> ceramics. The electrodes are sputtered onto the ceramic substrate; substrate and measurement diaphragm are connected via solder applied in thick-film technology. In contrast to silicon sensors, ceramic sensors are macroscopic and have dimensions in the range of several centimeters. Based on this technology, sensors in differential, relative, and absolute designs with nominal pressures in the range of 0–200 mbar up to 0–60 bar are available (e.g. Fig. 10.28, Endress + Hauser). The frequency range of these sensors is low; upper cut-off frequencies of approximately 10 Hz are achieved.

The third class is built from foil sensors, distributed e.g. by the company *Althen GmbH*, Kelkheim, Germany. These capacitive sensor elements are arranged in a matrix with a spatial resolution of ≤ 2 × 2 mm<sup>2</sup>. A flexible polymer foil is used as substrate. The height of such an array is 1 mm. The frequency range extends from static to approx. 1,000 Hz. Nominal loads up to 200 kPa can be acquired with a

**Fig. 10.28** Schematic view of a ceramic pressure sensor fabricated by *Endress + Hauser*, Weil am Rhein, Germany, used with permission

**Fig. 10.29** Schematic view of capacitive shear force sensors as presented in [77], own visualization

resolution of 0.07 kPa. Due to creeping (cf. Sect. 10.4.3) of the substrate and parasitic capacitances, a high measurement inaccuracy exists.

Another polymeric foil sensor currently under investigation is shown in Fig. 10.29 [77]. In contrast to the prior examples, this array is used for direct force measurement. Normal forces are detected by measuring the change of electrode distance, shear forces by detecting the change of active electrode surface. Similar to the sensor of the company *Althen*, static and dynamic changes up to 1,000 Hz can be measured. The spatial resolution is given as 1 × 1 mm<sup>2</sup>. A disadvantage of the design is the high measurement inaccuracy caused by creeping of the polymer and leakage capacitances.

#### **10.5.1.5 Optical Sensors**

In the area of optical measurement technology, sensors based on freely propagating beams and on fiber optics are available. For force and pressure sensing mainly fiber-optic sensors are used, which will be introduced in this subsection. All fiber-optic sensors have in common that the mechanical load influences the transmission characteristics of the optical transmission network, thereby affecting the parameters of a reflected or transmitted electromagnetic wave. The electromagnetic wave is defined by its wave equation [78]:

$$
\nabla^2 \Psi = \frac{\partial^2 \Psi}{\partial x^2} + \frac{\partial^2 \Psi}{\partial y^2} + \frac{\partial^2 \Psi}{\partial z^2} \tag{10.24}
$$

Ψ represents an arbitrary wave. A possible solution of this differential equation is the propagation of a plane wave in free space. In this case, the electrical field *E* and the magnetic field *B* oscillate orthogonally to each other. An electrical field propagating in the z-direction is described by Eq. (10.25).

$$E(z,t) = \frac{1}{2}A(z,t) \cdot e^{j(\omega_0 t - \beta_0 z)} \tag{10.25}$$

*A* marks the amplitude of the envelope, ω<sup>0</sup> the optical carrier frequency and β<sup>0</sup> the propagation constant. The E- and B-fields are connected via the propagation group velocity v*<sup>g</sup>*(λ).<sup>8</sup> Depending on the transmitting medium, the group velocity can be calculated via the refractive index *n* [79].

$$v_g(\lambda) = \frac{c_0}{n(\lambda)}\tag{10.26}$$

The refractive index *n* takes different values depending on the wavelength λ. Waves therefore propagate differently depending on their frequency and wavelength, and a pulse "spreads out". For further information, sources [78–82] are recommended. If a mechanical load such as force or pressure influences the transmission network, the resulting deformation can influence the transmission in two different ways:


The photo-elastic effect describes the anisotropy of the refractive index caused by mechanical stress. Figure 10.30 visualizes this effect. The resulting refractive index change depends on the applied stress *T* and is given by the following equation [83]:

$$
\Delta n = (n_1 - n_2) = C_0 \cdot (T_1 - T_2) \tag{10.27}
$$

*C*<sup>0</sup> is a material-specific, so-called photo-elastic coefficient. *T<sub>i</sub>* marks the resulting internal stress. Depending on the refractive index, the polarization, wavelength and phase of the beam change. In the geometric case, the mechanical load changes the conditions of the beam guidance. Using geometrical optics, the influence of mechanical loads on the intensity and phase of the radiation can be characterized.

A disturbing source for all fiber-optical sensors cannot be neglected: temperature. The refractive index depends on temperature changes, which consequently influence the properties of the guided wave. Beside the thermal-elastic coefficients

<sup>8</sup> In vacuum it is equal to the speed of light *c*<sub>0</sub> = 2.99792458 · 10<sup>8</sup> m/s.

**Fig. 10.30** Visualization of the photoelastic effect [83]. Due to different refractive indices perpendicular to the propagation direction, the propagation velocity of each field component is different and an optical path difference δ occurs. The polarization changes

describing the strain resulting from temperature changes within any material, temperature directly influences the refractive index itself (Sect. 10.5.1.5). For temperature compensation, a reference fiber has to be used that is unloaded and influenced only by the temperature change. An advantage of all fiber-optical sensors is their immunity to electromagnetic radiation. The following paragraphs introduce the most important principles for optical force and pressure measurement.

#### **Change of Intensity**

In principle, two transducer types varying the intensity can be distinguished. Both have in common that the mechanical load varies the condition of total reflection (Fig. 10.31). The angle α*<sup>c</sup>* is the critical angle for total reflection, defined by Snell's law:

$$
\sin(\alpha_c) = \frac{n_2}{n_1} \tag{10.28}
$$

The numerical aperture *NA* gives the appropriate critical angle θ*<sup>c</sup>* for coupling radiation into a multimode fiber:

$$
\sin(\theta_c) = \sqrt{n_1^2 - n_2^2} \tag{10.29}
$$

If the angle varies due to mechanical load and takes values larger than θ*<sup>c</sup>* or smaller than α*<sup>c</sup>*, the conditions for total reflection are violated. The beam will no longer be guided within the core of the fiber, and the total intensity of the transmitted radiation decreases. Figure 10.32 shows a schematic sketch of the design of the first variant. The sensor element is attached to the end of a multimode fiber.
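Equations (10.28) and (10.29) can be evaluated directly. The following sketch computes both critical angles for an illustrative step-index fiber; the refractive indices are assumed, typical values:

```python
import math

# Critical angles of a step-index multimode fiber (Eqs. 10.28/10.29):
# alpha_c is the internal angle for total reflection at the
# core/cladding boundary, theta_c the acceptance angle given by the
# numerical aperture NA = sqrt(n1^2 - n2^2).
# The refractive indices below are illustrative, typical values.

def total_reflection_angle_deg(n1: float, n2: float) -> float:
    """Critical angle alpha_c inside the core, in degrees (Eq. 10.28)."""
    return math.degrees(math.asin(n2 / n1))

def acceptance_angle_deg(n1: float, n2: float) -> float:
    """Acceptance angle theta_c from the numerical aperture (Eq. 10.29)."""
    na = math.sqrt(n1**2 - n2**2)
    return math.degrees(math.asin(na))

n_core, n_clad = 1.48, 1.46   # illustrative step-index fiber
print(f"alpha_c = {total_reflection_angle_deg(n_core, n_clad):.1f} deg")
print(f"theta_c = {acceptance_angle_deg(n_core, n_clad):.1f} deg")
```

Rays injected steeper than θ*<sup>c</sup>*, or hitting the core/cladding boundary below α*<sup>c</sup>*, leave the core, which is exactly the effect the intensity-modulating sensors exploit.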

In the first variant, light (e.g. emitted by a laser diode, λ = 1550 nm) is coupled into a multimode fiber. A reflective element is attached to the end of the transmission line. The element can be designed as a deformable object or as a rigid one mounted on a deformable substrate. The mechanical load acts on this object. Due to the load, the reflective element is deformed (in case of a flexible surface) or displaced (in case

**Fig. 10.31** Guidance of multimode fibers. Beams injected with angles above θ*<sup>c</sup>* are not guided in the core

**Fig. 10.32** Schematic view of a fiber optic sensor with intensity modulation

of a rigid surface). When the displacement is varied, the mode of operation is comparable to a displacement sensor; the intensity is directly proportional to the displacement (Fig. 10.33). The load itself is a function of the displacement and of the elastic compliance *n* of the sensor element:

$$F(z) = n \cdot z.\tag{10.30}$$

If the geometry of the area changes, a part of the beam is, according to the laws of geometrical optics, decoupled into the cladding (dispersion) and an intensity loss can be measured at the detector (Fig. 10.33).
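The measurement chain of this variant can be sketched as follows, assuming a linear intensity-displacement characteristic and the relation F(z) = n · z from Eq. (10.30); both calibration constants are hypothetical:

```python
# Intensity-modulated fiber-optic force sensing, a minimal sketch.
# Assumption: the detected intensity loss is linear in the
# displacement z of the reflective element (Fig. 10.33), and the
# force follows F = n * z (Eq. 10.30).
# Both calibration constants below are hypothetical.

SENSITIVITY = 2.0e-5      # displacement per normalized intensity unit, m
N_ELASTIC = 5.0e3         # elastic constant n of the fixation, N/m

def displacement_from_intensity(i_norm: float) -> float:
    """Displacement z reconstructed from the normalized intensity loss."""
    return SENSITIVITY * i_norm

def force_from_intensity(i_norm: float) -> float:
    """Force via F = n * z (Eq. 10.30)."""
    return N_ELASTIC * displacement_from_intensity(i_norm)

print(force_from_intensity(0.5))  # force for a hypothetical reading
```

In a real sensor, both constants result from the calibration of the photodetector and the mechanical design of the reflective element's fixation.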

In academic publications by Peirs [84] and Kern [85], such a mode of operation is suggested for multi-component force measurement. In this case, the measurement range is determined by the mechanical properties of the fixation of the reflective body, which can be designed using the calculation methods known from Sect. 10.5.1.1. A disadvantage of this principle is the use of polymers for the coupling of the reflective object, which leads to creeping of the sensor signal. The measurement inaccuracy of these sensors lies in the range of 10% [85]. Their diameter takes a value of a few millimeters; the length depends on the application. Another source of noise is temperature: a temperature change leads to a dilatation (or shrinkage) of the polymer itself and displaces the reflective element, resulting in a defective measurement signal. Due to the small size, an array assembly is possible.

(b) flexible element: intensity depends on the deformation

**Fig. 10.33** Variation of intensity due to displacement of rigid and flexible elements

The second variant is a so-called "micro-bending sensor". Its fundamental design is schematically given in Fig. 10.34. As stated before, a beam is coupled into a multimode fiber. Force, pressure or strain applied by a comb-like structure results in micro-bending of the fiber (Fig. 10.34b).

In case of deformation, similar to the first variant, a part of the light is decoupled into the cladding, and the intensity of the measured light diminishes.<sup>9</sup> The gaps between the comb-like structures of micro-bending sensors are in the range of one millimeter; the height of the structure is of the same dimension [86]. To apply mechanical loads, an area of ∼1 cm length and ≥5 mm width is used. The measurement range depends on the displacement of the bending structure and the diameter of the fiber itself. Pandey [87] describes the realization of a pressure sensor for loads up to 30 bar. If the bending diameter becomes smaller, lower nominal pressures and forces are possible. Concerning the detection of force components, only one-component sensors can be realized with this principle.

If a spatially distributed mechanical load has to be measured, multiple micro-bending structures can be located along one fiber. To evaluate the several measuring points, optical time domain reflectometry (OTDR), for instance, can be used. This method sends a pulsed signal (light pulses of around 10 µs length) into the fiber and measures the reflection as a function of time. Based on the propagation velocity v of the beam inside the fiber, the time delay of each reflection can be converted into the position of the corresponding measuring point. Additional information can be found in [86, 88] or [87]. The dynamics of these sensors is limited only by the sensor electronics and could theoretically cover the whole range of haptic applications.
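The position of each measuring point follows from the round-trip delay of the OTDR pulse: the pulse travels at v = c0/n inside the fiber, so a reflection from distance L returns after t = 2L/v. A minimal sketch with an assumed core index:

```python
# Locating micro-bending measurement points along a fiber with OTDR:
# a pulse travels at v = c0 / n inside the fiber, and the reflection
# from a point at distance L returns after t = 2 * L / v.
# The refractive index and the delay values are illustrative.

C0 = 2.99792458e8   # speed of light in vacuum, m/s

def position_from_delay(delay_s: float, n_core: float = 1.46) -> float:
    """Distance of a reflection point from the measured round-trip delay."""
    v = C0 / n_core
    return v * delay_s / 2.0

# Two hypothetical reflections, 100 ns and 300 ns after the pulse:
for t in (100e-9, 300e-9):
    print(f"delay {t*1e9:.0f} ns -> position {position_from_delay(t):.2f} m")
```

With a core index of 1.46, a 100 ns delay corresponds to roughly 10 m of fiber, which illustrates why short pulses are needed for closely spaced measuring points.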

<sup>9</sup> Both versions are possible: measuring the transmitted or the reflected radiation.

**Fig. 10.34** Variation of beam guidance in case of microbending

#### **Change of Phase**

The variation of the phase of light by mechanical load is used in interferometric sensors. The most commonly used type is based on the Fabry-Pérot interferometer, discussed in the following paragraph; other variants are Michelson and Mach-Zehnder interferometers. The assembly is made of two plane-parallel, reflective and semi-transparent objects, e.g. at the end of a fiber, building an optical resonator (Fig. 10.35). The beam is reflected several times within the resonator and interferes with each reflection. The resonance condition of this assembly is given by the distance *d* of the reflective elements and the refractive index *n* within the resonator. The so-called "free spectral range" marks the phase difference δ generating a constructive superposition of beams:

$$\delta = \frac{2\pi}{\lambda} \cdot 2 \cdot n \cdot d \cdot \cos(\alpha) \tag{10.31}$$

Figure 10.35b shows the typical characteristics of the transmission spectrum of a Fabry-Pérot interferometer. According to the formula above, the corresponding wavelength yields a transmission peak; all other wavelengths are damped and annihilated. Due to the mechanical load, the distance *d* of the surfaces is varied, changing the conditions for constructive interference. Sensors using this principle are distributed e.g. by the company *LaserComponents GmbH*, Olching, Germany, for uniaxial force or pressure measurement, and can be bought for nominal pressures up to 69 bar [89]. The influence of temperature is problematic here too and has to be compensated by a reference configuration working in parallel.
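The resonance condition of Eq. (10.31) can be used to compute the transmission peaks: constructive interference occurs when 2 · n · d · cos(α) equals an integer multiple m of the wavelength. A sketch with hypothetical cavity parameters:

```python
import math

# Transmission peaks of a Fabry-Perot resonator (cf. Eq. 10.31):
# constructive interference occurs for 2*n*d*cos(alpha) = m*lambda,
# i.e. the peak wavelengths are lambda_m = 2*n*d*cos(alpha) / m.
# The cavity parameters below are illustrative values.

def peak_wavelengths_nm(n, d_um, alpha_deg=0.0, m_range=range(10, 14)):
    """Peak wavelengths in nm for a cavity of optical length n*d."""
    opl_nm = 2.0 * n * d_um * 1e3 * math.cos(math.radians(alpha_deg))
    return [opl_nm / m for m in m_range]

# Hypothetical air-gap cavity (n = 1) of 8 um at normal incidence:
print([f"{w:.1f}" for w in peak_wavelengths_nm(1.0, 8.0)])
```

A load that changes *d* shifts all peaks simultaneously; evaluating this spectral shift yields the mechanical load.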

Beside pressure transducers, single-component forces and strains can be measured (Fig. 10.36). The design resembles a Michelson interferometer. The sensor element is made of two multimode fibers, whereby the strain acts upon only one fiber. Identical to the Fabry-Pérot configuration, the sensor element is made of two plane-parallel reflective surfaces, whose distance varies according to the varying strain. Inside the measuring electronics, a reference design is included. To measure the mechanical load, the phases of the reference and measuring assemblies are compared. This measurement principle enables measuring frequencies in the range of several kilohertz. The geometrical

**Fig. 10.35** Assembly and operating mode of a Fabry-Pérot interferometer

**Fig. 10.36** Temperature compensation in interferometric strain sensing elements

dimension is given by the diameter of the fiber including some protective coating (≤ 1 mm) and a length of 2–20 mm, depending on the application. For pressure sensors, the measuring error with respect to nominal load takes a value of about 0.5%; for strain sensing elements it lies at about 15 · 10<sup>−6</sup>.

#### **Change of Wavelength**

For optical detection of strain, so-called fiber Bragg grating sensors (FBG sensors) are widely used. To realize the sensing element, the refractive index of the core of a single-mode fiber is varied along the position (Fig. 10.37), and a grating arises [90]. The refractive index modulation can be described by

**Fig. 10.37** Operational mode of FBG sensors [90]

$$n(z) = n_0 + \delta n_{\text{effective}}(z) = n_0 + \delta \overline{n}_{\text{effective}} \cdot \left(1 + s \cdot \cos\left(\frac{2\pi}{\Lambda} z + \phi(z)\right)\right) \tag{10.32}$$

whereby *n*<sub>0</sub> is the refractive index within the core, δ*n*<sub>effective</sub> the average of the index modulation and *s* a measure of the intensity of the index modulation. Λ marks the grating period, and the phase shift φ(*z*) results from the measured value. In the idle state, φ(*z*) = 0. Figure 10.37 gives a schematic drawing of the assembly.

If light is coupled into the fiber, only parts of it are reflected according to Bragg's law. The reflection spectrum shows a peak at the so-called Bragg wavelength λ*<sub>b</sub>*. This wavelength depends on the refractive index *n*(*z*) and the grating period Λ:

$$
\lambda_b = 2n\Lambda.\tag{10.33}
$$

In the loading case, both the grating distance and the refractive index vary. The maximum of the spectrum shifts from λ<sub>0</sub> to another wavelength, and from this wavelength shift the mechanical load can be determined. This leads to the following condition:

$$\frac{\Delta\lambda}{\lambda_0} = \underbrace{(1 - C_0)}_{\text{gauge factor}} \cdot (S + \alpha_{VK} \cdot \Delta\vartheta) + \frac{\delta n/n}{\delta\vartheta} \cdot \Delta\vartheta,\tag{10.34}$$

whereby α*<sub>VK</sub>* is the coefficient of thermal expansion of the deformation body and *C*<sub>0</sub> the photoelastic coefficient. Beside the change induced by the mechanical strain *S*, the change of temperature ϑ influences the wavelength shift in the same order of magnitude. To compensate the influence of temperature, another FBG sensor has to be installed as a reference in an unloaded area; the temperature compensation is then achieved by comparison of both signals. Analogous to resistive strain sensors, a gage factor of *k* ≈ 0.78 can be achieved at constant measurement temperature. Extensions up to 10,000 µm/m can be measured.
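Equations (10.33) and (10.34) can be combined into a small numerical sketch. At constant temperature, the shift reduces to Δλ/λ<sub>0</sub> = k · S with k ≈ 0.78; the grating parameters below are illustrative:

```python
# FBG strain sensing, a minimal sketch: Bragg wavelength
# lambda_b = 2 * n * Lambda (Eq. 10.33) and, at constant temperature,
# the relative wavelength shift delta_lambda / lambda_0 = k * S with
# k ~ 0.78 (the gauge factor 1 - C_0 from Eq. 10.34).
# The grating parameters below are illustrative values.

K_GAUGE = 0.78   # gauge factor at constant temperature

def bragg_wavelength_nm(n_eff: float, period_nm: float) -> float:
    """Bragg wavelength from effective index and grating period."""
    return 2.0 * n_eff * period_nm

def wavelength_shift_pm(lambda0_nm: float, strain: float) -> float:
    """Wavelength shift in picometres for a mechanical strain S."""
    return K_GAUGE * strain * lambda0_nm * 1e3

lam0 = bragg_wavelength_nm(1.45, 534.5)   # around 1550 nm
print(f"lambda_b = {lam0:.2f} nm")
print(f"shift at 1000 um/m: {wavelength_shift_pm(lam0, 1e-3):.0f} pm")
```

A strain of 1000 µm/m thus shifts a 1550 nm grating by roughly 1.2 nm, a change that is easily resolved by standard interrogation electronics.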

The width of the sensor lies in the range of single-mode fibers. The sensor's length is defined by the grating, which has to be at least three millimetres long to provide a usable reflection spectrum [91–93]. The resolution takes a value of 0.1 µm/m and is, like the dynamics, defined by the sensor electronics. Similar to strain gages, these sensors can be mechanically applied to deformation elements, whose dimensions and shapes define the measurement range. A challenge in the application of fiber sensors in this context is the differing coefficients of thermal expansion of deformable element, adhesive and fiber. Additionally, the reproducibility of the adhesive process for fibers is not as high as typically required; especially creeping of the glue results in large measurement errors. Comparable to the micro-bending principle, FBG sensors are applicable to several spatially distributed measurement points. To distinguish the several positions, gratings with different periods Λ*<sub>i</sub>* and thus different Bragg wavelengths λ*<sub>b</sub>* are used. The company *Hottinger Baldwin Messtechnik GmbH*, Darmstadt, Germany, distributes several designs containing an application area around the grid for strain measurement.

Besides the monitoring of structures or strain analysis, FBG sensors can be used for realizing force sensors, too. Mueller describes the use of FBG sensors in a tri-axial force sensor for medical applications [94]. Further information on the application of FBGs can be found in [90–93].

#### **10.5.1.6 Piezoelectric Sensors**

Piezoelectric sensors are widely used, especially for the measurement of highly dynamic activities. The measurement principle is based on a measurand-induced charge displacement within the piezoelectric material, the so-called direct piezoelectric effect (Sect. 9.3). The charge displacement leads to an additional polarization of the material, resulting in a change of charge on the material's surface, which can be detected using electrodes (Fig. 10.38). Beside the measurement of force, it is used for pressure and acceleration measurement in particular. For force measurement, primarily the longitudinal effect is used. Detailed information about the piezoelectric effect and possible designs is found in Sect. 9.3. Materials used for sensing elements are introduced in the following paragraphs.

For operation in sensor mode, the general equation of state reads:

$$D_i = \underbrace{\varepsilon_{ij}^T \cdot E_j}_{\rightarrow \, 0} + d_{im} \cdot T_m,\tag{10.35}$$

$$D_3 = d_{31} \cdot T_1.\tag{10.36}$$

A stress contribution in the sensing material leads to a change of the charge density *D<sub>i</sub>*, whereby ε*<sup>T</sup><sub>ij</sub>* marks the relative permittivity and *d<sub>im</sub>* the piezoelectric charge constant. Taking the geometric parameters of the sensor, electrode area *A* = *l*<sub>1</sub> · *l*<sub>2</sub> and thickness *l*<sub>3</sub>

of the dielectric, into account, the resulting charge *q* can be derived. Considering the electric parameters, the sensor output voltage *u* can be calculated [15, 29, 95]:

$$q = D_3 \cdot A_3,\tag{10.37}$$

$$
\Delta u = q \cdot \frac{1}{C_\text{p}}, \text{ with } C_\text{p} = \frac{\varepsilon_{33}^{T} \cdot l_2 \cdot l_1}{l_3}, \tag{10.38}
$$

*C*<sup>p</sup> denotes the capacitance and ε*<sup>T</sup>* <sup>33</sup> the permittivity of the piezoelectric material.
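The relations above can be evaluated numerically. The sketch below estimates the output voltage of a piezoelectric force sensor using the longitudinal effect; the material constants (d<sub>33</sub> ≈ 400 pC/N, ε<sub>r</sub> ≈ 1700, typical textbook values for PZT) and the electrode geometry are illustrative assumptions, not values from the text:

```python
# Estimate the open-circuit output voltage of a piezoelectric force sensor.
# Material constants and geometry are illustrative assumptions: d_33 ~ 400 pC/N
# and eps_r ~ 1700 are typical PZT figures (longitudinal effect).
EPS_0 = 8.854e-12        # vacuum permittivity in F/m
d_33 = 400e-12           # piezoelectric charge constant in C/N (assumed)
eps_r = 1700             # relative permittivity of PZT (assumed)

l1, l2, l3 = 10e-3, 10e-3, 1e-3   # electrode edges and thickness in m (assumed)
A = l1 * l2              # electrode area

F = 1.0                  # applied force in N
T = F / A                # mechanical stress in Pa
D = d_33 * T             # charge density, Eq. (10.35) with E -> 0
q = D * A                # resulting charge, Eq. (10.37); equals d_33 * F
C_p = eps_r * EPS_0 * A / l3      # sensor capacitance, Eq. (10.38)
u = q / C_p              # output voltage

print(f"q = {q*1e12:.0f} pC, C_p = {C_p*1e12:.1f} pF, u = {u:.3f} V")
```

With these assumed values, a 1 N load produces roughly 400 pC of charge on about 1.5 nF, i.e. an output voltage in the range of a few hundred millivolts, which illustrates why charge amplifiers are the usual readout electronics.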

Technically relevant materials can be divided into three groups. The first group consists of mono-crystals such as quartz and gallium orthophosphate.<sup>10</sup> The polarization change under mechanical load is directly proportional to the stress. The transfer characteristic is highly linear and does not show any relevant hysteresis. The piezoelectric coefficients are stable in the long term. One disadvantage is the small coupling factor *k* of about 0.1. As a reminder: *k* is defined as the quotient of the transformed to the absorbed energy.

The second group is formed by polycrystalline piezo-ceramics, such as barium titanate (*BaTiO*3) or lead zirconate titanate (PZT, *Pb*(*Zr*,*Ti*)*O*3), manufactured in a sintering process. The polarization is generated artificially during the manufacturing process (Sect. 9.3). An advantage of this material group is the coupling factor, which is seven times higher than that of quartz. Disadvantages are the nonlinear transfer characteristic with a noticeable hysteresis and a reduced long-term stability: the materials tend to depolarize.

The last group is built from partially crystalline plastic foils made of polyvinylidene fluoride (PVDF). Its coupling factor of 0.1–0.2 is in the range of quartz. Advantages are the small size (foil thickness of a few μm) and the high elasticity of the material.

The first two sensor materials are used in conventional force sensors, as e.g. distributed by the company Kistler. Nominal forces take values from 50 N to 1.2 MN. The sensors typically have a diameter of 16 mm and a height of 8 mm. Load alternations of up to 100 kHz are measurable. Single- as well as multiple-component sensors

<sup>10</sup> This crystal is especially applicable for high temperature requirements.

**Fig. 10.39** Possible assemblies of piezoelectric force sensors

are state of the art. Figure 10.39 shows the general design of a three-component force sensor from Kistler.

Piezoelectric force sensors are typically used for the analysis of dynamic forces occurring during drilling and milling, or for stress analysis in the automotive industry. In haptic systems, these sensor variants can hardly be found, mostly (but not exclusively) because they are not suitable for measuring static loads. Sensors based on PVDF foils as piezoelectric material are increasingly used for the measurement of tactile actions. There, however, the piezoelectric effect is used for the generation of a displacement and not for its measurement, which is why this variant is described in Sect. 10.5.1.7.

#### **10.5.1.7 Less Common Sensing Principles**

The sensor designs shown in this subsection are not force or pressure sensors for conventional purposes. All of them have been designed for different research projects in the context of haptic systems. The focus of these developments lies in the spatially distributed measurement of tactile information.

**Fig. 10.40** Schematic view of resonance sensors, own illustrations following **a** [32], **b** [96]

#### **Resonance Sensors**

For the measurement of vibrotactile information, the so-called resonance principle can be used. Figure 10.40a shows the principal design of such a sensor. A piezoelectric foil (PZT or even PVDF) is used as an actuator. Electrodes on both sides of the foil apply an electrical oscillating signal, resulting in mechanical oscillations of the material due to the inverse piezoelectric effect. The structure oscillates at its resonance frequency *f*<sup>0</sup>, calculated by the following formula

$$f\_0 = \frac{1}{2d} \cdot \sqrt{\frac{n}{\rho}}\tag{10.39}$$

where *d* is the thickness, *n* the elasticity and ρ the density of the material used. The load responsible for the deformation is proportional to the frequency change [96]. For spatially distributed measurement, the sensors are connected as arrays of 3 × 3 and 15 × 15 elements. The dimensions of the sensing arrays take values of 8 × 8 mm2 and 14 × 14 mm2, respectively. The thickness of the foil is 1 mm. A major disadvantage of this principle is the strong temperature dependence of the resonance frequency of the piezoelectric material used. The coefficient lies at 11.5 Hz per 1 ◦C within a temperature range between 20 and 30 ◦C [70, 97].
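Equation (10.39) can be evaluated for a plausible configuration. In the sketch below, the elasticity *n* is taken as Young's modulus; the PZT values (E ≈ 63 GPa, ρ ≈ 7500 kg/m³) are assumed, typical textbook figures rather than numbers from the text:

```python
import math

# Thickness-mode resonance frequency per Eq. (10.39): f0 = 1/(2d) * sqrt(n/rho).
# The elasticity n is taken here as Young's modulus; the PZT values below
# (E ~ 63 GPa, rho ~ 7500 kg/m^3) are assumed, typical textbook figures.
E = 63e9           # elastic modulus in Pa (assumed)
rho = 7500.0       # density in kg/m^3 (assumed)
d = 1e-3           # foil thickness in m (1 mm, as in the array described above)

f0 = 1.0 / (2.0 * d) * math.sqrt(E / rho)
print(f"f0 = {f0/1e6:.2f} MHz")
```

For a 1 mm thick PZT plate this yields a resonance in the low-MHz range; a load-induced stiffness change then shifts *f*<sup>0</sup> proportionally, which is the quantity the sensor electronics track.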

So-called surface acoustic wave resonators, SAW sensors, make use of the change of their resonance frequency, too. The excitation occurs via an emitter called an "interdigital structure" (Fig. 10.40b). The mechanical oscillations, with frequencies in the MHz range, propagate along the surface of the material. They are reflected at parallel metal structures and detected by the receiving structure. When a mechanical load is applied, the material is deformed, the runtime of the mechanical wave changes, and consequently so does the sensor's resonance frequency. With this design, temperature is one of the major disturbing variables. SAW sensors are used for the measurement of forces, torques, pressure and strain. The dynamic range reaches from static to highly dynamic loads.

**Fig. 10.41** Schematic view of an active element [99]. The dimensions are 6 <sup>×</sup> <sup>6</sup> <sup>×</sup> 1mm<sup>3</sup>

#### **Electrodynamic Sensor Systems**

Within the research project *TAMIC*, an active sensor system for the analysis of organic tissue in minimally invasive surgery was developed [216]. The underlying principle is based on an electrodynamically actuated plunger excited to oscillations (Sect. 9.1). The plunger is magnetized in the axial direction. The movements of the plunger induce voltages within an additional coil included in the system. The material to be measured damps the movement, which can be detected and quantified via the induced voltage. The maximum displacement of the plunger is set to one millimeter. The system is able to measure dynamically from 10 to 60 Hz. The nominal force lies in the range of 200 mN. The geometrical dimensions of the system are a diameter of ≤15 mm and a length of ≤400 mm, which is close to typical minimally invasive instruments. Detailed information can be found in [98].

Another example for a miniaturized sensor for the measurement of spatially distributed tactile information is presented by Hasegawa in [99]. Figure 10.41 shows the schematic design of one element.

The elements are arranged in an array structure. In quasi-static operation mode, the system is able to measure the contact force and the measurement object's elasticity. The upper surface consists of a silicon diaphragm with a small cube for force application at the center of the plane. The displacement of the plate is measured identically to a silicon pressure or force sensor, with piezoresistive areas on the substrate. From the displacement, the applied contact force can be derived. For measuring the elastic compliance of the object, a current is applied to the flat coil (Fig. 10.41). At the center of the diaphragm's lower side, a permanent magnet is mounted. The electrically generated magnetic field is oriented opposite to that of the permanent magnet. The plate is displaced by this electromagnetic actuator, and the cube is pressed back into the object. The force necessary to deform the object is used in combination with the piezoresistive sensor's signal to calculate the object's elastic compliance. In the dynamic operation mode, the coil is supplied with an oscillating signal, operating the diaphragm in resonance. Due to the interaction with the measured object, the resonance condition changes. From the changing parameters, such as phase rotation, resonance frequency, and amplitude, elastic coefficients as well as damping coefficients of the material can be identified. Due to the high degree of miniaturization, highly dynamic actions up to several kilohertz can be measured. The nominal force lies in the range of 2 N; the resolution of the system is unknown.

#### **Electro-Luminescence Sensors**

A high-resolution touch sensor is presented by Saraf [100]. It is intended to be used for the analysis of texture on organ surfaces. On a transparent glass substrate, a layer compound of 10 µm height made of gold and cadmium sulfide particles<sup>11</sup> is applied. The single layers are separated by dielectric barriers. The mechanical load is applied to the upper gold layer, resulting in a break-through of the dielectric layer and a current flow. Additionally, energy is released in the form of small flashes. This optical signal is detected using a CCD camera. The signal is directly proportional to the strain distribution generated by the load. The resulting current density is measured and interpreted.

The spatial resolution of the design is given as 50 µm. Nominal pressures of around 0.8 bar can be detected. The sensor area has a size of 2.5 × 2.5 mm2; the thickness of the sensor is ≤1 mm and thus very small. Additional information can be found in [100].

## *10.5.2 Selection of a Suitable Sensor*

In the earlier sections, sources for the identification of requirements were presented, followed by a presentation and discussion of the most relevant sensor principles for measuring forces. This section is intended to help engineers select or even develop an appropriate force sensor. Depending on the requirements identified using Sect. 10.4, a suitable sensor principle can be chosen.

To get a better overview, the basic requirements described in Sect. 10.4.5.1 are collected in Table 10.8. The requirements are distinguished, with regard to human perception, into kinesthetic and tactile information. More detailed information concerning force and spatial resolution can be found in Sect. 10.4. The properties of active and passive transformers (force measurement via a mechanical variable such as strain or stress, detected via elasto-mechanics) strongly depend on the design of the deformation element. Especially the nominal force, the number of components to be measured and the dynamics are directly influenced by the deformation element's design.

A direct comparison of all sensor principles is hardly possible. Consequently, the methods are compared separately from each other. Transfer characteristics and geometrical dimensions are chosen as evaluation criteria. Figure 10.42 classifies the principles according to gage factor and geometry. The smaller the strain sensing element, the higher the achievable level

<sup>11</sup> A semi-conducting material.


**Table 10.8** Compilation of main requirements on haptic sensors. Depending on system topology and measurement task further requirements have to be considered

of miniaturization. A direct result of a smaller size is the reduced mass, providing an increased upper cut-off frequency. If the gage factor of the sensing element is higher, a lower absolute value of strain is necessary to obtain a high output signal. Additionally, the overall design can be made stiffer. This enables the detection of smaller nominal forces and thus higher cut-off frequencies. Concerning the lower cut-off frequency, strain sensing elements are suitable for measuring static loads. Using piezoresistive and capacitive silicon sensors, an upper cut-off frequency of 10 kHz or more can be achieved with high resolution.

The other sensor principles can be compared with respect to nominal load and dimensions. Figure 10.43 classifies the presented principles according to their nominal load and the corresponding construction space.

**Fig. 10.42** Comparison of different strain measurement technologies with respect to dimensions and gage factor

**Fig. 10.43** Comparison of different measurement technologies with respect to dimensions and nominal load

Except for the piezoelectric sensors, all sensor principles can be used for measuring static and dynamic loads. The upper cut-off frequency mainly depends on the mass of the sensor that has to be moved. Consequently, the more miniaturized the sensor, the higher the upper cut-off frequency becomes. Figure 10.44 compares the presented sensor principles according to the detectable nominal load and the corresponding dynamic range.

By means of the shown diagrams, a pre-selection of suitable sensor principles for the intended application can be made. Additional sensor properties such as resolution, energy consumption, costs or the impact of noise strongly depend on the individual realization and are not taken into account here. Advanced descriptions of sensor properties can be found in the literature highlighted in the corresponding subsections for the individual principles.

To give an example of how to select a suitable force sensor, the task *laparoscopic palpation of tissue* is chosen. Figure 10.45 shows the tree diagram which can be used for analyzing the task and deriving requirements. Laparoscopic palpation is a telemanipulation task for characterizing texture. It is done via closed-loop control.

**Fig. 10.44** Comparison of different measurement technologies with respect to nominal load and frequency range

To avoid undesired influences of the laparoscopic instrument itself on the sensing signal (e.g. friction between instrument and abdominal wall), the sensor should be integrated into the tip. The laparoscope is used to scan the tissue's surface. By detecting three directions of contact force, texture and even compliance of the tissue can be analyzed.

Taking the contact information into account (Table 10.8), cut-off frequency, resolution and nominal force can be derived. The dimensions of the laparoscope limit the construction space. Since static information has to be measured as well, an active sensing principle like (piezo-)resistive, capacitive, inductive or optical should be considered. Due to the limited space, piezoresistive sensing is recommendable.

If no force sensor with the determined requirements is available, a deformation element has to be designed separately, taking load conditions and elasto-mechanics into account. For example, [31, 32] are helpful references for designing deformation elements. The strain sensing element can be chosen depending on the desired resolution and construction space. Table 10.9 gives an overview of common strain sensing technologies.

**Fig. 10.45** Tree diagram for selecting a force sensor. Exemplarily, the task *laparoscopic palpation of tissue* is chosen


**Table 10.9** Comparison of common sensing principles for strain measurement [32]

<sup>a</sup> Maximum strain depends on the elasticity of the deformation element

<sup>b, c</sup> According to [115], the elongation at break takes a value of ±0.2%

<sup>d</sup> dε/ε, sandwich topology

<sup>e</sup> Patch transducer, PI

## **10.6 Tactile Sensing and Touch Sensors**

With the increasing number of systems using touch-sensitive surfaces for → HCI, touch sensors have become more prevalent. They detect whether a human user touches a sensitive two- or three-dimensional surface of an object or system. They are a special kind of force sensor (so-called tactile sensors), and requirements as well as constraints were already mentioned in Sect. 10.5. One can differentiate between sensors that detect the contact position and ones that detect different types of touch or contact pose.

When analyzing these kinds of systems, one can identify several functional principles. Because of their robustness, low cost and high sensitivity, resistive and capacitive principles are among the most used in <sup>→</sup> HCI. Dahiya and Valle provide a thorough analysis of different measurement principles for use in robotic applications in [116]. In the following, the function of resistive and capacitive systems is described in more detail.

## *10.6.1 Resistive Touch Sensors*

Resistive touch sensors for detecting contact positions are based on two flexible, conductive layers that are normally separated from each other. If a user touches one of the layers, a connection is made between both layers, and the position of the contact point can be calculated from the different resistances as shown in Fig. 10.46, based on Eqs. (10.40) and (10.41).


$$
U\_{x,\text{out}} = \left. \frac{R\_2}{R\_1 + R\_2} U\_{x1} \right|\_{U\_{x2} = 0 \,\text{V}, \quad U\_{y3}, U\_{y4} \text{ in Hi-Z state}} \tag{10.40}
$$

$$
U\_{y,\text{out}} = \left. \frac{R\_4}{R\_3 + R\_4} U\_{y3} \right|\_{U\_{y4} = 0 \,\text{V}, \quad U\_{x1}, U\_{x2} \text{ in Hi-Z state}} \tag{10.41}
$$

Resistive touch sensors exhibit a high resolution of up to 4096 dpi in both dimensions and a high response speed (<10 ms). With additional wiring, the pressure on the screen can also be recorded. With the setup shown in Fig. 10.46, this principle does not support multi-touch detection, i.e. the simultaneous contact at more than one position. A simple means to also measure multi-touch interactions is the segmentation of one of the conductive layers into several (*n*) conductive strips, called *hybrid analog resistive touch sensing*. This increases the number of calculations to obtain a position reading from 2 to 2*n*, but this is still less than the calculation of a whole matrix with at least *n*<sup>2</sup> calculations.
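The voltage-divider relations of Eqs. (10.40) and (10.41) map directly to position. The sketch below illustrates this; the drive voltage and panel dimensions are hypothetical values chosen only for the example, and a homogeneous sheet resistance is assumed so that the resistance ratio varies linearly with position:

```python
# Minimal sketch of 4-wire resistive touch readout per Eqs. (10.40)/(10.41):
# the touch point splits each layer into two resistances, so the measured
# voltage is the drive voltage scaled by the relative touch position.
def touch_position(u_x_out, u_y_out, u_drive=3.3, width=80.0, height=60.0):
    """Map divider voltages to panel coordinates in mm.

    Assumes a homogeneous sheet resistance, so R2/(R1+R2) varies linearly
    with position; u_drive, width and height are illustrative values.
    """
    x = u_x_out / u_drive * width    # R2/(R1+R2) = u_x_out / U_x1
    y = u_y_out / u_drive * height
    return x, y

# A touch at mid-panel ideally returns half the drive voltage on both axes:
x, y = touch_position(1.65, 1.65)
print(x, y)
```

In a real controller the two measurements are taken sequentially, switching the unused electrode pair into the Hi-Z state exactly as the conditions in Eqs. (10.40) and (10.41) require.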

For the usage of resistive touch sensors, there are a couple of commercially available integrated circuits (for example the MAX 11800 with a footprint as small as 1.6 · 2.1 mm2) that ease the integration of such a sensor into a new system.

## *10.6.2 Capacitive Touch Sensors*

For capacitive touch sensors detecting positions, two general approaches are known. *Self-capacitance* or *surface-capacitance* sensors are built up from a single electrode. The system measures the capacitance to the environment, which is altered when a user touches the surface. Based on the measurement of the current that is used to charge the changed capacitance, a position measure can be deduced, similar to the calculation in the case of resistive sensors. This sensor type is prone to errors from parasitic capacitive coupling; the calculation of multiple touch positions is possible but requires some effort.

When more than one capacitor is integrated into a surface, one can use the mutual-capacitance type of sensor. In that case, the capacitors are arranged in a matrix, and the capacitance of each capacitor is changed by approaching conductive materials like fingers or special styluses. This matrix is read out consecutively by the sensor controller. Because of the matrix arrangement, the detection of multi-point touch is possible. As with resistive sensor systems, there are several commercially available integrated circuits for the readout of such capacitive matrices.
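The consecutive readout described above can be sketched as a simple row/column scan. The `measure` function below is a hypothetical stand-in for the controller's analog front end, and the idle capacitance and threshold are assumed values; the point is only to show why the matrix arrangement naturally supports multi-touch:

```python
# Sketch of the consecutive readout of a mutual-capacitance matrix: each
# row is driven in turn and every column capacitance is sampled; cells
# whose capacitance dropped below a threshold count as touched. The
# measurement function is a stand-in for the real controller front end.
def scan_matrix(measure, rows, cols, c_idle=1.0e-12, threshold=0.9):
    """Return a list of (row, col) touch positions.

    measure(r, c) returns the mutual capacitance of one cell in farads;
    a nearby finger couples charge away, reducing it. All values assumed.
    """
    touches = []
    for r in range(rows):              # drive one row electrode at a time
        for c in range(cols):          # sense all column electrodes
            if measure(r, c) < threshold * c_idle:
                touches.append((r, c))
    return touches

# Simulated panel: a finger at (1, 2) reduces that cell's capacitance by 30 %.
fake = lambda r, c: 0.7e-12 if (r, c) == (1, 2) else 1.0e-12
print(scan_matrix(fake, rows=4, cols=4))
```

Because every cell is sampled individually, several simultaneous touches simply appear as several entries in the result list, without the ambiguity of the single-electrode self-capacitance approach.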

For the identification of contact poses, like for example the touch with a single finger or a whole hand, the self-capacitance approach can be used as well. In that case, the changed capacitance is considered as an indicator for the touch pose with regard to an arbitrarily shaped electrode. Since the realization of such a function is quite simple in terms of the required electronics, this procedure is incorporated into standard components under several brand names, for example Atmel QTouch.

Sometimes, capacitive sensor systems are combined with inductive sensor systems that track the position of a coil with respect to the reference surface. This is used, for example, to enable styluses on touchscreens and graphic tablets. Because of the different sensor principles, one can give the tool equipped with the coil a higher priority and avoid misreadings caused by the capacitive effect of the user's hand.

## *10.6.3 Other Principles*

A lot of other principles are known as well; they are often based on the change of a position and the detection of this change with different sensing principles. Examples include optical and magnetic measuring principles as described in Sect. 10.7. They are often investigated in the context of robotic tactile sensing, where not only touch but also pose, handling and collision are of interest [116]. In these cases, the use of flexible materials and microtechnology is of interest, which makes this kind of sensor a welcome example for microsystem engineers.

A recent example of such a sensing system is a sensorized multicurved robot finger, which was published by researchers at Columbia Engineering in 2020 [117]. It is an optics-based sensing system of phototransistors and diodes encapsulated by a transparent elastomer: depending on the contact situation and the amount of force, the elastomer deforms, the intensity of light is modulated, and thus the force is evaluated.

A comparable topology is provided by the BioTac Toccare sensing system by *SynTouch Inc*. It mimics the interactions of a human hand exploring a material and identifies different materials. The system contains biomimetic sensors evaluating 15 dimensions of touch, providing information about texture, adhesive and thermal properties as well as compliance and friction.

Another commercially available tactile sensing system is uSkin, developed by *XELA Robotics* for measuring grip forces in three directions of space. Arrays of 3-axis tactile sensors are available in different sizes. Besides the 3-axis force measurement, the displacement in three axes is measured and evaluated as well.

For use in haptic systems, the current state of development of such systems, as for example shown in [116–119], has to be critically checked. As a general classification, Table 10.10 lists advantages and disadvantages of possible sensing principles.

An advanced example of a touch sensor is based on an impedance spectroscopy measurement: the Touché system by Sato et al. allows differentiating grips and body poses from an impedance measurement [120]. Examples include the identification of finger positions on a door knob, the discrimination of arm poses when sitting at a table, or even the detection of someone touching a water surface. Applications include gesture interfaces for worn and integrated computers as well as the possibility of touch passwords, which consist of a predefined sequence of touch poses.


**Table 10.10** Advantages and disadvantages of different touch sensing principles according to [116]

## **10.7 Positioning and Displacement Sensors**

To acquire the user's reaction in haptic systems, a measurement of positions or their time derivatives (velocities, accelerations) is necessary. Several measurement principles are available to achieve this. A mechanical influence of the sensor on the system has to be avoided for haptic applications, especially kinaesthetic ones. Consequently, this discussion focuses on principles which do not affect the mechanical properties significantly. Besides the common optical measurement principles, the use of inductive or capacitive sensors is promising, especially in combination with the actuator design. This section gives an overview of the most frequently used principles, amended by hints on their advantages and disadvantages when applied to haptic systems.

## *10.7.1 Basic Principles of Position Measurement*

For position measurement, two principal approaches can be distinguished: differential (incremental) and absolute measuring systems.

#### **10.7.1.1 Incremental Principle**

Differential systems acquire changes in discrete steps together with the direction of change, and record (typically: count) these events. This count has to be reset to a reference position by an external signal. If no step loss happens during movement, a previously initialized differential system is able to provide the absolute position as output. If this initializing reference position is set at a point which is passed often, a differential system will be referenced frequently during normal operation. Potential step losses would then affect only the time until the next initializing event.

Measurement of the steps is done via a discrete periodic event, typically encoded in a code disc with grooves or a magnetic rotor. This event is transformed by the sensor into a digital signal, whose frequency is proportional to the velocity of the movement (Fig. 10.47a). Some additional directional information is required to be able to measure the absolute position. A typical solution for this purpose is the use of two identical event types with a phase shift (between 1 and 179◦, typically 90◦). By looking at the status (*high/low*) of these incremental signals (Fig. 10.47b) at e.g. the rising edges of the first incremental signal (A), a *low* encodes one movement direction, and a *high* encodes the opposite movement direction. Accordingly, the count process either adds or subtracts the pulses generated (in this case) by the second signal (B). State-of-the-art microcontrollers are already equipped with counters for incremental measurement. They provide input pins for the count signal and the count direction. Discrete counters are sold as "quadrature encoder" ICs and frequently include actuator drive electronics, which can be applied for positioning tasks. The latter prevents them from being useful for typical haptic applications.
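The edge-based direction detection described above can be sketched as a small decoder. The sign convention below (B low at A's rising edge counts as forward) is arbitrary and depends on the wiring; the sample sequence is purely illustrative:

```python
# Sketch of quadrature decoding as described above: at every rising edge
# of channel A, the level of channel B selects the count direction.
def decode_edges(samples):
    """Count increments from a list of (A, B) level pairs (0/1).

    Rising edge on A with B low -> +1, with B high -> -1 (the sign
    convention is arbitrary and depends on the wiring).
    """
    count = 0
    prev_a = samples[0][0]
    for a, b in samples[1:]:
        if prev_a == 0 and a == 1:       # rising edge on channel A
            count += 1 if b == 0 else -1
        prev_a = a
    return count

# Two steps forward (B low at A's rising edges), then one step backward:
seq = [(0, 0), (1, 0), (0, 1), (1, 0), (0, 0), (1, 1)]
print(decode_edges(seq))
```

Hardware counters in microcontrollers implement exactly this logic (often evaluating all four edges for higher resolution), so the processor only reads an up/down count instead of processing every pulse.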

**Fig. 10.47** Principle of direction detection with two digital signals with a 90◦ phase-lag

#### **10.7.1.2 Absolute Measurement Principle**

Absolute measurement systems acquire a position- or angle-proportional value directly. They are usually analog. A reference position is not necessary for these systems. They have advantages with respect to their measurement frequency, as they are not required to measure with dynamics defined by the maximum movement velocity. The acquisition dynamics of incremental principles are given by the necessity not to miss any events. In the case of absolute measurement principles, the measurement frequency can be adjusted to the process dynamics afterward, which is usually less demanding. However, with the analog measurement technology, the effort is quite high for the circuit, the compensation of disturbances, and the almost obligatory digitization of the analog signal.

An alternative to purely absolute measurement with analog technology is given by a discrete absolute measurement of defined states. In Sect. 9.2.2.1, Fig. 9.18, a commutation of EC-drives with a discrete position coding of magnet angles with field plates was already shown. This approach is based on the assumption of achieving a discrete resolution of Δ*D* from *m* measurement points with *n* states by

$$m^m = \Delta D.\tag{10.42}$$

In the case of the commutated EC-drive, *m* = 3 measurement points, each able to take *n* = 2 states, could encode 8 positions on the circumference, but only six were actually used. There are also more complex code discs with several lanes, with one sensor each. These sensors are usually able to code two states. However, e.g. by the use of different colors on the disc, many more states would be imaginable. A resolution of e.g. 1 degree (360 discrete steps) would need a number of lanes of

$$m = \frac{\log(\Delta D)}{\log(n)} = 8.49\tag{10.43}$$

i.e. at least nine lanes for encoding.
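Equation (10.43) with a ceiling operation gives the required lane count directly; the four-state example below (e.g. four colors per lane) is an illustrative extension, not a case from the text:

```python
import math

# Number of code-disc lanes m needed for Delta-D discrete positions with
# n states per lane, per Eq. (10.43): m = log(Delta-D) / log(n), rounded up.
def lanes_needed(positions, states=2):
    return math.ceil(math.log(positions) / math.log(states))

print(lanes_needed(360))       # 1-degree resolution, binary lanes -> 9 lanes
print(lanes_needed(360, 4))    # hypothetical four states per lane -> 5 lanes
```

Raising the number of states per lane thus trades sensor complexity against disc area, which is the motivation behind the multi-color idea mentioned above.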

## *10.7.2 Requirements in the Context of Haptics*

Position measurement systems are primarily characterized by their achievable resolution and dynamics. For haptic devices, following the measurement conventions for computer mice and scanners, position resolutions are frequently given in dots per inch, *R*inch. Consequently, the resolution *R*mm in metric millimeters is given as:

$$
\Delta R\_{\rm mm} = \frac{25.4 \,\mathrm{mm\,dpi}}{\Delta R\_{\rm inch}}.\tag{10.44}
$$

A system with 300 dpi resolution achieves an actual resolution of 84 μm. Depending on the measurement principle used, different measures have to be taken to achieve this measurement quality. With incremental measurement systems, the sensors for the acquisition of single steps (e.g. holes in a mask) frequently have a lower resolution, requiring a transformation of the user's movement into larger displacements at the sensor. This is typically achieved by larger diameters of code discs and measurement at their edge. These discs are mounted on an axis, e.g. of an actuator. With analog absolute systems, an option for improving the signal is conditioning. It is aimed at reducing the noise component in the signal relative to the wanted signal. This is usually done by suppression of the noise source (e.g. ambient light), modulation and filtering of the signal (e.g. lock-in amplifier, compare Sect. 10.7.6.1) or improvement of the secondary electronics of the sensors (high-resolution A/D converters, constant reference sources).

Besides the position measurement itself, its dynamics have to be considered during the design process. This requirement is relevant for incremental measurement systems only. Absolute measurement systems need a bandwidth equal to the bandwidth provided by the interface and the transmission chain (Chap. 11) for positioning information. Incremental measurement systems, however, have to be capable of detecting any movement event, independent of the actual movement velocity. The counters, usually part of the microcontroller, have to be dimensioned to cover the maximum incremental frequency. This requires some assumptions about the maximum movement velocity vmax. If e.g. a system with 300 dpi position resolution moves at a maximum velocity of 100 mm/s, the dynamics *f*ink for detecting the increments are given as

$$\frac{1}{f\_{\rm ink}} = \frac{\Delta R\_{\rm mm}}{\upsilon\_{\rm max}}\tag{10.45}$$

For this example, the necessary measurement frequency is *f*ink = 1190 Hz. The effective counting frequency is usually chosen a factor of two to four higher than that, to have a safety margin for counting errors and direction detection.
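The worked example from Eqs. (10.44) and (10.45) can be reproduced numerically; the safety factor of four is one of the values suggested above:

```python
# Required increment-detection frequency per Eqs. (10.44)/(10.45) for the
# 300 dpi / 100 mm/s example from the text.
R_inch = 300.0                 # resolution in dpi
v_max = 100.0                  # maximum velocity in mm/s

R_mm = 25.4 / R_inch           # step size in mm, Eq. (10.44), ~0.085 mm
f_ink = v_max / R_mm           # Eq. (10.45), ~1181 Hz; the text states
                               # ~1190 Hz using the rounded 84 um step
f_count = 4 * f_ink            # safety factor for counting errors

print(f"R_mm = {R_mm*1000:.1f} um, f_ink = {f_ink:.0f} Hz, f_count = {f_count:.0f} Hz")
```

The slight difference to the 1190 Hz stated in the text comes only from rounding the step size to 84 µm before dividing; the design conclusion (a counting frequency of a few kilohertz) is unchanged.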

## *10.7.3 Optical Sensors*

Optical sensors for position measuring are widely and frequently used. They excel through their mechanical robustness and good signal-to-noise ratios. They are cheap and, in the case of direct position measurement, quite simple to read out.<sup>12</sup>

#### **Code Discs**

Code discs represent the most frequently used type of position measurement system in haptic devices, especially within the class of low-cost devices. They are based

<sup>12</sup> The examples presented here are discussed for either translatory or rotatory applications. But all principles may be applied to both, as a translation is just a rotation on a circle with infinite diameter.

on the transmission (Fig. 10.48a) or reflection of optical radiation, which is interrupted in discrete events. The necessary baffle is located near the receiver. It is manufactured by stamping, or printed on a transparent substrate (glass, plastic) via thick-film technology or laser printers. For high requirements on resolution, the baffles are made of metal, either self-supporting or again on a substrate. In these cases, the openings are generated by a photolithographic etching process. The receivers can be realized in different designs. Figure 10.48 shows a discrete design with two senders in the form of diodes and two receivers (photodiode, phototransistor). The placement of the sender/receiver units has to provide the phase shift for direction detection (Sect. 10.7.1.1). An alternative is given by fork light barriers, which already include a compact sender/receiver unit. Additionally, opto-encoders (e.g. HLC2705) exist which include the signal conditioning for direction detection from the two incremental signals. The output pins of these elements provide a frequency and one signal for the direction information.

## **Gray Scale Values**

With similar components, but for absolute measurement, a gray scale disc or gray scale sensor can be built. Once again, there are transmission and reflection (Fig. 10.48b) variants of this sensor. In either case, the reflection/transmission of the radiation varies depending on the angle or position of a code disc. The amplitude of the reflection gives absolute position information of the disc. For measurement, once again, either a discrete design or the usage of integrated circuits in the form of so-called reflection sensors is possible. Although such sensors are frequently used as pure distance switches only, they show very interesting proportional characteristics between the received number of photons and their output signal. They are composed of a light-emitting diode as sender and a phototransistor as receiver. Within certain limits, the output is typically a linearly proportional photoelectric current.

## **Reflection Light Switches**

Reflection light switches show useful characteristics for direct position measurement too. In the range of several millimeters they have a piecewise linear dependency between the photocurrent and the distance from the sensor to the reflecting surface. Consequently, they are useful as sensors for the absolute position measurement of translatory movements (Fig. 10.49a). By this method, e.g. with the SFH900 or its SMD successor SFH3201, measurement inaccuracies of some micrometers can be achieved within a near field of up to ≈1 mm. In a more distant field of up to 5 mm the sensor is still suitable for measurement inaccuracies of 1/10 mm.

#### **Mice-Sensor**

The invention of the ball-less optical mouse resulted in a new sensor type that is interesting for other applications too. Optical mouse sensors are based on an IC measuring an illuminated surface through an optical element (Fig. 10.49b). The resolution of the CMOS sensors typically used ranges from 16 × 16 to 32 × 32 pixels. From the acquired image the chip identifies the movement of contrast differences in direction and velocity. The interface for the calculated values varies from sensor to sensor. Very early types provided an incremental signal for movements in X- and Y-direction, identical to the code-disc approaches described above; they additionally included a serial protocol to read the complete pixel information. More recent sensors (e.g. the ADNB-3532 family) provide serial protocols for direct communication with a microcontroller only. This allowed a further miniaturization of the IC and a minimization of the number of contact pins necessary. The resolution of state-of-the-art sensors lies between 500 and 1000 dpi, which is usually sufficient for haptic applications. Only the rate of the position output varies a lot between the sensor types available on the market and has to be considered carefully for the individual device design; the frequency is usually below 50 Hz. Additionally, early sensor designs had some problems with drift and made counting errors, which could be compensated only by frequent referencing.

These sensors are usually sold for computer-mouse-like applications with corresponding optics. Besides that, however, it is also possible to measure moving surfaces at a distance of several centimeters with an adapted optical design.
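The displacement estimation such a sensor performs can be illustrated by cross-correlating two consecutive low-resolution frames and taking the peak offset as the motion vector. This is a simplified sketch of the assumed principle, not any vendor's actual algorithm; frame sizes and the search range are example values:

```python
import numpy as np

def estimate_shift(frame0, frame1, max_shift=3):
    """Return the (dy, dx) shift that best aligns frame1 with frame0."""
    # Zero-mean frames make the correlation peak robust against offsets.
    f0 = frame0 - frame0.mean()
    f1 = frame1 - frame1.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(f1, dy, axis=0), dx, axis=1)
            score = float(np.sum(f0 * shifted))   # correlation score
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    surface = rng.random((32, 32))            # random surface texture
    frame0 = surface[8:24, 8:24]              # 16 x 16 window, like early sensors
    frame1 = surface[10:26, 9:25]             # window moved by (2, 1) pixels
    print(estimate_shift(frame0, frame1))     # (2, 1)
```

An actual sensor repeats this at kilohertz frame rates in hardware; accumulated shifts then form the incremental X/Y output described above.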

#### **Triangulation Sensors**

Optical triangulation is an additional principle for contactless distance measurement; however, it is seldom used for haptic devices. A radiation source, usually a laser, illuminates the surface to be measured, and the reflected radiation is directed onto different positions along a sensor array (Fig. 10.50). The sensor array may be made of discrete photodiodes. Frequently it is a CCD or CMOS row with a correspondingly high resolution. By identifying the focal point through a weighting of several detectors, a further reduction of the measurement inaccuracy can be achieved. Compared to other optical sensors, triangulation sensors are expensive, as a detection row with sufficient resolution is a major cost factor. Their limiting frequency (≈1 kHz) and their measurement inaccuracy (<10 µm) leave nothing to be desired. However, it is one of the very few principles that can hardly be used for measuring rotating systems.

**Fig. 10.50** Triangulation of a distance with laser-diode and detector array
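The underlying geometry can be sketched with a simple similar-triangles model: laser and receiving lens separated by a baseline b, with the reflected spot imaged at offset p on the detector row behind a lens of focal length f. The parameter values below are assumptions for illustration:

```python
# Triangulation sketch (assumed pinhole geometry): by similar triangles
# the object distance is z = f * b / p, where p is the spot offset on the
# detector row, f the lens focal length and b the laser-lens baseline.

def distance(p_m, f_m=0.02, b_m=0.05):
    """Object distance from the spot offset p (all lengths in metres)."""
    return f_m * b_m / p_m

def spot_offset(z_m, f_m=0.02, b_m=0.05):
    """Inverse model: where the spot lands for an object at distance z."""
    return f_m * b_m / z_m

if __name__ == "__main__":
    z = 0.30                               # object at 30 cm
    p = spot_offset(z)                     # spot offset on the array
    print(round(distance(p), 6))           # 0.3
    # One 10 um pixel near p corresponds to this distance step, which is
    # why the detector row resolution dominates the sensor's accuracy:
    dz = distance(p) - distance(p + 10e-6)
    print(round(dz * 1000, 3), "mm")
```

The hyperbolic z = f·b/p relation also shows why the measurement inaccuracy grows with distance: the same pixel pitch corresponds to an ever larger distance step.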

## *10.7.4 Magnetic Sensors*

Besides the optical measurement principles, especially the group of magnetic measurement principles is relevant for haptic devices. This is a consequence of the fact that electrodynamic and electromagnetic actuators already require magnetic fields to generate forces. For systematization, sensors for static fields (field plates and Hall sensors) and sensors for induced currents and time-dependent fields can be distinguished.

## **Field Plates or Magnetic Dependent Resistors**

Field plates or magnetic dependent resistors (MDR) are two-pole elements whose resistance is controlled by the presence of a magnetic field. They make use of the Gauss effect, which is based on charge carriers being displaced by the Lorentz force when crossing a magnetic field. The resulting increase of the path length [121] results in an increase of the ohmic resistance of the material. The parameter characterizing this dependency depends on the electron mobility and the path length within the magnetic field. A frequently used material is InSb, with its very high electron mobility. For an additional increase of the effect, the conductor is formed in the shape of a meander, similar to strain gauges. MDRs are not sensitive to the polarity of the magnetic field; they detect the absolute value only. The increase of resistance is nonlinear, similar to the characteristic curve of a diode or transistor. A magnetic bias is recommended when using field plates, to make sure they operate in their linear working point.

**Fig. 10.51** Measurement of the rotation angle of a magnet via field plates or hall-sensors

#### **Hall-Sensors**

Hall sensors are based on the Gauss effect too. In contrast to field plates they do not measure the resistance increase along the current path within the semiconductor, but the voltage orthogonal to the current. This voltage is a direct result of the displacement of the charge carriers along their path within the material. The resulting signal is linear and bipolar in dependence on the field direction. ICs with integrated amplifier electronics and digital or analog output signals can be bought off the shelf. A frequent use can be found with sensors located at a phase angle α around a diametrally magnetized rotating magnet (Fig. 10.51). In this application, rotation and rotation direction are measured.
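With two Hall sensors placed 90◦ apart around the rotating magnet, the absolute rotor angle can be recovered from the two bipolar outputs. A minimal sketch, assuming ideal sinusoidal sensor signals:

```python
import math

# Angle recovery from two Hall sensors around a diametrally magnetized
# magnet (cf. Fig. 10.51): their bipolar, field-direction dependent outputs
# are proportional to sin and cos of the rotation angle.

def hall_outputs(theta, amplitude=1.0):
    """Idealized sensor voltages for rotor angle theta (radians)."""
    return amplitude * math.sin(theta), amplitude * math.cos(theta)

def rotor_angle(u_sin, u_cos):
    """Recover the absolute angle in [0, 2*pi) from the two voltages."""
    return math.atan2(u_sin, u_cos) % (2 * math.pi)

if __name__ == "__main__":
    theta = math.radians(250.0)
    u1, u2 = hall_outputs(theta)
    print(round(math.degrees(rotor_angle(u1, u2)), 6))   # 250.0
```

Because atan2 uses the ratio of the two signals, a common amplitude drift (e.g. with temperature) cancels out, which is one reason this sin/cos arrangement is popular for rotary encoders.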

#### **Inductance Systems**

An often forgotten alternative for position measurement is the measurement of changing inductances. The inductance of a system depends on many parameters, for example the magnetic permeability of a material inside a coil. Using a differential measurement between two coils (Fig. 10.52b), a displacement measurement can be made if a ferromagnetic material moves between both coils as a position-dependent core. As alternatives, the geometry of the magnetic circuit may be changed, or its saturation may influence the inductance of the coils. The latter approach is used in systems where grooves on a ferromagnetic material trigger events in a nearby coil (Fig. 10.52a).

A simple electronic circuit for measuring an inductance is an LR series circuit which, for example via a microcontroller, is excited with a voltage step. The measurement value is the time the voltage at the resistor needs to reach a comparator threshold. This duration encodes the inductance, assuming a constant resistance. For the actual design it has to be considered that the wound coil has a resistance of its own, which cannot be neglected. As an alternative, a frequency near the corner frequency of the LR circuit can be applied; the measured voltage amplitude then varies with the inductance, which is detuned by the movement of the ferromagnetic core.

**Fig. 10.52** Incremental measurement of a movement via induced currents (**a**) and differential measurement of the position of a ferromagnetic core (**b**)
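The step-response variant described above can be written out as a short worked example. The component values are assumptions for illustration; the coil's own resistance is neglected here, although, as noted, it cannot be in a real design:

```python
import math

# LR step-response measurement sketch: a voltage step U0 is applied to a
# series LR circuit, and the time until the resistor voltage crosses a
# comparator threshold encodes the inductance (R assumed constant).

def threshold_time(L, R=100.0, u0=5.0, v_th=3.0):
    """Time for v_R(t) = U0 * (1 - exp(-t*R/L)) to reach the threshold."""
    return -(L / R) * math.log(1.0 - v_th / u0)

def inductance_from_time(t, R=100.0, u0=5.0, v_th=3.0):
    """Invert the relation: the measured duration encodes the inductance."""
    return -R * t / math.log(1.0 - v_th / u0)

if __name__ == "__main__":
    L_true = 10e-3                       # 10 mH coil (example value)
    t = threshold_time(L_true)           # what a timer + comparator would see
    print(round(t * 1e6, 2), "us")       # duration measured by the timer
    print(round(inductance_from_time(t) * 1000, 6), "mH")   # 10.0 mH
```

Since the duration scales linearly with L, a microcontroller timer capture directly yields a quantity proportional to the core position once the L(x) characteristic is known.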

## *10.7.5 Other Displacement Sensors*

Besides the displacement measurement principles discussed above, there are some rarely used principles still worth mentioning here.

#### **Ultrasonic Sensors**

Ultrasonic sensors (Fig. 10.53) are based on the time-of-flight measurement between the emission of acoustic oscillations and the acquisition of their reflection. The frequency is chosen depending on the requirements on measurement accuracy and on the medium in which the wave propagates. As a rough rule of thumb, the denser a material is, the lower the damping of acoustic waves becomes. For measurements in tissue, frequencies between 1 and 40 MHz are applied; in water, frequencies between 100 and 500 kHz and in air frequencies well below 30 kHz are used.

Whereas in medical applications the tissue shows a damping that is quite linear, in the range of 1 dB/MHz/cm, the measurement within the atmosphere is strongly dependent on the frequency chosen and usually nonlinear. Additionally, the acoustic velocity depends on the acoustic density of the medium. In the direction of propagation, typically used for measurement, velocities of 340 m/s for air and 1500 m/s for water apply. According to wave theory, the minimum measurement uncertainty possible in this direction is λ/2, which is coupled to both factors mentioned above. It is a natural limit on the minimum resolution that can be achieved.

The most frequently used sources and receivers for the mechanical oscillation are piezoelectric elements (Fig. 10.53b), whose step-response oscillations are sharpened by a coupled mass.

**Fig. 10.53** Distance measurement via ultrasonic sensors (**a**) and cross-section through a medical ultrasonic head with fixed focus (**b**)
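The time-of-flight relation and the λ/2 resolution limit from the preceding paragraphs can be condensed into a few lines. Velocities and frequencies follow the values given in the text:

```python
# Ultrasonic ranging sketch: the echo travels to the target and back,
# so d = v * t / 2.  The wave-theoretical resolution limit lambda/2
# couples the achievable resolution to medium and frequency.

def distance(t_echo_s, v_m_s=343.0):
    """Target distance from the round-trip echo time (default: air)."""
    return v_m_s * t_echo_s / 2.0

def resolution_limit(f_hz, v_m_s=343.0):
    """Minimum resolvable distance lambda/2 for frequency f in the medium."""
    return v_m_s / (2.0 * f_hz)

if __name__ == "__main__":
    print(round(distance(2.0e-3), 4), "m")   # 2 ms echo in air -> 0.343 m
    # 40 kHz in air vs. 5 MHz in tissue (v ~ 1500 m/s):
    print(round(resolution_limit(40e3) * 1000, 2), "mm")
    print(round(resolution_limit(5e6, 1500.0) * 1e6, 1), "um")   # 150.0 um
```

The comparison makes the trade-off explicit: low frequencies propagate far in air but limit the resolution to millimeters, while megahertz frequencies in tissue reach micrometer-scale limits at the cost of damping.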

## **Capacitive Sensors**

In Sect. 9.5 the equations for the calculation of the capacitances between the plates of electrostatic actuators (Eq. 9.75) were introduced. Of course, the measurement of a variable capacitance, especially with the linear effect of a transversal plate displacement, can be used for position measurement. This is especially interesting if there are conductive components in the mechanical design that already move relative to each other. As the capacitance strongly depends on the permittivity of the medium between the plates, which can be heavily influenced by oil or humidity, such a measurement can only be made on insusceptible or otherwise well-housed actuators. Additionally, the stray fields of nearby conductors or geometries are usually of the same order of magnitude as the capacitance to be measured. Capacitive sensors for haptic devices can, however, be found in the context of another interesting application: measuring the capacitance of a handle, even when it is insulated by a non-conductive layer, allows a human touch to be identified very reliably.

#### **Proximity Sensors**

Proximity sensors are realized using inductive, capacitive or photoelectric systems. In the case of inductive or capacitive sensors, the measured object and its material properties (depending on whether it is conductive or dielectric) influence the magnetic or electric field of the inductor or capacitor. If a certain distance (switching threshold) is exceeded, the output signal of the sensor switches from low to high. In nearly every production process, proximity sensors are used as contactless switches to control the process itself. Inductive proximity sensors are the low-cost option to monitor the state of grippers (closed or open) in industrial processes [4].

## *10.7.6 Electronics for Absolute Positions Sensors*

The absolute measurement of a position requires, as mentioned earlier, some additional effort in the electronic design compared to discrete sensors. Two aspects shall be discussed in the context of this chapter.

#### **Constant-Current Supply and Voltage References**

For the generation of constant radiation or the measurement of a bridge circuit, the use of constant currents is necessary. There is always the possibility to wire an operational amplifier as a constant current source, or to use transistor circuits. Nevertheless, for designs in low quantities there are ICs that can be used as current sources directly. The LM234, for example, is a voltage-controlled 3-pin IC providing a current with a maximum error of 0.1 %/V change in the supply voltage. The maximum current provided is 10 mA, which is usually sufficient for the supply of optical or resistive sensors.

The change of the signal is usually measured in relation to a voltage in the system. In this case it is necessary to provide a voltage that is very well known and independent of temperature effects or changes in the supply voltage. Common voltage regulators as used for electronics supplies are not precise enough to fulfill these requirements. An alternative is given by Zener diodes operated in reverse direction. Such diodes, however, cannot drive high loads and are of course only available in the steps of the Zener voltages. Alternatively, reference voltage sources are available on the market in many voltage steps. The REF02, for example, is a reference IC providing a temperature-stable voltage of 5 V with an error of 0.3 %. The drivable load of such voltage sources is limited (10 mA in the case of the REF02), but this is usually not a relevant limit, as they are not intended as a supply for a complex circuit but only as a reference.

#### **10.7.6.1 Compensation of Noise**

The obvious solution for the compensation of noise in a measurement signal is the use of a carrier frequency for modulating the signal. A prerequisite, of course, is that the sensor shows no damping at the modulation frequency. This is usually no problem for optical sensors in the range of several kilohertz. At the receiver, the signal is bandpass-filtered and rectified or otherwise averaged. This suppresses disturbance frequencies or otherwise superimposed offsets.

A simple but very effective circuit for noise compensation is the so-called "lock-in" amplifier. On the sender side, a signal is switched between the states *on* and *off* at a frequency *f*. The receiver picks up the wanted signal together with, e.g., an offset and other disturbing frequencies. A subsequent amplifier is switched with the same frequency between +1 and −1, in such a way that the positive amplification occurs while the wanted signal (including the disturbance) is received. During the period without the wanted signal, when the receiver measures the disturbance only, the signal is inverted with −1. The resulting signal is low-pass filtered afterwards, which subtracts the noise signal and provides a voltage proportional to the wanted signal only.
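The lock-in scheme can be verified numerically. In this sketch (example frequencies and amplitudes, ideal averaging instead of an analog low-pass), the chopped wanted signal survives while a DC offset and strong 50 Hz hum average out:

```python
import math

def lock_in(received, reference):
    """Multiply by the +1/-1 reference and average (ideal low-pass)."""
    mixed = [r * s for r, s in zip(received, reference)]
    return sum(mixed) / len(mixed)

if __name__ == "__main__":
    fs, n = 100_000, 10_000          # 100 kHz sampling, 0.1 s of signal
    period = 100                     # 100 samples -> 1 kHz chopping frequency
    t = [i / fs for i in range(n)]
    # +1/-1 reference, synchronous with the sender's on/off switching:
    ref = [1.0 if (i % period) < period // 2 else -1.0 for i in range(n)]
    wanted = 0.5                     # amplitude of the chopped wanted signal
    # Received signal: chopped wanted part + DC offset + strong 50 Hz hum.
    rx = [wanted * (r > 0) + 2.0 + 1.5 * math.sin(2 * math.pi * 50 * ti)
          for r, ti in zip(ref, t)]
    print(round(lock_in(rx, ref), 3))   # 0.25 (= wanted / 2)
```

The recovered value is half the wanted amplitude because the signal is present only during the *on* half-cycles; offset and hum are uncorrelated with the switching frequency and cancel in the average.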

## *10.7.7 Conclusion on Position Measurement*

For haptic devices, position measurement is a subordinate problem. In the range of physiologically perceivable displacement resolutions there are enough sensor principles that are sufficiently precise and dynamic for position measurement. We will see in the following Sect. 10.8 that the calculation or measurement of accelerations or velocities is easily possible too. Without doubt, optical measurement technology is the most frequently used technical solution. Nevertheless, especially for the design of specific actuators, it is advisable to ask whether other sensor principles are applicable for direct integration into the actuator.

If there are specific requirements for measurements with a positioning resolution of a few μm, the principles proposed here should be treated with reserve. Measurements in the μm range require specialized optical or capacitive measurement technology. With the exception of special psychophysical questions, it is unlikely that such requirements are formulated for haptic devices.

## **10.8 Inertial Sensors–Measurement of Velocity and Acceleration**

As stated before, the user's reactions in haptic systems have to be acquired. Besides position measurement, the first and second time derivatives (velocities, accelerations) are of interest. Such a necessity may arise from stability issues of closed-loop systems or from the impedance behavior of users or manipulated objects. Both are vector quantities, and velocity especially is *the* important quantity of kinematics: it represents the time-dependent change of an object's movement along any curvature. Depending on whether the trajectory is linear or not, we distinguish translational and rotatory movement and thus translational velocity and acceleration or angular velocity and angular acceleration. Velocity and acceleration sensors can be subsumed under the collective term *inertial* sensors.

## *10.8.1 Measurement of Velocity*

Several measurement methods are available to acquire the measurand velocity, no matter whether translational or rotatory. The acquisition can be done either by direct measurement or by differentiation of the signal of a position sensor with digital or analog circuits. Conversely, it is also conceivable to measure, e.g., a velocity and calculate the position by integration. The capabilities and limits of integration and differentiation, as well as typical direct measurement principles, are sketched in this section.

#### **10.8.1.1 Integration and Differentiation of Signals**

The integration and differentiation of signals can be done either with analog or digital means. Both variants have different advantages and disadvantages.

#### **Analog Differentiation**

The basic circuit of an active analog differentiator is shown in Fig. 10.54a. It is a high-pass filter, which already hints at the challenges connected with differentiation. The high-pass behavior is limited in its bandwidth: the upper limiting frequency is given by the corner frequency $f_R = \frac{1}{2\pi RC}$ and by the bandwidth of the operational amplifier. As these components are sufficiently dynamic for haptic applications, this should be no problem in a practical realization. Due to the negative feedback, however, the natural bandwidth limit of the operational amplifier at high frequencies adds a phase of 90◦ to the 90◦ phase of the differentiation. This makes the circuit prone to becoming electrically unstable and oscillating [122].

This effect can be compensated by a resistor in series with the capacitance *C*, which corresponds to a linear amplification with the operational amplifier. This diminishes the phase at high frequencies by 45◦, resulting in a phase margin to the unstable border condition. Analog differentiation is an adequate method for deriving velocities from position signals. A double analog differentiation needs a careful design of the corresponding circuit, as a number of capacitive inputs are placed in series. Additionally, it should be considered that the supply voltage limits the amplitude of the operational amplifier; accordingly, the amplitude dynamics have to be adjusted to the maximum signal change expected.

#### **Analog Integration**

The basic circuit of an active analog integrator is given in Fig. 10.54b. Analog integration is a reliable method from analog computing, but has limited use for haptic applications. The circuit has an upper limiting frequency given by the corner frequency $f_R = \frac{1}{2\pi RC}$, and for a non-ideal operational amplifier it has a lower limiting frequency too. This is a result of the bias current *Ib* at the input of the operational amplifier continuously charging the capacitor even with *U*in = 0 V. If C = 10 μF and *Ib* = 1 μA, the voltage increases by 0.1 V per second. Whereas in signal processing applications this can be compensated by high-passes in series, for haptic applications covering a bandwidth from several seconds to 10 kHz this behavior is usually not acceptable.

**Fig. 10.54** Analog differentiation-(**a**) resp. integration-circuits (**b**) [122] c Springer Nature, all rights reserved

#### **Digital Differentiation**

Digital differentiation is realized by subtracting two consecutive measurement values. It is very applicable, especially when the signal is measured at high frequencies. The quality of the signal depends on the noise at the input. Frequently, the least significant bit of, e.g., an AD conversion is rejected before the differentiation is performed, as it oscillates with the noise of the AD conversion (quantization noise). To derive velocity from position measurements, Colgate recommends a high position measurement resolution as well as a low-pass filtering of the generated velocity signal to improve its quality [123].
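The combination recommended above (finite difference, LSB rejection, low-pass filtering) can be sketched in a few lines; the filter constant and sample data are example values:

```python
# Digital velocity estimation sketch: difference consecutive position
# samples, drop the noisy least-significant ADC bit beforehand, and
# low-pass filter the result with a first-order IIR filter.

def velocity(positions_lsb, dt, alpha=0.2):
    """Filtered velocities (in LSB/s) from raw ADC position samples."""
    cleaned = [p >> 1 for p in positions_lsb]    # reject quantization-noise bit
    v, out = 0.0, []
    for prev, cur in zip(cleaned, cleaned[1:]):
        raw = (cur - prev) / dt                  # finite difference
        v = (1.0 - alpha) * v + alpha * raw      # first-order low-pass
        out.append(v)
    return out

if __name__ == "__main__":
    # Constant motion of 2 cleaned LSB per sample at 1 kHz, with the raw
    # least-significant bit toggling as quantization noise.
    dt = 1e-3
    pos = [4 * i + (i % 2) for i in range(50)]
    v = velocity(pos, dt)
    print(round(v[-1]))      # converges to 2 LSB / 1 ms = 2000 LSB/s
```

Without the bit rejection, the toggling LSB would turn into a large alternating velocity error after differencing, which illustrates why high position resolution matters more for velocity estimation than for the position signal itself.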

#### **Digital Integration**

Digital integration is the summation of continuous measurement values and the division of the sum by the number of values. Alternatively, it can be the sum of discrete changes of a measurement value; the incremental measurement of a digital encoder is also a form of integration based on change information. The procedure is robust at high frequencies beyond the actual upper limiting frequency of the signal. Besides a sufficient dimensioning of the register size for the measurement values to prevent an overflow, there is nothing else to worry about.

#### **10.8.1.2 Induction as a Velocity Measure**

The most frequent way to gain information about velocity is the digital signal processing of a position measurement. Nevertheless, to measure velocity directly, the use of a velocity-proportional physical effect is mandatory. Besides Doppler ultrasonic measurement, which is seldom applicable to haptic systems due to the wavelengths involved (compare Sect. 10.7.5), electromagnetic induction is the most frequently used direct effect. An induced voltage *U* is generated in a conductor of length *l* moving orthogonally through a magnetic field *B* with velocity v:

$$U = \upsilon \, B \, l. \tag{10.46}$$

**Fig. 10.55** Modelling of acceleration sensors based on force measurement. The differential equation (10.47) relates the force to the equation of motion

Special geometric designs, as given with electrodynamic actuators (Sect. 9.2), can be used for velocity measurement via the voltages induced in their coils. In contrast to electrodynamic actuators, the design requires a maximization of the conductor length to generate a pronounced voltage signal. The inductance of the winding, in combination with its own resistance, generates a low-pass characteristic that limits the dynamics of the signal. The biggest error made with these kinds of sensors results from a poor homogeneity of the winding due to the dislocation of single turns. This manufacturing error results in different conductor lengths moving in the B-field at different positions of the sensor, which directly affects the quality of the measured signal.

## *10.8.2 Acceleration Measurement*

The vector quantity acceleration is the first derivative of velocity and the second derivative of position. Depending on the inertial system of the measured object and its movement, up to six components of motion occur: three linear accelerations and three angular rotation rates (roll, pitch and yaw). Both single sensors for acceleration or rotation rates and sensor systems measuring all occurring components of motion, so-called inertial measurement units (IMU), are in use. IMUs consist of both accelerometers and gyroscopes.

A 6-DoF IMU, for example, consists of an accelerometer for the linear quantities and a gyroscope for the angular rotation rates. IMUs with nine or twelve DoF are also available; they contain accelerometers and gyroscopes for two different measurement ranges, a tri-axis magnetometer (Hall sensors), and temperature and pressure measurement, for example in systems of *Shimmer Sensing* or the InvenSense technology by *TDK*. Because the acceleration due to gravity always affects the measurement, it is crucial to know the orientation of both the global reference system (the output signal of the magnetometer can be used) and the local coordinate system of the inertial sensor. Acceleration measurement can be traced back to [4]:


The latter are the most common ones. The topology of this kind of sensor can be modelled as a mass-spring-damper system: a concentrated seismic mass *m* is attached to a spring body (Fig. 10.55), where the damping factor *D* and the spring constant *c* describe the elasto-mechanic parameters of the spring body.

$$\mathbf{F} = m \cdot \frac{d^2\mathbf{x}}{dt^2} + D \cdot \frac{d\mathbf{x}}{dt} + c \cdot \mathbf{x} \tag{10.47}$$

Depending on the acceleration, the mass is displaced and the spring body deforms. The deformation can be described using elasto-mechanics, as we learned in the former section about force sensors (Sect. 10.5). In fact, the force-measurement principles given in Sect. 10.5 are merely extended by a known mass *m*, resulting in a mechanical strain of a bending element or generating another acceleration-proportional signal.
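The seismic-mass model of Eq. (10.47) can be simulated directly. Under a constant acceleration a, the mass deflects until the spring balances the inertial force, so the static deflection x = m·a/c is the acceleration-proportional signal. The parameters below are assumed example values, integrated with a simple semi-implicit Euler scheme:

```python
def simulate(a, m=1e-6, D=2e-3, c=1000.0, dt=1e-6, t_end=0.05):
    """Deflection x(t_end) of the seismic mass for constant acceleration a."""
    x, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        # m*x'' + D*x' + c*x = m*a  (the inertial force drives the mass)
        acc = (m * a - D * v - c * x) / m
        v += acc * dt            # semi-implicit Euler: update v first,
        x += v * dt              # then advance x with the new velocity
    return x

if __name__ == "__main__":
    g = 9.81
    x = simulate(10 * g)                    # response to a 10 g step input
    x_static = 1e-6 * 10 * g / 1000.0       # expected deflection m*a/c
    print(round(x / x_static, 3))           # 1.0 (transient has decayed)
```

The same model also shows the design trade-off of accelerometers: a softer spring (smaller c) increases the sensitivity m/c but lowers the resonance frequency and thus the usable bandwidth.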

In contrast to velocity measurement, a wide variety of accelerometers exists, and most of them are based on force measurement. In professional measurement technology, piezoelectric sensors are established especially for highly dynamic measurements, but also piezoresistive sensors for low-frequency accelerations. In mechatronic systems produced in high quantities, MEMS sensors with comb-like structures in silicon, following the capacitive measurement principle, are used. The requirements of the automotive industry for airbags and drive stability programs to measure acceleration in many directions made low-priced and robust ICs available on the open market, e.g. the ADXL series of Analog Devices. The bandwidth of these sensors ranges from 400 Hz to 2.5 kHz, with maximum accelerations >100 g in up to three spatial directions. Only the wide variance of their characteristic values, e.g. the output voltage at 0 g, requires a calibration of the individual sensor.

IMUs with up to twelve DoF are widely used for motion tracking in navigation applications, for safety purposes in the automotive context, or in consumer applications like cellphones, cameras or remote controls for video games. MEMS inertial sensor units are increasingly used where size, weight and cost of the sensors are key factors [124]. Both accelerometers and gyroscopes are based on force measurement, where the latter use the Coriolis effect to detect relative angular rotation rates. For example, *TDK* provides 6-DoF (*ICM*-20648) and 9-DoF (*ICM*-20948) MEMS-IMUs (packaged ICs and breakout boards), which are integrated in wearables too. With a size of about 3 × 3 × 3 mm³ they fit in most consumer applications and show a drift of about 10 ◦/h. The high gyroscope drift is the main disadvantage of MEMS-IMUs; depending on the sensing unit it can reach up to 50 ◦/h in case of the XSens MTi system [124]. If higher resolution and accuracy as well as absolute angular velocity have to be measured, fiber gyroscopes using the Sagnac effect are integrated. A precise measurement with drifts of 10⁻³ up to 1 ◦/h is achievable (Xbow IMU 700 CA or 400 CC) [124]. This kind of sensing system is available as a sensing box (e.g. the XSense MTi with a size of 5.8 × 5.8 × 2.2 cm³) and is quite bulky compared to common IC packages like LGA. Inertial sensors as wearables are quite common in rehab and health research. For example, *Shimmer Sensing* offers sensing systems including IMUs as well as bioimpedance sensors to provide a real-time indication of the state of health (Shimmer3 IMU unit, Shimmer3 Ebio unit).

## **10.9 Imaging Sensors**

Within this book, the focus is laid on device-based sensors, like the sensors for force, deflection and touch described above. Pure input sensors, such as imaging sensors, will not be discussed further, as per definition no haptic feedback can be given without a real physical contact. Nevertheless, they can be used to build a complex HMI when combined with body-worn tactile devices and should be kept in mind for such applications. For analyzing gestures and motions of users or interacting objects, imaging sensors play an important role in requirements engineering, for quantifying the maximum working space, resolution etc. [25]. For that purpose, one can differentiate two general classes of imaging sensors:


## **10.10 Temperature Measurement**

For temperature measurement two basic strategies are possible [4]:


Due to the major role of thermoresistive and thermoelectric sensors in technical applications, we want to discuss them in detail. For an overview of photosensitive semiconducting sensors, [125] can be recommended.

## *10.10.1 Thermoresistors*

Thermoresistors detect the temperature of a small area of a measured object, in the range of the sensing element itself. The resistivity of a conductive material depends on the concentration of free charge carriers and their mobility [1]. Depending on the lattice structure of the material and the bonding behavior of the valence electrons, both the mobility and the concentration of free charge carriers differ. If a conductive material is heated, (thermal) energy enters the system; the reaction to it differs according to the conductive material. In *metals* we observe a positive temperature coefficient (PTC) of resistance, which means that with increasing temperature the conductivity decreases. Why is that so? We have to look into material modelling: at room temperature, all available charge carriers can move more or less freely through the lattice. An increase of temperature leads to energy entry, and the lattice vibrations become stronger. The mobility of the charge carriers is reduced due to collisions between charge carriers and lattice. In *semiconductors*, electrons are bound quite strongly to their atoms [1]. At room temperature only a few charge carriers can move freely. Increasing temperature, and thus the entry of energy, energizes the charge carriers; more electrons can overcome the bond to their atom and the concentration of free charge carriers increases. Here we talk about a negative temperature coefficient (NTC) of resistance. Commercially available sensors based on thermoresistivity are resistance thermometers (metal-based, e.g. PT100) and thermistors (oxide-based), respectively. The latter are, besides thermocouples, standard in household appliances; they are less accurate than the metal ones, but a good low-cost option. For metals, a quadratic approximation of the resistance change with temperature is usually sufficient:

$$R(\theta) = R_0 \cdot \left(1 + \alpha \cdot (\theta - \theta_0) + \beta \cdot (\theta - \theta_0)^2\right) \tag{10.48}$$

where θ0 is the reference temperature, *R*0 the resistance at θ = θ0, θ the actual temperature and α, β material-specific constants of the metal. For semiconductors, the resistance change with temperature is highly nonlinear and usually approximated exponentially. The sensitivity in comparison to metal-based thermoresistors is up to 40 times higher [4].

$$R(\theta) = R_0 \cdot \exp\left(B \cdot \left(\frac{1}{\theta} - \frac{1}{\theta_0}\right)\right), \tag{10.49}$$

where θ0 is the reference temperature, *R*0 the resistance at θ = θ0, θ the absolute temperature in kelvin and *B* a material-specific constant of the NTC thermistor.
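A worked example of Eqs. (10.48) and (10.49): the PT100 coefficients below are the standard values (α = 3.9083·10⁻³, β = −5.775·10⁻⁷, R0 = 100 Ω at θ0 = 0 ◦C), while the thermistor parameters (R0 = 10 kΩ at 25 ◦C, B = 3950 K) are typical assumed values:

```python
import math

def pt100_resistance(theta_c, r0=100.0, alpha=3.9083e-3, beta=-5.775e-7):
    """Metal thermoresistor, quadratic model of Eq. (10.48), theta in deg C
    (reference temperature theta_0 = 0 deg C)."""
    return r0 * (1.0 + alpha * theta_c + beta * theta_c ** 2)

def ntc_resistance(theta_k, r0=10_000.0, b=3950.0, theta0_k=298.15):
    """Thermistor, exponential model of Eq. (10.49), temperatures in kelvin."""
    return r0 * math.exp(b * (1.0 / theta_k - 1.0 / theta0_k))

if __name__ == "__main__":
    print(round(pt100_resistance(100.0), 2))       # 138.51 ohm at 100 deg C
    # The thermistor resistance drops by orders of magnitude over the
    # same span, illustrating its much higher (but nonlinear) sensitivity:
    print(round(ntc_resistance(273.15 + 100.0)))
```

Comparing the two outputs makes the trade-off from the text concrete: the metal sensor changes by well under a factor of two over 100 K but does so very reproducibly, whereas the thermistor reacts far more strongly but nonlinearly.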

The evaluation and signal processing of thermoresistors does not differ from that of any other resistive sensor and is done, e.g., in a Wheatstone bridge configuration. A linearization of the transfer function of thermistors can be achieved using a series resistor. The most accurate sensors for temperature measurement are the metal-based ones (platinum-based sensors *PT* 100 or *PT* 1000), which are often used as a reference for temperature measurement.

## *10.10.2 Thermocouples*

Thermoelectric sensors provide a nearly punctiform measurement. Thermocouples consist of two wires of different materials welded together at one point. If a temperature acts on this interconnection (the welded point), an electric potential difference (a thermoelectric voltage according to the Seebeck effect) occurs at the connection point (the open ends of the wires). The output signal of a thermocouple is proportional to the temperature difference between the measuring point (welded point) and the reference point (connection point):

$$v = a_1 \cdot \left(\theta - \theta_{ref}\right) + a_2 \cdot \left(\theta - \theta_{ref}\right)^2, \tag{10.50}$$

where v is the thermoelectric voltage, θ the temperature at the measuring point and θ*ref* the reference temperature. In comparison to thermoresistors, thermocouples provide a higher upper temperature limit (nominal temperature) and have a very short reaction time, but they lack accuracy and long-term stability. In terms of accuracy, metal-based thermoresistors are the most accurate ones.

## **10.11 Conclusion**

With the exception of force/torque sensors, commercially available sensors for position, velocity, acceleration, touch, images and temperature exhibit sufficient properties for use in haptic systems. Within this section, the knowledge of the underlying sensing principles necessary for a sound selection from the available sensors is reported. For the use of force/torque sensors, the relevant principles and design steps for the development of a customized sensor are given and can be deepened in the background readings mentioned below.

## **Recommended Background Reading**

[46] Barlian, A. & Park, W. & Mallon, J. & Rastegar, A. & Pruitt, B.: **Review: Semiconductor Piezoresistance for Microsystems.** Proceedings of the IEEE 97, 2009.

*Review on piezoresistive silicon sensors. Physics, examples, sensor characteristics.*

[31] Bray, A. & Barbato, G. & Levi, R.: **Theory and practice of force measurement**. Monographs in physical measurement, Academic Press Inc., 1990.

*Discussion of examples of force sensors. Basic mechanics and hints for designing deformation elements.*

[48] Keil, S.: **Beanspruchungsermittlung mit Dehnungsmessstreifen.** Cuneus Verlag, 1995.

*All about strain gages. History, materials and technology, selection and application.*

[15] Lenk, A. & Ballas, R.G. & Werthschützky, R. & Pfeifer, G.: **Electrical, Mechanical and Acoustic Networks, their Interactions and Applications**. Springer, 2011.

*Introduction in modeling dynamics of electromechanical systems using network theory. Contains plenty of useful examples.*

[32] Rausch, J.: **Entwicklung und Anwendung miniaturisierter piezoresistiver Dehnungsmesselemente.** Dr.-Hut-Verlag, München, 2012.

*Comparison of sensing principles for strain sensing with focus on piezoresistive silicon elements. Design of a tri-axial force sensor using semiconducting strain gages.*


*Extensive overview of sensors and sensor electronics.*

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 11 Interface Design**

**Alireza Abbasimoshaei and Thorsten A. Kern**

**Abstract** This chapter deals with different interface technologies that can be used to connect task-specific haptic systems to an IT system. Based on an analysis of the relevant bandwidth for haptic interaction depending on the intended application and an introduction of several concepts to reduce the bandwidth for these applications (local haptic models, event-based haptics, movement extrapolation etc.), several standard interfaces are evaluated for use in haptic systems.

## **11.1 Introduction**

After the decision for the actuator (Chap. 9) used to generate the haptic feedback, and after the measurement of forces (Sect. 10.5) or positions (Sect. 10.7), it becomes necessary to focus on the IT interface. This interface has to be capable of providing data to the actuation unit and of capturing and transmitting all data from the sensors. Its requirements result—as with any interface—from the amplitude resolution of the information and the speed at which it has to be transmitted. The focus of this chapter lies on the speed of transmission, as this aspect is the most relevant bottleneck when designing haptic devices. Haptic applications are frequently located on the borderline, be it with regard to the acceptable delay in the transmission or to the maximum data rate in the sense of a cutoff frequency.

With regard to the interface, two typical situations may be distinguished: spatially distributed tactile displays with a considerable number of actuators, and primarily kinaesthetic systems with a smaller number of actuators. In the case of tactile systems (pin-arrays, vibrators, or tactors), the challenge is given by the application of

A. Abbasimoshaei (B) · T. A. Kern

Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany e-mail: al.abbasimoshaei@tuhh.de

T. A. Kern e-mail: t.kern@hapticdevices.eu

T. A. Kern et al. (eds.), *Engineering Haptic Devices*, Springer Series on Touch and Haptic Systems, https://doi.org/10.1007/978-3-031-04536-3\_11

**Fig. 11.1** The components of haptics

bus-systems for the reduction of cable lengths, and the decentralization of control. Although some questions about timing remain (for example, how to provide tactile signals in the right order despite decentralized control), the data rates transmitted are usually not a challenge for common bus systems. Van Erp points out [20] that a 30 ms time delay between impulses generated by two vibrators at the limbs cannot be distinguished any more. For the data interface this observation implies that, for this application, any time delay below 30 ms may be uncritical for transmitting information haptically. This is a requirement which can be fulfilled without any problems by serial automation technology network protocols like CAN, or its time-triggered version TTCAN. Accordingly, this section concentrates on the requirements of kinaesthetic haptic devices with a small number of actuators only, whereas these devices usually have to satisfy tactile requirements according to their dynamic responses too (Fig. 11.1).

## **11.2 Border Frequency of the Transmission Chain**

Section 1.4.2 stated that it is necessary to distinguish two frequency ranges when talking about haptic systems. The lower frequency range up to ≈30 Hz includes a bidirectional information flow, whereas the high frequency range *>*30 Hz transmits information only unidirectionally from the technical system to the user. Although the user himself influences the quality of this transmission by altering the mechanical coupling, this change itself happens at lower frequencies only and is—from the perspective of bandwidth—not relevant for the transmission. If this knowledge is applied to the typical structures of haptic devices from Chap. 6, some fascinating results can be found. For the following analysis it is assumed that the transmission and signal conditioning of information happen digitally. According to Nyquist, the maximum signal frequency has to be sampled at least twice as fast. In practical applications this factor of two is a purely theoretical concept, and it is strongly recommended to sample an analog system around 10 times faster than its maximum frequency. The values within figures and texts are based on this assumption.
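The factor-of-10 rule translates directly into required sample rates and raw data rates. The following sketch is illustrative only; the 16-bit resolution and channel counts are assumed example values, not figures from Table 11.1.

```python
# Required sample rate and raw (payload-only) data rate for a haptic channel,
# using the factor-of-10 oversampling rule recommended above.
# Nyquist alone would allow a factor of 2, which is purely theoretical.

def sample_rate_hz(f_max_hz, oversampling=10):
    """Sampling frequency for a signal with maximum frequency f_max_hz."""
    return oversampling * f_max_hz

def data_rate_bits(f_max_hz, bits_per_sample, channels=1, oversampling=10):
    """Raw data rate in bit/s, excluding protocol overhead."""
    return sample_rate_hz(f_max_hz, oversampling) * bits_per_sample * channels

# A 1 kHz haptic force channel with assumed 16-bit resolution:
single = data_rate_bits(1000, 16)        # 160 kbit/s for one channel
six_dof = data_rate_bits(1000, 16, 6)    # 960 kbit/s for a six-DOF device
```

Protocol overhead comes on top of these payload rates, which is why the interface comparison later in this chapter distinguishes payload from gross rates.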

**Fig. 11.2** Block diagram of a telemanipulator with haptic feedback

## *11.2.1 Bandwidth in a Telemanipulation System*

For a telemanipulation system (Fig. 11.2), the knowledge about the differing, asymmetric dynamics during interaction can directly benefit the technical design. In theory it is possible to transmit the haptic information measured at the object within the bandwidth of 1 Hz to 10 kHz, and replay it as forces or positions to the user. The user's reactions may in this case be measured at a bandwidth from static to only 5 or 15 Hz, and be transmitted via controller and manipulator to the object. Although this approach would indeed be functional, the simplicity of position measurement and the necessity to process positions for e.g. passivity control result in movements being sampled and transmitted with dynamics similar to those of the opposite transmission direction for haptic feedback.

## *11.2.2 Cloud-Enabled Communication*

Cloud systems are one way of sending and receiving data remotely. Several research works have already provided approaches for remote communication. Ongvisatepaiboon et al. [17] propose a remote communication between therapist and patient through a web server. A web server is a hardware device like a computer that connects to the internet and supports data interchange between devices. The cost of setting up a web server is high, and it requires a cumbersome process on both sides while setting up the therapy. Cloud robotics, being the main aspect of the Internet of Robotic Things (IoRT), has been growing considerably [6]. Cloud servers are internet servers that do not require any hardware on the user's side. Various factors can affect the time lag in cloud communication between two robots: the quality of the internet service, the speed of the internet connection, the load on the system, and the distance of the device from the server.

Some papers used ROSLink [11] for remote communication, which requires entering the IP address of the device each time to connect, making it less user-friendly. Another way is to rely on the cloud servers utilized here. Such a server can

be configured efficiently (much less effect on system load), is easily reachable, and provides accurate data management and high data privacy. The most important aspect is that the connection between the two robots is possible without any user input.

In certain applications, cloud-based multi-agent systems have been enabled to share large amounts of information between two systems [9, 14]. A similar approach can be realized using the cloud-based pub/sub libraries provided by Google [7]. Data is transferred by one robot becoming a publisher and another robot acting as a subscriber.

Bringing down the publishing speed to 35 Hz decreases the system load to a large extent. Only the important data of a change in state is transferred from one robot to another. The generalized view of the robots connected to the cloud server using ROS is shown in Fig. 11.3. Following these steps, various Pub/Sub topics were created for all the control parameters such as position, velocity, current, and PID values. See the literature for more details on creating a Google Cloud publisher and subscriber [8].
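The publisher/subscriber pattern with change-only transfer can be sketched without any cloud dependency. The real system would use the `google-cloud-pubsub` client library against Google's servers; the in-process `Topic` class and the topic name below are purely illustrative stand-ins for that infrastructure.

```python
# Minimal in-process sketch of the pub/sub pattern described above.
# A real deployment would replace Topic with the google-cloud-pubsub
# client; the topic name and message contents are assumed examples.

class Topic:
    def __init__(self, name):
        self.name = name
        self._subscribers = []
        self._last = None          # last transmitted state

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, message):
        """Deliver message to all subscribers, but only on a change in
        state, as described in the text; returns True if delivered."""
        if message == self._last:
            return False           # unchanged state: nothing transmitted
        self._last = message
        for cb in self._subscribers:
            cb(message)
        return True

# One topic per control parameter (position, velocity, current, PID values):
position = Topic("robot/position")
received = []
position.subscribe(received.append)

position.publish({"x": 1.0})   # delivered
position.publish({"x": 1.0})   # suppressed: no change in state
position.publish({"x": 2.0})   # delivered
```

In the real system the 35 Hz limit would additionally cap how often `publish` is called per second.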

As stated, the user-friendliness of the system is important. Upon powering the Raspberry Pi and providing the internet connection, all the algorithms and code of the system and of the Google cloud are activated, and the robots are ready for therapy. No user inputs or manual setup are required. Figure 11.4 shows the designed start-up service in brief.

## *11.2.3 Bandwidth in a Simulator-System*

For a simulation system with haptic feedback, the different dynamics result in slightly different findings. It is still true that the movement information may be sampled at a lower rate. However, the simulator (Fig. 11.5) has to provide the force output at a frequency of 1 to 10 kHz. For this simple reason, the simulator has to be aware of the actual position data at every simulation step. Consequently, with simulators the haptic output and the measurement of the user's reaction have to happen at high frequency (for exceptions, see Sect. 11.3).

**Fig. 11.4** Schematic view of a start-up service

**Fig. 11.5** Block diagram of a simulator with haptic feedback and an external controller

There are two approaches to integrating the haptic controller in the simulator. In many devices it is designed as an external hardware component (Fig. 11.5), which reduces the computing load for the main simulator and helps reduce the data rate significantly in special data processing concepts with parametrizable models (Sect. 11.3). As an alternative, the controller may be realized in software as a driver computed by the simulator's main computing unit (Fig. 11.6). This concept is used especially for high-power, permanently installed simulation machines, or in cost-effective haptic devices for the gaming industry with low requirements in dynamics and haptic output.

## *11.2.4 Data Rates and Latencies*

Table 11.1 summarizes the data rates necessary for kinesthetic applications in some typical examples. The data rates range from 200 kbit/s for simple applications up


**Table 11.1** Example calculating the required unidirectional data rates for typical haptic devices

to 50 Mbit/s for more complex systems. Such rates for the information payload—still excluding the overhead necessary for the protocol and the device control—are achieved by several standard interface types today (Fig. 11.6).

Besides the requirements for the data rate, there is another requirement concerning the smallest possible latency. Especially interfaces using packets for transmission, with an uncertainty about the exact time of the transmission (e.g. USB), have to be analyzed critically concerning this effect. Variable latencies between several packets are a problem in any case. If there are constant latencies, the reference to the other senses with their transmission channels becomes important: a collision is not allowed to happen significantly earlier or later haptically than e.g. visually or acoustically. The range possible for latency largely depends on the way the other sensory impressions are presented. These interdependencies are subject to current research and are analyzed e.g. by the group around *Buss* at the Technische Universität München.

**Fig. 11.6** Block-diagram of a simulator with haptic feedback and a controller as part of the driver software

**Fig. 11.7** Raspberry Pi 4 B

## *11.2.5 What is a Raspberry Pi?*

The Raspberry Pi is a series of single-board computers. It is small and low-cost and can be plugged into a computer monitor or TV. It can also be connected to a standard keyboard and mouse. Furthermore, it provides the ability to run code in different languages such as Scratch and Python. It is capable of doing everything a desktop computer can do, from browsing the internet and playing high-definition video to creating text files and playing games.

Moreover, the Raspberry Pi can interact with the outside world. It has been used in many digital maker projects, from music machines to weather stations and infra-red camera communication (Fig. 11.7). This gives a haptic system the ability to communicate with the user: the Raspberry Pi receives the data from the sensors and sends the commands to the motors, and in this way it can provide tactile feedback.
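The sensor-to-motor path can be sketched as follows. On a real Raspberry Pi the reads and writes would go through a GPIO library (e.g. `RPi.GPIO` or `gpiozero`); here only the hardware-independent mapping is shown, and the 0–5 N sensor range is an assumed example, not a value from this chapter.

```python
# Hardware-independent core of the sensor-to-motor path described above.
# On real hardware, sensor_newton would come from an ADC read and the
# returned duty cycle would be written to a PWM pin via RPi.GPIO/gpiozero.
# The 0..5 N force range is an assumed example.

def force_to_duty_cycle(sensor_newton, f_max=5.0):
    """Map a measured contact force to a vibration-motor PWM duty cycle
    in percent, clamped to the valid range 0..100."""
    duty = 100.0 * sensor_newton / f_max
    return max(0.0, min(100.0, duty))

# Half the maximum force drives the motor at 50 % duty cycle:
duty = force_to_duty_cycle(2.5)
```

Clamping matters in practice: sensor noise or overload must never command a duty cycle outside what the motor driver accepts.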

## **11.3 Concepts for Bandwidth Reduction**

Anyone who has ever tried to process a continuous data flow of several megabits per second with a PC, while making this PC perform other tasks in parallel, will have noticed that the management of the data flow binds immense computing power. With this problem in mind, and as a result of the question of telemanipulation with remotely located systems, several solutions for bandwidth reduction of haptic data transmission have been found.

**Fig. 11.8** Schematic view of a varactor structure [4]

## *11.3.1 Analysis of the Required Dynamics*

A conscious analysis of the dynamics of the situation at hand should precede every method to reduce bandwidth. The limiting cases to be analyzed are given by the initial contact or collision with the objects. If the objects are soft, the border frequencies are in the range *<*100 Hz. If stiff objects are part of the interaction and these collisions are to be fed back too, frequencies up to a border *>*1 kHz will have to be transmitted. Additionally, it has to be considered that the user is limited concerning his or her own dynamics, or may even be further limited artificially. The *DaVinci* system (Fig. 1.12), as a unidirectional telemanipulator, filters e.g. the high frequencies of the human movements to prevent a trembling of the surgical instruments.

#### **11.3.1.1 Example**

As an example, a varactor capacitance can be considered. It is a kind of semiconductor diode used in radio frequency tuning circuits. A varactor diode is a p-n junction diode that acts as a variable capacitor under a varying reverse bias voltage. It is a specially designed semiconductor diode whose capacitance at the p-n junction is adjusted by modifying the voltage applied to its terminals (Fig. 11.8).

Thus, the frequency control of a varactor device can be done by applying different voltages to the tuning port. It is combined with a low-impedance capacity and presents a high impedance to the driver. For the same sweep speed, varactor devices are more straightforward to drive than YIG devices, which are based on a ferrite with a sharp ferrimagnetic resonance and very high resistivity. These characteristics allow YIG resonator oscillators to have vast tuning ranges: 2–20 GHz tunable oscillators are available. Electromagnets are an important part of the YIG oscillator module, and frequency tuning of the YIG resonator is performed by modifying the currents in these electromagnets.

## *11.3.2 Ergonomic Standards for Haptic and Tactile Interactions*

Ergonomics is the practice of designing or arranging products, workplaces, and systems so that they suit the people who use them.

Most people associate ergonomics with seating or with the design of car instruments, but it is much more. Ergonomics concerns the design of anything that involves people, from workspaces to leisure and safety. Ergonomic standards help improve usability in several ways, including improving effectiveness, reducing errors, and increasing performance and comfort. Ergonomic standards provide a base for analysis, design, evaluation, and procurement.

One part of the standard series is a framework for tactile and haptic interaction. It provides a structure for understanding the different aspects of tactile/haptic interaction and communicating them. Different definitions, structures, and models used in other parts are included in this part. Moreover, it provides general information about how different forms of interaction can be applied to different tasks. Many efforts were made to define haptics terminologies [16, 17].

#### **11.3.2.1 Example**

While most dictionary definitions make no difference between haptic and tactile [2], many researchers use tactile for mechanical stimulation of the skin and haptic for all haptic sensations. The framework document explains interaction details and task primitives for haptic interaction. Users can start application tasks using one or more task primitives enabled by the haptic device and its software. Task primitives are modified according to the system functionality.

Furthermore, the framework document presents guidelines for the ergonomic design of different haptic interactions, interaction space, convenience, and resolution. In addition, this part proposes haptics physiology, device types, haptics application areas, and selection criteria. Figure 11.1 shows the relationship of the different components that make the field of haptics.

The other part of the document is the guidance on tactile and haptic interaction. This standard contains guidance in different areas. The first one is applicability points for haptic interactions, including effectiveness, efficiency, workload, user satisfaction, user needs, accessibility, security, health, and safety considerations. Another one is tactile/haptic inputs, outputs, and their combinations, such as unimodal and multimodal interactions, individualization, and user perceptions. The properties of objects are categorized as tactile/haptic information attributes. The document also covers the layout of tactile/haptic objects, such as resolution, consistency, and separation. Last but not least is interaction data such as interaction tasks, navigation, manipulation, techniques, gesturing, and encoding using textual data. For example, guidance on reading tactile alphabets suitable for the blind is governed by Unicode and national standards.

The last part of the standard is some measures to characterize haptic devices and user capabilities. The base of the design and evaluation of haptic/tactile interactions is characterizing physical properties. It includes the development of new devices and their requirements. The part mainly comprises the description of sets of measures and corresponding measurement setups, such as physical measures specifying technical characteristics of devices and human performance criteria related to perception, frequency, and operation speed.

In this regard, since 2005 an ISO expert group has been working on standards for haptic interaction. ISO TC159/SC4/WG9 published its progress at several conferences [1, 5, 19] and issued its first standard in 2009 [10]. The following is a list of the different standard series:


**Fig. 11.9** Block diagram of a simulator with haptic feedback and a local haptic model inside the controller

## *11.3.3 Local Haptic Model in the Controller*

A frequently used strategy, part of many haptic libraries, is the usage of local haptic models. These models allow a much faster reaction to the user's input compared to the simulation of a complete object interaction (Fig. 11.9). Such models are typically linearized functions dependent on one or more parameters. These parameters are updated by the simulation at a lower frequency. For example, each degree-of-freedom of the haptic system may be equipped with a model of spring, mass and damper, whose stiffness, mass and friction coefficients are updated to the actual values at each simulation step, e.g. every ≈1/30 s. This approach does not permit the simulation of nonlinear effects in this simple form. The most frequent nonlinear effect when interacting with virtual worlds is the lift-off of a tool from a surface. Dependent on the delay of the update of the local model, the lift-off will be perceived as "sticking", as the tool is held to the simulated surface by the local model in one simulation step, whereas it is suddenly released within the next. Concepts which model nonlinear stiffnesses compensate this effect satisfactorily. Through the additional calculations necessary for the local model, a significant data reduction between simulation and haptic controller is achieved. Distantly related concepts are used in automotive applications too, where CAN bus-systems are configured in their haptic characteristics by a host, and report only selection events in return.
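The division of labor between the slow simulation and the fast controller can be sketched as follows. All numeric values (stiffness, damping, wall position) are assumed examples; a real library would run `update` at ≈30 Hz and `force` at ≈1 kHz.

```python
# Sketch of a local haptic model for one degree of freedom: the controller
# renders forces from a linearized spring-damper at a high rate, while the
# simulation refreshes the parameters only every ~1/30 s.
# All numeric values below are illustrative.

class LocalModel:
    def __init__(self, k=0.0, d=0.0, x0=0.0):
        self.k, self.d, self.x0 = k, d, x0   # stiffness, damping, surface pos

    def update(self, k, d, x0):
        """Called by the simulation at the low rate (~30 Hz)."""
        self.k, self.d, self.x0 = k, d, x0

    def force(self, x, v):
        """Called by the haptic controller at the high rate (~1 kHz).
        Penetration below the surface yields a restoring force; outside
        the surface the model outputs zero (the lift-off case)."""
        penetration = self.x0 - x
        if penetration <= 0.0:
            return 0.0
        return self.k * penetration - self.d * v

model = LocalModel()
model.update(k=500.0, d=2.0, x0=0.01)     # simulation step: wall at 10 mm
f = model.force(x=0.008, v=0.0)           # controller step: 2 mm penetration
```

The "sticking" artifact described above corresponds to `x0` being stale: until the next `update`, the controller keeps pressing the tool against a surface the simulation has already released.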

## *11.3.4 Event-Based Haptics*

Kuchenbecker presented the concept of "event-based haptics" in 2005 [12] and has refined it since. It is based on the idea of splitting low-frequency interaction and high-frequency unidirectional presentation, especially of tactile information (Fig. 11.10). These tactile events are stored in the controller and are activated by the

**Fig. 11.10** Block diagram of a simulator with haptic feedback and with events of high dynamic being held inside the controlling structure

simulation. They are combined with the low-frequency signal synthesized from the simulation, and are presented to the user as a sum. In an improved version, monitoring of the coupling between haptic device and user is added, and the events' intensities are scaled accordingly. The design generates impressively realistic collisions with comparably soft haptic devices. As with any other highly dynamic system, it nevertheless requires specialized driver electronics and actuator selection to achieve full performance.
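The summation described above can be sketched numerically. The decaying sinusoid is one commonly assumed shape for a stored contact transient, and all parameters (amplitude, frequency, decay, coupling) are illustrative, not values from [12].

```python
import math

# Sketch of event-based haptics: a stored high-frequency transient is
# triggered at contact and summed with the low-frequency simulation force.
# The decaying-sinusoid event shape and all parameters are assumed examples.

def transient(t, amplitude=1.0, freq=300.0, decay=80.0):
    """Stored contact event: decaying sinusoid, active for t >= 0 s."""
    if t < 0.0:
        return 0.0
    return amplitude * math.exp(-decay * t) * math.sin(2 * math.pi * freq * t)

def output_force(f_low, t_since_contact, coupling=1.0):
    """Sum of the low-frequency simulation force and the scaled event.
    The coupling factor stands in for the device/user coupling monitoring
    of the improved version described in the text."""
    return f_low + coupling * transient(t_since_contact)

f_before = output_force(f_low=2.0, t_since_contact=-1.0)   # no event yet
f_after = output_force(f_low=2.0, t_since_contact=0.001)   # event superposed
```

Scaling `coupling` down when the user grips the device loosely prevents the transient from feeling exaggerated.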

A variant of the concept of event-based haptics is the overlay of measured high-frequency components on a low-frequency interaction. This concept can be found in *VerroTouch* (Sect. 2.4.4) or in the application of an assistive system like *HapCath* (Sect. 14.2). The overall concept of all these systems follows Fig. 11.11. A highly dynamic sensor (piezoelectric or piezoresistive) is implemented in a coupled mechanical manipulation system. The interaction forces or vibrations induced by collisions between tool and object are then transmitted to an actuating unit attached near the handle of the device. In these systems, it is then just a variant whether the interaction path is also decoupled or keeps the normal mechanical connection.

## *11.3.5 Movement Extrapolation*

Another very frequently used method for bandwidth reduction on the path measuring the user's reaction is the extrapolation of the movement. Especially with simulators using local models, it is often necessary to have some information about steps in between two complete measurement sets, as the duration of a single simulation step

**Fig. 11.11** Concept of an event based haptic overlay of tactile relevant data with (**a**) and without (**b**) mechanical coupling of interface and manipulator

varies strongly, and the available computing power has to be used most efficiently. With increased latency and a further reduced transfer rate, the extrapolation becomes a prediction. Prediction is used for haptic interaction with extreme dead times.
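A first-order version of this extrapolation can be sketched in a few lines; higher-order predictors follow the same idea. The sample times and positions below are arbitrary illustrative values.

```python
# Linear extrapolation of the user's position between two measurement sets,
# as used to bridge simulation steps of varying duration. A first-order
# sketch; higher-order or filtered predictors are equally common.

def extrapolate(x_prev, t_prev, x_last, t_last, t_query):
    """Predict the position at t_query from the two most recent samples
    by continuing the last observed velocity."""
    velocity = (x_last - x_prev) / (t_last - t_prev)
    return x_last + velocity * (t_query - t_last)

# Samples at t = 0 ms and t = 10 ms; predict 5 ms past the last sample:
x = extrapolate(0.0, 0.000, 1.0, 0.010, 0.015)   # continues at 100 units/s
```

The further `t_query` lies beyond `t_last`, the larger the prediction error, which is why extreme dead times need the model-based compensation described in the next section.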

## *11.3.6 Compensation of Extreme Dead Times*

The working group of Niemeyer at the Telerobotics Lab of Stanford University works on the compensation of extreme dead times of several seconds by prediction [15]. The dead time affects both paths: the user's reaction and the information to the user, such as the haptic feedback generated. The underlying principle is an extension of the telemanipulation system by a controller of the manipulator and a powerful controller for the haptic feedback (Fig. 11.12). The latter can be understood as a simulator of the manipulated environment in its own right. During movement, a model of the environment is generated in parallel. If a collision happens in the real world, the collision is placed as a wall in the model, and its simulation provides haptic feedback. Due to the time lag, the collision does not happen at the position where it happened in reality. During the following simulation, the collision point is

**Fig. 11.12** Block diagram of a telemanipulator with compensation of long dead times by an adaptable world-model

relocated slowly within the model back to its correct position. By successive exploration of the environment a more detailed haptic model is generated. The method has the status of a research project.

## *11.3.7 Compression*

As with any data stream, haptic data can be compressed to reduce its bandwidth. This may happen based on numerical methods applied to each individual packet; however, it is also possible to make use of the special properties of haptic human-machine interaction and haptic perception. The following list gives a short overview of common approaches:


of an interaction with a virtual environment at different velocities. The identified dependency of the force perception threshold on the velocity was successfully used as a basis for data reduction.
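One widely used perception-based reduction scheme is deadband compression: a new sample is transmitted only when it differs from the last transmitted value by more than a just-noticeable, Weber-type fraction. The sketch below is generic; the 10 % threshold is an assumed example, not a value taken from the studies cited above.

```python
# Sketch of perceptual deadband compression for a haptic sample stream:
# transmit a sample only if it deviates from the last transmitted value
# by more than a relative (Weber-type) threshold.
# The 10 % deadband is an assumed example value.

def compress(samples, deadband=0.1):
    """Return the subset of samples that would actually be transmitted."""
    sent = []
    last = None
    for s in samples:
        if last is None or abs(s - last) > deadband * abs(last):
            sent.append(s)
            last = s
    return sent

stream = [1.00, 1.05, 1.08, 1.20, 1.21, 2.00]
kept = compress(stream)   # small fluctuations below threshold are dropped
```

The receiver simply holds the last received value (or interpolates), so samples inside the deadband never need to be sent at all.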

## **11.4 Specifications for a Portable Haptic Interface**

Many haptic devices have never left the status of a prototype to grow into a commercial product. In most cases, the device design was the reason. One item that restricts the applications is a heavy structure; electric motors, pneumatic cylinders, exoskeletons, and large design dimensions are some of the causes. Portable systems give users more freedom; nonportable devices, on the other hand, take the massive weights away from the user. A portable force-feedback interface is necessary to allow maximum freedom of motion.

For a portable haptic system, different items are essential. As some general requirements:


A portable haptic system comprises an actuating and a sensing structure, and most of the time it is attached to the user's body. Because of limitations such as overall weight and volume, portable devices are more challenging to design. In addition, the cable connections between the different parts and the power supply are further challenging aspects that should be considered. In these systems, low energy consumption is important because of the battery use. Easy usage, a simple fitting structure, and short training periods are some other important specifications. Last but not least is electrical and mechanical safety. Electrical safety means that the device specifications must obey the international electrical safety limits. Mechanical safety should be ensured in a way that prevents users' injury; for example, although there is no strict restriction in terms of mechanical force, it is recommended to use actuators providing less force than human power. Moreover, defining a suitable port for the system has an impact on the user-friendliness of the system.

#### **FireWire—IEEE 1394**

FireWire, Apple's brand name for the IEEE 1394 standard, is a serial transmission format similar to USB; in fact, it is a lot older than the USB specification. The six-pole FireWire connector includes a ground and a supply line too. The voltage is not controlled and may take any value between 8 and 33 V. FireWire 400 defines up to 48 W of power to be transmitted. The data rates are—dependent on the port design—100, 200, 400 or 1600 Mbit/s. This is completely sufficient for any haptic application. Even fiber-optic transmissions over 100 m distance with up to 3200 Mbit/s are specified in the standard. The bus hardware additionally includes a concept to share memory areas between host and client, enabling transmissions with very low latency. Even networks without an explicit host can be established. The interface according to IEEE 1394 is the preferred design for applications with high data transmission rates. Only the low prevalence of this interface in personal computers hinders a wide application.

#### **Ethernet**

The capabilities of the Ethernet interface available with any PC are enormous. Even a standard Ethernet network can transmit data at a rate of up to 10 Mbit/s. The available data payload within the transmission largely depends on the interlacing of the underlying protocols. A reliable two-way data stream between remote applications is provided by TCP, the Transmission Control Protocol. The protocols add header bytes to each Ethernet frame and to the whole packet, resulting in a per-packet overhead; nevertheless, the available space per packet provides enough bytes of data for typical haptic applications. Assuming a six-DOF kinematics with 16 bit (2 byte) resolution in its sensors and actuators, each packet has to carry only 12 bytes of data, with one packet for force output and one for position input. Even considering that the data has to be extended with some additional overhead (address negotiations, status information), this is still sufficient for many haptic applications. A disadvantage of using Ethernet is the high effort necessary for packet assembly and protocol handling, which would usually overload the computational power of standard microcontrollers. Additionally, a high number of clients reduces the data rate within a network significantly. Using switches compensates this reduction to some extent, but the method of choice is usually an exclusive network for the haptic application.
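The 12-byte payload of the six-DOF example can be demonstrated directly. The sketch below packs six 16-bit values and sends them over UDP on the loopback interface; the scaling, port handling, and value choices are illustrative assumptions, not a protocol from this chapter.

```python
import socket
import struct

# Six-DOF force packet: six signed 16-bit integers, little-endian.
# Scaling of physical forces to 16-bit counts is assumed to happen upstream.
FORMAT = "<6h"

def pack_forces(forces):
    """Pack six 16-bit force/torque values into a 12-byte payload."""
    return struct.pack(FORMAT, *forces)

payload = pack_forces([100, -200, 300, 10, -20, 30])   # 12 bytes

# Round-trip over UDP on the loopback interface (arbitrary ephemeral port):
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(payload, rx.getsockname())
data, _ = rx.recvfrom(64)
values = struct.unpack(FORMAT, data)
tx.close()
rx.close()
```

UDP is shown here because haptic loops usually prefer its low, predictable latency over TCP's retransmissions; stale force samples are better dropped than delivered late.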

#### **Measurement Equipment and Multi-Functional Interface Cards**

Measurement and multi-function interface cards are a simple approach to interfacing with hardware designs. They are available for internal and external standard interfaces such as PCMCIA, USB or even LAN. They are usually equipped with several standard software drivers optimized for their hardware capabilities. For a prototype design they should be considered in any case. Their biggest disadvantage is that the data processing happens inside the hosting PC and within the restrictions of the operating system. Especially in combination with non-realtime operating systems like Windows, the dynamics of the controllers necessary for haptic applications may not be fast enough.

## **HIL-Systems**

"Hardware In the Loop" (HIL) systems were first used in control engineering and compensate the disadvantages from multifunctional cards for rapid prototyping and interfaces to haptic systems. HILs include a powerful controller with proprietary or open real-time operating system. The programs operated on these controllers have to be built on standard PCs and are transmitted as with any other microcontroller system too. Frequently the compilers allow programming with graphical programming language such as MatLab/Simlink or LabView. The processors of the HILs are connected via specialized bus-systems with variable peripheral components. Ranging from analogue and digital output over special bus- and actuator-interfaces a wide range of components is covered. HIL-systems are predestined for always the timecritical applications of haptics in design phase. But compared to other solutions they have a high price too.

## **11.5 Final Remarks About Interface Technology**

The interface subordinates itself to the requirements of the system. Any realistic application and its required data rate can be covered with today's standard components. This is in complete contrast to the situation at the beginning of the 21st century, when highly specialized interfaces were designed for haptic devices to cover the high requirements on data transmission rates. Accordingly, even today commercial products with their own ISA or PCI interface cards can be found on the market. Although the technical specifications of standard interfaces are sufficient to fulfill the requirements, the first design and operation is far from trivial.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 12 Haptic Software Design**

**Arsen Abdulali and Seokhee Jeon**

**Abstract** This chapter reviews design concepts of haptic modeling and rendering software. The main focus lies in realistic kinesthetic and tactile haptic models for virtual and augmented reality based on data collected from physical objects. We consider both data-driven algorithms providing a black-box action-response mapping and measurement-based approaches identifying parameters of physics-based models. To show the research landscape and highlight ongoing research challenges, we introduce a series of state-of-the-art methods, including data-driven models with deterministic and stochastic responses, physics-based simulation using an optimization-based FEM solver, and hybrid approaches combining the concepts of both data-driven and physics-based methods. These examples also cover a wide range of haptic properties, i.e., modeling and rendering of elasticity and plasticity, tool deformation, and haptic textures.

## **12.1 Introduction**

Computer haptics is a research discipline studying the science, art, and engineering of software design that synthesizes and displays haptic content. Depending upon the target content, haptic software can be generally classified into algorithms encoding abstract information and algorithms simulating haptic interaction [17]. The abstract content in the former methods is usually represented in the form of tactile patterns allowing utilization of the haptic channel for communication [56], navigation [25], notification and warning [33]. The latter approach, commonly known as haptic rendering, represents an interactive process of computing and displaying haptic stimuli with respect to the user's action. The essential role of the rendering is a simulation of the haptic interaction that enables a user to explore the haptic properties of a virtual entity like stiffness and roughness, as well as its physical attributes like shape and weight. The scope of this chapter is narrowed to rendering techniques of haptic interaction, which coincide with engineering aspects of haptic sensors and actuators presented in previous chapters.

A. Abdulali (B)
Engineering Department, University of Cambridge, Trinity Lane, Cambridge CB2 1TN, UK
e-mail: aa2335@cam.ac.uk

S. Jeon
Kyung Hee University, Seocheon-dong, Giheung-gu, Yongin-si, Gyeonggi-do 446-701, South Korea
e-mail: s.jeon@hapticdevices.eu

© The Author(s) 2023
T. A. Kern et al. (eds.), *Engineering Haptic Devices*, Springer Series on Touch and Haptic Systems, https://doi.org/10.1007/978-3-031-04536-3_12

A subject touching an object perceives its properties and attributes relying upon *kinesthetic* and *tactile* senses. Kinesthetic perception, also known as proprioception, provides spatial awareness of the body parts and joints, as well as the sense of external forces loading a limb. This sense allows perception of the object's attributes like shape and weight and of its internal material properties, which are usually perceived during deformation. The tactile, or cutaneous, sensation allows the subject to perceive surface properties of an object through skin contact. Both kinesthetic and tactile feedback can be rendered either in a purely virtual configuration, where the user perceives only synthetic feedback, or by mixing the real physical feedback with a synthetic one.

## *12.1.1 Virtual Reality*

The rendering environment where the user interacts only with virtual objects through haptic interfaces is referred to as *Virtual Reality* [52]. Virtual Reality (VR) is a non-physical, computer-generated world that can either be newly created by a designer, e.g., a computer game or fantasy world, or mimic a scene from the real world. VR simulation typically extends to other sensory modalities as well, e.g., visual and auditory.

The ultimate goal of VR simulation is to enable the user to feel connected with, or part of, the virtual environment, which is referred to as immersion. Immersive haptic simulation is considered more challenging than other modalities, as the user perceives the world through the prism of abundant haptic properties. Furthermore, the sensing organs of haptic perception are distributed throughout the user's body, which makes the design of haptic devices and software even more difficult. To render realistic haptic interaction, a wide range of haptic properties should be considered simultaneously. For instance, when we stroke our fingers over a wooden table, apart from high stiffness, we also feel texture and friction. Likewise, we distinguish plastic and metal spoons not only by their weights but also by the temperature flux, where the metal spoon feels colder. To achieve a high level of realism, research in haptic rendering strives to design a model that ultimately reproduces the haptic interaction, by gradually modeling individual properties and incorporating them into a unified framework.

In VR simulation, haptic properties can be broadly classified into surface properties, which we feel through tactile perception, and material properties, which we feel in the form of force feedback. In both cases, we perceive the feedback with respect to certain actions. To achieve a high level of realism, research in haptic rendering strives to design a model that reproduces the feedback for all possible input actions. The essential idea of *data-driven* and *measurement-based* modeling is to build a virtual copy of an object from action-feedback data pairs collected during real interaction.

## *12.1.2 Mixed Reality*

Mixed Reality (MR) is an interactive environment where real-world feedback and computer-generated stimuli are superimposed on one another. Depending upon the proportion of real and virtual content in the resultant feedback, MR can be classified as *Augmented Reality* (AR) or *Augmented Virtuality* (AV) [39]. There is no specific rule clearly defining the boundary between AR and AV. However, taking into account the richness of real-world haptic feedback, the Augmented Reality configuration has been found more practical for applications with mixed real-virtual haptic content.

There are several abstraction levels in the design of AR applications. The first level is to recreate a virtual object and all its haptic properties in the physical world. For instance, imagine that you want to buy a lamp from an online store: AR techniques can render it on top of your table so you can touch it, press its switches, etc. The second level is to alter a particular haptic property of a target object. For example, an artist trying to switch from conventional paper-pencil sketching to digital sketching using a stylus and tablet [5] might want to recreate the paper-pencil experience while drawing on the digital canvas of the tablet. Another practical example is medical training, where the size and stiffness of a tumor within a phantom body can be modulated to simulate various possible cancer cases [40]. All in all, the main strength of an AR system is that the designer can take advantage of the realism of the physical world and focus only on the target properties. In a VR system, on the other hand, the designer has to take care of all haptic properties.

Haptic properties of the physical world in AR simulation are either modulated by directly overlaying synthetic stimuli or occluded by a physical barrier and completely recreated. The former configuration, commonly known as a *feel-through* strategy, can be applied when it is possible to estimate the stimuli correcting the physical feedback. For instance, the three-dimensional force feedback during interaction with an object can be modulated to change its stiffness and friction [40, 41]. When the correcting stimuli are too challenging to estimate, the latter approach is more suitable. For example, direct modulation of haptic texture feedback, which has a stochastic nature, still remains impractical; it is more convenient to cancel the physical stimuli using a special tool and then recreate the complete feedback in a similar manner as in VR simulation [22]. In this chapter, however, we focus mainly on the feel-through approach, as our main goal is to convey the software-related conceptual difference between AR and VR systems. Designing hardware that occludes physical stimuli is beyond the scope of the current study.

## *12.1.3 Touch User Interfaces*

Recent trends in *Human Computer Interaction* (HCI) show a transition from conventional interfaces with physical buttons and switches to digital ones simulated on touch screens. These interfaces typically lack physical feedback, which deteriorates the user experience. Haptic rendering techniques are therefore becoming popular for the simulation of physical interfaces. For instance, realistic feedback can be simulated for virtual buttons, knobs, and switches displayed on smartphones and tablets [51].

It is important to note that we consider only the rendering aspects of simulating physical interaction in Touch User Interfaces (TUI). TUIs can also be used for encoding and transferring abstract information, e.g., Braille displays [38] and vibrotactile patterns for notification or warning [33], which is outside the scope of the current study.

## *12.1.4 Structure and Contents*

The focus of the current chapter lies mainly in data-driven haptic modeling and rendering techniques used for Virtual Reality. We first introduce a generic definition of the haptic model that relates the user's action to response stimuli in Sect. 12.2. Then we provide a series of examples addressing various aspects of modern haptic modeling and rendering. In Sect. 12.3, an interpolation-based data-driven method with deterministic input-output mapping is introduced. A data-driven haptic model that governs a mapping from the user's action to a stochastic response is presented in Sect. 12.4, along with an example of texture modeling and rendering. Physics-based haptic simulation is introduced in Sect. 12.5, where the Finite Element Method is used to compute the deformation of a hyper-elastic object and the corresponding non-linear force feedback. Finally, to model plastic deformation, we introduce a hybrid approach combining physics-based simulation with a data-driven controller (Sect. 12.6). Altogether, this chapter covers most modeling and rendering techniques that a novice haptics researcher might encounter.

## **12.2 Haptic Rendering**

Exploration and perception of haptic properties, as already discussed in Chap. 2, involve a complex cognitive process incorporating both *action* and *feedback*. For instance, we perceive an object's stiffness by relating the amount of object deformation to the sensed force feedback. Thus, the key component in haptic rendering is a mathematical *model* or *algorithm* governing the action-feedback mapping. The rendering pipeline can therefore be expressed in three steps: sensing the action, estimating the haptic feedback, and displaying it.
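The three-step pipeline can be sketched as a minimal servo loop. All names here (`read_position`, `write_force`, the crude `time.sleep` pacing) are placeholders for device-specific calls; real haptic loops run on real-time schedulers, typically at 1 kHz.

```python
import time

def haptic_loop(read_position, model, write_force, rate_hz=1000, duration_s=1.0):
    """Sense the action, evaluate the model, display the feedback."""
    dt = 1.0 / rate_hz
    t_end = time.perf_counter() + duration_s
    while time.perf_counter() < t_end:
        x = read_position()   # 1. sense the user's action
        f = model(x)          # 2. estimate the haptic feedback
        write_force(f)        # 3. display it on the device
        time.sleep(dt)        # naive pacing, illustration only
```

A simple spring model `model = lambda x: -500.0 * x` plugged into this loop already renders a virtual wall of stiffness 500 N/m.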

## *12.2.1 Haptic Model*

A haptic model is a numerical method implementing the action-feedback mapping. Haptic models can be generally classified into two groups, i.e., *parametric* and *data-driven* methods. In parametric methods, a model with a fixed number of parameters is usually designed based on rules, intuition, and empirical observations of the underlying physical processes. Each parameter in these models usually correlates with a certain material or haptic property of an object. One widely used example of a parametric model is Hooke's elasticity model. In data-driven methods, the underlying physical processes of the object are neglected, and the model of the action-feedback relation is discovered directly from observations of a physical interaction. This approach is advantageous in cases where the action-feedback relation is too complex or unclear, as in texture rendering. Sometimes, a similar problem can be modeled in both ways. For instance, the interaction with a fluid can be modeled by considering the dynamics of its particles [18], or the instant action can be mapped directly to the feedback, bypassing the physics [35].
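The two model families can be contrasted in a few lines. Below, a parametric Hooke spring sits next to a data-driven lookup over displacement-force samples; the stiffness value and the sample data are fabricated for illustration and do not come from the chapter.

```python
import numpy as np

# Parametric model: one parameter (stiffness k, in N/m) with physical meaning.
def hooke_force(penetration_m, k=800.0):
    return -k * max(penetration_m, 0.0)   # no force outside contact

# Data-driven model: no physics, just interpolation over recorded
# displacement/force pairs (synthetic, non-linear response).
disp = np.array([0.0, 0.002, 0.004, 0.008])    # m
force = np.array([0.0, -1.4, -3.5, -9.0])      # N

def datadriven_force(penetration_m):
    return float(np.interp(penetration_m, disp, force))
```

The parametric model extrapolates by construction, while the data-driven model is only as good as the coverage of its recorded action-feedback pairs.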

Haptic models can be designed in a closed- or open-loop setting. In closed-loop rendering, the model output is continuously computed and displayed to the user. In open-loop simulation, the feedback stimuli are independent of the input action during the simulation, but are computed and triggered based on an action-dependent event (event-based rendering). For example, an object's hardness can be simulated by rendering contact vibrations, the pattern of which depends on the impact velocity [44]. The vibrotactile pattern of a click can be similarly rendered for digital buttons or switches [51].
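Event-based rendering of a contact transient is often sketched as a decaying sinusoid whose amplitude scales with the impact velocity. The constants below (frequency, decay time, gain) are illustrative assumptions, not values from the cited work.

```python
import numpy as np

def contact_transient(v_impact, rate_hz=1000, dur_s=0.05,
                      gain=1.0, freq_hz=150.0, tau_s=0.01):
    """Open-loop vibration burst triggered at impact; amplitude ~ velocity."""
    t = np.arange(0.0, dur_s, 1.0 / rate_hz)
    return gain * v_impact * np.exp(-t / tau_s) * np.sin(2 * np.pi * freq_hz * t)
```

Once triggered, the burst plays out independently of subsequent user motion, which is exactly the open-loop property described above.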

## *12.2.2 Action*

The action representing a haptic contact is usually expressed in the form of a vector with a finite set of variables correlated with the target response. For example, the input action of an elastic stiffness model can be described by a displacement vector representing the local deformation at the contact point [52]. In a physics-based simulation, where the object deformation is simulated by an external numerical solver, the action can be represented in the form of boundary conditions [9]. Tactile feedback from a haptic texture is correlated with the velocity and contact pressure during a stroke over a surface; hence, the haptic texture model can be designed with a two-dimensional action space [23]. The dimension of the input space represents the degrees of freedom of the model. By increasing the number of input variables, the model captures new characteristics of the interaction. For instance, to model an anisotropic texture, the authors in [4] included the movement direction as an additional action variable. However, it is important to mention that with every additional input variable, the design and fine-tuning of the haptic model becomes much more difficult [2].

## *12.2.3 Response*

The dimension of the response vector usually corresponds to the degrees of freedom of the haptic device. For instance, the response of a virtual wall contains a single variable representing a force in the direction normal to the wall. To model interaction with a virtual sphere, a three-dimensional force output is computed. If the user interacts with an environment through a virtual tool of arbitrary shape, an additional three-dimensional torque vector is required [12].

A rendering system is considered under-actuated when the device is incapable of delivering feedback for all response variables. In these cases, the model can be adjusted to compensate for the missing actuation, or simplified to eliminate unnecessary calculations. For instance, if a haptic device supports only one-dimensional force actuation (e.g., force feedback for a finger of a haptic glove), the three-dimensional force vector computed during interaction with a virtual object can be projected onto the actuation dimension. In haptic texture rendering, taking into account that the user does not perceive the direction of vibrations, the response output of the model can be simplified from a three- down to a one-dimensional signal.
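The projection onto a single actuation axis is a plain dot product with the (normalized) axis direction. A minimal sketch, with hypothetical names:

```python
import numpy as np

def project_force(f_xyz, axis):
    """Project a simulated 3-D force onto the device's single actuation axis."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)      # ensure unit length
    return float(np.dot(f_xyz, axis))       # signed 1-D command
```

The sign of the result tells the actuator whether to push or pull along its axis; force components orthogonal to the axis are simply lost, which is the cost of under-actuation.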

Several models with the same kind of feedback can share a single output device if they compute independent haptic properties. For example, a stiffness model producing force feedback in the direction normal to the contact can be accompanied by a friction model with a force response in the lateral direction. To simulate a surface texture, stationary vibrations can be added to the force feedback [21].

Depending upon the nature of the stimuli, the response of a haptic model can be deterministic or stochastic. Training a stochastic data-driven model, which produces a continuously changing output for a constant input, is more complicated, since in current state-of-the-art methods the data should be segmented into either piecewise-constant action [4] or piecewise-stationary output [23] segments. Each segment of a stochastic model represents a single model point in the multidimensional input space.

## *12.2.4 Data-Driven Modeling*

In data-driven haptic modeling, the model governing the action-response relation is identified or trained exclusively using experimental data collected during physical interaction with a real object or environment. Data-driven approaches generally omit the underlying physics of the interaction and do not require manual design and tuning of mathematical models. Data-driven modeling is advantageous where the action-response relation is non-linear and too complex for manual design. Even an elastic object can exhibit high complexity and non-linearity due to its irregular morphology (shape). For instance, the feedback during deformation of a fork or spoon varies largely depending on the contact position and orientation (Sect. 12.3). In the case of texture modeling, it gets even more complex, as the feedback depends on many factors related to the surface properties and the applied action (Sect. 12.4). It is very challenging to capture all these factors in a manual model design, which greatly influences realism.

## *12.2.5 Measurement-Based Modeling*

In measurement-based haptic modeling, the action-response mapping is usually governed by a parametric model whose parameters are identified using data collected during interaction with a real object or environment. For instance, a linear stiffness model can be approximated using Hooke's law, while friction can be simulated using a Dahl formulation [41]. Since parametric models usually consider the underlying physics of the interaction, rendering stability can often be guaranteed theoretically. Therefore, a parametric model is often utilized for complex problems where it is challenging to collect sufficient data for interpolation-based approaches. For example, to render a large deformation of a hyper-elastic object, an FEM model can provide a reasonable level of approximation, as discussed in Sect. 12.5. The approximation quality of physics-based models, however, is limited, as many factors are often not included in the model. To increase realism, a hybrid approach combining data-driven and physics-based simulation can be utilized. For instance, in Sect. 12.6 we overview the rendering of plastic deformation, where the non-linear forces are computed using an FEM framework and the plastic flow of the deformation is handled by a data-driven controller.
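As an illustration of such a parametric friction model, a common first-order form of the Dahl model can be integrated with forward Euler: the friction force rises toward the Coulomb level as the contact moves. The parameter values (rest stiffness `sigma`, Coulomb level `f_c`) are illustrative, not identified from measurements.

```python
import numpy as np

def dahl_friction(velocity, dt=1e-3, sigma=1e4, f_c=1.0):
    """First-order Dahl model: dF/dt = sigma * v * (1 - (F/f_c) * sign(v))."""
    f = 0.0
    out = []
    for v in velocity:
        dfdt = sigma * v * (1.0 - (f / f_c) * np.sign(v))
        f += dfdt * dt                      # forward-Euler step
        out.append(f)
    return np.array(out)
```

For a constant sliding velocity, the force converges to the Coulomb level `f_c`; in a measurement-based workflow, `sigma` and `f_c` would be fitted to recorded force data instead of being chosen by hand.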

**Fig. 12.1** General system overview

## **12.3 Deterministic Data-Driven Modeling**

Deterministic models are generally utilized when the response stimuli of the haptic model remain the same for a given input action regardless of the time and history of previous actions. For instance, the deformation of an elastic object that exhibits a unique action-response correspondence can be approximated using a deterministic model [7]. In some cases, the short-term history can be used as a part of the model input. To model visco-elasticity, for example, the rate of deformation, i.e., the difference between immediate and previous deformation states, is required [35]. In other cases, when the current feedback depends on a long sequence of actions, e.g., in plastic deformation modeling, incorporating history into a model input is rather impractical and a physics-based modeling approach becomes inevitable [9].

Interpolation and regression techniques are considered the backbone of deterministic models. *Interpolation methods* compute the feedback stimuli based on the neighboring data points collected during real interaction. The goal of *regression methods*, on the other hand, is to find the best fit of a parametric function for a given set of data points. The parametric function can be as simple as a linear function defined by a single parameter, or as large and complex as a deep neural network.

In this section, we utilize the Radial Basis Functions Network (RBFN) interpolation method for input-output mapping. This method has been found beneficial for haptic rendering due to its simplicity, efficiency, and ability to handle non-linear input-output mappings. As an example, we apply this approach to modeling tool deformation [7], which exhibits non-linearity due to its morphological complexity.

## *12.3.1 Tool Deformation Modeling*

Tool-deformation modeling is a challenging non-linear problem. The morphological complexity of a tool like a spoon or a fork makes the force-displacement relation highly non-linear and anisotropic. Bending a spoon in different directions, for instance, requires different amounts of force. Additionally, physics-based simulation of tool deformation is less practical, as a shape with a relatively thin and long body requires a high resolution of FEM tessellation. Therefore, the interpolation-based data-driven model is a good candidate for modeling the deformation of a tool.

To model the deformation of a tool, the RBFN-based data-driven approach can be utilized. The main objective of the current example is to learn how to define the action and response spaces. Given the action and response spaces of the model, we also need to design a data-collection setup that captures the corresponding action-response pairs during the deformation of physical tools.

**Fig. 12.2** Descriptions of the model input space: **a** the input space is defined with respect to the origin of a tool; **b** the six-dimensional input vector consists of the position of an initial contact **p** and a translation vector **v** that describes the state of deformation; **c** a set of recorded input vectors is used in interpolation, and the force response is approximated at the tool's origin during rendering

**Fig. 12.3** Complex contact deformation: our data-driven model provides a non-linear input-output mapping that allows simulating the following deformations **a** multiple-contact; **b** self-collision; and **c** rolling-contact

## *12.3.2 Action and Response Spaces*

To model the contact deformation of an elastic tool, we define a six-dimensional input space. The first three dimensions describe the position of the initial contact in the tool's local coordinate frame, which is denoted as the *local initial contact* (**p** in Fig. 12.2a). **p** is determined at the moment of initial contact and remains constant in the local coordinate frame during a contact. The last three dimensions are related to the position of the initial contact that remains constant in the global coordinate frame, i.e., the initial contact point on the object surface (**p̃** in Fig. 12.2b). We denote this as the *global initial contact*. At the initial moment of contact, both the local and global initial contact points represent the same point (Fig. 12.2a). However, when the tool begins deforming, **p** penetrates into the surface and moves away from **p̃**, as illustrated in Fig. 12.2b. The difference between the two points explains the state of deformation, and the difference vector **v** = **p̃** − **p** is referred to as the *translation vector*. The translation vector constitutes the last three input dimensions (Fig. 12.3).

The resultant input vector of the model is **u** = [**p**, **v**]. Taking into account that the interaction happens in three-dimensional space, the final input space of the model can be expressed in six dimensions, $\mathbf{u} = [p_x, p_y, p_z, v_x, v_y, v_z]^T$.
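Assembling this six-dimensional input from the two contact points is straightforward; a minimal sketch (function name is hypothetical, both points assumed to be expressed in the tool's local frame):

```python
import numpy as np

def model_input(p_local, p_global):
    """Build u = [p, v] from the local initial contact p and the
    global initial contact p~ (v = p~ - p is the translation vector)."""
    p = np.asarray(p_local, dtype=float)
    v = np.asarray(p_global, dtype=float) - p
    return np.concatenate([p, v])   # [px, py, pz, vx, vy, vz]
```

At the moment of first contact the two points coincide, so **v** is zero and the input encodes an undeformed state.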

It is important to notice that the translation vector **v** differs from the deformation vector used in continuum mechanics (a vector representing the total movement of a particle on a deformed surface). In general, calculating this deformation vector requires the actual geometry of the deformed surface, i.e., a geometry recalculation of the tool deformation. This information is computationally expensive and is generally not available in a data-driven modeling scenario; thus, we decided to avoid it. Instead, we utilize a vector representing the change of the initial contact point during deformation.

The model output is a three-dimensional force vector **f** at the tool's origin (Fig. 12.2a). The force response under a certain deformation should be explicitly determined by the initial contact point and the current position of the external force application, i.e., the encountered surface in our case. Thus, the initial contact point and the translation vector can fully explain the response force at the tool's origin. In our implementation, both inputs are represented in the local coordinate frame of the tool.

## *12.3.3 Data Acquisition and Preprocessing*

We designed and assembled a manual recording setup that captures data from three sources (left side of Fig. 12.4). The three-dimensional force signal was captured by a force/torque sensor (Nano17; ATI Industrial Automation, Inc., Apex, NC, USA) using an NI DAQ acquisition board (PCI-6220; National Instruments, Austin, TX, USA) with a sampling rate of 1000 Hz. The position and orientation of the tool's origin were recorded by a haptic device (PHANToM Premium 1.5; Geomagic Inc., Rock Hill, SC, USA). In order to acquire the orientation of the tool, we designed a custom gimbal encoder (right side of Fig. 12.4). The pitch and roll angles were measured by incremental encoders with an angular resolution of 0.045° (E2-2000; US Digital, Vancouver, WA, USA). The yaw angle was measured by a standard incremental encoder (OME-N; Nemicon, Tokyo, Japan) with an angular resolution of 0.18°, which was mounted by the manufacturer of the haptic device. The raw data from the gimbal encoder were acquired through the original 24-pin Mini Delta Ribbon (MDR) interface of the haptic device using the OpenHaptics library (OpenHaptics 3.4; 3D Systems, Inc., Rock Hill, SC, USA). In order to compute the position and orientation of the tool's origin, we implemented the forward kinematics of the haptic device, taking the angular resolution of the custom gimbal encoder into account.

The collision point between the tool and a flat surface was recorded using the capacitive touch screen of a smartphone (Galaxy S7; Samsung Electronics Co. Ltd., Suwon, Korea). In order to make the tool sensitive to the touch screen, we coated it with a liquid comprising evenly dispersed ultra-fine conductive nano-particles (Nanotips Black; Nanotips Inc., Vancouver, BC, Canada). The position of the initial contact was recorded from the smartphone through the network. The

**Fig. 12.4** Data recording setup. A user holds the handle of the device and presses the tool to surface of the touchscreen. Data from the tool deformation is recorded when the tool is in contact with the touchscreen. Left side - recording hardware; Right side - gimbal encoder

packet delivery latency of the network was less than one millisecond. The position of the initial contact with respect to the world coordinate system (the coordinate system of the haptic device) is stored and translated into the local coordinate system of the tool during the deformation. The first translated initial contact point represents the *local initial contact*, and each subsequent translated initial contact point represents the *global initial contact*. When the position and orientation of the tool's origin change, the global initial contact point moves away from the local initial contact. The vector pointing from the local to the global initial contact point is the *translation vector*. The input vector of the proposed model **u** is the combination of the local initial contact and the translation vector.

To minimize noise, the force signals were filtered using a three-pole Butterworth low-pass filter with a cut-off frequency of 25 Hz. The cut-off frequency of the filter was selected according to the human hand movement capability [11]. Similarly, the position data were smoothed using a third-order Butterworth low-pass filter with the same cut-off frequency of 25 Hz. Only the data points where the tool was in touch with the object were considered, while the other redundant data points were removed.
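This preprocessing step can be sketched with SciPy. The sketch below uses zero-phase `filtfilt` to avoid phase lag in offline processing; whether the original pipeline used zero-phase or causal filtering is not stated in the text, so this is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_25hz(signal, fs=1000.0, order=3, fc=25.0):
    """Third-order Butterworth low-pass (25 Hz cut-off at 1 kHz sampling),
    applied forward and backward for zero phase distortion."""
    b, a = butter(order, fc / (fs / 2.0))   # normalized cut-off
    return filtfilt(b, a, signal)
```

Applied to a recorded force channel, this removes sensor noise well above the bandwidth of voluntary hand motion while preserving the low-frequency deformation forces.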

Each of the four tools that we used for data collection (Fig. 12.5) was cut at the grip point, i.e., the point between the index finger and thumb while holding the tool. We refer to this point as the tool's origin. Then, a 3D-printed adapter was attached to the edge of the cut for mounting the tools to the recording setup, as shown in Fig. 12.5b. It is important to notice that the cut could cause mechanical changes in a tool's structure. However, the major contribution to the haptic feedback during tool-object interaction is provided by the deforming part of a tool. The upper part of a tool, on the other side of the grip point, is grasped in the person's hand and contributes negligible feedback.

In order to detect the contact with the touchscreen, each tool was coated with a thin layer of Nanotips Black (Fig. 12.5b). For the best performance, the coating layer was dried for 2 days. Hereafter, the modified versions of the tools presented in Fig. 12.5b are referred to as real tools.

**Fig. 12.5** A set of real tools for evaluation: **a** illustration of original tools; and **b** modified versions of tools prepared for numerical and psychophysical experiments

## *12.3.4 Model Training*

We develop a computational formulation that relates the aforementioned input-output spaces. A data-driven model of the tool deformation can be understood as a function whose parameters are optimized based on given observations, i.e., a set of input-output recordings from the real tool-surface interaction. The process of computing the model parameters is referred to as *modeling*. During the simulation, the sequence of input vectors is fed into the resultant model, while the model computes a continuous output of the force feedback.

In the literature, several approaches have been proposed for input-output mapping. The most straightforward way is to utilize simplex-based methods [34], where the data are stored in a look-up table and approximated using weighted interpolation. A second way is to utilize feed-forward neural networks [1] that continuously compute a feedback output based on a given input during rendering. In this work, we adopted a radial basis functions network (RBFN), since it has been successfully used in the majority of data-driven simulators of object deformation [27, 35, 60].

An RBFN consists of three layers, i.e., the input, hidden, and output layers. The nodes of the hidden layer represent non-linear RBF activation functions. The input vector belongs to an $n$-dimensional Euclidean vector space, $\mathbf{u} \in \mathbf{R}^n$, and is transformed to a single scalar value at the output layer, $f_l : \mathbf{R}^n \to \mathbf{R}$, which can be described by

$$f\_t(\mathbf{u}) = \sum\_{j=1}^{N} w\_{tj} \phi(\|\mathbf{u} - q\_j\|) + \sum\_{k=1}^{L} d\_{tk} g\_k(\mathbf{u}), \quad \mathbf{u} \in \mathbf{R}^n,\tag{12.1}$$

where *wtj* is the weight constant, *qj* is the center of the radial basis function, the functions *gk*(**u**) (*k* = 1, ..., *L*) form a polynomial term, *t* indexes the basis of the output space of the model, and φ(·) is a radial basis activation function. Since the cubic spline φ(*r*) = *r*<sup>3</sup> is chosen as the RBF kernel, the polynomial term is needed to ensure stability [37].

The weight constants *wt* and polynomial coefficients *dt* can be estimated by solving the following linear system:

$$
\begin{pmatrix} \Phi & G \\ G^T & 0 \end{pmatrix} \begin{pmatrix} w\_t \\ d\_t \end{pmatrix} = \begin{pmatrix} f\_t \\ 0 \end{pmatrix}, \tag{12.2}
$$

where Φ*ij* = φ(‖*ui* − *uj*‖) and *Gik* = *gk*(*ui*) for *i*, *j* = {1, ..., *N*} and *k* = {1, ..., *L*}, respectively. Since the RBFN provides only a vector-to-scalar mapping, each component *ft* of the force vector is computed independently.

The desired weight vector *wt* and polynomial coefficients *dt* of the RBFN model can be calculated using the inverse of the interpolation matrix, as follows

$$
\begin{pmatrix} w\_t \\ d\_t \end{pmatrix} = \begin{pmatrix} \Phi & G \\ G^T & 0 \end{pmatrix}^{-1} \begin{pmatrix} f\_t \\ 0 \end{pmatrix}. \tag{12.3}
$$

Since the size of the interpolation matrix is proportional to the square of the number of selected samples, finding its inverse becomes computationally expensive. It is important to note that *wt* and *dt* are computed independently for the three components of the force vector. In order to compute force responses during rendering (Eq. (12.3)), three matrices should be provided, i.e., *w*, *d*, and *q*. This set of matrices is referred to as the *Haptic Model*.
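The model-building step of Eqs. (12.1)-(12.3) can be sketched as follows for one force component. This is a minimal illustration rather than the authors' implementation: the function names are ours, NumPy's dense solver stands in for whatever solver was actually used, and the cubic kernel φ(*r*) = *r*³ with a first-degree polynomial tail follows the text.

```python
import numpy as np

def fit_rbfn(U, f, kernel=lambda r: r**3):
    """Fit one scalar output (one force component) of an RBFN.

    U : (N, n) array of input vectors u_i; f : (N,) recorded force component.
    Returns weights w (N,) and polynomial coefficients d (n+1,)
    by solving the block system of Eq. (12.2).
    """
    N, n = U.shape
    # Phi_ij = phi(||u_i - u_j||)
    r = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=-1)
    Phi = kernel(r)
    # G_ik = g_k(u_i): first-degree polynomial basis [1, u_1, ..., u_n]
    G = np.hstack([np.ones((N, 1)), U])
    L = G.shape[1]
    A = np.block([[Phi, G], [G.T, np.zeros((L, L))]])
    b = np.concatenate([f, np.zeros(L)])
    sol = np.linalg.solve(A, b)
    return sol[:N], sol[N:]

def eval_rbfn(u, U, w, d, kernel=lambda r: r**3):
    """Evaluate Eq. (12.1) at a query point u."""
    r = np.linalg.norm(U - u, axis=1)
    return w @ kernel(r) + d @ np.concatenate([[1.0], u])
```

Because this is interpolation, the fitted model reproduces the recorded force exactly at every training input; the three force components would each get their own `(w, d)` pair sharing the same centers *q*.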

During model building, the input vector can be directly derived from the sensor readings, i.e., data from the touch-contact sensor and the PHANToM's position encoder. However, during rendering we do not employ the touch-contact sensor, so the initial contact positions and the corresponding translation vector must be estimated.

In order to construct the input vector, the initial contact between the virtual tool and the object surface is required. This implies that the shape of the object must also be provided to the rendering algorithm. One way is to use a 3D mesh model of the tool for collision detection. However, this approach requires a perfect reconstruction of the mesh model. Instead, we decided to build a *Collision Model* out of the local initial contacts from the training set. The collision model is a mesh model whose vertices are taken from the unique initial contact points measured for the haptic model. In order to build the mesh model out of arbitrary points, we utilized a 3D mesh processing software (MeshLab; ISTI-CNR, Pisa, Italy). The main benefit of this design is that the collision model perfectly matches the haptic model, which ensures stability in rendering (Fig. 12.1).

## **12.4 Stochastic Data-Driven Models**

Stochastic data-driven models approximate feedback stimuli with a random nature. The distribution of the stochastic response signal is usually conditioned on the user's action. The classic example of stochastic data-driven methods is haptic texture modeling, where the distribution of the vibrotactile response changes with respect to the applied action, e.g., contact pressure, velocity, and direction of the movement [2].

To approximate stochastic signals conditioned on a variable input, interpolation or regression methods are commonly utilized to convert the applied action to a latent representation, which, in turn, parameterizes a model of a stationary random process. For instance, Romano and Kuchenbecker proposed to employ bilinear interpolation of vibrotactile signals that are encoded in the form of Linear Predictive Coding (LPC) coefficients and stored in a look-up table over a force-velocity action space [50]. This model was further improved in [23] by encoding acceleration patterns into auto-regressive moving average (ARMA) coefficients and by using Delaunay triangulation for interpolation of vibrotactile patterns. In [4], the vibrotactile patterns were interpolated using an RBFN, allowing an arbitrary dimension of the action space of a model.

Another challenge in data-driven modeling of a stochastic response is the segmentation of the signal into stationary vibrotactile patterns with a relatively constant applied action. In [29], the authors proposed to segment acceleration patterns into sections with decaying waveforms. They assumed that the acceleration signal consists of decaying waves, where the magnitudes of the local extrema are decreasing. The main drawback of this method was over-segmentation of the stochastic signal and under-segmentation of the patterned signal. These limitations were partially resolved in [30] by including a deadband threshold in the segmentation criteria that constrains the starting point of new segments. Culbertson et al. proposed to use the AutoPARM algorithm for acceleration signal segmentation [23]. The AutoPARM algorithm finds an optimal partition of the signal by applying an evolutionary algorithm and minimizing the minimum description length (MDL) of each fragment [24]. In a similar fashion, other algorithms estimating structural breaks of time-series signals can be used for acceleration signal partitioning, e.g., AutoSLEX [20]. Assuming that a stationary acceleration corresponds to a relatively constant input, the input variables were averaged. In [2], the segmentation was performed in the action space, where the input signals are partitioned into piecewise-constant fragments. The optimal segmentation was achieved using a bottom-up agglomeration strategy.

In this section, we introduce a new method that allows modeling of both isotropic and anisotropic textures through unconstrained tool-based interaction. To incorporate the directionality of the texture, we developed an action-space segmentation concept that allows modeling haptic textures with an arbitrary number of input variables. To store and interpolate vibrotactile signals, we developed a radial basis function network (RBFN) based haptic texture model allowing more general and flexible data-driven modeling. The complete pipeline of haptic texture modeling and rendering is provided in Fig. 12.6.

**Fig. 12.6** Data-driven haptic texture modelling/rendering pipeline

## *12.4.1 Haptic Texture Modeling Pipeline*

Surface haptic texture is essential information for humans to discriminate objects. While small-scale geometry variation is one of the main causes of haptic texture, humans can effectively perceive fine details of this variation as high-frequency vibrations, not only through bare-hand interaction but also through tool-mediated stroking. Sometimes, these small-scale geometry variations are anisotropic: the characteristic of the vibration varies depending on the stroking direction. This direction-dependent haptic texture can serve as a crucial cue for haptically identifying surfaces, e.g., identifying a wooden surface based on its directional grain, or judging the quality of a fabric using its thread grain.

Even though all haptic texture models have their own contributions, the conceptual representation of most models remains similar. The *model space* is an abstract coordinate system that describes the location of *model points*. Each model point *mi* = {*xi*, *yi*} is described by a location inside the model space and a feedback pattern, where *xi* denotes the *n*-dimensional vector of the model point's location and *yi* = {*a*1, *a*2, ..., *an*} represents the feedback pattern. Thus, the general haptic texture model can be described by the set of model points *M* = {*m*1, *m*2, ..., *mp*}. An example is provided in Fig. 12.7. Since the data-driven model is in most cases an interpolation model, a minimal set of model points *Mmin* is required for a stable output. In some cases, *Mmin* consists of synthetic model points, which mark the interpolation boundaries.

The modeling algorithm of haptic textures consists of three stages: *data preprocessing and segmentation*; *model building*; and *rendering*.

## *12.4.2 Data Processing and Segmentation*

In this section, we develop a generic segmentation algorithm that partitions a multivariate input signal. The segmentation algorithm searches for the optimal partition, where the deviation of the input vectors within each segment is bounded by a set of constraint functions and corresponding thresholds. The set of constraint functions and thresholds is selected for the particular task, where a single pair of constraint function and threshold can be applied to one or several input variables at the same time. To find the optimal partition of the input signal, we adopted the bottom-up agglomeration principle and employed it in three configurations, i.e., offline segmentation for single- and multi-trial data collection, and online segmentation of the streaming signal.

#### **Problem Formulation**

The main objective of the algorithm is to partition a multivariate signal *X* into a minimum number of segments *M* = {*m*1, ..., *ml*}, each having an arbitrary number of input vectors, such that the distribution of input vectors within each segment *mi* satisfies a given set of inequality constraints *G* = {*g*1(*mi*) ≤ τ1, ..., *gp*(*mi*) ≤ τ*p*}. It is important to note that each constraint can condition a single or multiple components of the input space. Furthermore, any input variable can be conditioned by multiple constraints.

The preceding formulation imposes a restriction on the distribution of each segment but can admit multiple solutions (several partitions can satisfy a given set of conditions). For example, a signal that initially satisfies the given set of constraints can be further partitioned until every segment contains a single sample. Thus, in order to find an optimal partition of the signal, the number of resultant segments should be minimized. In this manner, the signal segmentation task can be seen as an optimization problem:

$$\begin{aligned} \underset{n(M)\in\mathbb{Z}^+}{\text{minimize}} \quad n(M) \\ \text{subject to} \quad \quad \quad g\_j(m\_i) \le \tau\_j \quad \text{for } j = 1, \ldots, p \end{aligned} \tag{12.4}$$

where *n*(·) denotes the cardinality of a set and **Z**<sup>+</sup> = {1, ..., *n*(*X*)} is a finite set of positive integers bounded by the maximum number of samples in *X*. Below, we introduce the recursive constraint projection algorithm that segments a multivariate signal into a minimum number of segments satisfying a given set of constraints.

## *12.4.3 Bottom-Up Agglomerative Segmentation*

The bottom-up algorithm breaks the input signal into a set of segments, where each segment contains at most δ2 data points. Then, the merging cost *e* for each pair of neighboring segments is calculated using a cost function *f*¯ and stored separately in a set *E* = {*e*1, ..., *en*−1}. The pair of neighboring fragments with the lowest cost is merged, and the merging costs for the resultant and neighboring segments are updated. The agglomeration process is repeated until the lowest cost in the entire set *E* exceeds the predefined threshold τ¯. Detailed steps of the algorithm are provided in the form of pseudocode in Algorithm 1, where the function *initialPartition*(·, ·) partitions the input signal into a sequence of segments having δ2 data points each.
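The agglomeration loop can be sketched as below. This is our own minimal illustration of the principle, not Algorithm 1 itself: the cost function and threshold are supplied by the caller, and recomputing all pairwise merge costs on every pass is deliberately naive (the actual algorithm only updates the costs adjacent to the merged pair).

```python
import numpy as np

def bottom_up_segment(x, cost_fn, tau, delta2=2):
    """Bottom-up agglomerative segmentation sketch.

    x : 1-D signal; cost_fn(segment) -> merging cost of a candidate
    merged segment; tau : threshold; delta2 : initial fragment length.
    Returns a list of (start, end) index pairs (end exclusive).
    """
    # initial partition into fragments of at most delta2 samples
    bounds = list(range(0, len(x), delta2)) + [len(x)]
    segs = [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]
    while len(segs) > 1:
        # cost of merging each pair of neighboring segments
        costs = [cost_fn(x[segs[i][0]:segs[i + 1][1]])
                 for i in range(len(segs) - 1)]
        i = int(np.argmin(costs))
        if costs[i] > tau:          # cheapest merge already violates tau
            break
        segs[i:i + 2] = [(segs[i][0], segs[i + 1][1])]
    return segs
```

With an RMS-deviation cost (as in Eq. (12.5)), a step signal is split exactly at the step, because merging across the step costs more than τ¯ while merges inside each plateau cost nothing.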


Generally, the optimization problem given by Eq. 12.4 is a zero-one integer programming problem, which has been proven to be NP-complete [42]. The bottom-up agglomerative segmentation algorithm, on the other hand, belongs to the family of branch-and-bound algorithms, which have commonly been used as relaxations of the zero-one integer programming problem [19]. Therefore, the proposed algorithm provides an approximation to Eq. 12.4. However, the proposed approximation is computationally efficient, which is crucial for real-time applications. Furthermore, due to the dynamic programming nature of the bottom-up agglomerative segmentation algorithm, the resultant partition achieves the minimum number of segments while balancing constraint function values across segments. This property is very important, as the resultant partition represents a set of evenly significant segments. Additionally, the bottom-up approach converges in a finite number of steps. When the complete signal satisfies the given constraints as a single segment, the algorithm reaches the maximum number of operations (merging calls). The bottom-up approach also guarantees that all segments in the resultant partition satisfy the given set of constraints, as long as all of the finest segments from the initial partition satisfy them.

#### **Constraint Function Design**

In order to evaluate the proposed algorithm, we developed two exemplary cases, i.e., data segmentation for modeling *isotropic haptic textures* and *anisotropic haptic textures*. Similarly, a user can define other task-dependent constraints and apply them to an arbitrary multivariate signal. For instance, our approach can be used for other vibrotactile data-driven models, such as a virtual reality bicycle [49], texture classification [54], and texture rendering on variable friction displays [47]. Furthermore, our segmentation algorithm can be applied for redundancy reduction in modeling non-linear force responses of visco-elastic object deformation [34], tool deformation [7], and interaction with viscous fluids [35], where data segments with negligible variation can be substituted by representative vectors.

#### **Isotropic Haptic Texture**

In order to build a model of an isotropic haptic texture, at least two input variables are required, i.e., the normal force and the velocity magnitude [23]. The input stream *X*(*t*) from the recording device generates two-dimensional input vectors *xt* = (*f*, *v*) at each time step *t*. In order to build a reliable haptic texture model, the input variables within each segment should be relatively constant.

The average deviation of the normal force within each segment *mi* can be conditioned using the following constraint

$$f\_1(m\_i) = \sqrt{\frac{\sum\_{i=1}^N (f\_i - \mu\_f)^2}{N}} \le \tau\_1,\tag{12.5}$$

where *N* denotes the size of the set and μ*f* is the mean force of the segment. The mean deviation of the velocity magnitude can be constrained as follows

#### 12 Haptic Software Design 555

$$f\_2(m\_i) = \sqrt{\frac{\sum\_{i=1}^N (v\_i - \mu\_v)^2}{N}} \le \tau\_2,\tag{12.6}$$

where μ*v* is the mean velocity magnitude of the segment. Based on the foregoing equations, the segmentation algorithm finds the optimal partition, where the average deviations of the normal force and the velocity magnitude are at most τ1 and τ2, respectively.
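As a concrete reading of Eqs. (12.5) and (12.6), both isotropic constraints reduce to an RMS-deviation check per segment. A minimal sketch with our own helper names, assuming the force and velocity samples of a candidate segment are available as arrays:

```python
import numpy as np

def rms_deviation(values):
    """RMS deviation from the segment mean, as in Eqs. (12.5)-(12.6)."""
    values = np.asarray(values, dtype=float)
    return float(np.sqrt(np.mean((values - values.mean()) ** 2)))

def isotropic_ok(segment_f, segment_v, tau1, tau2):
    """True if a candidate segment satisfies both isotropic constraints."""
    return (rms_deviation(segment_f) <= tau1
            and rms_deviation(segment_v) <= tau2)
```

Either helper can serve directly as a cost function for the bottom-up agglomeration: merging two plateaus of different force levels raises the RMS deviation above τ1 and the merge is rejected.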

#### **Anisotropic Haptic Texture**

Anisotropic texture modeling requires partitioning the position data into segments with relatively straight movement trajectories. Thus, two additional input variables should be included in the streaming signal, *xt* = (**p**, *f*, *v*), where **p** is a two-dimensional position vector. The maximum deviation of the position points from the line segment between the starting and ending points can be computed as follows

$$g(m\_i) = \left\| \frac{\| (\mathbf{p}\_k - \mathbf{p}\_1) \times (\mathbf{p}\_k - \mathbf{p}\_N) \|}{\| \mathbf{p}\_N - \mathbf{p}\_1 \|} \right\|\_{\infty} \tag{12.7}$$

where *k* = {2, ..., *N* − 1} and ‖·‖ denotes a vector norm. The foregoing equation can be used as a constraint function limiting the deviation of the position points from the straight line (Fig. 12.8a). However, it does not prevent a change of direction close to a reverse movement, since the position points remain close to the line segment [4]. Therefore, an additional equation preventing loop-outs (changes of the movement trajectory to the reverse direction) is required.

$$h(m\_i) = \frac{\sqrt{\|\mathbf{p}\_k - \mathbf{p}\_1\|^2 - b^2} + \sqrt{\|\mathbf{p}\_k - \mathbf{p}\_N\|^2 - b^2}}{\|\mathbf{p}\_N - \mathbf{p}\_1\|},\tag{12.8}$$

where *b* is the distance between the point **p***k* and the line segment (**p**1, **p***N*). The function *h*(·) equals unity when a segment of the movement trajectory does not contain loops, and exceeds one otherwise. Combining Eqs. 12.7 and 12.8, we can create a constraint function allowing segmentation of the position data into relatively straight line segments.

$$f\_3(m\_i) = \begin{cases} g(m\_i), & \text{if } h(m\_i) = 1 \\ \infty, & \text{otherwise} \end{cases} \le \tau\_3 \tag{12.9}$$

By using these three constraints, the algorithm can find the optimal partition, where movement trajectories are approximately straight lines, while the normal forces and velocity magnitudes remain relatively constant.
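The straightness constraint of Eqs. (12.7)-(12.9) can be sketched for a 2-D trajectory as follows; the function name and the numerical tolerance on *h* are our assumptions, and the sketch assumes the segment's start and end points do not coincide.

```python
import numpy as np

def straightness_constraint(P):
    """f3 of Eq. (12.9) for a segment of 2-D positions P, shape (N, 2).

    Returns the maximum point-to-chord distance g(m_i) if the trajectory
    contains no loop-outs (h(m_i) == 1), and infinity otherwise.
    """
    P = np.asarray(P, dtype=float)
    p1, pN = P[0], P[-1]
    chord = np.linalg.norm(pN - p1)          # assumes p1 != pN
    d = P[1:-1] - p1
    e = P[1:-1] - pN
    # 2-D cross product magnitude gives distance-to-line * chord
    cross = np.abs(d[:, 0] * e[:, 1] - d[:, 1] * e[:, 0])
    b = cross / chord                        # distance of each p_k to the line
    g = b.max() if len(b) else 0.0           # Eq. (12.7)
    # Eq. (12.8): h == 1 iff every p_k projects between p1 and pN
    a1 = np.sqrt(np.maximum(np.sum(d ** 2, axis=1) - b ** 2, 0.0))
    a2 = np.sqrt(np.maximum(np.sum(e ** 2, axis=1) - b ** 2, 0.0))
    h = (a1 + a2) / chord
    return g if np.all(h <= 1.0 + 1e-9) else np.inf    # Eq. (12.9)
```

A nearly straight stroke returns its small lateral deviation, which is then compared against τ3; a trajectory that overshoots the end point and turns back returns infinity, so such a segment can never satisfy the constraint.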

The overall segmentation procedure concludes with the elimination of segments shorter than 75 samples, which are assumed to be data from transition periods with very high curvature. Note also that we skip merging of neighboring segments if the mutual mean of the velocity or normal force magnitudes is near zero. The number of samples in these subsegments is normally very small, and they are removed from the set *M*¯. This is reasonable, since subsegments with near-zero mean velocity or normal force can be assumed to contain data from the very beginning or very end of the contact, which normally exhibit very little vibration. The segmentation result for 10 seconds of data is shown in Fig. 12.8b.

**Fig. 12.8** Bottom-up Agglomerative Segmentation and resultant partition

## *12.4.4 Multi-trial Data Collection*

In this section, we propose a novel algorithm for representative sample selection across multiple recording trials. The main aim of this algorithm is to populate the input space with significant model points from multiple trials while reducing the number of outliers. Furthermore, a generic haptic texture model is also provided. This generic model provides the necessary platform for other haptic texture modeling algorithms to benefit from the aforementioned sample selection algorithm.

Although none of the available sample selection algorithms can be directly applied to model point selection for data-driven haptic texture modeling, the ideas behind several sample selection algorithms can be generalized and extended for this task. For example, the Edited and Condensed Nearest Neighbor algorithms (ENN [58] and CNN [31]) were initially designed for classification tasks based on the k-Nearest Neighbors (k-NN) classifier. The former algorithm is usually used for outlier reduction, whereas the latter eliminates redundant samples from a given set. Recently, Arnaiz-Gonzalez et al. adapted the ideas of CNN and ENN for regression tasks [10].

Inspired by the work in [10], we extended the ideas of ENN and CNN to representative model point selection for data-driven haptic texture modeling. Instead of the k-NN classifier, the general haptic texture model is used in our approach.

The pseudocode of the proposed method is depicted in Algorithm 2. The algorithm starts with the outlier reduction procedure (lines 1–12), which is followed by

**Algorithm 2** Sample Selection Algorithm

```
Input: M = {(x1, y1), ..., (xn, yn)}, k, l, τ1, τ2
Output: M̂ ⊆ M
1: Removing outliers:
2: ρ̄ ← getAverageSparsity(M)
3: for i = k + 1 to |M| do
4:    model ← train(M \ {xi, yi})
5:    ŷi ← model.simulate(xi)
6:    d ← getDistance(yi, ŷi)
7:    ρi ← getLocalSparsity(mi, M)
8:    θ ← τ1 + α · (ρi/ρ̄ − 1)
9:    if (d > θ) then
10:      M ← M \ {xi, yi}
11:   end if
12: end for
13: Removing redundant patterns:
14: M̂ ← {(x1, y1), ..., (xk, yk)}
15: for j = k + 1 to |M| do
16:    model ← train(M̂)
17:    ŷj ← model.simulate(xj)
18:    d ← getDistance(yj, ŷj)
19:    if (d > τ2) then
20:      M̂ ← M̂ ∪ {(xj, yj)}
21:   end if
22: end for
```
redundant sample elimination (lines 13–22). The input of the algorithm consists of an initial set of model points *M* = {{*x*1, *y*1}, ..., {*xn*, *yn*}}, where the first *k* elements form the minimal set of model points. The threshold values τ1 and τ2 are used to control the reduction rate of outliers and redundant model points, respectively.

#### **Outlier Reduction**

Outlier reduction is an iterative process over the initial set *M*, where each model point *mi* = {*xi*, *yi*} is examined one at a time, starting from the (*k* + 1)th element of the set. In each iteration, one model point is temporarily removed from the initial set, *M* \ {*xi*, *yi*}. The resultant set is used for model training. Following this, the feedback pattern ŷ*i* is estimated by feeding the input vector *xi* into the model. If the estimated ŷ*i* and original *yi* feedback patterns are considerably different, the probability that the *i*th sample is an outlier increases. This dissimilarity means that the contribution of the feedback pattern *yi* contradicts the contributions of the neighboring ones. The dissimilarity between two feedback patterns is calculated by a dissimilarity metric, which is explained at the end of this section. The threshold value τ1 denotes the level of dissimilarity at which the model point is permanently removed from the set *M*.

This outlier detection strategy works well for dense regions, where a model point resembles its neighbors. However, it can be misleading for sparse regions. Neighboring model points in sparse regions are usually different, since they are far from each other. Thus, the threshold τ1 should adapt to the local density of the model space. In order to solve this problem, the regularization term α · (ρ*i*/ρ̄ − 1) is introduced, where ρ̄ and ρ*i* are the average and local sparsity of the model space, respectively. When the local sparsity equals the average one, the regularization term turns to zero. Similarly, when the local sparsity is higher than the global one, the adaptive threshold value θ is increased, and vice versa. The parameter α represents the sensitivity of the algorithm to the local density. It is recommended to estimate α using the following equation.

$$
\alpha = \tau\_1 \cdot \frac{\widehat{\sigma}}{\bar{\rho}},
\tag{12.10}
$$

where σ̂ denotes the mean deviation of the local sparsity at each model point from the average sparsity ρ̄. In order to estimate the local sparsity ρ*i* of each model point *mi* in a two-dimensional model space, we built the Delaunay triangulation excluding the target model point *mi* and computed the average distance from *mi* to its three enclosing neighbors. Similarly, the average distance to the four surrounding model points of the enclosing tetrahedron represents the local density in a three-dimensional model space.

#### **Redundant Sample Elimination**

Unlike the previous stage, the process of redundant sample elimination starts with the minimal set, which contains only *k* model points. In every iteration, the haptic texture model is trained using the set *M̂*. The set *M̂* is extended by the candidate model point *mi* if the difference between the original and simulated feedback patterns exceeds the threshold τ2. This iterative process finishes when all samples from *M* have been assessed.

#### **Error Metric**

The error metric used for comparing acceleration patterns is the spectral RMS error. It measures the difference between the approximated pattern â[*n*] and the recorded acceleration pattern *a*[*n*]. The spectral RMS error is given as:

$$E\_n = e\_n(\widehat{a}[n]) = \frac{RMS(F(\widehat{a}[n]) - F(a[n]))}{RMS(F(a[n]))},\tag{12.11}$$

where RMS is the root-mean-square operator in the frequency domain and *F*(·) is the discrete Fourier transform. This error metric is preferred since it accounts for perceptual differences better than time-domain error metrics.
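Eq. (12.11) translates directly into a few lines. This is a sketch; the choice of `numpy.fft.rfft` (real-input FFT) and the function name are ours.

```python
import numpy as np

def spectral_rms_error(a_hat, a):
    """Spectral RMS error of Eq. (12.11): the RMS of the spectrum of the
    residual (a_hat - a), normalized by the RMS of the spectrum of a."""
    A_hat, A = np.fft.rfft(a_hat), np.fft.rfft(a)
    rms = lambda z: np.sqrt(np.mean(np.abs(z) ** 2))
    return float(rms(A_hat - A) / rms(A))
```

By construction the metric is 0 for a perfect reconstruction and 1 when the approximation is silence, which makes values between the two easy to interpret as a relative spectral error.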

## *12.4.5 Online Segmentation of Motion Primitives*

In order to perform online segmentation, we introduce two contributions. First, to apply the given set of constraints, we developed a Recursive Constraint Projection algorithm. The segmentation process starts with the first pair of constraint function and threshold. The resultant segments from the first round are sub-segmented according to the second constraint function, and the process continues until all constraints are applied. The second improvement is an online segmentation schema. Inspired by the sliding-window approach in [43], we developed a novel concept where the streaming data is partitioned inside a segmentation queue (refer to Sect. 12.4.5 and Fig. 12.9).

**Fig. 12.9** Basic principle of the online segmentation schema. The example illustrates how the repository having three segments in (a) is extended by a fourth one some time later. © 2022, IEEE. Reprinted, with permission, from [2]

## **Recursive Constraint Projection**

The algorithm segments a multivariate signal *X* by recursively projecting it onto the constraints from the set *G*. The first constraint is applied to partition the complete signal, and each resultant segment is recursively sub-segmented using the remaining constraints individually. The algorithm consists of two parts, i.e., a recursive function projecting the multivariate signal onto the constraints and a bottom-up agglomerative data segmentation algorithm that finds an optimal segmentation for a given constraint.

The algorithm starts with a single element in the set of segments *M*, where the only segment is the complete multivariate signal, *M* = {*m*1 | *m*1 ∼ *X*}. Every call of the recursive function commences with pulling a single constraint *g*¯ = {τ¯, *f*¯}

**Algorithm 3** Recursive Constraint Projection

```
Input: X = {x1, ..., xN}
       G = {(τ1, g1(·)), ..., (τp, gp(·))}
Output: M = {m1, ..., ml}
1: M ← X
2: M ← RecConstProj(M, G)
3: return M
4:
5: function RecConstProj(M, G)
6:    (τ̄, f̄) ← SelectConstraint(M, G)
7:    G ← G \ {(τ̄, f̄)}
8:    M̄ ← ∅
9:    for all mi ∈ M do
10:      M̄ ← M̄ ∪ SegmentData(mi, τ̄, f̄(·))
11:   end for
12:   if G ≠ ∅ then
13:      M ← RecConstProj(M̄, G)    ▷ Recursive call
14:   else
15:      M ← M̄
16:   end if
17: end function
```
from the set *G*. Afterwards, each segment in *M* is partitioned to satisfy the selected constraint *g*¯, and the resultant segments are stored in *M*¯. If the set *G* is nonempty, the recursive function is called again, passing *M*¯ and *G*. Thus, the depth of the recursion equals the initial number of constraints, and the signal is sub-segmented in every call such that the resultant segments satisfy the selected constraint. Regardless of which constraint is selected first, the signal will eventually be partitioned into segments satisfying all constraints. However, in order to reduce the computational complexity, we introduce a constraint selection criterion (Algorithm 4).

Due to the recursive nature of the proposed algorithm, the number of calls at each recursion level equals the number of segments produced at the previous one. It is therefore reasonable to arrange the order of constraints in such a way that the constraints producing a lower number of segments are applied earlier. For instance, suppose the same signal is partitioned using two different constraints producing two and four segments, respectively. The next round will require only two calls in the former case, whereas in the latter case the recursion will be called four times. Thus, for a case with a greater number of constraints, this constraint selection strategy can considerably reduce the computational complexity. On the other hand, it is also computationally expensive to apply every constraint and select the one with the least number of segments. Therefore, we introduce an alternative measure ψ*i* = *c*¯/τ*i*, representing the merging cost of the complete signal normalized by the threshold value of the constraint. The measure ψ*i* is correlated with the number of resultant segments, meaning that a lower value produces fewer segments and vice versa. A value of ψ*i* less than or equal to one indicates that the signal on average satisfies the given constraint, but should nevertheless be further partitioned to meet the condition locally throughout the complete signal.

#### **Algorithm 4** Constraint Selection

**Fig. 12.10** Example illustrating the recursive segmentation. For simplicity, we assume that the constraints are selected in order and the segments are always bisected (partitioned into two sub-segments of equal length). © 2022, IEEE. Reprinted, with permission, from [2]

This selection criterion can be applied only if the threshold values of all constraints are strictly positive. Most of the commonly used energy and distance cost functions are non-negative, and the thresholds are strictly positive. However, if the task requires thresholds that are equal to or less than zero, it is recommended to follow the rules commonly used in coordinate descent optimization, i.e., selecting constraints one by one or randomly.

A simple example illustrating the recursive segmentation process is depicted in Fig. 12.10. Suppose that we have a three-dimensional signal where each variable has an individual constraint. First, the RCP algorithm selects an optimal constraint using Algorithm 4. Then, the complete signal is segmented by Algorithm 1 using the selected constraint. A similar process is repeated for the two resultant segments using the remaining set of constraints. This recursive process continues until all constraints are applied. The depth of the recursion in this example equals three, which is the number of initially available constraints.
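The recursive structure of Algorithm 3 can be sketched compactly. This is our own illustration: constraints are applied in list order for simplicity instead of via Algorithm 4, and `segment_data` stands in for the bottom-up agglomeration of Algorithm 1.

```python
def rec_const_proj(segments, constraints, segment_data):
    """Recursive Constraint Projection (Algorithm 3), sketched.

    segments     : list of signals (each a list of samples)
    constraints  : list of (tau, g) pairs, applied here in list order
    segment_data : callable (seg, tau, g) -> sub-segments of seg,
                   each satisfying g(sub) <= tau
    """
    if not constraints:
        return segments
    (tau, g), rest = constraints[0], constraints[1:]
    refined = []
    for seg in segments:                 # project every current segment
        refined.extend(segment_data(seg, tau, g))
    # recursion depth equals the number of constraints
    return rec_const_proj(refined, rest, segment_data)
```

Each recursion level consumes one constraint and refines every segment produced by the previous level, so the final partition satisfies all constraints simultaneously.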

#### **Online Segmentation Schema**

At this point, we explain the architecture of the online segmentation algorithm that finds the optimal partition of the streaming signal (Fig. 12.9). The algorithm performs segmentation over a segmentation queue. The incoming input vectors are buffered into fragments of length δ1. Each buffered fragment is then fed into the segmentation queue, which triggers the segmentation process by passing the data from the segmentation queue and the set of constraints to the recursive constraint projection algorithm. If there is more than one segment in the segmentation queue after partitioning, all except the last segment are stored in the repository. The last segment remains in the queue and is used for segmentation in the next iteration, since it can be part of a future segment. Thus, the segmentation queue is always non-empty and has a variable length, which makes our technique different from conventional sliding-window approaches [43].
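The queue mechanics above can be sketched as a generator. The names are ours, and `segment_fn` stands in for the recursive constraint projection applied to the queue contents:

```python
def online_segment(stream, segment_fn, delta1=64):
    """Online segmentation schema sketch (Fig. 12.9).

    stream     : iterable of input vectors
    segment_fn : callable queue -> list of segments (lists of samples),
                 e.g., recursive constraint projection
    delta1     : buffer length that triggers a segmentation pass
    Yields finished segments; the last segment always stays in the
    queue, since it may still grow with future data.
    """
    queue, buf = [], []
    for x in stream:
        buf.append(x)
        if len(buf) < delta1:
            continue
        queue.extend(buf)
        buf = []
        segs = segment_fn(queue)
        # all but the last segment are final and go to the repository
        for s in segs[:-1]:
            yield s
        queue = list(segs[-1])
    # flush whatever remains when the stream ends
    queue.extend(buf)
    if queue:
        yield queue
```

Note that the queue never empties during streaming and its length varies with the data, which is the property that distinguishes this schema from a fixed sliding window.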

The parameters δ1 and δ2 are required to reduce the processing time of the segmentation. When δ1 is set to one, the segmentation process is triggered at the sampling frequency of the input signal. Usually, in haptic modeling, the streaming frequency is very high and the change of the input state within one tick is negligible. Thus, it is reasonable to invoke the segmentation process in steps of δ1 samples. Similarly, if δ2 is set to one, at the beginning of segmentation the signal will be broken into fragments of one sample each. In such a case, the agglomeration process will take longer.

## *12.4.6 Interpolation Model*

The goal of the interpolation model is to estimate the vibration output for a given input data sample by interpolating the captured data. We denote the input data sample as a 3D vector, **u** = (*vx*, *vy*, *fn*), where *vx* and *vy* form the 2D tool velocity vector and *fn* is the normal response force. Since the output of the interpolation model is a time-series high-frequency vibration, it is more convenient to express it using a time-varying parametric model. For this, auto-regressive (AR) models are commonly used in data-driven haptic texture rendering, which we also adopt.

However, the coefficients of the AR model cannot be directly interpolated due to stability problems, which occur when the poles of the transfer function H in Fig. 12.11 are not within the unit circle in the complex plane [26]. Therefore, we convert the AR coefficients into line spectrum frequency (LSF) coefficients for storage in the interpolation model, as introduced in [23]. For rendering, we restore the AR coefficients from the LSF representation.
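The stability condition above can be checked directly: the poles of the all-pole AR transfer function are the roots of its characteristic polynomial, and all of them must lie strictly inside the unit circle. A small sketch (not from the source, which works with LSF coefficients precisely to avoid this check during interpolation):

```python
import numpy as np

def ar_is_stable(a):
    """Check stability of an AR model x[t] = sum_k a[k] * x[t-1-k] + e[t].

    The poles of H(z) = 1 / (1 - sum_k a[k] z^-(k+1)) are the roots of
    z^p - a[0] z^(p-1) - ... - a[p-1]; the model is stable iff all of
    them lie inside the unit circle.
    """
    a = np.asarray(a, dtype=float)
    poly = np.concatenate(([1.0], -a))   # coefficients of the pole polynomial
    return bool(np.all(np.abs(np.roots(poly)) < 1.0))
```

For example, `ar_is_stable([1.5, -0.56])` is stable (poles 0.7 and 0.8), while `ar_is_stable([2.0, -1.0])` is not (double pole on the unit circle). Naive averaging of AR coefficient vectors offers no such guarantee, which is why the conversion to LSF coefficients is used before interpolation.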

Another contribution of this dissertation is the use of a radial basis function network (RBFN) as an interpolation model for texture modeling. The RBFN interpolation model outperforms simplex-based interpolation in two respects. First, the output is computed using basic mathematical operations, which makes it fast while the interpolation quality remains good. Second, the input space can be easily extended. For example, it is

**Fig. 12.11** RBFN architecture for model storage and interpolation

possible to store several different models inside the network, to switch between them, or even to interpolate using an additional input during rendering.

The RBFN architecture used in this work consists of three layers (Fig. 12.11). The input of the network is a vector **u** ∈ **R**<sup>*n*</sup> of the *n*-dimensional Euclidean vector space. The nodes of the hidden layer are non-linear RBF activation functions. The output of the RBFN is a scalar function of the input vector, *f* : **R**<sup>*n*</sup> → **R**, which is described as

$$f(u) = \sum\_{j=1}^{N} w\_j \phi(\|u - q\_j\|) + \sum\_{k=1}^{L} d\_k g\_k(u), \quad \mathbf{u} \in \mathbf{R}^n \tag{12.12}$$

where *w<sub>j</sub>* is a weight constant and *q<sub>j</sub>* is the center of the radial basis function. The functions *g<sub>k</sub>*(**u**) (*k* = 1, ..., *L*) form a basis of the space **P**<sup>*n*</sup><sub>*m*</sub> of polynomials of degree at most *m* in *n* variables. Since we use the first-order polyharmonic spline φ(*r*) = *r* as the RBF kernel, the polynomial term is necessary; otherwise, the interpolation results might be less accurate [37]. Using Eq. (12.12), a linear system can be obtained to estimate the weight vector **w** of the radial basis functions as well as the polynomial coefficient vector **d**, such that

$$
\begin{pmatrix} \Phi & G \\ G^T & 0 \end{pmatrix} \begin{pmatrix} w \\ d \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix} \tag{12.13}
$$

where Φ<sub>*ij*</sub> = φ(‖**u**<sub>*i*</sub> − **u**<sub>*j*</sub>‖) and *G<sub>ik</sub>* = *g<sub>k</sub>*(**u**<sub>*i*</sub>) for *i*, *j* = 1, ..., *N* and *k* = 1, ..., *L*.
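Fitting and evaluating this interpolant can be sketched in a few lines of numpy. The sketch below assembles the saddle system of Eq. (12.13) for the kernel φ(*r*) = *r* with a degree-1 polynomial tail; it is a minimal generic illustration, not the sparse SpaRSA-based training used later in the chapter.

```python
import numpy as np

def rbf_fit(U, f):
    """Solve the saddle system of Eq. (12.13) for phi(r) = r with a
    degree-1 polynomial tail. U: (N, n) sample inputs, f: (N,) outputs.
    Returns the RBF weights w and polynomial coefficients d."""
    N, n = U.shape
    Phi = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=-1)  # Phi_ij
    G = np.hstack([np.ones((N, 1)), U])                           # G_ik = g_k(u_i)
    L = n + 1
    A = np.block([[Phi, G], [G.T, np.zeros((L, L))]])
    rhs = np.concatenate([f, np.zeros(L)])
    sol = np.linalg.solve(A, rhs)
    return sol[:N], sol[N:]

def rbf_eval(U, w, d, x):
    """Evaluate the interpolant of Eq. (12.12) at a query point x."""
    r = np.linalg.norm(U - x, axis=1)
    return float(r @ w + d[0] + d[1:] @ x)
```

By construction the interpolant passes through every sample point, and thanks to the polynomial tail it reproduces linear functions exactly, which is the accuracy benefit mentioned above.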

**Fig. 12.12** Input feature space of the haptic texture model

The input vector **u** is fed into the three nodes of the input layer. There are *m* outputs, each corresponding to one LSF coefficient. One extra output interpolates the variance provided by the Yule-Walker algorithm [32].

Once a set of LSF coefficients is obtained, the output vibration is estimated in two steps. First, the estimated LSF coefficients are converted to AR coefficients. Second, the vibration value is calculated by applying a Direct Form II digital filter to the *m* previous outputs. Using a digital filter for AR signal estimation is common practice, and the filter used in this work can be replaced by any equivalent one.

The RBFN is trained as follows. First, a representative input point is calculated for each segment by averaging the data points in the segment after zero-mean/unit-variance normalization along each axis of **u** (see Fig. 12.12a for an example). Second, in order to cover the zero-normal-force region, we select the points that lie on the convex hull of the existing points and face the *vx*, *vy* plane, and project them onto the *vx*, *vy* plane (Fig. 12.12b). For these new points at zero normal force, the LSF coefficients are copied from the original points, while the variances are set to zero. In the case of zero velocity, new model points are uniformly created and scattered along the *fn* axis; their variance is set to zero and their LSF coefficients are copied from the closest model points. Lastly, using the model points, we train the RBFN with the SpaRSA algorithm [59], which identifies sparse approximate solutions to the underdetermined system of Eq. (12.13) using the extended set of features from the previous step.

## *12.4.7 Real-Time Texture Rendering*

In this section, we describe a setup for the anisotropic texture rendering algorithm. The setup consists of software and hardware components. The software component is implemented as a computing library to make it independent of the hardware. The software architecture of the computing library is depicted in

(a) Software architecture of the computing library.

(b) Hardware setup for algorithm demonstration.

Fig. 12.13a. The hardware setup that will be used for rendering is shown in Fig. 12.13b.

The architecture of the rendering software consists of three layers. The upper layer is referred to as the interactive layer. It computes the input vector **u** based on readings from the input device and displays the response vibrations back to the user. The business logic of the anisotropic haptic texture library is represented by the second layer, which is developed as a platform-independent computing library. The computing library consists of three functional blocks. The first block loads the set of haptic texture models into device memory. The second block estimates the LSF (line spectrum frequency) coefficients and the variance by feeding the input vector **u** to the RBFN haptic texture model; the LSF coefficients and the variance are updated inside a buffer at 100 Hz. Meanwhile, the output vibration signal is generated by the third block, which runs in another computing thread at 2 kHz. The output vibration signal is produced from the buffered LSF coefficients, the variance, and *m* buffered vibration outputs, where *m* is the number of LSF coefficients. Note that all functional blocks work in separate computing threads, and the frequency of each thread can be adjusted according to user needs.
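The handoff between the 100 Hz model-update thread and the 2 kHz synthesis thread can be sketched with a lock-protected buffer. The names, the buffer class, and the stubbed-out model query below are illustrative assumptions, not the library's actual API; only the thread roles and rates follow the text.

```python
import threading
import time

class CoeffBuffer:
    """Shared buffer between the model-update thread (~100 Hz) and the
    synthesis thread (~2 kHz). Holds the latest m LSF coefficients and
    the variance."""
    def __init__(self, m):
        self._lock = threading.Lock()
        self._lsf, self._var = [0.0] * m, 0.0

    def write(self, lsf, var):              # called by the model-update thread
        with self._lock:
            self._lsf, self._var = list(lsf), float(var)

    def read(self):                         # called by the synthesis thread
        with self._lock:
            return list(self._lsf), self._var

def model_update_loop(buf, query_model, get_input, stop, rate_hz=100.0):
    """Periodically query the texture model (stubbed `query_model`)
    with the current input vector u and publish the result."""
    period = 1.0 / rate_hz
    while not stop.is_set():
        lsf, var = query_model(get_input())
        buf.write(lsf, var)
        time.sleep(period)
```

The 2 kHz synthesis thread would call `buf.read()` on every tick and feed the coefficients to the AR filter, so the two threads never block each other beyond the brief lock acquisition.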

#### **Vibration Estimation**

The set of LSF coefficients and the variance describe the contact vibration pattern for a given vector **u** inside the input space of the RBFN haptic texture model. Therefore, the main task of the RBFN haptic texture model is to map the three-dimensional input vector **u** to the corresponding (*m* + 1)-dimensional output vector (*m* LSF coefficients and the variance). This output vector can be calculated using the following equation

**Fig. 12.14** Model architecture for action-dependent vibrotactile signal synthesis

$$f\_i(\mathbf{u}) = \sum\_{j=1}^{N} w\_{ij} \phi(\|\mathbf{u} - \mathbf{q}\_j\|) + \sum\_{k=1}^{L} d\_{ik} g\_k(\mathbf{u}), \quad \mathbf{u} \in \mathbf{R}^n \tag{12.14}$$

where *i* = 1, ..., *m* + 1 indexes the LSF coefficients and the variance, *w<sub>ij</sub>* is a weight constant, and *q<sub>j</sub>* is the center of the radial basis function. The functions *g<sub>k</sub>*(**u**) (*k* = 1, ..., *L*) form a basis of the space **P**<sup>*n*</sup><sub>*p*</sub> of polynomials of degree at most *p* in *n* variables.

The output vibration values are calculated using an approach similar to [23]. First, the LSF coefficients are converted to AR ones. Second, the AR coefficients, variance, and *m* buffered outputs are fed to the transfer function of the Direct Form II digital filter

$$H(z) = \frac{\varepsilon\_t}{1 - \sum\_{k=1}^{p} w\_k z^{-k}},\tag{12.15}$$

where *w<sub>k</sub>* are the AR coefficients and ε<sub>*t*</sub> is a random sample from a normal distribution. The output value of the transfer function (Eq. 12.15) is the output acceleration value. The overall rendering algorithm of the stochastic vibrotactile signal can therefore be decomposed into two computing threads, as in Fig. 12.14.
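For an all-pole filter like Eq. (12.15), the Direct Form II recursion reduces to feeding back the last *m* outputs, so the synthesis step can be sketched compactly. The function name and buffering of previous outputs between calls are illustrative; the recursion itself follows Eq. (12.15).

```python
import numpy as np

def synthesize_vibration(ar, variance, n_samples, y_prev=None, rng=None):
    """All-pole synthesis of Eq. (12.15):
        y[t] = sum_k ar[k] * y[t-1-k] + e[t],   e[t] ~ N(0, variance).
    `y_prev` carries the m buffered outputs between rendering calls, as
    required when coefficients are swapped at 100 Hz while samples are
    produced at 2 kHz."""
    rng = np.random.default_rng() if rng is None else rng
    m = len(ar)
    state = list(y_prev) if y_prev is not None else [0.0] * m  # y[t-1..t-m]
    out = np.empty(n_samples)
    for t in range(n_samples):
        e = rng.normal(0.0, np.sqrt(variance))                 # excitation noise
        y = sum(a * s for a, s in zip(ar, state)) + e
        state = [y] + state[:-1]                               # shift the output history
        out[t] = y
    return out, state
```

Setting the variance to zero makes the recursion deterministic, which is convenient for checking the filter feedback in isolation.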

#### **Rendering setup**

In order to demonstrate the quality of the modeling and rendering algorithms, we designed a tablet-PC-based hardware setup (Fig. 12.13b). A tablet PC (Surface Pro 4; Microsoft) was selected as the rendering device. The contact velocity is calculated from the contact position data provided by the touch screen of the tablet PC. The normal contact force is calculated from the readings of an active digital pen (Surface Pen; Microsoft) with a sensing capability of 1024 pressure levels. The output vibrations are generated via a data acquisition device (USB-6251; National Instruments), amplified by an analogue amplifier, and displayed using a voice coil actuator (Haptuator Mark II; Tactile Labs).

## **12.5 Physics-Based Modeling**

In physics-based simulation, the haptic properties of an object or environment are approximated by an external mathematical model of the underlying physical process of the interaction. In haptic rendering, physics-based modeling is commonly used to simulate the global deformation of an object. The volume of an object is usually discretized into a finite set of mass points, the dynamics of which are approximated by Newton's laws of motion. The relation among neighboring mass points is modeled by constitutive models governing the stress-strain relation. This relation can be approximated by a mass-spring-damper system or by the Finite Element Method (FEM). The solution of the former approach is obtained from a system of ordinary differential equations (ODEs); the latter approach, based on partial differential equations (PDEs), is considered physically more accurate.

In this section, our goal is to model hyper-elastic object deformation using an FEM-based approach, where the stress-strain relationship derives from a strain energy density function. This approach allows modeling large deformations of relatively soft objects, which are challenging to approximate by other simulation methods. A complete set of methods covering the whole measurement-based modeling/rendering pipeline for deformation phenomena is newly designed and implemented, with special emphasis on haptic feedback realism.

## *12.5.1 Hyper-Elastic Material Modeling*

In this section, we establish an easy and standard procedure for identifying the material parameters of a hyper-elastic object, together with a corresponding real-time rendering algorithm. While real-time simulation and digitization of large deformations for visual feedback is a mature research topic [15, 48, 57], its haptic counterpart is still in an early stage. This is mainly because global deformation usually involves geometry changes throughout the whole object body, which is very expensive to simulate at the so-called 1 kHz "haptic real-time" rate, and because the measurement-based approach faces a nearly infinitely large input and output space. Worse, for haptic simulation, the motion of particles inside the object matters as well. Due to these difficulties and the high realism requirements of these applications, the Finite Element Method (FEM) based approach is considered the most suitable direction, which this work follows. In the FEM-based approach, the physical deformation of an object's continuum is approximated using discretization methods; the stress-strain relationship of the deformation is governed by a constitutive model and material parameters. However, two critical hurdles remain before the FEM-based approach can be applied to the haptic digital-copy and rendering scenario. First, there is no well-defined way to tune the FEM parameters so that the behavior of the virtual copy exactly matches that of an existing real object. Second, it is still not feasible to use FEM in haptic rendering due to the rather slow update rates of even state-of-the-art algorithms. Our goal is to tackle these two problems.

We focus on identifying a single set of FEM parameters through the palpation of an object with homogeneous and isotropic material, where a single constitutive model with the same material parameters can describe all elements within the object. This identification procedure can be repeated for multiple objects, yielding a material library, which can then be used to design heterogeneous or composite objects with multiple materials. We leave building such a material library as future work.

Our identification approach follows the conventional procedure: the parameters are estimated by observing the object's shape change in response to well-defined external force application. In order to facilitate the capturing procedure, our framework assumes that the object from which the material parameters are extracted has a cylindrical shape. Unlike other volumetric primitives, a cylindrical object has a beneficial property that simplifies deformation capturing: by fixing the bottom of the cylinder and applying an orthogonal force to its top, the shape of the cylinder expands symmetrically away from its central axis. This property allows capturing the deformation at a particular level of the cylinder using several tracking markers. Additionally, the simplicity of the cylindrical shape allows a user to easily prepare material samples for the identification (Fig. 12.16).

## *12.5.2 Deformation Features*

During compressive deformation, the shape of a cylindrical object expands outwards while the symmetry of the deformed shape about the cylinder axis is maintained. Thus, a right section of the cylinder (Fig. 12.15a) travels downwards and its area increases. Relative to the initial shape, the deformation of the cylinder at height level *hk* can be represented by the axial Δ*dk* and radial Δ*rk* displacements (Fig. 12.15b).

At every time step, the data collection setup records a state of the deformation. A complete deformation session consists of *n* states, where each *i*-th state is represented by the compressing displacement Δ*si* (the distance that the top end of the cylinder traveled downwards during deformation), the normal force *fi*, and sixteen three-dimensional marker positions **p***ij*. The absolute marker positions, defined in the world coordinate system, are not invariant to translations and rotations. Therefore, to use the positions **p***ij* in material identification, the coordinate system of the data collection device and that of the FEM simulation must be aligned. As misalignment of

(c) Estimated deformation patterns at four height levels *hk*. (d) Reconstructed deformation using estimated deformation patterns.

**Fig. 12.15** Deformation parameters definition, deformation feature construction, and deformation pattern extraction. © 2022, with permission from Elsevier [9], all rights reserved

(a) Tetrahedral mesh.

**Fig. 12.16** Elasto-plastic deformation of the tetrahedral mesh

the coordinate systems might degrade the identification quality, using absolute positions is inappropriate. In order to build translation- and rotation-invariant deformation features, the positions **p***ij* are transformed into more convenient coordinate frames describing the relative displacement of each marker. The origin point *p*<sub>0*j*</sub> and the plane passing through the central axis of the cylinder and fitted to the position points **p***ij* define the displacement coordinate frame for marker *j* (Fig. 12.15). The axial Δ*dij* and radial Δ*rij* displacements are the coordinates of the position points **p***ij* projected onto the plane of this frame. At every height level *hk* of the cylinder, the radial and axial displacements are averaged and stored in the matrices **D***i*×*k* and **R***i*×*k*, which define our deformation features. The deformation patterns at four levels of the cylinder are shown in Fig. 12.15c, and the deformation of the cylinder at any height level *hk* can be reconstructed from the deformation patterns as shown in Fig. 12.15d. Thus, the deformation of a cylinder can be represented by two matrices. Note that the deformation features do not describe the whole deformation dynamics; for that, the normal forces **f** and the compressing displacements **d** of all states are needed as well.
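Building the matrices **D** and **R** from marker trajectories can be sketched as follows. To keep the sketch short, it assumes the frames are already aligned so that the cylinder axis is the z-axis, which replaces the per-marker plane-fitting step described above; the function name and the `levels` grouping are illustrative.

```python
import numpy as np

def deformation_features(P, levels):
    """Build the deformation-feature matrices D (axial) and R (radial)
    from marker trajectories, assuming the cylinder axis is the z-axis
    of an already-aligned frame.
    P: (n_states, n_markers, 3) marker positions; P[0] is the rest state.
    levels: one list of marker indices per height level h_k."""
    p0 = P[0]
    radial0 = np.linalg.norm(p0[:, :2], axis=1)      # rest distance from the axis
    D = np.empty((P.shape[0], len(levels)))
    R = np.empty((P.shape[0], len(levels)))
    for i, p in enumerate(P):
        d_axial = p0[:, 2] - p[:, 2]                 # downward travel per marker
        d_radial = np.linalg.norm(p[:, :2], axis=1) - radial0
        for k, idx in enumerate(levels):             # average markers per level h_k
            D[i, k] = d_axial[idx].mean()
            R[i, k] = d_radial[idx].mean()
    return D, R
```

The rest state maps to zero rows in both matrices, and each subsequent state contributes one row of averaged axial and radial displacements per height level.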

## *12.5.3 Model Identification*

Our goal is to estimate model parameters from observations collected during the deformation of a real object. In order to optimize the material parameters, i.e., Young's modulus *k* and Poisson's ratio ν, we define the FEM solver as a function Γ(·) mapping the vector of compressing displacements **d** to the synthesized normal forces **f̃** and shape deformation patterns **D̃** and **R̃**.

$$
\langle \tilde{\mathbf{f}}, \tilde{\mathbf{D}}, \tilde{\mathbf{R}} \rangle = \Gamma(k, \nu, \mathbf{d}) \tag{12.16}
$$

Then the model identification can be seen as a nonlinear optimization problem with a following objective function

$$\min\_{k, \nu} \left(||\mathbf{f} - \tilde{\mathbf{f}}||\_2^2 + \alpha \left(\frac{1}{r\_c} ||\mathbf{D} - \tilde{\mathbf{D}}||\_F^2 + \frac{1}{h\_c} ||\mathbf{R} - \tilde{\mathbf{R}}||\_F^2\right)\right),\tag{12.17}$$

where *hc* and *rc* are the reference height and radius of the cylinder, respectively; they make the objective function invariant to the physical dimensions of the cylinder. The parameter α balances the force error against the deformation error; in our case we set α = 0.1.

Note that any FEM solver able to describe the deformation of the target object can be used as the function Γ(·) for model identification; however, it is recommended to use the same solver for both modeling and rendering. In this work, we adopted the FEM solver with an implicit integration scheme based on the Alternating Direction Method of Multipliers (ADMM) [48]. Thanks to its generic formulation, this solver allows any constitutive model to be used by defining a proximal operator (Sect. 12.5.4). We employed two commonly used nonlinear hyper-elastic material models, the St. Venant-Kirchhoff and Neo-Hookean models.

In order to solve the nonlinear optimization problem of Eq. (12.17), we used a single-objective genetic algorithm (GA). We first created an initial population of 50 two-dimensional genes and then ran the optimization with a crossover rate of 0.7 and a mutation rate of 0.1. Other gradient-free optimization algorithms can also be used for this objective.
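The identification loop can be sketched as below. This is a minimal GA in the spirit of the description (population of 50 genes, crossover 0.7, mutation 0.1), not the authors' implementation; the selection scheme, the blend crossover, and the toy stand-in for the FEM solver Γ(·) in the usage example are all assumptions.

```python
import numpy as np

def objective(params, solver, d, f, D, R, h_c, r_c, alpha=0.1):
    """Objective of Eq. (12.17): force error plus weighted shape error."""
    k, nu = params
    f_s, D_s, R_s = solver(k, nu, d)
    shape = np.sum((D - D_s) ** 2) / r_c + np.sum((R - R_s) ** 2) / h_c
    return np.sum((f - f_s) ** 2) + alpha * shape

def identify(solver, d, f, D, R, h_c, r_c, bounds,
             pop=50, gens=40, pc=0.7, pm=0.1, seed=0):
    """Minimal single-objective GA over (k, nu)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0]), np.array(bounds[1])
    P = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        fit = np.array([objective(p, solver, d, f, D, R, h_c, r_c) for p in P])
        P = P[np.argsort(fit)]
        elite = P[: pop // 2]                       # truncation selection
        children = []
        while len(children) < pop - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]
            c = np.where(rng.random(2) < pc, 0.5 * (a + b), a)  # blend crossover
            c = np.where(rng.random(2) < pm, rng.uniform(lo, hi), c)  # mutation
            children.append(np.clip(c, lo, hi))
        P = np.vstack([elite, children])
    fit = np.array([objective(p, solver, d, f, D, R, h_c, r_c) for p in P])
    return P[np.argmin(fit)]
```

With a cheap toy solver in place of Γ(·), the loop recovers the ground-truth parameters; with a real FEM solver, each fitness evaluation runs a full simulation, which is exactly why the objective is expensive.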

## *12.5.4 Finite Elements Method Solver*

The second goal of this work is to make the FEM simulation run fast enough for haptic rendering. In this section, we integrate the optimization-based FEM solver from [48] into a haptic rendering environment. We first provide a brief background required for the contact force computations and then explain how the actual contact forces are computed and rendered in our setup.

In FEM modeling, a deformable object is understood as a set of material points with individual masses *mi*, interconnected to form a tetrahedral mesh. Each tetrahedron of the mesh can be treated as a generic spring that keeps the mass points at the equilibrium state by raising conservative forces. External forces applied to the deformable object cause a motion of the mass points, which in turn obeys Newton's second law. Thus, in order to approximate the motion of the mass points, one can perform explicit or implicit time integration. The implicit method has been found to be more practical for real-time applications, providing a stable approximation for relatively large time steps, whereas explicit methods tend to overshoot the equilibrium point and explode. The standard implicit time integration method, the backward Euler method, is computationally intractable for real-time haptics since it requires solving a large nonlinear system of equations. However, for a conservative system *f* = −∇*U*, the backward Euler method can be formulated as an optimization problem [28], whose main advantage is that parallelizable solvers can be utilized. Here, we provide a brief summary of the framework; for a detailed explanation, refer to [48].

The optimization problem for the state of a deformable object at Δ*t* time step later can be formulated as,

$$\mathbf{x} = \operatorname\*{arg\,min}\_{\mathbf{x}} \left(\frac{1}{2\Delta t^2} ||\mathbf{M}^{1/2}(\mathbf{x} - \tilde{\mathbf{x}})||\_F^2 + U(\mathbf{x})\right) \tag{12.18}$$

where **M** is the inertia matrix, **x̃** is the predicted state of the deformable object in the absence of implicit forces, and *U*(**x**) is the elastic potential energy of the deformable object at the time Δ*t* later.

The first and second terms of the objective function (Eq. 12.18) represent the momentum and elastic potentials, respectively, and the optimization problem can be seen as finding the equilibrium between the two. In order to solve it, the ADMM-based solver splits Eq. (12.18) into two objective functions by introducing a dual variable and a constraint function relating them. In this way, each objective function is optimized separately while satisfying the coupling constraint, and the solution satisfying both new objective functions converges over the iterations to the solution of Eq. (12.18). The dual variable **u** introduced by ADMM is also updated at each iteration. As the deformable object is modeled with FEM, the elastic potentials can be calculated locally for each mesh element. Following [48], the dual variable **u** correlates with the implicit conservative forces caused by the elastic potentials and can also be updated locally for each tetrahedral element.

*Elastic Potential (local-step)* During collision with a haptic probe, the elastic object undergoes internal deformation, which raises conservative forces. The deformation gradient **F**(**x**) of the whole mesh object can be calculated from its current state **x**, and the strain of each tetrahedral element is defined separately by its deformation gradient **F***i*. Integrating the elastic potential energies of all tetrahedral elements yields the elastic potential energy of the full mesh.

$$U = \sum V\_i \Psi \left(\mathbf{F}\_i\right). \tag{12.19}$$

Here, Ψ(**F**) is the strain energy density function, which can be calculated from the deformation gradient and the material parameters, and *Vi* is the volume of the tetrahedral element.

In the local step, the ADMM-based solver searches for updated deformation gradient values that minimize the elastic potentials while approaching the optimal deformation gradient **F***i* + **u***i*.

$$\mathbf{F}\_{i}^{n+1} = \underset{\mathbf{F}}{\text{arg min}} (\boldsymbol{\Psi}(\mathbf{F}) + \frac{\boldsymbol{\tau}}{2} ||\mathbf{F} - (\mathbf{F}\_{i}^{n} + \mathbf{u}\_{i})||\_{F}^{2}) \tag{12.20}$$

Note that the dual variable **u***<sup>i</sup>* is also updated in this local-step.

$$\mathbf{u}\_{i}^{n+1} = \mathbf{u}\_{i}^{n} + \mathbf{F}\_{i}^{n} - \mathbf{F}\_{i}^{n+1}.\tag{12.21}$$

*Momentum Potential (global-step)*. At the global step, the solver iteratively updates the state of the whole mesh. The ADMM algorithm introduces the following optimization problem, which minimizes the momentum potential while satisfying the solution of the optimal elastic potentials:

$$\mathbf{x}\_{n+1} = \underset{\mathbf{x}}{\text{arg min}} \left(\frac{1}{2\Delta t^2} ||\mathbf{M}^{1/2}(\mathbf{x} - \tilde{\mathbf{x}})||\_F^2 + \frac{1}{2} ||\mathbf{W}(\mathbf{F}(\mathbf{x}) - \mathbf{F}^{n+1} + \mathbf{u}^{n+1})||^2\right) \tag{12.22}$$

Here, **W** is a weight matrix that affects the convergence rate. Linearized constraints **Cx** = **d** about the current state are also incorporated into Eq. (12.22) to handle collisions. The solution satisfying both the constraints and the above optimization problem is found by introducing Lagrange multipliers λ in a saddle-point system, which is solved using the Uzawa Conjugate Gradient for General Constraints [55]. Self-collision can be handled in a similar manner as in [48]; however, in the current study, self-collision was not considered, to reduce the computational load. Note that this update rule optimizes over the whole mesh globally, which is why we call it the global step. The whole algorithm converges to the optimal solution of problem (12.18) by alternately executing the local and global steps over the iterations. The interaction forces can be computed using the Lagrange multipliers, as explained in Sect. 12.6.7.
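The local/global alternation can be made concrete on a deliberately tiny toy problem. The sketch below is a 1D analog of the solver in [48], not the 3D implementation: nodes lie on a line, the per-element "deformation gradient" is the difference of neighboring node positions, and the elastic energy is quadratic so that the proximal operator of the local step has a closed form. The structure of the loop mirrors the local step, dual update, and global linear solve described above.

```python
import numpy as np

def admm_step(x, v, m, D, L0, kappa, tau, dt, iters=30):
    """One implicit time step in the sense of Eq. (12.18) for a 1D chain:
    'deformation gradient' F_i = x[i+1] - x[i] (rows of D) and quadratic
    energy Psi(F) = kappa/2 (F - L0)^2 with rest length L0."""
    n = len(x)
    M = m * np.eye(n)
    x_tilde = x + dt * v                    # inertial prediction
    u = np.zeros(D.shape[0])                # ADMM dual variable
    A = M / dt**2 + tau * D.T @ D           # global-step system matrix
    x_new = x.copy()
    for _ in range(iters):
        Fx = D @ x_new
        # local step (cf. Eq. 12.20): closed-form prox of Psi at Fx + u
        F = (kappa * L0 + tau * (Fx + u)) / (kappa + tau)
        u = u + Fx - F                      # dual update (cf. Eq. 12.21)
        # global step (cf. Eq. 12.22): momentum plus coupling term
        b = M @ x_tilde / dt**2 + tau * D.T @ (F - u)
        x_new = np.linalg.solve(A, b)
    return x_new, (x_new - x) / dt          # backward-Euler velocity update
```

Driving a two-node "spring" with rest length 1 from an initial length of 2 shows the expected behavior: the implicit integration damps the oscillation and the chain settles at its rest length, while the center of mass is untouched because the coupling forces are internal.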

## **12.6 Combination of Physics-Based and Data-Driven Models**

As discussed earlier, the simulation approach, whether physics-based or data-driven, is usually determined by the nature and complexity of the target stimuli. Physics-based simulation is usually preferred when the global deformation has priority, for instance when the object undergoes large or plastic deformation, when visually plausible rendering is also required, or when the interaction is complex with self- and multi-point collisions. Data-driven methods, on the other hand, are usually used when realistic modeling of the local contact has priority, with a highly non-linear action-feedback relation. Many haptic modeling problems, however, fall on neither side of this trade-off. In such cases, a hybrid of the physics-based and data-driven approaches can be designed. For instance, to accelerate FEM simulation, the authors of [46] computed deformation forces at eight points around the haptic probe positioned on a virtual grid and performed real-time interpolation at the haptic update rate. Bickel et al. [14] proposed to adjust the strain of each element of a linear FEM model with a non-linear function. In this section, we explain a hybrid approach to modeling elastoplastic deformation, where the non-linear forces are computed using the hyper-elastic model from Sect. 12.5 and the plastic flow is approximated by a neural-network-based controller. The goal of this section is to demonstrate the joint optimization of physics-based and black-box data-driven models.

## *12.6.1 Plasticity Modeling*

The ability to portray three-dimensional shapes by sculpting malleable materials played an important role in human evolution. Plastic modeling has been actively used in pottery, molding, architecture, and sculpture. Modeling a desired shape from a pliable material like clay or dough requires a special set of sensory-motor skills, which can be developed only through haptic interaction. In order to achieve a target deformation, the artist fine-tunes the contact manipulations relying upon kinesthetic

**Fig. 12.17** Elasto-plastic deformation curve and strain energy density

perception of the plastic flow and force feedback. Due to the lack of realistic haptic feedback, learning the plastic and pastry arts online or in virtual reality (VR) simulation remains rather impractical. Realistic haptic feedback might also benefit digital sculpting in modern computer-aided design (CAD), which shares common manipulation operations with physical sculpting. In this dissertation, we aim to develop an end-to-end framework that captures material properties from plastic objects, builds the corresponding digital copy, and renders it in a haptic-enabled simulation.

Plasticity is the property of a material to undergo permanent deformation due to external forces. The permanent changes generally occur when the applied stress exceeds the material-specific yield point, i.e., the transition point at which the material behavior changes from the elastic to the plastic regime (Fig. 12.17). The total deformation in the plastic regime, however, consists of both elastic and plastic components, as the material partially recovers after load removal. The plastic part of the deformation can be determined by an additive or a multiplicative decomposition model. The multiplicative decomposition rule has proven more appropriate, since it preserves the physical meaning while keeping the object's volume constant [36].

The multiplicative decomposition rule has been successfully used in computer graphics rendering. The decomposition is generally governed by a flow model with several parameters, e.g., the yield point, flow rate, and hardening parameter, which are typically tuned by hand to achieve photo-realistic rendering. In haptic rendering, however, realism requires accurate force feedback, which calls for multiplicative decomposition models able to represent plastic flow of arbitrary complexity.

**Fig. 12.18** Plasticity (top) and physics (bottom) policy models. © 2022 IEEE. Reprinted, with permission, from [6]

## *12.6.2 Elasto-Plastic Decomposition*

An object undergoing plastic deformation typically passes through four stages: elastic regime, yielding, plastic regime, and fracturing (Fig. 12.17a). Before reaching the yield point, the object recovers its original shape after removal of all external forces; the system remains conservative and can be approximated by the optimization-based numerical integration introduced in Sect. 12.5.4. When the deformation reaches the yield point, some of the strain energy starts dissipating into other forms of energy, e.g., into heat during particle dislocation. The remaining energy is elastic, causing partial recovery after load removal and producing the force feedback during contact (Fig. 12.17b). In nature, there is no perfectly plastic solid material, as a minimal elastic potential is needed for the shape to withstand gravity. Another important criterion is the rate of dissipation of the strain energy in the plastic regime, as the underlying physical processes converting one form of energy into another take different amounts of time. Therefore, we should consider at least two factors to approximate the plastic flow. Note that in this study we do not consider environmental factors, e.g., temperature and humidity; as long as these factors do not change rapidly, separate models can be built for different environmental conditions.

**Fig. 12.19** Model identification pipeline. The blue and red lines of the pipeline represent the data flow for the update of state-dependent and -independent parameters, respectively. © 2022 IEEE. Reprinted, with permission, from [6]

The elasto-plastic decomposition is a continuous process in which the total strain **F***i* from Sect. 12.5.4 is split into elastic and plastic components. In the multiplicative decomposition, the amount of the plastic component is determined by the following rule

$$\mathbf{F}\_i = \mathbf{F}\_i^{e} \, \mathbf{F}\_i^{p}. \tag{12.23}$$

To find the strain of the permanent deformation, we first diagonalize the total strain, **F***i* = **V**Λ**V**<sup>*T*</sup>, where Λ is a diagonal matrix of eigenvalues, as in [16]. To prevent changes of the tetrahedral volume, we constrain the determinant of the diagonalized strain to 1, as follows

$$\hat{F}\_i^p = \left(\det(\Lambda)\right)^{-1/3} \Lambda \tag{12.24}$$

Then, the plastic strain can be estimated using plastic flow constitutive model

$$
\mathbf{F}\_i^p = (\hat{\mathbf{F}}\_i^p)^{\gamma},\tag{12.25}
$$

where the exponent γ is generally a function relating the current stress to the ratio of the plastic component in the original strain **F***i*. For instance, if γ always equals zero, the material is purely elastic; if γ equals one, the plastic component occupies the complete strain. We approximate γ for each finite element as follows

$$\gamma = \min\left(\nu \frac{(\Psi(\mathbf{F}\_i) - e\_\mathbf{y})}{e\_\mathbf{y}}, 1\right), \tag{12.26}$$

where $\Psi(\mathbf{F}\_i)$ is the energy density function, $e\_y$ denotes the yield point, and ν controls the rate of plasticity.

The rest configuration of the mesh model **X** can then be updated using the plastic strain that we obtained in Eq. (12.25).

$$\mathbf{X} \leftarrow \mathbf{X}\,\mathbf{V} (F\_i^p)^{-1} \mathbf{V}^T \tag{12.27}$$

As can be seen in Eq. (12.26), we compute the plastic flow γ directly using the strain energy density. This is advantageous for haptic rendering, as the energy density is already computed in the ADMM routines and we do not have to compute the second Piola-Kirchhoff stress as in [13]. However, since we update the mesh object after the ADMM iterations, we have to perform the Cholesky factorization of a matrix used in the global step of the ADMM solver [48]. This requires some additional computation, which can also be done in parallel on the GPU.
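The per-element update of Eqs. (12.24)-(12.27) can be sketched as follows. This is a minimal NumPy illustration assuming a symmetric strain matrix (so that the eigendecomposition of the text applies); the strain, energy density, yield point, and rate parameter values are made up for the example:

```python
import numpy as np

def plastic_update(F, energy_density, e_y, nu):
    """One elasto-plastic update for a single element, Eqs. (12.24)-(12.27).
    Assumes a symmetric strain F so that F = V diag(lam) V^T holds."""
    lam, V = np.linalg.eigh(F)                       # F = V Lambda V^T
    lam_hat = np.prod(lam) ** (-1.0 / 3.0) * lam     # Eq. (12.24): det -> 1
    ratio = nu * (energy_density - e_y) / e_y
    gamma = min(max(ratio, 0.0), 1.0)                # Eq. (12.26), 0 below yield
    lam_p = lam_hat ** gamma                         # Eq. (12.25), diagonal power
    # factor applied to the rest configuration, Eq. (12.27)
    return V @ np.diag(1.0 / lam_p) @ V.T, gamma

F = np.diag([0.7, 1.1, 1.1])             # hypothetical compressed element
R, gamma = plastic_update(F, energy_density=2.0, e_y=1.0, nu=0.5)
X = np.array([[1.0, 0.0, 0.0]])           # one rest-configuration vertex
X_new = X @ R                             # X <- X V (F^p)^-1 V^T
```

Because the normalized strain of Eq. (12.24) has unit determinant, the update factor `R` also has unit determinant, i.e., the rest configuration is rescaled without changing the element volume.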

## *12.6.3 Data-Driven Modeling of Plastic Flow*

Identification of physically based dynamic models is a non-linear problem that is commonly solved by meta-heuristic methods, e.g., genetic algorithms [8, 45]. The objective used in the optimization usually requires running a complete simulation for each candidate set of parameters. Thus, the training of a black-box controller with a relatively large number of parameters becomes computationally intractable with these methods. To tackle this problem, we propose a novel approach based on inverse reinforcement learning that optimizes a complex controller by taking advantage of the intermediate steps of the simulation.

Reinforcement learning (RL) is a machine learning technique that optimizes a control model while interacting with a dynamic environment. The controller in RL changes the state of the environment by executing an action and receives a reward. The main goal in RL is to optimize the state-action mapping function so as to maximize the cumulative reward. In inverse RL, the reward is derived from observations of an expert acting in the environment. In the case of plasticity modeling, we derive the reward function from data collected during real deformation and identify a control model that mimics the plastic flow.

The main requirement for modeling a homogeneous object is that the same controller should be able to approximate the plastic flow for any finite element of the mesh, regardless of its size and location. The main difficulty is that measuring the deformation (the force-displacement field) inside the physical object is infeasible, so there is no practical way to compute the reward for each finite element. To address this issue, we propose a multi-agent single-policy reinforcement learning framework, where each finite element is individually represented by an *Agent*. The agent observes the deformation *State* of the corresponding finite element, which we represent as a vector of recent energy densities. For a given deformation state, the agent executes an action, which is the exponent $\gamma\_i$ of the multiplicative elasto-plastic decomposition. The common policy model is optimized by all agents in a Markov game. The simultaneous execution of actions by multiple agents, however, causes interference and makes the environment non-stationary, since the next state that each agent observes is also conditional on the previous actions of the others. To mitigate this problem, we design the inter-agent cooperation in a relaxed form, where a stochastic action is executed by a single agent for a complete run of the simulation (trajectory). The idle agents apply the action from the fixed policy. In this way, we limit the variation of the reward at each time step to the action executed by a particular agent.

Considering the symmetry of the cylindrical specimen, finite elements can be classified into several groups having similar topology and experiencing similar stress during deformation. The tetrahedral mesh of 63 elements that we used for material identification can thereby be partitioned into nine groups. At each iteration, one of the nine groups is randomly selected to sample the next game trajectory. The system reaches a Nash equilibrium when no agent can further improve the policy.

## *12.6.4 Policy Model*

RL algorithms are commonly classified into value- and policy-based methods. In our model, we employ the policy-based concept, which performs a direct search over the policy space. The policy function samples continuous actions from a distribution that is conditional on the observed state and parametrized by function approximators such as deep neural networks. The main advantages of policy-based methods are that they directly learn stochastic processes and allow the use of gradient-based optimizers. This is beneficial for training deep neural networks with the backpropagation technique.

The material model in our framework consists of state-independent and -dependent learnable parameters (Fig. 12.19). The state-independent parameters represent material-specific properties that do not change during the deformation. In this model, the state-independent parameters encode the normal distributions of the Young's modulus and Poisson ratio used for the total strain energy computation, as well as the yield point denoting the elastic limit. The state-dependent parameters control the plastic flow in the elasto-plastic decomposition by establishing the relation between the total energy density and the ratio of the plastic component in the current deformation. We approximate the plastic flow using the deep neural network depicted in Fig. 12.18. The input of the model is a vector of recent strain energy densities of a particular finite element. The 1D convolutional layers compute rate-dependent features encoding possible viscosity and filter undesired oscillations. The fully connected layers map the incoming feature vectors to the parameters of a gamma distribution. The plastic flow exponent is sampled from the resulting gamma distribution during training, or represented by its mean during rendering.
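A toy forward pass of such a network might look as follows. This is only a structural sketch of the idea in Fig. 12.18 (one valid 1-D convolution with ReLU, then a dense layer mapped through softplus to positive gamma-distribution parameters); the layer sizes and the random weights are placeholders, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    return np.log1p(np.exp(x))

def policy_forward(state, conv_w, conv_b, fc_w, fc_b):
    """Map recent energy densities to gamma-distribution parameters:
    1-D convolution + ReLU, then a dense layer, then softplus so that
    both outputs (shape k, scale theta) stay positive."""
    k = len(conv_w)
    feats = np.array([state[i:i + k] @ conv_w + conv_b
                      for i in range(len(state) - k + 1)])
    feats = np.maximum(feats, 0.0)                  # ReLU
    shape, scale = softplus(feats @ fc_w + fc_b)    # keep parameters positive
    return shape, scale

state = np.array([0.1, 0.3, 0.6, 1.0, 1.4])   # recent strain energy densities
conv_w, conv_b = rng.normal(size=3), 0.0
fc_w, fc_b = rng.normal(size=(3, 2)), np.zeros(2)
k_shape, theta = policy_forward(state, conv_w, conv_b, fc_w, fc_b)
gamma_mean = k_shape * theta                   # mean used during rendering
```

During training, the exponent would be sampled from the gamma distribution with these parameters; at rendering time the mean `k_shape * theta` is used, as described above.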

## *12.6.5 Model Training*

The model identification pipeline is depicted in Fig. 12.19. We identify the material model iteratively by alternating the optimization of the physics and plastic flow parameters. In the first step, we sample a number of vectors of FEM parameters and run the simulation for each vector. In this step, the plastic flow is taken as the mean of the distribution characterized by a fixed policy model. Likewise, in the second step, we randomly select one of the nine groups and sample state-dependent trajectories using the plasticity model, while keeping the state-independent parameters constant. After each step, we update the corresponding model parameters using the Proximal Policy Optimization (PPO) algorithm [53]. PPO, like other policy-based algorithms, tries to increase the probability of actions producing a higher return and penalizes the probabilities of actions leading to a lower return,

$$\tilde{p} = \frac{\pi\_{\theta}(a\_t|\mathbf{s}\_t)}{\pi\_{\theta\_{old}}(a\_t|\mathbf{s}\_t)} R^{\pi\_{\theta}}(\mathbf{s}\_t, a\_t), \tag{12.28}$$

where $R^{\pi\_\theta}(\mathbf{s}\_t, a\_t)$ is the infinite-horizon discounted return function computed at time step $t$, $\mathbf{s}\_t$ is a vector of recent strain energy densities of a finite element representing its state, and $a\_t = \{\gamma\_i, k, \nu, e\_p\}$ is an action. PPO, however, additionally penalizes large update steps by clipping the probability ratio $\tilde{p}$, as follows

$$\mathcal{L} = \min\left(\tilde{p}, \; clip\left(\tilde{p}, 1-\varepsilon, 1+\varepsilon\right)\right). \tag{12.29}$$
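The clipped surrogate of Eqs. (12.28)-(12.29) can be sketched as below. Following the standard PPO formulation, this sketch clips the probability ratio before scaling by the return; the log-probabilities, returns, and ε are example values:

```python
import numpy as np

def ppo_surrogate(logp_new, logp_old, returns, eps=0.2):
    """Per-sample clipped PPO objective: the probability ratio is scaled
    by the discounted return (Eq. 12.28), and a clipped variant bounds
    the size of each policy update (Eq. 12.29)."""
    ratio = np.exp(logp_new - logp_old)
    p_tilde = ratio * returns                       # Eq. (12.28)
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * returns
    return np.minimum(p_tilde, clipped)             # Eq. (12.29)

logp_old = np.log(np.array([0.5, 0.5]))
logp_new = np.log(np.array([0.9, 0.1]))   # one action got likelier, one less
R = np.array([1.0, 1.0])
L = ppo_surrogate(logp_new, logp_old, R)
```

With a positive return, the first sample's gain is capped at the clip boundary `1 + eps`, while the second sample's decrease passes through unclipped, which is exactly the conservative-update behavior PPO is designed for.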

To estimate the ongoing reward, we adopted the objective function that we used for identification of hyper-elastic parameters in Sect. 12.5.1.

$$r = -\left(||\mathbf{f} - \tilde{\mathbf{f}}||\_2^2 + \alpha\left(\frac{1}{r\_c}||\mathbf{X}\_a - \tilde{\mathbf{X}}\_a||\_F^2 + \frac{1}{h\_c}||\mathbf{X}\_r - \tilde{\mathbf{X}}\_r||\_F^2\right)\right),\tag{12.30}$$

where **f** denotes the contact forces, and $\mathbf{X}\_a$ and $\mathbf{X}\_r$ represent the axial and radial projections of the cylinder deformation, respectively (Fig. 12.20).
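As a sketch, the reward can be evaluated as the negative of the combined force and shape error (the quantity minimized during identification); the weight α, cylinder radius `r_c`, and height `h_c` below are placeholder values:

```python
import numpy as np

def reward(f, f_ref, Xa, Xa_ref, Xr, Xr_ref, alpha=0.1, r_c=15.0, h_c=30.0):
    """Reward in the spirit of Eq. (12.30): negative combined error
    between simulated and recorded forces and shape projections."""
    force_err = np.sum((f - f_ref) ** 2)              # squared 2-norm
    shape_err = (np.sum((Xa - Xa_ref) ** 2) / r_c     # squared Frobenius norms
                 + np.sum((Xr - Xr_ref) ** 2) / h_c)
    return -(force_err + alpha * shape_err)

f_sim = np.array([1.0, 2.0, 3.0])
Xa = np.zeros((5, 2)); Xr = np.zeros((5, 2))
r_perfect = reward(f_sim, f_sim, Xa, Xa, Xr, Xr)   # exact match
r_off = reward(f_sim + 0.5, f_sim, Xa, Xa, Xr, Xr) # biased force response
```

A perfect match yields the maximal reward of zero; any force or shape mismatch makes the reward more negative.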

Note that we do not apply the actor-critic scheme used in the original PPO algorithm. Since each agent in our environment has a different influence on the reward, the expected return would have to be computed for each agent by an individual critic network. Instead, to reduce the variance, we subtract from the ongoing reward the expected reward computed for the mean action of the current policy distribution.

## *12.6.6 Recording Setup and Sample Set*

To capture the non-linear force response during the deformation of an elastoplastic sample, we built a motorized data collection setup (Figs. 12.21 and 12.22). The

**Fig. 12.20** Model identification progress: cumulative reward for 500 steps of PPO updates; three snapshots taken during training; force response for testing data. © 2022 IEEE. Reprinted, with permission, from [6]

device enables uni-directional position and velocity control of the carriage for the compressive deformation of a cylindrical material sample. The carriage compressing the sample was equipped with a force sensor (Nano17; ATI Industrial Automation) and was aligned in the normal direction by two rail rod sliders. The carriage was actuated by a stepper motor (17HS8401; NEMA 17), which was managed by a TMC2130 controller in 1/16-microstepping mode. The motion of the carriage was smoothed by the *stealthChop* algorithm with 1/256-microstepping interpolation (configured in the controller). The rotational resolution of the motor was 3200 steps per revolution, which corresponds to a 0.025 mm resolution of the carriage's linear motion. To capture the shape deformation, we attached a grid of 15 IR markers to the target samples and used four IR cameras (Flex 13; OptiTrack) for tracking. The force sensor was connected to the same data-acquisition board, which recorded force responses at a 1000 Hz update frequency. For the evaluation, we prepared three elastoplastic material samples: dough, clay, and chewing gum. Two mounts were attached to the ends of each sample, allowing it to be fixed in the device.

To identify the material parameters, we performed a relaxation test with a constant strain rate. This test consists of two phases: loading and relaxation. In the loading phase, the material sample was compressed at a constant velocity of 10 mm/s. In the relaxation phase, the position of the carriage was fixed, and the decaying force response was collected for the same duration as the loading phase. For the training and testing datasets, the samples were compressed by 5 mm and 6 mm, respectively.

**Fig. 12.21** Data-collection setup and sample set of plastic objects. © 2022 IEEE. Reprinted, with permission, from [6]

**Fig. 12.22** Force-displacement measurement device of vertical deformation of cylindrical samples

## *12.6.7 Rendering Collision Forces*

During contact, the global deformation gives rise to conservative forces in the object's medium, such that the sum of all internal and external forces equals zero. In the virtual environment, a virtual tool coupled to a haptic manipulator applies external forces to the contacting vertices of an object. Impedance-type haptic devices with closed-loop control provide the position and orientation of the end-effector and accept the force to be rendered. The contact deformation, in this case, can be described by a set of boundary constraints for the contacting vertices $\mathbf{C}\_j$. The response

**Fig. 12.23** Illustration of the force field at the contacting vertices and resultant force vector during contact © 2022, with permission from Elsevier [9], all rights reserved

force due to the contact can be computed as a sum of normal forces raised at the *m* contacting vertices (Fig. 12.23) due to boundary constraints as follows

$$f\_r = -\frac{1}{\Delta t^2} \sum\_{j=1}^{m} \mathbf{C}\_j \lambda\_j,\tag{12.31}$$

where $\lambda\_j$ are the Lagrange multipliers from the saddle-point system.

To set the boundary constraints, all of the object's vertices are tested for collision with the virtual tool using the Axis-Aligned Bounding Box (AABB) collision detection algorithm. In our case, the virtual tool was a sphere; depending on the application, the shape of the virtual tool can be chosen arbitrarily. The global positions of the colliding vertices are moved to the boundary of the virtual tool, forming equality constraints. Equality constraints are generally not recommended for haptic rendering: during a shallow, sliding contact, some vertices can oscillate by repeatedly entering and leaving the contact. To solve this issue, we set a small dead-band allowing vertices to travel slightly out of the object without losing the contact, and employed virtual coupling to compensate for the small contact oscillations.
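The collision test and the resulting response force of Eq. (12.31) can be sketched as follows. The sphere test here stands in for the full AABB broad phase, and the vertex positions, multiplier values, and dead-band width are placeholders:

```python
import numpy as np

def contact_constraints(verts, center, radius, deadband=1e-3):
    """Find vertices penetrating a spherical tool and build the equality
    constraints that push them back to the tool surface; the dead-band
    keeps shallow contacts from oscillating in and out."""
    d = verts - center
    dist = np.linalg.norm(d, axis=1)
    inside = dist < radius - deadband
    C = np.zeros_like(verts)
    # displacement moving each colliding vertex onto the sphere surface
    C[inside] = d[inside] / dist[inside, None] * (radius - dist[inside])[:, None]
    return C, inside

def response_force(C, lambdas, dt):
    """Eq. (12.31): response force as the multiplier-weighted sum of the
    boundary constraints over the m contacting vertices."""
    return -np.sum(C * lambdas[:, None], axis=0) / dt ** 2

verts = np.array([[0.0, 0.0, 0.04],   # penetrates the tool
                  [0.0, 0.0, 0.20]])  # free vertex
C, hit = contact_constraints(verts, center=np.zeros(3), radius=0.05)
f_r = response_force(C, lambdas=np.array([1e-5, 0.0]), dt=1e-3)
```

Only the penetrating vertex contributes a constraint, and the resulting force points opposite to its push-out direction, i.e., against the tool.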

## **12.7 Conclusion**

The development of haptic software is a complex process that requires considering various engineering aspects of sensing and actuation in addition to the design of algorithms and mathematical models. In this chapter, with a special emphasis on realism, we discussed measurement-based and data-driven approaches that are optimized using data collected during real haptic interaction. The main goal was to deliver fundamental knowledge of haptic modeling and rendering that can help the reader formulate haptic models and implement realistic VR and MR simulators. To show the research landscape of haptic modeling and rendering, we presented a series of state-of-the-art methods: optimization-based FEM simulation, data-driven models with deterministic and stochastic response spaces, and a hybrid approach combining physics-based and data-driven models. The presented examples also provide an introduction to ongoing challenges in object deformation and haptic texture rendering.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 13 Evaluation of Haptic Systems**

**Alireza Abbasimoshaei, Jörg Reisinger, Carsten Neupert, Christian Hatzfeld, and Wenliang Zhou**

**Abstract** In this chapter, a number of measurement methods and tests are presented that can be used for verification, for validation, or sometimes both. We therefore refrain from ordering them according to these steps and instead present the methods based on the focus of the evaluation method. In that sense, one can identify three main foci of evaluation methods: system-centered methods that test system properties (and are mostly used for verification, Sect. 13.1), task-centered methods that test the task performance of a user working with the haptic system (such tests are mainly used for validation, but they can also verify system properties depending on the test design, Sect. 13.2), and user-centered methods that measure the impact of the haptic system on the user. The latter are almost exclusively used for system validation and are further described in Sect. 13.3.

As stated in Chap. 4, the evaluation of a haptic system tests whether the system fulfills all requirements defined in the development process (verification) and whether it conforms to the intended usage of the haptic system (validation). In the following, we focus on the evaluation methods themselves; the selection of a proper method and the test design are left to the reader. For the design of task-specific haptic interfaces we consider this the most practicable approach, because of the uniqueness of the

J. Reisinger · W. Zhou Mercedes-Benz Cars Development, Daimler AG, 71059 Sindelfingen, Germany e-mail: joerg.reisinger@mercedes-benz.com

C. Neupert · C. Hatzfeld ITK-Engineering GmbH, Im Speyerer Tal 6, 76761 Rülzheim, Germany e-mail: carsten.neupert@itk-engineering.de

Christian Hatzfeld deceased before the publication of this book.

A. Abbasimoshaei (B) · C. Hatzfeld

Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany e-mail: al.abbasimoshaei@tuhh.de

**Fig. 13.1** Assignment of evaluation methods to different applications of haptic systems. System-centered evaluation methods are not included for clarity, since they can be applied to almost every system

designs. This chapter therefore does not deal with optimized evaluation processes as established, for example, by Samur for the class of haptic interfaces [46]. That work shows that more standardized testing of haptic systems offers advantages in terms of testing time and the comparison of different systems.

Figure 13.1 gives an overview of the application of task-centered and user-centered evaluation methods with regard to the intended application as described in Sect. 1.5, supporting the selection of a proper method in the following sections.

## **13.1 System-Centered Evaluation Methods**

The prevalent goal of system-centered evaluation is the generation of comparable technical ratings and values. These are used to verify the developed system against the requirements defined in the development process and to compare different systems with each other. The latter is especially relevant for haptic displays and haptic interfaces, since these systems are intended to be used universally in a variety of applications. Because of that, system-centered evaluation methods do not depend on a certain type of task or user. It follows that normally there are no different experimental conditions to be considered in the interpretation of the acquired evaluation values.

When designing a haptic system, one defines properties of the haptic system based on requirements derived from the application. The assumed properties are usually calculated precisely and are well known in theory. But to confirm the promised characteristics of the final haptic system, it is necessary to perform measurements, at least of the most crucial parts. In the following sections, some hints for performing such measurements with standard measurement hardware are given.

The main focus of this evaluation part is the characterization of force-reflecting haptic systems. In the different sections, values of interest are identified and hints are given on how to implement and conduct the measurements. The measurement of a haptic system's workspace and its output force- and motion-dependent values are discussed, as well as the measurement of the dynamic behavior of mechanical parts and the displayable impedance of different systems. Additionally, special properties of admittance systems and teleoperation systems are discussed.

## *13.1.1 Workspace*

The workspace, respectively the number and nature of the DoF, is the most prominent characteristic of a haptic system. To analyze the workspace, the measurement of distances and angles can simply be done with a ruler or measuring tape and an angle meter. One can distinguish between active and passive degrees of freedom with corresponding workspaces. Active degrees of freedom are actuated and crucial for the haptic feedback. Passive degrees of freedom span the workspace that is reachable when driven by the user.

Further, the characterization can consider the ability to reach every point of the workspace, the independence of the end-effector's orientation from the position in the workspace, constraints of the workspace because of singularities, the conditioning number κ at each point in the workspace, and the global conditioning index ν (Sect. 8.4).

## *13.1.2 Output Force-Dependent Values*

Since haptic systems are bidirectional in energy flux, the verification of output force-dependent values describes the energy flux directed to the user. Hence, the user is considered passive in this case. The verification of the force-dependent values can be done in two steps: the first is to investigate static or quasi-static output force signals; the second is to investigate the dynamic output force behavior, such as the frequency response, step response, and impulse response.

First, the verification of static force values is described. For this, a force sensor attached to the end-effector of the device is needed, with a resolution higher than the minimal desired force output resolution of the haptic system. The measurement is done by supplying an input to the haptic system and simultaneously inspecting the measured output forces.

**Fig. 13.2** Common non-linearities in the characterization of haptic devices: **a** saturation, **b** dead zone, **c** hysteresis, figure based on [45, 51]

#### **13.1.2.1 Static Analysis**


While inspecting the measurement signals, one may find different characteristic graphs showing the output force signals with respect to the device input signals, as displayed in Fig. 13.2. The previously described characteristic non-linearities can be derived from these graphs, called calibration curves [46].
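The three non-linearities of Fig. 13.2 can be modeled with simple static maps, which is useful when fitting measured calibration curves; the limits and widths below are placeholder values:

```python
import numpy as np

def saturation(x, limit=1.0):
    """Output clamps at the actuator's force limit (Fig. 13.2a)."""
    return np.clip(x, -limit, limit)

def dead_zone(x, width=0.1):
    """No output until the input exceeds the dead-zone width (Fig. 13.2b)."""
    return np.sign(x) * np.maximum(np.abs(x) - width, 0.0)

def hysteresis(x, backlash=0.05):
    """Simple backlash model: the output lags the input by the mechanical
    play, producing the loop of Fig. 13.2c under cyclic input."""
    y, out = 0.0, []
    for xi in x:
        if xi - y > backlash:
            y = xi - backlash
        elif y - xi > backlash:
            y = xi + backlash
        out.append(y)
    return np.array(out)

h = hysteresis(np.array([0.0, 0.1, 0.2, 0.1]))  # ramp up, then reverse
```

Fitting the parameters of such maps to the measured input-output data quantifies each non-linearity of the device.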

#### **13.1.2.2 Dynamic Analysis**

The verification of static force signals addresses only the actuator and gear capabilities of the haptic system. To capture the overall mechanical properties of the haptic system, including inertia and damping, it is necessary to characterize the dynamic behavior of the system. Note that the movement of the system will induce an unwanted output signal in dynamic force sensors because of the acceleration of the sensor's mass. This systematic error can be dealt with by an additional calibration, or all measurements have to be conducted in a mechanically fixed condition.


#### **13.1.2.3 Measuring Conditions**

In most cases it is not possible to provide a universally valid test scenario. Hence, before executing the measurements, one should create defined measurement conditions. It might be crucial to use special constraints to obtain the best-fitting measurement values for the specific case. The four major conditions are constraints of the end-effector's motion and can be identified as fixed end, open end, hand-held, and user phantom.


accelerometer can be attached instead of, or in addition to, the force sensor. To derive the velocity, the acceleration can be integrated.


To provide meaningful results and a complete set of measurement values, it might be necessary to repeat all measurements for different positions and orientations of the end-effector in the workspace of a multidimensional system.

## *13.1.3 Output Motion-Dependent Values*

The most common way to measure motion values of a haptic system, including multidimensional ones, is the use of accelerometers. Besides the verification of maximum accelerations, the velocity (by integration of the acceleration signal) and the position (by double integration) of a specimen's end-effector can be determined almost without affecting the measurement. For measurements requiring high accuracy, it might be advisable to use dedicated sensors for measuring velocity or position. In this case, the measurement should be reduced to one degree of freedom at a time.
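The single and double integration of an accelerometer signal can be sketched as follows; the trapezoidal rule is one common choice, and bias-drift compensation (usually needed in practice) is omitted here:

```python
import numpy as np

def integrate_accel(a, dt):
    """Velocity and position from an accelerometer record via trapezoidal
    integration. Sensor bias makes the result drift over time, so real
    signals are typically high-pass filtered or re-zeroed between runs."""
    v = np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) / 2.0) * dt))
    x = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) / 2.0) * dt))
    return v, x

# constant 1 m/s^2 acceleration for one second, sampled at 1 kHz
dt = 1e-3
a = np.ones(1001)
v, x = integrate_accel(a, dt)   # expect v -> 1 m/s, x -> 0.5 m
```

For the constant-acceleration example, the velocity ramps linearly to 1 m/s and the position reaches 0.5 m, matching the analytic result.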

Measuring motion capabilities can be combined with frequency-, step-, or impulse-response measurements. From the characterization of the phase differences between force and motion values, one can derive information about the inertia and damping of the system [46].

While force-dependent values are most important for systems based on the impedance structure, for admittance-based haptic systems the motion-dependent values are the main focus. For a large number of haptic admittance systems, the accuracy of displayable velocities or positions is relevant. For example, when characterizing braille displays, the rise time and stationary accuracy of the different pins are of note. To measure the position of small structures, optical measurement systems can be used, for example laser triangulators for small unidirectional deflections, tracking systems for large multidimensional movements, or vibrometers for highly dynamic movements.

## *13.1.4 Mechanical Properties*

The achievable haptic quality depends directly on the mechanical properties of a haptic system. Mechanical properties can vary depending on the position and orientation in the workspace; hence, these factors should be evaluated as well.


## *13.1.5 Impedance Measurements*

One important measure of haptic displays is the ability to provide a wide range of mechanical impedances. The perceptible haptic impression of a haptic system results from a combination of force reflection and velocity. In this context, the mechanical impedance means the force reflection of the haptic system with respect to the velocity applied by the user. The impedance is a frequency-dependent value and is defined by $Z = F/v$. Besides the mechanical impedance at one operating point, the so-called *Z-width* of the system is of interest [11]. The Z-width is the range of impedances that can be displayed by the haptic device. Usually, the development objective for a haptic system is to make the lowest impedance as close to transparent as possible and the largest impedance as high as possible in order to display stiff walls. Since a haptic system tends to become unstable while displaying very stiff contacts, the upper limit of displayable impedances is given by the maintainable stability of the system.

Usually, the mechanical impedance is displayed in a Bode plot showing the amplitude and phase of the signal. The generated signals can be used for system identification to create models of the system, yielding the damping, stiffness, and inertia of the system.
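Computing the impedance magnitude from simultaneously recorded force and velocity can be sketched as follows, here verified on a synthetic pure damper where $|Z|$ must equal the damping coefficient at the excitation frequency:

```python
import numpy as np

def impedance(force, vel, dt):
    """Mechanical impedance Z(f) = F(f)/v(f) from simultaneously recorded
    force and velocity, via the ratio of their one-sided spectra."""
    F = np.fft.rfft(force)
    V = np.fft.rfft(vel)
    freqs = np.fft.rfftfreq(len(force), dt)
    Z = F / np.where(np.abs(V) > 1e-9, V, 1e-9)  # guard near-empty bins
    return freqs, Z

# synthetic pure damper, d = 5 N s/m: force tracks velocity exactly
dt, f0, d = 1e-3, 10.0, 5.0
t = np.arange(0.0, 1.0, dt)
vel = np.sin(2 * np.pi * f0 * t)
force = d * vel
freqs, Z = impedance(force, vel, dt)
Z_at_f0 = np.abs(Z[np.argmin(np.abs(freqs - f0))])  # expect |Z| close to d
```

In a real measurement, a swept or broadband excitation would populate many frequency bins at once, yielding the full amplitude and phase curves for the Bode plot; a stiffness contributes a $1/\omega$ slope and an inertia an $\omega$ slope to $|Z|$, which is what the system identification exploits.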

**Fig. 13.3** Measurement setup for the evaluation of mechanical impedances [30]

To measure the mechanical impedance of a haptic system, one may use a special setup with external force excitation to input a defined periodic force sweep to the haptic system. To calculate the impedance, a force sensor and a velocity sensor need to be attached between the external force source and the specimen. In this case, the impedance can be calculated directly as the quotient of the measured force and velocity.

An exemplary measurement setup for measuring mechanical impedance is shown in Fig. 13.3. The main parts are a force source consisting of a DC actuator coupled to a linearly guided rod, an attached force sensor as the coupling element between the actuated rod and the specimen, and a velocity measurement system. To measure the velocity of the system, a laser triangulator with an additional analog differentiator is attached.

The setup shown is attached to a hand-held haptic controller based on a delta robot. In this case, the impedance measurement is done for the passive system in different directions at the tool center point of the delta robot to examine its mechanical properties. The resulting impedance amplitude is shown in Fig. 13.4 [30].

To determine the Z-width, one has to measure the impedance of the haptic system when moving in free space to obtain the lowest displayable impedance of the system. To measure the maximum displayable impedance, one can repeat the measurement while rendering a stiff wall with the haptic display, i.e., by fixing the tool center point with the haptic system's actuators at their maximum continuous force.

If no special measurement setup is available to excite the system, it is possible to jiggle the system's end-effector by hand while measuring the force and velocity at the end-effector. In this case one is restricted to the dynamic range of the frequency output capability of a human hand. The calculation of impedances from excitation values and response signals for force or velocity independently is also possible [46].

By measuring the impedance of a system at different points and in different directions in the workspace of the haptic system, one can assess the homogeneity or dexterity of the system. For this, the quotient of the smallest and largest measured impedance of the passive haptic system can be calculated.

## *13.1.6 Special Properties*

Besides the characterization of the output capabilities of haptic interfaces, one can find different values in complex haptic systems that might be of interest regarding the haptic quality of the system.

One of these is, for example, the input signal of a haptic system, which can affect the haptic quality. Hence, the bandwidth and accuracy of the provided signals should be larger and more precise than the requirements set for the displayable haptic feedback. Another point might be the slave's sensing capability in a haptic teleoperation system. The dynamics of the signal can affect the displayable haptic feedback, as can the accuracy of the sensor signals. The latency of (force) signals might also be of interest, as it can heavily affect the transparency of the haptic system [22]. Even the time shift between haptic, visual, and acoustic feedback might be of interest.

Further properties that have to be evaluated depending on the system structure and application are the transparency and the transparency error (see Sect. 7.5.2), latencies in the control loop (especially when the system contains packet-based information transfer such as the internet), control stability, and energy consumption.

## *13.1.7 Measurement of Psychophysical Parameters*

Psychophysical parameters like absolute and differential thresholds are fairly well investigated (see Sect. 2.1). Based on the suggestions of Weisenberger et al., it can nevertheless be useful to measure a psychophysical parameter, since deviations from (well-known) thresholds can be attributed to the fidelity of the device [58]. This procedure is implemented in different evaluation testbeds by Samur [46], but can also be applied individually. To assess the fidelity of a haptic system, absolute and differential thresholds are useful measures, evaluating the resolution and reproducibility of a device. This method is therefore preferable for the evaluation of haptic interfaces. Similarly, the discrimination of haptic properties and signals can be used to assess haptic systems intended for communication, as described in the next section.

## **13.2 Task-Centered Evaluation Methods**

The methods of system-centered evaluation described above are used to determine concrete values for the system's properties. However, to validate whether a system performs correctly, further evaluations of the task performed with the system are needed. Such task-centered methods investigate the ability of a system to perform beneficially in the intended usage.

Typical for this purpose is a simple test task performed by several test subjects under different boundary conditions. A typical boundary condition is, for example, the kind of feedback (no feedback, haptic feedback, visual feedback, haptic and visual feedback) under which the test is performed, or different properties of the haptic system used (for example amplitudes, frequency, speed, etc.). Because of the dependence on test persons and different boundary conditions, one should carefully design the experiment to prevent errors due to inter- and intra-personal effects as well as learning.

Results of task-centered methods can be used to compare different systems on the same task or to assess the effectiveness of haptic systems in a given application. Furthermore, such tests can be useful to identify promising parameters for optimizing a given design or interaction. Starting with these methods, an engineer will take large steps towards the sometimes odd-looking (from an engineer's point of view) approaches of human factors and ergonomics. However, the authors find it advisable to consider such approaches as early as possible in the design process (for example by defining evaluation tests in the requirement derivation process) to incorporate noteworthy aspects that will actually contribute to a good product.

## *13.2.1 Task Performance Tests*

Typical task performance tests are conducted with haptic interfaces using → VR test setups or with teleoperation systems. One should identify a task that is very close to the intended usage of the system so that all interaction primitives involved are used, but that is simple enough to be understood quickly by the test persons and to be completed in time.

Typical embodiments of such tests are given in the following list.


To gain quantified measures from these tests, several outcomes are commonly used. They can be used in combination with almost every test type and have to be chosen according to the goal of the evaluation and the technical capabilities of the test environment.


**Fig. 13.5** Task performance test setups for minimal invasive surgery: **a** pick and place setup of a DaVinci surgical robot at the University Medical Centre Mannheim, Germany, **b** needle transfer setup as reported in [38]. Pictures courtesy of Peter P. Pott, Technische Universität Darmstadt and Katherine J. Kuchenbecker, University of Pennsylvania

**Handling Forces** Since one of the main goals for assistive and teleoperation systems is the reduction of handling forces, the evaluation of average, maximum or contact forces is a common outcome of task performance tests. One has to note that this outcome, as well as some of the above-mentioned error definitions, requires additional sensory equipment like tracking systems or reliable force sensors.

Examples of practical realizations of such tests can be found in a vast number of studies, for example [38], where haptic feedback for robotic surgery is evaluated. Figure 13.5 shows an example using a DaVinci surgical robot. The work of Pongrac gives some general guidelines for the evaluation of virtual reality and teleoperation systems [42]. Further task performance tests for various applications can be found in the studies included in the meta-analysis on the effects of haptic feedback by Nitsch [40].

## *13.2.2 Identification of Haptic Properties and Signals*

One of the main goals of haptic systems intended for communication is to transfer information from the system to the user. Similarly, a teleoperation system has to convey enough information to the user to differentiate between relevant components and materials, for example between tissue and vessels in a surgical application. For such kinds of evaluations, Tan et al. proposed to evaluate the → Information Transfer (IT) of a haptic application, a measure that is widely used in the evaluation of haptic systems [46]. Jones and Tan give a detailed explanation in [28], on which this section is predominantly based.

This approach is oriented on an information-theoretical framework and is normally based on an absolute identification experiment. A user is presented one of *K* stimuli *S<sub>i</sub>* and has to choose a response from a set of *K* responses *R<sub>j</sub>* with *i*, *j* = 1, ..., *K*. Based on the answers, a confusion matrix is constructed that denotes how often each response *R<sub>j</sub>* was chosen when a certain stimulus *S<sub>i</sub>* was presented. Stimuli are represented in rows while responses are given in columns. Based on this matrix, an estimate of the Information Transfer *IT*<sub>est</sub> can be calculated according to Eq. (13.1).

$$IT\_{\rm est} = \sum\_{j=1}^{K} \sum\_{i=1}^{K} \frac{n\_{ij}}{n} \log\_2 \left( \frac{n\_{ij} \cdot n}{n\_i \cdot n\_j} \right) \text{bit} \tag{13.1}$$

with
- *n* total number of trials
- *n<sub>ij</sub>* number of occurrences of (*S<sub>i</sub>*, *R<sub>j</sub>*)
- *n<sub>i</sub>* row sum
- *n<sub>j</sub>* column sum

Based on the estimation *IT*<sub>est</sub>, the number of correctly identifiable stimulus levels *n<sub>C</sub>* can be calculated according to Eq. (13.2).

$$n\_C = 2^{IT\_{\text{est}}} \tag{13.2}$$

The upper limit of *n<sub>C</sub>* for a given information channel is also called *channel capacity*. For haptics, typical values of *n<sub>C</sub>* are in the range of 3…4 for unidimensional information transfer [9, 14]. Specialized systems like the Tactuator achieve higher values of up to 12 bit [52]. For auditory and visual stimuli, values of *n<sub>C</sub>* tend to be higher, in the range of 5…7 identifiable levels of, for example, force or compliance. To correctly measure the channel capacity, a sufficiently high number of stimulus alternatives has to be incorporated in the study. Jones and Tan give a rule of thumb to evaluate *K* = 2<sup>*IT*<sub>est</sub>+1…2</sup> stimulus alternatives in *n* > 5*K*<sup>2</sup> trials to minimize statistical bias and to exceed the maximum channel capacity.
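As a concrete illustration of Eqs. (13.1) and (13.2), the following sketch estimates the information transfer from a confusion matrix. The function names and the example matrix are illustrative assumptions, not taken from the cited studies.

```python
import numpy as np

def information_transfer(confusion):
    """IT_est according to Eq. (13.1) from a K x K confusion matrix;
    rows are stimuli S_i, columns are responses R_j."""
    n_ij = np.asarray(confusion, dtype=float)
    n = n_ij.sum()                           # total number of trials
    n_i = n_ij.sum(axis=1, keepdims=True)    # row sums
    n_j = n_ij.sum(axis=0, keepdims=True)    # column sums
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = n_ij / n * np.log2(n_ij * n / (n_i * n_j))
    return float(np.nansum(terms))           # empty cells contribute 0 bit

def identifiable_levels(it_est):
    """Number of correctly identifiable stimulus levels n_C, Eq. (13.2)."""
    return 2.0 ** it_est

# Perfect identification of K = 4 stimuli yields IT_est = 2 bit, n_C = 4
it = information_transfer(np.eye(4) * 10)
print(it, identifiable_levels(it))  # → 2.0 4.0
```

With a real confusion matrix such as the one in Fig. 13.6, the same two functions reproduce the reported values.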

The specification of the information transfer in terms of *IT* is preferable to the description in terms of percentage correct that can be found in many studies, since the measure of IT is sensitive to chance performance as well as to confusions of the response alternatives by the test subject. One has to note that the information transfer capacity of a channel is considerably lower than the number of → Just Noticeable Differences (JND) in the same range would suggest. According to Durlach et al., this is due to the fact that information transfer also involves the subject's memory and not only the sensory system [14]. In terms of the general perception primitives described in Sect. 1.4.1, the → JND measures the discrimination ability of a subject, whereas *IT* describes the ability to identify a certain stimulus. The latter is much more difficult.

Test setups evaluating the identification of haptic properties and signals are based on real and virtual samples with different object properties (see for example the vast number of experiments with real objects by the group of Kappers [29]) for the evaluation of teleoperation systems and haptic interfaces. A typical experiment could assess the number of different compliances that can be discriminated when using a haptic teleoperation system, or the different amounts of damping that can be rendered and perceived when using → VR interfaces and corresponding applications. An example is presented by Scilingo et al., who investigate the ability of a magnetorheological haptic display to render compliances [49]. For the evaluation of haptic displays intended for communication, stimuli are constructed from the different attributes the system can display (for example speed, distance, frequency, amplitude, etc.) and the information transfer is considered with respect to these attributes. This is illustrated in the following example.

#### **Example: Tactile Shear Display by Gleeson et al. [18]**

Gleeson et al. developed a tactile shear display to convey information in mobile applications. Figure 13.6 shows the completed system, which is based on a pin that can be moved in different directions with different kinematic properties (see [17, 23] for further information). A confusion matrix of several tests with the tactile shear interface, which displayed stimuli in North, South, East and West directions, is also given in Fig. 13.6.

Based on the values in the confusion matrix, the calculations according to Eqs. (13.1) and (13.2) give *IT*<sub>est</sub> = 1.23 bit and *n<sub>C</sub>* = 2.35. With regard to the above rules of thumb, one could argue that the test setup contained too small a number of stimulus alternatives *K* to evaluate the maximum channel capacity, but that was not the intention of the original study the data were taken from. One also has to note that Gleeson et al. conducted several studies with different hypotheses, and the confusion matrix in Fig. 13.6 contains all trials from these different studies. It is only used here as an example for the usage and interpretation of → IT.

**Fig. 13.6** Evaluation result of a shear display (**a**) with regard to the displayed directions, used with permission by Brian T. Gleeson (**b**). Values in the confusion matrix are taken from [18] and report all stimulus presentations of the study without differentiating further experiment variables like moving distance and speed

## *13.2.3 Information Input Capacity (Fitts' Law)*

The evaluation of Information Input Capacity is in some sense the opposite of the assessment of information transfer as described in the last section. While that quantity measures the amount of information transferred from system to user (interaction path **P'** in Fig. 2.24), the Information Input Capacity measures the amount of information that can be transferred from user to system (interaction path **I'**). The concept was developed by Fitts [16] and is often referred to as *Fitts' law*. It describes the accuracy of movements with regard to the size and distance of the movement target. It was proven originally for unidirectional movement tasks like tapping (Fig. 13.7), peg-in-hole and item transfer, and extended to two-dimensional tasks by Accot and Zhai [2].

**Fig. 13.7** Tapping experiment used to evaluate the information input capacity. Test persons were told to tap both target regions with width *w* at distance *d* from each other as fast as possible without missing the targets. Figure is based on [16]

To measure the information input capacity, Fitts defined the index of performance *I*<sup>p</sup> according to Eq. (13.3)

$$I\_{\rm p} = -\frac{1}{t} \log\_2 \left( \frac{w}{2d} \right) \text{ in bits/s} \tag{13.3}$$

with *w* as a measure of the target size and *d* as the distance between targets, as shown exemplarily in Fig. 13.7. The logarithmic term is also called the index of difficulty *I*<sub>D</sub>. To incorporate the temporal dimension, the average time for a single movement *t* is used.

This approach is, however, difficult to use when one wants to compare different systems or interaction setups. Therefore, the relation given in Eq. (13.4) is used more often. It relates the movement time *t*<sub>m</sub> to the Index of Difficulty *I*<sub>D</sub> and two device- and test-dependent constants *c<sub>a</sub>* and *c<sub>b</sub>*.

$$t\_{\rm m} = c\_a + c\_b \log\_2 \left( \frac{d}{w} + 1 \right) \tag{13.4}$$

Equation (13.4) can be derived directly from Eq. (13.3), but contains a slightly modified formulation of the index of difficulty *I*<sub>D</sub>. According to MacKenzie and Buxton, this so-called *Shannon formulation* provides slightly better fits to given data, is more consistent with the underlying information theorem, and ensures an always positive rating for the index of difficulty [37]. Given that, it is obvious that the device- and test-dependent constant *c<sub>b</sub>* directly relates to the index of performance *I*<sub>p</sub>. In an evaluation, one will conduct tests with different indices of difficulty and record the movement time needed to move to and select a target. The values of *c<sub>a</sub>* and *c<sub>b</sub>* are determined by fitting Eq. (13.4) to the data. The indices of performance acquired this way allow measuring the input capability of a user with a given system and comparing different haptic systems.
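The fitting procedure can be sketched as follows. The movement-time data below are hypothetical values chosen only to illustrate how *c*<sub>a</sub>, *c*<sub>b</sub> and *I*<sub>p</sub> are determined; they do not come from any cited study.

```python
import numpy as np

# Hypothetical tapping data: target distance d and width w (both in mm)
# with the mean movement time t_m in seconds for each condition.
d   = np.array([ 50, 100, 200, 400,  50, 200])
w   = np.array([ 20,  20,  20,  20,   5,   5])
t_m = np.array([0.35, 0.46, 0.58, 0.71, 0.62, 0.86])

# Shannon formulation of the index of difficulty from Eq. (13.4)
I_D = np.log2(d / w + 1)

# Linear least-squares fit of t_m = c_a + c_b * I_D
c_b, c_a = np.polyfit(I_D, t_m, 1)

print(f"c_a = {c_a:.3f} s, c_b = {c_b:.3f} s/bit")
print(f"index of performance I_p = {1 / c_b:.1f} bit/s")
```

The reciprocal of the fitted slope *c*<sub>b</sub> gives the index of performance in bit/s, which can then be compared across devices or feedback conditions.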

For haptics, one can find studies employing *Fitts' Law* to investigate the effect of haptic feedback on task performance with different interface configurations [19, 46, 56] and multimodal interfaces [10].

## **13.3 User-Centered Evaluation Methods**

The usage of haptic systems will not only have an effect on the task performance of a user, but also on the user himself. User-centered evaluation methods will give some insights into these processes in order to assess advantages and disadvantages of haptic systems in the investigated application.

Compared to task- and system-centered evaluation methods, there is no prevailing test form for user-centered methods. Depending on the intended informative value of the test one will find comparative tests as well as tests producing single test values.

## *13.3.1 Workload*

Ergonomics and the occupational sciences distinguish different forms of workload as defined by the standard ISO 10075. For the evaluation of haptic systems, the influence of a haptic system on the physical and mental workload of a user is probably the most interesting aspect. The assessment of workload is mainly of interest for the following intentions of haptic feedback:


As an overview, Fig. 13.8 shows some factors influencing the workload of a human operator that can probably be influenced by the designer of a haptic system and the usage definition. Based on these considerations, one should also note that a medium workload is preferable for the safe usage of a haptic system—too low or too excessive demands on the user will result in errors.

In the following, a quite simplified view on workload evaluation is presented in order to have applicable methods for the comparison of different systems or different conditions of a single system. Aspects like emotional workload are neglected, since they mainly depend on the conditions a task is executed in and not on the system used [48]. The application of the methods presented here therefore does not fulfill all requirements of a workload analysis as formulated by ISO 10075-3.

**Table 13.1** Biometric indications of newly arriving information and evidence for information processing. Table based on [48]

One of the easiest ways to assess workload is the usage of standardized questionnaires. One of the most widespread is the *NASA Task Load Index (NASA-TLX)* [20]. The resulting scale is a workload measure based on six different areas: mental demand, physical demand, temporal demand, performance, effort and frustration. Test persons first evaluate the contribution of each of these areas to the overall workload; in a second step, the areas are rated on bipolar scales [48]. Besides the NASA-TLX, there are quite a few other workload assessment tools. An overview is given in [36]; the German Federal Institute for Occupational Safety and Health (BAuA) also provides an online database and toolbox for this purpose [7].
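The two-step procedure can be made concrete with a small sketch that computes a weighted NASA-TLX score from pairwise-comparison weights and 0–100 ratings. The function and variable names are assumptions for illustration, not part of the official instrument.

```python
SCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def nasa_tlx_score(ratings, weights):
    """Weighted NASA-TLX workload score.

    ratings: dict scale -> rating on a 0..100 scale
    weights: dict scale -> number of wins in the 15 pairwise comparisons
    """
    assert sum(weights[s] for s in SCALES) == 15, "6 scales -> 15 comparisons"
    return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

# Illustrative data for one test person
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 30}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
print(nasa_tlx_score(ratings, weights))  # → 55.0
```

Scores from several subjects and conditions can then be compared with the usual statistical methods.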

Another possibility to assess workload is the analysis of biometric measures. Pulse rate, blood pressure, respiratory frequency, brain activity, → electromyography (EMG), pupil diameter, blinking frequency, skin conductivity and eye movement are indicators of a person's attention and concentration. An advantage of these biometric measures is the possibility of continuous data acquisition without major interference with the actual task. Disadvantages are the dependence on the individual user (therefore one often has to record a baseline before actually performing the experiment) and the lack of a definite assignment of biometric measures to a defined measure of workload. Table 13.1 gives some biometric indications that can be used for a selection of suitable measurements.

For haptic systems designed to assist a human operator for example in handling tasks with complex trajectories, the assessment of muscular activity and fatigue could be of further interest. While muscular activity is monitored by EMG, the assessment of muscular fatigue is somewhat more complicated. According to [55], EMG signals do not correlate well with muscular fatigue. The proposed evaluation based on maximum voluntary contraction (MVC) of the muscles is probably challenging to implement in a test of a haptic system. Further possibilities to assess muscular fatigue can be adapted from sport science literature [60], if the presented possibilities are not applicable or insufficient for the intended test.

## *13.3.2 Subjective Evaluation*

While subjective evaluations can be done with regard to almost every aspect of the usage of a haptic system, the measure of the amount of immersion into a virtual or teleoperated environment is of interest for these kinds of applications. Pongrac reports several standardized questionnaires for this purpose [42] like for example the *Witmer-Singer-Presence-Questionnaire* or the *ITC-Sense of Presence Inventory*. More recently, Chertoff et al. presented an approach for the evaluation of multimodal virtual environments claiming to improve the integration of sensory, cognitive, affective, personal and social elements of experience [8]. Although primarily developed for the assessment of computer games, this could be a valid alternative for the evaluation of haptic telepresence systems and -→ VR interactions as well.

For the design of training systems, a subjective evaluation, i.e. a self-report of the test subjects, that compares the virtual or teleoperated training condition to the real condition without system-mediated haptic feedback is advisable to investigate acceptance processes of the users. Kron et al. investigated preferences of users for certain kinematic structures of a telepresence system for the disposal of explosive ordnance by using a subjective evaluation after usage [32]. Further evaluations could cover emotional and social aspects of a system. This may seem exaggerated for the majority of haptic systems, but is of importance in application areas like → Ambient Assisted Living (AAL) or other assistive systems for elderly and disabled users. In that case, one also has to consider impacts on other people who do not directly interact with the haptic system but are only involved as a relative or assistant.

Self-reports can also be used to evaluate the subjective performance of users when some kind of feedback about the task performance or criteria for good performance are given [59]. Usability aspects like the *joy of use* could likewise be assessed by self-reports or by evaluating verbal statements made during the evaluation test [47, Chap. 6].

## *13.3.3 Learning Effects*

Especially for training applications, the results of regular performance tests can be compared over time and learning effects can be quantified in terms of these test outcomes. Another approach to measure learning effects is the comparison of a trained group of subjects with an untrained control group in the intended application, as shown for example by Ahlberg et al. for the effect of → VR training on the error rate in cholecystectomies performed by novice surgeons [4].

## *13.3.4 Effects on Performance in Other Domains*

Another approach to measure the effect of a haptic system on the user is to assess the performance in another domain. One will predominantly find such kinds of tests in areas where a haptic system is intended to assist in a secondary task, while the primary task is not to be affected. Obviously, this is true for communication applications in vehicles. In that case, the standardized → Lane Change Test (LCT) is performed with interaction tasks using the haptic communication system as a secondary task to driving on real roads or in a simulator, for example. Besides the standardized outcomes like the mean lane deviation, other parameters such as the viewing direction and duration or the task performance of the secondary tasks can be evaluated. Examples of such kinds of evaluation of touch devices can be found in [13, 35, 50].

There are also cases where such tests aim at the primary goal of the intended application: Várhelyi et al. investigated the effect of an active accelerator pedal that creates a counterforce when the speed limit is exceeded. The evaluation conducted in 284 vehicles showed an improvement of the drivers' compliance with the speed limit as well as reduced average speeds, speed variability, and emission volumes [54].

## **13.4 Formal Framework for Evaluation**

According to the relevant ISO standards, there are three main aspects of assessment: Validation, Verification and Usability. They establish a link between the user, the requirements, and the system under consideration. Validation establishes a relationship between the user and the requirements, verification relates the requirements to the system, and usability is established between the user and the system (Fig. 13.9).

**Fig. 13.9** Assessment main aspects and their relation

## *13.4.1 Validation*

Validation evaluates the requirements specification in terms of its accuracy and completeness; it answers the question "Are we building the right system?" [5]. This process performs a comparative assessment and shows whether the requirements definition is correct or not [26]. The important criteria should be identified from various sources and specified in the requirements. If the criteria cannot be determined, usability testing becomes more important than validation.

Requirements reviews support validation alongside development. User requirements can be used as input to an acquisition process, and validation can be performed on them. Communication between users and developers has a great impact on the quality of validation and plays an important role in the requirements documentation. Pre-validated haptic requirements can be found in ISO standards [27].

In tactile/haptic interaction, validation is guided by the interaction goal. Therefore, any requirement that is relevant to the goal is important.

For example, the goal could be to draw something by moving a pen in a virtual world. Detailed requirements can be specified for the end effector gripped by the hand, the pressure of the pen on the virtual screen, the force feedback to the user, and the simulation of colour and movement. From a technical point of view, these could be dynamic movement and pen pressure. Several scenarios can be used to support the process. Each scenario can add further requirements such as pen speed, line size and thickness. Various limitations such as available technology and budget should also be considered.

To repeat the tests and obtain a wide range of results, several people can perform the tasks and give their opinions. Before or after this step, the requirements can be prioritised and reviewed according to the ideas of different people. This can open up several useful objectives for the future development of tactile/haptic systems.

## *13.4.2 Verification*

As validation does for the requirements, verification checks the system and its operation for accuracy and completeness. Answering the question "Are we building the system right?" is verification [5]. It can be said that verification checks whether the system can meet the design requirements or not [26].

To do this, various criteria need to be defined; if there are no specific criteria, usability testing is a way to verify. Verification is done during the development process by comparing system specifications and requirements. However, verification during development only compares user requirements with published system specifications, trusting that the published specifications have been properly verified. The determination of appropriate measurement techniques has an impact on the quality of verification. Appropriate measures and measurement techniques can be found in ISO standards [27].

In general, electromechanics are the basis of tactile/haptic systems, regardless of the software for the tactile/haptic scenario used in the interaction. Such systems are primarily focused on skin stimulation and human body perception, and mainly use mechanical stimulation, complemented by chemical, thermal and electrical stimulation.

## *13.4.3 Usability*

As mentioned earlier, usability is a link between the user and the system and tests the user's ability to operate a system. Usability is also defined as the quality of use of a system by a user to achieve a particular goal with sufficient comfort, efficiency and accuracy [24]. It also includes the tasks, the equipment such as software and hardware, and the social environment of system use. In other words, usability answers the question "Is the system suitable for the users and their tasks?". Usability testing can be done both during development and during acquisition of the system; in tactile/haptic systems, it can be defined on the basis of the quality of sensory transmission.

## *13.4.4 Handling Requirements for Devices*

As mentioned before, each tactile/haptic device should have a set of specifications (according to the appropriate ISO standard) that relate to its performance and intended use. Based on these specifications, a meaningful comparison between different tactile/haptic devices is possible, provided the same measurement methods are used. Each requirement should therefore be measurable and testable.

#### **13.4.4.1 What Should Be Measured?**

If a designer wants to fully characterize the device, each clause in the specification should be either noted or measured, depending on the quantity in question. These measurable tactile/haptic specifications are divided into several categories, such as environment, force, general and temporal, and ergonomics:

- Environment: temperature, texture, weight, shape, electrical, mechanical and acoustic properties, installation and maintenance.
- Force: maximum and minimum continuous force and torque; peak acceleration, force and torque; inertia and resistance.
- General and temporal: system delay, bandwidth, frequency, adjustability.
- Ergonomics: interface, motion parameters, workspace, degrees of freedom.

#### **13.4.4.2 Measurement Resolution**

There are different ways to measure the characteristics of a tactile/haptic system, and the methods are defined depending on the feature. As a rule, however, the resolution of the measuring devices should be about ten times higher than the resolution to be verified. In addition, high-resolution measurements are not always necessary: it may be sufficient to measure a workspace to the centimeter rather than to the millimeter.

#### **13.4.4.3 How Should It Be Measured?**

A test environment should be measurable and repeatable. The procedure is designed according to the situation. The procedure should include at least three usability components specified by ISO 9241-920 [27]: Effectiveness, Efficiency, and Satisfaction. For usability evaluation, different users put themselves in the test situation and follow the test procedure using the tactile/haptic system [15].

#### **13.4.4.4 Effectiveness**

Useful tests will provide an effectiveness or success score, determined based on the type of test. For example, if the tests are yes/no tests, the score will be the ratio between successful and total attempts. The more users participate, the lower the bias. Also, the selection of users should be random, and the required number of participants depends on the desired level of certainty.

The identification and reading speed, latency, percentage of success and target achievement, and average accuracy can be used to measure effectiveness.

#### **13.4.4.5 Efficiency**

The target attainment time in a typical tactile/haptic interaction can be measured. The individual effectiveness scores divided by the time required then yield the efficiency; the average of these scores is the baseline efficiency. This also makes it possible to decide which of two tests is more efficient. Efficiency measures can include the time to complete the first task, the mean time to perform a task, and the time to correct errors.
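The effectiveness and efficiency measures described above can be combined in a minimal sketch. The definitions follow the text; the function names and the example numbers are assumptions for illustration.

```python
def effectiveness(successes, attempts):
    """Success ratio of a yes/no test, in the range 0..1."""
    return successes / attempts

def efficiency(effectiveness_score, time_s):
    """Effectiveness per unit task time (score per second)."""
    return effectiveness_score / time_s

# Comparing two hypothetical setups:
# 18 of 20 targets reached in 45 s vs. 20 of 20 targets in 60 s.
e1 = efficiency(effectiveness(18, 20), 45.0)
e2 = efficiency(effectiveness(20, 20), 60.0)
print("setup 1 is more efficient" if e1 > e2 else "setup 2 is more efficient")
```

Note that a faster but slightly less accurate setup can still come out as the more efficient one, which is exactly the trade-off this measure is meant to expose.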

## *13.4.5 Satisfaction (ISO)*

ISO 16982 [24] presents twelve methods for evaluating user satisfaction with an interaction. Since these methods cover a wide range, many factors should be considered when selecting one of them. These factors can be the simplicity or complexity of the task, the type of object being acquired, the design or operation of a system, and the form of user participation.

For example, if a questionnaire is used, the questions may be completed by the user or be based on an interview. An ordinal scale should be used to assess user satisfaction: if the statement is "The procedure was easy to understand," the answers can range from "completely agree" to "completely disagree."

The analyst collects data on usability components and specific interaction aspects. These data are then analyzed using common statistical methods to obtain the results. ISO 25062 [25] defines an industry-standard format for usability test reports.

For example, in a virtual glove and touchscreen test, a monitoring computer measures the success of communication and the time required. A questionnaire also provides information about the user experience. In practice, comparing two or more means of interaction provides higher quality data. By changing a small number of variables, the effect of the variables can be compared and the most effective parameters can be inferred. In this example, these parameters may be glove vibration, internet speed, and frequency.

## **13.5 Evaluation of Haptic Systems, Industrial Standards**

Wenliang Zhou and Jörg Reisinger

The physical evaluation of haptic displays with regard to their ability to represent a specific haptic property is mandatory when developing haptic devices for high-volume series production in the mobile, white goods or automotive market. The grown awareness of subjective quality and the request for an increased variety of haptic effects require a clear and transferable objective description of the haptic feeling in order to describe, define, reproduce and evaluate it on any platform. This subsection shortly discusses the current state of haptic measurement methodology, focusing on push feedback.

## *13.5.1 Evaluation of Control's Subjective Haptic Feedback*

To specify and evaluate haptic characteristics, static measurements, i.e. force-displacement curves for push buttons and torque-angle curves for rotary controllers, have frequently been used in industry for the last three or four decades [6, 31, 41, 44, 57]. However, this measurement only describes the static properties of control elements and therefore cannot capture dynamic effects. For example, control elements that have the same static properties but still feel different cannot be distinguished by the static measurement method.

However, a number of results indicate that dynamic effects also play a role in haptic perception. For example, [43] describes that inertia and stiffness have an influence on the perception of damping, and [34] shows the mutual influence of damping on the perception of inertia. [33] describes the significantly increased realism of event-based feedback of virtual surfaces, and [44] describes event-based perception observed for push buttons (also compare Chap. 5, Haptic Design of Mechanical Controls). At the same time, the practical application of static measurement reaches its limits as well, not being able to describe certain cases or new applications. In summary, it is necessary to record and evaluate the dynamic properties of haptic feedback in order to describe the perceived quality of haptic events.

At the beginning of this task, it is worthwhile to look at the entire system and the relevant properties of the whole test procedure. The results should generally be transferable and rate subjective properties by objective values. In order to analyze the effects of the dynamic behavior of control elements on haptic perception and to specify the desired dynamic behavior for their design, we need to be able to characterize the interaction taking place between control elements and the human actuating them. [61] introduced a haptic measurement method that measures the dynamic interaction between a control element and an instrumented robotic probe. Since the resulting interaction depends not only on the control element but also on the mechanical properties of the probe, a finger-like measurement probe has to be used instead of a rigid one to imitate human-like interactions. This interaction-based dynamic measurement method captures the dynamic haptic characteristics of control elements at the real working point very efficiently. The measurement of an active haptic system by, for example, a laser vibrometer is limited to a purely technical description of the system and its eigenvibrations; since it does not consider the influence of the human finger, its results are not transferable to perceived haptics.

#### **13.5.1.1 Static Measurement**

A common method to measure force-displacement curves is to use a positioning system with a rigid probe that pushes the control element with a constant velocity [6, 31, 41, 44, 57], as shown in Fig. 13.10(left). The pushing velocity has to be chosen low (around 0.1 mm/s) so that dynamic effects of control elements due to inertia or damping are negligible and only stationary behavior is measured. From the force-displacement curve, technical features like lead travel and switching force (Fig. 13.10(right)) are derived and correlated with subjective percepts in psychophysical studies, which already reveal the limits in [44], as also described in Sect. 14.1. Note that this curve describes only static properties, e.g. the stiffness of control elements.
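
To make the feature extraction concrete, the following sketch derives a switching force, switch travel and snap drop from a synthetic force-displacement curve. The curve shape, the numbers and the feature names are illustrative assumptions, not the exact definitions used on automotive test benches:

```python
import numpy as np

def snap_features(displacement_mm, force_n):
    """Derive simple static features from a force-displacement curve.

    The switching point is taken as the local force maximum before the
    snap (the drop in reaction force); the snap drop is the decrease
    from that maximum to the following local minimum.
    """
    i_max = int(np.argmax(force_n))                  # force peak before the snap
    i_min = i_max + int(np.argmin(force_n[i_max:]))  # force valley after the snap
    return {
        "switching_force_n": float(force_n[i_max]),
        "switch_travel_mm": float(displacement_mm[i_max]),
        "snap_drop_n": float(force_n[i_max] - force_n[i_min]),
    }

# Synthetic curve: linear rise to 3 N at 1 mm, snap down to 1.5 N, rise again.
x = np.linspace(0.0, 2.0, 201)
f = np.where(x < 1.0, 3.0 * x,
             np.where(x < 1.2, 3.0 - 7.5 * (x - 1.0), 1.5 + 1.0 * (x - 1.2)))
print(snap_features(x, f))
```

On real measurement data, the same logic applies after low-pass filtering; only the first snap of a multi-stage switch would need additional segmentation.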

However, when pushing a button or turning a rotary controller, the interaction between the finger and the control element is a dynamic process, and control elements with the same stiffness but different inertias or viscoelasticity can still feel very different, although all the static measurements are the same (Fig. 13.10 for an

**Fig. 13.10** Static measurement for a push button: Measurement device (left); Force-displacement curve and technical features derived from the curve (right)

**Fig. 13.11** Two identical push buttons (left), one of them with an additional mass (2.5 g), feel different (the snap of the one with more mass feels dull and unclear), but their static measurements (right) are the same

example). Therefore, static measurements are not sufficient to capture all haptic characteristics of control elements (Fig. 13.11).

Furthermore, over the past decade, control elements have no longer been limited to push buttons and rotary controllers. Diverse new concepts have been developed, for example touchpads or touchscreens with active haptic feedback (Fig. 13.12(left)). The stiffness of such touchpads can be described by a progressive spring, which means that no snap<sup>1</sup> appears in the static force-displacement curve. The actual snap, caused by an impulse generated by the actuator of the touchpad to create the sensation of a switch, does not change the static force. If this snap is generated differently, the touchpad will feel different, although its stiffness is the same (Fig. 13.12(right) for an example). Again, the static measurement cannot capture these dynamic haptic characteristics.

To measure the dynamic properties of control elements, one might consider using the static measurement device to excite the control element and then identify its dynamic parameters. However, to sufficiently stimulate the control element, the input signal must be highly dynamic and, at the same time, the total travel of a control

<sup>1</sup> When we push a button, we can feel and hear a "click". This event is called the snap. In the static force-displacement curve of a conventional push button, the snap can be seen as a drop in the reaction force. The snap plays an essential role in the haptic perception of control elements.

**Fig. 13.12** Touchpad with active haptic feedback (left); Two touchpad samples with different haptic characteristics generated by their actuators feel different (one snap feels strong and the other weak), but they have the same static force-displacement curve, in which no snap (a drop in force) can be found but only a small spike (right)

element is usually very short, which means that a highly dynamic measurement device with a very high sampling frequency is necessary. More importantly, there are diverse types of control elements in an automobile, and it is difficult to find a general model structure to describe all of them. Consequently, an appropriate model structure would have to be found for each specific control element type before it can be identified. In summary, direct identification of control element parameters is not a cost- and time-effective approach to capture the dynamic characteristics of control elements in the automotive industry.

#### **13.5.1.2 Interaction-Based Dynamic Measurement**

To overcome the restrictions of static measurements, a new *interaction-based dynamic measurement method (IDM)* is introduced. The main idea of this method is that, instead of directly identifying the dynamic parameters of control elements, the element is characterized indirectly by features derived from interaction signals. The human finger is considered as a mechanical system that is excited by the interaction force applied to it. By measuring this force along with the displacement and its derivatives, the dynamic haptic characteristics of control elements are captured indirectly. The interaction depends not only on the control element but also on the finger, as they are two physically coupled dynamical systems and the dynamics of each affects the overall dynamics. Thus, it is desirable that the measurement device mimics the dynamic behavior of an interacting human finger to imitate human-like interactions. If the imitated interaction signals differ significantly from the real interaction signals, some information relevant to haptic perception might be filtered out or some irrelevant information might be amplified.

The main principle of the new measurement method is shown in Fig. 13.13(left): the measurement probe and the control element are modeled as two coupled mass-damper-spring systems. The impedance parameters *cp*, *dp*, *mp* and *cs*(*x*), *ds*, *ms* represent the stiffness, damping and mass of the probe and of the push button, respectively (subscript *p* for probe and *s* for switch).

**Fig. 13.13** Interaction model when pushing a button (left); Finger-like measurement device (right)

Note that the static measurement is only able to measure *cs*(*x*), which is non-linear and depends on the displacement *x*(*t*). The input of the coupled system is the probe movement *xp*(*t*). The interaction force is denoted by *f*(*t*). The equations of motion of this system are given by:

$$m_p \ddot{x}(t) + d_p(\dot{x}(t) - \dot{x}_p(t)) + c_p(x(t) - x_p(t)) = f(t) \tag{13.5}$$

$$m_s \ddot{x}(t) + d_s \dot{x}(t) + c_s(x)x(t) = f(t). \tag{13.6}$$

#### **Measurement Device and Finger Parameters**

Instead of the rigid probe used in the static measurement, a mass-damper-spring system is chosen as a finger-like probe to push buttons with a constant finger-like pushing velocity.2 The components of the probe were chosen iteratively, testing in each iteration whether the probe can reproduce the real interaction. The resulting probe is shown in Fig. 13.13(right). It consists of a spring element with a certain damping and an accelerometer. The impedance parameters of this probe are 1.8 N/mm (stiffness), 0.92 Ns/m (damping) and 6.6 g (mass). The reason for using an accelerometer instead of a force sensor is that a force sensor is usually too big and too heavy compared with a finger. Note that in Eq. (13.5), the interaction force *f*(*t*) can be derived from the acceleration *x*¨(*t*) by integration (the starting conditions must be known, though), when the movement and impedance of the probe are known.
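
The reconstruction of *f*(*t*) from the measured acceleration via Eq. (13.5) can be sketched as follows. The probe impedance values are the ones stated above; the contact motion *x*(*t*) used as a check is a synthetic assumption, chosen so that the exact force is known analytically:

```python
import numpy as np

# Probe impedance from the text: 1.8 N/mm stiffness, 0.92 Ns/m damping, 6.6 g mass.
C_P = 1800.0   # N/m
D_P = 0.92     # Ns/m
M_P = 0.0066   # kg

def interaction_force(t, acc, v0, x0, v_probe):
    """Reconstruct the interaction force f(t) per Eq. (13.5): integrate the
    measured acceleration twice (trapezoidal rule, with known starting
    conditions v0, x0) and combine with the probe impedance. The probe
    base x_p(t) is assumed to move at the constant velocity v_probe."""
    vel = v0 + np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * np.diff(t))))
    pos = x0 + np.concatenate(([0.0], np.cumsum(0.5 * (vel[1:] + vel[:-1]) * np.diff(t))))
    x_p = v_probe * t
    return M_P * acc + D_P * (vel - v_probe) + C_P * (pos - x_p)

# Check with a synthetic contact motion x(t) = v*t + A*sin(w*t)
v, A, w = 0.007, 1e-4, 2 * np.pi * 300.0
t = np.linspace(0.0, 0.02, 20001)
acc = -A * w**2 * np.sin(w * t)
f = interaction_force(t, acc, v0=v + A * w, x0=0.0, v_probe=v)
f_exact = M_P * acc + D_P * A * w * np.cos(w * t) + C_P * A * np.sin(w * t)
print(np.max(np.abs(f - f_exact)))  # small numerical error
```

With real data, *x_p*(*t*) and the starting conditions come from the positioning system; the check here merely confirms that the double integration is accurate enough for the frequency range of interest.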

This probe was validated by measuring and comparing the real interaction between fingers and control elements with the reproduced interaction between the probe and the same control elements. The probe with an accelerometer pushes different control elements at 7 mm/s, and the acceleration during the snap is measured. Using the same accelerometer, attached to the surfaces of the control elements, fingers of different subjects push the control elements with a gesture and velocity similar to those typically used when pushing buttons in an automobile. The results show that the probe is a

<sup>2</sup> To determine the average pushing velocity, a pilot study with 5 subjects was carried out. It was observed that during pushing and the subsequent release, the finger moves with an almost constant velocity. The average velocity is 7 mm/s.

**Fig. 13.14** Three measurements of acceleration (left: time domain; right: frequency domain) between three different fingers and a push button (red curves) are compared to one measurement of acceleration between the probe and the same button (blue curve). The two groups of curves are very similar

**Fig. 13.15** Acceleration in the interaction during pushing a button is simulated (Interaction I). If the probe parameters are changed in the simulation, the interaction will be changed significantly. Interaction II: Probe damping is 3 times greater; Interaction III: Probe stiffness is 10 times greater; Interaction IV: Probe mass is 10 times greater

finger-like measurement device, as it can reproduce interactions similar to those of a finger. One of these results is presented in Fig. 13.14.

The simulation examples in Fig. 13.15 demonstrate how different the interaction becomes if the mechanical parameters of the probe are changed.

#### **Application Examples**

To show the effectiveness of the new measurement method, acceleration signals during a snap of the push buttons and touchpads mentioned in Sect. 13.2 (Figs. 13.10 and 13.12) are measured with the new finger-like probe, pushing with a velocity of 7 mm/s.

The dynamic measurement method captures the different haptic properties of the button with the additional mass. As shown in Fig. 13.16, this button shows a smaller acceleration, as large amounts of the signal energy are filtered out by the mass. Thus, the snap feels dull and unclear.

The two touchpads with the same stiffness but different active feedback differ significantly in the dynamic measurement, as shown in Fig. 13.17. Sample

**Fig. 13.16** Dynamic measurement of two push buttons with the same force-displacement curve, but different masses

**Fig. 13.17** Dynamic measurement of two touchpad samples with the same stiffness, but different active haptic feedback

I has a much larger acceleration and more signal energy; therefore, its feedback feels stronger.

#### **Derivation of Features from Measurement**

Technical features are derived from the dynamic measurement signals, e.g. the measured acceleration. As shown in Figs. 13.14 and 13.15, the acceleration signal exhibits a damped oscillation. In order to describe this oscillation, the following features are defined (Fig. 13.18):


<sup>3</sup> Only the interaction from 0 Hz to 1.5 kHz is considered, since humans can hardly feel, but only hear, the interaction above 1.5 kHz.

**Fig. 13.18** Technical features derived from the dynamic measurement

• Ratio of peak value to root mean squared value of the acceleration in the frequency domain, *p*2*r*: This feature roughly describes the shape of the amplitude spectrum of the acceleration and is commonly used as a feature in acoustic measurement.
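
A minimal sketch of the *p*2*r* feature, assuming a band limit of 1.5 kHz and synthetic snap signals; the exact windowing and normalization of the production algorithm are not published, so this is only an illustration of the idea:

```python
import numpy as np

def peak_to_rms_ratio(acc, fs, f_max=1500.0):
    """Ratio of the spectral peak to the RMS of the acceleration amplitude
    spectrum, restricted to the perception-relevant band 0..f_max Hz."""
    spec = np.abs(np.fft.rfft(acc))
    freqs = np.fft.rfftfreq(len(acc), d=1.0 / fs)
    band = spec[freqs <= f_max]
    return float(np.max(band) / np.sqrt(np.mean(band**2)))

# A dull snap (slowly decaying burst) vs. a sharp one (fast decay)
fs = 10000.0
t = np.arange(0, 0.05, 1.0 / fs)
sharp = np.exp(-t / 0.002) * np.sin(2 * np.pi * 300 * t)
dull  = np.exp(-t / 0.02)  * np.sin(2 * np.pi * 300 * t)
print(peak_to_rms_ratio(sharp, fs), peak_to_rms_ratio(dull, fs))
```

A slowly decaying snap concentrates its energy in few spectral bins and thus yields a higher *p*2*r* than a sharp, broadband one; the ratio therefore captures the "shape" of the spectrum in a single number.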

Based on the evaluation of a large number of touchpads of the 2014 Mercedes-Benz C-Class by a group of experts, correlations between these features and human perception were derived. For example, a "nice feeling" of a push button or a touchpad requires large values of *amax*,*min* and a small *Td* at the same time. Additionally, the acceleration should be distributed in the range around 300 Hz. This preliminary knowledge was used to improve the haptic characteristics of touchpads so that the haptic feedback does not feel artificial, but as sharp and clear as a mechanical push button. Besides these subjective perceptual results, the intensive use in end-of-line measurement confirmed the robustness of the measurement system.

#### **Examples of IDM Devices**

In addition to the initial system, several EOL testing devices were developed for series production. The measurement systems show a comparatively high test speed, and the concept has proven to be mechanically robust. The measuring equipment capability of the IDM is at a very good level, i.e. the repeatability of measurements within one measurement system is very good. However, the replacement of measurement fingers and the comparability of results between different measurement systems cause some difficulties. The nonlinear mathematical calculation of the features and the tolerances it depends on limit the possibility of a mathematical correction of the values. Deviations of the finger impedances therefore have a significant influence on the feature values, so that a comparison across high production volumes is not sufficiently valid. For this reason, an adjustment and evaluation of the finger impedance against standard values is necessary to achieve comparability between the measurements of different systems.

The EOL test bench of the EQS' Hyperscreen is shown in Sect. 14.1. Besides active haptic systems, Mercedes-Benz also tests classic mechanical push buttons via IDM.

#### **Off-the-Shelf Systems**

The Force Feedback module from *PANOVO tec GmbH* [53] covers the entire range from development to EOL testing and is already in its second, more compact generation. The measuring finger is adjusted before delivery and calibrated automatically during operation by evaluating the finger impedance, which achieves sufficient comparability of the results. An acoustically optimized drive and a stabilization of the finger against lateral forces during positioning, by means of an adjustable preload, enable the device for robot-based end-of-line testing. Figure 13.19 shows the test finger F121 used for laboratory operation as well as for automated EOL testing. Customization of the algorithms of the evaluation software is possible as well.

*Grewus GmbH* [53], a supplier of acoustic and haptic actuators, presented a cost-effective system for dynamic haptic testing called ArFi at the "automotive interiors expo 2021" fair. Its focus lies on comparative measurements during the development phase in the laboratory. Since no special adjustment or calibration is performed, results can be compared only within a single device. It displays time- and frequency-domain behavior as well as an intensity value called GHIV (Grewus Haptic Intensity Value). Placing a second accelerometer at any position on the component allows disturbing vibrations to be identified. A microphone allows the sound and noise of the component to be recorded synchronously.

The *Syntouch* finger [1] focuses on surface haptic properties. Nevertheless, it could also be an interesting approach for the evaluation of haptic events, although rubber may not be the ideal material for long-term end-of-line measuring equipment capability.

**Fig. 13.19** Panovotecs' IDM probe F121

## *13.5.2 Discussion and Outlook*

Any engineering work focuses on designing a product. A successful industrialization of haptic actuation technology in any industrial context, however, always requires two things:


Both items are not to be underestimated. A successful product is nothing without a corresponding quality control. Especially in the area of haptic technology, this is something that always needs to be developed according to the application at hand. The Mercedes C-Class touchpad is an excellent example of this conjunction of research and development, extending the real product range of active haptic devices available on the market.

## **13.6 Conclusion**

As can be seen from this section, the evaluation of haptic systems is complex and exhibits a large number of different facets. For each newly developed task-specific haptic device, evaluation methods and goals have to be selected from the above-mentioned (and other applicable) measures. Depending on the kind of application, existing studies can give hints about the selection of applicable methods. The works of Wildenbeest et al. evaluating teleoperated assembly tasks and the evaluation of an assistive system for minimally invasive surgery by McMahan et al. [38] are recommended here because of their wide scope and thorough methodology for this kind of system. For new kinds of universal haptic interfaces, the work of Samur is naturally a must-read [46]. At the same time, industrial companies are stepping more and more into the area of haptic measurement technology. The future is promising for better and more objective evaluations accessible to a broad range of applicants.

## **Recommended Background Reading**

[21] Hayward, V. & Astley, O. R.: **Performance Measures for Haptic Interfaces**. In: Robotics Research, 1996. *Extensive list of possible physical measures for the evaluation of haptic interfaces.*

[46] E. Samur: **Performance Metrics for Haptic Interfaces**. Springer, 2012. *Probably the most advanced work on evaluation techniques for haptic interfaces, with a strong focus on interaction with virtual environments.*

## **References**



61. Zhou W et al (2014) Interaction-based dynamic measurement of haptic characteristics of control elements. In: Auvray M, Duriez C (eds) Haptics: neuroscience, devices, modeling, and applications. Springer, Berlin, Heidelberg, pp 177–184. ISBN: 978-3-662-44193-0

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 14 Examples of Haptic System Development**

**Alireza Abbasimoshaei, Thorsten Meiss, Nataliya Koev, and Jörg Reisinger**

**Abstract** In this section, several examples of task-specific haptic systems are given. They shall give an insight into the process of defining haptic interactions for a given purpose and illustrate the development and evaluation process outlined in this book so far. The examples were chosen by the editors to cover different basic system structures.

Section 14.1—*User Interface for Automotive Applications* presents the development of a haptic interface for a new kind of user interaction in a car. It incorporates touch input and is able to simulate different key characteristics for intuitive haptic feedback. Sect. 14.2—*HapCath* describes a comanipulation system to provide additional haptic feedback in cardiovascular interventions. The feedback is intended to reduce exposure for both patient and physician and to permit new kinds of diagnosis during an intervention. Sect. 14.3—*FingHap—Haptic Finger Rehabilitation Device* presents a finger rehabilitation system with feedback on the fingers. It moves the fingers along their normal trajectory and adapts the assisting torque according to the patient's improvement.

A. Abbasimoshaei (B)
Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany e-mail: al.abbasimoshaei@tuhh.de

T. Meiss
Applied Materials GmbH & Co. KG, Siemensstraße 100, 63755 Alzenau, Germany e-mail: t.meiss@hapticdevices.eu

N. Koev
TU-Darmstadt, Merckstr. 25, 64283 Darmstadt, Germany e-mail: n.koev@hapticdevices.eu

J. Reisinger
Mercedes-Benz AG, 71059 Sindelfingen, Germany e-mail: joerg.reisinger@mercedes-benz.com

## **14.1 Touch Input Devices**

Jörg Reisinger

*Mercedes-Benz AG, Sindelfingen, Germany* {*joerg.reisinger@mercedes-benz.com*}

Ingo Zoller, Peter Lotz
*Continental Automotive GmbH, Babenhausen, Germany* {*ingo.zoller@continental-corporation.com; peter.lotz@continental-corporation.com*}

Dongkill Yu
*LG Electronics* {*dongkill.yu@lge.com*}

Since the last edition, touch input devices have taken control of the vehicle interior. Familiar button panels have almost disappeared from the vehicle, reduced to individual buttons. Rotary push controls, which took over the leading role in the advanced interaction concepts of the 2000s, have turned out to be an interim solution, typically offered as a secondary option for traditional customers in the meantime.

The change to touch input devices promises unexpected flexibility: no matter whether touchpad or touchscreen, content is no longer static and is especially intuitive to use on a touchscreen.

As already achieved with, e.g., central control elements, the advantages of a drastic reduction from many control elements to just a few lie in the clearly structured integration of the many new functions, as well as in cost. Further cost advantages arise when reducing a separate display and input device to a single touchscreen. Actually an ideal solution; however, a major disadvantage is the complete visual focus caused by eliminating any other feedback. Acoustic cues as well as the haptic feel of the control element are no longer available to the customer.

From a technological point of view, acoustic feedback generated by possibly already existing or additional audio systems is a comparably simple and well-known possibility. In contrast, adding haptic feedback means integrating additional, completely new technologies into the overall system, with many new and varying boundary conditions. It is therefore not surprising that, due to the cost and availability of ready-to-use systems, high-quality active haptic feedback is currently only available in the exclusive segment of (automotive) touch input devices.

This chapter provides an overview and an insight into the technological differences of touch-operated haptic devices in the premium automotive segment. The systems currently in use all work with vibro-tactile technologies, i.e. the surface vibrates in the haptically perceivable frequency range.

The technical framework conditions and changing requirements—primarily driven by the design and size of the components—result in different concepts that manifest significant technological progress.

Humans are multimodally perceiving beings, so that apart from haptics, topics like design (a visual aspect) and noise (an auditory aspect) have to be considered. Technical possibilities and limits must not be underestimated.

**Fig. 14.1** Applied automotive haptic device—touchpad in the Mercedes-Benz C-Class, Year 2014

The previous edition of the book describes the haptic touchpad introduced in the 2013 C-Class (Fig. 14.1) in detail, as well as following ones. Since then, there has been a strong evolution. While haptic touchpads dominated at that time, Audi, followed by the Porsche Taycan, introduced the first haptic touchscreens in 2017 as a sidekick concept. A new haptic feedback concept introduced in 2020 in the Mercedes-Benz S-Class (Fig. 14.2) and its all-electric sibling EQS achieves new sizes and qualities. We will take a look into this application as well. Please note that this whole chapter results from experience gained during product development and contains many considerations that are not always scientifically worked out in detail.

## *14.1.1 Direction of the Stimulation*

In the 2000s, several studies were conducted on rapidly adapting (RA) mechanoreceptors. While, for example, [3] dealt in 2005 with the frequency response of the Pacinian corpuscles, [11] showed in 2010 that 3-dimensional stimuli can be reduced to a single vibration direction without significant loss of information. This showed that haptic feedback does not have to take place in the direction normal to the skin surface of a finger, but can take place in any direction. [28] presented a haptic feedback device producing haptic clicks by lateral movements of the surface. In general, this opened up new approaches and further technical possibilities for designing haptic devices like touchpads or displays.

**Fig. 14.2** Haptic touchscreen of the 2020 Mercedes-Benz S-Class

#### **14.1.1.1 Sidekick Versus Normal Actuation**

As a result, due to certain technical advantages of this principle, the first products appeared on the market. However, a tendency emerged to claim that the sidekick concept is "mandatory" for haptic feedback, which is quite doubtful. To be clear, this new opportunity offers a new technical approach to how the surface can be stimulated without negatively influencing perception, but it is not the only valid solution. In any case, this insight greatly helped the breakthrough of haptic feedback in the automobile; it may even have made it possible.

Overall, there are six spatial directions suitable for technically creating a vibrotactile haptic feedback. Two of them are in normal direction to the surface, four of them in lateral direction, as indicated in Fig. 14.3; we will now have a closer look at advantages and disadvantages of each.

#### **14.1.1.2 Advantages of the Sidekick**

The lateral movement of the surface, as sketched in Fig. 14.4, offers several advantages. The high rigidity of the surface plate in the excitation direction moves each surface point nearly identically while activating almost no surface vibration modes, resulting in mainly uniform haptic feedback. This helps uniformity from a haptic point of view as well as acoustically.

**Fig. 14.3** Principles of lateral and normal displacement of the surface to stimulate haptic feedback

Furthermore, the actuator's acceleration direction and the user's operating direction are separated. This has the advantage that the haptic impulse affects the force measurement only minimally, improving the quality and reliability of the force signal during a haptic event. In terms of installation space, too, the concept initially offers advantages. For example, there is no need to distribute actuators across the surface or to build complex mechanisms to generate uniform behavior there. However, this is limited by the overall size of the actuated area, because increasing the area requires further structures to make the surface stiff enough to transfer the impulse (Fig. 14.4).

#### **14.1.1.3 Sidekicks' Disadvantage**

Overall, this principle seems to be a perfect solution—and already works very well in several applications—but there are also disadvantages that we need to consider.

Depending on the product design, the gap surrounding the active area requires additional space for the lateral movement of the surface, in addition to other technically required tolerances. Consequently, it has a direct influence on the design concept of the product, can contradict its philosophy and may require creative workarounds.

The transfer of forces from the surface to the skin via shear forces leads to another point: the vibration motions require friction between finger and surface to be transferred to the fingertip. Thus, a surface with a low friction coefficient can reduce the perception of the haptic stimulus. Low-friction contact surfaces are therefore counterproductive with regard to haptic intensity; for surface-normal vibration this is of minor importance due to the form closure in the direction of the effect.

What is initially advantageous in the technical design can become a disadvantage when multi-finger feedback is required, for example when several users operate the device simultaneously. The movement of the entire surface is yet another technical aspect: the system must always move the entire mass of the surface.

The mass increases disproportionately: as the size increases, more and more stiffening measures have to be taken to maintain the surface's bending stiffness (area moment of inertia), which leads to a large increase in mass.

The actio-et-reactio principle (Newton's third law) inevitably leads to another topic: generating the impulse on the surface (= actio) induces the reactio in the housing of the component and beyond, into the dashboard. It is easy to imagine that this impulse at the dashboard, comparable to the body of a musical instrument, produces strong acoustic amplification that can differ greatly from the general quality expectations. To partially cancel the impulse mechanically, so-called "Tilger" or absorbers move against the initial acceleration. Unfortunately, an absorber additionally increases the moving mass, so the size or mass of the active surface limits this approach as well. It is fair to say that this effect also occurs in systems accelerated in the normal direction, as long as they use quasi-static surface vibrations, i.e. the entire surface is designed to move uniformly up and down.
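
The momentum balance behind the absorber idea can be illustrated with a back-of-the-envelope calculation; all numbers are assumed purely for illustration:

```python
# Momentum balance for impulse cancellation (illustrative, assumed values):
# the housing reaction follows from the sum of the moving parts' inertia forces.
m_surface = 0.5     # kg, moving surface mass (assumed)
a_surface = 20.0    # m/s^2, surface acceleration during the haptic pulse (assumed)
m_absorber = 0.1    # kg, added absorber mass (assumed)

# For full cancellation of the reaction, the absorber's inertia force must
# oppose the surface's: m_absorber * a_absorber = -m_surface * a_surface
a_absorber = -m_surface * a_surface / m_absorber
print(a_absorber)  # -100.0 m/s^2: 5x the surface acceleration, opposite direction
```

The lighter the absorber relative to the surface, the harder it must be driven, which is one reason why growing surface mass limits this approach.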

#### **14.1.1.4 Sidekick Application**

Small lightweight surfaces such as the Audi touchpads or, outside the automotive market, the trackpads inside the Apple MacBook since 2015 [6] work really well. However, increasing size and mass limits the areas of application. The Audi [26] and Porsche displays [5], as in Fig. 14.5, both use the sidekick principle and accelerate a mass of several kilograms. The division into two separate displays may be due to this reason.

## *14.1.2 Lateral Differences of Excitation Directions*

The lateral deflection of the surface can occur in different directions relative to the finger posture, as shown in Fig. 14.6. The mechanical impedance of the human

**Fig. 14.5** Audi's touchscreen introduced in the 2017 A8, using sidekick haptic feedback [26]

finger varies noticeably with respect to the load direction, so we describe a few observations and thoughts in the following.

While a pulse to the right or left behaves comparably due to the almost symmetrical finger impedance in those directions, the forward and backward directions differ strongly. A rocker switch, as shown in Fig. 14.7, can demonstrate this influence. When operating in both directions, "forward" and "backward" feel significantly different. In principle, the reason could lie in the component itself. However, if the component is turned around so that both directions are swapped, the previously experienced perception does not change and remains oriented to the finger direction; there seems to be no component alignment. Obviously, a difference in directional finger stiffness between push and pull is responsible for this difference, which is why we assume that the finger direction influences the perception of the effect. Therefore, in the following we discuss how the direction of active surface excitation may influence the coupled system's vibrational properties.

#### **14.1.2.1 Coupling Between Finger and Surface**

The normal movement of the surface of the C-Class (205) touchpad introduced in 2013 moves the surface "downwards" or "away" from the finger (Fig. 14.8). Due to the slightly moving surface and the primary focus on push haptic feedback, it behaves like a typical micro switch with a snap disk underneath: the surface jumps down when a certain force level is reached. There are two relevant acceleration maxima of the vibrational event, the "breaking away" and the "hitting the ground", which influence the haptic intensity of the feedback. Typically, the customers did not identify it as

**Fig. 14.7** Dynamic switch C-Class

an artificial haptic system. Rather, they qualitatively perceived it as mechanical micro-switch-based feedback. The "wow" effect is great when customers realize this, for example when the snap, i.e. the push feedback, is deactivated upon placing the palm on the touch device while operating the rotary controller.

Looking into technical details, Fig. 14.9 shows the solenoid actuator of the floating touchpad of the C-Class 2013. Its slim shape is perfectly suited to the cobra-like touchpad floating above the rotary controller. Integrating the coil into the PCB makes it flat across the whole floating area, and the spread actuator pulls the steel armature downward uniformly to prevent inhomogeneous behavior. A parallelogram solid-state guide supports the vertical movement of the actuator without tilting towards the PCB, while of course keeping the lateral position. An optical sensor detects the displacement of the actuator, which is used for sensing the force.

With increasing mass and the introduction of search haptic features, i.e. feedback appearing when sliding the finger laterally across the surface, it may be worth considering

**Fig. 14.8** Movement "downward"/"away from" the finger. The initial peak decouples from the finger; the following impact peak couples into the finger

also the direction of actuation, due to the efficiency of the effect transfer to the finger. The direction "against", as shown in Fig. 14.10, may improve the mechanical coupling by initially moving the surface against the finger, increasing the coupling and supporting the first impulse. As an example, the 2018 A-Class touchpad, whose solenoid actuator works "against" the finger, is described in the following.

**Fig. 14.10** Movement of the surface "against" the finger. The initial pulse is directed against the finger, creating maximum coupling

## *14.1.3 Increasing the Mass—Touchpad as a Sculptural Element*

New design requirements and UI concepts changed the boundary conditions of the previous haptic concept, resulting in the sculptural touch-only touchpad presented in the 2018 A-Class, as shown in Fig. 14.11.

**Fig. 14.11** The 2018 A-Class' touchpad is "kicking against" the finger

#### **14.1.3.1 Frequency Behavior**

In contrast to the touchpads introduced so far, the new generation moves the entire sculpture, greatly increasing the moving mass: it is about ten times heavier than the floating touchpad or other touchpads on the market.

Equation 14.1 shows the influence of mass and stiffness on the natural frequency: increasing the mass decreases the natural frequency of the system accordingly; in consequence, frequencies above the natural frequency can no longer be stimulated as efficiently. The frequency-dependent sensitivity of the mechanoreceptors (Sect. 2.1.1), however, requires a stimulus in a specific frequency range when designing the system.

$$
\omega = \sqrt{\frac{c}{m}}\tag{14.1}
$$

ω: natural frequency, *c*: stiffness, *m*: mass

Increasing the system stiffness is one option to compensate the increased mass and keep the natural frequency, which in turn has further effects. First (compare IDM in Chap. 13), it reduces the influence and variance of the human finger impedance, since the control element becomes the stiffer partner in the coupled impedance and thus gains dominance (compare Eq. 14.2). In principle, this makes the system more robust against variations of the finger impedance.

$$c\_{\text{coupled}} = \frac{c\_1 \cdot c\_2}{c\_1 + c\_2} \tag{14.2}$$

*c*coupled: coupled stiffness, *c*1, *c*2: stiffnesses of the single components

In addition (compare Eq. 14.3), both the higher mass and the higher stiffness increase the energy required to stimulate the surface haptically correctly.

$$W = \frac{1}{2}m\omega^2 A^2 = \frac{1}{2}cA^2\tag{14.3}$$

Total energy of a vibrating mechanical system, with *W*: Energy, *A*: oscillation amplitude, ω: natural frequency, *c*: stiffness, *m*: mass.
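The scaling behind Eqs. 14.1–14.3 can be checked with a few lines. The following is a minimal sketch; all numeric values (masses, stiffnesses, amplitude) are illustrative assumptions, not measured data from the actual touchpads:

```python
import math

def natural_frequency(c, m):
    """Angular natural frequency (Eq. 14.1): omega = sqrt(c/m)."""
    return math.sqrt(c / m)

def coupled_stiffness(c1, c2):
    """Coupled stiffness of two elastic elements in series (Eq. 14.2)."""
    return c1 * c2 / (c1 + c2)

def vibration_energy(c, A):
    """Total energy of the vibrating system (Eq. 14.3): W = 1/2 * c * A^2."""
    return 0.5 * c * A ** 2

# Illustrative values: a light floating touchpad vs. a 10x heavier sculpture
m1, c1 = 0.05, 4.0e5      # 50 g moving mass, stiffness in N/m (assumptions)
m2 = 10 * m1              # sculptural touchpad: ten times the moving mass
c2 = 10 * c1              # stiffness raised by the same factor ...

print(natural_frequency(c1, m1))   # rad/s
print(natural_frequency(c2, m2))   # ... keeps the natural frequency identical

# Price to pay (Eq. 14.3): the same amplitude now costs ten times the energy
A = 50e-6                 # 50 um oscillation amplitude (assumption)
print(vibration_energy(c2, A) / vibration_energy(c1, A))   # -> 10.0

# Eq. 14.2: two stiffnesses in series; equal values halve the coupled result
print(coupled_stiffness(2.0, 2.0))                          # -> 1.0
```

The compensation of mass by stiffness keeps the frequency but multiplies the energy demand, which is exactly the trade-off discussed above.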

Therefore, the actuator generally needs more power to maintain the momentum across the relevant frequency range, as the simulation results in Fig. 14.12 show. The figure shows the frequency response of the coupled system consisting of control element and finger; in the example, the impedance of the finger is varied. The vibration amplitude around 400 Hz is strongly reduced by the increasing impedance, which is why the energy in this frequency range needs to be increased to reach the same level. However, this higher energy must then be controlled and minimized in order to avoid negative effects.

#### **14.1.3.2 Reduction of System Noise**

The mechanical haptic impulse naturally follows Newton's third law, actio et reactio. This means that the haptic impulse in the surface leads to a more or less powerful reaction impulse in the environment, i.e. the center console or dashboard of the vehicle. Amplified there, as in a guitar body, it may lead to disturbing noises that are unacceptable and need to be reduced or compensated. A simple reduction of intensity, however, is counterproductive, as the haptic feedback can become too weak for interaction. Still, there are approaches to reduce the negative effects of this disturbing external impulse when it cannot be excluded fundamentally by construction; they are briefly discussed in the following.

#### **14.1.3.3 Increasing the Signal Vibration Duration**

The simplest method of energy minimization is a reduction in amplitude combined with a longer continuous vibration, i.e. a lower damping constant. This is effectively a regression to the vibration effect known from early mobile phones. Such an effect does not feel crisp and high quality, but rather like the buzzing of a bee, which has the character of a warning signal rather than of pleasant and valuable feedback. [23] describes this difference, and Fig. 14.13 compares a strongly damped, short, crisp effect with the vibration-like effect.

Despite the disadvantages in haptic quality and haptic "meaning", this approach offers low-cost components, small size, and a low short-term mechanical energy demand, since stretching the energy consumption over time reduces the required magnitude. It does not, however, guarantee the absence of noise in the environment. An active haptic system with an eccentric rotating mass motor (ERM) is currently used in the VW ID.3 steering wheel switch.

**Fig. 14.13** Comparison of short and extended vibrotactile haptic feedback

#### **14.1.3.4 Mechanical Impulse compensation**

To reduce the impulse into the environment, mechanical solutions for its cancellation can be used (so-called absorbers). A combination of mechanical frequency filters, consisting of specific impedances, and a mass moving against the actuated surface in the background of the system can help to reduce the reaction impulse. However, this leads to an additional increase in mass, which lowers the natural frequency and adds to the energy requirements. The implementation therefore needs a very thorough theoretical design.

#### **14.1.3.5 Leaving Out Disturbing Frequencies**

Inspired by the idea of frequency- or input-shaping as described in [7], a simplified approach constructs the driving signal as a Fourier series, i.e. a sum of damped sine waves with different frequencies and damping properties. This series deliberately omits the noise-producing, disturbing resonance frequencies of the adjoining resonant bodies as well as their (sub-)harmonics.

The frequency-/input-shaping approach itself, as described in [7], seems more efficient but also more difficult. It is optimized with regard to energy efficiency, since it uses the full signal bandwidth and removes only the interfering frequencies. Both approaches thus follow fundamentally the same principle of suppressing the disturbing noise frequencies. It is unfavorable, however, if the interfering frequencies come very close to the frequencies required for good haptic stimulation of the mechanoreceptors, as described in Sect. 2.1.1: removing frequencies in these ranges reduces the energy and thus the efficiency of the stimulus. In general, this is an approach to optimize the feed-forward signal by frequency-selective signal design.
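The simplified Fourier-series variant can be sketched in a few lines. All frequencies, keep-out margins, and time constants below are illustrative assumptions; in a real system the disturbing resonances come from measurements of the surrounding structure:

```python
import numpy as np

FS = 20_000                       # sample rate in Hz (assumption)
t = np.arange(0, 0.05, 1 / FS)    # 50 ms signal window

def damped_sine(f, tau, t):
    """One damped sine component: exp(-t/tau) * sin(2*pi*f*t)."""
    return np.exp(-t / tau) * np.sin(2 * np.pi * f * t)

# Candidate component frequencies for the haptic click (assumptions)
components = [120, 180, 240, 300, 360, 420]      # Hz
# Known disturbing resonances of the surrounding body (illustrative)
disturbing = [240, 480]                          # Hz
margin = 30                                      # Hz keep-out band

def is_clean(f):
    """Reject components near a resonance or its (sub-)harmonics."""
    for fr in disturbing:
        for k in (0.5, 1, 2, 3):                 # sub- and higher harmonics
            if abs(f - k * fr) < margin:
                return False
    return True

kept = [f for f in components if is_clean(f)]
signal = sum(damped_sine(f, 0.008, t) for f in kept)
print(kept)   # 240 Hz (a resonance) and 120 Hz (a subharmonic) are left out
```

The remaining components still cover the vibrotactile range while the resonances of the "guitar body" are never driven directly.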

#### **14.1.3.6 Active Damping by Frequency Increase**

As so often, there is a conflict between energy efficiency and quality here as well. As mentioned before, a high-quality haptic click signal has a very short duration. Increased mechanical damping can ensure this, but requires higher power to obtain the same signal intensity.

One common damping method uses an opposing impulse derived from measuring the system's response. This requires a measurement to calibrate the system's parameters, mainly the delay for activating the opposing impulse. Disturbances of the system such as drift or thermal deviations are counterproductive, but typically not too significant. An alternative principle, described in [20], uses the previously described Fourier series approach with a frequency shift: each damped frequency component is increased individually to dissipate the energy of that specific frequency. The frequency increase in turn effectively dampens the overall signal, realizing energy-efficient mechanics with low mechanical damping, as applied in the touchpad generation of the 2018 A-Class.

#### **14.1.3.7 Actuator Integration and Handheld Versus Mounted Devices**

In installed haptic devices such as touchpads or automotive devices, the actuator is typically integrated by clamping it between the housing and the actuated surface. Mobile devices, on the other hand, typically use single-side mounted, not clamped actuators, which accelerate a mass as a counterweight to generate the mechanical impulse. However, with few exceptions and against all expectations, the actuators of mobile devices are typically mounted on the housing side instead of the display side, where the perception is generally expected to take place. Linear resonant actuators (LRA) have increasingly established themselves for high-quality haptics, with Apple's Taptic Engine as the most popular example. The very first mobile devices used eccentric rotating mass motors (ERM), generating a vibration-type feedback rather than a precise click; reducing the periodically repeating pulses to a single one requires extra effort. Common to both systems is that the haptic impulse is transferred into the housing of the device. No significant effect is perceptible on the display itself in a mobile device; the feedback is transmitted only via the casing, held in the second hand. This can simply be tested by placing the device on a table and operating it there: the haptic feedback may no longer be perceivable. In conclusion, haptics transmitted via the housing integrates into the overall perception via the second hand.

One exception is Google's Pixel 5, introduced in 2020. Its actuator is installed directly on the back side of the display, directly stimulating the resonant modes of the display's surface.

The permanent magnet of the LRA is the moving part, working as a counter mass for generating the mechanical impulse, but acting in the surface-normal direction. Additionally, it supports the device's audio.

Since the system's noise is primarily caused by the vibrations transmitted into the housing, it is a significant advantage to induce the stimulus into the surface without inducing it into the housing. The reduced noise coupling allows the actuators to be driven more efficiently and more strongly. This finally refutes the argument that the actuators of installed, clamped systems offer more "impulse" due to their stiff setup: their acoustic noise limits and reduces the usable impulse.

#### **14.1.3.8 Solenoid Versus Voicecoil**

Solenoid actuators are comparatively cheap and can generate high dynamic forces. Unfortunately, their unidirectional behavior strongly limits the possibilities; applying negative forces is not possible except via a mechanical spring. This nonlinearity greatly limits the signal- and control-related possibilities for signal playback, and the known workarounds run contrary to the cost advantage. Voice coil actuators are more expensive, primarily because of their permanent magnets. However, their bipolar actuation greatly simplifies the control-related integration, one might even say makes it possible in the first place. With a favorable design of the frequency response and a corresponding integration of its transfer function into the control concept, a variable, high-quality haptic feedback on the surface can be realized.

## *14.1.4 The 2020 S-Class Center Information Display and the EQS' MBUX Hyperscreen*

The 2020 S-Class and the 2021 EQS (BR223 and BR297) represent the upper end of Mercedes-Benz' automotive quality and innovation. Introducing premium displays with OLED technology and 3D instrument cluster displays on a new level, they cannot lack haptic feedback to support the driver's interaction with the system.

Generating haptic feedback on a 12.8″ display, or even the 17″ display of the EQS with a length of more than 1.40 m and, correspondingly, a mass of several kilograms and a surface that does not move as a whole, requires approaches different from the previously described principles.

#### **14.1.4.1 The Haptic Concept**

In particular, shaking the entire surface with an adequate dynamic behavior, as discussed before, would trigger an enormous momentum, including disturbing noise. Additionally, two users should be able to interact simultaneously on the surface, with each of them receiving separate haptic feedback.

The Time Reversal Method (TRM), as described in [2], uses several actuators to create a specific pulse very precisely at exactly one position. Perhaps the approach is "too perfect": its disadvantage is a delay of several hundredths of a second, which conflicts with the user interface requirement of a maximum overall system response time.

Another possibility is based on the generation of bending waves, exploiting the natural resonances of a surface. NXT developed and patented this method as the Distributed Mode Loudspeaker (DML) to realize audio playback through large surfaces. The continuation of NXT under the firm Redux later also generated haptic feedback on surfaces, today found in Google's Pixel 5.

#### **14.1.4.2 Use of Inverted Transfer Functions**

The S-Class' approach differs from the previous ones: it uses the inverted transfer functions between each transducer and the finger position. Convolving these inverted transfer functions with the defined vibrotactile broadband target signal to be applied at the fingertip generates a set of actuator signals for each finger position and each actuator. Applying these actuator signals to the real physical hardware turns them back into the specific target signal. At the finger position, the individual signals of the actuators overlap and combine into the previously defined vibrotactile broadband target signal, with higher intensity at that specific point.

Figure 14.14 describes the control concept of one single signal path. On the left, the source, called target signal *F_S(s)*, is the signal to be played at the fingertip, i.e. it shall be equal to the signal at the target point: *F_TP(s)* = *F_S(s)*. The signal path is measured by frequency sweep, impulse response or step response. It is important to apply the correct mechanical impedance, which requires including the finger's impedance. Comparable to the IDM measurement described in Chap. 13, the impedances influence the required transfer functions. Figure 14.15 shows a dynamic measurement system used for determining the transfer functions as well as for evaluating the haptic feedback of the system in the lab and at end-of-line.

On this basis, the calculated actuator signal *A(s)* stimulates the surface in such a way that the real physical transfer path *G_ph(s)* = *G(s)* turns the signal back into the original target signal *F_TP(s)* = *F_S(s)*. Of course, there are deviations due to the quality of the measurement and the calculation accuracy as well as environmental and production tolerances, so *G_ph(s)* ≈ *G(s)* is the correct notation. Tests showed, however, that no further compensation mechanisms are necessary to run the system properly, which is why *F_TP(s)* ≈ *F_S(s)* can be regarded as sufficient for the purpose. This transfer function considers the whole transfer path: from the digital signal definition, via the entire electronics including the actuator, to the surface impedance and, of course, the finger impedance.

Now, convolving each target signal *F_S(s)* with the inverse transfer function *G⁻¹(s)* towards each of the actuators yields the required set of actuator signals *A(s)*. Playing the associated sets of actuator signals together superimposes them into a strong


**Fig. 14.14** Control concept of one single signal path

**Fig. 14.15** Dynamic haptic measurement on the EQS' MBUX Hyperscreen. Used for evaluation, end-of-line testing and for determining haptic transfer functions

and intense haptic signal at the specific finger position. Of course, the process has to be executed for each single target signal.
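The core idea, pre-filtering the target signal with the inverted path so that the physical path restores it at the fingertip, can be sketched with a regularized FFT inverse. The path *G* below is a stand-in FIR system, not measured data, and the target click is an invented example:

```python
import numpy as np

FS = 20_000
n = 1024
t = np.arange(n) / FS

# Target signal F_S at the fingertip: a short damped click (assumption)
f_s = np.exp(-t / 0.004) * np.sin(2 * np.pi * 250 * t)

# Stand-in for the measured path G (actuator -> surface -> finger):
# a simple FIR echo; real systems use swept-sine or impulse measurements.
g = np.zeros(n)
g[0], g[40], g[90] = 1.0, 0.4, 0.15

G = np.fft.rfft(g)
eps = 1e-6                               # regularization against division by ~0
A = np.fft.rfft(f_s) * np.conj(G) / (np.abs(G) ** 2 + eps)   # A = F_S / G
a = np.fft.irfft(A, n)                   # actuator drive signal

# "Playing" a through the physical path should recover the target:
f_tp = np.fft.irfft(np.fft.rfft(a) * G, n)
print(np.max(np.abs(f_tp - f_s)))        # small reconstruction error
```

In the real system one such pre-filtered signal exists per actuator, and their physical superposition at the finger position forms the intense localized click.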

Figure 14.16 gives an overview of the control concept. It describes the general control principle, including the identification of the transfer functions between

**Fig. 14.16** Schematic of determining actuator signals. Also showing the acoustic optimization path

each actuator and each position, which are used to generate the required actuator signals from each target signal.

The indices show the dependencies on the number of signals (n), the number of actuators (a), and the number of finger positions (p). The number of actuator signals, for example, is the product of n, a, and p, so it is easy to see that a high number of actuator signals may have to be calculated.

Two different strategies exist for implementing this in practice. The first, the ad-hoc calculation of the actuator signals, requires high computing performance to do all the complex math in time. The second, which is instead more memory intensive, is the pre-calculation of each actuator signal *A(s)_{n,a,p}*. In general, it makes sense to define the size of the finger position areas so as to limit their number. The lateral size of the finger position areas depends on the system properties; a simple general assumption is a diameter of 3–7 cm.
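A quick sizing sketch of the pre-calculation approach shows why the n · a · p product matters. All counts, signal length, sample rate, and sample width below are illustrative assumptions:

```python
# Rough sizing of the pre-calculation approach (all numbers are assumptions)
n_signals = 12          # distinct target signals (click, texture, edge, ...)
n_actuators = 8         # transducers behind the surface
n_positions = 60        # finger position areas after gridding the display

n_actuator_signals = n_signals * n_actuators * n_positions
print(n_actuator_signals)               # 5760 signals to pre-compute

# Memory estimate: 50 ms per signal at 20 kHz, 16-bit samples
samples = int(0.050 * 20_000)
bytes_total = n_actuator_signals * samples * 2
print(bytes_total / 1e6, "MB")          # about 11.5 MB
```

Even with modest assumptions, thousands of signals result, which is why limiting the number of finger position areas is worthwhile.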

#### **14.1.4.3 Haptic-Acoustical Optimization**

Experiencing a control element's feedback is always a multimodal perception. The overall impression contains visual, acoustical, and haptical aspects. Especially the acoustic and the haptic feedback have a physical relation that can easily lead to conflicts. The frequency range of the haptic feedback lies at around 0–1000 Hz and overlaps the acoustic range of around 16–20,000 Hz. This means that the haptic design interferes with the acoustics between 16 and 1000 Hz.

Unfavorably, the range around 1000 Hz covers the most sensitive range of hearing. Although perceptional weighting reduces the relevant haptic range to below 500 Hz, the two ranges still lie close together and somewhat complement each other: the remaining haptic range is still perceived acoustically quite well, so optimizing haptic signals leads to acoustic side effects and vice versa.

**Fig. 14.17** The actuator signal needs to generate the haptic feedback as well as the acoustical perception

The basic principle of the acoustic optimization strategies is comparable to the previously described haptic procedure: measuring the transfer behavior, comparing it with the target signal, and filtering the signal. The transfer path, of course, is a different one than the haptic signal path.

The differences are nevertheless substantial: while haptics focuses on one specific surface area and the remaining areas do not really matter, the acoustic feedback always radiates from the entire surface. This means that all vibrations across the whole surface are part of the acoustic feedback, while only the contact area of the finger affects the perceived haptic feedback, as shown in Fig. 14.17.

Each of the actuators stimulates the overall surface vibration in a different way; therefore, separate acoustic optimizations for each actuator are necessary, adding their different acoustic influences to fit the target acoustics. There is also a difference between the optimization of noise and that of acoustic quality.

Typically, the nonlinear behavior of the surrounding environment is mainly responsible for noise. Its specific resonant frequencies amplify disturbing noise, which can be reduced by dampers, compensation masses, i.e. mechanical strategies, or by influencing

**Fig. 14.18** Acoustic band-selective optimization for each actuator and each target signal. The colors of the bars show the fraction of each actuator in each frequency band. The previously only haptically optimized actuator signals are now adjusted to also optimize the acoustic impact of the feedback signal. Not to be misunderstood: it is not a mix of haptic and acoustic signals, but an adjustment of the actuator signals to obtain an ideal combination of both aspects

the driving signal, e.g. with the Fourier or frequency-shaping strategies described above. The focus lies on reducing the intensity of the disturbing frequencies.

Optimizing the acoustic quality is different: it is more a process of designing and composing the acoustic sound so that it gives clear feedback, is not intrusive, and always expresses quality. Ideally, the active system can generate a linear acoustic behavior, much like a high-quality loudspeaker.

The challenge of optimizing haptic as well as acoustic quality and noise lies in the fact that the same actuator system drives both modalities. This limits the possibilities and the efficiency of the system, because optimizing the acoustic behavior may degrade the haptic quality, just as a strong haptic sensation may lead to a miserable acoustic behavior.

Each actuator can have a different effectiveness in each frequency band due to its individual acoustic and haptic transfer functions. To achieve an efficient optimization, the contribution of each actuator has to be analyzed separately for each required frequency band. Thus, for example, an acoustically less effective actuator can deliver a higher haptic contribution than another one, while the other may primarily be useful for the acoustic optimization. Figure 14.18 shows the scheme of a frequency-specific optimization of all actuator signals to obtain a uniform acoustic feedback combined with a uniform haptic one.
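A toy illustration of such a band-wise trade-off follows. The effectiveness matrices are invented for illustration; in the real system these values come from the measured haptic and acoustic transfer functions, and the actual optimization is considerably more involved than this simple heuristic:

```python
import numpy as np

# Illustrative effectiveness of 3 actuators in 4 frequency bands (assumptions):
# rows = actuators, columns = bands; derived from transfer-function magnitudes.
haptic_eff = np.array([[0.9, 0.7, 0.2, 0.1],
                       [0.3, 0.8, 0.9, 0.4],
                       [0.1, 0.2, 0.6, 0.9]])
acoustic_eff = np.array([[0.2, 0.5, 0.8, 0.9],
                         [0.6, 0.3, 0.4, 0.7],
                         [0.9, 0.8, 0.3, 0.2]])

# Simple heuristic: per band, weight each actuator by its haptic gain per
# unit of radiated acoustics, then normalize so each band's share sums to 1.
ratio = haptic_eff / acoustic_eff
weights = ratio / ratio.sum(axis=0)
print(np.round(weights, 2))   # per-band drive fractions for each actuator
```

The pattern matches the argument above: an actuator that is haptically strong but acoustically quiet in a band receives most of that band's drive.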

The acoustic measurement of the signals shall take place with a coupled finger impedance and should ideally take the user's typical distance to the display into account.

**Fig. 14.19** Acoustic measurement on the EQS' MBUX Hyperscreen, used for evaluation and for determining the acoustic transfer functions. Not shown is the finger operating the device with the correct mechanical finger impedance. The microphone (red circle) is placed at a typical user distance

Figure 14.19 shows this measurement, here without a mechanical finger impedance. Typically, the optimization process is performed in cycles. Figure 14.20 shows the optimization progress across measured finger positions based on a similarity index, which expresses the similarity of the signals with respect to the mean value of the acoustic parameters.

**Fig. 14.20** Similarity index before (left) and after (right) acoustical optimization, describing the relation between local and average values

#### **14.1.4.4 Type and Position of Actuators**

Which kind of actuator to use does not depend on the electromechanical principle alone. The choice between solenoid, voice coil, and piezo actuators primarily depends on their mechanical properties such as frequency range, force/travel intensity, efficiency, and thermal stability. Several actuators of each type fit the vibrotactile frequency range, so ultimately installation space and cost efficiency become the main drivers of this decision.

The physical efficiency of an actuator is determined by the coupling of the required frequencies into the active surface. Besides the mechanical coupling, which is typically realized by gluing, the dynamic behavior of the system is important: each surface has its own specific vibrational modes, as shown for example in Fig. 14.21, where the frequency and shape of the modes are determined for each setup via FEM simulation. To optimize the efficiency of the dynamic behavior, we take a closer look at the basics.

Even though the whole frequency bandwidth of the signal is used, and not only the single modes of the surface, it is a good approach to use the modes' information for optimizing the positions of the actuators. Figure 14.22 shows, in simplified form, three different actuator positions A, B, and C as well as the surface-specific eigenmodes 0, 1, and 2. Actuators placed at vibrational nodes cannot excite the corresponding mode: an actuator at position A cannot excite modes 1 or 2, and position C cannot excite mode 2. Position A, in turn, is an ideal driving position for mode 0, because it is located at the point of maximum oscillation. Position C seems efficient for mode 1 but drives mode 0 only moderately. Position B shows a strong excitation of every mode, which is why it would be the preferred

**Fig. 14.21** FEM Simulation of four vibrational modes of the Mercedes-Benz EQS' MBUX Hyperscreen. Red color shows the maximum amplitude, blue the nodal lines

**Fig. 14.22** Simplified illustration of vibrational modes 0–2 of a surface and 3 specific positions A–C for actuators

position. In practice, unfortunately, not every actuator can excite every mode, but an optimization as explained makes sense in any case to achieve maximum efficiency.
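The node argument can be made concrete with idealized sinusoidal mode shapes of a simply supported beam; this is a textbook idealization chosen for illustration, while the real plate modes of Fig. 14.21 require FEM simulation:

```python
import numpy as np

def mode_shape(k, x):
    """Idealized shape of bending mode k (k = 0, 1, 2, ...) of a simply
    supported beam of unit length: sin((k+1)*pi*x)."""
    return np.sin((k + 1) * np.pi * x)

def drive_efficiency(x):
    """How strongly an actuator at relative position x excites modes 0-2:
    proportional to the mode-shape amplitude at the driving point."""
    return [abs(mode_shape(k, x)) for k in range(3)]

print([round(v, 2) for v in drive_efficiency(0.50)])   # node of mode 1
print([round(v, 2) for v in drive_efficiency(1 / 3)])  # node of mode 2
print([round(v, 2) for v in drive_efficiency(0.40)])   # no node: drives all
```

An actuator sitting exactly on a node of a mode drives that mode with zero efficiency, which is the reason for the placement rules discussed above.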

## *14.1.5 Controlling the Haptic Feedback*

Physically generating a good haptic sensation is one thing; controlling the feedback so that it appears at the right time, at the right place, and in the right sequence is just as crucial. This holds especially when complex combinations of different haptic building blocks are applied in a small space, e.g. when search-haptic effects like edge effects and textures shall appear instantly, or several push feedbacks shall follow each other within a short time.

The interpretation of finger positions and forces is a key feature, followed by performant activation and effect collision handling, i.e. prioritizing or mixing different haptic effects. Even with permanently and dynamically changing screen layouts,

**Fig. 14.23** Interaction process including the activation of haptic feedback

the generation and interpretation of haptic definitions can reach a high level of complexity. Figure 14.23 describes an interaction process considering the basic mechanisms of feedback control. First, the user receives information via the visual screen content and decides to perform some finger interaction. Receiving this finger interaction, e.g. sensing touch and force values, the system interprets the user's input. It recognizes, for example, instructions that lead to an approval in the form of a haptic feedback, and it may need to change the state of the user interface, for example changing the screen content, updating the haptic definition, and possibly activating a specific function. If several feedback effects collide, the system needs to be able to prioritize and, where appropriate, mix effects, for example a texture and a click effect.
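A minimal sketch of such collision handling follows. The priority scheme and the `mixable` flag are assumptions for illustration, not the production logic of the system described here:

```python
from dataclasses import dataclass

@dataclass
class HapticEffect:
    name: str
    priority: int            # higher wins on collision
    mixable: bool = False    # textures may mix; clicks usually do not

def resolve(active: list) -> list:
    """Hypothetical collision handling: keep the highest-priority effect and
    any lower-priority effects that are flagged as mixable."""
    if not active:
        return []
    top = max(active, key=lambda e: e.priority)
    return [top] + [e for e in active if e is not top and e.mixable]

# A click arriving while a texture is playing: mix both, click dominates
texture = HapticEffect("texture", priority=1, mixable=True)
click = HapticEffect("click", priority=5)
print([e.name for e in resolve([texture, click])])   # ['click', 'texture']
```

Real systems additionally have to schedule the mixed effects in time and respect actuator limits, but the prioritize-or-mix decision shown here is the core of collision handling.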

For the user to perceive the whole user interface as helpful and of high quality, it requires, besides the physical quality of the feedback, a uniform integration of haptic concepts as well as powerful concepts that enrich the haptic experience by combining different effects into a new overall impression. This may lift haptics to the next level.

## **14.2 HapCath—Haptic Catheter**

Thorsten Meiss, Nataliya Koev, Thomas Opitz, Tim Rossner, and Roland Werthschützky

## *14.2.1 Introduction*

Catheterization is a medical procedure used for the diagnostic and therapeutic treatment of blood vessels. For example, arteries of the heart suffer from atherosclerotic depositions, which diminish the blood flow and, as a consequence, result in heart pain, heart attack, and heart failure. In the US, diagnostic and interventional catheterizations of the heart were performed approx. 1.5 million times in 2014 (database 2021, diagnostic and therapeutic counted separately [27]). In many cases, catheterization is a simple process for well-trained cardiologists: a guide wire is inserted into an artery, usually the femoral artery at the upper leg, and is slid towards the heart. By rotating the proximal end of the wire (the end in the physician's hand), the physician leads the tip at the wire's distal end into the coronary arteries. To visually control the guide wire movement, short-time 2D X-ray video is used. By sliding a catheter over the guide wire, the physician can lead contrast fluid into the vessels to visualize the course of the arteries for diagnostic purposes. Through this hollow catheter, the physician can lead tools to the upper branches of the coronary vessels or change and reposition the guide wire very quickly. To reopen totally closed or occluded vessels, the physician can thread a balloon catheter over the proximal end and slide it over the guide wire, through the catheter, into the occluded part of the vessel. The affected vessel can then be widened by inflating a balloon (dilatation) and, optionally, a stent can be expanded to prevent the vessel from contracting again.

In many cases, however, the vessel is totally closed, and penetrating the occlusion with the guide wire tip becomes very difficult. Additionally, navigating the wire through calcified and contorted vessels often turns into a challenging task. The flexibility of the guide wire has to be adapted either to follow contorted vessels or to penetrate occlusions; therefore, the wire has to be exchanged during the intervention. The risk of perforating the vessels increases, and the intervention becomes time consuming and sometimes even impossible.

One reason for prolonged catheterization times is the limited feedback from the guide wire. Due to the small diameter and the necessarily low stiffness of the wire, the physician cannot feel the forces at the guide wire tip; only 2D X-ray imaging, linked with the legal limitation of the amount of noxious contrast fluid, is available, which requires well-trained operators and a long training phase for new cardiologists. It takes experience to match the limited visual feedback with the real motion of the guide wire. To overcome these challenges, which originate from a lack of intuitively usable information from the guide wire's tip, the HapCath system provides haptic feedback of the forces acting on the guide wire's tip during vascular catheterization [9, 15] (Fig. 14.24). To achieve this, force measurement and signal transmission out of the patient's body are realized. The transmitted signals are used to control the actuators of a haptic display, providing a scaled, amplified force that is coupled back onto the

**Fig. 14.24** Schematic of the assistive system HapCath: The forces *F*<sup>0</sup> at the tip of the guide wire are measured by means of a small force sensor. The signal *SF*<sup>0</sup> is transmitted out of the patient's body over the wire. Within a haptic display the signal *SF*<sup>0</sup> is reconverted into a scaled force *n* · *F*<sup>0</sup> by means of amplifiers and actuators, thereby overcoming the friction force *FF* within the catheter and vessels. This force is displayed to the surgeon's hand as the amplified force *FH*

guide wire. This scaled force surpasses the friction force of the guide wire, enabling the user to feel the tip interacting with the walls and obstructions inside the vessel. This shall simplify and accelerate the navigation of the wire and reduce the risk of puncturing the vessel or damaging and stripping off arteriosclerotic depositions. The aim of providing haptic feedback is to enable feeling the right way through the vessels, just like with a blind man's cane. For this purpose, very small force sensors have been designed, fabricated, tested, and integrated into guide wires. Special electronics to calculate the 3D force vector acting at the tip have been designed, and a haptic display with a translational and a rotational degree of freedom to couple the amplified forces back onto the guide wire has been constructed and tested.

## *14.2.2 Deriving Requirements*

To our knowledge, the exact forces at the guide wire tip during catheterization remained unknown until recently. For this project, detailed analyses of the advancement of the guide wire within the vessels were performed, using both simulation [8] and experimental measurements [12] of the guide wire interactions.

## *14.2.3 Design and Development*

#### **14.2.3.1 Force Sensor Design**

Figure 14.25 shows selected relevant scenarios of the interaction of the guide wire with the vessel walls and with stents inside the vessel.

A force sensor can be integrated at the tip of the guide wire or at some distance from the tip. To allow measuring the interaction forces when the guide wire is advanced tip-backwards (Fig. 14.25d), the sensor would need to be integrated several centimeters away from the tip. This, however, adds friction forces to the sensor signal and lowers the frequency response due to higher mass and damping. The most beneficial location for the force sensor is therefore directly in the tip of the wire, owing to the higher amplitude and frequency resolution of the contact force measurement.

Simulations were performed in Matlab in which the guide wire is modeled as distributed elastic elements interacting with visco-elastic artery walls [8]. Additionally, experiments to determine the buckling load of different types of guide wires were conducted [12]. Both methods reveal a maximum axial force of around 100–150 mN, e.g. for penetrating occlusions, depending on the type of guide wire used. The forces during advancing, navigating and detecting surface properties, e.g. roughness or softness, are estimated to be in the range of 1 mN to 25 mN.

To allow force measurement at the tip, two types of micro force sensors have been designed and fabricated [10, 12, 13, 19] (Fig. 14.26). They are built from monocrystalline silicon with implanted boron p-type resistors. This technology was chosen to fulfill the requirements of micro-scale manufacturing with its high level of integration, a relatively high voltage output for robust external readout, as well as

**Fig. 14.25** Pictures of different interactions of the guide wire with the vessel, with different kind of plaque (**a**), within a heavily wriggled vessel path (**b**), and with a stent (**c**) and (**d**)

**Fig. 14.26** Two types of mono crystalline silicon force sensors; both are designed to resolve the full force vector in amplitude and angles. Their size is compared to an ant

high mechanical stiffness to fulfill the requirement of a high frequency resolution of up to 1000 Hz. For stable control of even very low forces, and for safety reasons, static force measurement must be supported. This allows exact control of low forces even after the guide wire tip has been in static contact with a constriction over a longer period of time.

#### **14.2.3.2 Guide Wire and Sensor Packaging**

Guide wires are disposable medical products, manufactured with precision-engineering technologies. A guide wire requires maximum torsional stiffness and a variable bending stiffness along its length. Integrating an electrical connection to the sensor over or within the guide wire is a challenging task: replacing stainless steel or nickel-titanium with softer conductor materials reduces the rotational stiffness and thus the mechanical performance. This is the main reason why the space for the integration of electrical wires is very limited. The electrical connection is established with four insulated, robust copper wires, each with a small diameter of 27 µm.

The sensors are glued onto the tip of the wire with a UV-curable medical adhesive. First prototypes were encapsulated in a flexible polyurethane polymer [14] and covered with medically compatible Parylene C. The second generation of the tactile guide wire is covered with Pebax 3533 and hydrophilically coated [25]. This improves navigability due to the high lubricity and reduced friction of the guide wire. Figure 14.27 gives an insight into the assembly of the second-generation wire.

#### **14.2.3.3 Haptic Display Design**

The guide wire is navigated through the vessels with two degrees of freedom: translationally, to advance the guide wire, and rotationally, to choose the relevant wire

**Fig. 14.27** The integration of the sensor into the guide wire tip encompasses several steps of precision mounting, dispensing and covering with glues and cover polymers

**Fig. 14.28** Basic design of the haptic user-interface with the translational degree of freedom and side view of the implementation [22]

branch. The haptic display is designed to provide these forces and motions (Fig. 14.28) [22].

The haptic display supports the generation of static forces to display the penetration of occlusions. To give feedback on surface roughness, and to reflect the dynamic amplitudes during penetration or when the wire is moved over the grid of stents or over rough depositions, the haptic interface is also designed with low mechanical inertia to generate high-frequency feedback. To optimize the dynamic performance of the haptic interface, the equivalent circuit representation of the electro-mechanical setup including guide wire and passive user impedance is used (Fig. 14.29) [22].

Because friction is a relevant contribution to the overall force, it is assumed that the haptic impression can be further optimized by reducing or canceling out the friction forces. Figure 14.24 shows that the force *FH* at the physician's hand equals the sum of the contact force *F*0, the friction force *FF* and the force *n* · *F*<sup>0</sup> generated by the haptic display. In turn, the friction force *FF* can be calculated when the contact force *F*0, the force *n* · *F*<sup>0</sup> of the haptic interface and the force *FH* at the physician's hand are known. Therefore, the hand force *FH* needs to be measured in real time. For this purpose, a hand force sensor was developed and tested. Figure 14.30 shows the technical implementation and the integration of this sensor into a handle used for steering a guide wire during catheterization.
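The friction estimation described above can be sketched in a few lines, taking the stated force balance *FH* = *F*0 + *FF* + *n* · *F*<sup>0</sup> at face value (function name and the example values are illustrative, not project data):

```python
def estimate_friction(f_hand, f_tip, n):
    """Estimate the friction force F_F on the guide wire in newtons.

    Per the force balance described above, the hand force F_H is the
    sum of the tip contact force F_0, the friction force F_F, and the
    display force n*F_0, so F_F follows by rearrangement.
    """
    return f_hand - f_tip - n * f_tip

# Example: 30 mN at the hand, 2 mN at the tip, amplification n = 10
# leaves 8 mN attributable to friction.
print(round(estimate_friction(0.030, 0.002, 10), 6))  # 0.008
```

With *FF* known, the display can add a compensating force on top of the amplified tip force, which is the friction-cancellation idea mentioned above.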

**Fig. 14.29** Equivalent network representation of the haptic user-interface including the guide wire and the user's passive mechanical impedance

**Fig. 14.30** Hand sensor integrated into a handle for steering the guide wire. FEM Analysis for **a** tensile and **b** compressive force. **c** integration concept of the hand force sensor to assemble a standard size handle for guide wire manipulation

#### **14.2.3.4 Electronic Design**

The sensor and the haptic display are powered by a single electronic system. The electronics provides the power supply for the force sensor and a unique six-channel, high-resolution analog frontend. A microcontroller unit controls the sensor readout and calculates the 3D force vector of the contact forces. It also provides angle measurement and PWM control for the two brushless DC motors of the haptic feedback interface. Force signals are transferred to a PC and to a display for information purposes. The control loop is implemented in the microcontroller itself, without the need for time-critical communication with the PC. This allows for a fast control loop with a rate of up to 10 kHz for smooth haptic feedback [12]. A second design was developed using a LabView real-time system [21].
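The text does not detail how the six analog channels are combined into a 3D force vector. A common approach for piezoresistive multi-axis sensors is a linear calibration matrix mapping the channel readings to force components; the sketch below assumes such a linear model with placeholder coefficients (the matrix `C` and all names are hypothetical, not taken from the project):

```python
import math

# Hypothetical 3x6 calibration matrix mapping the six analog channels
# of the piezoresistive sensor to the force components (Fx, Fy, Fz).
# The coefficients are placeholders, not calibration data.
C = [
    [0.8, -0.1, 0.0, 0.1, 0.0, 0.0],
    [0.0, 0.7, -0.2, 0.0, 0.1, 0.0],
    [0.0, 0.0, 0.0, 0.6, 0.6, 0.5],
]

def force_vector(channels):
    """Linear calibration F = C @ v, returning (Fx, Fy, Fz) in mN."""
    return tuple(sum(c * v for c, v in zip(row, channels)) for row in C)

def magnitude(f):
    """Amplitude of the force vector, as needed for scaled feedback."""
    return math.sqrt(sum(x * x for x in f))
```

In a real device the matrix would be determined once per sensor by a calibration procedure; the microcontroller then only performs the cheap matrix-vector product in the control loop.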

## *14.2.4 Verification and Validation*

To validate the function of the overall haptic system, the tactile guide wire, the electronics and the haptic display are connected [15]. The guide wire is advanced into a model of the arteries built from silicone tubes. The clean silicone tube surfaces mimic smooth, healthy arteries. In parts, the tubes are filled with epoxy glue mixed with sand, which mimics rough depositions like calcified plaque. Figure 14.31 shows the sensor signal when the guide wire is moved inside the model. The test signals are fed back to the haptic display, and the interaction of the tip with the vessel walls can be discriminated. To analyze the force response in more detail, glass plates with different surface roughness were prepared; the guide wire tip is moved over the surfaces and the signal is recorded and presented using the haptic display (Fig. 14.32). The resulting haptic feedback allows for discrimination of smooth and

**Fig. 14.31** Moving the guide wire with a minimal contact force. The physician maneuvers the guide wire by rotating, pushing and pulling the handle (**a**). Measurements with a first-generation prototype of the tactile guide wire within a model of the arteries with artificial plaque (**b**). The measurement shows the sensor signal during inserting, moving the wire forward, rotating the tip and going into the right and then into the left vessel branch

**Fig. 14.32** Force signal over time recorded from packaged sensors integrated into a first-generation guide wire prototype to evaluate different surface roughness: glass (**a**), paper (**b**), sand in epoxy glue (**c**). Notable is the reproducible, nearly periodic output of the packaged sensor on paper (**b**). Increasing roughness of the surfaces leads to increasing output signals, from (**a**) over (**b**) to (**c**)

rough surfaces. When touching different surfaces, the contact force is amplified and the surfaces can be distinguished clearly.

By increasing the amplification factor of the forces, the sensation of soft or elastic surfaces changes to that of much more rigid features due to the higher stiffness emulated by the system. This makes soft, fragile surfaces much easier to detect. They become virtually harder, whereby the elongation of the tissue under force is reduced. We assume that this can lead to far fewer ruptures of vulnerable features, and thus to fewer complications during catheterization in the future, as well as to a higher success rate for complicated interventions in tortuous arteries.

## *14.2.5 Design Updates and Lessons Learned*

The project HapCath, Haptic Catheter, was started in 2004, and several challenging design tasks have been supported by research topics. In a technology-transfer project, a second generation of guide wire prototypes was designed in cooperation with industrial partners. Research was conducted to enable sample manufacturing of guide wires and to improve the stability of the signal transmission. The sensor design has been adapted accordingly. The main requirements derived in the early project phases remain valid.

To extend the experiments and to validate the findings in a realistic test environment, all technical components have been combined into a new demonstrator system (Fig. 14.33). This system is used for demonstrations and tests with vascular surgeons in order to evaluate and optimize the system performance.

Senior cardiologists performed tests on the navigability of the prototypes in standardized vascular models (Figs. 14.33 and 14.34). They report good characteristics of the second-generation prototypes. To meet the full performance characteristics of a high-performance recanalization guide wire, the core stiffness can be increased through a larger cross-section.

**Fig. 14.33** Tests inside a standardized vascular model [10]

## *14.2.6 Conclusion and Outlook*

The project involves several technical challenges, encompassing sensor design and sensor integration as well as adapted haptic feedback. The current research focuses on transferring the results into application by refining the design and the technical implementation of the sensor, the electrical wiring and the guide wire assembly. Tests with cardiologists and the optimization of the guide wire are ongoing. The application will benefit from ongoing research on optimized, filtered signal feedback from touch scenarios of the tip with different surfaces. Additionally, research on smart wires is performed, in which the orientation and the stiffness of the guide wire tip can be controlled electrically using smart materials. The HapCath project is funded by the German Research Foundation DFG under grant WE2308/3-1:3 and WE2308/15-1:2.

## **14.3 FingHap—Haptic Finger Rehabilitation Device**

Alireza Abbasimoshaei, Thorsten.A. Kern, Yash Shah *Institute of Mechatronics in Mechanics, Technische Universität Hamburg al.abbasimoshaei,t.a.kern@tuhh.de*

**Fig. 14.34** Tests inside a standardized vascular model (**a**) and (**b**). The forces measured with an external force sensor increase with the insertion depth of the guide wire due to overlaid friction forces. Only with the integrated force sensor and a haptic display can the contact force at the guide wire tip be made haptically available [10]

## *14.3.1 Introduction*

The lack of rehabilitation services in rural areas is a major issue that creates large differences in the facilities available across living areas. The main reasons are the availability and reachability of physiotherapists in different regions. Rehabilitation is one of the most important procedures after injuries. Due to the repetitive nature of this training, a fully robotic system could help physiotherapists rehabilitate a larger number of patients, either through home rehabilitation or wearable vibrotactile systems [24]. Such a device can record data or be used as a live system. It consists of the operator device, the patient device, and the haptic mechanism. In this section, after a brief introduction of the design and the control system, a telemanipulation system based on ROS is introduced. This section presents the FingHap rehabilitation device, which enables patients to stay at home and still receive the same level of treatment. The patient and the physiotherapist are connected through a cloud server, using a proposed external two-way communication approach. The therapist physically guides the robot present at the clinic and, via real-time communication, the patient's robot replicates the same motion. As feedback, the state of the patient's robot is also passed on to the therapist's robot. In this way, the therapist can physically experience what the patient feels and decide on the next level of exercise. All control parameters of the patient's robot, such as velocity, force, and PID values, can be accessed and controlled by the therapist. The approach has been tested and the achieved results are shown.

## *14.3.2 Design and Prototyping*

In this robot, all DOFs of the finger joints and the wrist are driven by two motors. A schematic view of the designed system with a hand is shown in Fig. 14.35a. As shown, the green finger rest is a flexible element that adapts its rotation to the joints' center of motion.

As can be seen in Fig. 14.35a, the hand is located in the upper section, which includes the green finger part and two ball bearings. Because the joint's center of motion changes during rotation, a flexible element is used for the finger part. Due to this flexibility, the fingers can be rehabilitated while the center of rotation adapts.

The system has two motors that move the cables and provide the finger displacement (Motor 1) and the wrist rotation (Motor 2). Moreover, two ball bearings and a shaft transmit the motor rotation to the wrist part [4, 18].

Tracks are provided to adjust the length of the finger parts to each patient (Fig. 14.35b). A bar at the backside of the system locks or unlocks the individual finger joints; a circular part at its end makes adjusting and unlocking different joints easier. Changing which joint is unlocked thus selects which phalanx is rehabilitated. The configuration of the bar shown in Fig. 14.35b is for DIP training of the index finger. Figure 14.35c shows the posture of the index finger in the device: the finger part is adjusted to the length of the finger, with the DIP placed at its tip. The flexion of the index finger is driven by the cable shown in Fig. 14.35c, and the spring moves it back (Fig. 14.36).

## *14.3.3 Design an Adaptive Fuzzy Sliding Mode Controller for the System*

#### **14.3.3.1 Desired Trajectory During Rehabilitation**

To design a useful control system, the desired trajectory of each part must be found. Thus, to find the desired trajectories of the fingers, the movement kinematics of

**Fig. 14.35** Schematic design of the system

(a) Whole device (b) Wrist rehabilitation (c) Finger rehabilitation

**Fig. 14.36** Prototype of the rehabilitation robot

all joints were analyzed during their tasks. Ten healthy subjects, seven males and three females, performed finger exercises at three different velocities [16]. Every subject performed five trials for each joint and rested about one minute between trials. For each trial, the physician fixed the subject's finger joints except for the free joint to be moved. The subjects moved their phalanx according to the physician's instructions, and an attached gyro sensor measured the angle of rotation. These patterns are independent of finger length, because they depend only on the joint angle and time.

The collected data were averaged and fitted with polynomials; before fitting, averaging was used to remove noise from the data. The rotation angle of the index finger's DIP joint in flexion follows the second-order polynomial (14.4); Fig. 14.37 shows the fit with an R-square of 0.9788 (with *K* = 1). Equation 14.5 and Fig. 14.38 show the result of fitting a third-order polynomial with *K* = 1 to the PIP joint, with an R-square of 0.9857.

$$\theta = K \times (0.27 \times t^2 + 2.4 \times t + 0.5) \tag{14.4}$$

**Fig. 14.37** DIP angle with respect to time (with *K* = 1)

**Fig. 14.38** PIP angle with respect to time (with *K* = 1)

$$\theta = K \times (-0.23 \times t^3 + 3 \times t^2 + 0.67 \times t - 2.2) \tag{14.5}$$

Similar work was done for the other fingers and phalanges. The equation for the MCP joint of the index finger is as follows.

$$\theta = K \times (-0.41 \times t^2 + 14 \times t - 2) \tag{14.6}$$

The patients are trained under the supervision of a physician at different speeds. After recording the averaged data, it was found that the medium and fast training speeds are obtained by multiplying the slow-speed trajectory (*K* = 1) by *K* = 2.2 and *K* = 3, respectively.
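The desired trajectories of Eqs. 14.4–14.6, including the speed factor *K*, can be evaluated directly; the function names below are ours, the coefficients come from the text:

```python
def dip_angle(t, K=1.0):
    """Desired DIP angle of the index finger, Eq. (14.4), in degrees."""
    return K * (0.27 * t**2 + 2.4 * t + 0.5)

def pip_angle(t, K=1.0):
    """Desired PIP angle of the index finger, Eq. (14.5)."""
    return K * (-0.23 * t**3 + 3 * t**2 + 0.67 * t - 2.2)

def mcp_angle(t, K=1.0):
    """Desired MCP angle of the index finger, Eq. (14.6)."""
    return K * (-0.41 * t**2 + 14 * t - 2)

# Slow, medium, and fast training speeds use K = 1, 2.2, and 3.
for K in (1.0, 2.2, 3.0):
    print(K, round(dip_angle(2.0, K), 3))
```

Because *K* simply scales the whole polynomial, the trajectory shape is identical at all three speeds and only its amplitude over time changes.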

#### **14.3.3.2 Mathematical Model of the System and AFSMC Design**

By applying Newton's law to the fingertip, the dynamic equation of the system is obtained [17].

$$I\ddot{\theta} = T \times \sin(\alpha) \times l\_3 + T \times \cos(\alpha) \times E - K \times ((\sqrt{A} - \sqrt{B}) \times \cos(\beta) \times l\_3 + (\sqrt{A} - \sqrt{B}) \times \sin(\beta) \times G) - C\dot{\theta} - K\_1 \theta \tag{14.7}$$

$$A = (H + l\_3 \sin(\theta))^2 + (l\_1 + l\_2 + l\_3 \cos(\theta))^2 \tag{14.8}$$

$$B = H^2 + (l\_1 + l\_2 + l\_3)^2\tag{14.9}$$

$$I\ddot{\theta} = T \times R \tag{14.10}$$

The robot's simplified kinematic model is shown in Fig. 14.39a; *E*, *G*, and *H* can be seen in Fig. 14.39b. In Eq. 14.7, *l*1, *l*2, and *l*<sup>3</sup> are the lengths of the phalanges; *I*, *R*, and θ are the moment of inertia of the rotating part, the radius of the motor shaft, and the rotation angle of the finger, respectively. *C*, *K*1, and *K* denote the robot's damping, the robot's stiffness and the spring's stiffness, and *T* is the cable force. The angles α and β are given below, and *D* is the distance between the finger part and the connection point of the cable with the system (Fig. 14.39b).

$$\alpha = \theta + \arctan(\frac{D - l\_3 \sin(\theta) - E \cos(\theta)}{l\_1 + l\_2 + l\_3 \cos(\theta) - E \sin(\theta)}) \tag{14.11}$$

$$\beta = \theta + \arctan(\frac{l\_1 + l\_2 + l\_3 \cos(\theta) + G \sin(\theta)}{H + l\_3 \sin(\theta) - G \cos(\theta)}) \tag{14.12}$$

In Eq. 14.7, θ, *l*, *x*, and *y* are the finger rotation angle, the cable length, and the horizontal and vertical components of the cable length, respectively.

To remove the effects of unknown parameters and uncertainties in the identification of the system's mechanical model, a Sliding Mode Controller (SMC) has been designed. This controller reduces the effects of parameter variations, uncertainties, and disturbances.

The sliding mode control signal (*u*) consists of two parts (*ueq* and *urb*) and must guarantee the stability of the system. The equivalent controller is *ueq*; *urb* handles the uncertainties and disturbances. They are given by Eqs. 14.14 and 14.15. More detailed information on this system's sliding mode controller can be found in [17].

$$u = u\_{eq} + u\_{rb} \tag{14.13}$$

**Fig. 14.39** Simplified kinematic model of the robot

$$u\_{eq} = \mathbf{g}^{-1} (\ddot{\mathbf{x}}\_d - f - k(\dot{\mathbf{x}} - \dot{\mathbf{x}}\_d) - \eta \mathbf{s}) \tag{14.14}$$

$$u\_{rb} = -\mathbf{g}^{-1} \rho \cdot \text{sgn}(\mathbf{s}) \tag{14.15}$$

Where η is a positive constant. The general equation of the system is considered as in Eqs. 14.17 and 14.18, in which λ denotes unknown disturbances satisfying Eq. 14.16.

$$\|\lambda\| < \rho,\tag{14.16}$$

The formulas for *g* and *f* for this system are obtained as Eqs. 14.19 and 14.20.

$$\ddot{\mathbf{x}} = f(\mathbf{x}, t) + \mathbf{g}(\mathbf{x}, t)\boldsymbol{\mu} + \lambda \tag{14.17}$$

$$\mathbf{y} = \mathbf{x} \tag{14.18}$$

$$\mathbf{g} = (\frac{1}{I})(\sin(\alpha)l\_3 + \cos(\alpha)E) \tag{14.19}$$

$$\begin{split} f &= (\frac{1}{I})(-K \times ((\sqrt{A} - \sqrt{B}) \times \cos(\beta) \times l\_3 \\ &+ (\sqrt{A} - \sqrt{B}) \times \sin(\beta) \times G) - C\dot{\theta} - K\_1 \theta) \end{split} \tag{14.20}$$

Where *g*(*x*, *t*) and *f* (*x*, *t*) are unknown functions of the system's dynamic equation. Moreover, λ denotes unknown disturbances satisfying Eq. 14.21.

$$\|\lambda\| < \rho \tag{14.21}$$

SMC exhibits a chattering phenomenon because of the sign function in its control law. In [17], a fuzzy controller is used to solve this problem: a fuzzy sliding mode controller (FSMC) is designed by combining a fuzzy controller with the SMC.
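As a minimal single-DOF illustration of Eqs. 14.13–14.15, the sketch below computes one control step. The discontinuous sgn(*s*) is replaced here by a tanh saturation, a simple anti-chattering stand-in for the fuzzy layer of the FSMC (our substitution, not the authors' fuzzy design); all gain values are illustrative:

```python
import math

def smc_control(x, x_dot, xd, xd_dot, xd_ddot, f, g,
                k=5.0, eta=1.0, rho=0.5, epsilon=0.05):
    """One step of the sliding mode law u = u_eq + u_rb (Eqs. 14.13-14.15).

    x, x_dot         : measured state and its derivative
    xd, xd_dot, ...  : desired trajectory and its derivatives
    f, g             : model terms from x_ddot = f + g*u (Eq. 14.17)
    """
    e, e_dot = x - xd, x_dot - xd_dot
    s = e_dot + k * e                          # sliding surface
    u_eq = (xd_ddot - f - k * e_dot - eta * s) / g   # Eq. (14.14)
    u_rb = -rho * math.tanh(s / epsilon) / g         # smoothed Eq. (14.15)
    return u_eq + u_rb
```

On the sliding surface (*s* = 0) the robust term vanishes and only the equivalent control remains, which is exactly the mechanism by which the smoothed law avoids the chattering of the pure sign function.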

Because the stiffness of patients' hands varies, different interaction forces arise between robot and patient. Therefore, as a further step, an adaptive controller was designed [1]. The adaptive law estimates the uncertainties and the interaction force and drives the trajectory-tracking error to zero. Equation (14.22) gives the formula of the designed adaptive controller, considering the unknown parameters and the patient's interaction force.

$$u\_{ad} = -\frac{\int \frac{s}{I}}{I \times \mathbf{g}} \tag{14.22}$$

In which *s* is the sliding surface.

## *14.3.4 Cloud Enabled Communication*

In this project, a trusted Google cloud server is utilized. Neither device has a server installed; both are merely clients of the cloud server. This reduces the system load to a large extent (Fig. 14.40).

Moreover, a node on a Raspberry Pi acts as the cloud publisher. Inside this node, a ROS subscriber extracts the data from the ROS topics and publishes it to the server at a rate of 35 Hz or less. The reason is that both ROS and the Dynamixel workbench publish the data at 150 Hz, which is too much for a cloud application. Only the relevant data describing a change in state is transferred from one robot to the other.
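The rate-limiting and change-detection step can be sketched independently of ROS. Only the 150 Hz input and the ≤35 Hz output rates come from the text; the class name, the change threshold, and the exact filtering policy below are illustrative assumptions:

```python
class CloudThrottle:
    """Forward only significant state changes, capped at a maximum rate.

    Mirrors the publisher node described above: motor states arrive at
    ~150 Hz, but at most ~35 Hz of meaningful updates are sent to the
    cloud. Names and thresholds are illustrative, not from the project.
    """
    def __init__(self, max_rate_hz=35.0, min_delta=0.5):
        self.min_interval = 1.0 / max_rate_hz
        self.min_delta = min_delta
        self.last_time = float("-inf")
        self.last_value = None

    def filter(self, t, value):
        """Return value if it should be published at time t, else None."""
        if t - self.last_time < self.min_interval:
            return None   # too soon after the last publication
        if self.last_value is not None and abs(value - self.last_value) < self.min_delta:
            return None   # no significant change in state
        self.last_time, self.last_value = t, value
        return value
```

A filter like this would sit between the ROS subscriber callback and the cloud publish call, so the 150 Hz stream is thinned before it ever leaves the Raspberry Pi.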

**Fig. 14.40** ROS—Cloud architecture, motor images © *Dynamixel*, used with permission

## *14.3.5 System Setup and Data Communication*

#### **14.3.5.1 Passive Mode**

Passive mode is the beginning of the rehabilitation therapy, when the patient's hand is assumed to be too weak to perform the exercise. Through the robot at the clinic, the therapist teaches the exercises to the patient. At this stage, only the required force should be exerted on the patient's hand, as in conventional therapy. Therefore, from the beginning, a low static current is applied to the motor. In this manner, a static torque is produced that pushes the patient's hand through the exercise. To replicate the motion, the current position of the therapist's robot is constantly transferred to the patient's robot; this one-way communication makes sure one robot follows the other. In the other direction, the current position of the patient's robot is constantly fed back. It is continuously subtracted from the current position of the therapist's robot to obtain the difference in position.

A larger difference in position increases the resistive torque on the therapist's motion, according to the equation in Fig. 14.41. This resistive torque guides the therapist back to the position of the patient and settles to zero once both are in the same position. During further therapy, the torque that was kept static on the patient's side can be accessed and increased; this torque parameter is adjustable and can be changed to make the patient follow the motion.

**Fig. 14.41** Passive therapy structure

#### **14.3.5.2 Assistive Mode**

Assistive mode is the second stage of the therapy, in which the patient can start the exercise but needs some assistance along the way. As the patient begins the motion, the therapist's robot follows it; as soon as the patient stops, the therapist can provide physical assistance by moving the robot. Following the motion of the therapist, the assistive torque (14.23) on the patient's hand keeps increasing and makes the patient's forward motion possible. This torque is given an upper limit to avoid exerting excessive force on the patient. As in the first stage, if the patient is not able to follow, the resistive torque leads the therapist back to the position of the patient. In (14.23), α is a constant.

$$\text{assistive\_torque} = \alpha \ast \text{torque\_inserted\_by\_therapist} \tag{14.23}$$

#### **14.3.5.3 Active Mode**

In the active mode, the patients can perform the exercise by themselves and the therapist can visualize the performance through the robot at the clinic. The stiffness of the patient's movement can be manipulated through the stiffness parameter to increase the level of difficulty.

#### **14.3.5.4 Resistive Mode**

In the last stage of the therapy, the patient is assumed to have already regained over 80% of function. Now, real stress on the arm is required to reach 100% recovery, so the therapist provides high resistance to the patient's movement. Amid the free flow of the patient's motion, the therapist holds the robot; the patient's robot is then rigidly forced to match the therapist's position, providing severe resistance to the patient's onward motion. This resistance (14.24) keeps increasing as the patient moves forward, and the patient has to overcome the growing resistive torque to keep moving. In (14.24), β is a constant.

$$\text{resistive\_torque} = \beta \ast \text{difference\_in\_position} \tag{14.24}$$
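The torque laws of Eqs. 14.23 and 14.24 can be sketched as follows. The proportional structure and the existence of an upper limit come from the text; the values of α, β and the limit are illustrative assumptions:

```python
def assistive_torque(therapist_torque, alpha=0.8, limit=0.4):
    """Assistive mode, Eq. (14.23): torque scaled from the therapist's
    input and clamped to an upper limit (0.4 N·m is an illustrative
    value) so that excessive force is never applied to the patient."""
    return min(alpha * therapist_torque, limit)

def resistive_torque(position_difference, beta=0.05):
    """Resistive mode, Eq. (14.24): resistance grows with the position
    gap between the therapist's robot and the patient's robot."""
    return beta * abs(position_difference)
```

In passive mode the same position-difference law acts in the opposite direction, building up resistive torque on the therapist's robot until the positions coincide again.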

#### **14.3.5.5 Automatic Mode**

The automatic mode is an alternative to the passive mode. The therapist can directly command an initial velocity for the patient's robot, with a low static current, through the cloud server; the therapist's robot then follows the movement. While monitoring, this velocity can be adjusted. If the motion is jerky, the PID values on the patient's side can be changed through the cloud server to reduce the jerks. Every parameter can be manipulated over the communication link until the desired state is achieved, and the mode can also be changed online.

## *14.3.6 Experiments and Results*

#### **14.3.6.1 Control System**

In an experiment to explore the AFSMC performance, the slow movement of each phalanx was tested. Ten volunteers performed the finger exercises with different controllers on the robot; the data were averaged and fitted with a polynomial. As shown in [1], the adaptive fuzzy sliding mode controller provides better performance because it reduces the effects of differences between patients; its fuzzy part also reduces the chattering effects. According to the experimental data, using the adaptive fuzzy sliding mode controller (AFSMC) reduces the average trajectory-tracking error by 80% [1].

#### **14.3.6.2 Tele-Communication**

The time lag depends on many factors. A small analysis was performed to observe the time lag in the communication system at different internet speeds. Data packets with the integer values 100–113 were published in a loop from one Raspberry Pi to another, and the times at which each value was sent and received were noted; Fig. 14.42 shows the results. The test was performed at internet connection speeds of 27, 58, and 79 Mbps, yielding average time lags of 85, 65, and 45 ms, respectively. The analysis shows that the internet speed is a major factor in the communication system.
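The lag evaluation amounts to averaging receive-minus-send timestamps over the matched packets. The timestamps in the example below are illustrative, not the measured data from the experiment:

```python
def average_lag_ms(send_times, recv_times):
    """Average one-way lag in milliseconds for matched packets,
    mirroring the integer-packet test described above. Both lists
    must be in the same packet order."""
    lags = [r - s for s, r in zip(send_times, recv_times)]
    return 1000.0 * sum(lags) / len(lags)

# Illustrative timestamps in seconds; the real values came from the
# two Raspberry Pis' clocks during the experiment.
send = [0.000, 0.100, 0.200, 0.300]
recv = [0.085, 0.182, 0.290, 0.381]
print(round(average_lag_ms(send, recv), 1))  # 84.5
```

Note that such a measurement presumes synchronized clocks on both devices; any clock offset between the two Raspberry Pis appears directly as a bias in the computed lag.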

All therapies were tested on six different people (four men and two women), and the results clearly validate the proposed system. In passive therapy, a low static current was applied to the patient's robot. As soon as the therapist starts the motion, the patient follows the same motion with a constant force on the patient's hand; this force can be set by changing the motor current. According to the manual of the Dynamixel XM540-W270 motor, at currents of 0.3 A and 4 A it provides about 0.4 N·m and 8.8 N·m of torque, respectively. For the first case, it is assumed that the patient had no problems following the therapist. The results in Fig. 14.43 match this assumption: the patient smoothly follows the therapist, and the two trajectories in the position-versus-time graph align with each other, separated only by a small time lag. The current-versus-time graph shows that, even though the therapist exerts considerable force on the rehab device, the patient still experiences only the low static current applied at the start of the therapy. The time lag measured in the experiment was 183 ms; it includes the cloud transfer time, the time to apply the received values to the patient's motor, and the time for the motor to reach the therapist's position. This is the overall point-to-point time lag between therapist and patient (Fig. 14.44).

In the more general case, the patient has some difficulty following the motion. That increases the difference in position between therapist

**Fig. 14.42** Time lag at different internet speeds

**Fig. 14.43** Typical recording for a passive therapy run

**Fig. 14.44** Passive therapy with lag of the patient

and patient. As the difference increases, the resistive torque on the therapist's robot builds up according to Fig. 14.41.

Furthermore, the results of the assistive therapy are shown in Fig. 14.45. The patient starts the exercise and the therapist follows the same motion. As seen in the position-versus-time graph, at the time counter of 4130 the motion of the patient stopped. This was noticed by the therapist, and assistance was provided from there onwards. The assistive torque on the patient's hand, calculated as per (14.23), is depicted as the motor current in the current-versus-time-counter graph of Fig. 14.45; the high current peaks show the assistive torque provided to the patient.

The active stage of the therapy is shown in Fig. 14.46. The patient is expected to perform a free-flow motion, as shown in the position graph. The therapist would

**Fig. 14.45** Typical recording for an assistive therapy run

**Fig. 14.46** Typical recording for an active therapy run

increase the stiffness of the motor, which is reflected in the graph by the present current of the motors. An increase in current indicates an increase in motor stiffness, so the patient has to exert more effort to rotate the robot. The measured point-to-point time lag was 145 ms. The results of the resistive therapy are shown in Fig. 14.47. Here the patient has to overcome a strong resistance, defined by (14.24). In this graph, too, the resistive torque is expressed by the present current of the motor. In all previous therapies the average measured current was around 60–70 mA, but in the resistive therapy the patient has to overcome a resistance as high as 200 mA to move forward. This is considered the last and toughest stage of the therapy.
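One way to picture the resistive mode is as a current that always opposes the patient's direction of motion, so that roughly 200 mA must be overcome to keep moving. The sketch below assumes this simple opposing-current law; it is not the actual definition in Eq. (14.24).

```python
def resistive_current(patient_velocity: float, i_resist: float = 0.2) -> float:
    """Resistive-mode sketch: command a current opposing the patient's
    motion direction, so roughly i_resist (0.2 A, i.e. the ~200 mA
    reported in the text) must be overcome to keep moving.

    Illustrative assumption; Eq. (14.24) in the text defines the
    actual resistance law.
    """
    if patient_velocity > 0:
        return -i_resist   # oppose forward motion
    if patient_velocity < 0:
        return i_resist    # oppose backward motion
    return 0.0             # no motion, no resistance
```

Compared with the 60–70 mA average of the other modes, this roughly threefold current explains why the resistive stage is experienced as the toughest.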

Most of the position error is caused by the time lag, but the observed error is not relevant to the training success: the therapist can compensate for a potential lagging behind of the patient, and success is identified by the amount of travel rather than the exact synchronicity of the movement. Thus, there should not be much difference if the time lag increases by a certain amount, because in this system all data are transferred sequentially from one part to the other. The system ensures that the data sequence remains the same even under a high time lag, so the patient still follows the same exercise, only with a larger time delay. This is supported by the subjective answers of the patients, who felt neither uncomfortable nor disturbed in the passive training mode (Fig. 14.48).
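The order-preserving transfer described above can be sketched with a small replay buffer: samples may arrive late, but they are released strictly in their original sequence, so a larger lag only delays the exercise without distorting it. This is an illustrative data structure, not the actual cloud protocol used in the system.

```python
import heapq

class OrderedReplayBuffer:
    """Sketch of order-preserving transfer: position samples carry a
    sequence number and are replayed strictly in order, regardless of
    arrival order or network lag. (Illustrative; not the system's
    actual cloud transport.)"""

    def __init__(self) -> None:
        self._heap: list[tuple[int, float]] = []
        self._next_seq = 0

    def receive(self, seq: int, position: float) -> None:
        """Store a sample that arrived from the network."""
        heapq.heappush(self._heap, (seq, position))

    def ready(self) -> list[float]:
        """Release all samples whose turn has come, in sequence order."""
        out = []
        while self._heap and self._heap[0][0] == self._next_seq:
            out.append(heapq.heappop(self._heap)[1])
            self._next_seq += 1
        return out
```

If sample 1 arrives before sample 0, the buffer simply holds it back until sample 0 is in, which is why the patient always traces the same trajectory, merely shifted in time.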

**Fig. 14.47** Resistive therapy

**Fig. 14.48** Intentional time lag

#### **14.3.6.3 User Experience for Telemanipulation System**

The users' opinions are very important, because the system should be user-friendly. To this end, and to evaluate the system comprehensively, a questionnaire survey was designed for the six subjects, who completed it after the training. All participants used the device without confusion and understood the instructions very well. Regarding the passive mode, they were asked whether the low static current provided by the robot was appropriate: 33% said it was more than enough, and 50% asked to increase it a bit. (It is worth mentioning that the static current can easily be changed through the server if required by the patient.) Some reviews asked for reduced vibration, which can be achieved by changing the model gain of the robot. Regarding the assistive, active, and resistive modes, all participants said the system performed as intended for the defined application and rated its performance eight out of ten on average; about half of them asked to make the motion a bit harder in the active mode. The safety of the system was rated at 95%. Regarding the overall movement of the system, around 84% said the system was smooth enough, and all participants stated that the system also works well with a dynamic time lag.

## *14.3.7 Conclusion and Outlook*

In this work, a mechanism for rehabilitation of the wrist and each finger joint with a low number of motors was presented. Moreover, to reduce the effects of unknown parameters and uncertainties, an AFSMC design method was proposed. This controller is more robust and less dependent on the system model because its fuzzy controller output is based on the error. Furthermore, thanks to its adaptive part, the controller can cope with the differences between patients. An 80% improvement in the performance of the control system was observed.
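The core idea behind an adaptive sliding-mode controller can be sketched in a few lines: a sliding surface built from the tracking error, and a switching gain that adapts with the surface magnitude so unmodelled differences between patients are absorbed. The gains, the `tanh` smoothing, and the adaptation law below are illustrative choices only; the published AFSMC additionally uses a fuzzy controller driven by the error, which is not reproduced here.

```python
import math

class AdaptiveSMC:
    """Heavily simplified sketch of the adaptive sliding-mode idea:
    sliding surface s = de + lam*e, and a switching gain K that grows
    with |s| to compensate unmodelled dynamics.

    lam, gamma, k0, and the tanh() smoothing are illustrative values,
    not the parameters of the AFSMC design described in the text.
    """

    def __init__(self, lam: float = 5.0, gamma: float = 0.1, k0: float = 1.0):
        self.lam, self.gamma, self.K = lam, gamma, k0

    def update(self, e: float, de: float, dt: float) -> float:
        s = de + self.lam * e               # sliding surface
        self.K += self.gamma * abs(s) * dt  # adaptation law: gain grows with |s|
        return -self.K * math.tanh(s)       # smoothed switching control
```

Because the gain adapts online, the same controller structure can serve patients with different hand dynamics without re-identifying a model, which is the robustness property claimed above.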

In another part of this project, a remote supervising system was established. In this structure, a two-way communication system with real-time data transfer was developed, making one robot follow the other. Thus, the first achievement was an effectively working remote supervising system providing real-time control with active feedback physically realized by the therapist. This approach can be used for any type of rehabilitation device. Additionally, the average point-to-point time lag was measured as 145 ms, which is comparatively low for a telerehabilitation system. This was made possible by combining ROS with external cloud services.
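The point-to-point lag figure can be understood as a stopwatch running from sending the therapist's position until the patient's motor has reached it, thereby covering cloud transfer, command execution, and motor motion. The sketch below assumes two placeholder callbacks standing in for the real ROS/cloud transport and motor polling.

```python
import time

def measure_point_to_point_lag(send_position, wait_until_reached) -> float:
    """Sketch of the point-to-point lag measurement: time from sending
    the therapist's position until the patient's motor reaches it.

    Both callbacks are hypothetical placeholders for the real transport:
    send_position might publish to a cloud broker, wait_until_reached
    might poll the patient's motor position.
    """
    t0 = time.monotonic()
    send_position()        # transmit the therapist's position
    wait_until_reached()   # block until the patient's motor arrives
    return (time.monotonic() - t0) * 1000.0  # lag in milliseconds
```

Averaging this measurement over many cycles yields figures like the 145 ms and 183 ms reported for the different therapy modes.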

Moreover, the users' impressions of the system in its different modes were collected via questionnaires. The overall view was positive, and the participants made some useful suggestions that will be considered in future versions. According to the graphs and the questionnaire, the time lag, which results from sending and receiving the position data, executing the data on the motor, and receiving the feedback, does not have a large effect on the performance, and a simple camera giving the physiotherapist an insight into the patient can compensate for it. Moreover, most participants said that they could do the training without any problem even without this camera.

This approach can fulfill the requirements for remote rehabilitation in rural areas. The healthcare systems of more than half of the countries in the world have yet to shift from conventional therapies to robot-assisted therapies. The cost of developing and delivering robotic therapies is high, but not as high as the cost borne by people with disabilities. Given these constraints, a system like this can motivate the healthcare industry to embrace new technologies and modernize rehabilitation services.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 15 Conclusion**

## **Thorsten A. Kern, Christian Hatzfeld, Alireza Abbasimoshaei, Arsen Abdulali, Jacqueline Gölz, Jörg Reisinger, and Fady Youssef**

Like any other design process, the design of haptic systems is largely driven by the optimization of a technical system, which rests on a large number of decisions about individual components that usually influence each other. At the beginning, the requirements of the customer or the project must be defined. The methods presented in Chap. 5 should be used to systematically identify the most important aspects of these requirements. However, the engineer should be aware that fewer precise and unambiguous terms are available for describing a human-

Christian Hatzfeld deceased before the publication of this book.

T. A. Kern (B) · A. Abbasimoshaei · F. Youssef

Hamburg University of Technology, Eißendorfer Str. 38, 21073 Hamburg, Germany e-mail: t.a.kern@tuhh.de

A. Abbasimoshaei e-mail: al.abbasimoshaei@tuhh.de

F. Youssef e-mail: f.youssef@tuhh.de

C. Hatzfeld Technische Universität Darmstadt, Darmstadt, Germany

A. Abdulali Department of Engineering, University of Cambridge, The Old Schools, Trinity Lane, CB2 1TN Cambridge, UK e-mail: aa2335@cam.ac.uk

#### J. Gölz

Technische Hochschule Ulm, Fakultät Elektrotechnik und Informationstechnik, Institut für Automatisierungssysteme (IAS), Albert-Einstein-Allee 53, 89081 Ulm, Germany e-mail: jacqueline.goelz@thu.de

#### J. Reisinger Mercedes-Benz Cars Development, Daimler AG, 71059 Sindelfingen, Germany e-mail: joerg@reisingers.de

machine-interface than one is used to. In addition, the customer's prior knowledge can lead to considerable confusion, since haptic terms in particular, such as resolution or dynamics, may be used in the wrong context or misunderstood. A better definition of the requirements, without major misunderstandings, is achieved, for example, by giving the customer aids: "show-and-tells" of haptics. It is necessary that the customer and the engineer come to a common understanding based on references known to both. It is very promising to describe thoroughly the interactions that the user should be able to perform with the task-specific haptic system, since these have a great impact on the design of the system and on the requirements derived from the capabilities of the haptic sense. For this reason, the engineer needs an understanding of the specifics of haptics; a skilled engineer navigates the customer through this unknown territory. These skills should not be limited to the technical characteristics described in Chaps. 2 and 3, but should also include some knowledge of the "soft", i.e. psychological and social, aspects of haptics, as described in Sect. 1.3.

Based on the above requirements, the technical design process can begin. For this purpose, an adapted version of the commonly known *V-model* is given in Chap. 4. This approach tries to integrate all the above aspects in a structured way. One of the very first decisions is the choice of the structure of the haptic system (Chap. 6). Although this decision stands at the very beginning of the design process, a rough sketch of the favored structure of the device to be developed must necessarily be made. This requires considerable knowledge of all areas of haptic device design, which will be needed again later in the actual design phase.

In addition to the decision on the general structure already mentioned, the basis of the design of kinesthetic and tactile systems is their kinematic structure (Chap. 8). According to the considerations made for the kinematics concerning the transmission and gear ratios, the working volume and the resolution to be achieved, suitable actuators are selected or even designed. In Chap. 9 the basis for this is laid by comparing the different actuator principles. Examples of their realizations, including unusual solutions for haptic applications, provide a useful collection for any engineer to combine kinematic requirements for maximum forces and translations with impedances and resolutions.

As admittance-controlled systems for kinaesthetic and tactile applications become more important, force sensors must be considered as another component of haptic devices. In Sect. 10.5, this technology is introduced, and the tools as well as the opportunities, but also the challenges, associated with its application are conveyed. A common application of haptic devices is in the human-machine interface of simulators, whether for games ranging from action to adventure, or for more serious applications such as surgical training, military use, or industrial design.

The design steps presented so far allow the haptic device to provide a tactile or kinesthetic output to the user and often to measure a response. Especially with today's computer technology, the device will almost always be connected to a PC via a standard interface. The requirements derived from this connection are set against an overview of standard interface technology in Chap. 11, which compares the performance of the interfaces.

Given the fairly common use of haptic devices to represent interaction, whether in virtual environments, to enhance telemanipulation, or simply as a means of feedback in mobile devices, a well-founded strategy is needed to recreate the impression of touch. This is achieved through a sophisticated combination of software solutions and data models. For a related introduction, see Chap. 12.

The cross-section given in this book is intended to improve and further accelerate the design of haptic devices and to help avoid the most critical mistakes typically made during the design process. Research in the field of haptic devices is making impressive progress. Adapted control concepts appear every few months; the use of haptic perception methods as design shortcuts has proven beneficial. Actuators are continuously improved; combinations of principles with haptically interesting properties appear on the market every few years. Closed-loop control systems have become more and more interesting due to the increasing availability of highly dynamic, high-resolution force sensors and powerful controllers. The whole area of touchscreens has created a market need pushing researchers, new startups, and industrial side-entrants into this growing market. This dynamic in a still comparably young field obliges engineers to follow current developments in research and industry closely. We hope that this book can contribute to this understanding!

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Appendix Impedance Values of Grasps**

The following tables provide the parameters for the model given by Fig. 3.8 and Eq. (3.7) in Chap. 3. They parametrize the different grasping situations discussed in Sect. 3.1.6.


**Table .1** Mean values of the mechanical impedance model according to Fig. 3.8 for different grasping situations


**Table .2** Linear interpolated dependencies of the model's parameters from grasp- and touch-forces according to Fig. 3.8 for different grasping situations. Interpolation according to *c* = *a* · *F* + *b*

# **Index**

#### **A**

Absolute threshold, 54 Active haptic interaction, 109 Active Touch *(Definition)*, 77 Actuation principle, 310 Actuator design, 309 Actuator designs, piezoelectric, 351 Actuator, electrostatic, 384 Actuator, piezoelectric, 346 Admittance controlled, open-loop, 196 Admittance controlled, closed-loop, 198 Admittance-Type System *(Definition)*, 85 Aesthetics, 8 Ampere turns, 323 Anatomical terms of the human hand, 59 Anisotropy of haptic perception, 73 Application areas, 16 Application *(Definition)*, 81, 135 Art, 8

#### **B**

Backlash, 593 Basic equations, piezoelectric, 348 Basic piezoelectric actuator designs, 351 Being, physical, 6 Bending actuator, 312, 363 Bending moment, 453 Bernoulli, 454 Bidirectional, 15 Bimorph, piezoelectric, 360 Block-commutation, 338 Bragg, 476 Braille, 24, 362 Brake, electromagnetic, 380

#### **C**

Calibration, 454

Capacitive actuators, 312, 386 Capacitive principle, 310 Capacitive sensors, 500 Capstan drive, 317 Cascade control, 241 Categorized information, 75 Channel, 37 Channel *(Definition)*, 38 Channel function, 40 Charge constant, 349 Classification of haptic systems, 80 Closed-loop admittance controlled, 192 Closed-loop impedance controlled, 192 C 82, material property, 350 Code disc, 494 Coefficient of strain, piezoelectric, 348 Coefficient of tension, piezoelectric, 348 Coefficients, piezoelectric, 348 Comanipulator, 17 Comanipulation System *(Definition)*, 87 Communication, 22 Commutated, electronic, 335 Commutation, 336 Completion time, 597 Compliance (perception of), 65 Component design, 137 Composites, 453 Comprehensive model, 137 Compression, 530 Concepts of interaction, 76 Conductor, 321 Consumer electronics, 21 Contact grasp, 114 Control design, 233 Control of linear drive, 241 Control of teleoperation systems, 244 Control stability, 139

Coriolis Effect, 506

© The Editor(s) (if applicable) and The Author(s) 2023 T. A. Kern et al. (eds.), *Engineering Haptic Devices*, Springer Series on Touch and Haptic Systems, https://doi.org/10.1007/978-3-031-04536-3

Coulomb's law, 385 Coupling factor, piezoelectric, 350 Crosstalk, 590 Curie temperature, 350 Current source, analog, 343 Curvature (perception of), 65 Curves of equal intensity, 69 Customer, experiments with, 154

#### **D**

D/A converter, 340 DC-drive, 335 DC-motor, 311 Definition of application, 135 Design goals for haptic system, 139 Design piezoelectric actuators, 356 Designs of DEA, 397 Devices, 4 Dielectric Elastomer Actuator (DEA), 394 Differential Limen (DL), 54 Differential threshold, 54 Differentiation of signals, 503 Digital to analog converter, 340 Direct current magnetic field, 326 Disturbance compensation, 238 Dots-per-inch, 493 DPI, 493 Driver electronics, electrodynamic, 339 Dynamic range, 590

#### **E**

Eccentric Rotating Mass motor (ERM), 342 EC-drive, 335 EC-motor, 311 Effect, piezoelectric, 346 Elastic constant, 348 Elasto-mechanics, 451, 452 Electrical time constant, 332 Electric field, 384 Electric motor, 311 Electrochemical principle, 310 Electrodynamic principle, 310 Electromagnetic actuators, 372 Electromagnetic brake, 380 Electromagnetic principle, 310 Electromechanical network, 359 Electronic-commutated, 335 Electro-Rheological Fluid (ERF), 401 Electrostatic actuators, 384 Energy consumption, 595 Energy density, magnetic, 328

Energy, magnetic, 372 Equations traveling wave motor, 353 Errors, 597 Errors of haptic systems, 127 Ethernet, 532 Euler angle, 146 Evaluation, 138, 588 Evaluation criteria, 159 Event, 162 Event-based haptics, 94, 527 Evolution, 6 Exciter, 311 Experiments, 154 Exploratory procedures, 76 External effects on perception, 57 External supply rate, 228 Extreme dead times, 529

#### **F**

Fabry-Pérot, 474 Fechner's law, 56 Feedforward, 239 Fiber bragg grating, 475 Field driven actuator, 384 Field plates, 497 Field response, 332 Field strength, magnetic, 323 Filling-factor, 322 FireWire, 531 Fitts' Law, 601 Fixed angle, 147 Flow mode, 403 Flux, 323 Flux density, magnetic, 323 Flux, magnetic, 323 Foil sensor, 464, 468 Fooling the sense of touch, 73 Forced choice paradigm, 53 Force-feedback device, 85 Force, magnetomotive, 323 Force sensing resistor, 463 Formation of the sense of touch, 6 Frequency response, 591 Function of kinaesthetic receptors, 43

#### **G**

Gage factor, 455, 456, 482 Gears, 315 Gearwheel, 318 General design guidelines, 156 Gesture, 78


Golgi tendon organ, 43 Grasp, 113 Gray scale sensor, 495 Grip, 113 Guess rate, 48

#### **H**

Hall sensors, 498 Hammerstein-model, 207 Handling force, 598 Haptic Assistive System *(Definition)*, 83 Haptic compression, 96 Haptic controller *(Definition)*, 88 Haptic Display *(Definition)*, 82 Haptic icons, 24, 75 Haptic illusion, 73 Haptic Interface *(Definition)*, 83 Haptic loop, 174 Haptic quality, 126, 139 Haptics *(Definition)*, 4 Haptic System Control *(Definition)*, 88 Haptic Systems *(Definition)*, 80 Haptic transparency, 139, 245 Hapticon, 24 Haptics in cockpits, 10 Hardware in the loop, 533 H-bridge, 341 HIL, 533 Histology, 37 Human movement capabilities, 79 Human-machine-interface, 10 Hydraulic, 312 HyperBraille, 363 Hysteresis, 590

#### **I**

Icon, 24 IEEE 1394, 531 Impedance controlled, closed loop, 195 Impedance controlled, open-loop, 193 Impedance coupling, 111, 113 Impedance measurement, 115 Impedance-Type System *(Definition)*, 84 Impulse response, 591 IMU, 505 Induction, 331 Industrial design, 20 Influencing factors, 57 Information display, 20 Innervation density of mechanoreceptors, 39 Input strictly passive, 228

Integral criterion, 237 Integration of signals, 503 Intensity, 471 Interaction, 12 Interaction analysis, 135, 157 Interaction concepts, 76 Interaction with haptic systems, 80 Iron-less rotor, 333

## **J**

Just Noticeable Difference (JND), 54 Just Tolerable Difference (JTD), 54

#### **K**

Kinaesthetic *(Definition)*, 14 Kinaesthetic receptors, 43 Kinematic structure description, 146 Kinesthetic, 450

#### **L**

Lapse rate, 48 Latency, 595 Linearity of haptic perception, 72 Linear Resonant Actuators (LRA), 342, 344 Linear State Space, 205 Local haptic model, 527 Longitudinal actuator, 349 Longitudinal effect, magnetic, 374 Longitudinal effect, piezoelectric, 348 Lorentz-force, 318 Lossless, 228

#### **M**

Magnetic circuits, 323, 375 Magnetic cross section, 376 Magnetic dependent resistors, 497 Magnetic energy, 372, 377 Magnetic field strength, 323 Magnetic field, direct current, 326 Magnetic flux, 323 Magnetic flux density, 323, 325 Magnetic resistance, 323 Magneto-rheological-fluid, 405 Magnetomotive force, 323 Magnetorheological principle, 310 Manipulator *(Definition)*, 86 Masking, 72 Material properties, 65 Material properties, piezoelectric, 350 Materials, piezoelectric, 350

Measurement of mechanical impedance, 115 Measuring conditions, 591 Mechanical commutation, 335 Mechanical impedance, 58 Mechanically commutated, 337 Mechanoreceptor *(Definition)*, 38 Mechanoreceptor function, 40 Mechatronic design, 133 Medical diagnosis, 21 Medical robotics, 18 Medical training, 20 Meissner corpuscle, 41 MEMS, 506 Merkel disk, 43 Mice-sensor, 496 Micro-bending sensor, 473 Microneurography, 37 Model based psychometric methods, 50 Motor capabilities, 79 Motor control, 12 Moving coils, electrodynamic, 333 Moving magnet, 335 Multimodal displays, 20 Multiple stimulation, 72

#### **N**

Navigation, 22 Network parameter, 142 Neural processing, 44 Neuromuscular spindle, 43 Nominal load, 442, 483 Novint Falcon, 21 NP-I, 41 NP-II, 41 NP-III, 43 Nuclear Bag fiber, 43 Nuclear Chain fiber, 43 Nyquist criterion, 220

#### **O**

Object exploration, 76 Object properties, 65 Observer based state space control, 239 Open-loop admittance controlled, 192 Open-loop impedance controlled, 192 Optical position sensors, 494 Optimization, 138 Output strictly passive, 228 Overshoot, 235

Pacinian corpuscle, 43 Pain, 14 Pain receptors, 44 Paradigm, 52 Parallel-plate capacitor, 384 Passive haptic interaction, 110 Passive Touch *(Definition)*, 77 Passivity, 15, 253 Passivity, control engineering, 111 PC, 43 Peak force, 590 PEDOT:PSS, 457 Perceived quality, 10 Perception, 11, 13, 35 Perception of transparency, 247 Perceptional Dimensions, 129 Perceptual deadband, 96 Permanent magnets, 325, 378 Permeability, 323 Permeability number, 377 Permittivity, 323 PEST method, 50 Philosophical aspects, 5 Photo-elastic effect, 470 Physiological basis, 36 π-coefficients, 460 Pick and place, 597 PID-Control, 237 Piezoelectric actuators, 311, 346 Piezoelectric actuators, design, 356 Piezoelectric basic equations, 348 Piezoelectric bimorph, 360 Piezoelectric coefficient of strain, 348 Piezoelectric coefficient of tension, 348 Piezoelectric coefficients, 348 Piezoelectric coupling factor, 350 Piezoelectric effect, 346 Piezoelectric equation, 349 Piezoelectric longitudinal effect, 348 Piezoelectric material properties, 350 Piezoelectric materials, 350 Piezoelectric motor, 312 Piezoelectric principle, 310 Piezoelectric sensors, 477 Piezoelectric shear effect, 348 Piezoelectric special designs, 353 Piezoelectric stack, 312 Piezoelectric stepper motors, 354 Piezoelectric transversal effect, 348 Piezoelectrical Bimorph, 363 Piezoelectric shear effect, 354 Plunger type magnet, 311, 380 Pneumatic, 312


Point of Subjective Equality (PSE), 54 Popov inequality, 224 Popov plot, 225 Power grasp, 114 Power law, 56 Power loss, electrodynamic, 320 Precision grasp, 114, 119 Primitives, 11 Progression rule, 49 Properties of tactile channels, 41 Proprioception, 14 Prototyping, 155 Proximity sensors, 500 Pseudo-haptic feedback, 74 method, 50 Psychometric function, 46 Psychometric methods, 49 Psychometric parameters, 54 Psychometric procedures, 48 Psychophysics, 11, 35, 45 Pulse-Width-Modulation (PWM), 340 PVDF, material property, 350 PZT-4, material property, 350 PZT-5a, material property, 350

#### **Q**

Quadrant controllers, 339 Quality of haptic systems, 126 Quality of perception studies, 69 Quartz crystal structure, 347 Quartz, material property, 350 Quaternion, 147

#### **R**

RA-I, 41 RA-II, 43 Rare earth, 325 Receptive field, 38 Reflection light switches, 495 Rehabilitation, 4 Relative resistivity change, 455 Reluctance, 323 Reluctance drives, 379 Reluctance effect, 373 Reluctance effect, magnetic, 374 Remanence flux density, 328 Rendering of surfaces, 21 Requirements, 448, 482 Requirement specification, 135 Resistance, magnetic, 323 Resistive, 455

Resistivity change, 459 Resolution of haptic systems, 127 Resonance-actuator, 311 Resonance principle, 480 Risk analysis, 169 Root locus method, 220, 237 Rotation, 146 Roughness (perception of), 65 Routh-Hurwitz criterion, 220 Ruffini ending, 41

#### **S**

SA-I, 43 SA-II, 41 Safety requirements, 167 Safety standards, 168 Sagnac-Effect, 506 SAW sensors, 480 Scaling, 56 Scattering theory, 255 SEA, 412 Seebeck-Effect, 509 Self-supportive, 334 Semiconductor, 456, 498, 507 Semiconductor strain gage, 458 Senses, 5 Sensitivity, 590 Sensor-less commutation, 337 Sensory cells, 37 Sensory integration, 44 Sensory physiology, 35 Serial coupled actuators, 412 Serial elastic actuator (SEA), 412 Serial viscous actuator, 412 Servo-drive, 335 Shaker, 311 Shape, 162 Shape-memory alloy, 310 Shape-memory wire, 312 Shear effect, piezoelectric, 348, 354 Shear mode, 402 Signal detection theory, 52 Silicon sensors, 461 Simulator, 20 Simulator system, 520 Sinus-commutation, 338 SISO, 141 Slipperiness (perception of), 65 Social aspects, 5 Solution cluster "kinaesthetic", 163 Solution cluster "omni-dimensional", 166 Solution cluster "surface-tactile", 164

Solution cluster "vibro-directional", 166 Solution cluster "vibro-tactile", 165 Solution clusters, 161 Spatial distribution of mechanoreceptors, 39 Special designs, piezoelectric, 353 Squeeze mode, 404 Stability, 139, 253, 595 Staircase method, 49 Standardizing organizations, 168 State feedback control, 239 State space vector, 206 State strictly passive, 228 Stepper motor, 312, 379 Stepper motors, piezoelectric, 354 Step response, 591 Stiffness, 593 Stimulus (pl. stimuli) *(Definition)*, 11 Strain gage, 456 Stress, 453 Stress tensor, 451 Strictly passive, 228 Structural impact detection with vibro-haptic interfaces, 366 Successiveness Limen (SL), 54 Summation, 72 Surface micro-machining, 467 Surface-wave actuators, 312 Surgical robotics, 18 System control structures, 192 System design, 135

## **T**

Tactile, 444, 446, 462, 467, 479 Tactile *(Definition)*, 14 Tactile icon, 24 Tactile properties, 41 Tactile receptors, 36 Tactile sensors, 488 Tactile systems, 398, 406 Tacton, 24 Tangible objects, 8 Task analysis, 158 Task-performance test, 597 Taxonomy of haptics, 12 Taxonomy of psychometric procedures, 48 Technical solution clusters, 161 Teleoperation, 17 Teleoperation Systems *(Definition)*, 87 Telepresence, 17 Temperature measurement, 507 Temperature (perception of), 65 Temperature, monitor, 343

Temperature receptors, 44 Texture, 162, 444 Thermal principle, 310 Thermal sensors, 44 Thermoresistor, 508 Threshold, 54 Threshold values, 59 Time constant, electrical, 332 Time delay, 255 Toolkits for haptic prototyping, 155 Tool-mediated contact, 7 Tool usage, 7 Total reflection, 471 TPTA system, 17 Tracing, 597 Transistor, 462 Transmission chain, 518 Transmission-ratio, 315 Transparency, 245, 595 Transversal actuator, 349 Transversal effect, 373 Transversal effect, electromagnetic, 373 Transversal effect, piezoelectric, 348 Traveling wave motor, 370 Traveling wave motor, equations, 353 Traveling wave motor, linear, 353 Travelling wave, 353 Triangulation, 496 Twisted-String-Actuator (TSA), 317 Two-point threshold, 54

## **U**

Ubi-Pen, 364 Ultrasonic actuator, 311 Ultrasonic sensors, 499 Upper cut-off frequency, 484 Usability, 139 User as a measure of quality, 126 User as mechanical load, 109 User *(Definition)*, 81 User phantom, 592

#### **V**

Validation, 139, 588 Verification, 138, 588 Vibrotactile display, 363 Virtual environment, 20 Virtual reality, 4 Viscoelastic material performance, 443 Viscosity (perception of), 65 V-model, 133


Voice-coil-actuator, 311

## **W**

Wave variable, 256 Wave, traveling, 353 Wearables, 506 Weber's law, 54

Wheatstone, 455, 508 Wiener-model, 207 Wire, 321 Workspace, 589

## **Z** Z-width, 593