Hedda Lausberg

# The NEUROGES® Analysis System for Nonverbal Behavior and Gesture

The Complete Research Coding Manual including an Interactive Video Learning Tool and Coding Template

#### **Bibliographic Information published by the Deutsche Nationalbibliothek**

The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available online at http://dnb.d-nb.de.

#### **Library of Congress Cataloging-in-Publication Data**

A CIP catalog record for this book has been applied for at the Library of Congress.

The development of the NEUROGES analysis system was supported by the German Research Association Grants DFG LA 1249/1-1, 1-2, 1-3.

> Cover image: © Jana Bryjovà
> Cover design: Olaf Glöckler, Atelier Platen, Friedberg

> E‐ISBN 978-3-631-77852-4 (E‐PDF)
> E‐ISBN 978-3-631-77853-1 (EPUB)
> E‐ISBN 978-3-631-77854-8 (MOBI)
> DOI 10.3726/b15103

> © 2019 Hedda Lausberg

Peter Lang – Berlin ∙ Bern ∙ Bruxelles ∙ New York ∙ Oxford ∙ Warszawa ∙ Wien

Open Access: This work is licensed under a Creative Commons Attribution CC-BY 4.0 license. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/

This publication has been peer reviewed.

www.peterlang.com

*For Elisabeth and Dietrich*

## **Preface**

Nonverbal interaction and gesture researchers can proudly look back on a long and multi-disciplinary history of their research field. Currently, research on nonverbal interaction and gesture is conducted in, among others, medicine, psychology, neuroscience, linguistics, anthropology, sociology, and computer science. Given this long history, documented since ancient Greece, and the broad spectrum of research areas, a substantial body of knowledge on body movement, gesture, and nonverbal interaction is available today.

However, while the exploration of the field by many different scientific disciplines extends the understanding of nonverbal interaction and gesture, the spread across many different scientific disciplines also bears the risk of isolated areas of specialized knowledge caused by a lack of communication between the disciplines. Unfortunately, this is often the case, and one important reason for the scant exchange between the scientific disciplines is that many different methods are used. Researchers of different disciplines often do not understand each other's terminology, methodology including coding systems, and consequently, findings. This condition constitutes a severe obstacle to comparing and relating the results of different studies and to gradually building up a common inter- or even trans-disciplinary corpus of knowledge on body movement, gesture, and nonverbal interaction. Furthermore, many studies in the field still lack a reliable methodology and an operationalization of the behavioral units and gesture types submitted to investigation. As units and types are often not clearly defined, it remains vague what kind of phenomenon has actually been investigated. Different researchers might use the same term, e.g. "symbolic" gesture, but actually refer to different phenomena, or vice versa – a fact that constitutes a severe obstacle to developing a reliable body of knowledge on body movement, gesture, and nonverbal interaction.

The development of the NEUROGES® system, made possible by long-standing funding from the German Research Association from 1999–2017 (grants LA 1249/1-1, 1-2, 1-3, 2-1), responds to this situation in research. On the one hand, NEUROGES® has become an objective, reliable, and user-friendly analysis system for body movements and gestures. A recent review of 18 empirical studies using NEUROGES® in combination with ELAN demonstrated good objectivity and reliability of the system. On the other hand, NEUROGES® serves as a basic analysis system that can be used across scientific disciplines and thereby facilitates interdisciplinary exchange. Recent studies using NEUROGES® cover a broad range of different scientific disciplines, e.g. linguistics/psycholinguistics, psychology, neuropsychology, medicine, evolutionary anthropology, and criminology. Thus far, altogether more than 500 individuals from different cultures of five continents have been investigated, Germans, British, US Americans, francophone and anglophone Canadians, Swiss, Koreans, Kenyans, and Papua New Guineans, including healthy adults and children as well as individuals with brain damage and with mental illness. Furthermore, NEUROGES® has been used for studies on non-human primates. Finally, NEUROGES® can be combined with other, more specialized coding systems such as the Linguistic Annotation System for Gestures (LASG) or the Movement Psychodiagnostic Inventory (MPI).

The present book follows up on the first book "Understanding Body Movement", published in 2013 by Peter Lang Academic Research, in which the theoretical background and design of the NEUROGES® system are presented. This second book contains the complete coding manual for the application of the system in research. It further includes an interactive video learning tool that illustrates the assessment algorithm and provides video examples of all NEUROGES®-registered hand movement and gesture types. In addition, four training videos are provided that enable readers to train the application of NEUROGES® and to compare their analyses with a correct solution. Thus, the present book provides all materials needed for an effective self-study of the NEUROGES® system and enables a reliable application of the system in research.

I would like to thank the German Research Association for the long-standing funding of this project and the Peter Lang Company, notably Benjamin Kloss, for publishing the two books on the NEUROGES® system. Furthermore, over the past two decades, discussions with many colleagues from different scientific disciplines have influenced the development of the NEUROGES® system and have made it a truly interdisciplinary tool, notably with Martha Davis, Robyn Flaum Cruz, Miriam Roskild Berger, Norbert Freedman, Georg Goldenberg, Alain Ptito, Eran Zaidel, Joachim Hermsdörfer, Cornelia Müller, Sotaro Kita, Irene Mittelberg, Ellen Fricke, Katja Liebal, Mandana Seyfeddinipur, Janka Bryjovà, Marianne Eberhard-Kaechele, Peter Joraschky, Angela v. Arnim, Harald Skomroch, Ingo Helmich, Robert Rein, Melanie Seiler, Konrad Juszczyk, Oliver Schreer, Ippokrates Konstantinidis, Katharina Reinecke, Niklas Neumann and the many students of the NEUROGES® seminars that have been held since 2007. In particular, the scientific exchange with Han Slöetjes, the main software developer of the ELAN video and audio annotation tool, has been inspiring for the elaboration of the NEUROGES® algorithm and resulted in a highly efficient combination of NEUROGES® and ELAN. Janka Bryjovà has made major contributions to the interactive learning tool, not only by collecting numerous video examples of participants willing to give their consent to publication but also with regard to the aesthetic design of the learning tool. Corinna Klabunde has formatted the manuscript and designed, and over the years repeatedly updated, the figures of the algorithms in the course of the NEUROGES® development.

Finally, I am thinking of my husband Lothar Stemwedel whose insights were always an inspiration. I miss him deeply.

> Cologne, August 2018
> Hedda Lausberg

In order to receive your individual password for the login area on the NEUROGES® website www.neuroges-bast.info, please send the code of this book to c.klabunde@dshs-koeln.de and h.lausberg@dshs-koeln.de.

Code: x12ynd14jeieng

### **I. Introduction to the NEUROGES® analysis system**

### **II. The Kinesic Module (Module I)**

### **III. The Laterality Module (Module II)**

### **IV. The Gesture and Action Module (Module III)**

### **V. Supplementary categories**

## **I. Introduction to the NEUROGES® analysis system**

## **1 How to use the system in research**

The NEUROGES® analysis system is an objective and reliable analysis tool for kinesic<sup>1</sup> behavior and gesture. While it is typically used for the analysis of kinesic behavior and gesture in interaction, it can also be applied to examine individuals' nonverbal behavior in non-interactive situations, such as thinking during a math task, waiting in a waiting room, etc.

NEUROGES® analyzes kinesic behavior and gesture based on the movement form of the body movements. The movement criteria on the basis of which the movement behavior is segmented into units and classified with values are valid with regard to neural, cognitive, and emotional processes. This implies that the values are also sensitive to more complex phenomena such as spatial cognition, language and other neuropsychological functions, or stress, alterations in mental states and mental disease, but also gender and culture, and interaction. The system has been tested with the NEUROGES® archive, a video corpus of more than 500 individuals from different cultures of five continents, including healthy adults and children as well as individuals with brain damage and with mental illness, and furthermore, of non-human primates.

The aims, the theoretical background, and the development of the system are described in detail in the first book "Understanding Body Movement", published in 2013 by Peter Lang Academic Research (hereafter referred to as: book I). Developments of the system since 2013 are reported in detail in the article "The revised NEUROGES®-ELAN system – An objective and reliable interdisciplinary analysis tool for nonverbal behavior and gesture" by Lausberg & Slöetjes, published in Behavior Research Methods, 48(3), 973–993, 2016. NEUROGES® is registered as a European trademark by the European Union Intellectual Property Office.

The NEUROGES® system is characterized by a user-friendly and flexible structure that allows the researcher to tailor the analysis according to her/his research requirements. The flexibility is achieved by the combination of a vertical and a horizontal dimension of analysis (Fig. 1) and by selectivity of the part(s) of the body submitted to the analysis.

The vertical dimension of the analysis is composed of seven assessment steps that build up on each other. The researcher is guided through an algorithmic

<sup>1</sup> The term kinesic is adopted from Birdwhistell (1952) to specify the aspect of nonverbal behavior that is subject of the NEUROGES® analysis.

**Fig. 1:** The analysis algorithm of the NEUROGES® system

assessment, in the course of which the ongoing stream of body movement is segmented into more and more fine-grained movement units, which represent conceptually more and more complex phenomena.

However, each assessment step not only serves to achieve a more fine-grained segmentation and classification; it also constitutes a conceptually valid category per se, e.g. Step 3 constitutes the Focus category. Within each category, the category values are systematically organized, e.g. in the Focus category the values *within body, on body, on attached object*, *on separate object, on person, in space* (see Fig. 1, from left to right). This horizontal dimension of the NEUROGES® analysis reflects its developmental approach.

The seven assessment steps and seven main categories, respectively, are grouped into three modules with different foci of analysis: Module I, comprising Steps 1–3, the Kinesic analysis; Module II, comprising Steps 4–5, the Laterality analysis; and Module III, comprising Steps 6–7, the Gesture and Action analysis. Thus, the system consists of several categories and modules, the assessment of which altogether has strong synergetic effects (in the vertical dimension), but which can also be used independently of each other (because of the horizontal dimension). Therefore, the NEUROGES® system can be employed in a flexible manner according to the researcher's objectives.

Furthermore, in line with the researcher's requirements, the body movements are grouped according to the anatomy of the body into the four parts hand/arm/shoulder, foot/leg, head, and trunk movements. As these parts can be analyzed altogether or independently of each other, even more flexibility in the use of NEUROGES® is achieved.

To summarize, the design of the NEUROGES® system bears the advantage that the system can be used in a flexible manner according to the researcher's requirements. On the one hand, the vertical dimension provides a highly operationalized, systematic, and comprehensive analysis; on the other hand, the horizontal dimension entails that already the assessment of a single category offers a conceptually valid analysis. Furthermore, the researcher can analyze all parts of the body or just select one. As the complete analysis comprises three modules that altogether comprise eight main categories, each of which is meaningful per se, the complete analysis can be broken up into the algorithmic analysis of one or more modules and the analysis of one or more categories. Thus, depending on her/his research question the researcher decides if (s)he wants to conduct the complete algorithmic analysis, or apply only one module, one main category, or one supplementary category (see below). Aims, advantages, and limitations of the three different approaches are described below.

### **1.1 Complete algorithmic analysis**

The complete analysis is recommended for researchers who aim at a comprehensive exploration of kinesic behavior and gesture. The complete analysis is particularly suitable for exploratory research, for pattern detection, for research questions requiring a high quantitative reliability, and for investigations including all parts of the body.

First, the complete analysis is ideal for exploratory research as it provides the full analysis of kinesic behavior. It registers the classical types of kinesic behavior, such as gestures, actions, head motions, trunk shifts, self-touches, foot movements, rest positions, etc. In addition, it sheds light on certain dimensions of body movement across these movement types that reflect mental processes: the combination of motion, position, and muscle contraction shows the extent of motor activity (Activation category); the trajectory and its structure reflect different types of mental states, such as dysregulated unproductive versus regulated productive states (Structure category); and the locus where the hand/foot acts (on) reveals the locus of sensory stimulation (Focus category). The detailed investigation of the laterality of limb movements provides insight into the neural basis of movement production, i.e., hemispheric specialization, interhemispheric cooperation, and complexity of neural control (Contact and Formal Relation categories). Finally, the complete analysis includes the elaborate analysis of the function and meaning of gestures and actions (Function and Type categories) as well as the analysis of rests and poses. Thus, as it is so comprehensive, the complete analysis is particularly suitable for exploratory research.

Second, as the complete ongoing stream of kinesic behavior is analyzed and thereby all movements and rests/poses are registered and classified, the output of the complete analysis provides the perfect basis for the detection of intraindividual and intradyadic kinesic and gestural patterns (see book I, 2.3).

Third, as the whole stream of kinesic behavior is registered and analyzed, the complete analysis offers a high quantitative precision concerning information about the frequency (number / minute), duration (seconds / unit), and the overall proportion of time (seconds / minute) of the different movement types (see book I, 2.4). Notably, as all kinesic phenomena are registered, even movements that are hard to classify at first glance are obligatorily submitted to the analysis, which challenges the researcher to deal with difficult kinesic phenomena. This further implies that the registered frequency of certain kinesic and gesture types is more accurate (Lausberg & Slöetjes, 2016, Introduction).

Fourth, all four parts of the body can be submitted to the complete analysis with the exception of the laterality assessment (Contact and Formal Relation categories) that only applies to limb movements.

Procedurally, the complete analysis is based on an assessment algorithm that includes seven assessment steps (Fig. 1). The algorithm enables the segmentation of the ongoing stream of body movement into more and more fine-grained units.

Practically, at each assessment step, the units of the previous step are adopted, i.e., they become the 'to-be-coded' units for the next assessment step, in which they are reassessed with new movement criteria. If, according to the reassessment, the kinesic behavior changes within the adopted unit, the unit is segmented into subunits. These subunits then constitute the 'to-be-coded' units for that assessment step. As this principle of (sub-)unit generation applies to all coding steps, the multi-step evaluation process results in more and more fine-grained kinesic units that represent more and more complex phenomena. Thus, at Steps 6 and 7, when complex decisions concerning the function and meaning of a gesture or action are required, fine behavioral units are available that are based on a highly operationalized step-wise segmentation of behavior. As the set of movement criteria (e.g. presence of physical contact + quality of physical contact + object/subject of dynamic contact) that is employed in an assessment step represents a specific aspect of neural, cognitive, or emotional processes (e.g. locus of sensory stimulation and attention), each assessment step constitutes a conceptually valid category. According to the conceptual main theme, the eight categories are grouped into three modules.
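The (sub-)unit generation principle described above can be sketched in code. The following is a minimal illustration, not part of the NEUROGES® system itself: the `Unit` class, the `reassess` function, and the frame-wise `classify` callback are hypothetical names introduced here, with `classify` standing in for the human coder's judgment at one assessment step.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Unit:
    start: float   # unit onset in seconds
    end: float     # unit offset in seconds
    value: str     # value assigned at the current assessment step

def reassess(units: List[Unit], classify: Callable[[float], str],
             resolution: float = 0.04) -> List[Unit]:
    """Re-code adopted units at a fixed time resolution; emit a subunit
    whenever the newly assessed value changes within an adopted unit."""
    subunits: List[Unit] = []
    for u in units:
        t = u.start
        current = classify(t)
        seg_start = t
        while t + resolution < u.end:
            t += resolution
            value = classify(t)
            if value != current:          # criterion changes -> split here
                subunits.append(Unit(seg_start, t, current))
                seg_start, current = t, value
        subunits.append(Unit(seg_start, u.end, current))
    return subunits
```

Applied step after step, with a new `classify` criterion at each step, such a procedure yields the increasingly fine-grained subunits that the algorithm describes, e.g. one Activation *movement* unit splitting into a *phasic* and a *repetitive* Structure subunit.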

The Activation category (Fig. 1, Step 1) segments the ongoing stream of movement behavior into *movement* units and *rest/pose* units. The *movement* units are further assessed with the Structure category (Step 2a) regarding the movement trajectory, and classified with five Structure values (*irregular, repetitive, phasic, shift, aborted*). The *rest/pose* units are further classified (Step 2b) according to muscle relaxation/contraction with two R/PStructure values (*rest, pose*). In the Focus category (Step 3), *phasic, repetitive,* and *irregular* Structure units are further assessed and classified with six Focus values (*within body, on body, on attached object, on separate object, on person, in space*). The kinesic assessment in Module I (Kinesic Module) is finalized with the concatenation of the Structure values and the Focus values of the units (e.g. *phasic in space*). The StructureFocus units represent classical kinesic behavior types at a highly operationalized level.
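The concatenation that finalizes Module I can be pictured as a simple label join. The sketch below is a hypothetical illustration (the function name is invented; the value strings are the NEUROGES® values quoted above, and only *irregular*, *repetitive*, and *phasic* units receive a Focus value):

```python
from typing import Optional

# Structure values that are further assessed in the Focus category (Step 3)
FOCUS_ASSESSED = {"irregular", "repetitive", "phasic"}

def structure_focus(structure: str, focus: Optional[str] = None) -> str:
    """Concatenate a unit's Structure value with its Focus value."""
    if structure in FOCUS_ASSESSED:
        if focus is None:
            raise ValueError(f"'{structure}' units require a Focus value")
        return f"{structure} {focus}"
    return structure  # shift and aborted units keep their Structure value

print(structure_focus("phasic", "in space"))  # -> phasic in space
```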

As a preparatory step for Steps 4 and 5 (Module II: Laterality Module), unilateral and bilateral units are generated based on the StructureFocus units and the R/PStructure units of the limbs. In the Contact category, the bilateral StructureFocus units are analyzed regarding the physical contact between the right and left limbs and classified with three Contact values (*act on each other, act as a unit, act apart*). The bilateral RestPose units are classified with three R/P Contact values (*crossed, closed, open*). The subsequent Formal Relation category (Step 5) concentrates on conceptual bilateral limb movements, which are defined by a *phasic* or *repetitive* structure (the complex phase is the realization of an idea or a concept). It assesses symmetry and dominance in bilateral limb movements and classifies the units with four Formal Relation values (*right hand dominance, left hand dominance, symmetrical,* and *asymmetrical*). The Contact and Formal Relation assessments can be concatenated with the StructureFocus units and thereby provide highly differentiated information about the laterality of kinesic behavior types. Furthermore, the Step 5 codings determine whether in Step 6 the function is assessed for both hands together, e.g. both hands together pantomime drumming, or whether it is assessed separately for the right hand and for the left hand, e.g. one hand scratches the leg while the other hand points to an external location.
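The preparatory generation of bilateral units can be thought of as a temporal intersection of the right- and left-limb unit tiers. The sketch below is a deliberately simplified, hypothetical rendering (plain `(start, end)` tuples rather than annotated ELAN tiers): time spans where right- and left-limb units overlap are candidate bilateral units, while the remainders stay unilateral.

```python
from typing import List, Tuple

Span = Tuple[float, float]  # (start, end) in seconds

def bilateral_spans(right: List[Span], left: List[Span]) -> List[Span]:
    """Intersect two sorted, non-overlapping lists of unit spans.
    The returned spans are the candidate bilateral units."""
    spans: List[Span] = []
    i = j = 0
    while i < len(right) and j < len(left):
        start = max(right[i][0], left[j][0])
        end = min(right[i][1], left[j][1])
        if start < end:               # the two units co-occur in time
            spans.append((start, end))
        # advance whichever unit ends first
        if right[i][1] <= left[j][1]:
            i += 1
        else:
            j += 1
    return spans
```

For example, a right-hand unit from 0–2 s and 3–5 s against a left-hand unit from 1–4 s yields the bilateral spans 1–2 s and 3–4 s.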

In Steps 6 and 7 (Module III: Gesture and Action Module) the bilateral Formal Relation units, the unilateral limb units (and optionally the *phasic* and *repetitive* head and trunk units) are analyzed with regard to emotional, cognitive, instrumental, and practical functions. In the Function category (Step 6), on the basis of 10 movement criteria, gestures and actions are classified with eleven Function values (*emotion/attitude, emphasis, egocentric deictic, egocentric direction, pantomime, form presentation, spatial relation presentation, motion quality presentation, emblem/social convention, object-oriented action, subject-oriented action*). In the Type category (Step 7), most of the Function values are further specified with Type values.

Supplementary categories are provided for the examination of specific aspects of kinesic behavior: Technique of Presentation, Efforts, Temporal Structure, Target Hemi-Space, Execution Hemi-Space, Referent, and Trigger/Motive. The supplementary categories are assessed for specific values of the main categories. For some main values several supplementary categories may be applied, e.g. for the Function value *motion presentation* the supplementary categories Technique of Presentation, Target Hemi-Space, Execution Hemi-Space, and Referent. Thus, the decision to analyze a specific supplementary category depends on the research question.

While among the three approaches (complete, module, category) the complete analysis is the most time-consuming option in terms of total time invested, it is nevertheless the most efficient option with regard to the ratio of invested time to gained output. The complete analysis is characterized by strong synergy effects, as each assessment step profits from the assessments of the previous steps. As an example, in the Function category assessment the already existing assessments of the Activation, Structure, Focus, Contact, and Formal Relation categories are re-used.

### **1.2 Algorithmic analysis of one module**

The analysis of one module is recommended for researchers who want to focus either on kinesic behavior, laterality of limb movements, or gestures and actions. Within a module the algorithmic assessment approach is preserved.

The Kinesic module (Module I) serves to analyze all body movements that are displayed in a certain context. According to defined movement criteria, the ongoing stream of kinesic behavior is segmented into units and these units are then classified. Thus, any movement such as shifts, actions, gestures, self-touches, etc. is registered. Precise analyses of the frequency, duration, and proportion of time spent with certain kinesic behaviors are obtained, and kinesic patterns can be detected. The kinesic analysis can be applied to hand/arm/shoulder, foot/leg, head, and trunk movements. The right and left limbs are assessed separately. The Kinesic module consists of three categories: Activation, Structure, and Focus (see 1.1). The Kinesic module analysis is compatible with automated techniques for movement recognition (see book I).

The Laterality module (Module II) provides a differentiated analysis of the laterality of limb movements and rests/poses (hand/arm/shoulder and foot/leg). The laterality analysis is relevant for research on the neuropsychology of movement and rest/pose production. It allows the researcher to determine which cerebral hemisphere predominantly contributes to the production of a movement type, e.g. if a specific gesture type is predominantly generated in the right or left hemisphere. Furthermore, it assesses interhemispheric cooperation and the complexity of neural control (see book I). In the Laterality module, all bilateral limb movements and rests/poses are analyzed. The Laterality module consists of two categories: Contact and Formal Relation.

The Gesture and Action module (Module III) includes a classical gesture analysis, mostly in the tradition of Efron (1941) (see book I). The name of the NEUROGES® system historically originates from this module: NEUROpsychological GESture analysis system. As compared to other gesture coding systems, the NEUROGES® Gesture and Action module provides a very fine-grained and highly operationalized classification of gesture types. Furthermore, it includes actions as well as motor correlates of emotional experience. The Gesture and Action module is applied to all conceptual body movements. Most researchers have used the Gesture and Action module for the analysis of hand/arm/shoulder movements. However, gestures and actions of the feet and the head can likewise be submitted to Module III analysis. The Gesture and Action module consists of two categories: Function and Type.

### **1.3 The analysis of one category**

NEUROGES® consists of seven main categories (Activation, Structure, Focus, Contact, Formal Relation, Function, Type) and seven supplementary categories (Techniques of Presentation, Efforts, Temporal Structure, Target Location, Execution Hemi-Space, Referent, Trigger/Motive). The theoretical and empirical background of each main category is described in detail in book I and in the literature cited below in section 1.6. The researcher chooses the category that is most suitable for her/his research question.

In terms of total time spent on the analysis, the analysis of one category is the best option as it focuses on one category only. However, the limitation of this approach is that the ad hoc identification of the to-be-coded units is less reliable than the generation of to-be-coded units based on the algorithmic procedure. The ad hoc identification implies that a specific movement or gesture type, which is the focus of research interest, is picked out of the stream of kinesic behavior. For instance, all movements that match the prototype of a pointing gesture are selected. While this methodological approach is efficient with regard to the expenditure of time, it has the limitation that movements that do not at first glance perfectly match the target prototype, e.g. a small pointing gesture with the thumb, are neglected. Ambiguous forms of a movement type might, however, provide valuable information about the movement type itself and about the associated cognitive, emotional, and interactive processes. If the ongoing stream of behavior is submitted to the analysis, the researcher is forced to thoroughly consider each movement and to attribute a value to it. Thereby, the precision of the analysis and the gain in knowledge are substantially improved. Furthermore, hand movement and gesture analyses that are based on the segmentation and classification of an ongoing stream of movement behavior rather than on an a priori selection provide a more reliable basis for quantitative analyses, since the variations of the target movement type are also considered.

### **1.4 Objectivity of the NEUROGES® values**

The NEUROGES® analysis system is objective as its values are defined by highly operationalized movement criteria. Some of these movement criteria can be measured with kinematic methods. The objectivity of the NEUROGES® analysis is further evidenced by the fact that the NEUROGES® value definitions have been successfully used for the development of automatic analysis approaches (HHI NEUROGES® video recognizer and KINEMO).

In this research coding manual, for each category and each value the precise definitions based on movement criteria are given.

### **1.5 Reliability of the NEUROGES® values**

A recent review of 18 empirical studies using NEUROGES® in combination with ELAN for the analysis of hand/arm movements demonstrated good objectivity and reliability of the main categories and values (Lausberg and Slöetjes, 2016).

In this book, for each NEUROGES® value the reliability score is reported. For the binary Activation category, the interrater reliability is assessed with overlap-merge ratio scores (book I, chapter 14) and for all other categories with EasyDIAg (Holle & Rein, 2015; and book I, chapter 15). The means and standard deviations of the reliability scores of the values provide a frame of reference for the researcher to assess the reliability in their own studies (see also book I, chapters 12 and 16).

### **1.6 Validity of the NEUROGES® values**

With regard to validity, recent studies using NEUROGES® have covered a broad range of different scientific disciplines, e.g. neurology, linguistics/psycholinguistics, psychology, neuropsychology, psychosomatic medicine, evolutionary anthropology, primatology, and criminology. Thus far, altogether more than 500 individuals from different cultures of five continents have been investigated, Germans, British, US Americans, francophone and anglophone Canadians, Swiss, Koreans, Kenyans, and Papua New Guineans, including healthy adults and children as well as individuals with brain damage and with mental disorders. Further, NEUROGES® has been used for studies on non-human primates. The multidisciplinary use of the system has demonstrated its applicability to numerous research questions.

For example, while NEUROGES® values register universal behavioral phenomena, i.e., all NEUROGES® values were found to occur in all cultures, cultures differ in the frequency of the display of certain values (Skomroch et al., 2013; Kim, 2016; Kim & Lausberg, 2018a; Kim & Lausberg, 2018b). Furthermore, NEUROGES® evidences gender differences in nonverbal behavior (Lausberg et al., 2016; Skomroch et al., 2013). Several studies have demonstrated the sensitivity of the NEUROGES® system to conditions of stress, among others in children and adolescents (e.g. Densing et al., 2017; Bryjovà et al., 2013; Heubach, 2016) and to happy versus sad stimuli (Kim, 2016). NEUROGES® is sensitive to personality traits and mental disorders (Reinecke et al., 2018a; Lausberg et al., 2016; Helmich et al., 2011; Lausberg et al., 2010) as well as to improvement in mental health in psychotherapy (e.g. Reinecke et al., 2018b; Neumann et al., 2017; Kreyenbrink et al., 2017; Dvoretska et al., 2014; Gabor et al., 2014; Kryger, 2010), and it shows the quality of interactive processes (Gabor et al., 2015; Dvoretska et al., 2013; Lausberg, 2011; Dvoretska, 2009).

Furthermore, NEUROGES® differentiates between different locations of brain damage and associated neuropsychological deficits (Hogrefe et al., 2016; Lausberg et al., 2003) and it is sensitive to alterations in kinesic behavior secondary to mild head trauma (Helmich et al., 2018). It is effective for investigating the neuropsychology of gesture and hand movement in relation to spatial cognition and other cognitive processes (Helmich & Lausberg, 2014; Lausberg et al., 2007; Lausberg & Kita, 2003; Lausberg et al., 2003; Lausberg & Kita, 2002; Lausberg et al., 2000), including speech (Helmich et al., 2014; Skomroch et al., 2013; Skomroch & Lausberg, 2013; Lausberg & Kita, 2003), and in relation to intelligence (Dvoretska & Lausberg, 2013; Sassenberg et al., 2011; Sassenberg & van der Meer, 2010; Wartenburger et al., 2010).

Finally, NEUROGES® is suitable for the development of automatic approaches (Juszczyk & Ciecierski, 2016; Postma-Nilsenová et al., 2013; Rein, 2013; Masneri et al., 2010).

Research on the validity of the NEUROGES® categories and values is an ongoing process. The interested researcher will find a permanently updated overview of the publications on the website http://neuroges.neuroges-bast.info/publications.

### **1.7 Occurrence, frequency, and duration of the NEUROGES® values**

Data on the occurrence, the frequency, and the duration of the NEUROGES® values (movement types) are based on the NEUROGES® archive. Six empirical studies with different experimental settings including 191 healthy individuals (99 females, 92 males) from Germany, United States, Canada, Korea, and Papua New Guinea were included in the analyses. The statistical analyses were conducted by Dr. Melanie Seiler and the author.

**Occurrence** refers to how many individuals in the sample show the NEUROGES® value in their nonverbal kinesic behavior. As some NEUROGES® values were not examined in all studies, for some values the sample size is smaller than 191. The percentage of individuals in the sample who display the respective value is reported. For example, only 37 % of the individuals in the sample show the value *on attached object* as part of their kinesic repertoire.

**Frequency** refers to how often per minute a unit with the respective value is displayed. The mean number per minute (M) ± standard deviation (SD) as well as the median are reported. The calculation of the mean frequency is based on all individuals of the sample, including those who do not display the value. For example, the mean frequency of right hand *repetitive* units is 2.28 ± 1.71 units per minute (median 1.95).

**Duration** refers to how many seconds a unit with the respective value lasts. The mean duration in seconds (M) ± standard deviation (SD) as well as the median are reported. As the duration of a behavioral phenomenon can only be determined if the phenomenon is displayed by an individual, the calculation of the mean duration of a value unit is based only on those individuals in the sample who display the value (compare occurrence). For example, the mean duration of left hand *in space* units is 2.81 ± 1.02 seconds per unit (median 2.55).
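The three measures can be sketched in code. The following is a minimal Python illustration (not part of any official NEUROGES® statistics pipeline; the data structure and function names are hypothetical): each individual is represented as a list of unit intervals in seconds, and the key point is that frequency averages over all individuals while duration averages only over displayers.

```python
# Hypothetical sketch of the occurrence/frequency/duration definitions.
# Each individual = list of (start_s, end_s) tuples for units of one value.

def occurrence(individuals):
    """Percentage of individuals who display the value at least once."""
    displayers = sum(1 for units in individuals if units)
    return 100.0 * displayers / len(individuals)

def mean_frequency(individuals, observation_minutes):
    """Mean units per minute, averaged over ALL individuals (incl. zeros)."""
    rates = [len(units) / minutes
             for units, minutes in zip(individuals, observation_minutes)]
    return sum(rates) / len(rates)

def mean_duration(individuals):
    """Mean unit duration in seconds, based only on displayers."""
    durations = [end - start
                 for units in individuals if units
                 for start, end in units]
    return sum(durations) / len(durations)

# Hypothetical sample: three individuals, one of whom shows no unit.
sample = [[(0.0, 2.0), (5.0, 8.0)], [], [(1.0, 3.5)]]
print(occurrence(sample))                        # percentage of displayers
print(mean_frequency(sample, [2.0, 2.0, 2.0]))   # units per minute
print(mean_duration(sample))                     # seconds per unit
```

Note how the empty list (the non-displayer) lowers the mean frequency but is excluded from the mean duration, exactly as described above.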

In this book, data on the occurrence, the frequency, and the duration of the Module I values are provided separately for the right and left hands. Given that Module II only assesses bilateral units, the data on the occurrence, the frequency, and the duration of the Module II values refer to bilateral hand movements. Finally, data on the occurrence, the frequency, and the duration of the Module III values are provided separately for the right hand, the left hand, and bimanual movements. Note that these data are only approximate, since in some studies of the NEUROGES® archive *right hand dominance* and *left hand dominance* units (rarely occurring values based on the Formal Relation assessment) were further coded as right hand units and left hand units while in other studies they were coded as both hand units (see explanation in 8.2.2).

### **1.8 How to use the coding manual**

In this book, each category is dealt with in a separate chapter. This enables researchers who want to use only one category or only one module of the NEUROGES® system to find the desired information directly. Each chapter is self-contained; it is not necessary to read the other chapters (only occasionally is there a reference to a specific section in another chapter).

The coding manuals of the categories are consistently structured as follows:

• **Definition of the category**: An overview of the category with short definitions of the values and their reliability scores is provided.

	- Short definition
	- Definition
	- Meeting the criteria (definition according to the movement criteria)
	- Data on occurrence, frequency, and duration of the value
	- ♦ Examples: In the Modules I and II, each value is illustrated by several examples, separately for the four parts of the body. For space-saving reasons, for the 35 Module III values and the 29 values of the Supplementary categories no written examples are given. However, in the Formal Relation category, which in the complete algorithmic analysis is the last assessment step before Module III, as an illustration, the examples include information about the Module III coding. Note that the interactive video learning tool provides video examples for all values (see 1.9).
	- Differentiate the value from…: This section, which has been most appreciated by many raters, explains how to distinguish the value from other values that share certain movement features and may therefore be mixed up.

<sup>2</sup> Only in the first step, the Activation category, this section is named: "Data submitted to the Activation assessment".

### **1.9 How to use the interactive video learning tool, the NEUROGES® template file for the multimedia annotation ELAN, and the annotated NEUROGES® training videos**

The interactive video learning tool, the NEUROGES® template file, and four annotated NEUROGES® training videos are provided on the NEUROGES® website www.neuroges-bast.info in the password-protected login area. In order to receive your individual password for the login area, please send the code of this book (see page X) to c.klabunde@dshs-koeln.de and h.lausberg@dshs-koeln.de.

The development and application of the interactive video learning tool is described in detail in book I, Chapter 13. The use of the interactive video learning tool is self-explanatory. The user is guided through the NEUROGES® algorithm, in which each NEUROGES® value is illustrated by one or more video examples.

For the practical application of the NEUROGES® analysis it is recommended to use the system together with the multimedia annotation software ELAN. In numerous respects, ELAN provides an ideal software environment for applying the NEUROGES® analysis system. Therefore, since 2006, NEUROGES® has been combined with ELAN, and in their development the NEUROGES® system and the annotation tool ELAN have mutually influenced each other (Lausberg & Slöetjes, 2009; Lausberg & Slöetjes, 2016). For its application with ELAN, the NEUROGES® algorithm has been translated into a ready-to-use NEUROGES®-ELAN template, which is basically an electronic rating scale. Tab. 1 shows how the NEUROGES® categories are represented as tiers in the NEUROGES®-ELAN template.

Tab. 1 demonstrates that the NEUROGES®-ELAN template is very comprehensive. However, if an additional tier is needed for a very specific research question, e.g. the Type analysis of head movements, the respective tier simply has to be duplicated and relabeled.

The use of the NEUROGES®-ELAN template is described in detail in book I, Chapter 9 (Step by Step Instruction in NEUROGES® Coding with ELAN). General guides on how to use ELAN are available on the website https://tla.mpi.nl/tools/tla-tools/elan/.

Furthermore, four annotated NEUROGES® training videos are provided on the NEUROGES® website that enable the researcher to train analyzing videos with NEUROGES®-ELAN. The labels of the videos, i.e., Beginner, Intermediate, Advanced, and Expert, indicate the degree of difficulty. An instruction on how to use the training videos (written by Harald Skomroch) is provided.


**Tab. 1:** Modules, categories, corresponding ELAN tiers, and values in the NEUROGES®- ELAN template






#### **Notes:**

R0 These are the tiers for Rater A

R1 These are the tiers for Rater B

bh These are the tiers for both hands movements

rh These are the tiers for right hand movements

lh These are the tiers for left hand movements

*r/p* These values serve the analysis of *rests* and *poses*

()\* These values are provided only in the template for re-coding of a unit.

*?* In the template, the value *?* is provided for work-in-progress. If you are not sure about the beginning or end of a unit or the value of the unit and you want to get back to this unit later again, mark the unit preliminarily with this value.

\*\*\* Researchers who use the tier Phases have to adapt its label according to the Structure tier that they want to specify, e.g. rh\_Phases\_R0 or lf\_Phases\_R0. The definitions of the phases are given in 4.3.

### **1.10 How to acquire the NEUROGES® certificate**

The NEUROGES® certificate is registered as a European Union trademark by the European Union Intellectual Property Office. The acquisition of the certificate is recommended, as it guarantees a reliable application of the NEUROGES® system in research. Certification is granted after passing the exam. Applicants should thoroughly study the category coding manuals in this book, the interactive video learning tool, and the four NEUROGES® training files before registering for the exam. The exam can be taken for single categories, single modules, or the whole system. For more information about the exam and the certificate see http://neuroges.neuroges-bast.info/training and contact c.klabunde@dshs-koeln.de and h.lausberg@dshs-koeln.de.

## **2 Parts of the body submitted to the analysis**

The NEUROGES® system analyzes the kinesic behavior of the body. For the purpose of the analysis and on anatomical and neuroanatomical grounds, the body is divided into four parts: upper limbs, lower limbs, head, and trunk. The NEUROGES® system allows the researcher to choose whether (s)he wants to code all four parts of the body or to select one, two, or three parts (see Fig. 1).

In a natural context, often more than one part of the body or even the whole body is involved in a movement. For many research questions, however, it is efficient to focus on one part of the body, as the different parts of the body fulfil different functions. As an example, a researcher who is interested in cognitive processes and wants to study gestures might focus on the upper limbs, as gestures are most often executed by the hands and more rarely by the head and feet. In contrast, a researcher who is interested in openness and rapport in a therapy session might extend the analysis to trunk movements (see book I, 2.1.4). Thus, the research question determines whether all parts of the body or just one or two parts are studied. Finally, sometimes the video material only allows for the examination of one or two parts of the body, e.g. if the video only shows the upper body.

### **2.1 Upper limbs**

The upper limbs comprise the fingers, hands, arms, and shoulders. Upper limb movements are anatomically defined by motions in the finger, hand, wrist, elbow, and shoulder articulations including those with the collarbone and shoulder blade relative to the trunk. Thus, not only movements of the mere shoulder joint, i.e., inward and outward rotation, adduction and abduction, anteversion and retroversion, but also lifting and lowering, and moving backward and forward of the shoulder are coded as upper limb movements.

Note that in upper limb movements involving the shoulder joints, the upper arm is moved relative to the trunk, e.g. the arm is lifted, while in the rare case of isolated trunk movements involving the shoulder joint, the trunk is moved relative to the upper arm, e.g. leaning forward in a chair but without changing the positions of the arms on the armrests.

The distal muscles of the upper limbs (fingers, hands) can be controlled only by the motor cortex of the contralateral cerebral hemisphere, i.e., the right hand by the left hemisphere. In contrast, the proximal muscles of an upper limb, i.e., the shoulders and upper arms can be controlled by both cerebral hemispheres via contralateral and ipsilateral pathways.

In upper limb movements, often fingers, hand, arm, and shoulder move together. However, each subpart can also move in isolation, as in a thumb toss or a shoulder shrug. Therefore, simultaneous isolated movements of the finger(s) or hand and of the shoulder may occur, e.g. a thumb toss simultaneously with a shoulder shrug. With regard to the practical coding with NEUROGES®, in this rare case only **one** *movement* unit is tagged.

### **2.2 Lower limbs**

The lower limbs comprise the toes, feet, legs, hips, and buttocks. Lower limb movements are anatomically defined by motions in the toe, foot, ankle, knee and hip articulations relative to the trunk.

Note that in lower limb movements involving the hip joint, the thigh is moved relative to the trunk, e.g. the thigh is lifted, while in trunk movements involving the hip joint, the trunk is moved relative to the thigh, e.g. leaning forward with the trunk when sitting.

As for the upper limbs, the distal muscles, i.e., the toes and feet, can only be controlled by the contralateral cerebral hemisphere, while the proximal muscles of the lower limbs, i.e., the thighs, can be controlled by the motor cortex of both cerebral hemispheres. With regard to the practical coding, the same rules apply as for the upper limbs.

### **2.3 Head**

Head and neck movements are anatomically defined by motions in the atlanto-occipital joints (between the lower surface of the skull and the first vertebra), i.e., nodding, and in the cervical spine relative to the thoracic spine, i.e., turning the head and bending it forward, backward, and sideward.

Head and neck muscles are controlled by both cerebral hemispheres, with a stronger impact of the contralateral hemisphere.

### **2.4 Trunk**

The trunk is the central part of the body from which extend the neck with head and the limbs. It includes the abdomen, the back, the thorax, and the pelvis. Trunk movements are motions in the thoracic, lumbar and sacral spine and in the hip and shoulder joints relative to the limbs. Included are motions of joints within the pelvis and ribcage.


Trunk movements include leaning forward, backward, and sideward; rotating; contracting and expanding; and tilting the pelvis forward, backward, and sideward.

At the level of the spine, the trunk is controlled by different neural pathways than the limbs. While it shares with the lower limbs the innervation of antigravity muscles, it differs especially from the control of the upper limbs, neck and head.

In NEUROGES®, the four parts of the body are coded independently of each other. However, the movements and rests, respectively, of the four parts can be related to each other, which is technically realized by a concatenation procedure. The concatenation procedure delivers complex *movement* units, e.g. simultaneous turning of the head, turning of the trunk, and pointing with the right hand, or *rest/pose* units, e.g. upper limbs and lower limbs crossed in rest. The concatenation procedure for the four parts of the body is described in Chapter 3.

With regard to the upper and lower limbs, in Module I the right and left limbs are coded separately from each other, and in Modules II and III, the movements of the right and left limbs are related to each other.

It has to be noted that thus far, the large majority of researchers have used NEUROGES® to analyze the upper limbs. Only a few publications deal with an analysis of all four parts of the body, i.e., the whole body (Lausberg, 2011). The researchers' preference for the analysis of upper limb movements entails that the reliability of the NEUROGES® system has been established on the basis of empirical studies using NEUROGES® for the analysis of upper limb movements. There is, however, little reason to assume that the reliability of NEUROGES® might differ for the other three parts of the body, as the categories and values are the same for all four parts. Nevertheless, as a precautionary measure, researchers who aim at investigating the head, trunk, or lower limbs are advised to carefully check reliability in their studies.

## **II. The Kinesic Module (Module I)**

The Kinesic Module (Module I) analyses all body movements that are displayed in a certain context. Thus, it serves to register classical types of kinesic behavior, such as gestures, head motions, trunk shifts, self-touches, repetitive foot movements, closed rest positions, etc. (see book I). These types emerge at the end of the Kinesic Module analysis as StructureFocus values, which are fine-grained types of kinesic behavior.

The StructureFocus values are based on a three-step algorithmic analysis. The three categories Activation, Structure, and Focus address dimensions of body movement that are valid with regard to mental processes: the analysis of the combination of motion, position, and muscle contraction provides information about the individual's extent of motor activity (Activation category); the analysis of the trajectory and its structure informs about mental states and processes (Structure category); and the analysis of the locus of limb movement informs about sensory stimulation patterns (and, where applicable, attention processes) (Focus category). The final concatenation of the analyses of these basic components of body movement results in the StructureFocus values.

Technically, according to defined movement criteria, the ongoing stream of body movements is segmented into units and these units are then classified with values. Precise analyses about the frequency, duration, and proportion of time spent with certain kinesic behaviors are obtained and kinesic patterns can be detected. The kinesic analysis can be applied to hand/arm/shoulder, foot/leg, head, and trunk movements. The right and left limbs are assessed separately.

## **3 The Activation category**

### **3.1 Definition of the Activation category**

The Activation category segments the ongoing stream of kinesic behavior into *movement* and *rest/pose* units (see Fig. 2). The Activation category assessment can be applied to all four parts of the body (upper limbs, lower limbs, head, and trunk).

The two values *movement* and *rest/pose* are defined by three criteria: motion vs. stillness, actively held position vs. gravity-aligned/supported position, and muscle contraction vs. relaxation. Tab. 2 shows the short definitions of the two Activation values and the interrater reliability scores (from Lausberg & Slöetjes, 2016).

Thereby, the Activation category measures the extent of an individual's (psycho-)motor activity (see book I, section II). While the analysis of all four parts of the body makes it possible to register the motor activity of the whole body, for many research questions the analysis of the upper limbs alone has been sufficient.

In the clinical domain, the Activation category enables the operationalization of hypo- and hyperactivity (e.g. as listed in the International Classification of Diseases ICD and in the Diagnostic and Statistical Manual of Mental Disorders DSM) by providing data about the frequency (number / minute), the duration (seconds / unit), and the proportion of time (seconds / minute) of *movement* versus *rest/pose* units. With the exception of severe pathological states of hyper- or hypoactivity, in most individuals there is a permanent alternation between *movement* units and *rest/pose* units. ♦ Example: the right hand rests on the arm rest (*rest/pose* unit) ⇒ the hand rises, traces a circle, moves back to the lap (*movement* unit) ⇒ rests on the knee (*rest/pose* unit) ⇒ the knee moves and the hand is passively moved with the knee (continuation of the *rest/pose* unit) ⇒ the hand rises again, forms a fist, moves back to the waist and is put on the hip, while the trunk becomes erect and the other hand is also put on the hip (*movement* unit) ⇒ posing with erect trunk and hands on the hips (*rest/pose* unit) ⇒ the hand rises again, forms a fist, moves back to the lap (*movement* unit) ⇒ rests in the lap (*rest/pose* unit).
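The proportion-of-time measure mentioned above can be illustrated with a small sketch (a hypothetical Python helper, not part of NEUROGES® or ELAN): given the annotations on one Activation tier, it returns the seconds of *movement* per minute of observation.

```python
# Hypothetical sketch: proportion of time (seconds / minute) spent in
# *movement* units on one Activation tier.

def movement_proportion(units, total_s):
    """Seconds of *movement* per minute of observation.
    units: list of (start_s, end_s, value) annotations on one tier."""
    moved = sum(end - start for start, end, value in units
                if value == "movement")
    return 60.0 * moved / total_s

# Hypothetical tier: a 60-second recording alternating movement and rest/pose.
tier = [(0.0, 10.0, "movement"), (10.0, 40.0, "rest/pose"),
        (40.0, 60.0, "movement")]
print(movement_proportion(tier, 60.0))  # 30 s of movement per minute
```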

### **3.2 Data submitted to the Activation assessment**

In the Activation category (Step 1 of Module I), for each part of the body, i.e., the upper limbs, the lower limbs, the head, and the trunk, the ongoing flow of the


**Tab. 2:** Short definitions and reliabilities of the Activation values

\* Interrater reliability as measured with Merge-Overlap (from Lausberg & Slöetjes, 2016)

movement behavior is screened for *movement* versus *rest/pose.* The upper and lower limbs, respectively, are coded separately for the right and left sides.

Procedurally, only the *movement* units need to be tagged, as the *rest/pose* units are defined by the absence of a *movement* unit (for the technical procedure of the *rest/pose* unit generation in NEUROGES®-ELAN see 3.5).

### **3.3 Criteria for the definition of the Activation values**

The Activation values are defined according to the following criteria<sup>3</sup>:


The gravity-aligned position requires no additional muscle activation other than that of the antigravity muscles. Antigravity muscles are muscles, mainly extensors of the knees, hips, and back, that by their tone resist the constant pull of gravity in the maintenance of a normal posture, e.g. as in upright standing or upright sitting. In the supported position the part of the body is supported by a physical entity, e.g. hand/arm resting in the lap or trunk leaning against the back of the chair. Thus, the aligned / supported position requires no additional muscle activation.

<sup>3</sup> Note that the definition of the terms in NEUROGES® may deviate from the general definition of these terms, as they are adapted to the requirements of movement behavior analysis.

In contrast, in an actively held position the part of the body is in a position that requires muscle contraction other than that of the antigravity muscles, e.g. holding the arm or the leg stretched out. The actively held position implies isometric contraction of muscles. An actively held position can occur within a *movement* unit, e.g. a pointing gesture in which the extended arm with the shaped hand is held for a moment before being retracted. The actively held position within a *movement* unit serves emphasis or it indicates a hesitation, dysfluency, or disruption.

In rare cases, the actively held position may constitute a *pose* unit. It often contributes to a whole body pose, and in the case of the limbs, they are typically held in the body near space (near kinesphere, see 8.3). The actively held position in a *pose* is held for a much longer period of time than the actively held position within a *movement*, as a *pose* is a kind of settled position. While it is the theoretical claim of NEUROGES® not to operate with absolute or arbitrary time frames, based on the NEUROGES® archive data it is suggested to consider a *pose* if the actively held position is held for longer than four seconds.
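The suggested four-second heuristic can be expressed as a trivial check (an illustrative helper, not part of the official coding algorithm; the function name and threshold constant are hypothetical):

```python
# Hypothetical helper for the four-second heuristic suggested in the text:
# an actively held position lasting longer than four seconds suggests a
# *pose* rather than a hold within a *movement* unit.

POSE_THRESHOLD_S = 4.0  # suggested in the text, based on archive data

def is_pose_candidate(hold_start_s, hold_end_s, threshold=POSE_THRESHOLD_S):
    """True if the actively held position lasts long enough to suggest a pose."""
    return (hold_end_s - hold_start_s) > threshold

print(is_pose_candidate(10.0, 16.5))  # held 6.5 s -> True
print(is_pose_candidate(10.0, 12.0))  # held 2.0 s -> False
```

The final decision, of course, remains with the rater; the threshold is only a suggestion, not a hard rule.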

**Muscle Contraction vs. Relaxation:** Muscle contraction can be isometric or isotonic. In isometric activation the muscle length does not change during contraction. This results in an actively held constant position of the limb. In contrast, in isotonic activation, the tension remains unchanged but the muscle length changes, resulting in a motion.

In NEUROGES®, contraction is defined as the visible or inferred contraction of muscles other than the antigravity muscles. When analyzing videos a muscle contraction may be visible, i.e., the observer can actually see how specific muscle groups contract. However, in most cases the contraction is not directly visible and the observer infers from the actively held position or the displacement of the part of the body that the muscles must be contracted because the position or the displacement of that part of the body cannot be explained otherwise.

In a *movement* unit, motion is associated with isotonic muscle contraction. Note that the rare case of the combination of motion and muscle relaxation is passive motion, i.e., the part of the body **is** moved. For conceptual reasons, i.e., as the Activation category serves to register an individual's level of motor activity, a passive motion is coded as a *rest/pose* unit.

Note that the normal activity of the antigravity muscles is not coded as *movement*. Thus, researchers who analyze the lower limbs in a standing person do not code the antigravity function of the lower limb as a *movement*. If there is a supporting leg and a free leg, only the activity of the free leg is coded as *movement*. As an example, if the person stands on the left leg and points with the right foot, this is coded as right limb *movement* unit and left limb *rest/pose* unit.

### **3.4 Definitions of the Activation values**

#### **3.4.1** *movement*

#### **Short definition**

#### THE PART OF THE BODY IN ACTIVE MOTION, POTENTIALLY INCLUDING TRANSIENT MOTIONLESS PHASES IN AN ACTIVELY HELD POSITION

#### **Definition**

A *movement* unit is defined by motion and muscle contraction. The combination of motion and muscle contraction (active motion) matches the general definition of a movement.

Short motionless phases with an actively held position may be embedded in the *movement* unit, e.g. pointing with the hand and holding the shaped hand still against gravity for a moment. In other words, a transient motionless phase is part of the *movement* unit if the part of the body is held against gravity and if the motionless phase is framed by motion phases.

At the end of a *movement*, the moving person might need some time to find a comfortable *rest* position or to establish a *pose*. The search for the *rest/pose* position happens during the retraction phase and is therefore coded as part of the *movement* unit, i.e., the *rest/pose* unit only starts when the part of the body has come to stillness and remains in a constant position. Technically, the transition from a *movement* unit to a *rest/pose* unit is coded according to Seyfeddinipur (2006): it is identified by the transition from the last blurred video frame to the first clear video frame. Vice versa, the transition from a *rest/pose* unit to a *movement* unit is identified by the transition from the last clear video frame to the first blurred video frame.

In a natural context, especially for the limbs, often more than one subpart of the limb is involved in a movement. As an example, fingers, hand, arm, and shoulder move together. However, each subpart can also move in isolation, as in a thumb toss or a shoulder shrug. Therefore, rarely, simultaneous isolated movements of the fingers or hand and of the shoulder (or toes and thigh) may occur, e.g. a hand toss and a simultaneous shoulder shrug. No matter whether the two subparts of one limb move together or in isolation, for that limb only **one** *movement* unit is coded, and the assessment in the subsequent NEUROGES® categories refers to the more prominent of the two isolated movements, e.g. the shoulder shrug. In NEUROGES®-ELAN, the researcher can use the tier Notes to note that simultaneous isolated movements within the limb occurred.

#### **Meeting the criteria**


#### **Examples for** *movement* **units**


#### **Differentiate** *movement* **from**…

**no movement in *rest/pose*:** Human observers have individually different thresholds concerning the perception of body movement. Some raters are very sensitive and perceive fine movements of the fingers, while other raters notice only rather big movements. Factors that influence a rater's movement perception are his/her experience in coding movement behavior, his/her own movement experience, and his/her motivation. To improve the raters' sensitivity and the inter-rater agreement, raters should aim at coding any movement they perceive.


#### **3.4.2** *rest/pose*

#### **Short definition**

#### THE PART OF THE BODY RESTS OR POSES

#### **Definition**

A *rest/pose* unit is primarily defined by stillness and a gravity-aligned/supported position of the part of the body.

Only in rare cases, there is a long actively held position (subtype: *pose*) or passive motion (subtype: *rest*).

#### **Meeting the criteria**


### **Examples for** *rest/pose* **units**


### **Differentiate** *rest/pose* **units from**…


#### *Rest/pose* **positions**

Researchers who analyze more than one part of the body are able to identify *rest/pose* positions. A *rest/pose* position is defined as a specific static arrangement of two, three, or all four parts of the body, e.g. sitting comfortably with crossed legs in a chair with the arms resting on the arm rests. Thus, the part of the body does not *rest/pose* in isolation but rests together with other parts of the body, and together they form a *rest/pose* position.

Since *rest/pose* positions might contain *rests* of some parts of the body and *poses* of others, e.g. the foot flexed but the rest of the body relaxed, the two subtypes *rest* (muscle relaxation) and *pose* (isometric muscle contraction) are not separated in the Activation category analysis. Technically, in NEUROGES®-ELAN, this analysis serves as the basis for the concatenation of simultaneous *rest/pose* units of the four parts of the body in order to generate *rest/pose* positions (see below 3.5). The differentiation between *rest* and *pose* is conducted in the Structure category assessment.

### **3.5 Procedure for Step 1 / Module I in NEUROGES® –ELAN**

It is strongly recommended to use the NEUROGES® analysis system in combination with the annotation software ELAN: https://tla.mpi.nl/tools/tla-tools/elan/. Different versions of user guides are available on the ELAN website, e.g. a practical overview in the "How-to Guide". A step-by-step instruction that is specifically tailored to the use of ELAN in combination with NEUROGES® is published in book I, section III.

Download the NEUROGES® template from the login area on the NEUROGES® website www.neuroges-bast.info (in order to receive your individual password for the login area on the NEUROGES® website, please send the code of this book on page X to c.klabunde@dshs-koeln.de and h.lausberg@dshs-koeln.de). Open the template in ELAN. Note that in this book, instructions for ELAN refer to version 5.0.

If you code the NEUROGES® training eafs, you do not need to download the template; instead, open the eaf files directly in ELAN.

To code the Activation category with the NEUROGES® -template proceed as follows:

First, attribute your initials and the identification of the videotaped participant whom you are going to analyze to the Activation tiers in the template:

Go to the function Tier > Change Tier Attributes:

Click on the tier rh\_Activation\_R0 (Rater 0).

In the field Tier Name, change the template initials R0 into your initials (for instance: RX)

In the field Participant enter the identification of the videotaped person whose behavior you are going to analyze.

In the field Annotator enter your name.

Proceed analogously for the lh\_Activation tier.

(Go to the function Tier > Change Tier Attributes:

Click on the tier lh\_Activation\_R0.

In the field Tier Name, change the template initials R0 into your initials (hereafter RX)

In the field Participant enter the identification of the videotaped person whose behavior you are going to code.

In the field Annotator enter your name.)

Then start coding. In the template, the following values are provided for the Activation category:

*movement*

*rest/pose*

*?*

On the tier rh\_Activation\_RX tag all right hand *movement* units and mark them with the value *movement.*

Once you have finished tagging all right hand movements, on the tier lh\_Activation\_RX tag all left hand *movement* units and mark them with the value *movement.*

The value *?* is provided for work-in-progress. If you are not sure about the beginning or end of a unit or the value of the unit and you want to go back to this unit later again, mark the unit preliminarily with the value *?*.

Technically, there is no need to tag and label the *rest/pose* units, as they can be generated automatically by the Create Annotations from Gaps function:

Apply the function Tier.

Create Annotations from Gaps.

Select Tier: rh\_Activation\_RX.

Create annotations on the same tier.

Value for the new annotations Specific value: Enter *rest/pose.*

OK.

Select Tier: lh\_Activation\_RX.

Create annotations on the same tier.

Value for the new annotations Specific value: Enter *rest/pose.*

OK.

**Note:** If you want to conduct the R/P Contact assessment for *rest/pose* units (or for *rest* and *pose* units, as coded in Step 2) later on, you have to save the *rest/pose* units on a separate tier, i.e., separate from the *movement* units:

Apply the function Tier.

Create Annotations from Gaps.

Select Tier: rh\_Activation\_RX.

Create annotations on a new tier.

Enter rh\_Rest/Pose\_RX.

etc.
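What the Create Annotations from Gaps function computes can be illustrated with a short sketch (plain Python, not ELAN code; the function name and data format are hypothetical): the *rest/pose* units are simply the complement of the tagged *movement* units within the observation interval.

```python
# Hypothetical sketch of the gap computation: given the tagged *movement*
# units on a tier, the *rest/pose* units are the gaps between them within
# the observation interval [t_start, t_end].

def annotations_from_gaps(movement_units, t_start, t_end):
    """Return the gap intervals (rest/pose units) as (start, end) tuples."""
    gaps = []
    cursor = t_start
    for start, end in sorted(movement_units):
        if start > cursor:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < t_end:
        gaps.append((cursor, t_end))
    return gaps

# Hypothetical tier: two movement units within a 0-20 s recording.
print(annotations_from_gaps([(3.0, 7.0), (12.0, 15.0)], 0.0, 20.0))
# -> [(0.0, 3.0), (7.0, 12.0), (15.0, 20.0)]
```

This also makes clear why only *movement* units need to be tagged by hand: the *rest/pose* units follow deterministically from them.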

### **For researchers who also analyze the lower limbs, the head, and the trunk**

Proceed analogously for the template tiers rf\_Activation\_R0, lf\_Activation\_R0, head\_Activation\_R0, and trunk\_Activation\_R0.

You might aim at creating concatenated units of the four (or only two or three) parts of the body, i.e., units in which head, trunk, and upper and lower limbs *rest/pose* simultaneously (*rest/pose* position units) or units in which trunk, head, and upper and lower limbs move simultaneously (whole body *movement* units). These units are technically the overlaps of the *movement* and *rest/pose* units of the tiers rh\_Activation\_RX, lh\_Activation\_RX, rf\_Activation\_RX, lf\_Activation\_RX, head\_Activation\_RX, and trunk\_Activation\_RX. The overlap procedure can be conducted for multiple eafs at a time.

File > Multiple file processing > Annotations from Overlaps.

Step 1/4: File and Tier Selection.

Select files to use for computation:

Select files from domain. Click on the button Domain.

Select an existing domain. Load.

Select tiers to use for computation:

rh\_Activation\_RX, lh\_Activation\_RX, rf\_Activation\_RX, lf\_Activation\_RX, head\_Activation\_RX, and trunk\_Activation\_RX. Next.

Step 2/4: Overlaps Computation Criteria.

Create annotation when annotations overlap and their annotation values are equal. Next.

Step 3/4: Destination Tier Name Specification.

Enter name for destination tier: wholebody\_Activation\_RX.

Destination tier is a root tier.

Select a linguistic type for destination tier: Notes OR other Linguistic Type. Next.

Step 4/4: Destination Tier Value Specification.

Value from a specific tier: rh\_Activation\_RX

Finish.

Now you have the following new tier that contains whole body *movement* units and *rest/pose* position units in which all parts of the body *move* or *rest/pose*, respectively, simultaneously: wholebody\_Activation\_RX.
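The overlap computation can likewise be sketched in Python. This is an illustrative reimplementation of the criterion selected above (annotations overlap *and* their values are equal), not ELAN's code; tiers are assumed to be lists of (start\_ms, end\_ms, value) tuples, and the function name is hypothetical.

```python
def annotations_from_overlaps(tiers):
    """Keep the stretches where every tier carries an annotation and all
    annotation values are equal (e.g. all parts of the body 'movement',
    or all parts of the body 'rest/pose').

    `tiers` is a list of tiers, each a list of (start_ms, end_ms, value).
    """
    def value_at(tier, t):
        for start, end, value in tier:
            if start <= t < end:
                return value
        return None

    # Collect all annotation boundaries, then test each elementary interval.
    points = sorted({t for tier in tiers for s, e, _ in tier for t in (s, e)})
    out = []
    for a, b in zip(points, points[1:]):
        values = [value_at(tier, a) for tier in tiers]
        if None not in values and len(set(values)) == 1:
            if out and out[-1][1] == a and out[-1][2] == values[0]:
                out[-1] = (out[-1][0], b, values[0])   # merge adjacent pieces
            else:
                out.append((a, b, values[0]))
    return out

# Example: right and left hand tiers; only the stretches where both hands
# carry the same value survive on the concatenated tier.
rh = [(0, 2000, "movement"), (2000, 6000, "rest/pose")]
lh = [(0, 1500, "movement"), (1500, 6000, "rest/pose")]
print(annotations_from_overlaps([rh, lh]))
# -> [(0, 1500, 'movement'), (2000, 6000, 'rest/pose')]
```

Because the criterion requires equal values, taking the destination value "from a specific tier" (rh\_Activation\_RX in Step 4/4) yields the same result as taking the shared value, as in the sketch.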

## **4 The Structure category**

### **4.1 Definition of the Structure category**

The Structure category further specifies *movement* units and *rest/pose* units. *Movement* units are classified according to the trajectory with five Structure values: (i) *irregular*, (ii) *repetitive*, (iii) *phasic,* (iv) *shift*, and (v) *aborted* (Fig. 3, Step 2a). *Rest/pose* units are classified according to muscle contraction with two R/P<sup>4</sup> Structure values: (i) *rest*, and (ii) *pose* (Fig. 3, Step 2b). The Structure category assessment can be applied to all four parts of the body.

Short definitions of the Structure values and of the R/P Structure values, as well as their reliabilities, are given in Tab. 3. The most relevant movement criterion for the classification of *movement* units is the trajectory. It is defined by the path and, more specifically, by the absence or presence of phases within the path.

Since the trajectory reflects the absence or presence and the complexity of the motor planning process (see 4.3), the first three Structure values *irregular, repetitive*, and *phasic*, as shown in Fig. 3, represent a continuum of increasing complexity from motor arousal to formative motor processes. The value *shift* registers transitions between still positions (as such it belongs to the resting/posing behaviors, but since it is a movement it is listed among the Structure values). Finally, the value *aborted*, which is at the end of the horizontal order in Fig. 3, registers the abortion of movements, independently of whether they could potentially have become repetitive, phasic, or shift movements (book I, section II).

The Structure category enables the differentiation of mental states as reflected in kinesic behavior: motion states dominated by dysregulation (*irregular*), motion states reflecting productive processes (two levels: *repetitive* and *phasic*), transitions between still states (*shift*), the abortion of these processes (*aborted*), as well as still states of relaxation (*rest*) and tension (*pose*). Thus, the Structure category is sensitive to changes in mental states in healthy individuals (including changes elicited by experimental conditions), as well as to alterations of mental states in mental disease or brain damage (see book I, section II).

<sup>4</sup> R/P values serve the further classification of *rest/pose* units.


**Tab. 3:** Short definitions and reliabilities of the Structure values and the R/P Structure values

\* Interrater reliability as measured with EasyDIAg (from Lausberg & Slöetjes, 2016)

### **4.2 Generation of the 'to-be-coded' Structure and R/P Structure units**

### **4.2.1 Generation and assessment of 'to-be-coded' Structure and R/P Structure units**

The *movement* units that have been generated in the Activation category assessment are directly adopted for the Structure category evaluation. They are termed 'to-be-coded' Structure units and are classified with the five Structure values. Once they are classified, the units are termed Structure units.

Researchers who also investigate *rest/pose* units generate the 'to-be-coded' R/P Structure units with the same procedure as the 'to-be-coded' Structure units and classify them with the two R/P Structure values *rest* and *pose*.

If there is a change of the Structure value within a 'to-be-coded' Structure unit, e.g. first *phasic*, then *repetitive*, the structural change demarcates two new Structure units. However, if there are two or more different movements within a *movement* unit but the Structure value remains the same, e.g. a *repetitive* in space movement (gesture) followed by a *repetitive* on body movement (self-touch), no new Structure unit starts, as the mental mode of *repetitive* processes remains the same. In other words, in the Structure category assessment there can never be two Structure units with the same Structure value immediately following each other, i.e., with no *rest/pose* unit in between. As an example, a *repetitive* unit cannot be followed immediately by another *repetitive* unit, as the two remain one Structure unit as long as they have the same Structure value.
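The segmentation rule above — a change of Structure value demarcates a new unit, while directly adjacent stretches with the same value form one unit — can be expressed as a small merge function. This is an illustrative sketch only; the (start\_ms, end\_ms, value) tuple representation and the function name are assumptions, not part of the NEUROGES® specification.

```python
def merge_same_value(units):
    """Merge immediately adjacent Structure units carrying the same value,
    since two like-valued Structure units can never follow each other
    without a rest/pose unit in between.
    """
    merged = []
    for start, end, value in sorted(units):
        if merged and merged[-1][2] == value and merged[-1][1] == start:
            merged[-1] = (merged[-1][0], end, value)   # extend previous unit
        else:
            merged.append((start, end, value))
    return merged

# A repetitive gesture directly followed by a repetitive self-touch
# remains one Structure unit; the change to phasic starts a new unit.
print(merge_same_value([(0, 2000, "repetitive"), (2000, 3500, "repetitive"),
                        (3500, 5000, "phasic")]))
# -> [(0, 3500, 'repetitive'), (3500, 5000, 'phasic')]
```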

In the case that within a to-be-coded unit a *repetitive* unit directly follows a *phasic* unit, or vice versa, the transport phase of the second unit often starts immediately after the complex phase of the first unit, e.g. Structure unit 1 [transport phase ⇒ *phasic* complex phase] ⇒ Structure unit 2 [transport phase ⇒ *repetitive* complex phase] ⇒ Structure unit 3 [transport phase ⇒ *phasic* complex phase ⇒ complete retraction to *rest position*]. In rare cases, foremost in the upper limbs, there is a partial retraction, i.e., after the complex phase, the hand is partially retracted, that is, it is stopped half-way on its way back to rest position and starts a new transport phase, e.g. Structure unit 1 [transport phase ⇒ *repetitive* complex phase ⇒ partial retraction] ⇒ new Structure unit 2 [transport phase ⇒ *phasic* complex phase ⇒ complete retraction to *rest/pose*] (see examples in the NEUROGES® training videos Intermediate 00:00:25.000, and Advanced 00:00:41.000).

Furthermore, as already described in the Activation category chapter, regarding the limbs, a *movement* unit may contain simultaneous isolated movements of different subparts of the limb, e.g. a thumb toss co-occurring with a shoulder shrug. These independent simultaneous movements within one limb require consideration in Structure coding (and in Focus coding) if the Structure values differ between the two parts, e.g. a shoulder shrug (Structure value *phasic*) occurs while fidgeting with the hands (Structure value *irregular*). In this case, the spatially and dynamically more complex movement is coded. For instance, if the *phasic* shoulder shrug is performed with emphasis while the *irregular* fidgeting is less prominent, the value of the unit is *phasic*. On the other hand, if the *irregular* fidgeting is more intensive, it determines the value of the unit. However, it should be noted that in a natural context, if isolated non-synchronized movements with different Structure values seem to co-occur, a closer look often reveals that there is a short interruption of the movement in one subpart of the upper limb while the movement of the other subpart is displayed. As an example, the *irregular* activity of the fingers stops during the *phasic* shoulder shrug and continues after the shrug.

### **4.2.2 Alternative generation of 'to-be-coded' Structure and R/P Structure units**

Researchers who start with the Structure category, i.e., who have not coded the Activation category before, identify movements and rests/poses, respectively, in the ongoing flow of behavior and classify them directly with the Structure and R/P Structure values according to the rules described in 4.2.1 (see also 4.5.3 Alternative procedure in NEUROGES®-ELAN). This approach probably saves time, but it is more challenging than the stepwise procedure. Furthermore, it does not provide data on the individual's general extent of motor activity in terms of *movement* units.

### **4.3 Criteria for the definition of the Structure values**

The Structure values are defined according to the criteria below.

**Trajectory**: The trajectory is defined as the path that is generated by the moving part of the body. For the limbs, the point of reference for the evaluation of the trajectory is the subpart of the limb that displays the most complex spatial movement and dynamics. For the upper limbs, this is typically the hand.

The trajectory can be straight, angular, or curved – in other terms, one-, two-, or three-dimensional. Most importantly, in some *movement* units the trajectory can be subdivided into phases<sup>5</sup>. The identification of distinct phases is characteristic for *phasic* and *repetitive* units (therefore termed units with a phase structure), as well as, to a lesser extent, for *aborted* units. *Irregular* units and *shift* units are characterized by the fact that there are no phases, i.e., the trajectorial pattern remains unchanged during the unit. In a prototypical *phasic* or *repetitive* unit, three phases can be distinguished:

	- (α) motion complex phase (mC): The trajectory is spatially complex. There may be dynamics, as defined by the degree of variation in the Effort factors (see below). In the upper limbs, a hand orientation and a hand shape are often fully developed. The transition from the transport phase to the complex phase may be demarcated by a turn-point in the trajectory, by an increase in spatial complexity of the trajectory, and by an increase in dynamics.
	- (β) static<sup>6</sup> complex phase (sC): The trajectory is stopped and the limb is held against gravity with a distinct orientation and shape. The static complex phase is mostly found in gestures (as defined in the Function category). The gestural information is conveyed by a still image, i.e., a photo could capture the relevant information (see 8.4.6). A static complex phase may follow directly after a motion complex phase, and vice versa.

<sup>5</sup> In fact, the method of identifying phases within a movement is adopted from gesture research. Gestures are described to consist of a preparation phase, a stroke phase, and a retraction phase (e.g. Kendon, 1972; McNeill, 1992; Seyfeddinipur, 2006). In gesture research, the phases are defined functionally, i.e., the function of the preparation phase is to prepare the stroke, the function of the stroke phase is to carry the content of the gesture, and the function of the retraction phase is to bring the hand back to rest position. Similar concepts are used in kinematographic research. Based on kinematographic criteria such as trajectory, displacement, and velocity, Hermsdörfer et al. (1996) distinguish a transport phase (equivalent to the preparation phase in gesture research) and an adjustment phase (equivalent to the stroke phase). In contrast to gesture research, in NEUROGES®, the concept of phases is applied to classify movements in general, e.g. actions, shifts, self-touches, and not only gestures. As the term "stroke phase" has been coined specifically for gestures and defines a phase by its function rather than by movement parameters, in NEUROGES® the term is not adopted.

<sup>6</sup> The term static (stroke) is adopted from M. Seyfeddinipur (personal communication), who suggests using it instead of stroke hold. The corresponding term dynamic was not used to describe the complementary type of complex phase, as this term is reserved for changes in the Effort factors. Instead, the term motion complex phase was introduced.

Complex phases are obligatorily part of *phasic* and *repetitive* units, but not of *aborted* units.

(iii) retraction (R): The part of the body is moved back to *rest/pose* position. The transition from the complex phase to the retraction phase is often demarcated by a turn-point in the trajectory. The hand relaxes while retracting. The retraction phase can include searching for a new *rest/pose* position, e.g. some adaptations may be necessary until the hand has found a comfortable *rest* position. In these cases, the path is not one-dimensional (straight). The reaching of the *rest/pose* position is marked by stillness (see 4.4.6 and 4.4.7). Retraction phases are part of *phasic, repetitive,* and *aborted* units.

While the prototypical phase structure is *rest/pose* ⇒ Transport ⇒ Complex ⇒ Retraction ⇒ *rest/pose*, in natural data, there are often variations of this structure:

A *movement* unit can contain more than one complex phase and accordingly, more than one transport phase and more than one retraction phase, e.g. *rest/pose* ⇒ complete Transport (cT) ⇒ Complex (C) ⇒ partial Transport (pT) ⇒ Complex (C) ⇒ partial Retraction (pR) ⇒ partial Transport (pT) ⇒ Complex (C) ⇒ complete Retraction (cR). The complete transport phase starts from *rest/pose*, and the complete retraction phase goes back to *rest/pose.* The partial transport phase starts after a complex phase and leads directly to the next complex phase. Alternatively, there can be a partial retraction after a complex phase, before the partial transport phase to the next complex phase starts. Thus, the retraction phases within the *movement* unit are partial retraction phases (pR), in which the hand retracts but is stopped half-way on its way back in order to start a new transport phase, which is also only partial as it starts half-way (see 4.4.2 *repetitive* Example ♦ iv). In partial retraction, the hand is often held – without a distinct shape or orientation – for a moment against gravity before the new partial transport phase starts (see example in the NEUROGES® training video Advanced 00:00:41.000, right hand movement).

In rare cases, in the upper limbs, a complex phase starts without a preceding transport phase directly from *rest/pose* position. This is only possible if the *rest/pose* position, in which the hand rests, comfortably allows free moving of the hand right away (see 4.4.2 *repetitive* Example ♦ v; 4.4.3 *phasic E*xample ♦ v), such that it is not necessary to first transport the hand to a location where it can act. Furthermore, the trajectory of the complex phase has to be compatible with a start from the *rest/ pose* position, e.g. the hand rests on an armrest and then the index traces a circle.

Researchers who want to code the phases of *phasic, repetitive,* and *aborted* units in NEUROGES®-ELAN use the tier Phases, which provides the following values: *complex static, complex motion, transport complete, transport partial, retraction complete,* and *retraction partial*.
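The prototypical phase sequence and its variations can be summarized as a small pattern check. The sketch below is only illustrative: the single-letter tokens are hypothetical shorthand (R = *rest/pose*, cT/pT = complete/partial transport, C = complex phase, cR/pR = complete/partial retraction), and the rare exception in which a complex phase starts directly from *rest/pose* without a transport phase is deliberately not modeled.

```python
import re

# Prototype: R cT C cR R; variations insert (pT C) or (pR pT C) repetitions
# between the first complex phase and the complete retraction.
PATTERN = re.compile(r"^R cT C( (pR )?pT C)* cR R$")

def is_valid_phase_sequence(tokens):
    """Check a phase sequence against the structure described in 4.3.
    Illustrative only -- token names are hypothetical shorthand."""
    return bool(PATTERN.match(" ".join(tokens)))

print(is_valid_phase_sequence(["R", "cT", "C", "cR", "R"]))          # prototype
print(is_valid_phase_sequence(
    ["R", "cT", "C", "pT", "C", "pR", "pT", "C", "cR", "R"]))        # variation
print(is_valid_phase_sequence(["R", "cT", "C", "pR", "cR", "R"]))    # invalid
```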

**Presence / Absence of Efforts:** The five Structure values *irregular, repetitive, phasic, shift,* and *aborted* can be displayed with dynamics. The consideration of the movement dynamics is especially helpful for the identification of units with a phase structure, as the complex phase is often marked by an increase in dynamics as compared to the transport and retraction phases. It is evident that this criterion only applies to the subtype motion complex phase and not to the subtype static complex phase.

The movement dynamics are described with the Efforts as defined by Laban (1988). Efforts are the inner impulses from which movement originates. Four Effort factors are distinguished: flow, weight, time, and space. Each factor is a continuum with two polarities, i.e., flow varies between free and bound, weight between strong and light, time between sustained and sudden, and space between direct and indirect. These effort qualities result from the inner attitude (conscious or unconscious) towards the four Efforts. Movement dynamics result from variation in one or more Effort factors.

The following definition is taken from Robyn Cruz's lecture material based on Dell (1979).

	- (α) free: going with, allowing energy to go through our and beyond body boundaries; indulgent / expansive use of flow.
	- (β) bound: restricted, controlled, keeping energy flow within body boundaries; fighting / condensing use of flow.
	- (α) light: rarified, delicate, fine touch, overcoming the body weight; indulgent / expansive intention in weight.
	- (β) strong: having impact, penetrating, getting behind the body weight; fighting / condensing intention in weight (to be distinguished from heavy, i.e., passive giving in to gravity).
	- (α) sustained: stretching out time, leisurely, actively indulging in time; indulgent / expansive decision in time (to be distinguished from slow movement and from the evenness of bound flow).
	- (β) sudden: urgent, instantaneous, a sense of urgency recreated each time; fighting / condensing decision in time (to be distinguished from fast movement).
	- (α) indirect: multi-overlapping foci, multi-faceted attention, active meandering; indulgent / expansive attention in space.
	- (β) direct: channeled, pin-pointing; fighting / condensing attention in space.

**Upper limbs: Presence / Absence of Hand orientation:** During the complex phase of a *phasic* or *repetitive* Structure unit, the hand may (α) adopt a distinct orientation or (β) remain in a neutral orientation (see also 8.3).

	- (α) The hand adopts a distinct shape that is maintained during the complex phase such that a still picture of the hand shape emerges. The configuration of the fingers substantially contributes to the shape, e.g. index and thumb form a ring, a hand with an extended index.
	- (β) There is no active shaping of the hand and the hand often remains relaxed (flat hand with slightly flexed fingers).

The R/P Structure values *r/p*<sup>7</sup> *rest* and *r/p pose* are defined according to the criteria **Motion vs. Stillness, Actively Held Position vs. Gravity-Aligned / Supported Position,** and **Muscle Contraction vs. Relaxation** described in section 3.3.

<sup>7</sup> abbreviation for *rest/pose*

## **4.4 Definitions of the Structure and R/P Structure values**

### **4.4.1** *irregular*

### **Short definition**

SMALL MOVEMENTS WITHOUT DISTINCT TRAJECTORY, POTENTIALLY ONGOING IN TIME

### **Definition**

An *irregular* unit is characterized by the fact that the movement starts and ends at the place where the hand<sup>8</sup> (part of the body) happens to be in *rest* position. Thus, there are no transport and retraction phases and, in general, no phases can be distinguished within the unit. There is no distinct trajectory, no distinct hand orientation, and no distinct hand shape. The movement seems to go by itself and is potentially ongoing in time.

### **Meeting the criteria**

**Trajectory**: The trajectory lacks any clear spatial direction. The imaginary trace left by the movement would look like a muddle of thread that has fallen in one spot on the ground. Thus, the movement stays in one area (see book I, 6.5).

*Irregular* units do not have phases. The occurrence of a transport phase, which is distinct from a complex phase, would by definition exclude the occurrence of an *irregular* unit. Thus, an *irregular* unit is characterized by the fact that there is no transport phase and that the movement starts at the place where the hand (part of the body) happens to have *rest*ed before.

<sup>8</sup> As the large majority of researchers have used NEUROGES® for the analysis of the upper limbs, and as the NEUROGES® categories and values are identical for all four parts of the body, in order to facilitate reading, the definitions in the coding manual are formulated for hand/arm/shoulder movements. However, any specificity of foot/leg, head, and trunk movements that deviates from the definitions for hand/arm/shoulder movements is noted, and in the **Examples** ♦ sections for each NEUROGES® value, examples are provided separately for hand/arm/shoulder, foot/leg, head, and trunk movements.


#### **Examples for** *irregular* **units**


<sup>9</sup> More precisely, precursors of effort described by Kestenberg (1965a,b, 1967) may better define what is observed: tension flow rhythms (sucking, snapping/biting, twisting, strain/release, running/drifting, starting/stopping, swaying, surging/birthing, jumping, spurting/ramming p. 27) and tension flow attributes (flow adjustment/even flow, low intensity/high intensity, graduality/abruptness p. 65) and possibly pre-efforts (flexibility/channeling, gentleness/vehemence-straining, hesitation/suddenness, p. 79). (Eberhard, pers. communication, 2012).

back and forth movements of the fingers, then the fingers move around again ⇒ hand rests again


#### **Differentiate** *irregular* **units from**…


embedded in an *irregular* unit is part of the *irregular* unit and it does **not** constitute a *repetitive* unit (see Example ♦ iv). However, if researchers want to identify *irregular* units that become temporarily more structured, i.e., intermittently repetitive, they can use the supplementary category Temporal Structure to register this phenomenon with the value *metrical*.


#### **4.4.2** *repetitive*

#### **Short definition**

#### MOVEMENT WITH A PHASE STRUCTURE AND A REPETITIVE MOTION COMPLEX PHASE

#### **Definition**

*Repetitive* units are units with a phase structure, i.e., prototypically they consist of a transport phase, a complex phase, and a retraction phase (but see the variations described in 4.3). The complex phase of a *repetitive* unit (rC) is always a motion complex phase. It is characterized by the fact that the hand (part of the body) moves at least twice in the same direction, i.e., forth – back – forth in the same dimension (see NEUROGES® training videos Intermediate 00:47–00:49 and 00:55–00:57, and Expert 00:08–00:10). Therefore, a repetitive complex phase (rC) can be segmented into sub-phases. Any time the hand (part of the body) starts moving again in the same direction, a new sub-phase starts.

The paths of the sub-phases can be identical, i.e., actually moving back and forth on the same path, or they may be displaced relative to each other in one dimension, i.e., moving back and forth in the same dimension but with a displacement in another dimension, e.g. moving repetitively up and down while moving from left to right. However, the hand orientation and the hand shape remain constant between the sub-phases. The temporal distance between the sub-phases of the complex phase may differ, but the hand never *rests* or *poses* between the sub-phases. Equal duration between the sub-phases results in a meter, varying duration in a rhythm.

#### **Meeting the criteria**


As the five fingers of a hand anatomically represent a quasi-repetitive structure, the sequential use of the five fingers constitutes a repetitive complex phase. Therefore, sequential movements of the fingers, such as galloping from the little finger to the index or counting from 1 to 5 with the fingers, are coded as one *repetitive* unit (see also Example ♦ vi).

**Occurrence:** *Repetitive* units were investigated in 164 individuals of the NEUROGES® archive. Right hand *repetitive* movements were displayed by 95 % (156/164) of the individuals, and left hand *repetitive* movements by 96 % (157/164).


#### **Examples for** *repetitive* **units**


the middle finger (sub-phase 3), then the ring finger (sub-phase 4), and then the little finger (sub-phase 5) (rC) ⇒ hands moves back to *rest* position (complete R) (Note: As the left hand fingers are actively presented one after the other, this is coded not only as a *repetitive* unit in the right hand but also as a *repetitive* unit in the left hand)


#### **Differentiate** *repetitive* **from** …


regard to dimension, hand orientation, hand shape, and effort (see exceptions above), there is no doubt that this is one *repetitive* unit, even if the temporal distances between the sub-phases differ. However, if there are short *rests* or *poses* between one-way *phasic* movements, these are several *phasic* units, e.g. the hand rises, turns out, retracts, rests shortly, rises, turns out, retracts, rests shortly, rises, turns out, retracts, rests (in this example, there are three *phasic* units).


#### **4.4.3** *phasic*

#### **Short definition**

#### MOVEMENT WITH A PHASE STRUCTURE AND A STATIC OR MOTION COMPLEX PHASE

#### **Definition**

*Phasic* units are units with a phase structure, i.e., prototypically they consist of a transport phase, a complex phase, and a retraction phase (but see the variations described in 4.3). The complex phase of a *phasic* unit (pC) may be static or motion. The motion complex phase of a *phasic* unit is characterized by the fact that the hand (part of the body) moves on a one-way path, i.e., with no repetition of the same direction: at most once in opposite directions in the same dimension (once back and forth), but – in contrast to a *repetitive* unit – never twice in the same direction in the same dimension (forth – back – forth).

### **Meeting the criteria**

**Trajectory**: In a motion complex phase of a *phasic* unit (motion pC), the hand (part of the body) moves forth one-way or at most forth and back on a defined one-, two- or three-dimensional spatial path. One could imagine that the path leaves a clear trace. On this trace, the hand moves once in one direction (forth one-way) or two times but in opposite directions (forth and back).

In a static complex phase (static pC), there is no trajectory.
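The trajectory criterion distinguishing *repetitive* from *phasic* motion complex phases can be reduced to one question: does any direction recur? A minimal sketch, with the caveat that reducing a sub-movement to a single direction label (e.g. "up", "down") is a simplifying assumption of this illustration, not part of the coding rules:

```python
def classify_complex_phase(directions):
    """Classify a motion complex phase from the sequence of directions of
    its sub-movements, e.g. ["up", "down", "up"].

    repetitive: some direction recurs (forth - back - forth);
    phasic: one-way, or at most once back and forth (no direction recurs).
    """
    seen = set()
    for d in directions:
        if d in seen:
            return "repetitive"   # same direction for the second time
        seen.add(d)
    return "phasic"

print(classify_complex_phase(["up"]))                 # -> phasic (one-way)
print(classify_complex_phase(["up", "down"]))         # -> phasic (forth and back)
print(classify_complex_phase(["up", "down", "up"]))   # -> repetitive
```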


#### **Examples for** *phasic* **units**


#### **Differentiate** *phasic* **from** …


#### **4.4.4** *shift*

#### **Short definition**

DIRECT DISPLACEMENT FROM ONE REST/POSE POSITION TO ANOTHER ONE

#### **Definition**

In a *shift* unit, the hand (part of the body) is moved **directly,** i.e., without any detour, from one *rest/pose* position to another *rest/pose* position. Thus, the movement path is typically straight. Only if the hand (part of the body) does not immediately find a comfortable new *rest/pose* position is the path not entirely straight; in that case, there is a short adjustment movement at the end. As an example, when a *rest* position with folded hands is aimed at, the fingers might not immediately find the inter-digital gaps between the fingers of the other hand, and there are searching movements of the fingers. Since the only purpose of these searching/adjustment movements is to find the new *rest/pose* position, they are part of the *shift* unit.

Note that especially *phasic* and *repetitive* units (and more rarely *aborted* or *irregular* units) may start from one *rest/pose* position and end in another *rest/pose* position. These changes, which occur quasi as by-products of *phasic, repetitive, aborted,* or *irregular* movements, are **not** marked with the value *shift,* because a *shift* unit is strictly defined as a **direct** transition from one *rest/pose* position to another one with no *phasic, repetitive, aborted,* or *irregular* movement in between. Changes in *rest/pose* positions that occur after *phasic, repetitive, aborted,* or *irregular* movements are registered indirectly with the R/P Structure values *r/p rest* and *r/p pose*, the R/P Contact values, and, if desired, remarks in the tier Notes.

*Shifts* are often whole body phenomena, e.g. shifting from a *rest* position with the back leaning against the back of the chair and open legs and arms to a *rest* position with the trunk leaning forward, crossed arms placed on the table, and legs closed. Therefore, researchers might be interested in merging the *shift* units of the upper limbs, the lower limbs, the head, and the trunk.

#### **Meeting the criteria**

**Trajectory**: The hand moves one-way on a straight trajectory. The path is the shortest way from one *rest/pose* position to another one.

The exception is searching movements, which may become necessary when a comfortable *rest/pose* position is not found right away. These searching movements can mean that towards the end of the *shift* the movement path is no longer straight. However, any other form of detour in the trajectory is not compatible with the value *shift*.

As a *shift* is a transition from one *rest/pose* position to another one, technically in NEUROGES®-ELAN, a *shift* unit is always framed by *rest/pose* units.


#### **Examples for** *shift* **units**


### **Differentiate** *shift* **from** …


### **4.4.5** *aborted*

#### **Short definition**

#### DISRUPTED TRANSPORT PHASE OR SHIFT FOLLOWED BY RETRACTION

### **Definition**

An *aborted* unit is a movement that is disrupted. The disruption occurs either during the transport phase of movements that could have become *phasic* or *repetitive*, or during a shift movement that could have ended in a new *rest/pose* position<sup>10</sup>. Thus, functionally, an *aborted* unit can be a disrupted *phasic* unit, a disrupted *repetitive* unit, or a disrupted *shift*. However, the original motor plan cannot be determined, since the movement is disrupted before the complex phase or the new *rest/pose* position is displayed.

Thus, an *aborted* unit consists of two phases, a transport phase/shifting movement and a retraction phase. It is characterized by the fact that the transport phase is **not** followed by a complex phase and the shift movement is **not** followed by a new *rest/pose* position, respectively. Instead, the hand (part of the body) is retracted. Often, the movement dynamics give the impression that the transport phase/shift movement is disrupted halfway. The retraction can take place immediately after the disruption, or the hand (part of the body) is held for a moment and then retracted. If the hand is held for a moment, there is no distinct hand orientation and no distinct hand shape.

Many *aborted* units occur during bilateral hand movements. Both hands start with a transport phase, but then only one hand performs the complex phase, while the other hand, which displays the *aborted* unit, either retracts immediately or it is simply held – as if forgotten – with no distinct shape or orientation during the complex phase of the dominant hand and then retracts together with the dominant hand.

Per definition, an *aborted* unit can **neither** directly (i.e., with no *rest/pose* in between) precede **nor** follow a *phasic*, a *repetitive*, or a *shift* unit, as in that case the ostensible abortion would in fact only be an interruption, not a disruption. As an example, the hand rises half-way, stops for a moment, then continues to rise and then starts with the complex phase. This is an interrupted, but not a disrupted, transport phase. Likewise, an interruption can occur during the retraction phase of a *phasic* or *repetitive* unit, e.g. the hand retracts half-way, stops for a moment, and then continues to retract to *rest/pose* position. Furthermore, conceptually, an interruption – in contrast to a disruption as during an *aborted* unit – in the course of the *phasic, repetitive,* or *shift* unit is no obstacle for the Structure category assessment, as the relevant information necessary to assess the Structure, i.e., the complex phase or the new *rest* position, is provided.

<sup>10</sup> A *rest/pose* position is quasi the position-system equivalent of the complex phase of *phasic* and *repetitive* units. Therefore, *shifts* can be compared to transport phases. *Shifts* that are performed to move to a new *pose* position or a new *rest* position share movement features with transport phases, e.g. shifting the hands from the lap to the armrests (the exception is *shifts* that are performed to move away from a *rest* position, e.g. when the location on which the hand rests becomes hot; these might more closely resemble retraction phases).

### **Meeting the criteria**


#### **Examples for** *aborted* **units**


### **Differentiate** *aborted* **from** …

*phasic*: In a *phasic* unit in which the hand moves forth and back on a one-dimensional path, there are always dynamics that develop, often with an endpoint accent, and there is a distinct hand orientation. Often there is also a distinct hand shape, e.g. a pointing gesture with the index extended. In contrast, in an *aborted* unit, the dynamics diminish and there is no distinct hand orientation and no distinct hand shape.

### **4.4.6** *r/p rest*

#### **Short definition**

#### THE PART OF THE BODY RESTS

#### **Definition**

A *rest* of a part of the body is defined by stillness, gravity-aligned/supported position, and muscle relaxation.

If several or all parts of the body *rest*, they form a *rest* position, which is a specific static arrangement of the resting parts of the body (compare 3.4.2).

#### **Meeting the criteria**

**Motion vs. Stillness:** There is stillness.

The exception is passive movement (see 3.3), where motion occurs in combination with muscle relaxation: the part of the body is being moved.

**Actively Held Position vs. Gravity-Aligned / Supported Position:** The part of the body is resting in a gravity-aligned or supported position.

**Muscle Contraction vs. Relaxation:** There is muscle relaxation.

#### **Examples for** *rest* **units**

♦ upper limbs: standing person in normal upright position with arms hanging


### **Differentiate** *rest* **units from**…


### **4.4.7** *r/p pose*

### **Short definition**

### THE PART OF THE BODY POSES

### **Definition**

A *pose* of a part of the body is defined by stillness and muscle contraction. In most *pose* units, there is a gravity-aligned/supported position of the part of the body. In rare cases, the part of the body is in a position in which it is actively held against gravity.

If several or all parts of the body *pose*, they form a *pose* position, which is a specific static arrangement of the posing parts of the body (compare 3.4.2). A *pose* position is an activated or expressive position in which the person can remain for a longer period of time, e.g. the thinker pose, the arrogant pose, etc.

### **Meeting the criteria**

**Motion vs. Stillness:** There is stillness.

**Actively Held Position vs. Gravity-Aligned / Supported Position:** Given that a *pose* is adopted for a longer period of time, there is typically a stable gravity-aligned/supported position of the part of the body. In rare cases, the part of the body is actively held against gravity.

**Muscle Contraction vs. Relaxation:** There is isometric muscle contraction.

### **Examples for** *pose* **units**


### **Differentiate** *pose* **from**…

Confusions between *poses* and other values mainly occur for those rare *poses* in which a part of the body is actively held against gravity. In these cases, the actively held part of the body often contributes to an expressive whole-body pose. Posing limbs, specifically, are typically held in the body-near space (near kinesphere, see 8.3). As a *pose* is a kind of settlement, the actively held position in a *pose* is held for a longer period of time than the actively held position within a *movement* (see static complex phase). While it is the theoretical claim of NEUROGES® not to operate with absolute/arbitrary time frames, based on the NEUROGES® archive data on the duration of the value units it is suggested to consider a *pose* if the actively held position is held for more than four seconds.

# a static complex phase of a *phasic* unit with an actively held position: In a static complex phase of a *phasic* unit, the hand (part of the body) may be held against gravity for a moment. The hand is typically held in the middle or far kinesphere of the gesture space. As an example, in a gesture showing the Peace sign, the shaped and oriented hand is held for a moment in the upper gesture space. The hold serves emphasis.


### **4.5 Procedure for Step 2 / Module I in NEUROGES® -ELAN**

### **4.5.1 Generation of the 'to-be-coded' Structure and R/P Structure units**

The 'to-be-coded' Structure units are generated by copying the *movement* units.

Researchers who have also generated *rest/pose* units in Step 1 and kept them on the same tier as the *movement* units, i.e., on the tiers rh\_Activation\_RX and lh\_Activation\_RX, copy the *rest/pose* units together with the *movement* units in the same procedure to the new tiers rh\_Structure\_RX and lh\_Structure\_RX.

Researchers who have generated *rest/pose* units in Step 1 but saved them separately from the *movement* units, i.e., on the tiers rh\_Rest/Pose\_RX and lh\_Rest/Pose\_RX, have to copy the *rest/pose* units in a separate procedure to the new tiers rh\_R/PStructure\_RX and lh\_R/PStructure\_RX. This route is obligatory if you want to conduct the R/P Contact assessment for *r/p rest* and *r/p pose* units later on (see 3.5).

Open the eaf file with the Activation units (Step 1 codings), then proceed as follows:

Apply the function: Tier > Copy Tier.

Select a tier to copy: click on rh\_Activation\_RX.

Next.

Select the new parent tier: skip this step.

Next.

Select another linguistic type: click on Structure.

Finish.

Apply the function: Tier > Copy Tier.

Select a tier to copy: click on lh\_Activation\_RX.

Next.

Select the new parent tier: skip this step.

Next.

Select another linguistic type: click on Structure.

Finish.

When the two operations are finished,

apply the function: Tier > Change Tier Attributes.

Scroll down in the list of Current Tiers to the end:

Click on rh\_Activation\_RX-cp.

Enter the Tier Name: rh\_Structure\_RX ('RX' = your initials).

Enter the Annotator: your name.

Enter the Participant: the identification of the person whom you are going to code.

Change.

Click on lh\_Activation\_RX-cp.

Enter the Tier Name: lh\_Structure\_RX ('RX' = your initials).

Enter the Annotator: your name.

Enter the Participant: the identification of the person whom you are going to code.

Change.

Close.

Now, you have the following new tiers:

rh\_Structure\_RX

lh\_Structure\_RX

Proceed analogously for the other parts of the body.
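For researchers who script their workflow outside the ELAN user interface, the tier-copying and renaming steps above can be sketched in Python on a simplified in-memory representation. This is a hypothetical illustration only (the dict layout and function name are invented for this sketch); in practice, .eaf files can be manipulated programmatically with libraries such as pympi, whose API is not shown here.

```python
# Illustrative sketch (not ELAN functionality): tiers are modeled as a dict
# mapping tier names to attributes plus a list of annotation dicts.
def copy_tier_for_structure(tiers, source, annotator, participant, initials="RX"):
    """Mimic the ELAN 'Copy Tier' + 'Change Tier Attributes' steps:
    copy an Activation tier to a new Structure tier."""
    side = source.split("_")[0]                 # e.g. 'rh' or 'lh'
    new_name = f"{side}_Structure_{initials}"   # 'Enter the Tier Name'
    src = tiers[source]
    tiers[new_name] = {
        "linguistic_type": "Structure",         # 'Select another linguistic type'
        "annotator": annotator,                 # 'Enter the Annotator'
        "participant": participant,             # 'Enter the Participant'
        # annotations are copied verbatim and keep their old values for now
        "annotations": [dict(a) for a in src["annotations"]],
    }
    return new_name

tiers = {
    "rh_Activation_RX": {
        "linguistic_type": "Activation",
        "annotations": [{"start": 1200, "end": 2600, "value": "movement"}],
    }
}
name = copy_tier_for_structure(tiers, "rh_Activation_RX", "R. X.", "P01")
print(name)  # rh_Structure_RX
```

The copied annotations deliberately keep their old *movement* values; they are only replaced by Structure values in the subsequent coding step (4.5.2).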

### **4.5.2 Coding of the 'to-be-coded' Structure and R/P Structure units**

The units on the tiers rh\_Structure\_RX and lh\_Structure\_RX are now taken as the basis for the coding of the Structure category (therefore, they are termed 'to-be-coded' Structure units). These units still have the copied values *movement* and *rest/pose*, respectively. The 'to-be-coded' Structure units are assessed with the Structure values listed below. Technically, by double-clicking on the unit and clicking on the correct Structure value, the old *movement* values are replaced by the Structure values:

*irregular*

*repetitive*

*phasic*

*shift*

*aborted*

The old *rest/pose* values are replaced by the R/PStructure values:

#### *r/p rest*

#### *r/p pose*

In the template, the value (*rest/pose*), marked by brackets, is provided for manual unit generation (see 4.5.3) for researchers who do not want to further classify *rest/pose* units.

The value *?* is provided for work-in-progress. If you are not sure about the value of the unit and you want to come back to this unit later, mark the unit preliminarily with the value *?*.

For the final statistical data evaluation, the value *?* is only accepted as a final code for those units in which the hand is not fully visible, e.g. if the hand is hidden beneath the table but it is evident from the arm movements that there must be a hand movement. In the statistical evaluation, these values count as not sufficiently visible movements.

Especially in long 'to-be-coded' units, the Structure value might change within the unit (see 4.2.1). In that case, divide the old unit into new subunits. As an example, the 'to-be-coded' unit turns out to contain three different Structure values, e.g. *repetitive* – *phasic* – *irregular*. Delete the unit and replace it by the three new subunits. The precise segmentation of a unit into subunits (where to segment the unit) is described in detail in 4.2.1.
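The subdivision step can be sketched as follows. This is an illustrative Python fragment, not ELAN functionality; units are modeled as (start_ms, end_ms, value) tuples, and the function name is hypothetical.

```python
# Illustrative sketch: replace one 'to-be-coded' unit by new subunits
# at given interior cut points, assigning one Structure value per subunit.
def split_into_subunits(unit, cuts, values):
    """unit: (start_ms, end_ms, old_value); cuts: interior boundaries in ms
    (where the coder segments the unit, see 4.2.1); values: one Structure
    value per resulting subunit, in temporal order."""
    start, end, _ = unit
    edges = [start] + sorted(cuts) + [end]
    assert len(values) == len(edges) - 1, "one value per subunit"
    return [(edges[i], edges[i + 1], values[i]) for i in range(len(values))]

old = (0, 9000, "movement")        # the copied 'to-be-coded' unit
subunits = split_into_subunits(old, [3000, 6000],
                               ["repetitive", "phasic", "irregular"])
# the old unit is deleted and replaced by the three new subunits
print(subunits)
```

The cut points themselves come from the coder's judgment (e.g. the beginning of a transport phase); the sketch only shows the mechanical replacement of one annotation by several.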

**Important:** Code first the units of the rh\_Structure\_RX, then the units of the tier lh\_Structure\_RX. It is essential to obey this order of coding, as a simultaneous coding of the right and left hand creates the tendency to adapt the values of the two hands to each other.

### **4.5.3 Alternative procedure: Manual generation of Step 2 / Module I units and coding**

If you start with the Structure category, i.e., you have not coded the Activation category, use the alternative procedure of manual unit generation. In this procedure, the tiers rh\_Structure\_R0, lh\_Structure\_R0, rf\_Structure\_R0, lf\_Structure\_R0, trunk\_Structure\_R0, and head\_Structure\_R0 that are provided in the template are used. Directly tag and code the Structure value of the movement or the rest/pose.

Likewise, for short video clips you might prefer manual unit generation, even if you have coded the Activation category before. Click on the first unit in the tier rh\_Activation\_R0. A blue vertical bar appears. Follow the bar to the level of the tier rh\_Structure\_R0, double-click, and a tag appears. Thereby, you have copied the unit from the tier rh\_Activation\_R0 to rh\_Structure\_R0. Proceed with copying the next unit on the tier rh\_Activation\_R0. After having copied all units of the tier rh\_Activation\_R0, recode them with the Structure values. Repeat the same procedure for the units of the tier lh\_Activation\_R0. It is essential to obey this order of coding, as simultaneous coding of the right and left hand creates the tendency to adapt the values of the two hands to each other.

## **5 The Focus category**

### **5.1 Definition of the Focus category**

The Focus category classifies *phasic, repetitive,* and *irregular* units according to the locus where the part of the body acts (on). The assessment is limited to these three Structure values as, per definition, in *shift* and *aborted* units there is no acting (on something). Six Focus values are distinguished: (i) *within body*, (ii) *on body*, (iii) *on attached object*, (iv) *on separate object*, (v) *on person*, and (vi) *in space* (Fig. 4).

The Focus category assessment is typically applied to the limbs, and most often to the upper limbs. Therefore, the Focus definitions are formulated for the upper limbs, but they apply likewise to the lower limbs<sup>11</sup>. Only in specific settings with bodily contact, e.g. parent-infant interaction or body-oriented psychotherapy, might it be useful to apply the Focus category to all four parts of the body<sup>12</sup>.

The Focus category refers to the locus where the hand/arm or foot/leg (hereafter often only referred to as 'hand') acts (on). It is operationalized by four criteria: presence of physical contact with something/someone (presence vs. absence), quality of physical contact (dynamic vs. static), the object/subject of dynamic contact, and orientation in absence of dynamic contact (body-external vs. body-internal). Short definitions of the Focus values and their reliabilities are given in Tab. 4.

Thereby, the Focus category registers loci of sensory stimulation. These can be ordered from body-internal to body-external, as reflected in the order of the six Focus values from left to right in Fig. 4. A second aspect of the Focus category is that it provides information about attention processes, as at least overt goal-directed (voluntary *phasic* and *repetitive*) hand movements are preceded by a shift of attention towards the goal, i.e., here the locus of sensory stimulation (but see also book I, section II). Finally, the Focus category indirectly provides information about the individual's body image.

<sup>11</sup> As the large majority of researchers have used NEUROGES® for the analysis of the upper limbs and as the NEUROGES® Focus category is identical for the upper and lower limbs, in order to facilitate reading, the definitions are formulated for hand/arm/shoulder movements. However, in the **Examples** ♦ sections for each NEUROGES® value, examples are also provided for foot/leg movements.

<sup>12</sup> However, the differentiation of intransitive movements into *within body* and *in space* is conceptualized specifically for limb movements, as – with the exception of some elaborate dance performances – only the limbs are moved to a specific location in order to act there (*in space*).

**Tab. 4:** Short definitions and reliabilities of the Focus values

\* Interrater reliability as measured with EasyDIAg (from Lausberg & Slöetjes, 2016)

It is recommended to concatenate the Focus value of a movement with the Structure value of that movement. The concatenation delivers StructureFocus units, which constitute fine-grained types of kinesic behavior (see 5.5).

### **5.2 Generation of the 'to-be-coded' Focus units and selection of the unit phases submitted to Focus assessment**

#### **5.2.1 Generation of the 'to-be-coded' Focus units**

The *phasic, repetitive,* and *irregular* Structure units that result from the Step 2 / Module I coding are adopted for the Focus category assessment. They are then termed 'to-be-coded' Focus units and they are further classified with the six Focus values.

If there are one or more changes of the Focus within a 'to-be-coded' Focus unit, e.g. first *on body*, then *in space*, then the Focus changes demarcate new Focus units, e.g. new Focus unit 1 *on body*, new Focus unit 2 *in space*.

If the Structure value of the 'to-be-coded' Focus unit that contains two or more new Focus units is *phasic* or *repetitive*, then the segmentation of the 'to-be-coded' Focus unit into new Focus units is determined by the beginning of the transport phase (T). The procedure is the same procedure as for the Structure category assessment (see 4.2.1, second paragraph; see examples in the NEUROGES® training videos Pre-Intermediate 00:00:25.000; Intermediate 00:00:41.000).

### **5.2.2 Selection of the unit phases submitted to Focus assessment**

For the Focus assessment two types of 'to-be-coded' Focus units have to be distinguished:


(*Shift* and *aborted* units are not submitted to the Focus assessment).

Furthermore, as already described for the Activation category and the Structure category, a 'to-be-coded' Focus unit may contain simultaneous isolated movements of different parts of the upper limb, e.g. a thumb toss co-occurring with a shoulder shrug. These independent simultaneous movements within one limb require consideration in the Focus coding if the values between the two segments differ, e.g. a shoulder shrug (StructureFocus value: *phasic within body*) co-occurs with a thumb toss (StructureFocus value: *phasic in space*). In this case, the spatially and dynamically more complex movement is coded (compare 4.2.1).

### **5.2.3 Alternative generation of 'to-be-coded' Focus units**

Researchers who start with the Focus category, i.e., who have not coded the Activation and Structure categories before, identify *phasic, repetitive*, and *irregular* movements (see definitions in the Structure category chapter) in the ongoing flow of kinesic behavior and classify them directly with the Focus values according to the rules described in 5.2.2. This approach saves time but it is more challenging than the stepwise procedure and it provides information only about the Focus category.

## **5.3 Criteria for the definition of the Focus values**

The Focus values are defined according to the following criteria:

**Presence of Physical Contact:** This criterion refers to the presence or absence of physical contact between the moving parts of the limb and something/someone. The acting parts of the upper limb are most often the hand or only the fingers (and for the lower limb, the foot). It is evident that the hand can most easily reach other parts of the body, attached or separate objects, or other persons, while it is less comfortable to establish contact with the elbow and especially the shoulder. The latter can be observed only in exceptional cases: e.g., if the hands are dirty and the cheek itches, lissom persons might use the lower arm or even the shoulder to rub the face.

During the *irregular* unit or during the complex phase of a *phasic* or *repetitive* unit, the moving part of the limb has...

	- (i) dynamic: The hand or foot **acts on** something. After having established the contact, the spatial relation between the hand and the object/subject of contact changes, e.g. the hand strokes on the arm rest. The hand or foot action is directed at something and potentially changes it, e.g. the right foot scratches the lower left leg. Note that during repetitive complex phases (rC), the dynamic contact might include short phases of separation, e.g. the hand claps repetitively on the thigh.
	- (ii) static: The hand or foot **acts with** something. After having established the contact, the spatial relation between the hand and the object/subject of contact does not change. The hand has an actively fixated relation to the object/subject of contact, e.g. the hand holds a hammer. Note, however, that especially in tool use there can be a secondary contact that is dynamic, e.g. the hand writes with a pen (primary static contact) on a piece of paper (secondary dynamic contact). The dynamic contact, no matter whether it is primary or secondary, determines the Focus value. Thus, the Focus value is always determined by the object/subject that the gesturer **acts on** (dynamic contact) and **not** by the object/subject that the gesturer **acts with** (static contact).

*phasic* or *repetitive* unit, the moving part of the hand is in dynamic physical contact with …

	- (i) body-external free space: The body-external space is the space outside the body surface and within reach of the finger tips when the arms are extended (see also 8.3, definition of gesture/action space). The body-external **free** space is the space within the body-external space that is free of an object/subject. The free space is filled with air. Theoretically, it could also be filled with water, e.g. if someone gestures or acts under water. The body-external free space includes the gesture space (see 8.3), which is in front of the thorax. There needs to be a transport phase to move the hand into the body-external free space so that it can act there. Thus, the use of the body-external free space requires a *phasic* or *repetitive* Structure of the movement.
	- (ii) body-internal space: The body-internal space is the space within the body, i.e., inside the surface. The body-internal space is modified by changes in muscle length, tendons, and joint position. In contrast to the body-external free space, the body-internal space can be used in *phasic, repetitive* and *irregular* movements.
	- (i) If the Structure value is *phasic*, all Focus values may occur.
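As a rough illustration of how the criteria above combine into the six Focus values, consider the following decision sketch. The function and its parameter names are hypothetical (not part of NEUROGES® or ELAN) and only summarize the logic described in this section: a present dynamic contact determines the value via its object/subject; in the absence of dynamic contact, the orientation decides between *in space* and *within body*.

```python
# Illustrative decision sketch of the Focus criteria (hypothetical function):
def focus_value(dynamic_contact_with=None, orientation=None):
    """dynamic_contact_with: None, 'own body', 'attached object',
    'separate object', or 'other person' -- the object/subject of the
    primary or secondary *dynamic* contact (static contact is ignored,
    since the gesturer acts *with*, not *on*, the statically held object).
    orientation: 'body-external' or 'body-internal', consulted only in
    the absence of dynamic contact."""
    if dynamic_contact_with is not None:
        return {
            "own body": "on body",
            "attached object": "on attached object",
            "separate object": "on separate object",
            "other person": "on person",
        }[dynamic_contact_with]
    return "in space" if orientation == "body-external" else "within body"

print(focus_value(dynamic_contact_with="own body"))   # on body
print(focus_value(orientation="body-external"))       # in space
```

For instance, writing with a pen (static contact) on paper (dynamic contact) resolves via the paper, i.e. `dynamic_contact_with="separate object"`, matching the rule that the dynamic contact determines the value.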

## **5.4 Definitions of the Focus values**

### **5.4.1** *within body*

### **Short definition**

### ACTING ON BODY-INTERNAL STRUCTURES

### **Definition**


### **Meeting the criteria**

	- If it is in contact with something, then the contact is static:
		- (i) Together with the other hand as a unit (see also 6.4.2 Contact value *act as a unit*), the hand acts *within body*, e.g. the palms of folded hands are turned outwards and the arms are extended, such that the medial muscles and tendons of the hands and lower arms are stretched.
	- (ii) With an attached object in hand, the hand acts *within body*, e.g. circulating in the wrist while holding the worn scarf.
	- (iii) With a separate object in hand, the hand acts *within body*, e.g. performing weight training.

hand does not need to be transported anywhere in the body-external space, e.g. opening and closing the fingers of the hand in order to stretch them.


#### **Examples** *for phasic within body, repetitive within body,* **and** *irregular within body* **units**


#### **Differentiate** *within body* **from**…

# *in space*: In *in space* units, the complex phase is typically preceded by a transport phase. In contrast, in *within body* units, the complex phase usually starts where the limb rests.

In settings with physical exercises it has to be observed carefully whether the Focus of the exercise is more on a demonstration in space, e.g. as in dance, or on the effect on body-internal structures, e.g. in Feldenkrais exercises.

# *on separate object, on attached object, on body*: In these units, the Focus is on the object or the part of the body. In *within body* units, the Focus is on body-internal structures (even if the hand is in static contact with a separate object, with an attached object, or with a part of the body).

### **5.4.2** *on body*

#### **Short definition**

#### ACTING ON THE BODY SURFACE

### **Definition**

During the complex phase of a *phasic* or *repetitive* unit or during an *irregular* unit, the acting parts of the hand act on the body. The primary or secondary contact between the hand and the body part is dynamic. Included here are dynamic within-one-hand movements, i.e., finger-to-finger movements of one hand.

#### **Meeting the criteria**


If the dynamic contact on the body surface is primary, the hand directly acts on it, e.g. stroking the cheek.

If the dynamic contact on the body surface is secondary, there is a primary static contact:


#### **Orientation: -**

**Structure:** *On body* units may have a *phasic*, a *repetitive*, or an *irregular* Structure.

**Occurrence:** *On body* units were investigated in 191 individuals of the NEUROGES® archive. Right hand *on body* movements were displayed by 99 % (189/191) of the individuals, and left hand *on body* movements by 99 % (189/191).


#### **Examples for** *phasic on body, repetitive on body,* **and** *irregular on body* **units**


#### **Differentiate** *on body* **from**…


In contrast, in an *on body* unit, the hand may act with a separate object on the body, e.g. rubbing the head with a pen.

# *on attached object*: In an *on attached object* unit, the Focus is on the attached object. Included here are those rare units in which in unity with the other hand, the hand acts on an attached object, e.g. the folded hands adjust the pullover.

In contrast, in an *on body* unit, the hand may act with an attached object on the body, e.g. rubbing the face with a scarf.

As body-attached objects are very close to the body, and most actions on the attached object therefore also stimulate the body, it has to be observed carefully whether the attached object is used as a tool to stimulate the body or whether the Focus of action is primarily on the attached object.


### **5.4.3** *on attached object*

### **Short definition**

### ACTING ON AN OBJECT THAT IS ATTACHED TO THE BODY

### **Definition**

During the complex phase of a *phasic* or *repetitive* unit or during an *irregular* unit, the hand is in primary or secondary dynamic contact with objects that are attached to the body. The hand manipulates attached objects, such as a watch, clothes, jewelry, or glasses. Objects are classified as attached as long as they are connected to the body. If they are removed from the body, the objects are classified as separate, e.g. glasses are put on the table.

### **Meeting the criteria**


**Object/Subject of Dynamic Contact**: The object of the primary or secondary dynamic contact is an attached object.

If the dynamic contact on the attached object is primary, the hand directly acts on it, e.g. playing with the necklace.

If the dynamic contact on the attached object is secondary, there is a primary static contact:


#### **Orientation: -**


### **Examples for** *phasic on attached object, repetitive on attached object,* **and**  *irregular on attached object* **units**


#### **Differentiate** *on attached object* **from**…


Furthermore, *on separate object* units may include a static contact with an attached object. With an attached object in hand, the hand acts on a separate object, e.g. the gesturer cleans the table with the tie that he wears. This is an *on separate object* unit because the Focus is on cleaning the table, and not on the attached tie, which is held in hand and with which the hand acts.

# *on body*: In *on body* units, the Focus of the action is on the body and not on the attached object. This is unambiguous for those units in which the hand is in direct dynamic contact with the skin. However, as the largest part of the body surface is covered with clothes, naturally one has to be in contact with the clothes to stimulate the skin. For example, a gesturer who wears trousers strokes his leg. This happens naturally via the trousers, i.e., in physical contact with the trousers. As the gesturer acts on the body and not on the trousers, this is coded as *on body*.

In contrast, when a gesturer who wears trousers stretches the trousers, this is coded as *on attached object*. Furthermore, if a gesturer puts glasses on the nose, there is no primary stimulation of the body; rather, the action focuses on the positioning of the glasses. Therefore, the value *on attached object* is given.

However, as body-attached objects are very close to the body, and most actions on the attached object therefore also stimulate the body, it has to be observed carefully whether the attached object is used as a tool to stimulate the body or whether the Focus of the action is primarily on the attached object.


### **5.4.4** *on separate object*

#### **Short definition**

#### ACTING ON AN OBJECT THAT IS SEPARATE FROM THE BODY

#### **Definition**

During the complex phase of a *phasic* or *repetitive* unit or during an *irregular* unit, the hand is in primary or secondary dynamic contact with objects that are separate from the body, such as a table.

### **Meeting the criteria**


If the dynamic contact on the separate object is primary, the hand directly acts on it, e.g. stroking over the table.

If the dynamic contact on the separate object is secondary, there is a primary static contact:


#### **Orientation: -**


### **Examples for** *phasic on separate object, repetitive on separate object,* **and**  *irregular on separate object*


#### **Differentiate** *on separate object* **from**…


Furthermore, *on attached object* units may include a static contact with a separate object. With the separate object in hand, the hand acts on an attached object, e.g. touch-pointing with a pen on the watch that is tied around the wrist. This is an *on attached object* unit as the Focus is on the attached object.

# *on body*: In *on body* units, the Focus of the action is on the body. This is unambiguous for those units in which the hand is in direct dynamic contact with the body. However, also with a separate object in hand, the hand may act on the body, e.g. writing or tapping with a pen on the hand. This case is coded as *on body*, because the Focus of the action is on the body and not on the object itself.


### **5.4.5** *on person*

### **Short definition**

### ACTING ON ANOTHER PERSON'S BODY

### **Definition**

During the complex phase of a *phasic* or *repetitive* unit, the hand acts on another person, i.e. physically on the other person's body. This is a person who is within the gesturer's reach, typically the interactive partner or a bystander. Dynamic contact with objects that are attached to the other person's body is coded as well with the value *on person*.

### **Meeting the criteria**


If the dynamic contact on person is primary, the hand directly acts on the other person's body, e.g. displaying the High Five emblem, in which the two persons slap their hands together at the level of their heads.

If the dynamic contact on person is secondary, there is a primary static contact with the other person's body:


(iii) The hand acts indirectly with a separate object on the other person, e.g. with a pen the gesturer touches the other person.

### **Orientation: -**

### **Structure**: The Structure is *phasic* or *repetitive*.

The exceptional occurrence of *irregular on person* units may be observed in intimate relationships, e.g. when a little baby is sitting on the mother's lap and the mother plays with the baby's hair.

**Occurrence, Frequency, and Duration:** Thus far, no data are available, since the empirical studies conducted so far did not investigate settings with body contact.

### **Examples for** *phasic on person, repetitive on person,* **and** *irregular on person*


### **Differentiate** *on person* **from**…

# *on attached object:* This value refers to units in which the gesturer acts on objects that are attached to her/his own body. In contrast, if the gesturer acts on objects that are attached to another person's body, the value *on person* is given.


### **5.4.6** *in space*

### **Short definition**

### ACTING IN SPACE WITHOUT TOUCHING SOMETHING

### **Definition**

During the complex phase of a *phasic* or *repetitive* unit, the hand acts in the free space that is external to the body surface and within the personal reach (see criterion body-external space).

Most often, the hand acts in the body-external free space in front of the thorax (gesture space). It is moved there with a transport phase. In rare cases, only the fingers move out into the space while the wrist or the palm remain resting.

Rarely, the hand acts **on** the air (or on the water) as a physical substrate, e.g. fanning the air.

### **Meeting the criteria**

	- If it is in contact with something, then the contact is static:
		- (i) Together with the other hand as a unit (see also 6.4.2 Contact value *act as a unit*), the hand acts *in space*, e.g. hands in prayer position perform an *egocentric deictic*. This is coded as *in space*, because the Focus of the hand movement is not on the other hand but on the action in the space.
	- (ii) With an attached object in hand, the hand acts *in space*, e.g. gesturing with a necklace in hand. This is coded as *in space*, because the Focus of the hand movement is not on the attached object but on the action in space.
	- (iii) With a separate object in hand, the hand acts *in space*, e.g. gesturing with a cigarette in hand. This is coded as *in space*, because the Focus of the hand movement is not on the separate object but on the action in space.

### **Examples for** *phasic in space* **and** *repetitive in space* **units**


#### **Differentiate** *in space* **movements from**…

# *on separate object, on attached object, on body* (here: static contact): In *on body, on attached object,* and *on separate object units*, the hand is in dynamic contact with these parts of the body or objects, respectively. The Focus of the movement is on the body or the objects.

In *in space* movements, the hand may be only in static contact with a separate object, with an attached object, or with the other hand (as a co-agent). The Focus of the movement is **not** on the objects or the hand but on the action in the body-external free space, e.g. gesturing with a glass held in hand or gesturing with folded hands.

# *on separate object, on attached object, on body* (here: short accidental contact): In *on body, on attached object,* and *on separate object units*, the establishment of contact and the subsequent display of dynamic contact (complex phase) is typically preceded by a transport phase.

In rare cases, *in space* movements may include a short, quasi-accidental physical contact with the body, an attached object, or a separate object. However, the physical contact does not constitute the complex phase, i.e., there is no long duration and no dynamic emphasis on the touch. Furthermore, there is no transport phase before the touch. The short touch represents only an (accidental) point on the trajectory of a constant dynamic movement which is otherwise *in space*.

# *on body*: In *on body* movements, the two hands may act dynamically on each other.

When the hands are folded but the thumbs are turned out and in again during the complex phase, this is coded as *in space*, because the acting parts (here: the thumbs) act in space.

# *within body*: A *within body* movement with a *phasic* or *repetitive* Structure has no transport phase in which the hand is transported anywhere. In an *in space* movement, the hand is transported to a specific location in the body-external free space.

In settings with physical exercises it has to be observed carefully whether the Focus of the exercise is more on a demonstration in space, e.g. as in dance, or on the effect on body-internal structures, e.g. in Feldenkrais exercises.

### **5.5 Generation of StructureFocus units**

In the final evaluation step of Module I, the Structure units and the Focus units are concatenated (Fig. 5). This procedure produces units with StructureFocus values.

The data on the occurrence, the frequency, and the duration of the StructureFocus values given in Tab. 5 are based on the NEUROGES® archive. While the original NEUROGES® archive analysis included six empirical studies (a total of 191 healthy individuals), StructureFocus values were not created in all of these studies. Thus, as the raw occurrence data in Tab. 5 reveal, the number of individuals for whom StructureFocus values are available ranges between 71 and 151, because some StructureFocus values are simply displayed only by some individuals and not by others. Therefore, it is recommended to conduct descriptive statistics on the frequency distribution of the StructureFocus values (see 5.6.4 at the end).

### **5.6 Procedure for Step 3 / Module I in NEUROGES® -ELAN**

### **5.6.1 Generation of the 'to-be-coded' Focus units**

The 'to-be-coded' Focus units are generated by copying the Structure units.

Open the eaf file with the Structure units (Step 2 codings), then proceed as follows:

Apply the function: Tier > Copy Tier.

Select a tier to copy: click on rh\_Structure\_R0/RX.13

Next.

Select the new parent tier: skip this step.

<sup>13</sup> If you have conducted the automatic unit generation in the previous step, the tier name ending RX represents your initials. If you have generated the units manually and you have used the tiers that are provided in the template, the tier name ending is R0.

**Fig. 5:** Concatenation of the Structure units and the Focus units

Next.

Select another linguistic type: click on Focus.

Finish.

Apply the function: Tier > Copy Tier.

Select a tier to copy: click on lh\_Structure\_R0/RX.

Next.

Select the new parent tier: skip this step.

Next.

Select another linguistic type: click on Focus.

Finish.


**Tab. 5:** Occurrence, frequency, and duration of the StructureFocus values


When the two operations are finished,

apply the function: Tier > Change Tier Attributes.

Scroll down in the list to the end:

Click on rh\_Structure\_R0/RX-cp.

Enter the Tier Name: rh\_Focus\_RX ('RX' = your initials).

Enter the Annotator: your name.

Enter the Participant: the identification of the person whom you are going to code.

Change.

Click on lh\_Structure\_R0/RX-cp.

Enter the Tier Name: lh\_Focus\_RX ('RX' = your initials).

Enter the Annotator: your name.

Enter the Participant: the identification of the person whom you are going to code.

Change.

Close.

Now, you have the following new tiers:

rh\_Focus\_RX

lh\_Focus\_RX

If applicable, proceed analogously for the lower limbs.

### **5.6.2 Selecting and coding the 'to-be-coded' Focus units**

The units on the tiers rh\_Focus\_RX and lh\_Focus\_RX are now taken as the basis for the coding of the Focus category (therefore, they are termed 'to-be-coded' Focus units). After the automatic generation, the units still have the copied Structure values. Depending on the Structure value, two types of 'to-be-coded' Focus units are distinguished:

(i) 'To-be-coded' Focus units with the Structure values *phasic*, *repetitive,* and *irregular* are assessed concerning the Focus. Technically, by double-clicking on the unit and clicking on the correct Focus value, the old Structure value is replaced by the Focus value:

*within body*, *on body*, *on attached object*, *on separate object*, *on person*, *in space*

Note that for irregular units the Focus value in space **cannot** be chosen.


In the NEUROGES® template, the values *(aborted)*, *(shift), (rest/pose), (r/p rest),* and *(r/p pose)* are provided for occasions in which these units have to be generated manually.

In addition, in the NEUROGES® -ELAN tier the value *?* is provided (see 4.5.2).

If the Focus value changes within a 'to-be-coded' Focus unit, replace the old unit by new subunits. As an example, a 'to-be-coded' Focus unit turns out to contain two different Focus values, e.g., repetitive tapping on the table directly followed by repetitive tapping on the leg. Delete the 'to-be-coded' Focus unit and replace it by two new units, in this example a unit with the value *repetitive on separate object* and a unit with the value *repetitive on body*. With regard to the precise segmentation of a unit into subunits, i.e., where to segment the unit, see 4.2.1.
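The replace-and-segment rule can be sketched in code. The following Python fragment is only an illustration: the (start, end, value) tuples, the millisecond timestamps, and the function name `split_unit` are hypothetical stand-ins, not an ELAN data format.

```python
def split_unit(unit, boundary_ms, first_value, second_value):
    """Replace one 'to-be-coded' Focus unit by two subunits at the
    point where the Focus changes. The (start, end, value) tuple
    format is a hypothetical stand-in for an ELAN annotation."""
    start_ms, end_ms, _old_value = unit
    assert start_ms < boundary_ms < end_ms, "boundary must fall inside the unit"
    return [(start_ms, boundary_ms, first_value),
            (boundary_ms, end_ms, second_value)]

# A unit copied from the Structure tier turns out to contain two Foci:
unit = (1000, 4000, "repetitive")   # still carries the copied Structure value
subunits = split_unit(unit, 2500,
                      "repetitive on separate object",
                      "repetitive on body")
print(subunits)
```

In ELAN this corresponds to deleting the old annotation and creating two new annotations by hand; the sketch only makes the segmentation rule explicit.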

**Important:** Code first all units of the tier rh\_Focus\_RX, then all units of the tier lh\_Focus\_RX. It is essential to obey this order of coding, as simultaneous coding of the right and left hands creates a tendency to adapt the values of the two hands to each other.

### **5.6.3 Alternative procedure: Manual generation of 'to-be-coded' Focus units**

If you start with the Focus category, i.e., you have not coded the Structure category, use the alternative procedure of manual unit generation. In this procedure, the tiers rh\_Focus\_R0, lh\_Focus\_R0, rf\_Focus\_R0, lf\_Focus\_R0 are used that are provided in the template. Directly tag and code the Focus value of the movement according to the rules described in 5.6.2.

Likewise, for short video clips you might prefer the manual unit generation, even if you have coded the Structure category before. In this procedure, the existing tiers rh\_Focus\_R0 and lh\_Focus\_R0 are used. Click on the first unit in the tier rh\_Structure\_R0 (or \_RX, if you have used the automatic creation in Step 2). A blue vertical bar appears. Follow the bar to the level of the tier rh\_Focus\_R0, double click, and a tag appears. Thereby, you have copied the unit from the tier rh\_Structure\_R0/RX to rh\_Focus\_R0.

When doing so, you may choose to immediately code the Focus value and to decide whether the creation of subunits is necessary (see procedure described above).

Proceed with copying the next unit on the tier rh\_Structure\_R0/RX.

After having copied and re-coded all units of the tier rh\_Structure\_R0/RX, repeat the same procedure for the units of the tier lh\_Structure\_R0/RX. It is essential to obey this order of coding, as simultaneous coding of the right and left hand creates a tendency to adapt the values of the two hands to each other.

### **5.6.4 Generation of the StructureFocus units by concatenating the Structure units and the Focus units**

This procedure concatenates the Structure units and the Focus units and thereby generates units with StructureFocus values. Technically the fine-grained Focus units with the Focus values are taken as the basis for the concatenation and the Structure values are added.
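As an illustration of this concatenation logic, the following Python sketch takes the fine-grained Focus units as the basis and prepends the value of the temporally overlapping Structure unit. The (start, end, value) tuples and the example values are hypothetical stand-ins for ELAN annotations; in practice, ELAN performs this operation.

```python
def concatenate_tiers(structure_units, focus_units):
    """Take the fine-grained Focus units as the basis and prepend the
    value of the temporally overlapping Structure unit, yielding
    StructureFocus units. Hypothetical (start, end, value) format."""
    result = []
    for f_start, f_end, f_val in focus_units:
        for s_start, s_end, s_val in structure_units:
            if s_start < f_end and f_start < s_end:   # temporal overlap
                result.append((f_start, f_end, f"{s_val} {f_val}"))
                break
    return result

structure = [(0, 2000, "phasic"), (2000, 5000, "repetitive")]
focus = [(0, 2000, "in space"),
         (2000, 3500, "on body"),
         (3500, 5000, "on separate object")]
print(concatenate_tiers(structure, focus))
```

Note that the Focus units are finer-grained than the Structure units (one Structure unit may have been segmented into several Focus subunits), which is why the Focus tier serves as the basis.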

The procedure can be conducted for multiple files at a time, e.g. for all eafs in which you have coded Structure and Focus. In order to be able to use this very time-saving Multiple files processing function in ELAN, it is absolutely **crucial that the tier names are written correctly**. Small deviations in the spelling of the tier names, e.g., a gap instead of no gap, or a capital letter instead of a small letter, render the Multiple files processing function ineffective.
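A plausible safeguard is to check the tier names programmatically before running the function. The Python sketch below is an assumption-based illustration: the regular expression encodes only the naming pattern used in this manual's examples (rh\_Structure\_RX, lh\_Focus\_RX, ...) and would need to be adapted to your own tier inventory.

```python
import re

# ELAN's Multiple files processing silently skips tiers whose names
# deviate from the expected spelling, so it pays to validate first.
# The pattern below is an assumption based on this manual's examples.
EXPECTED = re.compile(
    r"^(rh|lh|rf|lf)_(Structure|Focus|StructureFocus)_[A-Z0-9]+$")

def misspelled_tiers(tier_names):
    """Return the tier names that would NOT be picked up."""
    return [name for name in tier_names if not EXPECTED.match(name)]

tiers = ["rh_Structure_RX", "rh_focus_RX", "lh_Focus _RX"]
print(misspelled_tiers(tiers))  # lowercase 'focus' and the stray space are flagged
```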

File > Multiple files processing > Annotations from Overlaps.

Step 1/4: File and Tier Selection.

Select files to use in overlaps computation:

Select files from domain. Click on the button Domain.

If you have not yet defined a domain, press the button New Domain > Specify New Domain > Add the Folder.

If you had already defined a domain > Select an existing domain > Load.

Select tiers to use for computation:

rh\_Structure\_RX and rh\_Focus\_RX.

Next.

Step 2/4: Overlaps Computation Criteria.

Create annotation when annotations overlap:

regardless of their annotation values.

Next.

Step 3/4: Destination Tier Name Specification.

Enter name for destination tier: rh\_StructureFocus\_RX (Caution: correct spelling).

Destination tier is a root tier.

Select a linguistic type for destination tier: click on Notes.

Next.

Step 4/4: Destination Tier Value Specification.

Concatenate the values of the annotations.

Compute values in the selected tier order:

Establish the following order by pressing **^**:

**first** rh\_Structure\_RX and **second** rh\_Focus\_RX.

Finish.

Now you have a new tier rh\_StructureFocus\_RX that contains units with the Structure and the Focus value. The result is right hand units with StructureFocus values and with the copied *aborted, shift, rest/pose, r/p rest,* and *r/p pose* values.

If you want to conduct the concatenation procedure for one file only, proceed as follows:

Apply the function: Tier > Create Annotations from Overlaps.

Step 1/4: File and Tier Selection.

Select files to use in overlaps computation:

Use currently opened file.

Proceed as described above.

Proceed analogously for the left hand, and if it applies for the right foot and the left foot.

In order to get an overview of the most frequent StructureFocus combinations, conduct descriptive statistics by applying the function:

File > Multiple file processing > Load Domain

> Select an existing domain > Load.

Tier Selection: Mark rh\_StructureFocus\_RX and lh\_StructureFocus\_RX.

Press button: Update statistics.

## **III. The Laterality Module (Module II)**

While in Module I the right and left limbs have been assessed independently of each other, Module II focuses on the relation between the two limbs. As the limbs<sup>14</sup> are distally controlled by the contralateral cerebral hemispheres, Module II provides information about the role of the right and left hemispheres in the production of limb movements. For the upper limbs, the distal parts are the hands, and for the lower limbs, these are the feet. In contrast to the proximal parts of the limbs, which can be controlled by both hemispheres via ipsilateral and contralateral pathways, the distal parts of the limbs can only be controlled by the contralateral hemisphere. Therefore, for movements that are executed by the hands or the feet it is possible to infer the hemispheric generation. Furthermore, with regard to the cooperation between the two cerebral hemispheres, Module II provides information about the complexity of neural control in bilateral limb movements. Thus, Module II is suited for research on hemispheric specialization as well as on executive functions, motor performance, and development.

Technically, as the preparatory step in Module II (see Fig. 6), bilateral and unilateral limb units are generated from the right and left StructureFocus units and the *aborted, shift, r/p rest,* and *r/p pose* (or: *rest/pose*)<sup>15</sup> units of Module I.

Unilateral limb *movement* units are units in which one limb moves while the other limb rests, and vice versa, unilateral limb *rest/pose* units are units in which one limb rests/poses while the other limb moves. Note that in Module I, no unilateral units had been identified, as the right limb tiers rh\_Activation, rh\_Structure, and rh\_Focus include **all** right hand units, i.e., those that are performed unimanually and those that are performed simultaneously with a left hand unit. Likewise, the left hand tiers include **all** left hand units. The Module II Preparation Step serves to generate unilateral StructureFocus and Rest/Pose units.

<sup>14</sup> Since most researchers analyze the upper limbs, note that hereafter the term hand will be used instead of limb but all definitions apply likewise to the lower limbs.

<sup>15</sup> Researchers who analyze *rest/pose* units with the Contact category may either use the *rest/pose* units from the Activation category, or the *r/p rest* and *r/p pose* units from the Structure category. Because of the tier-copying process (see 5.6), these units are automatically represented on the StructureFocus tier.

The next step in Module II is the generation of bilateral units<sup>16</sup>. These are units in which both limbs simultaneously move or simultaneously rest/pose. They are the temporal overlaps of the right limb and left limb StructureFocus, *aborted, shift, rest/pose, r/p rest,* and *r/p pose* units. The bilateral units are submitted to the Module II analysis. Module II comprises the Contact category and the Formal Relation category (see Fig. 6).

<sup>16</sup> Note that throughout the coding manuals the term 'bimanual unit' will only be used for those bilateral units in which there is equal dominance of the hands (see definition of dominance in the Formal Relation manual).
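The generation of bilateral units as temporal overlaps can be sketched as follows. The Python fragment is illustrative only; the (start, end, value) tuples are a hypothetical stand-in for ELAN annotations, and in practice ELAN's Create Annotations from Overlaps function performs this computation.

```python
def bilateral_overlaps(rh_units, lh_units):
    """Compute the temporal overlaps of right and left hand units;
    each overlap becomes a bilateral unit carrying both values.
    Hypothetical (start, end, value) annotation format."""
    bilateral = []
    for r_start, r_end, r_val in rh_units:
        for l_start, l_end, l_val in lh_units:
            start, end = max(r_start, l_start), min(r_end, l_end)
            if start < end:   # the two units genuinely co-occur in time
                bilateral.append((start, end, f"{r_val} {l_val}"))
    return bilateral

rh = [(0, 3000, "phasic in space"), (5000, 7000, "rest")]
lh = [(1000, 4000, "phasic in space")]
print(bilateral_overlaps(rh, lh))
# only the stretch 1000-3000, where both hands are active, becomes bilateral
```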

## **6 The Contact category**

### **6.1 Definition of the Contact category**

The Contact category assesses the physical contact between the two hands (feet). Accordingly, the Contact category is applied to newly generated units in which both hands move or both hands rest/pose. The bilateral movement units are classified according to the presence/absence of physical contact and the dynamics of that contact with three Contact values: (i) *act on each other*, (ii) *act as a unit*, and (iii) *act apart* (Fig. 7, Step 4a). The bilateral Rest/Pose units are classified according to the presence/absence of physical contact and the spatial relation of that contact with three R/P Contact values: (i) *crossed*, (ii) *closed*, and (iii) *open* (Fig. 7, Step 4b).

Short definitions of the Contact values and their reliabilities as well as the short definitions of the R/P Contact values are given in Tab. 6.

In the Contact category, the degree of bihemispheric sensorimotor activation (proprioceptive and tactile) decreases in the order of the three Contact values from left to right as shown in Fig. 7: *act on each other* ⇒ *act as a unit* ⇒ *act apart*. The mutual sensory stimulation of the hands stabilizes the neural control of movements and the body scheme and, with reference to psychological concepts, the body image and specifically the body boundaries. On the other hand, the expressive freedom increases from *act on each other* ⇒ *act as a unit* ⇒ *act apart.*

The aspect of body scheme and body image stabilization to some extent also applies to the three R/P Contact values *crossed* ⇒ *closed* ⇒ *open*, i.e., it decreases in the order of the three values from left to right as shown in Fig. 7 (this order between *crossed* ⇒ *closed*, however, only applies if in the *crossed rest/pose* position there is a mutual touching of the hands and arms; it does not apply to the rare forms of a *crossed rest/pose* position without touch). Most importantly, with regard to human interaction, psychological openness and rapport increase from *crossed* ⇒ *closed* ⇒ *open.*

### **6.2 Generation of the 'to-be-coded' Contact and R/PContact units and selection of the unit phases submitted to Contact assessment**

#### **6.2.1 Generation of the 'to-be-coded' Contact and R/PContact units**

If the right hand and the left hand simultaneously display StructureFocus units (including the copied Structure units with the values *aborted* and *shift*), a temporal overlap of the two units is generated. These temporal overlaps constitute


**Tab. 6:** Short definitions of the Contact and the R/PContact values

\* Interrater reliability as measured with EasyDIAg (from Lausberg & Slöetjes, 2016)

the 'to-be-coded' Contact units that are characterized by the fact that both hands move simultaneously.

Likewise, if the right hand and the left hand simultaneously display Rest/Pose units (*rest/pose* units if the tiers rh\_Rest/Pose\_RX and lh\_Rest/Pose\_RX codings are used, and *r/p rest* and *r/p pose* units if the tiers rh\_R/PStructure\_RX and lh\_R/PStructure\_RX codings are used), a temporal overlap of the two units is generated. These temporal overlaps constitute the 'to-be-coded' R/PContact units that are characterized by the fact that both hands rest or pose simultaneously.

If a 'to-be-coded' Contact unit is directly followed by another 'to-be-coded' Contact unit, i.e., there is no *rest/pose* unit in between them, and both units get the same Contact value, do **not** fuse them (this principle applies to all following assessment steps in NEUROGES®).

If there is a change of the Contact value within a 'to-be-coded' Contact unit, create subunits. The procedure for the segmentation of a unit into subunits is the same as that for the Structure and Focus categories, i.e., the second new Contact unit starts with the second transport phase (for detailed explanation see 4.2).

It is recommended to concatenate the Contact values with the StructureFocus values after the Contact assessment and the R/PContact values with the RestPose values. The concatenation delivers StructureFocusContact and RestPoseR/PContact units, which constitute fine-grained types of kinesic behavior (see 6.6.6).

### **6.2.2 Selection of the unit phases submitted to Contact assessment**

For the Contact assessment three types of 'to-be-coded' Contact units have to be distinguished:


ad (i): These are 'to-be-coded' Contact units that contain a complex phase (of a *phasic* or a *repetitive* unit) in at least one hand. In this type of 'to-be-coded' Contact units, the Contact assessment refers **only** to the complex phase of the unit, for example:


No matter if the complex phase is displayed only by one hand or by both hands, the Contact assessment only refers to the period of time during the complex phase.

ad (ii): These are 'to-be-coded' Contact units which only contain transport or retraction phases (of a *phasic* or a *repetitive* unit) in both hands. For algorithmic reasons, i.e., because of the subsequent Formal Relation assessment, the Contact value is always *prep-retract*, for example:

♦ right hand performs a pointing gesture, while left hand rests (rh pC, lh *rest*) ⇒ rh retracts, while lh rises to be transported to the location where it will execute the next complex phase (rh R, lh T) ⇒ lh performs a pointing gesture, while rh rests (lh pC, rh *rest*) ⇒ lh retracts and rests (lh R). The overlap of the rh and lh sequenced StructureFocus units only contains the retraction phase of the rh unit and the transport phase of the lh unit. Per algorithmic definition, the Contact value is *prep-retract.*

ad (iii): 'to-be-coded' Contact units that contain *irregular, shift* and *aborted* movements. They may contain a preparation or a retraction phase in one hand, but never a complex phase. The Contact assessment refers to the whole unit, for example:


### **6.2.3 Alternative generation of 'to-be-coded' Contact and R/PContact units**

Researchers who start with the Contact category, i.e., who have not applied Module I before, identify all bilateral movements and optionally also all bilateral rests/poses in the ongoing flow of kinesic behavior and classify them directly with the Contact and R/PContact values according to the rules described in 6.2.4.

### **6.3 Criteria for the definition of the Contact and R/PContact values**

The Contact values are defined according to the below listed criteria.

**Presence of Physical Contact between the two hands:** This criterion refers to the presence or absence of physical contact between the moving parts of the hands. The hands have...

	- (i) dynamic: The hands **act on** each other. The spatial relation between the hands changes, e.g. both hands rub or clap on each other. Note that during repetitive complex phases, the dynamic contact between the hands might include short phases of separation, e.g. the hands clap on each other. In rare cases, only one hand acts on the other, e.g. right hand scratches left hand, while left hand scratches leg.

	- (ii) static: The hands **act with** each other. The spatial relation of the hands to each other does not change. The hands have an actively fixated relation to each other. α) The contact between the two hands has already existed during the preceding *rest* and it is maintained during the unit. β) The contact is not established until the complex phase. Then, during the complex phase the hands have an actively fixated relation to each other, e.g. the hands are brought together to form the shape of a triangle. In contrast to dynamic physical contact, the establishment of the contact before the complex phase happens without changes in the effort factors. The effort flow is bound and the effort space is direct.

### **StructureFocus ⇔ Contact:**

The three Contact values are each associated typically with specific Module I StructureFocus values.


The R/P Contact values that serve to classify *rest/pose* units and *r/p rest* and *r/p pose* units, respectively, are defined by two criteria:

	- (i) no contact: The resting/posing hands do not touch each other.
	- (ii) contact: The resting/posing hands touch each other.

The resting/posing hands are...


### **6.4 Definitions of the Contact and R/PContact values**

### **6.4.1** *act on each other*

#### **Short definition**

#### THE HANDS DYNAMICALLY TOUCH EACH OTHER

#### **Definition**

The two hands act on each other. Both hands actively contribute to establishing a physical contact between them. Or, only one hand actively contributes to establishing physical contact between the hands, while the other hand acts on another focus. The spatial relation between the acting parts of the hands changes, e.g. both hands rub or clap on each other. Note that during the repetitive complex phases, the dynamic contact between the hands may include short phases of separation, e.g. when the hands clap on each other. In bi-phasic or bi-repetitive units, the *acting on each other* can result in sounds. The contact is often characterized by changes in the effort factors. Note that in units with a *phasic* or a *repetitive* Structure, the assessment refers to the complex phase only, while in units with *irregular, shift* and *aborted* Structure it refers to the whole unit.

#### **Meeting the criteria**


#### **Examples for** *act on each other* **units**

To illustrate the relation between the values of the Module I and II, at the end of each example, first the Module I StructureFocus values are given, and then the Module II Contact values, and–in anticipation of the next step in Module II–the Formal Relation values. Since the principle of Concatenation is always the fusion of the right hand StructureFocus value + left hand StructureFocus value + Contact value, the Concatenation is only reported in this first example.


### **Differentiate** *act on each other* **from** …

# *act as a unit*: A confusion between *act on each other* and *act as a unit* may occur, if at all, only for short touches.

In an *act on each other* unit, the two hands are in dynamic contact. The dynamic contact phase typically lasts for a while. If the touch is short, the presence of efforts helps to distinguish it from *act as a unit* units, as in *act on each other* units, the contact is characterized by changes in the effort factors (or in the pre-efforts or the tension flow rhythms), such as acceleration and increase of strength, e.g. a clap of the hands. A bimanual *repetitive* unit with repetitive short touches is almost always *act on each other*.

In contrast, a very short touch of the hands is an *act as a unit* unit, if the movement flow is bound and the effort space is direct. This constellation indicates that the hands create a bimanual shape and to achieve this, they *act as a unit*.

#### **6.4.2** *act as a unit*

#### **Short definition**

### THE TWO HANDS ARE IN TOUCH WITH A FIXED CONFIGURATION AND TAKE A JOINT ACTION

#### **Definition**

The term 'unit' in *act as a unit* is used here to indicate that the two hands are in touch with a fixed configuration and they take a joint action. Thus, the two hands behave as if they were one. The physical contact between the hands has typically already existed during the preceding *rest position* and it is maintained during the unit. Note that in units with a *phasic* or a *repetitive* Structure, the assessment refers to the complex phase only, while in units with an *irregular, shift* and *aborted* Structure it refers to the whole unit. For bimanual *phasic* and *repetitive* units, there are two subtypes:

(i) The static physical contact is first established at the beginning of the complex phase. In this case, the hands typically rest separately. During the transport phase, they approach each other (T). A contact is established without changes in the effort factors. The effort flow is bound and the effort space is direct. There may be a static phasic complex phase, e.g. both hands form a triangle and hold it for a while to present it to the addressee (static pC). The static contact between the two hands may be short, as it serves only to show a form or indicate a location (see also Training video Expert). Or, there may be a motion phasic complex phase, e.g. immediately after having established the form of a triangle the hands move while maintaining this shape (motion pC or rC). Note that if there is a long hold after the establishment of the bimanual shape before the hands start moving, this is coded as two complex phases. As an example, the hands establish a static bimanual hand shape and hold it for some time (1st static pC), then they perform another gesture by keeping this shape (2nd motion pC), e.g. both hands form a triangle and hold it for a while to present it to the addressee (1st static pC), and then they point while maintaining the bimanual shape of a triangle (2nd motion pC).

(ii) The static physical contact has already existed before the complex phase. The pre-existing contact is maintained during the complex phase. The fixed bimanual configuration may be taken over from the preceding *rest* or *pose position* or from the preceding complex phase. In the first case, the hands rest in a position in which they are in touch with each other, e.g. folded hands. They keep this bimanual configuration, while they rise (T) and while they perform the complex phase (C), e.g. they perform a *baton* or an *egocentric deictic* with folded hands. In the latter case, in a sequence of complex phases within the Contact unit, in the first complex phase the hands adopt a fixed bimanual shape, e.g. hands in prayer position point, and they keep this bimanual shape during the following complex phase, e.g. while maintaining the prayer position, they perform a *form presentation*.

#### **Meeting the criteria**


Exceptions from this rule are rare. As an example, the right hand repetitively opens to a semicircle and closes to a fist (rh rC). The left hand only joins the right hand once, i.e., it adopts the shape of a semicircle (lh pC) and together with the right hand they form a full circle. In this case, in Module I the right and left hand units have different Structure values.

**Occurrence**: *Act as a unit* movements were investigated in 120 individuals of the NEUROGES® archive and were displayed by 53 % (64/120) of the individuals.


#### **Examples for** *act as a unit* **units**


(Note that the Focus value *on body* refers to the thigh, as the hands do not focus on each other but on the thigh.)

### **Differentiate** *act as a unit* **from** …

# *act on each other*: see above


### **6.4.3** *act apart*

### **Short definition**

### BOTH HANDS ACT SIMULTANEOUSLY WITHOUT TOUCHING EACH OTHER

### **General definition**

The two hands move without touching each other. There is neither static nor dynamic physical contact between the hands. Note that in units with a *phasic* or a *repetitive* Structure, the assessment refers to the complex phase only, while in units with an *irregular, shift* and *aborted* Structure it refers to the whole unit.

#### **Meeting the criteria**

**Presence of Physical Contact**: There is no physical contact between the acting parts of the two hands.

### **Quality of Physical Contact:** -


#### **Examples for** *act apart* **units**


### **Differentiate** *act apart* **from** …

# *act on each other*: A very short touch of the hands is tolerated within an *act apart* unit, if the touch is "en passant" and seems to be accidental. The touch point represents one point on the trajectories of the two hands. Apart from that touch point, the trajectories of the two hands are separate.

However, if the touch constitutes the complex phase of a *phasic* or a *repetitive* unit, an *act on each other* unit is coded. Note that the same logic applies to short touches in the Focus value *in space* in Step 3 / Module I.

### **6.4.4 Special template value** *prep-retract*

In bilateral units with a *phasic* or *repetitive* Structure, the Contact value refers to the complex phase. The value *prep-retract* is provided for those bilateral units with a *phasic* or *repetitive* Structure that contain only preparation or retraction phases, e.g. one hand is in the preparation phase, while the other hand is in the retraction phase. The value facilitates the preparation of the 'to-be-coded' Formal Relation units.

♦ i) right hand and left hand rest folded on table ⇒ right hand raises (cT), points to the right (pC), and retracts (cR). During the right hand retraction phase, the left hand raises (cT). While right hand rests again, the left hand points to the left (pC) and then retracts (cR) (Module I: rh + lh *phasic in space*; Module II: *prep-retract,* no Formal Relation assessment).

### **6.4.5** *r/p crossed*

### **Short definition**

### IN REST OR POSE POSITION THE KNUCKLES OF THE RIGHT AND LEFT HANDS ARE CROSSED

### **Definition**

From a frontal perspective, the knuckles of the right and left hands and the ankles of the right and left feet, respectively, are crossed. Thus, the upper limbs and the lower limbs, respectively, form an over-closed configuration. The right and left limbs may be in touch or not.

### **Meeting the criteria**


### **Examples for** *crossed* **units**


### **Differentiate** *crossed* **from** …

# *closed*: The *closed* position of the upper limbs closes or completes the circle formed by the right and left arms, but it does not lock the person up. The physical contact between the limbs is obligatory, but the knuckles are not crossed.

In contrast, a *crossed* position results in a closing towards the environment.

## **6.4.6** *r/p closed*

### **Short definition**

### IN REST OR POSE POSITION THE RIGHT AND LEFT HANDS TOUCH EACH OTHER BUT THE KNUCKLES ARE NOT CROSSED

### **Definition**

The right and left limbs are in touch with each other but the knuckles and ankles, respectively, are not crossed. Often, the upper limbs complete a round form.

### **Meeting the criteria**

**Crossing:** From a frontal perspective, the knuckles and ankles are not crossed.

**Physical Contact**: The right and left limbs are in touch with each other.

### **Examples for** *closed* **units**


### **6.4.7** *r/p open*

### **Short definition**

### IN REST OR POSE POSITION THE RIGHT AND LEFT HANDS DO NOT TOUCH EACH OTHER AND THE KNUCKLES ARE NOT CROSSED

### **Definition**

There is no touch and no crossing, i.e., the right and left limbs are not in touch with each other and the knuckles and ankles, respectively, are not crossed. From a frontal perspective, the position is open.

### **Meeting the criteria**

**Crossing:** From a frontal perspective, the knuckles and ankles, respectively, are not crossed.

**Physical Contact**: There is no physical contact between the right and left limbs.

### **Examples for** *open* **units**

♦ i) upper limbs *rest*: The right hand rests on right arm rest, left hand rests on left arm rest


#### **Differentiate** *open* **from** …

# *crossed*: Just like an *open* position, also a *crossed* position can be without mutual touching of the right and left limbs. However, in a crossed position, the knuckles and ankles, respectively, are crossed from a frontal perspective.

### **6.5 Generation of StructureFocusContact units and RestPoseR/PContact units**

After the Contact category assessment, the StructureFocus units and the Contact units are concatenated. Researchers who also examine Rest/Pose concatenate these units (the *rest/pose* units from the Activation category assessment or the *rest* and *pose* units from the Structure category assessment) with the R/P Contact units with the same procedure. The StructureFocusContact units and the RestPoseR/PContact units provide complex information about bilateral movements and rests or poses. Each Concatenation value contains the StructureFocus (and Rest/Pose) value of the right hand, the StructureFocus (and Rest/Pose) value of the left hand, and the Contact (and R/P Contact) value of both hands (see 6.6.6), e.g. right hand *irregular on body* + left hand *irregular on body* + both hands *act on each other*, or right hand *rest* + left hand *rest* + both hands *crossed.* In the examples, the corresponding Concatenation values are *irregular on body irregular on body act on each other* and *rest rest crossed*. Thereby, the interpretation of the kinesic behavior becomes even more specific.

The concatenation may theoretically result in a high number of different StructureFocusContact Concatenation values, as the 16 right hand StructureFocus values times the 16 left hand StructureFocus values times the 3 Contact values equal 768 different Concatenation values (for the Rest/PoseR/PContact values it is only 6). However, by definition certain combinations cannot occur. As an example, the combination right hand *irregular on body* + left hand *phasic in space* + both hands *act as a unit* is not possible, as the Contact value *act as a unit* can only co-occur with right and left hand units that have the same Focus value. Furthermore, several combinations are rare in naturalistic data, e.g. right hand *irregular on separate object* + left hand *repetitive on body* + both hands *act apart*. The ELAN function Annotation Statistics provides a fast overview of the most frequent bilateral combinations (see 6.6.7), e.g. an individual may display 7 *irregular on body irregular on body act on each other* units, 21 *phasic in space phasic in space act apart* units, and 3 *repetitive on body repetitive on body act as a unit* units.
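The combinatorics and the frequency overview can be illustrated with a short Python sketch (a hedged illustration only; the counts and unit values below are invented example data, and this is not how ELAN's Annotation Statistics is implemented):

```python
from collections import Counter

# 16 right hand StructureFocus values x 16 left hand StructureFocus values
# x 3 Contact values give at most 768 theoretical Concatenation values
theoretical_combinations = 16 * 16 * 3
print(theoretical_combinations)  # -> 768

# A frequency count over the Concatenation values gives the same kind of
# overview as ELAN's Annotation Statistics (invented example data)
units = (
    ["irregular on body irregular on body act on each other"] * 7
    + ["phasic in space phasic in space act apart"] * 21
    + ["repetitive on body repetitive on body act as a unit"] * 3
)
for value, count in Counter(units).most_common():
    print(f"{count:3d}  {value}")
```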

## **6.6 Procedure for Step 4 / Module II in NEUROGES® -ELAN**

### **6.6.1 Generation of the right hand Unilateral StructureFocus units**

The tier rh\_StructureFocus contains all units of the right hand, i.e., unilateral right hand units as well as right hand units that are part of bilateral units, which are accompanied by a simultaneous left hand unit. In the procedure below, the right hand Unilateral StructureFocus units are generated by subtracting the left hand StructureFocus units from the right hand StructureFocus units (including units with copied *aborted* and *shift* values).
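The logic of the Subtraction function can be sketched as follows (a minimal Python illustration of interval subtraction, not ELAN's actual implementation; the tuple representation (start_ms, end_ms, value) is an assumption made for this sketch):

```python
def subtract_units(minuend, subtrahend):
    """Subtract the time spans of `subtrahend` units from `minuend` units.

    Units are (start_ms, end_ms, value) tuples; the remaining fragments
    keep the value of the minuend unit.
    """
    result = []
    for start, end, value in minuend:
        segments = [(start, end)]
        for s2, e2, _ in subtrahend:
            new_segments = []
            for s, e in segments:
                if e2 <= s or s2 >= e:       # no overlap: keep segment as is
                    new_segments.append((s, e))
                else:                        # overlap: keep non-overlapping parts
                    if s < s2:
                        new_segments.append((s, s2))
                    if e2 < e:
                        new_segments.append((e2, e))
            segments = new_segments
        result.extend((s, e, value) for s, e in segments)
    return result

# Right hand units minus left hand units -> unilateral right hand units
rh = [(0, 1000, "phasic in space"), (2000, 3000, "repetitive on body")]
lh = [(500, 2500, "phasic in space")]
print(subtract_units(rh, lh))
# -> [(0, 500, 'phasic in space'), (2500, 3000, 'repetitive on body')]
```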

Open the eaf file with the StructureFocus units (Step 3 / Module I codings).

The subtraction procedure can be conducted for multiple eafs at a time. File > Multiple file processing > Annotations from Subtraction.

Step 1/4: File and Tier Selection.

Select files to use for computation:

Select files from domain. Click on the button Domain.

If you have not yet defined a domain, press the button New Domain > Specify New Domain > Add the Folder.

If you have already defined a domain > Select an existing domain > Load.

Select tiers to use for computation:

rh\_StructureFocus\_RX and lh\_StructureFocus\_RX.

Next.

Step 2/4: Subtract Computation Criteria.

Create annotation based on:

Subtraction. Subtract from tier:

rh\_StructureFocus\_RX.

Next.

Step 3/4: Destination Tier Name Specification.

Enter name for destination tier: rh\_Unilateral\_StructureFocus\_RX.

Destination tier is a root tier.

Select a linguistic type for destination tier (by clicking): Notes.

Next.

Step 4/4: Destination Tier Value Specification.

Specify the value for the destination tier.

Value of the annotation.

Finish.

If you want to conduct the subtraction procedure only for one eaf, proceed as follows:

Apply the function: Tier > Create Annotations from Subtraction.

Step 1/4: File and Tier Selection.

Select files to use for computation:

Use currently opened file.

Proceed as described above.

### **6.6.2 Generation of the left hand Unilateral StructureFocus units**

The left hand Unilateral StructureFocus units are generated by subtracting the right hand StructureFocus units from the left hand StructureFocus units. The procedure is analogous to the procedure for the right hand described above. File > Multiple file processing > Annotations from Subtraction.

Step 1/4: File and Tier Selection.

Select files to use for computation:

Select files from domain. Click on the button Domain.

Select an existing domain. Load.

Select tiers to use for computation:

lh\_StructureFocus\_RX and rh\_StructureFocus\_RX.

Next.

Step 2/4: Subtract Computation Criteria.

Create annotation based on:

Subtraction. Subtract from tier:

lh\_StructureFocus\_RX.

Next.

Step 3/4: Destination Tier Name Specification.

Enter name for destination tier: lh\_Unilateral\_StructureFocus\_RX.

Destination tier is a root tier.

Select a linguistic type for destination tier (by clicking): Notes.

Next.

Step 4/4: Destination Tier Value Specification.

Specify the value for the destination tier.

Value of the annotation.

Finish.

### **6.6.3 Generation of Bilateral 'to-be-coded' Contact and R/PContact units**

In this procedure, units are generated in which both hands move simultaneously. These bilateral 'to-be-coded' Contact units are the overlaps of the right hand and left hand StructureFocus units and the copied *aborted* and *shift* units from the tiers rh\_StructureFocus\_RX and lh\_StructureFocus\_RX. The new tier with the overlap units is labelled bh\_Contact\_RX.

Researchers who apply the R/P Contact assessment proceed analogously in order to generate the overlaps of the right hand and left hand *rest/pose* units from the tiers rh\_Rest/Pose\_RX and lh\_Rest/Pose\_RX, or the overlaps of the right hand and left hand *r/p rest* and *r/p pose* units from the tiers rh\_R/PStructure\_RX and lh\_R/PStructure\_RX. The new tier with the overlap units is labelled bh\_R/PContact\_RX.
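Conceptually, the Overlaps function intersects the right hand and left hand units in time. A minimal Python sketch (not ELAN's implementation; the (start_ms, end_ms, value) tuples are an assumed representation for illustration):

```python
def overlap_units(rh, lh):
    """Intersect right and left hand units; each overlap unit gets the
    concatenated values (right hand value first, then left hand value)."""
    result = []
    for s1, e1, v1 in rh:
        for s2, e2, v2 in lh:
            start, end = max(s1, s2), min(e1, e2)
            if start < end:                  # non-empty temporal overlap
                result.append((start, end, f"{v1} {v2}"))
    return sorted(result)

rh = [(0, 1000, "phasic in space")]
lh = [(500, 2500, "irregular on body")]
print(overlap_units(rh, lh))
# -> [(500, 1000, 'phasic in space irregular on body')]
```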

The overlap procedure can be conducted for multiple eafs at a time. File > Multiple file processing > Annotations from Overlaps.

Step 1/4: File and Tier Selection.

Select files to use for computation:

Select files from domain. Click on the button Domain.

Select an existing domain. Load.

Select tiers to use for computation:

lh\_StructureFocus\_RX and rh\_StructureFocus\_RX.

Next.

Step 2/4: Overlaps Computation Criteria.

Create annotation when annotations overlap:

regardless of their annotation values.

Next.

Step 3/4: Destination Tier Name Specification.

Enter name for destination tier: bh\_Contact\_RX.

Destination tier is a root tier.

Select a linguistic type for destination tier: click on Contact.

Next.

Step 4/4: Destination Tier Value Specification.

Concatenate the values of the annotations.

Compute values in the selected tier order:

Establish the following order by pressing **^**:

**first** rh\_StructureFocus\_RX, **second** lh\_StructureFocus\_RX.

Finish.

Now you have the following new tier that contains units in which both hands move simultaneously: bh\_Contact\_RX.

Researchers who assess the *rest/pose* units or *r/p rest* and *r/p pose* units proceed analogously. Thereby, they generate a tier that contains units in which both hands rest/pose, rest or pose simultaneously: bh\_R/PContact\_RX.

For some research questions it is of interest to have the Contact units and the R/PContact units on a common tier. As an example, a researcher might want to fuse the corresponding Contact and R/PContact values (*act apart* with *open, act as a unit* with *closed, act on each other* with *crossed*). In this case, the Contact and R/PContact tiers are merged (Tier > Merge Tiers… (Classic)).

### **6.6.4 Coding the 'to-be-coded' Contact and R/PContact units**

The units on the tier bh\_Contact\_RX are now taken as the basis for the assessment of the Contact category. Therefore, they are termed 'to-be-coded' Contact units. Each 'to-be-coded' unit still has the copied concatenated StructureFocus values, first of the right hand and second of the left hand.

All 'to-be-coded' Contact units on the new tier bh\_Contact\_RX are now assessed with the four Contact values listed below. Technically, by double-clicking on the unit and clicking on the correct Contact value, the StructureFocus value is replaced by the Contact value:

*act on each other*
*act as a unit*
*act apart*
*prep-retract*

Accordingly, units on the tier bh\_R/PContact\_RX are taken as the basis for the assessment of the R/PContact category:

*r/p crossed*
*r/p closed*
*r/p open*

In addition, the value *?* is provided (see 4.5.2).

**Reminder:** When assessing the Contact category, three types of 'to-be-coded' Contact units have to be distinguished (see 6.2.2.):


If the Contact value changes within a 'to-be-coded' Contact unit, replace the 'to-be-coded' unit by subunits. As an example, a 'to-be-coded' Contact unit turns out to contain different Contact values, e.g. first *act as a unit* and then *act apart.* Delete the 'to-be-coded' Contact unit and replace it by two new units, e.g. a unit with the value *act as a unit* and a unit with the value *act apart* (for the precise segmentation rules, see 4.2.1).

It might occur that the automatically generated 'to-be-coded' Contact units are very short. Automatically generated units that are shorter than 200 ms should be deleted, since they are often only artifacts of imprecise tagging (i.e., in Module I the beginnings and endings of the right hand and left hand units have not been tagged precisely) and since they often do not allow sufficient insight into the movement. Choose the tier bh\_Contact\_RX in the Grid. The last column shows the durations of the units. In the Grid you can delete all 'to-be-coded' Contact units that are shorter than 200 ms.
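The 200 ms criterion amounts to a simple duration filter, sketched here in Python (an illustration only; in practice the deletion is done manually in the ELAN Grid, and the (start_ms, end_ms, value) tuples are an assumed representation):

```python
MIN_DURATION_MS = 200  # threshold stated in the manual

def drop_short_units(units, min_ms=MIN_DURATION_MS):
    """Remove automatically generated units shorter than `min_ms`,
    which are often artifacts of imprecise tagging."""
    return [(s, e, v) for s, e, v in units if e - s >= min_ms]

units = [(0, 150, "to-be-coded"), (150, 900, "to-be-coded")]
print(drop_short_units(units))  # -> [(150, 900, 'to-be-coded')]
```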

### **6.6.5 Alternative procedure: Manual generation of 'to-be-coded' Contact and R/PContact units**

If you start the movement behavior analysis with the Contact category, i.e., you have not assessed Module I before, use the alternative procedure of manual unit generation. In this procedure, the tiers bh\_Contact\_R0 (and bf\_Contact\_R0) are used that are provided in the template. Directly tag all bilateral movements and optionally all bilateral rests/poses and assess the Contact and R/PContact value of the unit according to the rules described in 6.6.4.

### **6.6.6 Concatenation of StructureFocus values with Contact values and of Rest/Pose values with R/PContact values**

This procedure concatenates the StructureFocus values and the Contact values and thereby generates units with StructureFocusContact values. Technically, the fine-grained Contact units with the Contact values are taken as the basis for the concatenation, and the StructureFocus values are added. The Rest/PoseR/PContact values are generated analogously.

The procedure can be conducted for multiple files at a time, e.g. for all eafs in which you have coded Structure and Focus.

File > Multiple file processing > Annotations from Overlaps.

Step 1/4: File and Tier Selection.

Select files to use in overlaps computation:

Select files from domain. Click on the button Domain.

If you have not yet defined a domain, press the button New Domain > Specify New Domain > Add the Folder.

If you have already defined a domain > Select an existing domain > Load.

Select tiers to use for computation:

rh\_StructureFocus\_RX, lh\_StructureFocus\_RX, and bh\_Contact\_RX.

Next.

Step 2/4: Overlaps Computation Criteria.

Create annotation when annotations overlap:

regardless of their annotation values.

Next.

Step 3/4: Destination Tier Name Specification.

Enter name for destination tier: bh\_StructureFocusContact\_RX.

Destination tier is a root tier.

Select a linguistic type for destination tier: click on Notes.

Next.

Step 4/4: Destination Tier Value Specification.

Concatenate the values of the annotations.

Compute values in the selected tier order:

Establish the following order by pressing **^**:

**first** rh\_StructureFocus\_RX, **second** lh\_StructureFocus\_RX, **third** bh\_Contact\_RX.

Finish.

Now you have a new tier that contains Contact units with the right hand StructureFocus value, the left hand StructureFocus value, and the Contact value: bh\_StructureFocusContact\_RX.

If you want to conduct the concatenation procedure for one file only, proceed as follows:

Apply the function: Tier > Create Annotations from Overlaps.

Step 1/4: File and Tier Selection.

Select files to use in overlaps computation:

Use currently opened file.

Proceed as described above.

### **6.6.7 Frequency distribution of the StructureFocusContact and the Rest/PoseR/PContact values**

In order to get an overview on the most frequent StructureFocusContact (and Rest/PoseR/PContact) combinations, conduct descriptive statistics by applying the function:

File > Multiple file processing > Load Domain

> Select an existing domain > Load.

Tier Selection: Mark bh\_StructureFocusContact\_RX.

Press button: Update statistics.

## **7 The Formal Relation category**

### **7.1 Definition of the Formal Relation category**

The Formal Relation category assesses the symmetry and dominance between the two hands (two feet) in the complex phase of bilateral movement units, i.e., in bilateral units with a *phasic* or a *repetitive* Structure. Four Formal Relation values are provided: *symmetrical*, *right hand dominance*, *left hand dominance*, and *asymmetrical* (Fig. 8).

The Formal Relation assessment refers only to the complex phase, as the complex phase is the realization of a concept. Furthermore, the assessment refers to the distal parts of the limbs: for the upper limbs, these are the hands, and for the lower limbs, the feet. In contrast to the proximal parts of the limbs, which can be controlled by both hemispheres via ipsilateral and contralateral pathways, the distal parts of the limbs can only be controlled by the contralateral hemisphere. Therefore, the hemispheric generation can be inferred only for concepts that are executed by the hands or feet.

The complexity of neural control increases in the order of the four values as shown in Fig. 8: *symmetrical* ⇒ *right hand dominance* ⇒ *left hand dominance* ⇒ *asymmetrical*. The order in which *right hand dominance* is followed by *left hand dominance* applies to right-handers; it can be different for left-handers (see book I, 5.1.3.3). Furthermore, when other factors are controlled for, *right hand dominance* in bilateral complex phases suggests a left hemispheric generation and, vice versa, *left hand dominance* a right hemispheric generation (book I, 2.5).

Thus, the Formal Relation category examines the complexity of neural control in the bimanual realization of concepts. Furthermore, it provides insight into which cerebral hemisphere primarily generates the concept. It is therefore suited for research on executive functions and motor performance, on hemispheric specialization, and for developmental research.

Given that only few researchers will analyze the Formal Relation category for the feet (although it is possible), in this chapter the terminology refers to the hands. However, the definitions of the Formal Relation values for the feet are the same as for the hands, and they apply to the sitting position in which both feet are free to move (for the assessment of the feet in a standing person in whom at least one leg is a supporting, not-free leg, see 3.3). Below, for each Formal Relation value, examples ♦ are given for the lower limbs. In the NEUROGES®-ELAN template, lower limb researchers have to create the following tiers: *bf symmetrical, rf dominance, lf dominance, bf asymmetrical.*

**Tab. 7:** Short definitions and reliabilities of the Formal Relation values

\* Interrater reliability as measured with EasyDIAg (from Lausberg & Slöetjes, 2016)

#### **Function of the Formal Relation category (step 5) within the complete algorithmic analysis**

Within the complete algorithmic analysis with 7 assessment steps, step 5 (Formal Relation category) marks a change in the analysis, as from now on only conceptual movements, i.e., those with a *phasic* or a *repetitive* Structure, are assessed (see Fig. 8, diamond above Step 5). While steps 1–4 include all body movements, steps 5–7 focus on conceptual movements only. Thus, with regard to content, the Formal Relation category and the subsequent Module III Function and Type categories primarily refer to the concepts that are realized in body movements.

Furthermore, the Formal Relation category serves to prepare bilateral units for the Module III Function and Type assessment. The four Formal Relation values determine whether in Module III the Function and Type is assessed only for the right hand in the bilateral movement (*right hand dominance*), only for the left hand in the bilateral movement (*left hand dominance*), for both hands together (*symmetrical* or *asymmetrical*), or separately for the right hand and for the left hand (*asymmetrical*, e.g. the left hand makes a rolling movement while the right hand points to an external location).

### **7.2 Generation of the 'to-be-coded' Formal Relation units and selection of the unit phases submitted to Formal Relation assessment**

### **7.2.1 Generation of the 'to-be-coded' Formal Relation units**

The 'to-be-coded' Formal Relation units are generated from the Contact units. Among the Contact units, only those that have a *phasic* or *repetitive* Structure are used to generate the Formal Relation units. The exception is Contact units with a *phasic* or *repetitive* Structure that have the *prep-retract* value; these are not assessed.

### **7.2.2 Selection of the unit phases submitted to Formal Relation assessment**

The Formal Relation assessment refers to the relation between the two hands during the complex phase. It does not matter if the complex phase is displayed by one hand alone or by both hands. The phases within the bimanual movement in which both hands perform a transport or a retraction phase (*prep-retract* value) are not assessed (same rule as for the Focus and Contact assessments for units with a *phasic* or *repetitive* Structure).

If a 'to-be-coded' Formal Relation unit is directly followed by another 'to-be-coded' Formal Relation unit and both units get the same Formal Relation value, do **not** fuse them, even if there is no gap in-between them (same rule as for the previous categories).

If there is a change of the Formal Relation value within a 'to-be-coded' unit, create subunits. Concerning the precise segmentation of a unit into subunits, i.e., where exactly to segment the unit, please see the procedure described in detail in 4.2 (same rule as for the previous categories).

### **7.2.3 Alternative generation of 'to-be-coded' Formal Relation units**

Researchers who only analyze the Formal Relation category or the Formal Relation category and the subsequent Function or Type categories have to select all bimanual movements from the ongoing stream of kinesic behavior that show a concept realization in at least one hand according to the definitions of *phasic* and *repetitive* units (see 4.3, 4.4.2, 4.4.3). The 'to-be-coded' Formal Relation units are assessed according to the rules described in 7.2.2.

### **7.3 Criteria for the definition of the Formal Relation values**

The Formal Relation values are defined by the two principal criteria symmetry and dominance. Furthermore, Structure, Focus, and Contact values of the 'to-be-coded' Formal Relation unit provide some hints for the Formal Relation assessment.

Concerning the two principal criteria, the simple assessment algorithm is as follows:

Is there symmetry of the trajectories of the two hands?

> If yes, the value is *symmetrical*.

> If no, is there dominance of one hand?

>> If yes, the value is *right hand dominance* or *left hand dominance*.

>> If no, the value is *asymmetrical*.
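The assessment algorithm above can be summarized as a small decision function (a Python sketch for illustration; the symmetry judgment and the dominance judgment are assumed to have been made by the coder beforehand according to the criteria defined in this chapter):

```python
def formal_relation(symmetrical, dominant_hand=None):
    """Return the Formal Relation value for a bilateral complex phase.

    symmetrical:   True if the two hands move on symmetrical trajectories.
    dominant_hand: "right", "left", or None if no single hand is dominant.
    """
    if symmetrical:
        return "symmetrical"
    if dominant_hand == "right":
        return "right hand dominance"
    if dominant_hand == "left":
        return "left hand dominance"
    return "asymmetrical"

print(formal_relation(True))            # -> symmetrical
print(formal_relation(False, "left"))   # -> left hand dominance
print(formal_relation(False))           # -> asymmetrical
```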

The two principal criteria symmetry and dominance are defined as follows:

**Symmetry (symmetry vs. asymmetry):** The definition of symmetry is based on geometrical concepts. The two hands move (typically with the same effort) on symmetrical trajectories. The movement of one hand is indistinguishable from the movement of the other hand with respect to a point, an axis, or a plane of reflection, or with respect to a translation. There are different subtypes of bimanual symmetry in hand movements:

(i) body midline symmetry:

The right and left hands move synchronously and their trajectories show a reflection symmetry to the sagittal plane that goes through the body midline, e.g. when the gesturer traces the shape of a butterfly centrally in front of his/her trunk.


The right and left hands move synchronously parallel, e.g. the right hand traces in the upper right body-external free space a semi-circle that is open to the right, while the left hand traces in the lower left space a semi-circle that is open to the right.

(iv) temporally alternate symmetry:

The trajectories of the right and left hands are symmetrical as defined in (i)–(iii), but the hands move alternately, e.g. the gesturer pantomimes Nordic Walking (per definition, this subtype cannot occur in *symmetrical act as a unit* movements).

**Dominance (dominance of one hand vs. equal dominance of both hands):** This criterion refers to the presence or absence of the dominance of one hand. Dominance is present if one hand predominantly performs the complex phase, or, vice versa, if one hand is non-dominant during the complex phase. Three subtypes of non-dominance can be distinguished (it is helpful to study the video examples given below in the NEUROGES® interactive video learning tool, section training videos):

	- Expert: *left hand dominance* 00:00:02.830
	- Expert: *right hand dominance* while left hand retracts and minimally mirrors the right hand complex phase 00:00:54.790
	- Beginner: *right hand dominance* during left hand retraction 00:00:31.170 and 00:00:31.760
	- Advanced: *left hand dominance* during right hand transport phase 00:00:07.240

(i) transport or retraction phase of the non-dominant hand:

During the transport or retraction phases, the non-dominant hand might minimally mirror the dominant hand (blend with subtype iii), or there may be a slight retardation at the time when the dominant hand performs a complex phase.


(ii) holding of the non-dominant hand against gravity:

The non-dominant hand is held against gravity, while the dominant hand is performing the complex phase. If the non-dominant hand is held after a partial retraction, typically the hand slightly relaxes while being held.

• Expert: *right hand dominance,* while left hand is held in partial retraction 00:00:53.980

(iii) minimal mirroring of the dominant hand by the non-dominant hand:

The non-dominant hand very roughly and rudimentarily mirrors the movement of the dominant hand, while the dominant hand performs the complex phase. The non-dominant hand is indistinct with regard to trajectory, hand orientation, and hand shape. There is no precise articulation of the fingers and the hand. In anticipation of the Module III Function coding, in the non-dominant hand alone, the Function of the movement could not be assessed, because the movement is so indistinct. If at all, only by observing the dominant hand, the Function of the non-dominant hand could be guessed.

• Expert: *right hand dominance* 00:00:05.080, 00:00:51.890, 00:00:54.790; *left hand dominance* 00:00:05.560

**StructureFocus:** The StructureFocus values of the underlying right and left hand units do not strongly point the way to the Formal Relation value. Often the right and left hands have the same StructureFocus values, and this applies more or less to all four Formal Relation values.

However, vice versa, when inferring the StructureFocus values from the Formal Relation value, there is one fixed association: a *symmetrical* value is almost exclusively based on right and left hand units that have the **same StructureFocus** value.

**Contact:** There are some specific relations between the three Contact values and the four Formal Relation values:



## **7.4 Definitions of the Formal Relation values**

### **7.4.1 symmetrical**

### **Short definition**

### BOTH HANDS MOVE ON SYMMETRICAL TRAJECTORIES

### **Definition**

During the bimanual complex phase, the two hands move synchronously on symmetrical trajectories.

### **Meeting the criteria**

**Symmetry**: The trajectories of the right and left hands are symmetrical.

**Dominance**: Symmetry implies that both hands are equally dominant.

**StructureFocus**: When looking at the StructureFocus values that are at the basis of the Formal Relation unit, the right and left hands **must** have the same Structure and Focus values.

The only exception to this rule can occur in *symmetrical* units with alternate movements (symmetrical subtype iv). In these units, in rare cases, the combination of a *repetitive* unit in one hand and a *phasic* unit in the other might occur (see below example ♦ iii).


To illustrate the relation between the values of the Modules I, II, and III, at the end of each example, first the Module I StructureFocus values are given, then the Module II Contact–Formal Relation values, and then the Module III Function– Type values. These examples also demonstrate how the Formal Relation value determines if in Module III the Function and Type is assessed for the right hand only, for the left hand only, for both hands together, or for both hands separately. Note that the Function–Type values in the examples are only possible values and not the obligatory choice.


not **on** each other. Therefore, the Focus is *in space* and not *on body*; Module II: *act as a unit*–*symmetrical* subtype i body midline; Module III: bh *form presentation–shape*)


Examples for lower limbs: *act apart–symmetrical, act as a unit–symmetrical,* and *act on each other–symmetrical*

♦ xiii) sitting person with both feet standing on the ground (*rest/pose* unit) ⇒ both forefeet rise while the heels remain on the floor (T) ⇒ both forefeet symmetrically and alternately tap on the floor (rC) ⇒ rest again (*rest/pose* unit) (Module I: rf + lf *repetitive on separate object*; Module II: *act apart*–*symmetrical* subtype iv temporally alternate symmetry; Module III: bf *subject-oriented action*)


### **Differentiate** *symmetrical* **from**…

# *right / left hand dominance*: see below

### **7.4.2** *right hand dominance*

### **Short definition**

### THE RIGHT HAND IS DOMINANT

### **Definition**

In *right hand dominance* units, the right hand displays a complex phase in a distinct manner with regard to trajectory, hand orientation, and hand shape. There is a precise articulation of the fingers and the hand. At the same time, the left hand either displays a transport phase or a retraction phase, or it is held against gravity (often with some relaxation), or it crudely mirrors the right hand. In the latter case, the articulation of the fingers and hand is so crude that no clear trajectory, hand orientation, or hand shape emerges.

### **Meeting the criteria**

**Symmetry:** There is no symmetry.


Note that in *act on each other*–*right hand dominance* units, as a rule, the Contact is established primarily by the right hand, i.e., the right hand acts on the left hand (compare Contact category criterion Presence of Physical Contact subtype β).


Relation is *symmetrical*, and during the second rC subphase there is a *right hand dominance* of the subtype ii) (Module I: rh *repetitive in space*, lh *phasic in space*; Module II: *act apart*–*symmetrical* subunit and *right hand dominance* (subtype ii holding) subunit; Module III: subunit 1: bh *emphasis–back-toss*; unit 2: bh *emphasis–back-toss*)

♦ v) hands rest in lap ⇒ right hand rises (T), left hand remains in lap ⇒ right hand performs repetitive beats on the left hand (rC), while left hand resting in lap displays small mirror movements (rC) ⇒ rh moves back to thigh (complete R) ⇒ rest (Module I: rh *repetitive on body*; Module II: *act on each other*–*right hand dominance* (subtype iii minimal mirroring); Module III: bh *emphasis–batons*)

Examples for lower limbs: *act apart*–*right foot dominance* and *act on each other*– *right foot dominance*


#### **Differentiate** *right hand dominance* **from**…

# *symmetrical*: In *right hand dominance* units, only the subtype in which the non-dominant hand crudely mirrors the dominant hand may be confused with *symmetrical* units.

In *right hand dominance* units of the subtype minimal mirroring of the dominant hand by the non-dominant hand, there is often some kind of symmetry, as the non-dominant hand roughly and rudimentarily mirrors the movement of the dominant hand, while the dominant hand performs the complex phase. However, the non-dominant hand is indistinct with regard to trajectory, hand orientation, and hand shape. There is no precise articulation of the fingers and the hand. In anticipation of the Module III Function coding, in the non-dominant hand alone, the Function could not be assessed, because the movement is so indistinct. As a tip, one could imagine what the bimanual movement would look like if the non-dominant hand were mirrored: a highly indistinct movement would result.

In contrast, in *symmetrical* movements both hands are equally dominant. Even if one hand performs the movement somewhat smaller, the trajectory, the hand orientation, and the hand shape could still be identified.

# *asymmetrical*: In *right hand dominance* units, only the subtype in which the non-dominant hand is held against gravity may be confused with *asymmetrical* units.

In *right hand dominance* units, the non-dominant hand is held against gravity while the dominant hand is performing the complex phase. If the non-dominant hand is held after a partial retraction, the hand slightly relaxes while being held.

In contrast, if in an *asymmetrical* unit one hand is held, the hand is never retracted and never relaxed; it has a distinct hand orientation and hand shape, and it is held in a specific position in the body-external free space. As an example, the left hand pantomimes holding a nail while the right hand pantomimes hammering. Or, the left hand marks a position in space while the right hand traces a path. The motionlessness of the left hand is not to be confused with indistinctness, as the left hand perfectly fulfils its function of representing something static.

#### **7.4.3** *left hand dominance*

#### **Short definition**

#### THE LEFT HAND IS DOMINANT

**analogous to** *right hand dominance*


### **7.4.4** *asymmetrical*

#### **Short definition**

BOTH HANDS MOVE ON ASYMMETRICAL TRAJECTORIES AND ARE EQUALLY DOMINANT

### **Definition**

During the bimanual complex phase, the two hands move on asymmetrical trajectories.

(In anticipation of the Module III Function coding, it shall be noted that the two hands are often complementary in fulfilling a common function).

### **Meeting the criteria**

**Symmetry**: The trajectories of the right and left hands are asymmetrical.

**Dominance**: Both hands are equally dominant.

**StructureFocus**: The Module I units of both hands often have the same Structure and Focus values.

However, sometimes the Structure or Focus values of the units of the right and left hands differ, e.g. when the left hand pantomimes 'nail holding' (*phasic*) and the right hand pantomimes 'hammering' (*repetitive*).


I: lh: *phasic in space*; rh: *repetitive in space*; Module II: *act apart*–*asymmetrical*; Module III: bh *pantomime–transitive*)


Examples for lower limbs: *act apart–asymmetrical, act as a unit–asymmetrical,* and *act on each other–asymmetrical*


### **Differentiate** *asymmetrical* **from**…

# *right / left hand dominance*: see above

## **7.5 Procedure for Step 5 / Module II in NEUROGES® -ELAN**

### **7.5.1 Generation of the 'to-be-coded' Formal Relation units**

The 'to-be-coded' Formal Relation units are generated in two steps:

First, the bh\_StructureFocusContact units are copied.

Second, among these units only those are selected that have a *phasic* or *repetitive* Structure in both hands. Units whose Contact value is *prep-retract* are, however, not selected.

### *7.5.1.1 Copying the bh\_StructureFocusContact units*

The 'to-be-coded' Formal Relation units are first generated by copying the StructureFocusContact units.

Open the eaf file with the Contact units, then proceed as follows:

Apply the function: Tier > Copy Tier.

Select a tier to copy: click on bh\_StructureFocusContact\_RX.

Next.

Select the new parent tier: skip this step.

Next.

Select another linguistic type: click on Formal Relation.

Finish.


When the operation is finished,

apply the function: Tier > Change Tier Attributes.

Scroll down in the list to the end:

Click on bh\_StructureFocusContact\_RX-cp.

Enter the Tier Name: bh\_FormalRelation\_RX ('RX' = your initials).

Enter the Annotator: your name.

Enter the Participant: the identification of the person whom you are going to code.

Change.

Close.

Now, you have the following new tier:

bh\_FormalRelation\_RX

### *7.5.1.2 Selecting units with a* phasic *or* repetitive *Structure value in both hands*

The 'to-be-coded' Formal Relation units still have the concatenated StructureFocusContact values. For the Formal Relation category assessment, among the 'to-be-coded' Formal Relation units only those units are selected that have the following Structure:


An exception applies to units with a *prep-retract* Contact value. These are also deleted, unless your research requires complete units with preparation and retraction phases rather than only the complex phases. In that case, the *prep-retract* units are kept, but they are NOT coded with the Formal Relation category.

All other units are deleted.

Proceed as follows:

In the grid, choose the tier bh\_FormalRelation\_RX.

In the column Annotation, click on the first 'to-be-coded' Contact unit.

If the unit does not fulfill the selection criteria listed above, go to Annotation > Delete Annotation.
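For researchers who pre-process their annotation data outside ELAN, the selection rule of 7.5.1.2 can be sketched in a few lines of Python. The unit representation and field names below are illustrative assumptions, not the eaf format:

```python
# Sketch of the selection rule: keep a 'to-be-coded' Formal Relation unit
# only if both hands have a phasic or repetitive Structure and the Contact
# value is not 'prep-retract'. Field names are hypothetical.
COMPLEX = {"phasic", "repetitive"}

def is_to_be_coded(unit):
    """Return True if the unit qualifies for Formal Relation coding."""
    if unit.get("contact") == "prep-retract":
        return False
    return unit["rh_structure"] in COMPLEX and unit["lh_structure"] in COMPLEX

units = [
    {"rh_structure": "phasic", "lh_structure": "repetitive", "contact": "act apart"},
    {"rh_structure": "phasic", "lh_structure": "irregular", "contact": "act apart"},
    {"rh_structure": "phasic", "lh_structure": "phasic", "contact": "prep-retract"},
]

selected = [u for u in units if is_to_be_coded(u)]
print(len(selected))  # only the first unit survives the selection
```

Units that fail the test correspond to the annotations deleted via Annotation > Delete Annotation.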

### **7.5.2 Assessing the 'to-be-coded' Formal Relation units**

The 'to-be-coded' units are evaluated with the Formal Relation values listed below.

*symmetrical, rh dominance, lh dominance, asymmetrical, (prep-retract), ?* (see 4.5.2).

If the Formal Relation value changes within a 'to-be-coded' Formal Relation unit, replace the 'to-be-coded' unit by the new subunits. As an example, a 'to-be-coded' Formal Relation unit turns out to contain different Formal Relation values, e.g. first *symmetrical* and then *right hand dominance.* Delete the 'to-be-coded' unit and replace it by two subunits, i.e., a unit with the value *symmetrical* and a unit with the value *right hand dominance* (see Intermediate 00:00:06.320 and compare the unit on the Contact tier and the corresponding units on the Formal Relation tier: the Contact unit is split up into two Formal Relation units).

With regard to the rules for the precise segmentation of a unit into subunits, see 4.2.
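The replacement of a 'to-be-coded' unit by subunits at the points where the value changes can be sketched as follows. The times and the list-of-change-points representation are illustrative assumptions, not the ELAN data model:

```python
# Hypothetical sketch: split a 'to-be-coded' unit into subunits wherever
# the Formal Relation value changes within it. Times are in milliseconds.

def split_unit(start, end, change_points):
    """change_points: sorted list of (onset_ms, value); the first onset
    equals the unit's start. Returns the replacement subunits as
    (start, end, value) tuples."""
    subunits = []
    for i, (onset, value) in enumerate(change_points):
        offset = change_points[i + 1][0] if i + 1 < len(change_points) else end
        subunits.append((onset, offset, value))
    return subunits

# The example from the text: first symmetrical, then right hand dominance
# (the boundary time 7500 ms is invented for illustration).
print(split_unit(6320, 9000, [(6320, "symmetrical"), (7500, "rh dominance")]))
# → [(6320, 7500, 'symmetrical'), (7500, 9000, 'rh dominance')]
```
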

### **7.5.3 Alternative procedure: Manual generation of 'to-be-coded' Formal Relation units**

If you start with the Formal Relation category, i.e., you have not assessed Module I and the Contact category before, use the alternative procedure of manual unit generation. In this procedure, the tiers bh\_FormalRelation\_R0 (and bf\_FormalRelation\_R0) that are provided in the template are used. Directly tag all bilateral movements that show a concept realization in at least one hand (foot) according to the definitions of *phasic* and *repetitive* units (see 4.3, 4.4.2, 4.4.3) and assess the Formal Relation value of the unit according to the rules described in 7.5.2.

### **7.5.4 Optional: Concatenation of the Formal Relation values with the StructureFocusContact values**

In order to obtain complex values, the tiers bh\_FormalRelation\_RX can be concatenated with the StructureFocusContact tiers. The most fine-grained unit segmentation, that of the Formal Relation tier, is automatically adopted.

This procedure can be conducted for multiple files at a time.

In order to be able to use the time-saving Multiple files processing function in ELAN, it is absolutely **crucial that the tier names are written correctly in all eafs**. Small deviations in the spelling of the tier names, e.g. a space instead of no space, or a capital letter instead of a small letter, render the Multiple files processing function ineffective.

File > Multiple files processing > Annotations from Overlaps.

Step 1/4: File and Tier Selection.

Select files to use in overlaps computation:

Select files from domain. Click on the button Domain.

If you have not yet defined a domain, press the button New Domain > Specify New Domain > Add the Folder.

If you had already defined a domain > Select an existing domain > Load.

Select tiers to use for computation:

bh\_StructureFocusContact\_RX and bh\_FormalRelation\_RX.

Next.

Step 2/4: Overlaps Computation Criteria.

Create annotation when annotations overlap:

regardless of their annotation values.

Next.

Step 3/4: Destination Tier Name Specification.

Enter name for destination tier: bh\_StructureFocusContactFormalRelation\_RX (caution: correct spelling).

Destination tier is a root tier.

Select a linguistic type for destination tier: click on Notes.

Next.

Step 4/4: Destination Tier Value Specification.

Concatenate the values of the annotations.

Compute values in the selected tier order:

Establish the following order by pressing **^**:

**first** bh\_StructureFocusContact\_RX and **second** bh\_FormalRelation\_RX.

Finish.

Now you have a new tier bh\_StructureFocusContactFormalRelation\_RX that contains bimanual units with the Structure, Focus, Contact, and Formal Relation values.

If you want to conduct the concatenation procedure for one file only, proceed as follows:

Apply the function: Tier > Create Annotations from Overlaps.

Step 1/4: File and Tier Selection.

Select files to use in overlaps computation:

Use currently opened file.

Proceed as described above.
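The logic behind ELAN's Annotations from Overlaps function, as used here, can be illustrated with a small Python sketch: for each pair of temporally overlapping annotations on the two tiers, a new annotation covering the overlap interval is created, with the values concatenated in the selected tier order. The tuple representation (start, end, value) is an assumption for illustration, not the eaf XML structure:

```python
# Illustrative re-implementation of "Create Annotations from Overlaps"
# with value concatenation; times in milliseconds.

def annotations_from_overlaps(tier_a, tier_b):
    """For every pair of overlapping annotations (start, end, value),
    emit the overlap interval with the values concatenated in tier
    order (first tier_a, then tier_b)."""
    out = []
    for a_start, a_end, a_val in tier_a:
        for b_start, b_end, b_val in tier_b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end:  # genuine temporal overlap
                out.append((start, end, f"{a_val} {b_val}"))
    return sorted(out)

structure_focus_contact = [(0, 2000, "phasic in space act apart")]
formal_relation = [(0, 800, "symmetrical"), (800, 2000, "rh dominance")]

for unit in annotations_from_overlaps(structure_focus_contact, formal_relation):
    print(unit)
```

Note how the finer segmentation of the Formal Relation tier is automatically adopted: the single StructureFocusContact unit is split into two destination units at 800 ms.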

## **IV The Gesture and Action Module (Module III)**

The Gesture and Action module (Module III) offers an analysis of conceptual body movements, i.e., gestures and actions, regarding their emotional, cognitive, physical, or practical functions. Thus, as in the preceding Formal Relation category, only body movement units with a complex phase, which implies the presence of a concept, are analyzed – in contrast to Module I and the Contact category, in which all body movements are analyzed.

The NEUROGES® definition of a function is first based on the proposition that any body movement has a function and that – with the exception of some hyperkinetic syndromes in neuropsychiatric diseases – body movements are not displayed accidentally or randomly. Second, it is grounded in the empirical evidence that body movements not only reflect emotional, cognitive, and interactive processes but likewise affect these processes (see book I, sections I and II). The function of a body movement can be to regulate and process emotional experience, to facilitate speech outflow, to promote cognitive processing, to change the external physical world or the mover's psychosomatic state, etc. It is evident that a body movement not only affects the mover her-/himself but likewise has an effect on the interactive partner. While Module III refers to the within-subject functions, the between-subjects effects of body movements can be examined with NEUROGES® evaluation procedures for interaction (book I, chapter 18).

All parts of the body, i.e., the upper limbs, the lower limbs, the head, and the trunk can be analyzed with Module III.

The function of a body movement determines its form. The form is defined by a cluster of movement features, for example a combination of a one-dimensional path, a hand shape with index pointing, a use of far kinesphere, a bound movement flow, etc. In order to fulfil a specific function, a movement has to have specific features. As an example, if somebody wants to point to something, certain aspects in his movement can be varied, e.g. the dynamics, but other features are essential. For instance, in order to indicate a precise location in space it is most effective to bring the longitudinal hand axis (wrist to finger tips) in line with a vector that leads to the target and to shape the hand in such a way that a tip is created, e.g. the tip of the index finger, which results in a precision of the course of the vector. The imaginary prolongation of this vector leads to the intended target in space. Floppy hand movements with a relaxed hand, for example, would not be effective for pointing out precise locations in space. Or, if both hands create the shape of a triangle, this gesture could refer to all concrete and abstract entities that share aspects of a triangle. However, it would not, for instance, serve to refer to round entities. Thus, there is a bidirectional form to function link: The function requires essential features of movement form and vice versa, the form of a movement allows only for a limited set of functions or meanings. The Module III values refer to these essential features, i.e., those features without which the movement could not fulfil the function.

The two categories of Module III, the Function category and the Type category, offer a coarse and a fine-grained classification, respectively. In the Function category, eleven groups of body movements that share form and function are defined based on the movement form (the Function values). In the Type category, nine of these groups, namely those that classify gestures, can be further specified with 24 Type values. Researchers can apply either category alone, i.e., conduct a Function category analysis or a Type category analysis, or they can analyze both categories (Fig. 9).

The NEUROGES® Gesture and Action module shows similarities with traditional gesture coding systems. However, in contrast to these systems, which often classify the body movements by the verbal context that accompanies the body movement or other non-movement criteria, in NEUROGES® Module III the function is inferred from the form of the movement. Since neuropsychological research provides ample evidence that some gesture types are generated in the right hemisphere independent from left-hemispheric speech production, and phenomenologically, the existence of gesture–speech mismatches has been demonstrated, it is methodologically indicated to investigate gesture as a phenomenon per se, i.e., independent of other modes of expression such as language, prosody, etc. If gesture-speech or multi-modal research is intended, then in a second analysis step the relation between gesture and speech or other cognitive, emotional, and interactive functions is explored. This two-step approach ensures that the gesture analysis is not biased by the verbal context and consequently, that the information that is specific to the gesture and not to be found in language is discovered (for the detailed theoretical background see book I and Lausberg & Slöetjes, 2016).

## **8 The Function category**

### **8.1 Definition of the Function category**

The Function category registers groups of body movements that share specific movement features that are associated with emotional, cognitive, physical, or practical functions. In order to describe the essential movement form of a group of body movements that have the same function, a high number of movement parameters is needed. Thus, the Function values are operationalized by several movement criteria: gesture/action space, path during complex phase, orientation, hand shape, efforts, body involvement, and gaze. In addition, the assessments of all preceding categories, the Structure, Focus, Contact and Formal Relation values, are (re-)used. Based on the combinations of functionally essential movement features, eleven Function values are defined: *emotion/attitude, emphasis, egocentric deictic, egocentric direction, pantomime, form presentation, spatial relation presentation, motion quality presentation, object-oriented action, subject-oriented action*, and *emblem/social convention*.

Tab. 8 provides the short definitions and reliabilities of the Function values. Note that these short definitions only contain the most typical or specific movement features. Since gestures and actions are performed most often with the upper limbs and rarely with the lower limbs, the head, and the trunk, and since most researchers focus on the upper limbs, the definitions of the Function and Type values are formulated for the hands. However, the value definitions likewise apply to the other parts of the body.

In the algorithm (Fig. 10), the horizontal order of the eleven Function values from left to right reflects a development from emotional motions via gestures and actions to conventionalized gestures/actions.

The Function value *emotion/attitude* registers genuine and learned expressions of emotions and attitudes. Genuine emotional expressions are motor components of an emotional experience. Since they are not based on cognitive planning processes, they are movements without a transport phase.

The Function values *emphasis, egocentric deictic, egocentric direction, pantomime, form presentation, spatial relation presentation,* and *motion quality presentation* register movements that are classically defined as gestures. A gesture is a *phasic* or *repetitive* movement with a transport phase, typically with the Focus *in space*. It serves to promote cognitive processing and to communicate information. The horizontal order of these seven gesture values in Fig. 10 reflects a development from more emotional to more cognitive, with an increase in creative complexity and abstraction. *Emphasis* gestures serve to tone the message; they show the importance that the gesturer lends to certain aspects. *Emphasis* gestures are akin to *emotion/attitude* motions, as they often – in line with prosody – have an emotional connotation. They do not provide pictorial information such as gestural images. *Egocentric deictic* gestures refer to something from an egocentric perspective and do not create a gestural image. *Egocentric direction* gestures indicate a direction from an egocentric perspective. *Pantomime* gestures depict an action from an egocentric perspective – the gesturer acts as if s/he would perform an action. While *pantomime* gestures can be very creative gestural depictions, they are still based on an egocentric perspective. Therefore, *egocentric deictic, egocentric direction*, and *pantomime* form the main group of egocentric gestures. In the horizontal order of the Function values, they are followed by the main group of *presentation* gestures, in which gestural images are created. *Form presentation* gestures are gestural creations of forms. *Spatial relation presentation* gestures create spatial relations, potentially including *form presentations*. *Motion quality presentation* gestures present the dynamics or manner of a movement. They may include *form* and *spatial relation presentation*. As such, they are highly creative and potentially the most complex *presentation* gestures.

**Fig. 10:** The analysis algorithm for the Gesture and Action module (Module III) with the Function category in bold print

**Tab. 8:** Short definitions of the form and the function of the Function values and reliabilities

\* Interrater reliability as measured with EasyDIAg (from Lausberg & Slöetjes, 2016)

The Function values *object-oriented action* and *subject-oriented action* constitute the main group of actions. They register movements that are classically defined as actions. In NEUROGES®, an action is defined as a *phasic* or *repetitive* movement, typically with the Focus *on separate object, on attached object,* or *on body*, that directly changes the physical world, i.e., the gesturer's environment or his/her physical state.

Finally, the Function value *emblem / social convention* registers conventionalized gestures and actions with a culturally defined fixed form–meaning link.

### **8.2 Generation of the 'to-be-coded' Function units**

### **8.2.1 Selection of units for the generation of 'to-be-coded' Function units**

The Function assessment refers only to the complex phase, which is the realization of a concept. Accordingly, only units are assessed that have *phasic* or *repetitive* Structure and a complex phase.

In order to generate the 'to-be-coded' Function units, units from the following preceding tiers are used:

#### **'to-be-coded' Function units for the tier … adopted from the preceding tier …**



Among the units that have been adopted from StructureFocus and Structure tiers, all those are deleted that have the Structure values *irregular, shift,* and *aborted*.

The remaining units as well as the units that have been adopted from Formal Relation tiers are the 'to-be-coded' Function units.

### **8.2.2 The Formal Relation values as an orientation for the assessment of bilateral 'to-be-coded' Function units**

In the both hands (both feet) Function tier, the value of the copied Formal Relation unit determines whether the Function is assessed for both hands together or whether it is assessed separately for the right hand and left hand.

	- (α) If both hands have the same Function value, they are assessed together.
	- (β) If the two hands have different Function values, they are given the special template value *different functions*.
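The (α)/(β) decision rule for bilateral units can be sketched in one line; the function name is hypothetical and the rule is exactly the one stated above:

```python
# Sketch of the assessment rule for bilateral 'to-be-coded' Function units:
# (α) same Function in both hands: assess together with that value;
# (β) different Functions: assign the special template value.

def bh_function_value(rh_function, lh_function):
    """Return the value for the both-hands Function tier."""
    if rh_function == lh_function:
        return rh_function            # (α) assessed together
    return "different functions"      # (β) special template value

print(bh_function_value("pantomime", "pantomime"))  # → pantomime
print(bh_function_value("pantomime", "emphasis"))   # → different functions
```
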

### **8.2.3 The chronology of coding the 'to-be-coded' Function units**

In Module III as compared to Module I, the chronology of the unit coding changes. In Module I, first the right hand and then the left hand was coded. The independent coding of the right and left hands prevents an interpretation of their functional relation and promotes an objective coding based on the movement form alone.

In contrast to Module I, in Module III the right hand, left hand, and bimanual 'to-be-coded' Function units are coded in the chronological order of their occurrence, e.g. rh unit, lh unit, rh unit, bh unit, etc. This procedure makes it possible to use the information provided by how the units are sequenced in time to better understand their function.

In hand movements, a change of function is typically associated with a change in Structure, Focus, Contact, and Formal Relation. Therefore, the 'to-be-coded' Function units, which result from the fine-grained segmentation processes in Modules I and II, are likely to represent functional units. If, however, there is a change of the Function value within a 'to-be-coded' Function unit, then the change demarcates subunits. With regard to the precise segmentation of a unit into subunits (where to segment the unit), the procedure is described in detail in 4.2.

### **8.2.4 Alternative generation of 'to-be-coded' Function units**

Researchers who only apply the Function category (and/or the Type category) select from the ongoing stream of kinesic behavior all conceptual body movements according to the definitions of *phasic* and *repetitive* units (see 4.3, 4.4.2, 4.4.3). For the limbs, the units have to be differentiated as unilateral right, unilateral left, and bilateral in order to use the existing Function tiers in the NEUROGES®-ELAN template. Concerning the bilateral units, it is helpful to classify them first with the Formal Relation values, since this pre-assessment facilitates the analysis of the bilateral 'to-be-coded' Function units. The 'to-be-coded' Function units are assessed according to the rules described in 8.2.2 and 8.2.3.

### **8.3 Criteria for the definition of the Function values**

The Function values are defined according to the following criteria:

**Structure:** Specific Function values are obligatorily or typically associated with either a *phasic* or a *repetitive* Structure value. As an example, an *egocentric deictic* unit typically has a *phasic* Structure. Tab. 9 gives an overview of the most frequent combinations of Structure values with Function values. The table can therefore be used for a first orientation when assessing the Function, as it allows the researcher to narrow down the choice of Function values.


**Tab. 9:** The most frequent associations of StructureFocus values and Function values.

Rarely occurring combinations are not listed in the table, e.g. an *emblem* unit with the StructureFocus value *repetitive on attached object* (tapping on the watch to indicate that the time has to be kept in mind), or an *egocentric deictic* unit that emerges from a *repetitive* Structure unit that in the Function assessment is split up into an *egocentric deictic* unit and a *superimposed emphasis* unit.


The body-external space is the space within the personal reach of the finger tips when the arms are extended (accordingly for the lower limbs). The body-external space matches the gesture/action space. However, while the criterion body-external space applies also to *irregular* units, the criterion gesture/action space refers to the complex phase, i.e., it specifies where the complex phase is displayed. The gesture/action space can be defined by:


horizontal plane: To specify the horizontal plane in which the complex phase is displayed, body parts are used as the frame of reference, e.g. plane of the shoulders, plane of the head.

sagittal plane: The sagittal plane that is in line with the body midline (hereafter referred to as 'body midline sagittal plane') is used as a frame of reference. A hemi-space is defined as the gesture/action space that is to the left or to the right of the body midline sagittal plane. Four Execution Hemi-Space values are defined: (α) ipsilateral: the laterality of the hand that displays the complex phase matches the laterality of the hemi-space, e.g. right hand in right hemi-space, (β) contralateral: the laterality of the hand that displays the complex phase contrasts the laterality of the hemi-space, e.g. right hand in left hemi-space, (γ) body midline: the hand displays the complex phase in the sagittal plane that is in line with the body midline, and (δ) ipsi-contra: within the given unit, the hand displays one or more complex phases in both hemi-spaces, e.g. right hand uses both hemi-spaces. If of interest for the research question, the use of the hemi-spaces can be coded with the Supplementary category Execution Hemi-Space.

frontal plane: The frontal plane is specified by the same values as the kinesphere (see below).
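The four Execution Hemi-Space values defined for the sagittal plane can be sketched as a small classifier; the function and its argument representation are hypothetical, not part of the NEUROGES® template:

```python
# Sketch of the Execution Hemi-Space values (α)–(δ): ipsilateral,
# contralateral, body midline, and ipsi-contra.

def execution_hemi_space(hand, hemi_spaces):
    """hand: 'right' or 'left'; hemi_spaces: set of hemi-spaces in which
    the complex phase(s) of the unit are displayed ('right', 'left',
    'midline')."""
    if {"right", "left"} <= hemi_spaces:
        return "ipsi-contra"      # (δ) complex phases in both hemi-spaces
    if hemi_spaces == {"midline"}:
        return "body midline"     # (γ) in line with the body midline
    side = "right" if "right" in hemi_spaces else "left"
    # (α) laterality of hand matches the hemi-space, (β) it contrasts it
    return "ipsilateral" if side == hand else "contralateral"

print(execution_hemi_space("right", {"left"}))  # → contralateral
```
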


The longitudinal hand axis runs from the wrist to the middle finger tip. The sagittal hand axis runs from the middle of the palm through the hand to the middle of the back.


### **8.4 Definitions of the Function values**

The following definitions of the Function values are structured as follows:

First, a **Short definition** of the Function is given.

Second, if the Function value belongs to a main group of Function values, this is reported in the section **Main group.**

A more detailed **Definition** follows that starts with the Function and then reports the Movement form. As the definitions are formulated for the upper limbs, additional examples ♦ are given for lower limbs, head, and trunk. If the Function value can be further specified with Type values (Type category assessment), for a better overview a list of the corresponding Type values is provided. Furthermore, the Supplementary categories that may be used to specify the Function value are listed (see book section V).

The section **Meeting the criteria** provides a quick overview of the movement features as well as of the occurrence, frequency, and duration of the value. It is necessary to read section 8.3 above to profit from this section.

Finally, the section **Differentiate...**, which many researchers have found most helpful, reports how to distinguish the given Function value from other Function values. Criteria are provided here that help to distinguish Function values that share certain movement features and that, therefore, may be mixed up.

#### **8.4.1** *emotion/attitude*

#### **Short definition**

#### DISPLAYING EXCLUSIVELY AN EMOTION OR AN ATTITUDE

#### **Definition**

#### **Function:**

Emotions are congenital stimulus-response patterns that enable the individual to react to environmental stimuli and to regulate interaction. They are defined by a subjective experience, a physiological response, and a motor or vocal/verbal response. Basic emotions that different researchers agree on are happiness, sadness, fear, anger, surprise, and disgust. As compared to the cognitive system, the emotion system is the onto- and phylogenetically older system. It responds rapidly to stimuli relevant to the basic needs of the individual, with a limited number of partially innate operative schemata, including motor plans.

The motor component of a genuine emotion is characterized by a muscle innervation pattern specifically including the postural muscles and the face. It is accompanied by a specific vegetative activation pattern and, unless the individual has a deficit in recognizing her/his emotions, by an explicit emotional experience. There are direct neural pathways from the emotional system in the brain to the facial muscles. Furthermore, the pathways for the control of the axial and proximal muscles differ from those for the muscles of the distal limbs and there is evidence that they are linked more to the emotional system than those of the distal limbs. Thus, genuine emotions involve directly the face and the head (neck), trunk and the proximal limbs. The learned and socially motivated display of emotions is controlled by different neural systems and it differs in movement form from the genuine emotions.

In experimental settings and videotaped natural social encounters, genuine *emotions* are displayed rarely or only in a very subtle manner. Special methods such as the microanalysis of facial expression are needed to register these fine motions. Rather, there is a display of learned emotional expressions for social purposes. Furthermore, *attitudes* such as resignation, pride, or firmness can be observed. These are longer-lasting states that are regularly evident in *rest* and *pose* positions as well as in posture, but sometimes single motions, too, may reflect a basic attitude, such as shrugging the shoulders in resignation.

The NEUROGES® Function value *emotion/attitude* refers to genuine emotions or attitudes as well as to the display of learned emotional expressions. Note that while any gesture can be performed with an emotional connotation, e.g. a *deictic* with firmness or an *emblem* with anger, the NEUROGES® Function value *emotion/attitude* refers to body movements that **only** express an emotion and that convey **no** other meaning. Thus, didactically the Function value *emotion/attitude* can be identified through a process of elimination.

**Movement form:** *Emotion/attitude* limb movements are initiated proximally and always co-occur with a facial-postural expression. Therefore, even a researcher who only investigates the upper limbs should check the synchronously occurring head and trunk movements. *Emotion/attitude* movements are characterized by a clear change in the effort factors, the quality of which depends on the type of emotion. During the emotional experience, the use of the effort factors differs from the individual's baseline. Since genuine *emotion/attitude* movements are motor correlates of an emotional experience and not based on cognitive planning processes, they are motorically simple with a one-dimensional path. Accordingly, their learned counterparts are also motorically simple.

Researchers who want to explore if an emotional expression is of genuine origin or a learned display should consider the following movement criteria:

(i) The genuine *emotion/attitude* movement is the intrinsic motor component of an emotional experience. It is obligatorily accompanied by a specific vegetative activation pattern, a specific facial expression (even if only recognizable in micro-analysis), and postural innervation. Genuine *emotion/attitude* limb movements are initiated in the trunk. They are typically executed bilaterally and symmetrically. If there is a shaping of the hand, it is part of the general body tension, e.g. clenching the hand to a fist while the muscle tension in the whole body rises. Thus, isolated movements of the hands are not compatible with a genuine emotional expression. Furthermore, genuine *emotion/attitude* movements have no transport phase, since they are not based on cognitive planning processes.

(ii) The learned *emotion/attitude* movement is typically displayed for social purposes or as part of cognitive processing of the emotional experience. As the expression of the emotion is based on a planning process, learned *emotion/attitude* limb movements have a transport phase. They are more often displayed unilaterally than the genuine expressions. The degree of the facial-postural involvement and of the vegetative arousal depends on the degree of the true emotional engagement, unless there is perfect simulation.

Technically, the researcher notes her/his observation genuine vs. learned in the tier Notes.

**Types:** The four Type values that are subordinated to the Function value *emotion/attitude* reflect the most frequent gestural expressions of emotions and attitudes that have been observed in the NEUROGES® archive. These are (formulated for the upper limbs): *rise* (dynamic fast raising up of the arms), *fall* (letting the arms fall down heavily), *clap/beat* (clapping, beating, or punching with the hands resulting in contact), and the postural gesture *shrug* (raising and falling of the shoulders). In order to enable researchers to note other emotional expressions that they observe and that – most importantly – fulfil the criteria of the Function value *emotion/attitude*, a special template value (*other e-motion*) is provided in the NEUROGES®-ELAN template.

#### **Meeting the criteria**




#### **Differentiate** *emotion/attitude* **from**…

# *subject-oriented action*: *Emotion/attitude* movements reflect or show specific emotions such as happiness, sadness, fear, anger, surprise, or disgust. In contrast, *subject-oriented actions* serve the regulation of the individual's physical needs, the improvement of her/his visual appearance, or the regulation of mental processes.

Difficulties in distinguishing between the Function value *emotion/attitude* and the Function value *subject-oriented action* apply primarily to *phasic within body* units, since otherwise *emotion/attitude phasic* units are most often *in space,* whereas *subject-oriented action* units are *on body, on attached object,* or, more rarely, *on separate object*. *Within body subject-oriented actions* more often have a *repetitive* Structure, since they typically serve the regulation of physical needs, e.g. rolling the shoulders because of muscle tension or to exercise. *Within body emotion/attitude* movements more often have a *phasic* Structure and a one-dimensional path, e.g. shrugging the shoulders in resignation.

# *emphasis*: *Emphasis* gestures that accompany the speech process may have an emotional connotation. However, while *emotion/attitude* movements are exclusively expressions or reflections of emotions or attitudes, *emphasis* gestures serve the accentuation of certain segments of speech or of the word retrieval. They set accents and may create rhythms.

*Emotion/attitude* movements are *phasic* units, while *emphasis* gestures are often *repetitive.* In *emotion/attitude* movements the movements are proximal, bilateral, and accompanied by a facial-postural expression, and the arm embodies the (emotional) direction. In *emphasis* gestures, the hand or arm is used as a baton to set accents in specific directions. The movements are peripheral, unilateral, and accompany speech (see also Type category: differentiation of *emotion–clap/beat* from *emphasis–baton* and *emphasis–superimposed*).

# *emblem: Emblems* are hand signs that fulfil the criteria conventionality, isolation / distal distinction, explicitness, and novel meaning. Some *emblems* are conventionalized signs for specific emotions, e.g. a fist as a symbol for aggression. These *emblems* may be displayed without experiencing the emotion.

In an *emblem*, the hand sign alone is sufficient to convey the information about the designated emotion. There is always a transport phase. In *emotion/attitude* movements the hand movement is embedded in a postural-facial emotional expression. The hand movement per se can be quite unspecific, and only the whole body movement and facial expression that accompany the limb movement clarify the underlying emotional state or attitude.

### **8.4.2** *emphasis*

#### **Short definition**

#### SETTING ACCENTS ON SPEECH

#### **Definition**

**Function:** Emphasis is defined as force or intensity of expression that gives impressiveness or importance to something. In gesture, emphasis can be produced by strong, direct, and quick movements that thereby set dynamic accents. These movement accents point out short segments of the speech. In synchrony with prosody, they emphasize certain aspects of the verbal statement. As such, *emphasis* gestures can be regarded as manual equivalents of prosody. Rarely, they are displayed in speech pauses. Obviously, a sequence of accents creates a meter or a rhythm and therefore, *emphasis* gestures convey rhythmical and potentially acoustical information (if the Focus value is *on body* or *on separate object*). In the latter case, their effect is less dependent on a clearly visible location of the hand in the gesture/action space.

Furthermore, emphasis can be put on the process of bringing out a concept in speech and presenting it. By rotating the palm out, these *emphasis* gestures accompany and thereby reinforce the process of quasi rotating out words (thoughts) and then presenting them.

*Emphasis* gestures do not only set accents on speech but they can also superimpose emphasis on gestures: *emotion/attitude* (primarily the learned display), *egocentric deictic, egocentric direction, form presentation, spatial relation presentation,* and *emblem* gestures. Kinesically, in order to place an accent on a gesture, the gesture has to have a static complex phase.

As there are only few movement forms that are effective for setting accents, *emphasis* gestures are performed in stereotypic manners both inter-individually and intra-individually. Thus, if in a video sample an individual repeatedly displays a certain gesture form that fulfils the criteria of the Function value *emphasis*, the repeated display supports the assessment.

**Movement form:** *Emphasis* gestures are *repetitive* or *phasic in space* (rarely *on body* or *on separate object*) movements. They are spatially simple seesaw movements, either up-down, in-out (supination-pronation), or rarely, if superimposed, forth-back. All *emphasis* gestures have an endpoint accent. The up-down movements can have a downward accent or an upward accent. The supination-pronation movements have an outward accent. All *emphasis* gestures are synchronized with mouth and head movements (unless they accompany internal speech, which is not accompanied by mouth movements but often by head movements).

If *emphasis* gestures follow the static complex phase of *emotion/attitude* (mainly the learned display), *egocentric deictic, egocentric direction, form presentation, spatial relation presentation,* or *emblem* gestures (*superimposed emphasis*), they have an up-down or forth-back path with a downward or forward accent. Once the primary gesture has come to the static complex phase, the *superimposed emphasis* gesture follows. The hand shape, the hand orientation, and the position in gesture/action space of the primary gesture are preserved during the display of the *superimposed emphasis* gesture. As an example, the fingers are shaped to a V-sign and a *repetitive* forth-back movement is superimposed (*emblem + superimposed emphasis*), or the hand points and up-down movements are added (*egocentric deictic + superimposed emphasis*). Technically, in *superimposed emphasis* the unit adopted from Module I or II, which typically has the Structure value *repetitive*, is split into a subunit with the primary Function value and a subunit with the Function value *emphasis*.
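The splitting step described above can be sketched in code. This is a minimal illustrative sketch only, assuming a simple time-interval representation of a coded unit; the class and function names are hypothetical and no data format is prescribed by NEUROGES® itself.

```python
# Hypothetical sketch: a unit adopted from Module I/II is divided at the
# onset of the superimposed emphasis movement into a subunit carrying the
# primary Function value and a subunit with the Function value "emphasis".
# The Unit class and split_superimposed_emphasis are illustrative only.
from dataclasses import dataclass


@dataclass
class Unit:
    start: float    # onset in seconds
    end: float      # offset in seconds
    function: str   # NEUROGES Function value, e.g. "emblem"


def split_superimposed_emphasis(unit: Unit, emphasis_onset: float) -> list:
    """Split a unit at the onset of the superimposed emphasis movement."""
    if not (unit.start < emphasis_onset < unit.end):
        raise ValueError("emphasis onset must fall inside the unit")
    primary = Unit(unit.start, emphasis_onset, unit.function)
    emphasis = Unit(emphasis_onset, unit.end, "emphasis")
    return [primary, emphasis]


# e.g. a V-sign emblem unit (2.0-4.5 s) on which a repetitive forth-back
# emphasis movement is superimposed from 3.0 s onward
subunits = split_superimposed_emphasis(Unit(2.0, 4.5, "emblem"), 3.0)
```

In this representation the two subunits are contiguous, which mirrors the manual's rule that the primary gesture's static complex phase directly precedes the superimposed emphasis movement.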



#### **Meeting the criteria**


#### **Differentiate** *emphasis* **gestures from**…


In contrast, *emphasis* gestures are typically performed implicitly and they accompany the speech (or inner speech) process. The spatial directions of the path during the complex phase are stereotypical, i.e., up-down, in-out. The Structure is typically *repetitive*.

# *pantomime*: In *pantomime* gestures with a *repetitive* Structure the meaning is conveyed by the repetition per se, e.g. when pantomiming tooth brushing or hammering. In these cases, one up-down brush movement in front of the mouth or one downward hammering movement would not unambiguously convey the meaning of tooth brushing or hammering, respectively. In *repetitive pantomimes* there is often a displacement of the hand, e.g. the hand moves in front of the mouth from the left side to the right side while executing the up-down movements. Furthermore, there is a distinct hand shape or hand orientation and the gaze is at the hand.

In contrast, in *emphasis* gestures there is no displacement of the hand during the forth-back movement, there is no distinct hand shape or hand orientation, and the gaze is not at the hand.

# *form presentation:* In *form presentation* gestures with a *repetitive* Structure the repetition serves to create a repetitive pattern, e.g. to depict a star with six points. The repetition of one segment of the form is necessary to create the whole form, e.g. six identical points are needed to create a star. There is a displacement of the hand, there is a distinct hand shape or hand orientation, and the gaze is at the presented form.

In contrast, as the *repetitive* movements of *emphasis* gestures are back and forth, they cannot create a form. There is no distinct hand shape or hand orientation, and the gaze is not at the hand.

# *spatial relation presentation: Spatial relation presentation* gestures with a *repetitive* Structure may serve to present several independent locations or to create a route with a repetitive pattern, e.g. a zig-zag path.

In *spatial relation presentation* gestures there is always a distinct use of gesture/action space reflecting the mento-heliocentric perspective and thus, a displacement of the hand. The hand is typically shaped, with a distinct orientation, and the gaze is typically at the presented path. This is not the case in *emphasis* gestures.

# *motion quality presentation*: *Motion quality presentation* gestures and *emphasis* gestures share the *repetitive* Structure. However, *motion quality presentation* gestures typically have complex dynamics, a shaped hand to present the object that is moving, and a displacement of the hand to represent locomotion. The gaze is typically at the presented motion.

In contrast, *repetitive emphasis* gestures are spatially simple back-forth movements, with an endpoint accent and they are synchronized with the mouth and head movements.

# *emblems:* In *emblems* with a *repetitive* Structure the repetition is part of the conventionalization, e.g. waving the hand to say good-bye or tapping on the temple to indicate that someone is crazy. In this case, a one-way wave or one tap would not unambiguously constitute the sign and the repetition helps to clarify the message. Note, however, that *emblems* with a *phasic* Structure and a static complex phase may be combined with a superimposed *emphasis* gesture, e.g. adopting the shape of the victory sign and then moving the V-shaped hand repetitively back and forth. In this case, the unit is split up into an *emblem* and a *superimposed emphasis* gesture.

In contrast to *emphasis* gestures, *emblems* are not a complement to a verbal utterance, but they constitute the message itself. Furthermore, *emblems* are displayed explicitly, i.e., within the gesturer's awareness, while *emphasis* gestures are not.
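The kinesic cues that recur in the differentiations above (displacement of the hand, distinct hand shape or orientation, gaze at the hand, synchronization with speech) can be condensed into a simple checklist. The following is purely an illustrative decision aid under assumed cue names, not part of the NEUROGES® coding scheme:

```python
# Illustrative decision aid only: summarizes the kinesic cues discussed
# above for separating repetitive emphasis gestures from repetitive
# pantomime, form/spatial relation/motion quality presentation, and
# emblem gestures. The boolean cue names are assumptions for this sketch.
def consistent_with_emphasis(hand_displaced: bool,
                             distinct_hand_shape: bool,
                             gaze_at_hand: bool,
                             synchronized_with_speech: bool) -> bool:
    """True if the observed cues match the typical emphasis profile:
    no displacement, no distinct hand shape or orientation, gaze not at
    the hand, and synchronization with speech (or inner speech)."""
    return (not hand_displaced
            and not distinct_hand_shape
            and not gaze_at_hand
            and synchronized_with_speech)


# A repetitive up-down baton accompanying speech: matches the profile.
baton = consistent_with_emphasis(False, False, False, True)
# Repetitive tooth-brushing pantomime (displaced, shaped hand, gaze at
# the hand): does not match the profile.
pantomime_like = consistent_with_emphasis(True, True, True, False)
```

Note that such a checklist can only support, never replace, the full Function assessment, since the manual's criteria also involve context (e.g. whether the repetition itself carries meaning, as in *emblems* and *pantomimes*).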

### **8.4.3** *egocentric deictic*

#### **Short definition**

#### INDICATING A LOCATION BY USING AN EGOCENTRIC FRAME OF REFERENCE

#### **Main group**

The values *egocentric deictic, egocentric direction,* and *pantomime* share the characteristic that an egocentric frame of reference is adopted as cognitive perspective.

	- (i) egocentric: In the egocentric perspective, the gesturer is the point of reference.
		- (α) If the gesturer displays spatial information from an egocentric perspective, she/he constitutes the point of spatial reference which the other points in space are related to: "I am here and it is there." (*egocentric deictic)* or "I am here and it is in that direction." (*egocentric direction*). More precisely, the body midline is taken as the point of spatial reference. Therefore, if the target is not in front of the gesturer, (s)he typically rotates the trunk and head to be vis-à-vis with the target. Furthermore, the hand axis (from middle of wrist to middle finger) is oriented centrifugally from the body midline. As an example, if the target is on the gesturer's right, (s)he rotates the head and upper trunk to the right and then points to the target. Note that the gesturer may project him-/herself into an imaginary space while keeping the egocentric frame of reference, e.g. "In my old apartment, if I entered it, the bathroom was on my right." Here, the egocentric frame is projected into a mental imagery space. Further, there can be an egocentric reference to a location in the abstract mental space, e.g. where relevant persons are emotionally located relative to the gesturer.
		- (β) If the gesturer displays a movement from an egocentric perspective, the gesturer is the actor: "I move, I act" (*pantomime*). In the gestural display (in contrast to the mento-heliocentric perspective), the hands keep their natural orientation as parts of the gesturer's body relative to the other parts of the body, i.e., in order to successfully perform actions, the hand and head positions are coordinated such that the actions of the hands can be visually controlled. Therefore, in

*pantomime*, there is typically an involvement of the head and upper trunk in coordination with the hand/arm movement. As an example, in the *pantomime* of hammering, the hand, the arm, the head, and the trunk are spatially and dynamically coordinated.

	- (ii) mento-heliocentric: In the mento-heliocentric perspective, the point of reference is external to the gesturer.
		- (α) The gesturer displays a spatial relation from a mento-heliocentric perspective, i.e., (s)he mentally takes the perspective of the sun and looks on the imaginary spatial scenery: "In the presented environment, it would be over there." (*spatial relation presentation*). In the gestural display, the mental image that is generated with a mento-heliocentric perspective is projected onto the horizontal or frontal plane. Within the frame of this map, the gesturer may show a position or a route. If the imaginary map is projected to the horizontal plane, the hand axis is in line with the vertical space axis. If the imaginary map is projected to the frontal plane, e.g. an imaginary map of the city of Cologne, the hand axis is in line with a sagittal axis. In this case, it is more difficult to use the hand orientation as a movement indicator of the cognitive perspective, as it is the same for the mento-heliocentric and egocentric perspectives. Here, the gesturer's gaze focus helps to identify the cognitive perspective: In the mento-heliocentric perspective the visual focus is on the imaginary map, which is projected within reach/kinesphere onto the frontal plane. In contrast, in the egocentric perspective the visual focus is on the target (or, if it is invisible, on its assumed position) that is typically beyond the reach/kinesphere.

A special case is experimental settings in which animations are presented to the gesturer on a screen in front of her/him. In this case, based on the gestural behavior alone it is difficult to say if the gesturer employs an egocentric or a mento-heliocentric perspective. S/he might really consider objects on the right side of the screen as being on her/ his right and objects on the left side of the screen as being on her/his left, i.e., relate these objects to her/himself. Or, alternatively, s/he might adopt a mento-heliocentric perspective on the scene on the screen.

		- (β) The gesturer may display a motion from a mento-heliocentric perspective, i.e., the gesturer's hand (instead of the gesturer her-/himself) represents an agent that moves or acts, and the gesturer looks at this presentation mentally taking the perspective of the sun (*motion quality presentation*). The hands are used as if they were marionettes and **not** as if they were the gesturer's hands that relate to the gesturer's body. In most cases, there is an indirect reference to the ground on which the motion or action takes place. This ground is typically projected to the horizontal plane. The gesturer's body is not involved in the presentation. The hands are functionally separated from the gesturer's body. The wrist is often flexed such that the hand axis is in line with the vertical space axis and the hand is displaced on the horizontal plane, e.g. the index and middle finger represent two legs (pars pro toto for a human being) walking on a ground.

While *egocentric deictic, egocentric direction,* and *pantomime* obligatorily have an egocentric perspective in a concrete or mental imagery space, *spatial relation presentation* often and *motion quality presentation* sometimes have a mento-heliocentric perspective. However, in special experimental settings, e.g. in which animations are presented to the gesturer on a screen, it is difficult to determine whether or not the gesturer keeps the egocentric perspective (on the screen) when re-presenting the spatial relations of the animation. The relevant difference between *egocentric deictic* and *egocentric direction* on the one hand and *spatial relation presentation* on the other is that in the latter Function value space is **created**, while in the first two values space is **referred to**. Thus, the creative performance is higher in *spatial relation presentation* than in *egocentric deictic* or *direction*.

#### **Definition**

**Function:** An *egocentric deictic* indicates where something is located by using an egocentric frame of reference. The gesturer's body midline is the spatial point of reference for defining the location. Thus, from his/her actual location in space, the gesturer indicates another location in the space by pointing to it. The indication of a location is a direct reference to a locus in space, and furthermore, it may be an indirect reference to an object/subject. As an example, the gesturer points to the location of the chair in order to designate the chair. The target may be visible ("There is the table") or invisible ("The basket is behind the door"). However, for targets that are invisible because they are far away, *egocentric direction* gestures are preferred.

The gesturer may project him-/herself into a mental space while keeping the egocentric frame of reference, e.g.: "In my old apartment, if I entered it, the bathroom was on my right." Thus, the egocentric frame is maintained in an imagery space. The mental space can also be an abstract space, e.g. a reflection of emotional relations to other persons.

*Egocentric deictics* with reference to concrete locations are typically displayed explicitly, i.e., the gesturer is aware of displaying the gesture.

	- (i) *material:* e.g. referring to the chair in the room
	- (ii) *non-material:* e.g. referring to angels in heaven, a point in time.

#### **Meeting the criteria**




#### **Differentiate** *egocentric deictic* **from**…

# *egocentric direction*: The *egocentric direction* gesture indicates a direction towards a location with no distance information, or a route. An *egocentric direction* gesture that indicates a route is easy to differentiate from an *egocentric deictic*, as it marks a line and does not indicate a point. In an *egocentric direction* gesture that indicates a direction with no distance information, the hand is quasi 'thrown' into the designated direction. The hand is relaxed. There is a free flow and acceleration with an end point accent, the path is arch-like.

 In contrast, the *egocentric deictic* directly designates a specific location with distance information. The target is located in the estimated prolongation of the pointing hand or finger. The hand has a distinct shape, the path during complex phase is spoke- or arch-like, and the movement flow is bound. Often only the index is extended to indicate the localization of the target. Thereby, a precise localization can be realized.

# *spatial relation presentation*: The *egocentric deictic* might be confused with the Type *position* of the *spatial relation presentation* gesture. However, the *spatial relation presentation* gesture creates a position in an imaginary space by positioning or placing the hand in the gesture space. In contrast, in *egocentric deictics* a location is not created but it is only referred to. Thus, there is reference to a location in the gesturer's real or imaginary environment.

A related aspect regarding the difference between *spatial relation presentations* and *egocentric deictics* concerns the cognitive perspective. In *egocentric deictics* the gesturer obligatorily applies an egocentric frame of reference by using her/himself as the point of spatial reference. In contrast, in *spatial relation presentations*, there is often a mento-heliocentric perspective. The gesturer adopts the cognitive perspective of looking down on an imaginary map that s/he creates by setting positions and routes. As the imaginary map is often projected to the horizontal plane, the fingertips are oriented downwards to the horizontal plane, i.e., the longitudinal hand axis (wrist – finger tips) is in line with the vertical space axis. If the imaginary map is projected to the frontal plane, e.g. a mental map of the city of Cologne, the hand axis is in line with the sagittal space axis. The gaze is at the target on the imaginary map. In contrast, in an *egocentric deictic*, the gesturer indicates a location with spatial reference to her/his body midline. The vector of the pointing hand is in centrifugal orientation (with the exception of pointing to oneself). The gaze is at the target in the real or imaginary environment.

In rare cases, the gesturer creates an imaginary spatial map and then refers to a specific location by pointing on this map while keeping the mento-heliocentric perspective. While these pointing gestures share many features of movement form with *egocentric deictics*, they are coded as *spatial relation presentations* because they are based on a mento-heliocentric cognitive perspective.
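The movement indicators for identifying the cognitive perspective (orientation of the longitudinal hand axis and gaze focus) can be summarized in a small rule sketch. This is an illustrative condensation under assumed labels, not an official NEUROGES® decision procedure:

```python
# Hypothetical sketch of the perspective indicators described above:
# in a mento-heliocentric spatial relation presentation the hand axis
# aligns with the vertical space axis (map on the horizontal plane) and
# the gaze is at the imaginary map; in an egocentric deictic the hand
# axis is centrifugal from the body midline and the gaze is at the
# target. With a sagittal hand axis (map on the frontal plane) the hand
# orientation alone is ambiguous, so the gaze focus decides.
def likely_perspective(hand_axis: str, gaze: str) -> str:
    """hand_axis: 'vertical', 'sagittal', or 'centrifugal'
    gaze: 'imaginary_map' or 'target'"""
    if hand_axis == "vertical" and gaze == "imaginary_map":
        return "mento-heliocentric"   # map projected to the horizontal plane
    if hand_axis == "centrifugal" and gaze == "target":
        return "egocentric"           # pointing relative to the body midline
    if hand_axis == "sagittal":
        # hand orientation is the same for both perspectives here;
        # the gaze focus identifies the cognitive perspective
        return "mento-heliocentric" if gaze == "imaginary_map" else "egocentric"
    return "undetermined"
```

As in the manual, the cues are jointly necessary: a cue combination that matches neither profile (e.g. vertical hand axis with gaze at a target in the environment) is left undetermined and requires the rater's full assessment.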

# *emblem*: *Emblems* that include pointing have a novel meaning that differs from the meaning of the gesture that they (hypothetically) originate from, e.g. tapping with the index at the temple may originally have been a *deictic* to the temple and then, in the process of conventionalization, has been linked with the meaning "You are crazy / That is crazy!". In contrast, *egocentric deictics* do not have a novel meaning; they are one-to-one indications of the location of a material or non-material referent.

 *Emblems* are not variable with regard to the hand shape, hand orientation, and the position of the hand in the gesture/action space, e.g. tapping with the index at the temple. In contrast, for *egocentric deictics* the position and orientation of the hand in the gesture/action space is determined by the designated location.

### **8.4.4** *egocentric direction*

### **Short definition**

INDICATING A DIRECTION OR A ROUTE BY USING AN EGOCENTRIC FRAME OF REFERENCE

### **Main group**

see *egocentric deictic*

### **Definition**

	- (i) direction: "Direction is the information contained in the relative position of one point with respect to another without the distance information" (Wikipedia). In accordance with this definition, the *egocentric direction* gesture indicates a direction (towards where) without distance information, e.g. heavenwards, northwards, etc. Theoretically, the distance can be infinite. As the frame of reference is egocentric, the point of reference is the gesturer's body midline. As a direction does not contain distance information, *egocentric directions* are preferred to *egocentric deictics* for indicating locations that are far away and invisible.

In an *egocentric direction* gesture, the emphasis may be on the change of position with an accent on the start point and the end point (from where to where) or on

<sup>17</sup> Since in the egocentric perspective, there are more often indications of directions than of routes, the Function value was termed *egocentric direction*.

the path between the start and end positions (where along). *Egocentric direction* gestures may not only indicate the path from the gesturer to an external location ("...from me to there"), or the path from an external location to the gesturer ("...from there to me"), but also the route between two or more locations in the external space ("... from there to over there"). Especially in the latter case, for coding the value *egocentric direction* rather than *spatial relation presentation* the criterion is that the egocentric frame of reference is maintained ("[I am here and] it is from there to there").

The *egocentric direction* gestures indicating a direction or a route may be used in a transitive context: the hand indicates the direction towards which an entity is moved or should be moved. The (to be) moved entity may be an object, the addressee, parts of his/her body, or his/her mental state, or, in mental training conditions or self-suggestion, the gesturer's own body or mental state. As an agent is required to execute the movement of an entity in a specific direction, these *egocentric direction* gestures also imply information about the agent.

Just like *egocentric deictics, egocentric direction* gestures may be based on the gesturer's mental rotation into an imaginary space. Here, the gesturer projects him-/herself into an imaginary space while keeping the egocentric frame of reference, e.g.: "If I stood in front of the Eiffel Tower, it would be in that direction". Thus, the egocentric frame is projected into the imagery space and then the *egocentric direction* gesture is performed.

Just like *egocentric deictics, egocentric direction* gestures for concrete spatial directions or routes are typically displayed explicitly, i.e., the gesturer is aware of displaying the gesture.

	- (i) direction: In *egocentric direction* gestures that indicate a direction, the hand is quasi 'thrown' in the designated direction. Thereby, the impression of a far or even infinite distance is conveyed. The gesturer's body midline is the point of reference from where the direction starts and the longitudinal hand axis (from wrist to finger tips) embodies the designated direction. The hand is relaxed. The path during complex phase is two-dimensional arch-like. There is a free flow and acceleration with an end point accent. At the end of the complex phase, the longitudinal hand axis is in line with the designated direction.

	- (ii) route: If the change of position is emphasized (from where to where), e.g. in a gesture accompanying the phrase "The bird flew from the high tree to the smaller one", the trace between the start position and the end position is straight. If the path per se is emphasized (where along), e.g. the finger follows the visible skyline of a city, the trace can be spatially complex. As the emphasis is on the precise depiction of a route (and not on the motion on that route as in *motion quality presentation* gestures), there is no variation in the effort qualities. The movement flow is bound and there is a direct use of space in order to deliver a precise depiction of the route.

The orientation of the hand depends on the transitivity versus intransitivity of the *egocentric direction* gesture. In intransitive gestures, the longitudinal hand axis is in line with the direction. In transitive gestures, the sagittal hand axis (from back to palm of hand) is in line with the designated direction line, as if moving something with the palm/back of the hand into the designated direction.


#### **Meeting the criteria**





#### **Differentiate** *egocentric direction* **from**…


# *pantomime*: *Pantomimes* of throwing or pushing something away might be confused with transitive *egocentric direction* gestures. In *pantomimes*, however, there is a distinct hand shape in adaptation to the imaginary object, which is thrown or pushed away.

In contrast, transitive *egocentric directions* show only the direction towards which something is moved or shall be moved (move it to the right, away); the focus is not on the action of moving. The spatial path is one-way and two-dimensional. The hand is relaxed, as no information about the object is provided.

# *spatial relation presentation*: The *egocentric direction* of the subtype (ii) route might be confused with the Type *route* of a *spatial relation presentation* gesture. However, the *spatial relation presentation* gesture creates a route in an imaginary space by tracing a path in the gesture space. In contrast, in *egocentric directions* a route is not created but it is only referred to.

Another aspect regarding the difference between *spatial relation presentations* and *egocentric directions* concerns the cognitive perspective. In *egocentric directions* the gesturer obligatorily applies an egocentric frame of reference by using her-/himself as the point of spatial reference. In contrast, in *spatial relation presentations* with a material referent, there is often a mento-heliocentric perspective. The gesturer adopts the cognitive perspective of looking down on an imaginary map that s/he creates by setting positions and routes. As the imaginary map is often projected to the horizontal plane, the fingertips are oriented downwards to the horizontal plane, i.e., the longitudinal hand axis (wrist – finger tips) is in line with the vertical space axis. If the imaginary map is projected to the frontal plane, e.g. a mental map of the city of Cologne, the hand axis is in line with the sagittal space axis. The gaze is at the target on the imaginary map. In contrast, in *egocentric directions*, the gesturer indicates a direction or a route with spatial reference to her/his body midline. The gaze is in the direction or at the route in the real or imaginary environment.

In rare cases, the gesturer creates an imaginary spatial map and then refers to a route on this map while keeping the mento-heliocentric perspective. While these tracing gestures share many features of movement form with the *egocentric direction* subtype route, they are coded as *spatial relation presentations* because they are based on a mento-heliocentric cognitive perspective.

# *motion quality presentation*: A *motion quality presentation* gesture presents a specific kind or quality of movement, e.g. rolling or exploding. It may include information about spatial relations, e.g. representing something rolling from one point to another. In contrast to *egocentric direction* gestures, the manner or the quality of movement is superimposed on the direction or route trajectory. Thus, there is always a specific within hand movement trajectory (e.g. circulating the hand to demonstrate rolling) or specific effort qualities (e.g. sudden, strong, and direct to demonstrate something exploding). Furthermore, just like in *spatial relation presentation* gestures, the cognitive perspective is mento-heliocentric, i.e. as if looking on something that is moving without any spatial reference to one's own body.

### **8.4.5** *pantomime*

#### **Short definition**

PRETENDING TO PERFORM AN ACTION

#### **Main group**

see *egocentric deictic*

#### **Definition**

**Function:** The gesturer pretends ("as if") to perform a motor action her-/himself.

"Action is defined as an intentional (wilful) human body movement, a behavior caused by an agent in a particular situation" (Oxford Dictionary), "something done" (Webster's Dictionary). In NEUROGES®, actions are defined as all hand movements that effect changes in the external physical world that surrounds the gesturer's body or in the gesturer's state (see Function values *object-oriented action* and *subject-oriented action*).

The Function value *pantomime* refers to the "**as if**" demonstration of an action but **not** to the actual execution. Examples for *pantomime* gestures are pretending to brush the teeth with an imaginary toothbrush, pretending to climb up a mountain, moving the arms as if marching, or pretending that an external object hits the gesturer. It is an essential criterion for the Function value *pantomime* that the cognitive perspective is egocentric. The gesturer is the actor in the pretended action. Even in the latter example (being hit by an object), the egocentric cognitive perspective is maintained: The gesturer is the actor with whom something happens.

**Not** coded with the value *pantomime* are "as if " demonstrations of non-action movements such as pretending to show an emotion, pretending to perform a deictic, etc. As an example, the gesturer, who narrates a movie in which the main character performs an *egocentric deictic,* may behave as if s/he were the main character and perform an *egocentric deictic*. In this case, the Function value *egocentric deictic* is given. The same applies to imitations of all other types of gestures, notably *emotion, egocentric direction,* or *emblem.*

In special experimental settings, an actual tool may be held in the hand and an actual counterpart may be present, but the action is only pantomimed, e.g. holding an actual hammer in the hand and pretending to hammer (but not really doing it) on an actual nail. In the neuropsychological literature, this condition is termed tool use demonstration, but in NEUROGES® it is coded as *pantomime.*

**Movement form:** The *pantomime* gesture is executed as similarly to the actual action of reference as possible. As the gesturer is the actor, potentially her/his whole body is involved. If the gesturer displays the *pantomime* while sitting, there is regularly an involvement of other parts of the body, such as the head and the trunk. While in *presentation* gestures (9.4.5 – 9.4.8) the hand represents something else, e.g. a running dog, in *pantomime* the gesturer's hand is the actor's hand. Thus, the hands adopt a specific orientation, in which they keep their natural orientation as a part of the gesturer's body, e.g. when pretending to climb up a mountain the hands have to be moved to a position in which they could potentially drag the body upward. The longitudinal hand axis (wrist to finger tips) is often oriented centrifugally from the body midline. The use of the gesture space is determined by the action space of the action of reference. As an example, when pretending to march there is use of the lateral gesture space, as in actual marching the arms move on the right and left sides of the trunk.

There are few exceptions to the rule that the gesturer's hand is the actor's hand. Occasionally, the hands may be used as if they were the feet. Furthermore, the hand may embody the tool, e.g. the index embodies the toothbrush, or a separate object, e.g. the hand represents a ball that hits the gesturer. In this case, the hand does not function as a part of the gesturer's body but it represents an object. The Supplementary category Technique of Presentation enables the rater to register this information with the value *hand-as-object*. In any case, the egocentric perspective is maintained: The gesturer is the actor. The hand representing the object relates to the gesturer's body as the actor's body.


Technique of (Form) Presentation: This category registers which technique is used to represent the imaginary object or counterpart the gesturer-actor acts with/on:

	- (i) *material:* Most *pantomime* gestures are *material*, i.e., they refer to concrete actions, such as actually brushing the teeth, swimming in the ocean, getting hit by something, etc.;
	- (ii) *non-material*: However, the *pantomime* gesture may refer to *non-material* phenomena, e.g. the gesturer pretends to weigh two ideas with his/her hands when considering the pros and cons of the two ideas.

**Note**: The pantomimed action can typically be clearly recognized by the rater. Thus, the action may be noted, e.g. cutting with a knife, throwing a ball.

#### **Meeting the criteria**




#### **Differentiate** *pantomime* **from**…


In some *motion quality presentation* gestures a mento-heliocentric perspective is adopted. Then there is an indirect reference to the ground on which the motion or action takes place. This ground is typically projected to the horizontal plane. Accordingly, the longitudinal hand axis is in line with the vertical axis. The gesturer's body is not involved in the presentation. Expressly isolated from the gesturer's body, the hands present the motion quality. In contrast, in *pantomimes*, the hand is oriented just like the actor's hand in the action would be and at least the upper body, head and trunk, is involved in the demonstration. Even if only the hand acts, e.g. pantomiming sewing, the orientation of the head towards the hand reveals the egocentric perspective.

- *form presentation*: *Form presentation* gestures focus on the form of the object of reference. *Pantomimes* may also include information about the form of the imaginary counterpart or the imaginary tool, e.g. the hand shapes around an imaginary object which is round. However, the *form presentation* gesture emphasizes the static features of an object, while the *pantomime* focuses on how an object is used. As an example, a *form presentation* gesture may depict how long a hammer is by enclosing the imaginary form, whereas in *pantomime*, the tool use is demonstrated, such as holding the imaginary hammer and hammering with it. *Form presentation* gestures are characterized by a bound flow without variation in the effort qualities. The imaginary object is presented in the central gesture space. In contrast, *pantomimes* are characterized by a variation in the effort elements and by a distinct use of gesture space.

- Techniques of Presentation (Supplementary category): To present forms or spatial relations, there are Techniques of Presentation such as *tracing, palpating,* or *marking a position*. The motion image of a form is created by tracing or stroking along an imaginary object, or the image of a position is presented by marking it on an imaginary map. In these *form presentation* and *spatial relation presentation* gestures, tracing, palpating, and marking a position are used as techniques to present a form or a position. The gestural message is "This is a square" or "Here is the location". In this case, the gestures are coded with the Function values *form presentation* or *spatial relation presentation,* and the Supplementary category Technique of Presentation is applied to register whether *tracing, palpating,* or *marking a position* was used to create the motion image of the form or of the position. These gestures do **not** have the Function of a *pantomime*, i.e., the gestural message is **not** "I am tracing/drawing", "I am palpating/sculpturing", or "I am marking a position".

If, however, the gesturer intends to *pantomime* the actions of tracing, palpating, or marking a position, the tracing, palpating, or marking a position are displayed with dynamics. There is a variation in the effort qualities, because the emphasis is on the action per se, i.e., **how** it is performed. As an example, the *pantomime* of palpating could be performed lightly, sustained, indirectly as if stroking along something precious. Or, it could be performed with strength as if working with plastic modelling material. In contrast, if tracing, palpating, or marking a position are used merely as Techniques of Presentation, the focus is on the end product, i.e., the image of a form or a location on an imaginary map, and there is no variation in the effort qualities.

- *emblems*: *Emblems* share with *pantomimes* that they can be used to communicate without words.

However, *emblems* differ in several aspects from *pantomimes:*


Action *emblems* have to be differentiated from *pantomime* gestures that show a similar movement form. As an example, a gesture that demonstrates the action of sweeping away dirt from the clothes (*pantomime*) has to be differentiated from an *emblem* gesture that sweeps away imaginary dirt from the shoulder (this location is part of the conventionalization) and that indicates contempt. While *pantomimes* demonstrate almost exclusively concrete actions (e.g. acting as if brushing the teeth), action *emblems* often are signs for symbolic actions, e.g. throwing a kiss (sympathy, affection), tearing the hair out (desperation), cutting the throat (anger, hate, the end), throwing up (strong dislike), my hands are tied (helplessness), firing at someone (anger, hate). In all these examples of action *emblems*, the movement form is highly conventionalized and the meaning is novel, i.e., it differs from the meaning that a simple *egocentric deictic, egocentric direction, pantomime, form presentation,* or *motion quality presentation* gesture would have. There are, however, a few *emblems* in which only the movement form is conventionalized, e.g. the telephone sign, but the meaning is that of the concrete action, i.e., the meaning does not differ from that of an equivalent *pantomime* gesture.

#### **8.4.6** *form presentation*

#### **Short definition**

#### CREATING A FORM

#### **Main group**

The three Function values *form presentation, spatial relation presentation,* and *motion quality presentation* have in common that they present something: a form, a spatial relation, or a motion quality.

The term *presentation*, which is shared by the three values, is used in delimitation to the term re-presentation as defined by Freedman (1972, p. 159): "… a *representational hand movement* in which either an abstract idea, or an image having clearly definable space and time referents, is given motor expression." (see also book I, chapter 3.4). In contrast to Freedman's conceptualization, the term *presentation* focuses on the creative act of the gestural expression and it implies that the gesture is not a mere reflection of a cognitive concept but that the formation of the gesture is a creative process that, in turn, can also affect the cognitive concept (book I, chapter 2.1). The term *presentation* also places the emphasis on what can actually be seen in the hand movement rather than on assumptions about the underlying cognitive concept. As examples, the hand is formed to a round shape and thereby presents a round shape, the two hands are placed at different positions in the gesture/action space and thereby a spatial relation emerges between the two hands, or the hand opens and closes with strength and thereby a motion quality is presented. In contrast, the term re-presentation refers to the assumed referent, e.g. the hand re-presents a ball, the two hands show the distance between two churches in a city, or the hand re-presents the movement of hand-bellows.

In *form presentation* and *spatial relation presentation* gestures, the form or the spatial relation may be presented in a static complex phase (see definition in 4.3). As examples, the hand adopts a certain shape that is held for a while or the two hands present a spatial relation by adopting static positions. Thus, the information is conveyed by a **still image**, i.e., a photo could capture the relevant information. However, likewise, in *form presentation, spatial relation presentation,* and *motion quality presentation* gestures, the information can be conveyed in a motion complex phase (see 4.3). As examples, the image of a form is created by tracing the shape of the imaginary object (Technique of Presentation value *tracing)* or by stroking along the imaginary object (Technique of Presentation value *palpating),* or a spatial relation is presented by two sequential gestures (here… and there), or a motion quality is depicted. These kinds of gestures create a **motion image** (in delimitation to a still image). The creation of still and motion images is based on mental images that are grounded in multi-modally stored mental representations in memory structures (e.g. Kosslyn, 1980). In turn, the addressee recognizes the still and motion images, which the gesturer presents, based on her/his own stored mental representations. In case of motion images, the addressee has to follow the gesturer's hand movements and to memorize the path of the movements in order to recognize the final form or spatial relation. This may imply that the addressee mentally tunes into the gesturer's creative movement and that (s)he uses his/her own sensori-motor experience to comprehend the motion image. The mental recording of the movement path (as if the path were materialized) results in the motion image.

The NEUROGES® definitions of the three *presentation* values imply a **hierarchy** between them: *motion quality presentation* > *spatial relation presentation > form presentation*. A *motion quality presentation* gesture may include information about a form and a spatial relation (how what moves where). A *spatial relation presentation* may include information about a form (what is where). A *form presentation* is the most basic *presentation* value as it only includes information about the form.
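For illustration, the hierarchy can be read as a simple precedence rule for choosing the coded Function value when a unit combines several presentational components. The following Python sketch is not part of the NEUROGES® system; the function name and boolean flags are hypothetical, introduced only to make the rule explicit:

```python
def code_presentation_value(has_motion_quality: bool,
                            has_spatial_relation: bool,
                            has_form: bool):
    """Illustrative sketch of the NEUROGES presentation hierarchy:
    motion quality presentation > spatial relation presentation
    > form presentation. The highest-ranking component that is
    present determines the coded Function value; lower-ranking
    information is registered with Supplementary categories."""
    if has_motion_quality:
        return "motion quality presentation"
    if has_spatial_relation:
        return "spatial relation presentation"
    if has_form:
        return "form presentation"
    return None

# A form presented at a distinct location combines form and spatial
# information, so the unit is coded as spatial relation presentation:
print(code_presentation_value(False, True, True))
```

Under this reading, a unit that shows a form in motion along a route would be coded as *motion quality presentation*, with the form and spatial information registered separately.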

#### **Definition**

**Function:** The *form presentation* gesture creates a form.

"Form refers to the shape, visual appearance, and configuration of an object." (Wikipedia). "The shape and structure of anything, as distinguished from the material of which it is composed." (Webster's Dictionary).

The *form presentation* gesture presents (only) a form. The *form presentation* gesture includes **no** information about where the form is situated or about what is done with the form.

**Movement form:** There are different techniques to create the still or motion image of a form: (i) *hand-as-object:* the hand creates a still image of a form by embodying the form, e.g. it is formed to a fist (and thereby embodies a round object); (ii) *enclosure:* the hand creates a still image of a form by enclosing an empty space; thus, the configuration and width of the hand aperture provide information about the form, e.g. the hands shape a triangle (enclose an imaginary triangle); (iii) *tracing*: the hand creates a motion image of a form by tracing a closed contour, e.g. it traces a square; (iv) *palpating*: the hand creates a motion image of a form by palpating an empty space; thus, the configuration and width of the hand aperture and the movement path provide information about the form, e.g. the hands palpate a three-dimensional sculpture.

Independent of the technique that is chosen to create the form, *form presentation* gestures in general are characterized by the fact that the gesture is displayed in the central gesture space right in front of the body midline. The central position is a spatially quasi-neutral position, as no specific spatial position is taken. Only in exceptional cases are pure *form presentations*, i.e., those that are not embedded in a *spatial relation presentation* (see below), **not** executed in the body midline gesture space. For example, obvious external circumstances, such as orientation to the interactive partner or lack of space, may lead the gesturer to choose another location in the gesture space. As an example, the gesturer addresses a person who sits behind him and performs a *form presentation* gesture above his right shoulder (under normal circumstances, s/he would have displayed this gesture in the central body midline gesture space). Furthermore, in *form presentation* gestures there is often bilateral hand use, as the use of both hands facilitates the creation of complex forms. There is an invariant effort quality use, as information about a static form and **not** about dynamic processes shall be conveyed. Most often, the *form presentation* gesture is performed with a constantly bound flow and direct use of space, in order to provide a precise delineation of the form.

	- (i) *material:* e.g. a ball, the height of a child;


#### **Meeting the criteria**



#### **Differentiate** *form presentation* **from**…


the gesturer to display the *form presentation* gesture at an unusual location in the gesture space.

 In a *spatial relation presentation*, **two** points (exceptions see 8.4.7) are presented and the focus is on the relation **between** these two points. In *form presentation*, **one** form is presented. Confusion is most likely to arise, when the two hands are used to present a form. Here, the two values can best be distinguished by the hand orientation. In a bimanual *form presentation* unit, both hands are oriented towards the center of the imaginary object, i.e., the palms are oriented **towards each other**. In contrast, in a bimanual *spatial relation* unit, in which the spatial relation is created by the position of the two hands, both hands orient **towards the same plane**, in most cases the fingertips of both hands are oriented downwards.


Furthermore, a *motion quality presentation* obligatorily has a path during the complex phase, while *form presentations* have a path during the complex phase, i.e., a motion complex phase, only when the Technique of Presentation is *tracing* or *palpating*. *Motion quality presentations* often have a *repetitive* Structure in order to demonstrate the manner of a movement, while *form presentation* gestures are typically *phasic*.


extended index and middle fingers. The fixed meaning is linked to a fixed form. In contrast, in *form presentation* gestures, information about a specific form can be conveyed with many different movement forms, e.g. four different Techniques of Presentation may be used to present a round form. While *form presentations* inform primarily about a form, *emblems* that show forms have a meaning that is beyond the mere form information, e.g. the T-sign does not mean T but it means Time out.

Furthermore, *emblems* are always executed explicitly, i.e., intentionally, while *form presentations*, especially with the Referent value *non-material*, may be executed implicitly. As an example, the gesturer may, without being aware of it, show a *form presentation* with a big size, e.g. reflecting his/her impression of a big problem.

### **8.4.7** *spatial relation presentation*

#### **Short definition**

#### CREATING A SPATIAL RELATION

#### **Main group**

see *form presentation*

#### **Definition**

**Function:** *Spatial relation presentation* gestures create a spatial relation.

Space is defined as "Extension, considered independently of anything which it may contain", "the unlimited expanse in which everything is located" (Webster's Dictionary); "Space is the boundless, three-dimensional extent in which objects and events (motions) occur and have relative position and direction." (Wikipedia). "... space was a collection of relations between objects, given by their distance and direction from one another…" (Wikipedia). The latter definitions underline the importance of spatial relations to define space. Accordingly, if a gesturer provides spatial information, (s)he needs to establish in gesture a spatial relation. As a location in space can only be defined relative to another, in gesture at least two points have to be created. Thus, the *spatial relation presentation* gesture creates an imaginary spatial map by setting positions or outlining routes. A *spatial relation presentation* gesture may include forms (what is where on the imaginary map).

The *spatial relation presentation* gesture differs from *egocentric deictics* and *egocentric directions* in which a location or a direction is not created but only referred to. Thus, if the gesturer refers to her/his actual (or imaginary) surroundings, s/he does not need to create locations, but can simply refer to them by using *egocentric deictics* and *egocentric directions.*

Another aspect regarding the difference between *spatial relation presentations* and *egocentric deictics* / *directions* concerns the cognitive perspective. In *egocentric deictics* / *directions* the gesturer obligatorily applies an egocentric frame of reference by using her/himself as the point of spatial reference (see 8.4.3 and 8.4.4). In contrast, in *spatial relation presentations*, especially those with a material Referent, there is often a mento-heliocentric perspective (see 8.3, paragraph Cognitive Perspective). The gesturer adopts the cognitive perspective of looking down on an imaginary map that s/he creates by setting positions and routes.

In rare cases, the gesturer creates an imaginary spatial map and then refers to specific locations by pointing or indicating a direction on this map while keeping the mento-heliocentric perspective. While these gestures share many features of movement form with *egocentric deictics* / *directions* (see below Differentiate…), they are coded as *spatial relation presentations* because they are based on a mento-heliocentric cognitive perspective.

	- (α) bimanual unit, Contact value *act apart,* Formal Relation values *symmetrical* or *asymmetrical*: The two hands act simultaneously to present the spatial relation. As an example, one hand is placed on the right of the gesturer's body midline and the other hand on the left.
	- (β) unimanual unit, Structure value *repetitive:* One hand acts sequentially to present the spatial relation. The hand is first positioned on the gesturer's right and then on her/his left. The addressee recognizes the spatial relation as (s)he memorizes the first position of the hand in the gesture space and relates it – as if it were materialized – to the second position.
	- (γ) unimanual unit, Structure value *phasic:* The gesturer's body midline is used as an implicit point of reference (Lausberg et al., 2003). The hand only presents one position in the gesture space. The addressee recognizes the spatial relation as (s)he relates the position of the gesturer's hand to her/his body midline. However, in this technique the positioning of the one hand has to be spatially very distinct such that it becomes clear that the position of the hand is not incidental.

While the above techniques refer to points in space, lines in space are presented by a steady displacement of the hand in the gesture space, e.g. when representing a route in a landscape.

The points and lines establish spatial relations and thereby create two- or three-dimensional spaces. Two-dimensional spaces (maps) are often projected onto the horizontal plane. In that case, the longitudinal hand axis (wrist to fingertips) is in line with the vertical axis. If the imaginary map is projected to the frontal plane, the longitudinal hand axis is in line with the sagittal axis.

**Hierarchy:** A *spatial relation presentation* may be embedded in a *motion quality presentation*, and it may itself include a *form presentation*.

*Spatial relation presentation > form presentation*: Per definition, a *form presentation* does not include spatial information, but a *spatial relation presentation* gesture (where) may include information on a form (what is where). If a *spatial relation presentation* includes a *form presentation*, the form is presented at a specific location in the gesture space. As an example, at a location in right gesture space the right hand adopts the shape of a reversed V (e.g. representing a house with a pointed roof on the right in the imaginary spatial map). Thus, the combination of a form and a spatial relation is always coded as *spatial relation presentation.* The included form information can be coded with the Supplementary category Technique of Presentation.

If the *spatial relation presentation* is embedded in a *motion quality presentation* (where [along] something is moving how), the unit is coded as *motion quality presentation.*


Referent: This category enables the rater to register whether (s)he assumes that the referent of the *spatial relation presentation* is


Technique of Presentation: If the *spatial relation presentation* includes a *form presentation*, the Technique of Presentation can be assessed.

#### **Meeting the criteria**




**Occurrence:** *Spatial relation presentation* units were investigated in 91 individuals of the NEUROGES® archive. Right hand *spatial relation presentation* units were displayed by 30 % (27/91), left hand *spatial relation presentation* units by 30 % (27/91), and bimanual *spatial relation presentation* units by 33 % (30/91).


### **Differentiate** *spatial relation presentation* **from**…


As the *motion quality presentation* presents the manner and dynamics of a motion, there is obligatorily a variation in the effort factors and/or a *repetitive* Structure. Thus, the displacement of the hand in the gesture space (in order to depict the path) is obligatorily combined with a variation in the effort factors, e.g. the gestural depiction of the explosion of a volcano, or with a repetitive movement. In the latter case the repetitive trajectory is superimposed on the displacement trajectory, e.g. opening and closing the hand while moving sidewards.

Only if the displacement of the hand is performed without variation in the effort factors and without a superimposed repetitive trajectory is it a mere *spatial relation presentation*. The movement flow is bound such that a precise presentation of spatial relations can be given, e.g. moving the hand from right upper to left lower gesture space, or moving the hand on a curved path.

#### **8.4.8** *motion quality presentation*

#### **Short definition**

#### SHOWING A SPECIFIC QUALITY OF MOVEMENT

#### **Main group**

see *form presentation*

#### **Definition**

	- (i) The effort factors are described in detail in the chapters 4.3 and 11. The distinct presentation of a movement dynamics in a *motion quality presentation* gesture is characterized by the fact that it differs from the gesturer's baseline movement dynamics. Each gesturer has his/her personal style of effort quality use, e.g. one gesturer may have a rather free movement flow in his/her gestures, while the gestural behavior of another gesturer is characterized by directness. Thus, only the distinct presentation of a specific movement dynamics in gesture that differs from the gesturer's habitual gestural pattern constitutes a *motion quality presentation*. Only in exceptional cases, when a monotonous, non-biological movement shall be re-presented, such as the rotation of gears, is a *motion quality presentation* gesture performed without changes in the effort factors. However, in this case, the monotony and invariance of the movement is explicit.


*Motion quality presentation > form presentation*: A *motion quality presentation* gesture (how something moves) may include information on a form (how what moves), i.e., presenting a specific form in motion. As an example, the hand forms a round shape and then moves up and down, e.g. representing a ball that is bouncing.

*Motion quality presentation > spatial relation presentation*: A *motion quality presentation* may include a *spatial relation presentation.* The gesture presents a motion quality that takes place at a specific location or on a specific route in an imaginary space. As examples, the gesture shows a circular motion from left upper to right lower gesture space (e.g. representing something rolling down a hill), or a bimanual symmetrical quick and strong movement upward and outward (e.g. representing an explosion).

*Motion quality presentation > spatial relation presentation + form presentation*: A *motion quality presentation* may include a *spatial relation presentation* and a *form presentation.* As an example, the index and middle finger (*form presentation* representing legs of a human being) move alternately (*motion quality presentation* representing walking) on a path from the lower left to the upper right gesture space (*spatial relation presentation* representing a path in the mountains).

While these combinations are coded as *motion quality presentation*, the included form information can be coded with the Supplementary category Technique of Presentation and the included spatial relation information with the Supplementary categories Target Location and Execution Hemi-Space.

	- (i) *material*: e.g. a ball rolling, a volcano exploding;

**Note**: The *quality of motion* can be noted.

#### **Meeting the criteria**




### **Differentiate** *motion quality presentation* **from**…


In contrast, in *motion quality presentation* gestures many different movement forms can be used to present a certain motion quality. In addition, the gesture may be executed implicitly, e.g. the gesturer, without intention, displays a waggling gesture, reflecting that s/he is not sure about what (s)he is saying.

### **8.4.9** *object-oriented action*

### **Short definition**

### CHANGING THE EXTERNAL PHYSICAL WORLD

### **Main group**

The two Function values *object-oriented action* and *subject-oriented action* have in common that they register actions.

Action is defined as "… exertion of power or force, as when one body acts on another" (Webster's dictionary). The exertion of power on something results in a change of that thing. Given this definition, *phasic* and *repetitive* units may be actions as the complex phase can serve to change something. In contrast to gestures, which are classified with the Function values *emphasis, egocentric deictic, egocentric direction, pantomime, form presentation, spatial relation presentation, motion quality presentation, emblem,* and *emotion/attitude* (learned display) and which are merely expressions, actions result in changes in the physical world. The Focus value helps to differentiate between gestures and actions, since there is a high likelihood that units with the Focus value *in space* are gestures and those with the Focus values *within body, on body, on attached object, on separate object,* and *on person* are actions. The exceptions are *shrugs* (per definition: *within body*), and *on body body-deictics* and *emblems*, which include touching the body. These are gestures although they are not *in space*.
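The Focus-based heuristic can be sketched as a small decision rule. The following Python fragment is purely illustrative; the function and argument names are hypothetical and not part of the NEUROGES® terminology:

```python
# Focus values that, per the heuristic, point towards gestures vs. actions.
GESTURE_FOCUS = {"in space"}
ACTION_FOCUS = {"within body", "on body", "on attached object",
                "on separate object", "on person"}

def likely_unit_class(focus: str, is_shrug: bool = False,
                      is_body_deictic: bool = False,
                      is_emblem: bool = False) -> str:
    """Heuristic only: 'in space' units are likely gestures; body-,
    object-, and person-focused units are likely actions. Exceptions:
    shrugs (per definition 'within body') and on-body body-deictics
    and emblems are gestures although they are not 'in space'."""
    if is_shrug or (focus == "on body" and (is_body_deictic or is_emblem)):
        return "gesture"
    if focus in GESTURE_FOCUS:
        return "gesture"
    return "action"

print(likely_unit_class("on separate object"))          # → action
print(likely_unit_class("within body", is_shrug=True))  # → gesture
```

As the manual stresses, this is a likelihood statement, not a deterministic rule; the final classification rests with the rater.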

#### **Definition**

**Function:** The term *object-oriented* refers to all material things that are outside the gesturer's body.

Typical hand movements that cause changes in the external physical world are **tool-specific** manipulation of tools (praxis), e.g. winding up a watch results in a change of the state of the watch, writing results in a written text on a piece of paper, hammering results in a change of the position of a nail, taking a pen out of the pocket and putting it on the table results in a change in the position of a pen, or combing another person's hair results in a different order of the hair. **Tool-unspecific** manipulations, e.g. hammering with scissors, however, are only in rare cases *object-oriented actions*. Mostly, tool-unspecific actions, such as tapping with a pen (when stressed), are *subject-oriented actions*.

It is obvious that *object-oriented actions* often have the Focus values *on separate object, on attached object,* or *on person*. *Object-oriented actions* with the Focus value *in space* are an absolute rarity. In this case, the air or the water is the physical substrate that is manipulated, e.g. fanning air with the hand or swimming in the water.

Non-material changes in the external world such as changes in social relations are **not** given the value *object-oriented action*.


**Note**: The action can be noted.

#### **Meeting the criteria**


**Occurrence:** As *object-oriented actions* were rarely elicited or investigated in the experimental studies included in the NEUROGES® archive, data on *object-oriented action* units are available only for 53 individuals. Right hand *object-oriented action* units were displayed by 0 % (0/53), left hand *object-oriented action* units by 15 % (8/53), and bimanual *object-oriented action* units by 0 % (0/53).


#### **Differentiate** *object-oriented action* **from**…

- *subject-oriented action*: *Object-oriented actions* may be difficult to distinguish from those *subject-oriented actions* that were originally object-oriented and in the individual development have become subject-oriented (compare Darwin, 1872, 2009, p. 38: "The principle of serviceable associated Habits"). In this case, the action no longer serves its original function, e.g. shifting a chair to improve vision on the scene, but it becomes a habit that is triggered by certain mental states, e.g. shifting a chair to deal with embarrassment.

As a general orientation, in *subject-oriented actions* the gaze is not directed at the hand, whereas in *object-oriented actions* it is. Furthermore, the inappropriate, tool-unspecific use of an (attached or separate) object is typically indicative of a *subject-oriented action*, while the appropriate, tool-specific use indicates an *object-oriented action*. *Subject-oriented actions* are displayed repeatedly (and seemingly unmotivated) by an individual, whereas *object-oriented actions* are displayed only if the situation requires them. More specific criteria to distinguish between *subject-oriented actions* and *object-oriented actions* are reported separately for the different Structure/Focus values:


<sup>18</sup> In the Focus coding, putting on glasses is coded as *on object attached*, as the goal of the action is to attach the object to the body, while taking off glasses is coded as *on object separate*, as the goal of the action is to separate the object from the body.

needs, e.g. taking off glasses to read a text in small print, or they can serve to change the visual appearance, e.g. taking off glasses to look more attractive.


### **8.4.10** *subject-oriented action*

### **Short definition**

#### CHANGING THE OWN PHYSICAL (AND SECONDARILY MENTAL) STATE

#### **Main group**

see *object-oriented action*

#### **Definition**

	- (i) *Subject-oriented actions* that aim at changing body states give relief from unpleasant physical states or produce pleasant states. They are reactions to somatosensory perceptions. Perceiving pain, being cold, being hot, etc. triggers actions to regulate the body state. Thus, the action has a clearly identifiable effect on the body, e.g. getting warm, improving vision.

as *subject-oriented action*. In general, the gesturer intends to improve his/ her appearance in order to look more attractive (only occasionally, in shy individuals or in specific situations these actions may serve to look less attractive). Thus, most often these *subject-oriented actions* are preening behavior and they have a social effect. They may be reactions to deviations in the visual appearance, e.g. if the tie is not straight or if the hair is not in place. However, these *subject-oriented actions* may also be displayed if no corrections of the visual appearance are necessary. In this case, they are rather displayed as an appeasement behavior indicating to the addressee that the gesturer wants to please him/her. These appeasement actions are often stereotypical, e.g. repeatedly stroking the hair behind the ear even if actually the hair is in place.


the Supplementary categories Target Location and Trigger/Motive.


**Note**: Note the type of action, e.g. scratching.

#### **Meeting the criteria**



#### **Differentiate** *subject-oriented action* **from**…


### **8.4.11** *emblem/social convention*

The NEUROGES® values have proven to represent universal kinesic and gestural phenomena. All NEUROGES® values occur in all cultures<sup>19</sup> on five continents (Germans, British, US Americans, francophone and anglophone Canadians, Swiss, Koreans, Kenyans, and Papuans) that have been investigated so far in the NEUROGES® archive. Thus, all cultures also show the Function value *emblem / social convention*, but this is the only NEUROGES® value that requires knowledge of the culture.

*Emblems* and *social conventions* have fixed form – meaning links that are based on cultural conventions. Therefore, the Function value *emblem / social convention* can only be assessed reliably by a rater who is familiar with the gesturer's (sub-)culture, as the rater has to know the gesturer's cultural repertoire of *emblems* and *social conventions* in order to identify these gestures and actions. Therefore, observers who are not familiar with the gesturer's cultural repertoire of *emblems* and *social conventions* proceed by applying the other Function values, as any *emblem* or *social convention* can be described with one of the other Function values. Only the information is lost that the gesture or action is conventionalized and that it has a fixed meaning and/or a specific social context.

### *8.4.11.1* emblem

#### **Short definition**

### USING CULTURE-SPECIFIC HAND SIGNS WITH CONVENTIONALIZED ARBITRARY MEANINGS

### **Definition**

**Function:** The Function value *emblem / social convention* differs from the other Function values for gestures, as it provides no information about the content of the gestural depiction but only the formal information that form and meaning of the gesture are conventionalized. The Function values *egocentric deictic, egocentric direction, pantomime, form presentation, spatial relation presentation,* and *motion quality presentation* provide information on **what** is displayed, indicated, pantomimed, or presented (an emotion, an accent, a location, a direction, an action, a form, a spatial relation, or a motion quality). In contrast, the Function value *emblem / social convention* only refers to the aspect of conventionalization. Therefore, any gesture that is coded as an *emblem / social convention* can likewise be classified with one of the other Function values according to its content.

<sup>19</sup> While all NEUROGES® values occur in all cultures, the cultures differ concerning the frequency of the display of the NEUROGES® values, e.g. in response to the same stimulus, Koreans display more *egocentric deictics* (number/minute) than Germans (Kim & Lausberg, 2017).

In NEUROGES®, at least the first three criteria have to be fulfilled to classify a gesture as an *emblem*:


<sup>20</sup> As an exception, in certain mental states the gesturer may not be aware of the display of the *emblem* (or may at least pretend not to be). This applies typically to obscene or insulting *emblems*.

(v) novel / arbitrary meaning: *Emblems* that contain pointing or directing, pantomiming, depicting of a form, or a motion quality have to be differentiated from *egocentric deictic, egocentric direction, pantomime, form presentation,* or *motion quality presentation* gestures that show a similar movement form. In the respective *emblems*, the movement form is highly conventionalized and the meaning is novel, i.e., it differs from the meaning that a genuine *egocentric deictic, egocentric direction, pantomime, form presentation,* or *motion quality presentation* gesture would have. Examples:

A gesture that presents the form of a T (*form presentation*) has to be differentiated from an *emblem* gesture that shows the form of a T in a highly conventionalized form and that has the novel meaning 'Time out'. Criterion (iv), relating to the addressee, is also fulfilled: the hand is typically moved in the direction of the addressee or into the upper gesture space to be visible to the addressee, whereas for the mere depiction of a T-shape in a *form presentation* gesture the hand typically remains in the central gesture space in front of the body midline.

A gesture that points to the temple in order to indicate the temple (*egocentric deictic*) has to be differentiated from an *emblem* gesture that points to the temple in a highly conventionalized repetitive manner and that has the novel meaning 'somebody is nuts'.

A gesture that presents the action of sweeping away dirt from the clothes (*pantomime*) has to be differentiated from an *emblem* gesture that sweeps away imaginary dirt from the shoulder (this location is part of the conventionalization) and that conveys the novel meaning of contempt.

A gesture that presents a quality of motion (*motion quality presentation*) has to be differentiated from an *emblem* gesture with the conventionalized movement form in which the hand is shaped as if it were a mouth, with the thumb as the lower jaw and the other four fingers as the upper jaw, and which opens and closes with the novel meaning '… is a blabbermouth'.

**Movement form:** Each *emblem* has its specific movement form (see 9.4.9). However, most *emblems* share the following features: The hand movement is distally distinct. There is a specific hand shape, a specific hand orientation, and a specific location in the gesture space. With regard to the movement form, *emblems* can be grouped according to their StructureFocus values, e.g. *phasic in space emblems* (e.g. Victory sign), *phasic on body emblems* (e.g. Blockhead), *repetitive in space emblems* (e.g. wagging), *repetitive on body emblems* (e.g. tapping on the temple to indicate that somebody is nuts), or *repetitive on attached object emblems* (e.g. tapping repetitively on the watch to indicate that you want to know what time it is or to indicate that the time has to be kept in mind).

**Types / Notes**: Since there is a large number of different *emblems* within one culture, and even more so across cultures, it is not possible to list the single *emblems* in the NEUROGES®-template. In chapter 9.4.9, a list of *emblem* gestures for the Western part of Germany is provided. However, given the (sub)cultural differences in the use of *emblems*, it is recommended that each research group set up its own list of *emblem* gestures based on the criteria defined above.


#### **Meeting the criteria**


#### **Differentiate** *emblems* **from**…


### *8.4.11.2* social convention

### **Short definition**

#### CONVENTIONALIZED ACTIONS IN SPECIFIC SOCIAL CONTEXTS

#### **Definition**


recommended that based on the above defined criteria each research group sets up its own list of *social conventions*.

### **Meeting the criteria**


**Occurrence, frequency and duration:** *Social convention* behaviors were not investigated in the empirical studies that build the NEUROGES® archive.

#### **Differentiate** *social conventions* **from**…


### **8.4.12 Special template value** *different functions*

This value can only be used for *asymmetrical* bimanual 'to-be-coded' Function units in which the right and left hands display different Function values, e.g. the right hand a *form presentation* and the left hand a *motion quality presentation*. The value *different functions* reveals that the individual shows complex bimanual performances with independent action of the right and left hands.

For some research questions, for example if the distinct functions of the right and left hands in unimanual and bimanual performances are to be examined, it is recommended to double-code in order not to lose information: in the example, in addition to the value *different functions* on the tier bh\_Function, the value *form presentation* is coded on the tier rh\_Function and the value *motion quality presentation* on the tier lh\_Function.

## **8.5 Procedure for Step 6 / Module III in NEUROGES®–ELAN**

### **8.5.1 Copying units from preceding tiers**

In order to generate the 'to-be-coded' Function units for the Function analysis, the units from the following previously created tiers are copied:


As an example, the generation of the tier bh\_Function\_R0 by copying the units from the tier bh\_Formal Relation\_R0 is described here:

Open the eaf file, which contains the Formal Relation units, then proceed as follows:

Apply the function: Tier > Copy tier.

Select a tier to copy: bh\_Formal Relation\_RX.

Next.

Select a new parent tier: skip this step.

Next.

Select another linguistic type: click on Function.

Finish.

Apply the function: Tier > Change Tier Attributes.

Scroll down in the list to the end:

Click on bh\_FormalRelation\_R0-cp.

In the field Tier Name change the tier name to: bh\_Function\_RX ('RX' = your initials).

(In the field Participant enter the identification of the videotaped person whose behavior you are going to code.)

(In the field Annotator enter your name.)

Press the button Change. Close.

Proceed analogously for all other Function tiers – as listed in the table above – that you intend to analyze.

### **8.5.2 Selecting the copied units with a** *phasic* **or** *repetitive* **Structure**

The following procedure applies to the newly created tiers rh\_Function\_RX, lh\_Function\_RX, rf\_Function\_RX, lf\_Function\_RX, head\_Function\_RX, and trunk\_Function\_RX.

Among the copied units, all those that have the Structure values *irregular*, *shift*, or *aborted* are deleted. Since the newly created 'to-be-coded' Function units still carry the concatenated StructureFocus values, you have the information necessary to delete all units with an *irregular*, *shift*, or *aborted* Structure.

As an example, the procedure of deletion is described here for the tier rh\_Function\_RX:

In the grid, choose the tier rh\_Function\_RX.

In the column Annotation, click on the first 'to-be-coded' Function unit.

If the unit has an *irregular, shift,* or *aborted* Structure, go to Annotation > Delete Annotation.

In the grid, choose the tier lh\_Function\_RX.

In the column Annotation, click on the first 'to-be-coded' Function unit.

If the unit has an *irregular, shift,* or *aborted* Structure, go to Annotation > Delete Annotation.

Proceed analogously for all other Function tiers – as listed above – that you intend to analyze (for bh\_Function\_RX and bf\_Function\_RX, which are based on Formal Relation units, the deletion procedure is not necessary, since it has already been conducted during the generation of the Formal Relation units, see 7.5.1.2).
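The deletion logic above can be expressed as a minimal Python sketch. This is not part of the NEUROGES®-ELAN tool chain (in practice the step is performed in the ELAN GUI as described); the function and data names are illustrative, and units are represented simply as (start\_ms, end\_ms, value) tuples whose value still carries the concatenated StructureFocus label:

```python
# Hypothetical sketch of the 8.5.2 deletion step: keep only copied units
# whose Structure value is not irregular, shift, or aborted.
EXCLUDED = ("irregular", "shift", "aborted")

def keep_codable_units(units):
    """Return the 'to-be-coded' Function units after the deletion step."""
    return [u for u in units if not u[2].startswith(EXCLUDED)]

# Illustrative rh_Function tier content (not real archive data).
rh_function = [
    (0, 1200, "phasic in space"),
    (1500, 2100, "irregular within body"),
    (2500, 3900, "repetitive on body"),
    (4000, 4300, "aborted"),
]
print(keep_codable_units(rh_function))
# only the phasic and repetitive units remain
```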

### **8.5.3 Assessing the 'to-be-coded' Function units**

While in Module I all right hand units were coded first, and then all left hand units, in Module III the coding procedure follows the chronology of the units on the three tiers. Thus, in the order of their occurrence, the units on the three Function tiers are coded one after the other, e.g. rh unit, lh unit, bh unit, bh unit, rh unit, etc.

The new units of the tiers bh\_Function\_RX, rh\_Function\_RX, lh\_Function\_RX, bf\_Function\_RX, rf\_Function\_RX, lf\_Function\_RX, head\_Function\_RX, and trunk\_Function\_RX are now coded with the following values:

*emotion/attitude*

*emphasis*

*egocentric deictic*

*egocentric direction*

*pantomime*

*form presentation*

*spatial relation presentation*

*motion quality presentation*

*subject-oriented action*

*object-oriented action*

*emblem / social convention*

*different functions*

*(prep-retract)*

*?* (see 4.5.2)

Note the special rules for bilateral Function units. The 'to-be-coded' units on the tier bh\_Function\_RX still have the Formal Relation values. The Formal Relation value determines the Function assessment:

*symmetrical* ⇒ The Function value refers to the joint performance of both hands.

*rh dominance* ⇒ The Function value is determined by the performance of the right hand.

*lh dominance* ⇒ The Function value is determined by the performance of the left hand.

*asymmetrical* ⇒ The Function value refers to the joint performance of both hands. If, however, the two hands have different Function values, use the template value (*different functions)* and note in the tier Notes what the two Functions are.

If a Function value changes within a 'to-be-coded' Function unit, replace the old unit by the new subunits (see 4.2).
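The bilateral coding rules above can be summarized in a minimal Python sketch. This is an illustration, not part of the NEUROGES®-ELAN template; the function name and the assumption that a rater has already coded a Function value per hand are hypothetical:

```python
# Hypothetical sketch of the 8.5.3 rules: how the Formal Relation value of a
# bilateral unit determines its Function assessment.
def bilateral_function(formal_relation, rh_value, lh_value):
    if formal_relation == "rh dominance":
        return rh_value                       # right hand determines the value
    if formal_relation == "lh dominance":
        return lh_value                       # left hand determines the value
    if formal_relation == "symmetrical":
        return rh_value                       # joint performance: one shared value
    if formal_relation == "asymmetrical":
        # joint value if the hands agree; otherwise the special template value
        # (the two Functions would then be noted on the tier Notes)
        return rh_value if rh_value == lh_value else "different functions"
    raise ValueError(f"unknown Formal Relation value: {formal_relation}")

print(bilateral_function("asymmetrical",
                         "form presentation", "motion quality presentation"))
# -> different functions
```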

### **8.5.4 Alternative procedure: Manual generation of 'to-be-coded' Function units**

If you start with the Function category, i.e., you have not assessed Modules I and II before, use the alternative procedure of manual unit generation. In this procedure, the tiers bh\_Function\_R0, rh\_Function\_R0, lh\_Function\_R0, bf\_Function\_R0, rf\_Function\_R0, lf\_Function\_R0, head\_Function\_R0, and trunk\_Function\_R0 are used that are provided in the template. Directly tag all body movements that show a concept realization according to the definitions of *phasic* and *repetitive* units (see 4.3, 4.4.2, 4.4.3). For the limbs, the units have to be differentiated as unilateral right, unilateral left, and bilateral. Unilateral limb units are units in which one limb moves while the other limb rests. Bilateral units are units in which both limbs move simultaneously (compare the definitions given in III). The bilateral units should first be classified with the Formal Relation values (see 8.5.3). The 'to-be-coded' Function units are assessed according to the rules described in 8.5.3.

### **8.5.5 Optional: Concatenation of the Function values with the values of the preceding categories**

In order to achieve complex values that include the assessments of the previous categories, concatenation procedures can be conducted. The most fine-grained unit segmentation of the Function tier is automatically adopted.

The units of the tiers rh\_Function\_RX, lh\_Function\_RX, rf\_Function\_RX, lf\_Function\_RX can be concatenated with the corresponding units of the Unilateral\_StructureFocus\_RX tiers.

The units of the tiers head\_Function\_RX and trunk\_Function\_RX can be concatenated with the corresponding units of the Structure\_RX tiers.

The units of the tiers bh\_Function\_RX and bf\_Function\_RX can be concatenated with the corresponding units of the StructureFocusContactFormalRelation tiers.

As an example, the procedure is described here for the latter concatenation. This procedure can be conducted for multiple files at a time.

In order to be able to use the time-saving Multiple files processing function in ELAN, it is absolutely **crucial that the tier names are written correctly in all eafs**. Small deviations in the spelling of the tier names, such as a capital letter instead of a small letter, render the Multiple files processing function ineffective.
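Before starting the Multiple files processing run, it can be worth checking the tier names programmatically. The following Python sketch is hypothetical (file names and tier lists are illustrative; in practice the names would be read from the eaf files, which are XML):

```python
# Hypothetical sketch: verify that a required tier name is spelled identically
# in every eaf of the domain, since any deviation breaks the batch processing.
required = "bh_StructureFocusContactFormalRelation_RX"

# Tier names per file (illustrative; normally extracted from the eaf XML).
files = {
    "p01.eaf": ["bh_StructureFocusContactFormalRelation_RX", "bh_Function_RX"],
    "p02.eaf": ["bh_structureFocusContactFormalRelation_RX", "bh_Function_RX"],
}

# Files in which the required tier name is missing or misspelled.
bad = [name for name, tiers in files.items() if required not in tiers]
print(bad)  # here: p02.eaf deviates (lowercase 's' in 'structure')
```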

File > Multiple files processing > Annotations from Overlaps.

Step 1/4: File and Tier Selection.

Select files to use in overlaps computation:

Select files from domain. Click on the button Domain.

If you have not yet defined a domain, press the button New Domain > Specify New Domain > Add the Folder.

If you had already defined a domain > Select an existing domain > Load.

Select tiers to use for computation:

bh\_StructureFocusContactFormalRelation\_RX and bh\_Function\_RX. Next.

Step 2/4: Overlaps Computation Criteria.

Create annotation when annotations overlap:

 regardless of their annotation values. Next.

Step 3/4: Destination Tier Name Specification.

Enter name for destination tier: bh\_StructureFocusContactFormalRelationFunction\_RX (Caution: correct spelling, or choose another term).

Destination tier is a root tier.

Select a linguistic type for destination tier: click on Notes. Next.

Step 4/4: Destination Tier Value Specification.

Concatenate the values of the annotations.

Compute values in the selected tier order:

Establish the following order by pressing **^**:

**first** bh\_StructureFocusContactFormalRelation\_RX and **second** bh\_Function\_RX. Finish.

Now you have a new tier bh\_StructureFocusContactFormalRelationFunction\_RX that contains bimanual units with the Structure, Focus, Contact, Formal Relation and Function values.

If you want to conduct the concatenation procedure for one file only, proceed as follows:

Apply the function: Tier > Create Annotations from Overlaps.

Step 1/4: File and Tier Selection.

Select files to use in overlaps computation:

 Use currently opened file. Proceed as described above.
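The computation that ELAN's Annotations from Overlaps function performs can be sketched in a few lines of Python. This is an illustration of the overlap-and-concatenate logic, not ELAN's actual implementation; units are represented as (start\_ms, end\_ms, value) tuples and the tier contents are made up:

```python
# Hypothetical sketch of "Create Annotations from Overlaps": for each pair of
# overlapping units on two tiers, emit a unit spanning the overlap whose value
# concatenates the two values in the selected tier order (first tier first).
def annotations_from_overlaps(tier_a, tier_b):
    out = []
    for a_start, a_end, a_val in tier_a:
        for b_start, b_end, b_val in tier_b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end:  # the two units overlap in time
                out.append((start, end, f"{a_val} {b_val}"))
    return out

structure_focus = [(0, 2000, "phasic in space symmetrical")]
function = [(0, 2000, "form presentation")]
print(annotations_from_overlaps(structure_focus, function))
# one destination unit carrying the concatenated values
```

Because the destination unit spans only the overlap, the most fine-grained segmentation of the two source tiers is adopted automatically, as stated in 8.5.5.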

## **9 The Type category**

### **9.1 Definition of the Type category**

The Type category classifies gestures according to their overall function, as defined in the Function category, and to certain aspects of form and meaning.

*Emotion/attitude* expressions are classified according to direction and weight, and *emphasis* gestures according to formal aspects concerning how the dynamic accent is created. *Egocentric deictics* are specified according to the target and *egocentric direction* gestures according to the absence or presence of an agent who executes the direction. *Pantomime* gestures are classified according to transitivity versus intransitivity. For *form presentation* gestures, shape versus size is specified, for *spatial relation presentation* gestures, route versus position, and for *motion quality presentation* gestures, manner versus dynamics.

The 24 Type values (see Fig. 11) are operationalized by several movement criteria: gesture/action space, path during complex phase, orientation, hand shape, efforts, body involvement, gaze, Structure, Focus, Contact, Formal Relation, and Function (the latter criteria refer to the preceding categories).

For researchers who aim at naming single *emblems*, a list of *emblems* used in West Germany is provided at the end of this chapter.

Tab. 10 provides the short definitions and reliabilities of the Type values. Since gestures are performed most often with the upper limbs and rarely with the lower limbs, the head, and the trunk, and since most researchers focus on the upper limbs, the definitions of the Type values are formulated for the hands. However, the value definitions likewise apply to the other parts of the body.

Researchers who conduct the complete NEUROGES® analysis should note that within the algorithmic structure of NEUROGES®, the Type category differs from the previous categories, as it is a dependent category. While the relation between the previous categories is primarily independent, i.e., in principle any value of the previous category can be combined with any value of the following category, the choice of Type values is determined by the Function value. Thus, as the Type values are typifications of the Function values, the label of a Type value includes the parent Function value, e.g. *emphasis–baton*. Furthermore, as the Type values describe smaller groups of gestures than the Function values, the definition of the movement features of the Type values is more precise and less general than that of the Function values. Therefore, the Type category coding can be used as a control for the Function coding: the precise definitions of the movement features of the Type values make it possible to determine the Type value and thereby to verify whether the correct Function value was chosen.

### **9.2 Generation of the 'to-be-coded' Type units**

#### **9.2.1 Selection of units for the generation of 'to-be-coded' Type units**

The Function units that result from the Step 6 / Module III analysis are adopted for the Type evaluation.

The exceptions are Function units with the values *object-oriented action, subject-oriented action,* and *emblem/social convention*. These are not assessed with the Type category.

The copied Function units are termed 'to-be-coded' Type units and they are further classified with the Type values. As stated above, the Type category is a dependent category. The Function value determines which Type values can be coded.

If there is a change of the Type value within a 'to-be-coded' Type unit, then the change demarcates subunits. With regard to the precise segmentation of a unit into subunits (where to segment the unit), the procedure is described in detail in the Structure Coding Manual.

Researchers who only apply the Type category select all movements that they identify as gestures from the ongoing stream of behavior and thereby generate 'to-be-coded' units.

#### **9.2.2 Alternative generation of 'to-be-coded' Type units**

Researchers who only apply the Type category, i.e., who have not assessed the Function category before, select from the ongoing stream of kinesic behavior all gestures. With very few exceptions, *in space* movements are functionally gestures. Therefore, it is recommended to follow the definitions of *in space* units (see 5.4.6, on the basis of *phasic* and *repetitive* units see 4.3, 4.4.2, 4.4.3). Only rarely, *on body* (or *on attached object* and *on separate object*) movements are functionally gestures, e.g. a self-deictic that includes touching of the sternum.

For the limbs, the units have to be differentiated as unilateral right, unilateral left, and bilateral in order to use the existing Type tiers in the NEUROGES®-ELAN template. Concerning the bilateral units, it is helpful to classify them first with the Formal Relation values, since this pre-assessment facilitates the analysis of the bilateral 'to-be-coded' Type units (see 8.2.2). The 'to-be-coded' Type units are assessed according to the rules described in 9.2.1.


**Tab. 10:** Short definitions of the Type values and their reliabilities

<sup>21</sup> For convenience, in the *egocentric* Function values the term '*egocentric*' is dropped, e.g. *deictic–body*, and in the *presentation* Function values the term '*presentation*', e.g. *form–shape* instead of *form presentation–shape*.


**Tab. 10:** Continued

\* Interrater reliability as measured with EasyDIAg (from Lausberg & Slöetjes, 2016)

### **9.3 Criteria for the definition of the Type values**

The Type values are defined according to the same criteria as the Function values: Structure, Focus, Contact, Formal Relation, gesture/action space, path, hand orientation, hand shape, efforts, body involvement, and gaze. The most frequent associations of specific StructureFocus values with Function and Type values are given in Tab. 11.

### **9.4 Definitions of the Type values**

The definitions of the Type values build up on the definitions of the corresponding Function values. Thus, it is necessary to first read the Function value definitions. The following definitions of the Type values are structured as follows:

First, a **Short definition** is given. A more detailed **Definition** follows that first defines the criterion for the typification and then describes the movement form. In **Notes**, recommendations are given on which qualitative observations to note.

In **Meeting the criteria** the movement features are listed in note form. Reading section 8.3 in the Function category chapter is necessary to understand this paragraph.

Finally, in **Differentiate**…, criteria are provided that help to distinguish Type values that share certain movement features and that, therefore, may be mixed up; many raters have found this paragraph particularly helpful.

#### **9.4.1 Types of the Function value** *emotion/attitude*

Given that there is a controversial scientific discussion about the definition and the number of distinct emotions, the NEUROGES® Type category does not aim at describing movements associated with distinct emotions. Rather, the



Type category classifies basic movement forms associated with emotional experience. The four *emotion/attitude* Type values register the four most frequent expressions of emotions and attitudes in the upper limbs, as identified with the Function value *emotion/attitude*, that have been observed in the NEUROGES® archive. Other emotional expressions can be coded with the special template value *other e-motion*. These four values are characterized by a specific direction in space (here: up, down) and a specific attitude towards weight, according to Laban's effort factor weight: It can be light (overcoming the body weight, e.g. as in happiness), heavy (passive giving in to gravity, e.g. as in sadness), or strong (getting behind the body weight, e.g. as in anger).

### *9.4.1.1* emotion/attitude–rise

### **Short definition**

### DYNAMIC FAST RAISING UP OF THE ARMS

### **Definition**

**Direction and weight:** The arms are raised up against gravity. The direction implies that the body weight overcomes gravity.

**Movement form** (features other than direction and weight):

The emphasis is on the act of moving up. The fast upward movement of the arms typically involves the whole arm rather than only the lower arm. In case of a genuine emotional expression, the movement is typically *phasic*, bilateral and it is accompanied by a postural stretching up, e.g. a little child throwing up the arms with joy.

**Note**: underlying emotion or attitude.

#### **Meeting the criteria**



### **Differentiate** *emotion/attitude–rise* **from...**


### *9.4.1.2* emotion/attitude–fall

### **Short definition**

### LETTING THE ARMS FALL DOWN HEAVILY

### **Definition**

**Direction and weight:** The gesturer lets the (lower) arm fall down heavily. The heavy fall implies giving in to gravity, i.e., no effort is undertaken to resist the force of gravity on the body weight.

### **Movement form:**

The emphasis is on the act of falling. The movement involves the lower arm or the whole arm, rarely only the hand. The heavy fall typically expresses resignation or helplessness. In learned emotional expressions that serve to demonstrate resignation or helplessness, the arm has to be raised first (transport phase) in order to be able to let it fall (complex phase), unless the arm happens to be in a raised position already.

(Only for researchers who have not coded the Focus category before: If the *fall* gesture co-occurs with a *shrug* (see 9.4.1.4), the *fall* is coded.)


### **Meeting the criteria**

### **Differentiate** *emotion/attitude–fall* **from**…

- *direction–neutral:* A *direction–neutral* gesture may indicate the direction downwards. In the *direction* gesture the hand leads the movement. The gaze is oriented in the intended direction. *Direction* gestures have a free flow and an endpoint accent (to emphasize the direction), while in *emotion/attitude–fall* the weight is heavy and there is never an endpoint accent. Furthermore, in *emotion/attitude–fall*, there is a specific facial expression.


### *9.4.1.3* emotion/attitude–clap/beat

#### **Short definition**

#### DYNAMIC FAST STRONG MOVEMENT OF ARMS

#### **Definition**

**Direction and weight:** These are arm movements that are displayed with strength and that end by contacting a counter-part, e.g. a piece of furniture, the other hand, another part of the body, or another person. The contact can result in a sound. It implies that strength is put behind the body weight.

### **Movement form:**

The emphasis is on the act of clapping, beating, and punching. The movement is accompanied by a postural-facial expression such as in clapping the hands with joy, slapping on the thigh with joy, or punching on the table with anger (the latter is akin to the leg movement of stamping on the ground in anger).

**Note**: underlying emotion or attitude


#### **Meeting the criteria**

### **Differentiate** *emotion/attitude–clap/beat* **from ...**


- *emblem/social convention:* There is a *social convention* of clapping both palms repeatedly onto each other in order to applaud somebody in a defined social context. The Structure is obligatorily *repetitive* and of a particularly long duration. Since the purpose is not primarily to express an emotion but to honor somebody, the facial-postural involvement is less pronounced than in *emotion/attitude–clap/beat*. In the latter case, especially in genuine emotional expressions, the Structure is rather *phasic*.

### *9.4.1.4* emotion/attitude–shrug

### **Short definition**

### SHRUG OF THE SHOULDERS

### **Definition**

**Direction and weight:** There is an up-down movement of the shoulders.

#### **Movement form:**

In the NEUROGES® archive different subtypes of *shrugs* were observed that slightly differ in form:


about her/his gestural or verbal statement. In contrast to the *emblem* shrug, it is displayed beyond the gesturer's awareness and is not primarily produced for the addressee.


### **Meeting the criteria**

#### **Differentiate** *emotion/attitude–shrug* **from**…


parent that its hands are clean. The showing of one hand may be accompanied by a pointing gesture of the other hand, e.g. the other hand points to the palm. The gaze is at the hand.


### *9.4.1.5 Special template value* other e-motion

As indicated above, the four Type values are **not** intended to represent the complete spectrum of *emotion/attitude* expressions as registered with the Function value. Rather, they are the most frequent types of *emotion/attitude* expressions observed in the NEUROGES® archive. Therefore, in the NEUROGES®-ELAN template the value *other e-motion* is provided in order to offer researchers the option to note other types of *emotion/attitude* expressions, e.g. fist clenching.

### **9.4.2 Types of the Function value** *emphasis*

The Type values of the Function value *emphasis* refer among others to the direction towards the end point as the dynamic accent.

### *9.4.2.1* emphasis–baton

### **Short definition**

### UP-DOWN MOVEMENTS WITH DOWNWARD ACCENT

### **Definition**

**Direction:** up-down movements with a downward accent

**Movement form:** *Batons* are up-down movements with a downward accent. They are most often conducted with the lower arm and performed in synchrony with head and mouth movements. They are often displayed *repetitive*ly and then create a meter or a rhythm. In *phasic batons*, the single accent emphasizes a certain aspect of the verbal message.

#### **Meeting the criteria**


#### **Differentiate** *emphasis–batons* **from**…


### *9.4.2.2* emphasis–back-toss

### **Short definition**

### SMALL UP-DOWN MOVEMENTS WITH UPWARD ACCENT

### **Definition**

**Direction:** up-down movements with an upward accent

**Movement form:** *Back-tosses* are short up-down movements with an upward accent. The movement is typically restricted to the wrist, occasionally even only to the knuckles (metacarpophalangeal joints). The upward movement of the hand is often performed while the wrist remains resting on a support. The back or the radial side of the hand leads the upward movement. *Back-tosses* are often displayed as a series, i.e., *repetitive* rather than *phasic*, and their performance is synchronized with head and mouth movements while speaking.


### **Meeting the criteria**

#### **Differentiate** *back-tosses* **from**…

- Only for researchers who have not coded the Structure category before (in the complete algorithmic analysis, *irregular within body* movements have been excluded from the evaluation by the time of Module III coding): *irregular within body* movements: While *back-tosses* have a stereotypical movement form with an upward accent *in space* and are synchronized with the mouth and head movements, this is not the case for *irregular within body* movements, which have no structured form and no accents.

### *9.4.2.3* emphasis–palm-out

### **Short definition**

### SMALL SUPINATION-PRONATION MOVEMENTS WITH OUTWARD ACCENT

### **Definition**

**Direction:** in-out movements with an outward accent

**Movement form:** *Emphasis–palm-out* gestures are supination–pronation movements, i.e., an outward–inward rotation of the hand or lower arm. The accent is outwards, i.e., on the supination phase. The supination has to reach an extent between 10° and 90° (in the neutral position of 0°, the flat hand is in line with the sagittal plane). Thereby, the palm becomes visible. *Emphasis–palm-out* gestures are often displayed *repetitive*ly and they are synchronized with the head and mouth movements.

There are two subtypes of *emphasis–palm-out* gestures:


#### **Meeting the criteria**

Structure: *repetitive > phasic*

Focus: *in space*

Gesture/action space: ipsilateral neutral gesture space, middle kinesphere, no distinct creative use of space

Path: two-dimensional, arch-like path, **supination-pronation**

Hand orientation: hand/lower arm in supination position, palm upward


#### **Differentiate** *emphasis–palm-out* **from**…


improve concentration). In the case of a right-hand movement, the rotation is clockwise; in the case of a left-hand movement, counter-clockwise.


### *9.4.2.4* emphasis–superimposed

### **Short definition**

UP-DOWN OR BACK-FORTH MOVEMENTS WITH DOWNWARD OR FORWARD ACCENT THAT DIRECTLY FOLLOW THE STATIC COMPLEX PHASE OF ANOTHER GESTURE TYPE

### **Definition**

**Direction:** up-down or forth-back movements with a downward or forward accent, respectively, which is determined by the hand orientation in the primary gesture

**Movement form:** *Emphasis–superimposed* gestures are up-down or forth-back movements, often *repetitive*, with a downward or forward accent, that directly follow an *emphasis–palm-out, emphasis–baton, egocentric deictic, egocentric direction, form presentation, spatial relation presentation*, or *emblem* gesture. In *repetitive emotion/attitude* movements, *superimposed emphasis* can follow the learned display of an emotion but not a genuine emotional expression, since the latter is the direct motor correlate of an emotional experience. Setting *emphasis* is a cognitive process that is not compatible with an immediate emotional expression.

Once the primary gesture has come to the static complex phase, the *superimposed* gesture follows while preserving the hand shape, the hand orientation, and the position in gesture/action space of the primary gesture. Notably, there is no transport phase like the one for the primary gesture but the hand performs the *superimposed emphasis* at the place where the hand happens to be for the primary gesture. Examples are a pointing gesture to which up-down movements are added (*deictic–external target + emphasis–superimposed*), or a gesture marking in an imaginary landscape a position that is underlined *(spatial relation–position + emphasis–superimposed),* or an *emphasis–palm-out* gesture that is re-enforced by short up-down movements with a downward accent *(emphasis – palm-out + emphasis–superimposed).*


#### **Meeting the criteria**

#### **Differentiate** *emphasis–superimposed* **from**…

# Only for researchers who have not coded the Function category before: While the differentiation between a primary gesture followed by an *emphasis* gesture versus a primary *repetitive pantomime* or *presentation* gesture should already have been made in the Function category coding, for researchers who only apply the Type category, the criteria for this distinction are repeated here: *Emphasis–superimposed* gestures can only follow a primary gesture with a static complex phase. *Emphasis–superimposed* gestures are **not** to be confused with a gesture in which the repetition is an intrinsic component of the meaning. As an example, the repetitions in a *pantomime–transitive* gesture presenting tooth brushing or in a *form–shape* gesture presenting a star by tracing several sharp points are **not** superimposed *emphasis*, as the repetition does **not** serve to reinforce a primary gesture but per se constitutes the meaning. One up-down movement in front of the mouth does not convey the meaning of tooth brushing, nor does one sharp point make up a star. In these *repetitive pantomimes* and *presentation* gestures, there is often a locomotive component in the repetition, e.g. the hand moves sideward during the up-down movements. In contrast, in *emphasis–superimposed* there is no displacement of the hand during the up-down or forth-back movement. Furthermore, a movement of the other hand in-between the sub-complex phases of a *repetitive* unit basically excludes *emphasis–superimposed.*


hand during the forth-back movement, there is no distinct hand shape or hand orientation, and the gaze is not at the hand.


### **9.4.3 Types of the Function value** *egocentric deictic*

The Type values of the Function value *egocentric deictic* specify the target.

### *9.4.3.1* deictic–external target

### **Short definition**

INDICATING A TARGET IN THE EXTERNAL SPACE BY USING AN EGOCENTRIC FRAME OF REFERENCE

#### **Definition**

**Target:** The gesturer points to a target in the body-external space (with the exception of pointing to the addressee, which is coded as *You–deictic*). The *deictic– external target* is based on an obligatorily egocentric frame of reference, in which the gesturer relates from an egocentric point of view to another actual location in the external space. The gesturer can also project him/herself into an imaginary space and point there, e.g. "In my old apartment, if I entered it, the bathroom was on my right." Here, the egocentric frame is kept in mental spatial imagery.

**Movement form:** The hand is extended and the finger tips are oriented towards the target. Alternatively, only the index is extended, or, if the target is behind the gesturer, s/he might use the thumb. The target is located on the estimated line that is the prolongation of the longitudinal hand axis (wrist–finger tips). The hand axis is centrifugal from the body midline. If the target is not in front of the gesturer, it is typical for the egocentric perspective that the gesturer rotates the trunk to be vis-à-vis with the target. As an example, if the target is on the gesturer's right, (s)he turns the trunk to the right.

### **Meeting the criteria**


### **Differentiate** *deictic–external target* **from**…

# *direction–neutral:* The *direction–neutral* indicates a potentially infinite direction ("upwards, northwards") or the relative position of an external target without the distance information, while *deictic–external target* indicates the relative position and distance. In fact, for targets that are far away and therefore invisible, *direction–neutral* gestures are preferred.


### *9.4.3.2* deictic–You

#### **Short definition**

### REFERRING TO THE ADDRESSEE AS A PERSON OR TO A PART OF HER/HIS BODY

### **Definition**

**Target:** The gesturer points to the addressee to designate him/her as a person (2nd person: You) or to designate a part of the addressee's body.

**Movement form:** If the gesturer points to the addressee to designate him/her as a person, the finger tips are oriented towards the addressee's sternum or, in a more offensive context, towards the addressee's face. Furthermore, pointing with the index as compared to pointing with all fingers as well as a very direct spoke-like path seem to be more offensive. Pointing with the palm up appears to be more polite, possibly in contexts of offering something. If the gesturer points to a part of the addressee's body, the finger tips are oriented towards that part. The *deictic–You* gesture may include touching the other.


### **Meeting the criteria**

### *9.4.3.3* deictic–self

### **Short definition**

REFERRING TO ONESELF AS A PERSON

### **Definition**

**Target:** The gesturer refers to her-/himself as a person (1st person: I).

**Movement form:** The gesturer points to the sternum without looking at it. In some Asian cultures, there is also pointing to the nose. The pointing may include touching the body. In contrast to other Types of *egocentric deictics*, the gesturer does **not** look at the target (s)he is pointing at, i.e., at the sternum.



### *9.4.3.4* deictic–body

### **Short definition**

### REFERRING TO PARTS OF THE OWN BODY OR TO OBJECTS ATTACHED TO THE BODY

### **Definition**

**Target:** The gesturer designates a part of the own body by pointing to it or by showing it. Or, the gesturer points to or shows an object that is attached to the body, e.g. the finger ring that (s)he is wearing.

**Movement form:** The hand is extended and the finger tips are oriented toward the designated part of the body (or attached object). The gesturer looks at the designated part of the body. The pointing may include touching the part of the body.

Alternatively, the part of the body is moved into the gesture space in order to designate it, e.g. moving the hand into the gesture space to show the hand or to show a finger ring. The showing of a part of the body is often complemented by a pointing gesture of the other hand, e.g. left hand is presented to the partner and the right hand points at left hand (in that case, one Type unit is coded: *asymmetrical body-deictic*).

**Note**: laterality and part of the body or attached object that the gesturer points at

### **Meeting the criteria**


### **Differentiate** *deictic–body* **from**…

# *deictic–self*: In the *egocentric deictic–self* gesture, the gesturer points to the sternum but does not look at the sternum. In contrast, in a *deictic–body* gesture designating her/his sternum, the gesturer looks at the sternum.

### **9.4.4 Types of the Function value** *egocentric direction*

For *egocentric direction* gestures, the Type values specify the absence or presence of an agent who executes the direction.

### *9.4.4.1* direction–neutral

### **Short definition**

INDICATING A DIRECTION WITHOUT SPECIFYING AN AGENT

### **Definition**

**Agent:** The *direction – neutral* gesture contains no information about an agent who would execute the direction or route. Thus, the direction or route is indicated in an agent-neutral manner.

As in all *egocentric direction* gestures, the gesturer is the point of spatial reference for indicating the direction ("[I am here and] it is in that direction") or the route ("[I am here and] it is from there to over there"). (The direction from somewhere towards the gesturer is most often to be found in the Type value *direction – imperative*.)

**Movement form:** *Direction – neutral* gestures are one- or two-dimensional gestures with a clear direction. They often involve the lower arm or even the whole arm.


**Note**: the indicated direction, e.g. backwards


#### **Meeting the criteria**


### **Differentiate** *direction–neutral* **from**…


The perspectives are typically revealed by the hand orientation: In *direction–neutral* gestures that show a route, the longitudinal hand axis is oriented centrifugally from the gesturer's body midline. In *spatial relation–route,* the longitudinal hand axis is in line with the vertical space axis. If the imaginary map at which the gesturer looks is projected to the frontal level (especially if the gesturer refers to animations presented on a screen, as in a typical experimental setting), it is more difficult to identify the mento-heliocentric perspective than with a projection to the horizontal level, as the hand axis may be the same as for *egocentric directions*. In this case, the gesturer's gaze helps to identify the Type: In *spatial relation–route* gestures, the gesturer looks at the imaginary map, which is not further away than the length of the arms, while in *direction–neutral* gestures that show a route, the gaze is typically directed far away.

### *9.4.4.2* direction–imperative

### **Short definition**

INDICATING TO THE ADDRESSEE TO MOVE (SOMETHING) IN A SPECIFIC DIRECTION

### **Definition**

**Agent:** The agent is the addressee. S/he is asked to execute the direction. As these gestures often have a suggestive or an imperative tone (e.g. "Move to the left!"), this *egocentric direction* Type value was labelled "imperative". *Direction–imperative* gestures may ask the addressee to move in the respective direction him-/herself, a part of his/her body, her/his mental state (e.g. to calm down, to cheer up), or something (e.g. to move an object into a specific direction). Coded here are also those gestures that indicate to the addressee not to move in a certain direction, i.e., to stop.

The direction, to which the addressee shall move (something), is indicated from the gesturer's egocentric perspective.

**Movement form:** *Direction–imperative* gestures are one- or two-dimensional gestures with a clear direction. They often involve the lower arm or even the whole arm. The gesture is conducted in the gesturer's far kinesphere, with the arm extended in the direction of the addressee. The palm (or more rarely the back) of the hand is oriented in the designated direction as if shifting the entity (the addressee, a part of his/her body, her/his mental state, or an object that s/he shall move) with the palm in that direction. The gesturer may display the gesture repeatedly to emphasize his/her demand. Thus, the Structure may be *repetitive.*

**Note**: the direction or the message of the gesture: [Come] here! [Sit] down! [Calm] down! [Put] it down! [Get] up! [Cheer] up! [Turn] around! [Bring] it out! Stop! No–No! [Go] away! [Move it] away!


#### **Meeting the criteria**

#### **Differentiate** *direction–imperative* **from**…


### *9.4.4.3* direction–self-related

### **Short definition**

SHOWING THE DIRECTION OF ONE'S OWN (BODILY OR MENTAL) MOVEMENT

### **Definition**

**Agent:** The agent is the gesturer her-/himself. S/he shows the direction that s/he takes or intends to take. The direction can refer to the gesturer's body or mental state.

As an example, a ballet dancer rehearses the directions in her/his choreography by moving the hands (up, to the right, forward, etc.). Furthermore, the *direction–self-related* gesture may present the directions of bodily processes, e.g. swallowing (down). Or, the gesturer directs the mental state or cognitive processes by performing a self-suggestive gesture, e.g. to calm down, to forget an idea, to bring out an idea. Thus, the *direction–self-related* gesture presents the (intended) direction of the own mental or cognitive processes.

**Movement form:** *Direction–self-related* gestures are one- or two-dimensional gestures in which the flat hand is moved in a specific direction. The hand is oriented orthogonal to the path during the complex phase, i.e., the palm of the hand is oriented in the designated direction. The hand acts in the near to middle kinesphere of the gesture space, i.e., close to the trunk. For mental or cognitive processes the hand may move close to the head. As *direction–self-related* gestures are often autosuggestive, i.e., requiring repeated self-related interventions, the Structure may be *repetitive.*

**Note**: the direction or the message of the *direction–self-related* gesture, e.g. [I calm] down; [I put] it down (I forget it); [I cheer] up; [I bring] it out!


#### **Meeting the criteria**

### **Differentiate** *direction–self-related* **from** …


### **9.4.5 Types of the Function value** *pantomime*

The Type values of the Function value *pantomime* register transitivity versus intransitivity. Transitivity is "characterized by having or containing a direct object < a transitive verb >"; "being or relating to a relation with the property that if the relation holds between a first element and a second and between a second and a third, it holds between the first and third elements < equality is a transitive relation > " (Webster's dictionary). "In syntax, a transitive verb is a verb that requires both a subject and one or more objects. The term is used to contrast intransitive verbs, which do not have objects." (Wikipedia). Accordingly, in NEUROGES® those *pantomimes* are classified as transitive in which the gesturer pretends to either directly act **on** something or **with** something **on** something. The rare cases, in which the gesturer pretends to act **with** something but not **on** something, e.g. waving with a stick, are also coded as transitive.

### *9.4.5.1* pantomime–intransitive

### **Short definition**

ACTING AS IF WITHOUT AN IMAGINARY OBJECT OR COUNTERPART

### **Definition**

**Transitivity:** The *pantomime–intransitive* gesture includes no imaginary or real object or counterpart that the gesturer pretends to act with or on, respectively. In NEUROGES®, air and water are not considered as counterparts. Thus, *pantomimes* such as pretending to swim or to fly (like a bird) are coded as *intransitive*. As an example, the gesturer moves the arms as if doing a physical exercise such as marching, swimming, sit-ups, etc. The pantomimed action is executed as similar to the actual action as possible.

#### **Movement form:** see Function value

**Note**: Note the type of action that is pantomimed, e.g. flying.

#### **Meeting the criteria**



### **Differentiate** *pantomime–intransitive* **from**…

# *motion quality–manner*: In *motion quality–manner* gestures the gesturer's hand may represent an object/agent that/who moves, e.g. the index and middle fingers represent two legs (pars pro toto for a human being) walking on a ground. The hands are used as if they were marionettes, i.e., they adopt a function other than being the gesturer's hand. The gesturer's body is not involved in the presentation. In most cases, there is an indirect reference to the ground on which the motion takes place. This ground is typically projected to the horizontal plane.

In contrast, in *pantomime–intransitive* gestures, given the egocentric perspective, the hand retains its function as the gesturer's hand, e.g. in the pantomime of swimming, the arms are extended to the far kinesphere and describe a half circle in the horizontal plane. The whole body, or at least the trunk and head, are involved in the demonstration of the movement.

#### *9.4.5.2* pantomime–transitive-active

#### **Short definition**

ACTING AS IF WITH AN IMAGINARY (OR REAL) OBJECT OR COUNTERPART

#### **Definition**

**Transitivity:** A *pantomime–transitive* gesture includes an imaginary (or real) object or counterpart that the gesturer pretends to act with/on. The pantomimed action is executed as similar to the actual action as possible. Often, in these *pantomimes*, imaginary tools are used to act on an imaginary counterpart, e.g. the gesturer pretends to act **with** imaginary drumsticks **on** imaginary drums. However, the gesturer can also pretend to act directly, i.e., without an imaginary object, on something, e.g. climbing up a stony mountain.

In special experimental settings, an actual tool may be held in the hand or an actual counterpart may be present but the action is only pantomimed, e.g. holding an actual hammer in the hand and pretending to hammer (but not really doing it) on an actual nail. In NEUROGES®, this condition is also coded as *pantomime–transitive*.

**Movement form:** If the gesturer pretends to act **with** something **on** something, the hand adopts a distinct shape that reveals the form of the object/tool. If the gesturer pretends to act directly with bare hands on something, the hand adopts a distinct shape that reveals the form of the counterpart. In order to represent the imaginary tool or counterpart, two Techniques of Presentation (see Supplementary category) can be used: *enclosure* and *hand-as-object.*

For general information on the movement form of *pantomime* gestures see Function value *pantomime*.

**Note**: Note the type of action that is demonstrated, e.g. combing the hair.


#### **Meeting the criteria**

#### **Differentiate** *pantomime–transitive-active* **from**…

# *direction–self-related*: see there

# *form–shape*: *Form–shape* gestures depict only the form of the object of reference, e.g. what form the hammer has, but **not** what is done with the object. In contrast, *pantomime–transitive-active* gestures provide information about the form of the object/counterpart that is acted with/on, but the essential message is how the object is used, e.g. holding the imaginary hammer and hammering with it.


### *9.4.5.3* pantomime–transitive-passive

### **Short definition**

#### ACTING AS IF AN IMAGINARY AGENT/OBJECT ACTS ON ONESELF

### **Definition**

**Transitivity:** The *pantomime–transitive-passive* gesture includes an imaginary agent/object that acts on the gesturer. The gesturer pretends to react to the imaginary agent/object that affects him/her, e.g. pretending to react to a pancake flying into the gesturer's face or to being moved away by a gust of wind. As in all *pantomime* Type values, the perspective is egocentric, i.e., the relation between the gesturer and the separate agent/object is presented from an egocentric perspective.

**Movement form:** Often, the body adopts a distinct shape that reveals the form or quality of the agent/object that is acting on the gesturer. If the hand is used to represent the agent/object, it loses its natural orientation as a part of gesturer's body. Typically the Technique of Presentation *hand-as-object* is chosen.

For general information on the movement form of *pantomime* gestures see Function value *pantomime*.

**Note**: Note the type of action that is pantomimed, e.g. being hit by something.


### **Meeting the criteria**

#### **Differentiate** *pantomime–transitive-passive* **from**…

# *form–shape*: *Form–shape* gestures depict only the form of the object of reference, e.g. what form the hammer has, but **not** how one is affected by the object. In contrast, *pantomime – transitive-passive* gestures may provide information about the form of the agent/object who/that acts on the gesturer, but the essential message is how the gesturer reacts to the agent/object.

*Form–shape* gestures are displayed in the central gesture space. There is no variation in the effort qualities, and a variety of Techniques of Presentation can be used to create the image of the form. The body involvement is limited to the hands and arms. In contrast, *pantomime–transitive-passive* gestures are characterized by a distinct use of gesture space reflecting the action space of the reaction of reference, by a variation in the Effort factors, and by a large body involvement.

### **9.4.6 Types of the Function value** *form presentation*

For *form presentation* gestures, the Type category classifies the geometric aspect of the form that is presented: shape and size.

### *9.4.6.1* form–shape

#### **Short definition**

CREATING A SHAPE

### **Definition**

**Geometric aspect:** Shape is defined as "any spatial attributes as defined by outline" (Webster's Dictionary). "The shape of an object located in some space is a geometrical description of the part of that space occupied by the object, as determined by its external boundary – abstracting from location and orientation in space, size, and other properties such as color, content, and material composition." (Wikipedia).

In NEUROGES®, gestural shape information is coded with the value *form–shape.* Since size information can be abstracted from shape information, pure gestural size information is coded with the value *form–size*. Information about location and orientation in space is coded with the *spatial relation presentation* values. Some information about content and material (how something feels, how heavy it is) can be expressed with the *motion quality presentation* values.

**Movement form:** see Function value *form presentation*; independently of the Technique of Presentation *hand-as-object, enclosure, tracing,* and *palpating,* the resulting still or motion image of a shape is a 2- or 3-dimensional form.

**Hierarchy**: A *form–shape* gesture may include a *form–size* gesture.

*Form–shape > form–size:* As defined above for the geometric aspect, the shape is, first of all, abstracted from the actual size, i.e., correct shape information does not automatically include correct size information. As an example, in gesture the sun may be depicted by a round shape or a pitched roof by a triangle shape, but the size information would not be correct. However, the *form–shape presentation* **can** include information about the correct size of the object of reference, especially if the size of the object of reference does not exceed that of the gesturer's kinesphere, i.e., the reach of his/her arms. In other words, it is likely that for the presentation of the shape of an apple, the shape and the size may match those of a real apple, whereas it is unlikely that a *shape presentation* referring to a roof matches the size of an actual roof. The combination of *form–shape + form–size* is coded as *form–shape.*

**Specifications:** *Form presentation* gestures can be further assessed with the Supplementary categories Technique of Presentation and Referent.

The referent of a *form–shape* gesture may be


**Note**: Note or draw the shape.


#### **Meeting the criteria**


#### **Differentiate** *form–shape* **from**…

# *form–size*: A *size* gesture only provides information about the size but not about the shape. In contrast, a *shape* gesture may or may not include information about the size. Since for *form – size* gestures only the Techniques of Presentation (ToP) *tracing* and *enclosure* are suitable, difficulties in differentiating between *form – shape* and *form – size* may only arise when these two techniques are used.

If the ToP *tracing* is chosen, in a *form – size* presentation the trace indicating a length is always straight, whereas in a *form – shape* presentation, the trace indicating the contour of a *shape* is often closed. Thus, a one-dimensional path is indicative of a *form–size* gesture, while a two- or three-dimensional path characterizes a *form–shape* gesture.
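The dimensionality rule for the ToP *tracing* can be summarized as a small decision function. This is a minimal sketch with hypothetical names, assuming the coder has already judged the dimensionality of the traced path; it is not an official part of the coding system.

```python
def classify_tracing(path_dimensions: int) -> str:
    """Distinguish form-size from form-shape for the ToP tracing
    (illustrative): a one-dimensional (straight) trace indicates
    form-size, while a two- or three-dimensional trace indicates
    form-shape."""
    if path_dimensions == 1:
        return "form-size"
    if path_dimensions in (2, 3):
        return "form-shape"
    raise ValueError("path dimensionality must be 1, 2, or 3")
```

A straight trace along one axis (`classify_tracing(1)`) would thus be coded as *form–size*, a closed contour in a plane (`classify_tracing(2)`) as *form–shape*.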

If the ToP *enclosure* is chosen, the *form–shape* presentation is characterized by a two-dimensional hand shape, e.g. the two- or three-dimensional shape of a form is shown by the empty space between two rounded hands, the palms of which face each other and may or may not touch each other. In contrast, in the *form–size* presentation with *enclosure*, there may be some indirect rough reference to the form of an object by the hand shape. If the *size* of broad objects shall be presented, the flat hands may be held straight and parallel to each other. If the size of thin objects shall be presented, only the indices show the size. However, the hands are never rounded nor adopt another complex shape.

### *9.4.6.2* form–size

#### **Short definition**

#### CREATING A LENGTH

### **Definition**

**Geometric aspect:** Size is defined as "physical magnitude of something (how big it is)" (Webster's Dictionary). Size does not include information about the shape.

**Movement form:** In gesture, a size is indicated by the depiction of a length by a *phasic* one-dimensional gesture. This can be either the height, the width, or the depth of one object. Thus, if the presentation of the size of an area (height and width) is intended, two sequential *phasic* one-dimensional gestures are needed. If the representation of the size of a volume (height, width, depth) is intended, three sequential *phasic* one-dimensional gestures are needed, i.e., representing the height, the width, and the depth of the volume. Depending on whether the height, the width, or the depth shall be presented, the depiction is strictly in the vertical, the horizontal, or the sagittal space axis, i.e., it is not depicted on a cross axis.

In order to create a length, the two Techniques of Presentation (ToP) *enclosure* and *tracing* are suitable.


and strictly one-dimensional in either the sagittal, or the horizontal, or the vertical dimension.
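The strict-axis requirement can be sketched as a check on a hand-displacement vector. The coordinate convention (x horizontal, y vertical, z sagittal), the function name, and the tolerance value are illustrative assumptions, not prescriptions of the manual.

```python
from typing import Optional

def size_gesture_axis(dx: float, dy: float, dz: float,
                      tol: float = 0.1) -> Optional[str]:
    """Return the single space axis of a strictly one-dimensional
    displacement (illustrative): exactly one component of the hand
    displacement may exceed the tolerance; otherwise the depiction
    is not a valid form-size length and None is returned."""
    components = {"horizontal": abs(dx), "vertical": abs(dy),
                  "sagittal": abs(dz)}
    dominant = [axis for axis, value in components.items() if value > tol]
    return dominant[0] if len(dominant) == 1 else None
```

A purely vertical displacement would be accepted as a height depiction, whereas a diagonal movement (on a cross axis) would be rejected.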

**Specifications:** *Form presentation* gestures can be further assessed with the Supplementary categories Technique of Presentation (see 8.4.6) and Referent.

The referent of a *form–size* gesture may be


**Note**: Note the size.

#### **Meeting the criteria**



#### **Differentiate** *form–size* **from**…


### **9.4.7 Types of the Function value** *Spatial relation presentation*

For *spatial relation presentation* gestures, the Type category classifies the geometric configuration that is presented: line/curve (*route*) versus point (*position*).

#### *9.4.7.1* spatial relation–route

#### **Short definition**

#### CREATING A SPATIAL ROUTE BY PRODUCING A LINE

#### **Definition**

**Geometric configuration:** A route is defined as "a way or course taken in getting from a starting point to a destination" (Oxford Dictionaries). A route can be depicted as a line or a curve. A line is "a concept which includes, but is not limited to, an infinitely-extended one-dimensional figure with no curvature" and a curve is "an object similar to a line but that need not be straight" (Wikipedia).

**Movement form:** In order to create a line or a curve, there is a displacement of the hand in the gesture space. Thus, the most prominent movement feature of *spatial relation–route* gestures is the distinct use of gesture space.

The hand creates a line or a curve with a start point and an endpoint. If a *spatial relation–route* gesture focuses on the change of position, i.e., the end position as being spatially different from the starting position (from where to where), the trace between the start position and the end position is straight. If it focuses on the path between the start and end position (where along), the trace is often curved and spatially complex. As *spatial relation–route* gestures provide no information about motion, there is no variation in the effort qualities. Rather, in order to create a precise outline of the route, the movement flow is bound, there is a direct use of space, and often the index is used to trace the route. The gesturer can trace the route with the index in an imaginary space, or shift the hand from one position to another in an imaginary space. The shifting can be done by placing the flat hand or the fingertips on the position and then moving the hand.

A direction gesture that is displayed with a mento-heliocentric perspective is also coded as *spatial relation–route*. The gesturer creates an imaginary space with a mento-heliocentric perspective and in this space s/he indicates a direction. As an example, the gesturer creates a map of Cologne, which s/he projects on the horizontal plane, and on this imaginary map s/he shows the direction northwards. The movement form of the (mento-heliocentric) direction gesture is similar to that of an *egocentric direction*, but the body involvement and hand orientation differ. The movement is conducted with the hand only and the finger tips are oriented to the imaginary map, i.e., if the map is projected to the horizontal plane, the finger tips are oriented downwards.

**Hierarchy**: A *spatial relation–route* gesture may include a *form – shape*, a *form – size*, or/and a *spatial relation – position* presentation, and it may be embedded in a *motion quality – manner* or a *motion quality – dynamics presentation*.

*spatial relation–route > form–shape*: *Spatial relation–route* gestures, especially those with the Referent *material*, may include information about the form of a route, e.g. the right hand adopts a round shape with fingertips pointing down and then moves up from the right lower to the left upper gesture space without variation in the effort factors (e.g. in order to represent the course of the tunnel in the mountains). Or, the gesture includes information about an object that is displaced in space, e.g. the flat hand is displaced from the left half to the right half of the gesture space (e.g. in order to represent a board that is shifted from the left to the right). The combination of *spatial relation–route* and *form–shape* is coded as *spatial relation–route.*

*spatial relation–route > form–size*: A *spatial relation–route* gesture may include information about the size of a route, e.g. the tips of index and thumb are held with a distance of 1 cm, oriented downwards like an inverted U, and then the hand is moved in curves (e.g. in order to represent the course of a narrow path in a landscape). The combination of *spatial relation–route* and *form–size* is coded as *spatial relation–route*.

**Specifications:** *Spatial relation–route* gestures can be further assessed with the Supplementary categories Technique of Presentation, Execution Hemi-Space, Target Location (see all 8.4.7), and Referent.

The Referent of the *spatial relation–route* gesture may be


#### **Meeting the criteria**


#### **Differentiate** *spatial relation–route* **from**…


### *9.4.7.2* spatial relation–position

#### **Short definition**

### CREATING A SPATIAL POSITION BY SETTING A POINT RELATIVE TO ANOTHER ONE

#### **Definition**

**Geometric configuration:** A position "… represents the position of a point … in space in relation to an arbitrary reference origin". A point is "an entity that has a location in space or on a plane, but has no extent" (Wikipedia). As a spatial position is defined by its relation to another one, at least two points in space need to be presented in gesture. Therefore, the *spatial relation–position* gesture presents two or more points relative to each other.

**Movement form:** The most prominent movement feature of *position* gestures is the distinct use of gesture space. In order to create a spatial position, the hand can mark a position on an imaginary map or in an imaginary space, which is created into the gesture space. The marking can be done by placing the flat hand or the fingertips on the position or by making an imaginary sign (dot or cross) at that position. Alternatively, the hand points to a specific position on the imaginary map (*mento-heliocentric deictic*).

**Hierarchy**: A *spatial relation–position* gesture may include a *form – shape* or a *form – size* presentation, and it may be embedded in a *motion quality – manner* or a *motion quality – dynamics presentation*.

*spatial relation–position > form–shape*: A *spatial relation–position* gesture, especially those with the Supplementary category Referent value *material*, may include information about the form of an object, e.g. in the right gesture space the right hand adopts the shape of a reversed V (e.g. to represent a house with a pointed roof located in the east of the city) and in the left gesture space the left flat hand is held parallel to the floor (e.g. to represent a supermarket with a flat roof located in the west of the city). The combination of *spatial relation–position* and *form–shape* is always coded as *spatial relation–position.*

*spatial relation–position > form–size*: A *spatial relation–position* gesture may include information about the size of an object. As an example, in the right gesture space the flat right hand is held with palm down 10 cm above the table and in the left gesture space further away from the body the flat left hand is held palm down 30 cm above the table (e.g. to present the different heights of two buildings in different parts of the city). The combination of *spatial relation–position* and *form–size* is always coded as *spatial relation–position*.

**Specifications:** *Spatial relation–position* gestures can be further assessed with the Supplementary categories Technique of Presentation, Execution Hemi-Space, Target Location (see all 8.4.7), and Referent.

The Referent of a *spatial relation–position* gesture may be


hold, there is a presentation of a thought as/on the palm, while in the pronation complex phase hold there is a presentation of the other side of a thought by showing the back of the hand. The meaning emerges: "So–so". Thus, two positions and aspects, respectively, are opposed to each other.



#### **Differentiate** *spatial relation–position* **from**…

# *deictic–external target*: see there

# *form–shape*: see there

# *form–size:* see there

# *spatial relation–route:* see there

### **9.4.8 Types of the Function value** *Motion Quality Presentation*

For *motion quality presentation* gestures, the Type category classifies the quality that is presented: *manner* or *dynamics*.

### *9.4.8.1* motion quality–manner

#### **Short definition**

PRESENTING A SPECIFIC TYPE OF MOVEMENT

#### **Definition**

**Quality:** The hand presents a specific type of movement, e.g. a pulsating or rotating movement. The manner of movement can typically be defined by a verb, e.g. to roll, to jump, etc.

**Movement form:** The Structure of a *motion quality–manner presentation* is obligatorily *repetitive*. The movement type is presented by a specific within-hand/wrist trajectory, e.g. the hand repetitively opens and closes or the hand repetitively rotates in the wrist. The repetitive trajectory is often accompanied by a parallel variation in the effort factors, e.g. accelerating in the down-phase of a circle and decelerating in the up-phase. The only exception in which there is no variation in the effort factors is the intentional representation of monotonous motion, e.g. representing gear transmission. In this case, special emphasis in gestural expression is put on the invariance of the effort factors.

In the case of the representation of a stationary movement, there is no displacement of the hand in the gesture space. As an example, the hand remains at the same place in the gesture space while opening and closing repetitively. In the case of the representation of locomotion, the repetitive within-hand/wrist trajectory is superimposed on a displacement trajectory, e.g. the hand is displaced in the gesture space while opening and closing repetitively.

**Hierarchy**: A *motion quality – manner presentation* may include *form – shape, form – size, spatial relation – route, spatial relation – position* and *motion quality– dynamics presentations,* or combinations of these presentations.

*motion quality–manner* > *form–shape*: A *motion quality–manner presentation* may include *form–shape* information about the object/subject that/who moves, e.g. the hand represents a jellyfish (Technique of Presentation ToP: *hand-as-object*) that contracts and expands, or the index and middle finger represent two legs (ToP: *hand-as-object*) that walk, or the two hands shape around an imaginary ball (ToP: *enclosure*) that bumps up and down.

*motion quality–manner* > *form–size*: A *motion quality–manner presentation* may include *form–size* information about the object/subject that/who moves, e.g. the hand depicts a size by the distance between thumb and index and moves up and down to represent an object of a certain size (ToP: *enclosure*) that is bouncing.

*motion quality–manner* > *spatial relation–route:* A *motion quality–manner presentation* often includes route information, e.g. a manner of movement is depicted on a path from the upper right to the lower left gesture space representing something rolling down a hill.

If the **mere** presentation of locomotion is intended, the direction that the presented object takes relative to itself (forward, sideward, backward) is depicted. The spatial relation of the object to other positions is not relevant, i.e., the gesturer does not create spatial surroundings when presenting the manner of movement. In this case, the movement is typically depicted on a one-dimensional path on a main space axis, i.e., on the sagittal space axis in front of the body midline (to represent the general concept of forward and backward), on the horizontal space axis (to represent the general concept of sideward), or on the vertical space axis (to represent the concept of up and down).

*motion quality–manner* > *spatial relation–position*: A *motion quality–manner presentation* may include position information, e.g. the *manner* gesture is displayed at a specific position in the gesture space in order to represent a stationary movement at a specific location in an imaginary space, e.g. something bouncing on a roof.

*motion quality–manner > motion quality–dynamics:* A *motion quality–manner presentation* often includes dynamics, e.g. rolling fast, jumping heavily.

*motion quality–manner > form – shape* or *size + spatial relation – route or position + motion quality–dynamics:* A *motion quality–manner presentation* may include *form–shape* or *size, spatial relation–route* or *position,* and *motion quality– dynamics* information.

While these combinations are all coded as *motion quality – manner*, the included *form* information can be coded with the Supplementary category Technique of Presentation, the included *spatial relation* information with the Supplementary categories Target Location and Execution Hemi-Space, and the included dynamics information with the Supplementary category Efforts.

**Referent:** The Referent of the *motion quality–manner* gesture may be

(i) *material*: a concrete physical movement, e.g. rolling;

(ii) *non-material*: an abstract movement, i.e., more precisely, the translation of a manner of movement that is originally displayed by a physical object/subject onto an abstract entity, e.g. a revolution starts to get rolling.

**Note**: Note the manner of movement that is represented, e.g. rolling.


### **Meeting the criteria**

### **Differentiate** *motion quality–manner* **from**…


# *motion quality–dynamics:* A *motion quality–manner presentation* typically has a *repetitive* Structure, whereas a *motion quality–dynamics presentation* has a *phasic* Structure. In *motion quality–dynamics*, the variation in effort qualities is essential; in *motion quality–manner*, the repetitive trajectory is.

### *9.4.8.2* motion quality–dynamics

#### **Short definition**

#### PRESENTING A SPECIFIC DYNAMICS OF MOVEMENT

#### **Definition**

**Quality:** Dynamics is defined by a variation in the Effort factors (see 4.3 and 11). The depiction of a dynamic quality is the primary function of *motion quality–dynamics* gestures. The dynamics of movement can typically be defined by an adverb, e.g. light, free, sustained, rigid, etc. The *motion quality–dynamics* gesture can also refer to tactile or sensory experiences, e.g. what something feels like. In this case, the gesturer transforms tactile and sensory impressions into movement dynamics, e.g. the tactile experience of a soft surface is presented by the movement dynamics when stroking along it. These impressions can typically be defined by an adjective.

**Movement form:** The variation of the Effort qualities is the core movement feature of the value *motion quality–dynamics*. The distinct gestural depiction of dynamics in a *motion quality–dynamics presentation* is characterized by the fact that the gestural dynamics differ from the gesturer's baseline movement dynamics. Each gesturer has his/her personal style, e.g. one gesturer may have a rather free movement flow in his/her gestures, while the gestural behavior of another gesturer is characterized by directness. As a rule, the distinct presentation of a specific movement dynamics, which is registered by the value *motion quality–dynamics*, differs from the gesturer's personal pattern of effort quality use.

To some kinds of dynamics a specific direction of the movement is intrinsic, e.g. the dynamics of exploding (effort qualities: strong, sudden, direct) is associated with an outward motion. Or, the depiction of heaviness is associated with a downward motion, while the depiction of lightness implies an upward motion.

**Hierarchy**: A *motion quality – dynamics presentation* may include *form – shape, form – size, spatial relation – route, spatial relation – position,* or combinations of these presentations.

*motion quality–dynamics* > *form–shape*: A *motion quality–dynamics presentation* may include *form–shape* information about the object/subject that/who moves or that is touched, e.g. the hand is formed into a fist and moves downward heavily to represent a ball (ToP: *hand-as-object*) falling down heavily, or the hand strokes tenderly along an imaginary flat object (ToP: *palpating*) to represent the softness of a fur.

*motion quality–dynamics* > *form–size*: A *motion quality–dynamics presentation* may include *form–size* information about the object/subject that/who moves, e.g. the hand shows a size by the distance between thumb and index and is moved up and out with strong, quick, and direct dynamics to represent an object of a certain size (ToP: *enclosure*) that is part of an explosion.

*motion quality–dynamics* > *spatial relation–route:* A *motion quality–dynamics* presentation often includes route information, e.g. a quick strong movement is depicted on a path from the upper right to the lower left gesture space representing something crashing down a hill. Given physical laws, a specific direction or route may imply a dynamics, i.e., the dynamics are induced by the route itself, e.g. getting faster on downward routes. On the other hand, the depiction of a specific dynamics may imply a direction in space, e.g. the depiction of heaviness implies a downward direction.

*motion quality–dynamics* > *spatial relation–position*: A *motion quality–dynamics presentation* may include position information, e.g. the *dynamics* gesture is displayed at a specific position in the gesture space to represent what something that is at a specific location in an imaginary space feels like.

*motion quality–dynamics > form – shape* or *size + spatial relation – route or position:* A *motion quality–dynamics presentation* may include *form–shape* or *size* and *spatial relation–route* or *position* information.

While these combinations are all coded as *motion quality – dynamics*, the included form information can be coded with the Supplementary category Technique of Presentation and the included spatial relation information with the Supplementary categories Target Location and Execution Hemi-Space.

**Referent:** The Referent of the *motion quality–dynamics* gesture may be



#### **Meeting the criteria**

#### **Differentiate** *motion quality–dynamics* **from**…


### **9.4.9 Specific emblems and social conventions**

Since there is a large number of *emblems* and *social conventions*, which furthermore differ between cultures and sub-cultures, it is not possible to list them in the NEUROGES®-template. Given the (sub)cultural differences in the use of *emblems*, it is recommended that each research group set up its own list of *emblems* and *social conventions* based on the criteria defined for the Function value *emblem / social convention* (8.4.11). Technically, when coding with the NEUROGES®-template, the name of the specific *emblem* or *social convention* from this list is noted in the tier Notes.
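Such a group-specific list can be kept as a simple lookup that maps each agreed emblem name to its movement form and meaning. The following pure-Python sketch is an illustrative assumption, not part of NEUROGES®; the two example entries are taken from Tab. 12 below:

```python
# Illustrative, group-specific emblem registry (sketch; each research group
# compiles its own list based on the criteria in 8.4.11).
EMBLEM_REGISTRY = {
    "Time-out sign": {
        "movement_form": "Forming a T with both flat hands",
        "meaning": "To ask for a break",
    },
    "Shrug": {
        "movement_form": "Isolated shrugging of the shoulders",
        "meaning": "I do not know",
    },
}

def validate_notes_entry(name: str) -> bool:
    """True if the emblem name noted in the tier Notes is on the group's list."""
    return name in EMBLEM_REGISTRY
```

A coder could run `validate_notes_entry` over the Notes tier to catch emblem names that were never agreed on by the group.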


**Tab. 12:** Specific *emblems* used in a population in the Western part of Germany


| Proposed Name | Movement form | Meaning |
|---|---|---|
| Time-out sign | Forming a T with both flat hands | To ask for a break |
| Malice | Brushing the right index-finger along the extended left index finger | To be malicious; "Shame on you!"; "So there!" |
| Stop sign | Extending the palm of the hand in front of the body towards the conversational partner | "Stop!" |
| The fingers hook | Hooking the index fingers of both hands into each other and pulling in opposite directions | Indicating a strong bond / relationship |
| Showing a fool's nose | Placing the thumb of one hand at the nose and wagging the fingers (also with both hands) | To tease someone; showing someone a fool's nose |
| Raising the index | Raising the index above the head | Indicating that one wants to say something |
| Showing fool's ears / elephant ears | Placing the thumbs at the temples / the ears and wagging the fingers | To tease someone; showing that someone is a fool |
| Pulling the earlobe | Pulling down the earlobe with the thumb and the index finger | Not believing what a person says |
| Strain one's ears | Cupping one ear in one hand towards the addressee | To strain one's ears; "Tell me!" "Speak louder!" "What did you say?" |
| Expensive! | Rubbing thumb, index and middle finger | Indicating that something is expensive |
| Craziness | Waving the hand with spread fingers and the palm in front of the face | Indicating that a person is crazy |
| Snobbishness | Lifting the nose with the plantar side of the index finger | Indicating that a person is a snob |
| Gossip | Making the shape and motion of a talking mouth | Indicating that a person is a blabbermouth / gossiping |
| Sneaking | Making a small circular motion with the wrist next to the hip | Sign for sneaking / stealing |

**Tab. 12:** Continued


| Proposed Name | Movement form | Meaning |
|---|---|---|
| Showing the watch | Tapping on (or only looking at) the plantar side of the wrist, where a wristwatch is usually worn | Indicating that someone should hurry up; "You are late!" "Keep track of the time!" |
| Shrug | Isolated shrugging of the shoulders, no accompanying facial or postural movement | "I do not know" |
| Counting with one's fingers | In Germany by stretching out one after the other the thumb, index, middle, ring, and little finger; in Anglophone countries the index is extended first, then the middle, ring, and little finger, and finally the thumb (the emblem is typically performed unimanually; in contrast, counting on the fingers of the other hand is used more ideographically) | Signs for the numbers 1–10 |
| Loser-sign | Forming an L with the right hand and placing it in front of the forehead | Showing someone that (s)he is a loser |
| To press one's thumbs | Forming a fist (with the thumb inside or outside) in front of the chest and pressing the fingers together | To wish someone good luck |
| The silent fox | Pressing thumb, middle and ring finger on each other while extending the index and little finger upwards | Keep your mouth shut and your ears perked |
| To give someone the finger | Raising the middle finger out of the fist with the back of the hand facing the conversational partner (also possible with the little finger) | "Fuck you!" |
| Silence sign | Placing the index finger vertically on the (open) lips | Indicating to keep the mouth shut / to be silent |
| The swear | Raising the index and middle finger / the palm of the hand closed together above shoulder height (often combined with placing the right hand on the heart / the bible) | To swear |




The tables provide the most common *emblems* and *social conventions* in the Western part of Germany. The compilation is based on a pilot study by Michaela Klüh from 2011.

**Tab. 13:** Specific *social conventions* used in a population in the Western part of Germany

### **9.5 Procedure for Step 7 / Module III in NEUROGES®-ELAN**

The 'to-be-coded' Type units are simply generated by copying the Function units.

### **9.5.1 Generation of the 'to-be-coded' Type units**

Open the eaf file with the Function units (Step 6 / Module III), then proceed as follows:

Repeat the following steps for each of the three Function tiers, i.e., bh\_Function\_RX, rh\_Function\_RX, and lh\_Function\_RX:

Apply the function: Tier > Copy Tier.

Select a tier to copy: click on the respective Function tier.

Next.

Select the new parent tier: skip this step.

Next.

Select another linguistic type: click on Type.

Finish.

When the three operations are finished,

apply the function: Tier > Change Tier Attributes.

Scroll down in the list to the end:

Click on bh\_Function\_R0-cp.

Enter the Tier Name: bh\_Type\_RX ('RX' = your initials).

Enter the Annotator: your name.

Enter the Participant: the identification of the person whom you are going to code. Change.

Click on rh\_Function\_R0-cp.

Enter the Tier Name: rh\_Type\_RX.

Enter the Annotator: your name.

Enter the Participant: the identification of the person whom you are going to code. Change.

Click on lh\_Function\_R0-cp.

Enter the Tier Name: lh\_Type\_RX.

Enter the Annotator: your name.

Enter the Participant: the identification of the person whom you are going to code.

Change.

Close.

Now, you have the following new tiers:

bh\_Type\_RX

rh\_Type\_RX

lh\_Type\_RX.
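When units are handled programmatically outside ELAN, the copy step above can be mimicked with a few lines of code. The following pure-Python sketch is an illustrative assumption (the `Tier` structure and helper names are not part of NEUROGES®-ELAN); it copies a Function tier into a 'to-be-coded' Type tier that keeps the copied Function values:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Tier:
    name: str
    ling_type: str
    annotator: str = ""
    participant: str = ""
    # annotations as (start_ms, end_ms, value) triples
    annotations: List[Tuple[int, int, str]] = field(default_factory=list)

def copy_to_type_tier(src: Tier, coder_initials: str,
                      annotator: str, participant: str) -> Tier:
    """Copy a Function tier to a new Type tier; the 'to-be-coded' units
    keep the copied Function values until they are re-coded."""
    hand = src.name.split("_")[0]  # 'bh', 'rh', or 'lh'
    return Tier(
        name=f"{hand}_Type_{coder_initials}",
        ling_type="Type",
        annotator=annotator,
        participant=participant,
        annotations=list(src.annotations),  # units are copied unchanged
    )

# Example: copy the three Function tiers in one pass.
function_tiers = [
    Tier("bh_Function_R0", "Function", annotations=[(0, 800, "pantomime")]),
    Tier("rh_Function_R0", "Function", annotations=[(900, 1500, "form presentation")]),
    Tier("lh_Function_R0", "Function", annotations=[]),
]
type_tiers = [copy_to_type_tier(t, "AB", "Alex", "P01") for t in function_tiers]
```

The same logic could be applied to real .eaf files with an ELAN scripting library, but the ELAN GUI procedure above remains the reference workflow.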

#### **9.5.2 Coding the 'to-be-coded' Type units**

The units on the tiers bh\_Type\_RX, rh\_Type\_RX, lh\_Type\_RX are now taken as the basis for the coding of the Type category (therefore, they are termed 'to-be-coded' Type units). The units still have the copied Function values. The Function value determines the choice of Type values (see Fig. 11).

When coding the units of the tiers bh\_Type\_RX, rh\_Type\_RX, and lh\_Type\_RX, proceed chronologically, i.e., code the units in the order of their occurrence, e.g. rh unit, lh unit, bh unit, bh unit, rh unit, etc.

If a Type value changes within a 'to-be-coded' Type unit, replace the old unit by the new subunits (compare 4.2).

The Type assessment is not conducted for 'to-be-coded' Type units with the Function values *object-oriented action, subject-oriented action,* and *emblem/social convention.* Thus, these units are not re-coded with Type values but are kept as they are in order to potentially serve as a basis for the generation of the 'to-be-coded' units for the Supplementary category assessment.
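This selection rule can be expressed as a small predicate. A sketch (the function name and value spellings are illustrative assumptions):

```python
# Function values whose copied units are kept as they are, i.e. not
# re-coded with Type values (see the rule above).
NO_TYPE_ASSESSMENT = {
    "object-oriented action",
    "subject-oriented action",
    "emblem/social convention",
}

def needs_type_coding(function_value: str) -> bool:
    """True if a 'to-be-coded' unit with this Function value receives a Type value."""
    return function_value not in NO_TYPE_ASSESSMENT
```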

### **9.5.3 Alternative procedure: Manual generation of 'to-be-coded' Type units**

If you start with the Type category, i.e., you have not assessed Modules I and II and the Function category before, use the alternative procedure of manual unit generation. In this procedure, the tiers bh\_Type\_R0, rh\_Type\_R0, and lh\_Type\_R0 that are provided in the template are used. If you intend to analyze foot gestures, head gestures, and trunk gestures as well, please generate the tiers yourself. Then directly tag all gestures according to the definition given in 9.2.2. For the limbs, the units have to be differentiated as unilateral right, unilateral left, and bilateral. Unilateral limb units are units in which one limb moves while the other limb rests. Bilateral units are units in which both limbs move simultaneously (compare the definitions given in III). The bilateral units should first be classified with the Formal Relation values (see 8.5.3). The 'to-be-coded' Type units are assessed according to the rules described in 9.5.2.
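The differentiation into unilateral right, unilateral left, and bilateral units can be sketched as a segmentation over the two limbs' movement intervals. This is a simplified illustration under assumed data structures, not the full NEUROGES® unit-segmentation rules:

```python
from typing import List, Tuple

Interval = Tuple[int, int]  # (start_ms, end_ms) of one limb's movement

def classify_laterality(right: List[Interval],
                        left: List[Interval]) -> List[Tuple[int, int, str]]:
    """Split the timeline at all movement on-/offsets and label each segment
    as bilateral (both limbs move), unilateral right, or unilateral left."""
    points = sorted({t for s, e in right + left for t in (s, e)})

    def moving(intervals: List[Interval], a: int, b: int) -> bool:
        # the segment (a, b) lies fully inside or outside each interval,
        # because all interval endpoints are boundary points
        return any(s <= a and b <= e for s, e in intervals)

    out = []
    for a, b in zip(points, points[1:]):
        r, l = moving(right, a, b), moving(left, a, b)
        if r and l:
            out.append((a, b, "bilateral"))
        elif r:
            out.append((a, b, "unilateral right"))
        elif l:
            out.append((a, b, "unilateral left"))
    return out
```

For example, a right-hand movement from 0–10 s overlapping a left-hand movement from 5–15 s yields a unilateral right, a bilateral, and a unilateral left unit.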

## **V Supplementary categories**

The Supplementary categories offer an advanced examination of specific topics: Technique of Presentation, Efforts, Temporal Structure, Target Hemi-Space, Execution Hemi-Space, Referent, and Trigger/Motive. These categories differ from the main categories Activation, Structure, Focus, Contact, Formal Relation, Function and Type, as they are not part of the proper assessment algorithm but constitute an additional assessment for specific main values. Furthermore, since the Supplementary categories deal with highly specific topics, thus far they have only been employed in a few empirical studies (e.g. Lausberg & Kita, 2002; Lausberg & Kita, 2003; Lausberg et al., 2003; Densing et al., 2017) and therefore, in contrast to the main categories, no substantial data on reliability and validity are yet available. Thus, researchers who apply these categories have to thoroughly test the interrater agreement.

The Supplementary categories serve to specify certain main values. For some main values several Supplementary categories can be applied, e.g. for the Function value *motion quality presentation* the categories Technique of Presentation, Target Hemi-Space, Execution Hemi-Space, and Referent. Obviously, the choice of a Supplementary category depends on the research question. Technically in NEUROGES®-ELAN, the units of the main values that shall be submitted to the supplementary assessment are copied to a new tier that is linked with the Linguistic Type of the Supplementary category (same procedure as for the main categories; see the sections Procedures in NEUROGES®-ELAN in the chapters on the main categories).

## **10 Supplementary category Technique of Presentation**

### **10.1 Definition of the category Technique of Presentation**

The Supplementary category Technique of Presentation refers to the gestural techniques that are used to present information about a form: *tracing, palpating, enclosure,* or *hand-as-object*.

### **10.2 Selection of units for the Technique of Presentation assessment**

The category Technique of Presentation is applied to the Function value *form presentation* and the Type values *form–shape* and *form–size,* respectively (primary form values).

If the Function values *pantomime, spatial relation presentation*, *motion quality presentation,* or the respective Type values *pantomime–transitive-active, pantomime–transitive-passive, spatial relation–position, spatial relation–route, motion quality–manner*, and *motion quality–dynamics* contain information about the shape or the size of an object (secondary form values), the Technique of Presentation can also be applied to these Function and Type values. However, only the Techniques of Presentation *hand-as-object* and *enclosure*, in which information about a shape or a size is conveyed by a static technique, i.e., a static hand shape, can be used for the presentation of form information in *pantomime–transitive-active, pantomime–transitive-passive, motion quality–manner*, and *motion quality–dynamics presentations.* In other words, if the Technique of Presentation per se is already dynamic, as in *tracing* and *palpating*, it cannot be combined with a (dynamic) *pantomime* or *motion quality presentation*, e.g. if the hand is *tracing* the shape of an object it cannot – at the same time – present a manner of motion. In contrast, if the hand embodies an object (*hand-as-object*) it can, in addition, present a manner of motion. Tab. 14 shows the Techniques of Presentation for primary and secondary form values.
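The combinability rules (static techniques combine with dynamic presentations, dynamic techniques do not) can be encoded as a small lookup. A sketch following the primary and secondary value lists given in 10.3; the value spellings are illustrative assumptions:

```python
# Primary Type values per Technique of Presentation (see 10.3.1-10.3.4)
PRIMARY = {
    "hand-as-object": {"form-shape"},
    "enclosure":      {"form-shape", "form-size"},
    "palpating":      {"form-shape"},
    "tracing":        {"form-shape", "form-size"},
}

DYNAMIC_SECONDARY = {
    "pantomime-transitive-active", "pantomime-transitive-passive",
    "spatial relation-position", "spatial relation-route",
    "motion quality-manner", "motion quality-dynamics",
}

# Secondary Type values: static techniques combine with all secondary values,
# dynamic techniques only with the spatial-relation values listed in 10.3
SECONDARY = {
    "hand-as-object": set(DYNAMIC_SECONDARY),
    "enclosure":      set(DYNAMIC_SECONDARY),
    "palpating":      {"spatial relation-route", "spatial relation-position"},
    "tracing":        {"spatial relation-position"},
}

def allowed_techniques(type_value: str) -> set:
    """Return the Techniques of Presentation applicable to a Type value."""
    return {t for t in PRIMARY
            if type_value in PRIMARY[t] or type_value in SECONDARY[t]}
```

For a dynamic value such as *motion quality–manner*, only the static techniques remain; for *spatial relation–position*, all four techniques are applicable.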

If there is a change of the Technique of Presentation within a unit, which had been adopted from the Function or Type category coding, then the Technique of Presentation change demarcates subunits.

**Fig. 12:** Values of the category Technique of Presentation

### **Differentiate the Supplementary category Techniques of Presentation from**…

# Type value *pantomime–transitive*: Techniques of Presentation serve to present a form, i.e., the gestural message is "This is a square" or "Here is a round shape". They do **not** have a pantomiming function, i.e., the gestural message is **not** "I am tracing" or "I am palpating". Techniques of Presentation are **not** displayed with dynamics (variations in the effort factors), as the emphasis is on the presented image. In contrast, the corresponding *pantomime–transitive* gestures such as palpating, embracing, or positioning are displayed with dynamics, as the emphasis is on the action per se. As an example, the *pantomime–transitive* positioning could be performed with light and sustained effort qualities as if placing something precious on a table. Or, it could be performed with strong and direct effort qualities as if throwing something on the floor. (The differentiation should have already been made in the Function or Type category assessment. However, for didactical purposes the differentiation is reported here again.)

### **10.3 Definitions of the Techniques of Presentation values**

#### **10.3.1** *hand-as-object*

#### **Short definition**

#### EMBODIMENT OF AN IMAGINARY OBJECT

#### **Definition**

The shape of the imaginary object is embodied by the hand. Therefore, this technique is characterized by a very distinct hand shape. The hand adopts the shape of the object of reference. The hand may not only provide information about the shape but also, on a very basic level, about the content of the object (solid versus vacuum), e.g. making a fist to present a round, solid object or forming a hollow to present a round object with a vacuum. Furthermore, the hand may not only represent the shape of an object but also of an agent, e.g. index and middle finger with tips down represent legs (the legs being a pars pro toto for a human being). In the rare case of the hand embodying an agent, make a note: hand-as-agent.


**Tab. 14:** Techniques of Presentation used to present a form in different Type values

\* The Technique of Presentation is assessed if the value includes size or shape information.

*Hand-as-object* is a static Technique of Presentation, i.e., the hand becomes a static sculpture and presents a still image of a form (see 8.4.6). Thus, the complex phase is a static complex phase. As *hand-as-object* is a static Technique of Presentation, it can be combined with all Function and Type values that include primary and secondary form information.

### **Function values**

### primary: *form presentation*

secondary (i.e., that include a *form presentation*): *pantomime*, *spatial relation presentation*, and *motion quality presentation*

### **Type values**

#### primary: *form–shape*

secondary: *pantomime–transitive-active, pantomime–transitive-passive, spatial relation–position, spatial relation–route, motion quality–manner*, and *motion quality–dynamics presentations*

### **Examples for** *hand-as-object* **in primary and secondary form values**


### **10.3.2** *enclosure*

#### **Short definition**

### ENCLOSURE OF AN IMAGINARY OBJECT

### **Definition**

The hand(s) encloses the imaginary object. The shape of the imaginary object is presented by a hand grip. Thus, there is a distinct shape that enables one to infer the specific shape of the represented object.

*Enclosure* is a static Technique of Presentation, i.e., the hand adopts a static shape (grip) and presents a still image of a form. Thus, the complex phase is a static complex phase. As *enclosure* is a static Technique of Presentation, it can be combined with all Function and Type values that include primary and secondary form information.

### **Function / Type values**

#### primary: *form–shape*, *form–size*

secondary: *pantomime–transitive-active, pantomime–transitive-passive, spatial relation–position, spatial relation–route, motion quality–manner*, and *motion quality–dynamics presentations*

#### **Examples for** *enclosure* **in primary and secondary Type values**

	- (i) unimanually, by showing the distance between the thumb and the index of one hand;
	- (ii) bimanually, by showing the distance between the two flat hands with the palms (at least one of them) oriented to the center of the imaginary object, i.e., the palms are oriented to each other;
	- (iii) unimanually, by showing the distance between the hand in space (palm oriented to the center of the imaginary object) and an external surface such as the floor or the table.

#### **Differentiate** *enclosure* **from**…

# *palpating*: *Enclosure* is a static Technique of Presentation. The shape of the imaginary object is presented by a fixed hand grip. The complex phase is a static complex phase. In contrast, *palpating* is a dynamic Technique of Presentation. The shape is presented by stroking along the imaginary object. The complex phase is a motion complex phase.

### **10.3.3** *palpating*

#### **Short definition**

#### PALPATING AN IMAGINARY OBJECT

### **Definition**

The hand palpates, feels, or strokes along an imaginary object. The sensitive palms are fully used to feel the imaginary object and they dynamically adapt to the shape of the imaginary object. The palms are oriented toward the center of the imaginary object. *Palpating* enables the presentation of 2- and 3-dimensional shapes. Thus, it refers to *shape* presentation and per definition not to *size* presentation, which presents only one dimension.

*Palpating* is a dynamic Technique of Presentation. Via the process of palpating a motion image (see 8.4.6) of the shape is created. Thus, there is a motion complex phase.

### **Function / Type values**

primary: *form–shape*

secondary: *spatial relation–route, spatial relation–position*

#### **Examples for** *palpating* **in primary and secondary form values**


### **Differentiate** *palpating* **from**…

The following differentiation should have already been made in the Function or Type category coding. However, for didactical purposes the differentiation is reported here again:

# *pantomime–transitive* of palpating an object: Gestures in which *palpating* is used as a Technique of Presentation lack movement dynamics because the message focuses on the motion image of the shape that is created, e.g. "This is a ball". In contrast, *pantomime – transitive* gestures of palpating an object focus on the action of palpating from an egocentric perspective, e.g. "I am palpating".

### **10.3.4** *tracing*

### **Short definition**

### TRACING THE CONTOUR OR THE EXTENT OF AN IMAGINARY OBJECT

#### **Definition**

The hand traces or draws with the fingertips, often only with that of the index, the contour of an object (*shape*) or the line of an extent (*size*).

*Tracing* is a dynamic Technique of Presentation. Via the process of tracing a motion image of the shape or the size is created. Thus, there is a motion complex phase.

### **Function / Type values**

primary: *form–shape*, *form–size*

### secondary: *spatial relation–position*

#### **Examples for** *tracing* **in primary and secondary form values**


#### **Differentiate** *tracing* **from**…


The following differentiations should have already been made in the Function or Type category coding. However, for didactical purposes they are reported here:


gesture space. When *tracing* a route, the trace is open, i.e., the starting point of the trace does not match the endpoint of the trace. The fingertips are oriented towards the imaginary spatial map, which is most often projected to the horizontal plane. There is a distinct use of gesture space.


# **11 Supplementary category Efforts**

## **11.1 Definition of the category Efforts**

The Supplementary category Efforts enables the assessment of the dynamics of movement units, i.e., how a movement is performed. The category is directly adopted from Laban movement analysis (see also 4.3). Laban (1988) defined Efforts as the inner impulses from which movement originates. He distinguished the four factors flow, weight, time, and space. Each factor comprises a continuum of qualities with two polarities, i.e., flow changes between *free* and *bound*, weight between *strong* and *light*, time between *sustained* and *sudden*, and space between *direct* and *indirect*. These effort qualities result from the inner attitude (conscious or unconscious) towards the four Effort factors. The definitions and examples in this chapter are adopted from Robyn Cruz's lecture material (1995) and from Dell (1979). The only modification of the original Laban Effort analysis is that in NEUROGES®, because of clinical relevance, the value *heavy* has been added in the factor weight.
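The factor/quality structure can be written down as a simple mapping. A minimal sketch (the spellings and function name are illustrative assumptions):

```python
# The four Laban Effort factors with their polar qualities, plus the
# NEUROGES® addition 'heavy' in the factor weight (see 11.1).
EFFORT_FACTORS = {
    "flow":   ("free", "bound"),
    "weight": ("strong", "light", "heavy"),  # 'heavy' added in NEUROGES®
    "time":   ("sustained", "sudden"),
    "space":  ("direct", "indirect"),
}

def is_valid_effort(factor: str, quality: str) -> bool:
    """Check that a coded quality belongs to the given Effort factor."""
    return quality in EFFORT_FACTORS.get(factor, ())
```

Such a check can guard a coding template against qualities entered under the wrong factor, e.g. *direct* coded under time.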

### **11.2 Selection of units for the Effort assessment**

Variations in the Effort factors, i.e., dynamics, are components of many gesture types and actions. They are conceptually intrinsic to *pantomime* and *motion quality* gestures, to *object-oriented* and *subject-oriented actions*, and to *emotion/attitude* movements. As an example, if the gesturer presents a *motion quality – dynamics* gesture in which something is crashing *suddenly* or falling *heavily*, the dynamic quality is part of the creative depiction. Or, if the gesturer pretends to hammer (*pantomime*) or actually hammers (*object-oriented action*), the effort qualities *strong* and *direct* are intrinsic components of the *pantomime* and *action*, respectively. *Subject-oriented actions* such as *lightly* stroking the face can also include dynamics. Likewise, effort qualities are intrinsic to *emotion/attitude* movements (genuine and learned emotional expressions), e.g. the expression of sadness implies *heaviness*.

Effort qualities might also serve to lend an emotional connotation or a certain dynamics to other Function values. Effort qualities often occur in *emphasis* gestures, e.g. to perform a baton *strongly* and *directly*. *Egocentric deictics*, *directions*, and *emblems* may also be performed with effort dynamics, e.g. an insulting emblem. Thus, the Supplementary category Effort can be applied to all Function and Type values with the exception of *form presentation* and *spatial relation presentation*, which focus on providing information about forms and spatial relations but not about dynamics.

**Fig. 13:** Values of the category Efforts

Furthermore, the Supplementary category Effort can also be applied to all other NEUROGES® main categories and values, e.g. a *sudden shift* to a closed position, a *light on body* movement, or *strong act on each other* movements. However, it should be noted that especially for non-conceptual movements Laban-based analyses offer more fine-grained differentiations, e.g. precursors of efforts as described by Kestenberg (1965a,b; 1967) with tension flow rhythms (sucking, snapping/biting, twisting, strain/release, running/drifting, starting/stopping, swaying, surging/birthing, jumping, spurting/ramming) and tension flow attributes (flow adjustment/even flow, low intensity/high intensity, graduality/abruptness) and possibly pre-efforts (flexibility/channeling, gentleness/vehemence-straining, hesitation/suddenness) (Eberhard, personal communication, 2012). However, for most research questions, the Supplementary category Efforts will be sufficient to describe the observed phenomenon.

If the Effort quality changes within a unit that had been adopted from the main categories, the Effort quality change demarcates subunits.

### **11.3 Definition of the Effort values**

### **11.3.1** *free flow*

a variation in body tension representing ease: going with, allowing energy to go through and beyond the body boundaries; indulgent / expansive use of flow

#### **11.3.2** *bound flow*

a variation in body tension representing restraint of movement: restricted, controlled, keeping energy flow within body boundaries; fighting / condensing use of flow, e.g. in the *pantomime – transitive-active* threading a needle, carrying a pot of hot coffee. "Examples of changing between *free* and *bound* flow might be found in: 1) a free sweep of your arm during a conversation, in which you knock something over, and then freeze; 2) carrying a full pan of water over a distance, setting it down and then shaking yourself for relief." (Dell, 1979, p. 15)

### **11.3.3** *light weight*

force or pressure exerted in movement holding body weight: rarefied, delicate, fine touch, overcoming the body weight; indulgent / expansive intention in weight, e.g. *pantomime–intransitive* walking on ice, *motion quality–dynamics* representing a feather that is gliding to the ground, *pantomime–transitive* carrying a delicate porcelain figure

### **11.3.4** *strong weight*

force or pressure exerted in movement using body weight: having impact, penetrating, getting behind the body weight; fighting / condensing intention in weight, e.g. *pantomime–transitive* hammering a nail into a wall

### **11.3.5** *heavy weight*

no force or pressure exerted in movement: passive giving of body weight into gravity, e.g. *motion quality–dynamics* presenting the falling of a heavy stone

### **11.3.6** *sustained time*

compensation to outward time demands, or attitude toward duration of action: stretching out time, leisurely, actively indulging in time; indulgent / expansive decision in time (distinguish from slow motion or evenness of bound flow), e.g. *pantomime – transitive-active* taking all the time in the world to get up

### **11.3.7** *sudden time*

compensation to outward time demands, or attitude toward duration of action: urgent, instantaneous, a sense of urgency recreated each time; fighting / condensing decision in time (distinguish from fast or tempo increase), e.g. *pantomime–intransitive* runner making a quick start

### **11.3.8** *indirect space*

attention or orientation to space, how energy is focused in action, free-floating attention: multi-overlapping foci, multi-faceted attention, active meandering; indulgent / expansive attention in space, e.g. *pantomime–intransitive* of trying to orientate blind-folded

### **11.3.9** *direct space*

attention or orientation to space, how energy is focused in action, selective attention: channeled, pin-pointing; fighting / condensing attention in space, e.g. *deictic–external target* to a tiny target

## **12 Supplementary category Temporal Structure**

### **12.1 Definition of the category Temporal Structure**

The Supplementary category Temporal Structure enables the registration of temporal aspects of a movement unit. The Temporal Structure is defined by the equality versus inequality of the durations between the subphases<sup>22</sup> in units with a *repetitive* Structure (see 4.4.2). Thus, it determines whether the durations between the subphases vary or not. At least three subphases are needed in order to identify the temporal pattern of a *repetitive* unit, as this makes it possible to compare the duration between subphase 1 and subphase 2 with the duration between subphase 2 and subphase 3.

Technically, in NEUROGES®-ELAN the durations between the subphases are compared by noting the instants of the turn-points of the subphases. A turn-point is a change of direction, e.g. back – forth.

### **12.2 Selection of units for the Temporal Structure assessment**

The Temporal Structure assessment is applied to units with a *repetitive* Structure. In the Function analysis, these units often turn out to be *emphasis* gestures<sup>23</sup>. Furthermore, *irregular* units, which may include transient repetitive phases, may be assessed regarding the Temporal Structure.

As a rhythm may contain metric phases and, furthermore, as rhythm and meter contain single accents, it is important to stick to the rule that the Temporal Structure value always refers to the complete *repetitive* unit. This implies that in the Temporal Structure assessment per definition no subunits are created. As an example, a *repetitive* unit with the temporal pattern _ … _ … would not be segmented into *single accent – metrical – single accent – metrical* subunits, but be coded as a whole unit with the value *rhythmical*.

<sup>22</sup> As an example, a *repetitive* unit in which the same direction – with or without displacement in one dimension–is taken three times, e.g. back – forth – back – forth – back – forth, consists of three subphases.

<sup>23</sup> However, *superimposed emphasis* units cannot be used as to-be-coded units for the Temporal Structure assessment, since they are often a subunit of the original *repetitive* unit. As stated above, the Temporal Structure assessment refers to the complete *repetitive* unit.

**Fig. 14:** Values of the category Temporal Structure

## **12.3 Definition of the Temporal Structure values**

### **12.3.1** *single accents*

The *repetitive* unit comprises fewer than three subphases. Thus, the existence of a meter versus rhythm cannot be determined.

### **12.3.2** *metrical*

The *repetitive* unit comprises three or more subphases and the duration between the subphases is equal.

### **12.3.3** *rhythmical*

The *repetitive* unit comprises three or more subphases and the duration between the subphases varies.
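The decision rule behind the three Temporal Structure values can be sketched as a small function. This is an illustrative sketch only, not part of NEUROGES®-ELAN; the representation of turn-points as timestamps and the tolerance parameter `tol` are assumptions made for illustration.

```python
def temporal_structure(turn_points, tol=0.05):
    """Classify the Temporal Structure of a repetitive unit.

    turn_points: instants (in seconds) of the subphase turn-points,
        i.e., the changes of direction (e.g. back - forth).
    tol: assumed tolerance (seconds) within which durations count as equal.
    """
    # Fewer than three subphases: meter versus rhythm cannot be determined.
    if len(turn_points) < 3:
        return "single accents"
    # Durations between consecutive subphases.
    durations = [b - a for a, b in zip(turn_points, turn_points[1:])]
    # Equal durations (within tolerance) -> metrical; varying -> rhythmical.
    if max(durations) - min(durations) <= tol:
        return "metrical"
    return "rhythmical"
```

For instance, turn-points at 0.0, 0.5, 1.0, and 1.5 seconds would be classified as *metrical*, while turn-points at 0.0, 0.4, and 1.3 seconds would be *rhythmical*.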

## **13 Supplementary category Target Location**

### **13.1 Definition of the category Target Location**

The Supplementary category Target Location specifies where a target is located. The target may be a position, an object, or a subject in the physical space or in the imaginary space, which is reflected in or created by gesture.

With reference to the neural organization of spatial attention, the location of the target is defined relative to the sagittal plane that is in line with the body midline. The body midline sagittal plane divides the body (body-internal space and body-surface) and the gesture/action space (body-external space) in two halves. Accordingly, four Target Location values are defined: (i) *right side:* the target is in the body-internal space, the body-surface, or the body-external space that is right to the body midline sagittal plane, (ii) *left side:* the target is in the body-internal space, the body-surface, or the body-external space that is left to the body midline sagittal plane, (iii) *body-midline:* the target is in the sagittal plane that is in line with the body midline, and (iv) *both sides*: in the to-be-coded unit, the target is / the targets are on both the right and left sides.
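The four-value scheme can be sketched as a classification function. This is an illustrative sketch, not NEUROGES®-ELAN functionality; the coordinate convention (midline at 0, right of the midline positive, left negative) and the tolerance `eps` are assumptions made for illustration.

```python
def target_location(target_xs, eps=0.01):
    """Classify the Target Location of a to-be-coded unit.

    target_xs: lateral coordinates of the target(s) in the unit, relative
        to the body midline sagittal plane (assumed convention: midline
        at 0, right positive, left negative).
    eps: assumed tolerance within which a target counts as on the midline.
    """
    right = any(x > eps for x in target_xs)
    left = any(x < -eps for x in target_xs)
    # Targets on both the right and left sides -> "both sides".
    if right and left:
        return "both sides"
    if right:
        return "right side"
    if left:
        return "left side"
    # Target(s) in the sagittal plane in line with the body midline.
    return "body-midline"
```

Note that, in line with the rule in 13.2, such a classification would be applied per complex phase, never to parts of one complex phase.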

The precise definition of the target depends on the Function of the hand movement, as defined by the Function value.

For *egocentric deictics,* the target is the location the hand points to. For *egocentric deictics* with the Focus (value) *on body, on attached object, on separate object,* or *on person,* the target is where the body-surface, the attached object, the separate object or the person, respectively, is touched. For *in space*–*egocentric deictics* the spatial target is distant from the hand. It may even be in the other half of the gesture space than the gesture execution. As an example, the hand executes a *deictic–external space* in the right gesture space but the target of the pointing gesture is the left gesture space<sup>24</sup>.

For *egocentric directions,* the spatial target is the location the hand directs at. *Egocentric directions* are almost exclusively *in space* and thus, the spatial target is distant from the hand. The destination of the direction or the route is the target.

<sup>24</sup> The gesture space where the gesture is executed is assessed with the Supplementary category Execution Hemi-Space. Dissociations between the Execution Hemi-Space and the Target Location Hemi-Space provide valuable diagnostic information about neuropsychological impairment such as neglect.

For *pantomime–transitive*, the target is the imaginary counter-part that the gesturer pretends to act on, e.g. an imaginary object or environment. *Pantomime– transitive* gestures may include an imaginary tool that the gesturer uses to act on the imaginary counter-part. As an example, first the imaginary toothbrush (target 1) is grasped from an imaginary bathroom sink, then it is moved in front of the mouth (target 2) (in this example, based on the two different targets, two subunits are generated).

For *spatial relation*–*position* gestures, the target is the position that is created. In *spatial relation – route* gestures, the target is the destination of the created route or direction.

For *object-oriented actions*, the target is the object that is manipulated. For *on separate object*, *on attached object*, or *on person object-oriented actions*, the target corresponds to the Focus (value), i.e., the target is the separate object, the attached object or the person, respectively.

For *subject-oriented actions*, the target is most often a part of the body that is manipulated. For *within body, on body, on attached object,* or *on separate object subject-oriented actions*, the target corresponds to the Focus (value), i.e., the target is the muscles and joints, the part of the body, or the attached or separate object.<sup>25</sup>

To summarize, for *egocentric deictics* and *egocentric directions,* the target is the location the hand points to or directs at. For *pantomimes*, the target is the imaginary counter-part that is pretended to act on. For *spatial relation presentation* gestures, the target is the created position or destination. For *actions* the target is the part of the body (inner structures and surface) or the object/subject that/ who is manipulated.

### **13.2 Selection of the units for the Target Location assessment**

For a reliable assessment of the Target Location category the experimental setting should have a frontal view on the participant, in order to precisely estimate the body midline sagittal plane.

<sup>25</sup> For *within body* and *on body subject-oriented actions*, the side where the target is (Target Location) may dissociate from the gesture/action hemi-space where the movement is executed (Execution Hemi-Space), if the target is a mobile part of the body that can be moved to the contralateral gesture space. As an example, the left hand may be moved to the right gesture/action hemi-space. If the left hand is touched by the right hand in the right gesture hemi-space, the Target Location value is *left side* and the right hand Execution Hemi-Space value is *ipsilateral* (i.e., the right gesture/action hemi-space).

**Fig. 15:** Values of the category Target Location

A target, as defined above, can be identified in the following Function/Type values: *egocentric deictic–external target, egocentric deictic–You, egocentric deictic–body, egocentric deictic–self, egocentric direction–neutral, egocentric direction–imperative, egocentric direction–self-related*, *pantomime–transitive, spatial relation–position, spatial relation–route*, *object-oriented action, subject-oriented action,* and furthermore, if spatial relation information is included, also in *motion quality presentation.*

The Target Location assessment applies to the complex phase. In *phasic* and *repetitive*<sup>26</sup> units with one complex phase, there can be only one target. In *phasic* and *repetitive* units with two or more complex phases, there can be two or more targets. Thus, if the Target Locations differ between the complex phases of one to-be-coded unit, the Target Location assessment creates subunits that match the complex phases. However, one complex phase is never divided further. This is an important rule because the value *both sides* is identical with the sum of the values *right side* and *left side*. Thus, any *both sides* unit would automatically be split up into a *right side* subunit and a *left side* subunit if a complex phase could be split up.

### **13.3 Definition of the Target Location values**

#### **13.3.1** *right side*

The target is located in the body-internal space, the body-surface, or the body-external (gesture/action) space that is right to the body midline sagittal plane.

<sup>26</sup> Note that in *repetitive* units, the Target Location assessment does not apply to the single subphases of the complex phase, but only to the complete complex phase.

### **Examples**


### **13.3.2** *left side*

The target is in the body-internal space, the body-surface, or the body-external space that is left to the body midline sagittal plane.

### **Examples**

♦ in analogy to *TL right side*

### **13.3.3** *body-midline*

The target is located in the sagittal plane that is in line with the body midline. Thus, the value *body-midline* refers only to a plane and not to a three-dimensional space. Accordingly, *body-midline* targets require spatially precise gestures and actions. As an example, in order to convey the information that a distant target is located at the body-midline sagittal plane, kinesically the longitudinal hand axis has to be in line with the body-midline sagittal plane. Thus, in a *deictic–external target* that refers to a target that is in line with the body midline, the longitudinal hand axis is in line with the body midline sagittal plane and the target is in the imaginary prolongation of the hand axis. In *spatial relation – position* gestures, the position created in the gesture space is in line with the *body midline*. In *egocentric direction* and *spatial relation – route* gestures, the target (destination) is in line with the *body-midline*<sup>27</sup>.

### **Examples**


<sup>27</sup> Often also the movement path (upward, downward, forward, backward, or a blend of these directions) of the *egocentric direction* or *spatial relation – route* gesture is executed in the body-midline plane. As an example, the hand with palm down is held in front of the navel and then moved downwards several times as the gesturer indicates to him-/herself to calm down (*egocentric direction – self-related*). Here, the Target Location value is identical with the Execution Hemi-space value (both: *body-midline*).


### **13.3.4** *both sides*

The value *both sides* comprises the *left side* and the *right side* of the body-internal space, the body-surface, and the body-external (gesture/action) space. As a target is one location, at first glance it may seem counter-intuitive that one location is on *both sides*. However, there are several possible constellations:


### **Examples**


## **14 Supplementary category Execution Hemi-Space**

### **14.1 Definition of the category Execution Hemi-Space**

The Supplementary category Execution Hemi-Space specifies the location where the hand displays the complex phase. Thus, in the context of this category, execution is defined as the display of the complex phase.

With reference to the neural organization of spatial attention, the location of the execution is defined relative to the sagittal plane that is in line with the body midline (same as for the Supplementary category Target Location). The body midline sagittal plane divides the body-external space (gesture/action space) in two hemi-spaces. A hemi-space is defined as the body-external space that is to the left or to the right of the body midline sagittal plane. Four Execution Hemi-Space values are defined: (i) *ipsilateral:* the laterality of the hand that displays the complex phase matches the laterality of the hemi-space, e.g. left hand executes in left hemi-space, (ii) *contralateral:* the laterality of the hand that displays the complex phase contrasts the laterality of the hemi-space, e.g. left hand executes in right hemi-space, (iii) *body midline:* the hand displays the complex phase in the sagittal plane that is in line with the body midline, and (iv) *ipsi-contra:* the hand displays the complex phase in both hemi-spaces.

While the category Execution Hemi-Space shares with the category Target Location the neuropsychologically grounded division of space by the body midline sagittal plane, the two categories differ in that Target Location applies to the body-internal space, the body-surface, and the body-external space, while Execution Hemi-Space only refers to the body-external space. Furthermore, the Target Location values are defined with regard to an egocentric right–left orientation (Is the target on the left or on the right of the egocentric space?), while the Execution Hemi-Space values are defined with regard to the relation between the laterality of the moving limb and the laterality of the hemi-space (Is the execution on the ipsilateral or on the contralateral side of the limb?).
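The relation between hand laterality and hemi-space that defines the four values can be sketched as a small function. This is an illustrative sketch, not NEUROGES®-ELAN functionality; the coordinate convention (midline at 0, right positive, left negative), the sampling of knuckle positions, and the tolerance `eps` are assumptions made for illustration.

```python
def execution_hemi_space(hand, knuckle_xs, eps=0.01):
    """Classify the Execution Hemi-Space of a complex phase.

    hand: "right" or "left" (the hand displaying the complex phase).
    knuckle_xs: lateral positions of the knuckles sampled over the
        complex phase, relative to the body midline sagittal plane
        (assumed convention: midline at 0, right positive, left negative).
    eps: assumed tolerance for counting the knuckles as on the midline.
    """
    # Re-express positions relative to the hand's own side:
    # positive = ipsilateral hemi-space, negative = contralateral.
    sign = 1 if hand == "right" else -1
    rel = [x * sign for x in knuckle_xs]
    # Knuckles stay in the body midline sagittal plane.
    if all(abs(v) <= eps for v in rel):
        return "body-midline"
    in_ipsi = any(v > eps for v in rel)
    in_contra = any(v < -eps for v in rel)
    # Trajectory extends over both hemi-spaces.
    if in_ipsi and in_contra:
        return "ipsi-contra"
    return "ipsilateral" if in_ipsi else "contralateral"
```

For instance, a left hand whose knuckles remain right of the midline throughout the complex phase would be classified as *contralateral*.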

### **14.2 Selection of units for the Execution Hemi-Space assessment**

For a reliable assessment of the Execution Hemi-Space category the experimental setting should have a frontal view on the participant in order to precisely estimate the body midline sagittal plane.

The Execution Hemi-Space can be assessed for all Function/Type values and furthermore, for all NEUROGES® units with a *phasic* or *repetitive* Structure, which per definition have a complex phase, as the Execution Hemi-Space assessment applies to the complex phase.

In *phasic* and *repetitive*<sup>28</sup> units with one complex phase, there can be only one Execution Space. In *phasic* and *repetitive* units with two or more complex phases, there can be two or more Execution Spaces. Thus, if the Execution Spaces differ between the complex phases of one to-be-coded unit, the Execution Hemi-Space assessment creates subunits. However, one complex phase is never divided further. This is an important rule because the EHS value *ipsi-contra* is identical with the sum of the values *ipsilateral* and *contralateral*. Otherwise, any complex phase that uses the ipsilateral and contralateral hemi-spaces (EHS value: *ipsi-contra*) would automatically be split up into an *ipsilateral* subunit and a *contralateral* subunit.

If, in bimanual units, the Execution Hemi-Space values for the right and left hands differ, the two hands are coded separately.

### **14.3 Definition of the Execution Hemi-Space values**

#### **14.3.1** *ipsilateral*

The laterality of the hand that displays the complex phase matches the laterality of the hemi-space. The knuckles (metacarpophalangeal joints) do not cross the body midline sagittal plane.

In unimanual right hand units, the right hand displays the complex phase in the right hemi-space. In unimanual left hand units, the left hand displays the complex phase in the left hemi-space. In bimanual units, the right hand displays the complex phase in the right hemi-space and the left hand in the left hemi-space.

#### **Examples**


<sup>28</sup> Note that in *repetitive* units, the Execution Hemi-Space assessment does not apply to the single subphases of the complex phase, but only to the complete complex phase.

**Fig. 16:** Values of the category Execution Hemi-Space

♦ In a bimanual unit, the right hand displays the complex phase of an *egocentric deictic* in the right hemi-space, while the left hand displays the complex phase of an *egocentric deictic* in the left hemi-space.

### **14.3.2** *contralateral*

The laterality of the hand that displays the complex phase contrasts the laterality of the hemi-space. The knuckles cross the body midline plane.

In unimanual right hand units, the right hand displays the complex phase in the left hemi-space. In unimanual left hand units, the left hand displays the complex phase in the right hemi-space. In bimanual units, the right hand displays the complex phase in the left hemi-space and the left hand in the right hemi-space.

#### **Example**

♦ *phasic in space–egocentric deictic–external target*: The right hand displays the complex phase of an *egocentric deictic* in the left hemi-space.

### **14.3.3** *body-midline*

The complex phase is performed in the sagittal plane that is in line with the body midline. Typically, the longitudinal hand axis is in line with the body midline plane. If the hand axis is orthogonal to the body midline plane, the knuckles are in line with the body midline plane.

#### **Examples**

♦ *phasic in space*–*egocentric deictic*–*self:* The hand displays the complex phase of an *egocentric deictic* pointing to the sternum. The longitudinal hand axis is in line with the body-midline sagittal plane.


### **14.3.4** *ipsi-contra*

The complex phase trajectory extends over the *ipsilateral* and *contralateral* hemi-spaces.

#### **Examples**


## **15 Supplementary category Referent**

### **15.1 Definition of the category Referent**

A referent is "the thing in the world that the word or phrase denotes or stands for" (Oxford Dictionary). Accordingly, in the Supplementary category Referent, the referent is the thing in the world that the gesture denotes or stands for. The "thing" may be a material object/subject or a non-material phenomenon.

The Supplementary category Referent registers whether the analyst assumes that the Referent of the gesture is (i) a material object/subject or (ii) a non-material phenomenon.

Experienced nonverbal behavior and gesture researchers usually have an intuition whether the Referent of a gesture is of material or non-material nature. However, thus far, little empirical research has been conducted to identify the movement criteria that shape the raters' intuition or to examine whether the raters' intuitive classifications are correct. Therefore, the Supplementary category Referent is a work-in-progress category and its values are hypothetical and need to be tested.

### **15.2 Selection of units for the Referent assessment**

The Supplementary category Referent is assessed for the Function values *egocentric deictic, egocentric direction, pantomime, form presentation, spatial relation presentation, motion quality presentation*, and the respective Type values.

If the Referent changes within the to-be-coded unit, which had been adopted from the Function or Type category assessment, the Referent change demarcates subunits.

### **15.3 Definition of the Referent values**

#### **15.3.1** *material*

#### **Short definition**

GESTURES REFERRING TO MATERIAL OBJECTS / SUBJECTS

#### **Definition**

The rater assumes that the gesture refers to material objects or subjects, their concrete spatial relations or physical motions or actions.

Gestures with the Referent *material* are executed precisely with regard to hand shape, hand orientation, gesture space use, path during complex phase, and efforts. The gaze is directed at the (imaginary) material entity that is indicated or created.

### **Examples**

	- indicating to the addressee to actually move her/his body or parts of his/her body into a specific direction, or to actually move an object into a specific direction, e.g. to move the bottle on the table into a specific direction
	- a ballet dancer rehearsing a choreography with mental training by performing the directions of body movement with his/her hands, e.g. for plié the hands moving down, for jumping the hands moving up.
	- showing the shape of an apple by enclosing an imaginary apple; tracing the shape of the moon
	- showing the height of one's child; showing the height of a mountain
	- showing on an imaginary map of London the route from Tower Bridge to Westminster Abbey

**Fig. 17:** Values of the category Referent

♦ showing the action of walking by moving the index and middle finger with the fingertips oriented downwards

♦ *motion quality–dynamics*: showing a volcanic eruption; showing the sensory quality of a fur

### **Differentiate** *material* **from**…

♦ *non-material*: Gestures that refer to *material* objects or subjects, their concrete spatial relations, or physical motions or actions are distinct and precise with regard to hand shape, hand orientation, gesture space use, path during complex phase, and efforts. The gaze is directed at the (imaginary) material entity that is indicated or created.

Conceptually similar gestures, e.g. an explosive relationship instead of an explosive volcano, may be used to refer to *non-material* phenomena. As compared to *material* gestures, their *non-material* counter-parts are more vague with regard to hand shape, hand orientation, gesture space use, path during complex phase, and efforts. The gaze is **not** directed at the imaginary non-material phenomenon that is indicated or created in gesture.

### **15.3.2** *non-material*

### **Short definition**

#### GESTURES THAT REFER TO NON-MATERIAL PHENOMENA

### **Definition**

Gestures with a *non-material* Referent refer to non-material phenomena. The non-material phenomena may be events, constructs, theoretical models, relations, mental processes such as mood and thoughts, etc. Included are gestures that refer to the gesturer's meta-perspective on his/her own process of thinking.

For researchers who want to specify the latter aspect of *non-material* reference, a more fine-grained differentiation is provided here. These *non-material own thoughts* gestures either implicitly reflect the logic structure of the thinking process (subtype: *own thoughts–ideographic*), the gesturer's evaluation of her/his own thoughts (subtype: *own thoughts–toning*), or the gesturer's acting with the own thoughts (subtype: *own thoughts–acting with*).

*Own thoughts–ideographic* gestures are defined according to Efron (1941, p. 96): "…*ideographic*, in the sense that it traces or sketches out in the air the 'paths' and 'direction' of the thought pattern. [They] might also be called logico-topographic or logico-pictorial." Thus, ideographic gestures reflect the thinking process, i.e., the logical sequence in which the gesturer sets his/her thoughts, which thought develops from another one, etc. In line with Efron's term "logico-topographic", *ideographic* gestures are *spatial relation presentation* gestures.

*Own thoughts–toning* gestures are gestural comments on the gesturer's verbal or gestural statement. They provide information about the reliability of the gestural or verbal statement, such as "I am (not) confident about what I am saying here". *Toning* gestures are typically displayed at the start or at the end of a speech turn or a gesture sequence. *Toning* gestures are typically *motion quality presentation* gestures.

In *own thoughts–acting with*, the gesturer acts with her/his thoughts as if they were objects, e.g. searches for them, throws them away, weighs two ideas, etc. These gestures have an egocentric cognitive perspective, i.e., they are often *egocentric direction* or *pantomime–transitive* gestures. Researchers who want to register whether the *non-material* referent refers to *own thoughts* technically note these codes in the tier Notes in NEUROGES®-ELAN.

In gesture, non-material phenomena are often depicted in analogy to material objects or subjects. However, as compared to gestures with a *material* Referent, their *non-material* counter-parts are more vague with regard to hand shape, hand orientation, gesture space use, path during complex phase, and efforts. The gaze is not directed at the presented non-material phenomenon.

#### **Examples**


♦ indicating to the addressee to move his/her mental state into a certain direction, e.g. to calm down or to cheer up, or to move a thought, a plan, or an idea into a certain direction. As an example, the gesture indicates to the addressee to put down the idea, i.e., to forget about it.

♦ *egocentric direction – self-related:*

performing a repetitive rotation in the wrist when desperately searching for a word

The gesture is an example of the subtype *own thoughts*–*acting with,* as it seems as if the gesturer tries to rotate a word out of his/her brain. The gesture is possibly akin to the *palm-out* gesture as thoughts are rotated out. However, while the *palm-out* gesture reflects the success of having brought out the idea and presenting it, the rotating gesture is performed to facilitate the process of bringing out thoughts.

	- pretending to stand on unstable grounds to depict losing control

pretending to weigh two arguments with the hands when considering the pros and cons of a plan

The weighing gesture is an example of the subtype *own thoughts–acting with*, as it indicates that the gesturer opposes and compares two ideas or arguments. The palms are oriented upwards and held like weighing scales and they move alternatingly up and down.

♦ *pantomime – transitive-passive:*

pretending to be hit by something heavy when depicting being struck by bad news

♦ *form–size*:

depicting a super event or a big problem

♦ *form–shape*:

depicting a perfect round evening


depicting a temporal course, e.g. the decline of the Roman Empire

	- creating a route that reflects the gesturer's implicit structuring of a working process
	- The gesture is an example of the subtype *own thoughts–ideographic*, if it reflects the gesturer's subjective implicit organization of a process, e.g. "I start with the vocabulary (marking position 1), then I learn the grammar (route to position 2), then I directly move to translating the text (direct route to position 3), and finally I repeat the vocabulary again (route back to position 1)".
	- indicating that one is not sure about the own verbal oder gestural statement
	- The waggling gesture is an example of the subtype *own thoughts toning,* if the gesturer uses it to indicate that his/her statement might not be reliable. In this case, the waggling gesture is displayed at the beginning or end of a gesture sequence or the verbal statement.
	- The Structure of the unimanual gesture is *repetitive,* the Focus is *in space*. The waggling gesture is a fast small supination–pronation movement of the hand with not more than 90° between the supination and pronation end positions. The hand shape is flat with the fingers spread without tension. The palm in oriented in the horizontal or sagittal plane. The path during complex phase is two-dimensional and semi-circular.

depicting an explosive dispute

# **16 Supplementary category Trigger/Motive**

### **16.1 Definition of the category Trigger/Motive**

The Supplementary category Trigger/Motive registers whether the analyst assumes that the trigger or the motive of the *subject-oriented action* is (i) to change the gesturer's physical state, (ii) to change the gesturer's visual appearance, or (iii) to regulate the gesturer's mental state.

Experienced nonverbal behavior and gesture researchers usually have an intuition about the trigger or the motive of a *subject-oriented action*. However, thus far, little empirical research has been conducted to identify the movement criteria that shape the raters' intuition or to examine whether the raters' intuitive classifications are correct. Therefore, the Supplementary category Trigger/Motive is a work-in-progress category, and its values are hypothetical and need to be tested.

### **16.2 Selection of units for the Trigger/Motive assessment**

Function units with the value *subject-oriented action* are submitted to the assessment with the supplementary category Trigger/Motive.

### **16.3 Definition of the Trigger/Motive values**

#### **16.3.1** *physical regulation*

#### **Short definition**

#### ACTIONS TO CHANGE GESTURER'S PHYSICAL STATE

#### **Definition**

These are actions that aim at changing physical states, i.e., to relieve unpleasant physical states or to produce pleasant ones. They are reactions to somatosensory perceptions: perceiving pain, being cold, being hot, etc. triggers *physical regulation* actions. Accordingly, these actions have a clear effect on the body that is typically identifiable for the rater, e.g. rubbing the skin or improving vision.

Note that these actions that originally serve *physical regulation* may become a habit and then serve *mental regulation* (see 16.3.3).

#### **Movement form**

**Structure**: *phasic, repetitive*

**Focus**: *on body, within body > on attached object, on separate object, in space*

**Efforts**: depending on the physical goal

**Involvement of other parts of the body**: -

**Gaze**: sometimes at the hand or the focus, respectively, e.g. at the hand that scratches the spot on the arm that itches

**Other criteria**: -

**Note**: the part of the body that is the focus of the *physical regulation* action

**Examples** (ordered according to StructureFocus)


#### **Differentiate** *physical regulation* **from**…

*mental regulation*: In *physical regulation*, the specific physical need that triggers the action can often be objectively identified, e.g. she folds her arms because she is cold. If the physical trigger requires it, the *physical regulation* action may be complex, including contortions such as scratching oneself on the back. In *mental regulation* actions, no objective physical trigger can be identified. *Mental regulation* actions typically occur as part of stereotypical behavioral patterns that are displayed repeatedly in the same psycho-social context, e.g. when someone is stressed, amused, or embarrassed.

**Fig. 18:** Values of the category Trigger/Motive

#### **16.3.2** *visual appearance*

#### **Short definition**

#### ACTIONS TO CHANGE THE GESTURER'S VISUAL APPEARANCE

#### **Definition**

These are actions that change the gesturer's visual appearance. In general, the gesturer intends to improve his/her visual appearance in order to look more attractive. Only shy individuals, or individuals in specific situations, might intend to look less attractive. Thus, *visual appearance* actions are most often preening behaviors, and they have a social effect.

*Visual appearance* actions can be specific reactions to specific deviations in the visual appearance, e.g. the tie is not straight or the hair is not in place. However, *visual appearance* actions may also be displayed if no corrections of the visual appearance are necessary. In this case, they are rather displayed as an appeasement behavior indicating to the addressee that the gesturer wants to be beautiful for him/her and please him/her, e.g. stroking the hair behind the ear although the hair is in place.

#### **Movement form**

**Structure**: *phasic, repetitive*

**Focus**: *on body, on attached object*

**Efforts**: distinct use of effort qualities

**Involvement of other parts of the body**: involvement of trunk possible

**Gaze**: sometimes at addressee

**Other criteria**: -

**Note**: the part of the body that is the focus of the *visual appearance* action

**Examples** (ordered according to StructureFocus)


#### **Differentiate** *visual appearance* **from**…

*mental regulation*: *Visual appearance* actions are effective and specific in changing the visual appearance. Often, the deviation in the visual appearance that motivates the action can be identified. If *visual appearance* actions are displayed as preening behavior, they are demonstrative and communicative.

In *mental regulation* actions, no prior deviation in the visual appearance can be identified. *Mental regulation* actions typically occur as part of stereotypical behavioral patterns that are displayed repeatedly in the same psycho-social context, e.g. when someone is stressed, amused, or embarrassed.

#### **16.3.3** *mental regulation*

#### **Short definition**

#### ACTIONS THAT STIMULATE THE BODY WITHOUT A RECOGNIZABLE PHYSICAL TRIGGER

#### **Definition**

These are actions that serve to regulate the gesturer's mental state, i.e., to work off energy, to stimulate, or to calm down. Two polar mental states that may elicit *mental regulation* actions are distinguished here: (i) hyperarousal, e.g. nervousness or stress; in this condition, *mental regulation* actions serve to calm down the gesturer; (ii) hypoarousal, e.g. boredom or tiredness that the gesturer has to cope with; in this condition, *mental regulation* actions serve to stimulate and activate the gesturer.

Neither a physical trigger, nor a deficit in the visual appearance, nor a social motive to improve the visual appearance is recognizable in *mental regulation* actions. Accordingly, these actions do not necessarily have a clear effect on the body or on the visual appearance. The most important feature of *mental regulation*, in contrast to *physical regulation* and *visual appearance* regulation, is that the individual displays the actions repeatedly in a stereotypical manner in the absence of objectively identifiable physical or visual appearance related triggers. Thus, in order to code the value *mental regulation*, the *subject-oriented action* should be observed at least twice in a stimulus set. If, while coding, the rater suspects that an action serves *mental regulation* but has so far observed the behavior only once, (s)he should first code the value *trigger/motive unknown* and then, if the behavior occurs a second time, go back to the first tag and recode it as *mental regulation.*

#### **Movement form**

**Structure**: *phasic, repetitive*

**Focus**: *on body, on attached object, on separate object, within body*

**Efforts**: depending on the action

**Involvement of other parts of the body**: -

**Gaze**: the gaze is typically **not** directed at the acting hand

**Other criteria**: Gesturers often display the *mental regulation* actions repeatedly in a stereotypical manner, often as part of a complex behavioral pattern, e.g. pulling on the necklace and smiling embarrassedly.

**Note**: the part of the body that is the focus of the *mental regulation* action

**Examples** (ordered according to StructureFocus)


#### **16.3.4** *trigger/motive unknown*

As the Function value *subject-oriented action* refers to internal processes, i.e., physical states, mental states, and the wish to improve one's visual appearance, it may be difficult to reliably identify the internal trigger or motive. If the rater is unsure of his/her assumption concerning the gesturer's trigger or motive, (s)he chooses the value *trigger/motive unknown.*
