# Augmented Humanity

Being and Remaining Agentic in a Digitalized World

Peter T. Bryant


Peter T. Bryant, IE Business School, IE University, Madrid, Spain

#### ISBN 978-3-030-76444-9 ISBN 978-3-030-76445-6 (eBook) https://doi.org/10.1007/978-3-030-76445-6

© The Editor(s) (if applicable) and The Author(s) 2021. This book is an open access publication.

**Open Access** This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: © Alex Linch/shutterstock.com

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

# **Preface**

Digitalization is transforming the contemporary world, as digital technologies infuse all domains of human experience and striving. The effects are everywhere, profound and accelerating. Among the most remarkable effects is the growing capability of human-machine interaction, which allows advanced artificial agents to collaborate with human beings, as enablers, partners, and confidantes. Increasingly, human and artificial agents work together in close collaboration, effectively as one agent in many situations, to pursue common goals and purposes. In this way, digitalization is driving the augmentation of humanity, where humanity is conceived as communities of purposive, goal-directed agents. The transformation is already visible in numerous expert domains, in which artificial agents extend and complement human expertise, for example, in clinical medicine and the piloting of aircraft. Much practice in these domains now relies on the real-time assistance of humans by artificial agents, who together form digitally augmented agents, also called human-agent systems in computer science. Clinicians collaborate with such agents to perform advanced diagnosis, patient monitoring, and surgery, while pilots work closely with artificial avionic agents to control their aircraft. Within each domain, collaboration between human and artificial agents increases the accuracy, speed, and efficiency of action, although digital innovations also bring new risks and dilemmas. When collaboration fails, the results can be debilitating, costly, and sometimes fatal.

Similar opportunities are emerging in many other domains. Highly intelligent, artificial agents are becoming ubiquitous throughout human experience, thought, and action. Industrial organizations are very clearly impacted. Intelligent robots and artificial agents already perform many complex manufacturing, logistical, and administrative tasks, and some are now capable of advanced analytical and creative work (Ventura, 2019). They are powering the fourth industrial revolution. Comparable innovations are transforming education and entertainment, especially in online environments. Everyday life is equally affected. People constantly collaborate with artificial agents, such as Apple's Siri and Amazon's Alexa, in shopping, managing the home environment, and searching for information. In like fashion, the expression of personality, identity, and sociality is mediated by smartphones. Human beings curate their virtual selves, relationships, and social world in collaboration with artificial agents. Most people can confirm this from personal experience. In summary, the early twenty-first century is witnessing the rapid digital augmentation of humanity.

Furthermore, the next wave of digital technology promises to be even more transformative and collaborative. Many people could be surprised by the nature and speed of innovation. For example, newer digital technologies will empower empathic relating, allowing artificial agents to interpret and exhibit emotional states and moods. Google's recent experiments provide strong evidence: observers could not tell the difference between a human and an artificial agent during an everyday telephone conversation (Leviathan & Matias, 2018). The artificial agent sounded genuinely empathic and human. Related innovations will imitate other aspects of personality, including the expression of attitude, opinion, and humor. The cumulative impact will approach what some refer to as the singularity, in which artificial intelligence becomes functionally equivalent to human intelligence and possibly transcendent (Eden et al., 2015). In addition, a vast array of intelligent sensors and the internet of things will enable real-time, precise perception of the environment. Ubiquitous and potentially invasive, digitally augmented surveillance will envelop the world. Applications will deliver many benefits, including the control of autonomous vehicles and smart cities (Riaz et al., 2018), and hopefully more sustainable management of the natural and built environments. Indeed, if wisely employed, digital innovations could help to alleviate the existential threats currently faced by humanity, including climate change, pandemic disease, and environmental degradation.

Many of these digital innovations rely on the fusion of multiple disciplines, particularly computer science, branches of engineering, social and cognitive psychology, and neuroscience. Major research efforts are underway which connect these fields, for example, in the study of brain-computer interfaces and multi-agent systems. Applications are already deployed in intelligent prosthetics, automated transport systems, and the hands-free use of computers (Vilela & Hochberg, 2020). Other innovations are transforming rehabilitation after brain injury and intersubjective communication (Brandman et al., 2017; Jiang et al., 2018). Overall, future digital technologies will be faster, more powerful, accurate, and connected. They will enable radically new forms of human-machine interaction and collaboration. Artificial agents will augment the full range of human thought, feeling, and action, at every level of personal and collective organization. The digital augmentation of humanity is underway.

Yet, as noted earlier, there are significant risks and dilemmas too. First, the potential benefits of digitalization are unevenly distributed. Major segments of humanity are being left behind or marginalized (The World Bank, 2016). For these groups, the digital divide is widening. If this continues, digitalization will amplify, rather than mitigate, socioeconomic deprivation, inequality, and injustice. Second, digitalization is vulnerable to manipulation by powerful interests. Owing to the technical reach and integration of digital networks, they could become tools of social control and coercion. Indeed, we already see examples of artificial systems being used to suppress, mislead, or demonize groups for ideological, political, and cultural reasons. Third, social biases and racial stereotypes easily infect artificial intelligence and machine learning, leading to new forms of discrimination and injustice. Fourth, human agents are often slow to learn and adapt, especially with respect to their fundamental beliefs and commitments. At every level, whether as individuals, groups, or collectives, people get stuck in their ways. Habits and routines are often hard to shift, and assumptions are resilient, which can be appropriate and welcome in some contexts but constraining in others. In any case, when it comes to information processing, humans are simply no match for artificial agents. Hence, humanity could struggle to absorb the effects of digitalization. Artificial agents might outrun or overwhelm the human capability for adaptive learning, resulting in unintended consequences and dysfunctional outcomes.

# **New Problematics**

Collectively, these trends pose new explanatory problematics, defined as fundamental and often contentious questions within a field of enquiry (Alvesson & Sandberg, 2011). In this case, the field of enquiry concerns the explanation of civilized humanity, conceived as communities of purposive, goal-directed agents. The new problematics which digital augmentation poses go to the heart of this field: how can human beings collaborate closely with artificial agents while remaining genuinely autonomous in reasoning, belief, and choice; relatedly, how can humans integrate digital augmentation into their subjective and intersubjective lives while preserving personal values, identity, commitments, and psychosocial coherence; how can digitally augmented institutions and organizations, conceived as collective agents, fully exploit artificial capabilities while avoiding extremes of digitalized docility, dependence, and domination; how can humanity ensure fair access to the benefits of digital augmentation and not allow them to perpetuate systemic deprivation, discrimination, and injustice; and finally, the most novel and controversial challenge: how will human and artificial agents learn to understand, trust, and respect each other, despite their different levels of capability and potentiality? This last question is controversial because it implies that artificial agents will exhibit judgment and empathy. It suggests that soon we will attribute a type of autonomous mind to artificial agents. Recent research in artificial intelligence and machine learning shows that these qualities are feasible and within reach (Mehta et al., 2019).

In fact, modernity has puzzled over similar questions since the European Enlightenment. Scholars have long debated the limits of human capability and potentiality, and how best to liberate autonomous reason and choice, in light of natural and social constraints (Pinker, 2018). Modernity therefore focuses on similar questions to those posed by digitalization. These modern problematics include: how can human agents collaborate with each other, in collective thought and action, while developing as autonomous persons; how can humans absorb change while preserving valued commitments and psychosocial coherence; how can societies develop effective institutions while avoiding excessive docility and domination of citizens; how can humanity ensure fair access to the benefits of modernity and not allow growth to perpetuate deprivation, discrimination, and injustice; and finally, a defining question for modernity: to what degree can and should human beings overcome their natural and acquired limits, to be more fully rational and empathic? This last question is also controversial. Some argue that limited capabilities and functional incompleteness are humanizing qualities, while others view them as flaws which can and should be overcome (Sen, 2009).

Therefore, digitalization leads us to problematize the underlying assumptions and concerns of modernity. First, digitalization radically expands intelligent processing capabilities, thus problematizing concepts of bounded rationality. Second, and relatedly, as intelligent machines edge closer to exhibiting a type of autonomous mind, they problematize the classic ontological distinction between human consciousness and material nature, and thus between mind and body. Third, digitalization supports the rapid composition and recomposition of agentic forms, allowing for dynamic modalities, thereby problematizing the traditional distinction between individuals, groups, and larger collectives. Fourth, digitalization enables adaptive commitments across multiple contexts and cultures, akin to digitally augmented cosmopolitanism. Each problematization has practical implications as well, because people think and act based on their core assumptions and commitments. If these are disrupted or refuted, people must respond. Evidence suggests that some will simply resist and remain wedded to priors, whereas others may feel overwhelmed by digitalization, abandon priors altogether, and surrender to digital determination. Neither of these extremes is an effective response. Finding the right balance will be critical. Much of this book is about that challenge.

# **About This Book**

The book began as a set of smaller projects exploring digitalization, each focused on a specific area of impact, especially regarding theories of problem-solving and organization. Some of the topic chapters reflect these origins. However, this approach proved too incremental and encumbered. Constraining assumptions were everywhere, from bounded rationality to the polarization of mind and nature, individuality versus collectivity, and interpretative understanding versus causal explanation, plus related distinctions between abstract and practical reason, ideal versus actual performance, and deductive versus inductive justification of belief. In summary, after a few years of struggling within such constraints, I decided that a different kind of project was required. The current book began to take shape.

Over time, I realized that the conceptual architecture of modernity was inadequate for the task. The effects of digitalization are too deep and novel, especially the dilemmas which arise from the combination of human and artificial capabilities and potentialities. In this regard, humans are holistic in their thinking, often myopic or nearsighted, relatively sluggish in processing, and insensitive to variance. By comparison, artificial agents are increasingly focused, farsighted, fast, and hypersensitive to minor fluctuations. Clearly, human and artificial agents possess different processing capabilities and potentialities. When they collaborate as augmented agents, therefore, the result could be extremely divergent or convergent processing. One agent might dominate the other, and the overall system will be convergent in human or artificial terms. Alternatively, the two types of agents might collaborate but also diverge and conflict. The combined system could be farsighted and nearsighted, fast and slow, complicated and simplified, hypersensitive and insensitive, all at the same time. However, these novel dynamics are difficult to capture using the traditional concepts and questions of modern human science. Indeed, as I argued previously, digitalization leads us to problematize many traditional assumptions and concerns. I therefore introduce fresh concepts and terminology to describe these novel phenomena, their related mechanisms, and dilemmas.

For some readers, the new concepts and terminology may be challenging. Such novelties require effortful reading, especially when used in original theorizing. However, I hope readers will agree that the uniqueness of the phenomena warrants this approach, and that to unpack and analyze the digital augmentation of humanity, we need to refresh our conceptual architecture. Significant features of digital augmentation are highly novel and not yet clearly conceptualized in the human sciences. For similar reasons, this work is largely conceptual and prospective. It looks forward, trying to shed light on an emerging terrain. It explores problematics and invites scholarly reflection about widely assumed concepts and models. Further empirical investigation and testing are necessary, of course, but first, the theoretical framework can be established.

## **Opportunities and Risks**

This book therefore examines the prospects for digitally augmented humanity. It problematizes prior assumptions and formulates new questions and dilemmas, although the book does not attempt to resolve these issues fully. Rather, it seeks to advance the future science of augmented humanity and agency. Reflecting this breadth and style, the book is multidisciplinary, prospective, and occasionally speculative, which has advantages and risks. In terms of advantages, prospective theorizing can bring clarity and organization to new phenomena. Furthermore, it allows us to combine insights from different fields, in ways which transcend existing knowledge and which cannot easily be demonstrated empirically. More specifically, the argument combines insights from social cognitive psychology, computer science and artificial intelligence, behavioral theories of problem-solving, theories of social choice and microeconomics, and insights from organization theory, philosophy, and history. Extensive referencing of the literature supports this breadth. Overall, therefore, the book looks forward toward a larger project of investigating, explaining, and managing the phenomena in question, while acknowledging that initial proposals will need to evolve as future investigations unfold.

In terms of risks, prospective theorizing is exactly that: prospective, and not yet fully elaborated or tested. Moreover, problematization is inherently broad and high-level, and hence this work does not delve into detail on every topic. This adds to the challenge and risk, but also the opportunity, for it allows us to formulate new explanatory frameworks. To do this, the author must be diligent and informed about the main features of the fields in question, and the reader should be willing to think about broad questions. In further defense of this approach, the book's central motivation deserves repeating, namely, that the speed, novelty, and impact of digital augmentation require new concepts and theorizing. Extraordinary digital innovations are rushing ahead, and exploratory leaps are required to keep up. A piecemeal treatment will not do justice to the full impact of this transformation. The phenomenon calls for creative thinking at an architectural level. It calls for pluralistic "theory-driven cumulative science" (Fiedler, 2017, p. 46). My work embraces this challenge and keeps it front of mind. As always, the reader will judge whether the potential advantages outweigh and justify the risks.

# **Reading This Book**

As noted previously, this book views humanity in terms of purposive agency and then examines how digitalization enables augmented agentic form and functioning, conceived as close human-machine collaboration. Chapters examine the implications for a range of domains, from problem-solving to the future of human science. Therefore, the book is broad in scope and intended for a wide readership, embracing all the human and digital sciences. For this reason, some readers may find parts of the argument unfamiliar and technical. However, no formal or unusual methods are employed, and I expect all readers can understand what the book seeks to convey. The figures and diagrams aim to clarify the processes of digital augmentation. Each is accompanied by a full narrative explanation as well. These elements are fundamental to the work and appear throughout. I therefore encourage readers to embrace the purpose, be ready to adopt new concepts and terms, and study the figures which illustrate novel processes and mechanisms. Hopefully, readers will agree that the effort is worthwhile and that the argument lays the groundwork for further research into these phenomena.

Regarding specific chapters, it is important to start with the first two. Chapter 1 sets out the broad topic and discusses how to model the digital augmentation of humanity. Chapter 2 then identifies major historical patterns and highlights the role of technological innovation in assisting agentic capability and potentiality and, critically, the role of digital technologies in this regard. Subsequent chapters examine the implications of digital augmentation for the following domains: agentic modality, problem-solving, empathy with other minds, self-regulation, evaluation of performance, learning, self-generation, and finally the science of digitally augmented agency.

Madrid, Spain Peter T. Bryant

# **References**


Sen, A. (2009). *The idea of justice*. Harvard University Press.


# **Acknowledgments**

In writing this book, I received much support and encouragement. The project began while I was employed at IE University in Madrid, Spain. Being an innovative and entrepreneurial institution, IE provided a supportive and conducive environment. I also hold a research fund there which supported the work, including the welcome opportunity to publish it open access. Throughout the process, I received invaluable advice and feedback from colleagues and friends at my own school and elsewhere. Special thanks to Richard Bryant, Pablo Garrido, Dan Lerner, Luigi Marengo, Willie Ocasio, Simone Santoni, and especially to Giancarlo Pastor, who provided exceptional support and insight during a difficult year of global pandemic, when much of this book was written. Sincere thanks also to colleagues at academic conferences over recent years and to the very professional editorial and production teams at Palgrave Macmillan.

# **Contents**




### **Index**

# **List of Figures**




# **List of Tables**


# **1 Modeling Augmented Humanity**

As intelligent sociable agents, human beings think and act in autonomous and collaborative ways, finding fulfillment in communities of shared meaning and purpose. Without such qualities, culture and civilization would be impoverished, in fact, barely possible. Thus conceived, being and remaining agentic matter greatly. Individuals, groups, and collectives dedicate significant effort and resources to furthering these ends. Technologies of various kinds often assist them. Many institutions also exist for these purposes: to foster human development, facilitate cooperation, and grow collective endowments. Societies therefore organize to sustain and develop their members. Agentic capabilities and potentialities improve, and humanity can prosper. Over recent centuries, especially, this has led to major advances in health, education, productivity, and empowerment.

Yet positive outcomes are not guaranteed. History teaches that natural and human disasters are never far away and often lead to unreasonable and inhumane behavior. Indeed, global threats loom today, including climate change, pandemic disease, and degradation of the environment. In addition, human malice and injustice can occur anywhere at any time, and they often do. Furthermore, resources and opportunities remain scarce for many, severely restricting their potential to develop and flourish. At the same time, capability and potentiality are unevenly distributed. Humans are limited by nature and nurture, especially regarding the capabilities required for intelligent thought and action. People gather and process information imperfectly, often in myopic or biased ways, then reason and act poorly, falling short of preferred outcomes and failing to learn. Not surprisingly, therefore, it takes time and effort to grow capabilities and potentialities, especially for purposive, goal-directed action. It is the work of a lifetime, to be fulfilled as an autonomous, intelligent, and efficacious human being. And the work of history, to achieve such fulfillment on a social scale.

# **Historical Patterns**

Notwithstanding these challenges, capabilities and potentialities develop over time, owing to improved nurture, resources, opportunities, and learning. Major drivers also include social and technological innovation (Lenski, 2015). In fact, since the earliest periods of civilization, humans have crafted tools to complement their natural capabilities. They also pondered the stars and seasons and developed explanatory models which made sense of the world and life within it, where models, in this context, are defined as simplified representations of states or processes, showing their core components and relations (see Johnson-Laird, 2010; Simon, 1979). Granted, in the premodern period, models of the world and being human often relied on myth and superstition, but they captured broad patterns nonetheless and codified the rhythms of nature and the fortunes of fate. Here, following others, I define premodern as the period before the modern era of European Enlightenment and industrialization (e.g., Crone, 2015; Smith, 2008). Importantly, the technological assistance of humanity began in premodernity, albeit in a primitive fashion. Over time, capabilities and technologies continued to evolve and diffuse. Despite episodic disruption and setbacks, civilized humanity has developed, typically in a path-dependent fashion (Castaldi & Dosi, 2006). For Western civilization, this path traces back to ancient Greece and Rome, which in turn drew deeply from earlier, Eastern civilizations. Their cumulative legacy survives today, in many of the languages, concepts, and models which still enrich culture and thought.

Ancient learning enjoyed a renaissance in parts of the Mediterranean world during the fifteenth century CE. Artists, scholars, and architects drew insight and inspiration from the ancients. Another important inflection point was the European Enlightenment of the seventeenth and eighteenth centuries. Over time, intelligent capabilities grew, initially among privileged members of society. After much historic struggle and social change, these capabilities diffused and deepened, to become the shared endowment of modernity. Here again, technological innovation was crucial. From the first telescopes and microscopes to the printing press and early adding machines, then to the steam, electronic, and computer ages, technological innovation has expanded agentic capability and potentiality. Adam Smith (1950, p. 17) noted this type of impact when he wrote of "the invention of a great number of machines which facilitate and abridge labour, and enable one man to do the work of many." In parallel, new psychological and social models emerged, which assume that human beings have the potential to learn and develop as intelligent agents (Pinker, 2018). The modern challenge thus became how to grow agentic capabilities and potentialities, so that more persons can enjoy these benefits and flourish. Political and cultural struggles also ensued, as groups fought to control the future and either to defend or to dismantle the vestiges of premodernity.

While the preceding historical account is reasonably grounded, it clearly simplifies. Almost by definition, periods of civilization span widely in time and culture. Any detailed history will be notoriously complex and irregular. There are few consistent patterns, and even those which can be observed should be treated as contingent (Geertz, 2001). For the same reasons, totalizing conceptions often over-simplify. As Bruno Latour (2017) explains, modern concepts of the globe and humanity itself assume unified categories which obscure fundamental distinctions. Hence, we must ask: is it possible to identify patterns of civilized humanity over time? Previous attempts have often been misguided and lacked validity. Most transparently failed because they sought to generalize from one or other historical context, and then extrapolated from temporal contingency to universality. Arguably, the model of history offered by Karl Marx exhibits this flaw. Noting this common failing, it can be argued that all knowledge of such phenomena is contextual. Few, if any, patterns transcend historical contingency. To assume otherwise could be misleading and potentially dangerous, especially if it supports ideologies which deny the inherent diversity of human aspiration and experience. Nevertheless, if we respect caution and openly acknowledge contextual contingency, it is still possible to generalize, at least at a high level.

Given these caveats, scholars observe that civilized humanity exhibits broad patterns of behavior and striving over successive historical periods (Bandura, 2007). Many of these patterns are anthropological and ecological, rather than historical in a detailed narrative sense. Evidence shows that civilized humanity has always been purposive and self-generating, creative and inventive, hierarchical and communal, settled as well as exploratory, competitive and cooperative. In short, civilized humanity is deeply agentic. Granted, these patterns are broad, but they are consistent nonetheless. Scholars in numerous fields recognize them (e.g., Braudel & Mayne, 1995; Markus & Kitayama, 2010; Wilson, 2012). In any case, as in all theoretical modeling, it is necessary to simplify, to focus on the main topics of interest. All models and theories must be selective. Debate is then about what to select and simplify, how, and why. Whether such models are illuminating and explanatory is determined by application and testing. Science always progresses in this fashion. The current work will focus on the broad effects of digital augmentation on humanity, viewed as cultural communities of purposive agents.

# **Dilemmas of Technological Assistance**

Technologies assist and complement human capabilities, compensating for weaknesses and helping to overcome limits. More specifically, technological assistance addresses the following needs. First, humans are limited by their physiological dependencies, whereas technologies can function independently of such constraints, for example, by operating in extreme, hostile environments. Second, humans are frequently proximal and nearsighted, whereas technologies are distal and farsighted. Technologies therefore extend the range and scope of functioning, as when telescopes gather information from distant galaxies. Third, humans are often relatively slow and sluggish, compared to technologies, which can be fast and hyperactive. Technologies therefore accelerate functioning. Fourth, humans are frequently insensitive to variance and detail, whereas technologies can be very precise and hypersensitive. In this fashion, technologies improve the accuracy and detail of functioning. Fifth, humans are irregular bundles of sensory, emotional, and cognitive functions, whereas most technologies are highly focused and coordinated. Hence, technologies enhance the reliability and accuracy of specific functions, for example, in robot-controlled manufacturing. And sixth, humans are distinguished as separate persons, groups, and collectives, while technologies can be tightly compressed, without significant boundaries or layers between them. Technologies thereby enhance functional coordination and control, exemplified by automated warehouses and factories.

All six types of extension reflect the fundamental combinatorics of technologically assisted humanity, that is, the combination of human and technological capabilities in agentic functioning. Over longer periods, history exhibits a process of punctuated equilibrium in these respects. During these punctuations, the technological assistance of agency achieves significantly greater scale, speed, and sophistication. Not surprisingly, transformations of this kind are consistent foci of study (Spar, 2020). For instance, studies investigate how modern mechanization combines technologies and humans in social and economic activity (Leonardi & Barley, 2010). Early sites were in cotton mills and steam-powered railways. Other technologies infused social and domestic life, combining humans and machines in systems of communication and entertainment. More recently, human-machine combinatorics reach into everyday thought and action, through smartphones, digital assistants, and the ubiquitous internet. Once again, the technological assistance of agency is transitioning to a new level of capability and potentiality. Major benefits include far greater productivity and connectivity.

Digitalization therefore continues the historic narrative of modernity, for good and ill, where digitalization is defined as the transformation of goal-directed processes which lead to action—that is, the transformation of agentic processes—through the application of digital technologies (Bandura, 2006). Thus defined, digitalization embraces a wide range of digital technologies and affects a wide range of agentic modalities and functional domains. Most notably, advanced digital technologies enable close collaboration between human and artificial agents as digitally augmented agents, also known as human-agent systems in computer science. New challenges thus emerge. On the one hand, advanced artificial agents are increasingly farsighted, fast, compressed, and sensitive to variation. On the other hand, humans are comparatively nearsighted, sluggish, layered, and insensitive to variance. Clearly, both agents possess complementary but different capabilities, and combining them will not be easy.

Digitalization therefore entails new opportunities, risks, and dilemmas for human-machine collaboration. One possible scenario is that artificial agents will overwhelm human beings and dominate their collaboration. The overall system would be convergent in artificial terms. Alternatively, persistent human myopia and bias could infect artificial agents, and digitalization would then amplify human limitations. Now the system would be convergent in human terms. In other situations, both types of agent may lack appropriate supervision and go to divergent extremes, where supervision in this context means to observe and monitor, then direct a process or action. In fact, we already see evidence of each type of distortion. Digitalization therefore constitutes a historic shift in agentic capability, potentiality, and risk. As in earlier periods of technological transformation, humanity will need new methods to supervise human-machine collaboration in a digitally augmented world. Analysis of these developments is a major purpose of this book. For this reason, too, it is important to distinguish the following types of agency, which are central to the argument:


digitalized and humanized, to significant degrees. Performances are collaborative achievements of human and artificial agents.

## **Supervisory Challenges**

Figures 1.1 and 1.2 illustrate the supervisory challenges just described, namely, how to combine and manage human and technological functioning in agentic action. The two figures depict complementary models. Both show the complexity of human functioning on the vertical axis and of technological functioning on the horizontal axis. The models also show the limits of overall supervisory capabilities, depicted by the curved lines, with L1 being the natural human baseline, and increasing capabilities in L2 and L3 owing to technological assistance. In each model, the gap between these lines therefore depicts the variance in supervisory

**Fig. 1.1** Minor gap in supervisory capability

**Fig. 1.2** Major gap in supervisory capability

capabilities, while all capabilities reach asymptotes of maximum complexity for human and technological functioning.

Figure 1.1 depicts a minor gap between supervisory capabilities L1 and L2, meaning that technological assistance at L2 does not add much to the baseline at L1. The figure then depicts two systems of functioning. The first is defined by human functioning HA and technological functioning TA. Technological complexity is less in this case, while human functioning is more complex. For example, it could be a deliberate, intentional type of action which is modestly supported by technology, such as writing a letter using a pen. Assuming L1 as the natural human baseline, an agent requires a small increase in supervisory capabilities to complete this activity, and hence capabilities at level L2 are sufficient. Put simply, pens are simple tools, even if the written thoughts are complex. The second system is defined by human functioning HB and technological functioning TB. Now technological functioning is more complex, such as routine procedures which rely heavily on technologies, for example, riding in a carriage. Indeed, most people easily ride as passengers in carriages, although the carriage itself requires active supervision to maintain and control it. Once again, agents require modest supervision to complete this activity, and hence capabilities at level L2 are sufficient. In addition, Fig. 1.1 shows two segments labeled A and B, which are the functions beyond baseline supervisory capability L1. Both segments are relatively small, owing to the modest increase in supervisory complexity between L1 and L2. Put simply, it is relatively easy to supervise the use of pens and riding in carriages. In fact, many premodern activity systems were like this, owing to the relative simplicity of their technologies.

Next, Fig. 1.2 depicts a major gap between limits L1 and L3, meaning that technological assistance at L3 adds significantly to the baseline at L1, especially if L3 includes digital technologies. The model again depicts two systems of functioning. The first is defined by more complex human functioning HC and less complex technological functioning TC. For example, it could be a deliberate, intentional form of action which is supported by digital technology. Perhaps the writer now uses a word processor to compose a news article. The tool may be fairly easy to use, while the thoughts are intellectually complex. Hence, the activity requires a greater level of overall supervisory capability at level L3. Furthermore, segment C in the model in Fig. 1.2 is much larger than segment A in the model in Fig. 1.1. This means that more functionality lies beyond baseline capabilities, and the overall system requires more sophisticated supervision, which is true for writing using a computer, compared to using a pen.

The second system in the model in Fig. 1.2 is defined by less complex human functioning HD and more complex technological functioning TD. For example, it could be a routine activity which is automated by advanced digital technology. Perhaps the passenger now rides in an autonomous vehicle, rather than sitting in a carriage. In fact, we could map the spectrum of mobility systems along the horizontal axis, from less complex systems to the most advanced artificial agents. In all cases, the overall activity system requires greater supervisory capabilities at L3. In addition, segment D is much larger than segment B in Fig. 1.1. Far more functionality lies beyond baseline capabilities. The supervisory challenges are high. In terms of the example just given, it requires a significant advance in capabilities to supervise human engagement with autonomous vehicles.

Given these examples, Fig. 1.2 illustrates the supervisory challenge in highly digitalized contexts. Segments C and D show the scale of the challenge, as significant functions lie beyond baseline supervisory capabilities. These segments also illustrate the functional losses which may occur if supervisory capabilities fall below level L3. Put simply, inadequate supervision will lead to poorly coordinated action. For example, technological processes might outrun and overtake human inputs, and thereby relegate humans to a minor role in some activity. Automated vehicles could override or ignore human wishes. Alternatively, human processes may import myopias and biases, and artificial agents then reinforce and amplify human limitations. Perfectly written news articles can be racially biased and discriminatory. In both scenarios, poor supervision skews collaborative processing and leads to functional losses. Strong collaborative supervision will therefore be required, involving human and artificial agents, to ensure that both types of agent work effectively together with mutual empathy and trust.
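The logic of the two figures can be sketched computationally. The following is a minimal illustration, not a model from the book: it treats each supervisory limit as a simple budget on combined human and technological complexity, whereas the figures draw curved limits with asymptotes. All numeric values and the linear form are my assumptions for illustration only.

```python
# Illustrative sketch of the supervisory-gap idea in Figs. 1.1 and 1.2.
# A supervisory limit L is modeled as a budget on the combined complexity
# of human and technological functioning (an assumed, simplified form).

def within_limit(human: float, tech: float, limit: float) -> bool:
    """True if an activity system (human, tech complexity) is supervisable."""
    return human + tech <= limit

# Assumed limits: natural baseline, modest assistance, digital assistance
L1, L2, L3 = 1.0, 1.3, 3.0

pen_writing = (0.9, 0.2)      # complex thought, simple tool (like HA, TA)
autonomous_ride = (0.3, 2.4)  # simple human role, complex technology (HD, TD)

assert within_limit(*pen_writing, L2)          # minor gap: L2 suffices
assert not within_limit(*autonomous_ride, L2)  # major gap: L2 is inadequate
assert within_limit(*autonomous_ride, L3)      # digital assistance at L3 covers it
```

The point of the sketch is only the comparison: activities whose combined complexity exceeds a lower limit fall into segments like C and D, and require the higher supervisory capability.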

# **Period of Digitalization**

Digitalization therefore continues the historical narrative of technologically assisted human agency. Moreover, advanced digital systems are intelligent, self-generative agents in their own right (Norvig & Russell, 2010), where self-generation in this context means to produce or reproduce oneself without external guidance and support. Like human beings, artificial agents are situated in the world, sensory, perceptive, calculating, and self-regulating in goal pursuit. Artificial agents also gather and process information to identify and solve problems, thereby generating knowledge and action plans. Also like humans, artificial agents are autonomous to variable degrees. In fact, the most advanced artificial agents are fully self-generating and self-supervising, meaning they generate and supervise themselves without external guidance or support. Finally, artificial and human agents are equally connected in collaborative relationships and networks.

Given these developments, human and artificial agents increasingly collaborate with each other as augmented agents. Digitalization connects them, and their combinatorics are deepening. Human and artificial agents are becoming jointly agentic, at behavioral, organizational, and even neurological levels (Kozma et al., 2018; Murray et al., 2020). So much so, that artificial and human agents will soon be indistinguishable in significant ways, approaching what some refer to as the singularity of human and artificial intelligence (Eden et al., 2015). If well supervised, the collaboration is reciprocal and productive: artificial agents digitalize collaborative functioning, and human agents civilize their joint functioning (Yuste et al., 2017). In these respects, digitalization penetrates far deeper into human experience, compared to earlier phases of technological innovation. As Bandura (2006, p. 175) writes about the digital revolution, "These transformative changes are placing a premium on the exercise of agency to shape personal destinies and the national life of societies."

More specifically, digitalization is augmenting the sensory-perceptive, cognitive-affective, behavioral-performative, and evaluative-adaptive processes which mediate human agency and personality (Mischel, 2004). In fact, artificial agents are being developed which imitate these features of human functioning. Enabling technologies will include artificial neural networks, quantum and cognitive computing, wearable computers, brain-machine engineering, intelligent sensors, and robotics. Smart digital assistants will also proliferate, reaching beyond smartphones to a wide range of digitally augmented interactions. These agents will deploy additional innovations, such as artificial personality and empathy (Kozma et al., 2018). Powered by such technologies, augmented agents will learn and act in far more expansive and effective ways. Consequently, a new type of agentic modality is emerging from digitalized human-machine collaboration.

Furthermore, assisted by digital technologies, people can more rapidly shift their attentional and calculative resources, updating memory, cognitive schema, and models of reasoning. In these respects, digital augmentation disrupts some traditional beliefs about the natural and human worlds. In particular, given the massive growth of artificial intelligence and machine learning, the classic distinction between conscious mind and material nature appears unsustainable, as artificial agents become functionally sentient and empathic. Similarly, human collaboration with artificial personalities will challenge assumptions about privacy and the opacity of the self, because augmented agents will interpret and imitate empathy and other expressions of personality (Bandura, 2015). Therefore, a number of widely assumed distinctions appear increasingly contingent, and better viewed as options along a continuum, rather than as invariant categories (Klein et al., 2020). As Herbert Simon (1996), one of the founders of modern computer science and behavioral theory, observed, scientific insight often transforms assumed states into dynamic processes. In this case, insight transforms assumed material and conscious states into dynamic processes.

At the same time, there are grounds for concern and caution. To begin with, digitalization might enable new forms of oppression, superstition, and discrimination. Indeed, we already see evidence of these negative effects. For example, some institutional actors leverage digital technologies to dominate and oppress populations, for ideological, political, or commercial gain (Levy, 2018). Others use digital systems to restrict and distort information, spreading deliberate falsehood, superstition, and bias, again to serve self-interest. In addition, digitalization could be used to prolong the unsustainable overexploitation of the natural world. Its benefits may also be unfairly distributed, privileging those who already possess capabilities and resources. Digitalization would then reinforce meritocratic privilege and undermine commitment to the common good (see Sandel, 2020). If this happens, the "digital divide" will continue to widen, exacerbating inequality across a range of social indicators, from mobility to education, health, political influence, and income. This book examines some of the underlying mechanisms which drive these effects.

# **Adaptive Challenges**

Technological transitions of this scale are often fraught. They demand changes to fundamental beliefs and behaviors which are firmly encoded in culture and collective mind. Amending them is not easy. Nor should it be. Such beliefs and behaviors are typically contested and tested before encoding occurs, and the results are worthy of respect. Adding to the overall resilience of these systems, mental plasticity often declines with age, and most people adapt more slowly over time. Youthful curiosity and questioning give way to adult certainty and habit. Older institutions and organizations exhibit comparable tendencies. Here too, though, sluggish adaptation is sometimes advantageous. It may preserve evolutionary fitness in the face of temporary perturbation. In fact, without adequately stable ecologies, populations, and behaviors, biological and social order would neither evolve nor persist (Mayr, 2002). For this reason, incessant adaptation can be self-defeating or an early sign of impending ecological collapse.

Digital augmentation simultaneously compounds and disrupts this dynamic. Compounding occurs because digital augmentation might lead to excessive adaptation and the unintended erosion of ecological stability and agentic fitness. In fact, without adequate supervision and constraint, psychosocial coherence could be at risk. At the same time, the sheer speed and power of these technologies can be disruptive. Many human systems are not designed for rapid change and might fracture under pressure. Furthermore, even if digitalization improves adaptive fitness, in doing so, it might shift the locus of control away from human agents, toward artificial sources. Hence, as artificial agents become more capable and ubiquitous, humanity must learn how to supervise its participation in augmented agency, while artificial agents must learn to incorporate human values, interests, and commitments, where commitment, in this context, means being dedicated, feeling obligated and bound to some value, belief, or pattern of action (Sen, 1985). Put simply, human agents need to digitalize, and artificial agents need to humanize. Many benefits are possible if digital augmentation enriches agentic capability and potentiality. If poorly supervised, however, artificial and human agents might diverge and conflict, even as they seek to collaborate. Or one agent may dominate the other, and they will converge excessively. Augmented humanity needs to understand and manage the resulting dilemmas.

## **New Problematics**

In fact, digital augmentation problematizes modern assumptions about human capability and potentiality, where problematization is defined as raising new questions about the fundamental concepts, beliefs, and models of a field of enquiry (Alvesson & Sandberg, 2011). Thus defined, problematization looks beyond the refinement of existing theory. It is more than critique. It questions deeply held assumptions and invites the reformation of enquiry. For the same reason, problematization does not entail a detailed review of all prior work. Rather, we need to identify key concepts, assumptions, and models and then apply fresh thinking, all the while reflecting on the novel phenomena and puzzles which prompt this process. My argument adopts such an approach. It problematizes modernity's core assumptions about human agentic capability and potentiality and examines the emerging problematics of digitally augmented humanity.

In short, modernity assumes that human agents are capable but limited, and need to overcome numerous constraints to develop and flourish. As the preface to this work also states, modernity therefore focuses on the following questions: how can human agents collaborate with each other, in collective thought and action, while developing as autonomous persons; how can humans absorb change, while preserving value commitments and psychosocial coherence; how can societies develop stronger institutions and organizations, while avoiding the risks of excessive docility, determinism, and domination; how can humanity ensure fair access to the benefits of modernity and not allow growth to perpetuate discrimination, deprivation, and injustice; and finally, a defining challenge of modernity: to what degree can and should human beings overcome their natural limits, to be more fully rational, empathic, and fulfilled (Giddens, 2013). As Kant (1964, p. 131) wrote regarding moral imperatives, we strive "to comprehend the limits of comprehensibility." Continuing this tradition, contemporary scholars investigate the limits of human understanding and how to transcend them, hoping to increase agentic capability and potential while balancing individual and collective priorities.

However, owing to digitalization, capabilities are expanding rapidly. Humans are potentially less limited, in many respects. Digitalization therefore leads us to problematize the modern assumption that human agency is inherently limited. New questions and problematics emerge instead. I also list these in the preface and repeat them here: how can human beings collaborate closely with artificial agents, while remaining genuinely autonomous in reasoning, belief, and choice; relatedly, how can humans integrate digital augmentation into their subjective and inter-subjective lives, while preserving personal identities, commitments, and psychosocial coherence; how can digitally augmented institutions and organizations, conceived as collective agents, fully exploit artificial capabilities, while avoiding extremes of digitalized docility, dependence, and determinism; how can humanity ensure fair access to the benefits of digital augmentation and not allow them to perpetuate systemic discrimination, deprivation, and injustice; and finally, the most novel and controversial challenge: how will human and artificial agents learn to understand, trust, and respect each other, despite their different levels of capability and potentiality. This is controversial because it implies that artificial agents will exhibit autonomous judgment and empathy. It assumes that sometime soon, we will attribute intentional agency to artificial agents (Ventura, 2019; Windridge, 2017).

## **1.1 Theories of Agency**

Albert Bandura (2001) is a towering figure in the psychology of human agency, both individual and collective. His social cognitive theories explain how the capacity for self-regulated, self-efficacious action is the hallmark of human agency, as well as a prerequisite for human self-generation and flourishing. In this respect, Bandura epitomizes the modern perspective on human agency: despite their natural limitations, people are capable of self-regulated, efficacious thought and action. They sense conditions in the world, identify and resolve problems, and pursue purposive goals. Human potential is thereby realized as people develop, engage in purposive action, and learn. They also mature as reflexive beings, acquiring the capability to monitor and manage their own thoughts and actions, and ultimately to self-generate a life course. Thus empowered and confident, people find fulfillment and flourish. In modernity, being truly human is to be freely and fully agentic.

## **Persons in Context**

For comparable reasons, Bandura (2015) is among the psychologists who advocate situated models of human agency, personality, and rationality, often termed the "persons in context" and "ecological" perspectives. Like other scholars in this community, Bandura views human agency in naturalistic terms, assuming agents are sensitive to context, inherently variable, adaptive, and self-generative (Bandura, 2015; Bar, 2021; Cervone, 2004; Fiedler, 2014; Kruglanski & Gigerenzer, 2011; Mischel & Shoda, 2010). Consequently, he and others reject static models of human personality and agency—for example, they reject fixed personality states and traits (e.g., McCrae & Costa, 1997)—and argue instead that persons are situated and adaptive. In the context of digitalization, conceiving of human agents in this way is important for two main reasons. First, if humans are complex, open, adaptive systems, situated and self-generative, they are well suited for collaboration with artificial agents which share the same characteristics. Second, digitalization amplifies the impact of contextual dynamics, because contexts change rapidly and penetrate more deeply into human experience. Being human in a digitalized world is to be human in augmented contexts.

For similar reasons, some psychologists explicitly compare human beings to artificial agents. They note that both types of agent can be modeled in terms of inputs, processes, and outputs (Shoda et al., 2002). In addition, both human and artificial agents sense the environment and gather information, which they process using intelligent capabilities, leading to goal-directed action and subsequent learning from performance. Humans and advanced artificial agents are both potentially self-generative as well. Therefore, human and artificial agents are deeply compatible, because both possess the same fundamental characteristics: (a) they are situated and responsive to context; (b) they use sensory perception of various kinds to sample the world and represent its problems; (c) both then apply intelligent processes to solve problems and develop action plans; (d) they self-regulate performances, including goal-directed action; (e) both evaluate performance processing and outputs, which results in learning, depending on sensitivity to variance; (f) both are self-generative and can direct their own becoming; and (g) they do all this as separate individual agents or within larger cooperative groups and networks.
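The shared structure described in points (a) through (g) can be sketched as a simple sense-plan-act-evaluate loop. The following is my illustration, not a model proposed by the cited psychologists; the class name, the corrective action rule, and the learning update are all assumptions chosen for brevity.

```python
# A minimal, hypothetical agent loop illustrating characteristics (a)-(g):
# a situated agent senses the world, acts toward a goal, evaluates the
# outcome, and adjusts its own behavior (a crude feedback update).

class SituatedAgent:
    def __init__(self, goal: float):
        self.goal = goal     # goal-directed self-regulation (d)
        self.gain = 0.5      # a self-adjusted internal parameter (f)

    def sense(self, world: float) -> float:
        return world         # (b) sample the world (noise-free here)

    def act(self, observation: float) -> float:
        # (c, d) plan and perform a corrective, goal-directed action
        return self.gain * (self.goal - observation)

    def learn(self, error: float) -> None:
        # (e) evaluate output and update processing from feedback
        self.gain += 0.1 * error

world = 0.0                  # (a) the context the agent is situated in
agent = SituatedAgent(goal=1.0)
for _ in range(20):
    observation = agent.sense(world)
    world += agent.act(observation)      # action changes the context
    agent.learn(agent.goal - world)      # learning from performance

assert abs(world - agent.goal) < 0.05    # the agent converges on its goal
```

The design choice worth noting is that sensing, acting, and learning are separate functions composed in a loop, mirroring the inputs-processes-outputs framing of Shoda et al. (2002); characteristic (g), operating within cooperative networks, would require multiple such agents sharing a context.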

Two of these characteristics are especially notable, namely generativity and contextuality. First, self-generation reflects a wider interest in generative processes broadly conceived. In numerous fields, scholars research how different kinds of systems originate, produce, and procreate form and function without external direction. Chomsky's (1957) theory of generative grammar is a perfect example. In it, he argues that semantic principles are genetically encoded, then embodied in neurological structures, and subsequently generate linguistic systems. In this way, the first principles of grammar help to generate language. Others propose generative models of social science, using agent-based and neurocognitive modeling (e.g., Epstein, 2014). Some economists exploit the same methods to explain the origins and dynamics of markets (e.g., Chen, 2017; Dieci et al., 2018). In personal life, generativity embraces the parenting of children and mentoring of the young, as well as curating identities and life stories (McAdams et al., 1997).

Second, contextuality is not limited to theories of human personality and agency. For example, philosophers also debate the role of context when considering the content of an agent's thoughts and actions. Not surprisingly, naturalistic and pragmatic philosophers are highly skeptical of ideal objectivity free from contextual influence. As Amartya Sen (1993) argues, perception, observation, belief, and value all arise in some context and positions within it. From this perspective, claims of objectivity, whether ontological, epistemological, or ethical, must be positioned within context. There is no view from nowhere (see Nagel, 1989) and no godlike position or point of view, *sub specie aeternitatis*, which John Rawls (2001) hoped for. Commitments of every kind imply context and position. Human agents are forever situated, embedded in social, cultural, and historical contexts, although each context and position can be well lit by focused attention, sound reasoning, and gracious empathy.

In fact, across many fields of enquiry, scholars are adopting similar approaches. Context and position matter. Examples are found in other areas of psychology and social theory (Giddens, 1984; Gifford & Hayes, 1999), in economics (Sen, 2004), as well as in linguistics and discourse analysis (Lasersohn, 2012; Silk, 2016). They all share a common motivation. Within each field of enquiry, there is growing awareness of contextual variance and complexity, plus skepticism about static methods and models. These concerns are amplified by the obvious increase in phenomenal novelty and dynamism, especially owing to digitalization and related global forces. At the same time, most scholars who embrace context and position also reject unfettered subjectivity and relativism. Rather, they problematize assumptions about universals and ideals and investigate systematic processes of variation and adaptation instead. All agentic states are then conceived as processes in context unless there is compelling evidence to the contrary. Debate then shifts to what common, underlying systems or structures might exist among different expressions of agency. To cite Simon (1996) once again, scientific insight often transforms the understanding of assumed states into dynamic processes.

# **Capability and Potentiality**

However, individual human agency is not simply an expression of personality in context. While agency assumes personality, it goes further (Bandura, 2006). The two constructs are not fully correlated. First, agency is forward looking, prospective, and aspirational, whereas personalities need not be. Second, agency is self-reactive, allowing agents to evaluate and respond to their own processes and performances. This function exploits outcome sensitivity and various feedback and feedforward mechanisms. Third, human agents are self-reflective, whereby they process information about their own states and performances and form reflexive beliefs and affects. Fourth, agency is potentially self-generative, meaning agents curate their own life path and way of becoming, although not all persons do so. To summarize, individual agency is an affordance of personality. The two are integrated, interdependent systems of human functioning. Personality and agency together allow individuals to be intentional, prospective, aspirational, self-reactive, self-reflective, and self-generative.

Collective agents exhibit comparable characteristics. Yet collective agency is not simply the aggregation of personalities (Bandura, 2006). Granted, collectives connect and combine different individuals, but at the same time, collective agency is more holistic and qualitatively different. It relies heavily on networks and culture, for example, which also help to define collective modality and action (DiMaggio, 1997; Markus & Kitayama, 2003). Nevertheless, collectives share many of the same functional qualities as individuals. Collective agency is also intentional, prospective, aspirational, self-reactive, self-reflective, and self-generative. But these are now properties of communities, organizations, institutions, and networks, rather than individuals or aggregations of them (March & Simon, 1993; Scott & Davis, 2007). In summary, collective agency is an affordance of cultural community. In this respect, cultural communities and collective agency are integrated, interdependent systems of human functioning as well, but at a more complex level of organization and modality.

## **Limits of Capabilities**

Irrespective of agentic modality and context, however, purely human capabilities are limited. Theories of agency therefore allow for approximate outcomes, trade-offs, and heuristics. They explain how individuals and collectives simplify and compromise, in order to reason and act within their limits (Gigerenzer, 2000; March & Simon, 1993). Sometimes, simplifying heuristics and trade-offs work well. But at other times, agents fall prey to noise, bias, and myopia, owing to the fallibility of such strategies (Fiedler & Wanke, 2009; Kahneman et al., 2016). Each major area of agentic processing is affected. First, sensory perception is constrained by limited attentional and observational capabilities, and agents easily misperceive the world and themselves, becoming myopic or clouded by noise. Second, cognitive-affective processes are limited by bounded calculative capabilities, which allow biases and myopias to distort problem-solving, decision-making, and preferential choice. Empathic capabilities are limited as well, meaning agents often struggle to interpret and understand other people and themselves. Third, behavioral-performative outputs are constrained by limited self-efficacy and self-regulatory capabilities. Hence, humans often perform poorly or inappropriately. And fourth, updates from feedforward and feedback are limited by insensitivity to variance, memory capacity, and procedural controls, meaning humans often fail to learn adequately and correctly. Feedforward updating is especially vulnerable, owing to its complexity and speed.

Importantly, these limitations suggest the contingency of many assumed criteria of reality, rationality, and justice (Bandura, 2006). For if purely human capabilities are inherently limited, then whatever is grounded in such capabilities will be limited as well. This is especially problematic, because ordinary categories and beliefs often acquire ideal status, as fundamental realities, necessary truths, and mandatory self-guides. They are idealized, meaning they are extrapolated to apply universally and forever, when in fact they do not (Appiah, 2017). Once again, each area of agentic processing is affected. First, the ordinary limits of sensory-perceptive capabilities often determine agents' fundamental ontological commitments and the core categories of reality. For this reason, most naturalistic and behavioral theorists argue that ontologies are contextual and variable to some degree, and hence open to revision (Gifford & Hayes, 1999; Quine, 1995). In contemporary philosophy, this approach supports "conceptual engineering," in which fundamental concepts of reality and value are constructed and reconstructed to fit the context (Burgess et al., 2020; Floridi, 2011).

Second, agents regularly hold idealized epistemological commitments—criteria of true belief and models of reasoning—which reflect the limits of their cognitive capabilities. Most naturalistic and behavioral theories view epistemic commitments as inherently adaptive and ecological (Kruglanski & Gigerenzer, 2011). One notable advocate of this position was the later Wittgenstein (2009), who illuminated how contingent "language games" become idealized in axiomatic models of reasoning. In fact, Wittgenstein exposed axiomatic models as a type of meta-game, which foreshadowed recent thinking about the evolution of logics (e.g., Foss et al., 2012; Thornton et al., 2012). Third, agents adopt ethical commitments. They form ideals of goodness and justice which reflect their limited relational and empathic capabilities, where empathic limits constrain how much people can appreciate about each other's values and commitments. Philosophers then debate the origin of such limits and the degree to which they might be overcome. Some view empathic incompleteness as intractable and humanizing, and central to sociality and culture (e.g., Sen, 2009), while others argue for empathic universals, at least regarding fundamental principles (e.g., Rawls, 2001).

## **Impact of Digitalization**

By transcending ordinary human capabilities, digital augmentation problematizes these questions and assumptions. First, digital innovations are rapidly improving the capacity to sense the environment, thereby heightening the perception of contextual variation and problems. Enabling technologies include the internet of things, intelligent sensing technologies, and fully autonomous agents. Second, digital augmentation massively increases information processing capabilities, transcending the assumed limits of human intelligence. For example, anyone with a contemporary smartphone can access enormous processing power at the touch of an icon. Third, digital augmentation enables new modes of action, which augment human performances. Digital innovations are transforming sophisticated domains of expert action, such as clinical medicine. Fourth, augmented agents can learn at unprecedented rates and degrees of precision, through rapid performance feedback, coupled with intense feedforward mechanisms (Pan et al., 2016; Pan & Yu, 2017).

Altogether, therefore, digitalization is radically augmenting agentic capabilities and potentialities, regarding sensory perception, cognitive-affective processing, behavior performance, evaluation of performance, and learning. In consequence, many traditional assumptions appear increasingly contingent and contextual, among them, conceptions of cognitive boundedness, distinctions between conscious mind and material nature, interpretive versus causal explanation, and abstract necessity versus practical contingency. Digitalization thus problematizes the conceptual architecture of modernity.

## **1.2 Metamodels of Agency**

Fully to conceptualize and analyze this shift, we need to work at a higher level of metamodels. By way of definition, metamodels capture the common features of a related set of potential models within a field (Behe et al., 2014; Caro et al., 2014). Put simply, metamodels define families of models. They are specified by hyperparameters which define the core categories, relations, and mechanisms shared by a set of models (Feurer & Hutter, 2019). Thus defined, metamodels are studied in numerous fields, even if they are not labeled as such, for example, in decision-making (He et al., 2020; Puranam et al., 2015) and Chomsky's (2014) work on linguistics. The concept is very well established in computer science: "Metamodels determine the set of valid models that can be defined with models' language and behavior in a particular domain" (Sangiovanni-Vincentelli et al., 2009, p. 55). In this book, the term refers to families, or related sets, of models of agency. Regarding agentic metamodels, hyperparameters define levels of organization or modality, activation mechanisms, and processing rates, such as the speed of self-regulation and learning, where rates, in this context, are defined as the number of processing cycles performed per unit of time. The reader will therefore encounter the terms "metamodel of agency" and "agentic metamodel" throughout this book. However, I will not offer an alternative model of agency at the detailed level. The book does not present an alternative theory of human psychology or agency as such. Nor will it propose a formal model of augmented agency or humanity based on a specific theory. Rather, my argument will focus at a higher level, on the features of agentic metamodels.
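To make the distinction concrete, the relation between a metamodel and its member models can be sketched in code. This is a hypothetical illustration, not drawn from the book: the hyperparameter names (`levels`, `activation`, `cycle_rate`) are invented stand-ins for the core categories, mechanisms, and processing rates described above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metamodel:
    """Hyperparameters shared by a whole family of models (invented names)."""
    levels: int          # levels of organization or modality
    activation: str      # activation mechanism
    cycle_rate: float    # processing cycles per unit of time

    def instantiate(self, parameters):
        """A concrete model = the shared hyperparameters + specific parameters."""
        return {"metamodel": self, "parameters": parameters}

family = Metamodel(levels=2, activation="feedback", cycle_rate=1.0)
model_a = family.instantiate({"sensitivity": 0.2})
model_b = family.instantiate({"sensitivity": 0.9})
# Two distinct models, one metamodel: they differ in parameters but share
# the same core categories, mechanisms, and rates.
```

The design point is that the hyperparameters live one level above the models: changing a parameter yields another member of the same family, while changing a hyperparameter yields a different family altogether.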

To illustrate, consider the field of psychological science. In this field, a popular metamodel assumes that human beings perceive the world and themselves, then process information and perform action, with varying degrees of intelligence and autonomy. Human beings are therefore a type of input-process-output system (Mischel, 2004). Given this broad metamodel, scholars then formulate specific models of psychological functioning, such as behaviorist, social cognitive, state, and trait models. Importantly, each type of model exemplifies the principles of the broad metamodel, though they vary in terms of the specific parameters for inputs, the internal mechanisms of processing, and performance outputs. Moreover, in fields like psychology, domain-specific metamodels are often predetermined, typically from the analysis of practice and experience. Indeed, whole industries evolve this way. Prescriptive metamodels guide pedagogical and clinical practice (Bandura, 2017). Most psychologists therefore assume fairly stable agentic metamodels which are deeply encoded in culture and community (Soria-Alcaraz et al., 2017). This means that metamodels adapt incrementally, if at all, under normal conditions. Indeed, institutional fields are labeled "fields" for this reason; and similarly, personality types are labeled "types." Both labels reflect stable metamodels in these fields of study (Mischel, 2004; Scott, 2014). Moreover, few practitioners question the normative metamodels of a field. Most are encoded during training, or imposed by regulation, and remain fixed.

The question remains, however, whether metamodels will help in the analysis of human-artificial augmented agency. Even if metamodeling is a suitable way to analyze both human and artificial agents, at a high level, can the two be integrated in this way? Perhaps the fundamental features of the mind and consciousness are too incommensurable with artificial agency and intelligence. Arguably, this was the case until recently. However, as noted earlier, recent technical advances suggest that metamodeling is now feasible in this regard. For example, advanced systems of artificial intelligence are increasingly capable of higher forms of cognitive functioning, including self-generation and self-supervision, associative and speculative reasoning, heuristic problem-solving and decision-making, as well as interpreting affect and empathy (Asada, 2015; Caro et al., 2014). Human and artificial agents are increasingly similar and thus amenable to integrative metamodeling, especially when they combine as augmented agents.

## **Compositive Methods**

Digitalized ecologies will be increasingly dynamic and responsive. Agency will be less reliant on stable metamodels and encoded templates. New metamodels, or families of models, will consistently emerge. In this way, augmented agents will be capable of rapid transformation. Humans and artificial agents will take on different, complementary roles, self-generating dynamically to fit changing contexts. Their metamodels will compose and recompose in real time, to fit changing conditions. In this respect, digital augmentation supports a more dynamic method, which can be described as "compositive" (cf. Latour, 2010), meaning that methods and models will compose, decompose, or recompose, to fit different contexts. From a design perspective, therefore, augmented agency will exhibit near composability, as well as near decomposability, like modular and hierarchical systems. Moreover, compositive methods are systematic and rigorous, the result of processing vast quantities of data. These methods are neither ad hoc nor idiosyncratic (e.g., Pappa et al., 2014; Wang et al., 2015).

Compositive methods are already employed in contemporary artificial intelligence. Systems maintain databases of processing modules and methods, and then select and combine these to fit the problem context. Metamodels and models are developed rapidly, contextually, in response to problems and situations. As noted previously, the most advanced software algorithms now compose their own metamodels—they are fully self-generative—requiring minimal (if any) supervision. Evolutionary deep learning systems and Generative Adversarial Networks (GANs) function in exactly this way (Shwartz-Ziv & Tishby, 2017). Via rapid inductive and abductive learning, these systems process massive volumes of information, identifying hitherto undetectable patterns, to compose new metamodels and models, often without any external supervision. Augmented agents will do likewise. They will leverage the power of digitalization to select and combine different techniques and procedures, and thereby compose metamodels and methods which best fit the context. Development of, and investigation by, augmented agents will invoke compositive methods.

Notably, the great economist, Friedrich Hayek (1952), argued for compositive methods in the social sciences, as an antidote to naïve reductionism, developing models and methods which best fit the problem at hand (Lewis, 2017). In these respects, Hayek's conception of "compositive" is comparable to recent technical developments. Going beyond Hayek's conception, however, digitalized metamodeling is agentic and ecological, more similar to Latour's (2010) concept of composition. It synthesizes both top-down and bottom-up processing, detailed and holistic, rapidly iterating, using prospective metamodeling and testing, until maximizing metamodel fit, and often achieving this in a fully unsupervised, self-generative fashion. In these respects, digitalized composition also problematizes traditional methodological distinctions: between qualitative and quantitative, methodological individualism and collectivism, and between reductionism and holism. Instead, compositive methods will blend these options and treat such polarities as the extremities of continua (Latour, 2011). I will return to these topics in later chapters, and especially in the final chapter which discusses the future science of digitally augmented agency.

## **1.3 Dimensions of Metamodeling**

Nevertheless, given the complexity of many problems and the processing they require, artificial agents must also simplify and approximate. This is done using algorithmic heuristics, which are shortcut means of specifying models and methods (Boussaid et al., 2013). At the most general level, hyperheuristics provide simplified means of specifying the broad hyperparameters of metamodels. Recall that metamodels are defined as related sets of potential models, and hyperparameters specify the broad features or attributes of metamodels, including their core categories, mechanisms, and processing rates (Feurer & Hutter, 2019). Hyperheuristics are shortcut means of specifying these properties. Metamodels are further distinguished by the supervision applied in their development. As noted earlier, they can be fully supervised, semi-supervised, or unsupervised, from artificial and human sources.

In supervised metamodeling, hyperparameters are fully determined by prior experience and learning (e.g., Amir-Ahmadi et al., 2018), whereas in semi-supervised systems, the initial hyperparameters are partially given, but provisional. Additional processing is required to tune and optimize them. Among the benefits of a semi-supervised approach is that metamodeling can exploit prior learning while responding to novelty, although semi-supervised metamodeling also poses risks, if it imports distorting biases and myopias (Horzyk, 2016). Alternatively, some artificial agents are fully unsupervised. Hyperparameters are developed by the agent itself, in a self-generative fashion. Metamodels are composed, rather than retrieved. Advanced artificial agents do this through rapid, iterative hyperparameter pruning, tuning, and optimization (Song et al., 2019).
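The three modes of supervision can be caricatured as three ways of fixing a single hyperparameter. Everything here is invented for illustration: a hypothetical loss function whose best value happens to be 0.1, a supervised value handed down fully formed, a semi-supervised value tuned from a provisional prior, and an unsupervised value discovered by open search.

```python
import random

random.seed(1)

# Hypothetical objective: suppose (unknown to the agent) the best value of a
# hyperparameter is 0.1; loss grows with distance from it.
def loss(value):
    return (value - 0.1) ** 2

# Supervised: the hyperparameter is fully given by prior experience.
supervised = 0.1

# Semi-supervised: a provisional prior value, locally tuned by small steps.
semi = 0.3                       # inherited prior (possibly biased)
for _ in range(100):
    candidate = semi + random.uniform(-0.02, 0.02)
    if loss(candidate) < loss(semi):
        semi = candidate

# Unsupervised: no prior at all; the value is discovered by open search.
unsupervised = min((random.uniform(0.0, 1.0) for _ in range(1000)), key=loss)
```

The trade-off in the text shows up directly: the supervised value is cheap but inherits whatever the prior got wrong, the semi-supervised value exploits the prior while adjusting to the objective, and the unsupervised search is free of priors but spends far more processing to discover a comparable value.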

As noted above, GANs are a recent innovation of this kind (Wang et al., 2017). In these systems, artificial agents compete in a collaborative game. A generator produces fake examples of some phenomenon, derived from pure noise. In parallel, a discriminator is trained on real examples of phenomena, such as photographs of human faces. If the system is fully unsupervised, these training data are unlabeled and unstructured. Using such data, the discriminator learns via multiple cycles of induction. Then the artificially generated, fake examples are passed to the discriminator, along with unclassified real examples, and the discriminator tries to distinguish real from fake. The competition ends in a Nash equilibrium, being the state in which neither the generator nor discriminator can do better against the other, but they rely on each other to achieve this maximal state, and both therefore benefit from stabilizing the system (Pan et al., 2019). In this fashion, the GAN produces a maximizing solution to the focal problem, for example, developing an artificial agent which can distinguish human faces, without needing any external supervision (Liong et al., 2020). And the metamodel is fully unsupervised and self-generative.
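The adversarial dynamic can be caricatured in a few lines. This is a drastic simplification, not a real GAN: the "discriminator" here just keeps running estimates of what real and fake examples look like, and the "generator" adjusts a single parameter using the discriminator's internal view (standing in for back-propagation through a trained discriminator). All numerical values are invented.

```python
import random

random.seed(0)

def real_sample():
    # Real data: clustered around 5.0 (a value the generator never sees directly).
    return random.gauss(5.0, 0.5)

theta = 0.0                      # the generator's single parameter
est_real, est_fake = 0.0, 0.0    # the discriminator's running class estimates

for _ in range(2000):
    real = real_sample()
    fake = theta + random.gauss(0.0, 0.5)      # generator output from noise
    # "Discriminator training": update its picture of each class.
    est_real += 0.05 * (real - est_real)
    est_fake += 0.05 * (fake - est_fake)
    # "Generator training": nudge theta so fakes drift toward what the
    # discriminator currently counts as real.
    theta += 0.05 * (est_real - fake)
```

At convergence the generator's parameter approaches the real mean and the discriminator's two estimates coincide, so neither side can improve against the other, loosely mirroring the equilibrium described above.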

## **Parameters and Variables**

Given initial hyperparameters and the metamodel they define, the next phase applies metaheuristics to select a model from the choice set (Feurer & Hutter, 2019). First, the agent will select specific parameters, about what counts as real versus fake, and what is exposed or hidden. Second, it will select activation functions, such as the type of action generation, or the outcome variance which triggers adaptive feedback. Third, there will be specific processing cycles and learning rates, for example, whether a particular type of feedback is slow and sluggish, or fast and hyperactive, and also about the level and intensity of feedforward processing. In purely human processes, such parameters tend to be encoded in memory and mental models, and supervised by metacognition (Bandura, 2017). Even ecological models of rationality are significantly supervised, by prescribing criteria of adaptation and association (Kruglanski & Gigerenzer, 2011). For example, the metaheuristic may encode "fast and frugal heuristics" as the most ecologically appropriate model for problem-solving (Gigerenzer & Goldstein, 1996). The system then employs this specific model to resolve a focal problem.
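One member of the "fast and frugal" family is the take-the-best heuristic (Gigerenzer & Goldstein, 1996): to compare two options, consult cues in order of cue validity and let the first discriminating cue decide, ignoring all the rest. A minimal sketch, with invented cue data:

```python
def take_the_best(a, b, cues):
    """Compare options a and b (dicts of cue -> 1/0) on cues ordered by
    validity; the first cue that discriminates decides. Returns 'a', 'b',
    or None (guess) if no cue discriminates."""
    for cue, _validity in sorted(cues, key=lambda c: c[1], reverse=True):
        if a.get(cue) != b.get(cue):
            return 'a' if a.get(cue) else 'b'
    return None

# Invented example: judging which of two cities is larger from binary cues.
cues = [('has_airport', 0.9), ('is_capital', 0.8), ('has_university', 0.6)]
city_a = {'has_airport': 1, 'is_capital': 0, 'has_university': 1}
city_b = {'has_airport': 1, 'is_capital': 1, 'has_university': 1}
# 'has_airport' ties, so 'is_capital' decides in favor of city_b; the third
# cue is never consulted, which is what makes the heuristic frugal.
```

The heuristic illustrates the supervision point in the text: the cue order is a prescribed criterion of adaptation, encoded in advance rather than discovered during the comparison itself.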

Next, given a specific model and its parameters, the process specifies the variables or expected patterns of variance. For example, in a naturalistic model of agency, variables might capture the expected degree and rate of variation in self-regulated behavior (the dependent variable), conditional on the strength or weakness of self-efficacy (the independent variable) (Bandura, 1997). In this case, the variables are supervised and predetermined. Though, if supervision is poor, the specification of variables can easily import distorting myopias and biases. Indeed, such biases often infect human and machine learning, resulting in poor choices and decision-making (Noble, 2018). Human and artificial agents must therefore learn how better to supervise the selection of variables, mindful of these risks. Otherwise, models could be overfitting (admitting too much noise and variance) or underfitting (excluding too much signal and variance). Both scenarios will increase functional losses (Kahneman et al., 2016).

Importantly, at each level of artificial processing, algorithmic heuristics help to manage the otherwise overwhelming complexity of data and processes. Indeed, much research into artificial intelligence and machine learning focuses on optimizing such hierarchies: using hyperheuristics to select the hyperparameters which define metamodel choice sets; then using metaheuristics to select the detailed model which fits best; and finally, the chosen model provides specific heuristics to solve a focal problem. The earlier example cited (a) the metamodel of associative, heuristic problem-solving, then (b) the model of "fast and frugal" heuristics, and (c) applying a specific heuristic, such as a simple stopping rule (Gigerenzer, 2000). These methods will be critical for the effectiveness and efficiency of digitalized problem-solving, and especially more complex problems. For the same reasons, these methods will be employed by digitally augmented agents.
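The three-level hierarchy just described can be sketched as nested selection. All names and rules here are invented for illustration: a hyperheuristic picks a metamodel family from broad problem cues, a metaheuristic picks a model within that family, and the chosen model supplies a concrete heuristic, here a simple stopping rule.

```python
def hyperheuristic(problem):
    """Level (a): pick a metamodel (a family of models) from broad cues."""
    if problem['time_pressure']:
        return 'heuristic_problem_solving'
    return 'exhaustive_search'

def metaheuristic(metamodel):
    """Level (b): pick a concrete model within the chosen family."""
    if metamodel == 'heuristic_problem_solving':
        return 'fast_and_frugal'
    return 'full_enumeration'

def solve(model, options, aspiration):
    """Level (c): the chosen model supplies a specific heuristic."""
    if model == 'fast_and_frugal':
        # Simple stopping rule: take the first option meeting the aspiration.
        for opt in options:
            if opt >= aspiration:
                return opt
        return max(options)      # fall back if nothing satisfices
    return max(options)          # exhaustive comparison

problem = {'time_pressure': True}
model = metaheuristic(hyperheuristic(problem))
choice = solve(model, [3, 7, 5, 9], aspiration=6)   # stops at 7, not 9
```

The point of the layering is efficiency: each level prunes the space the next level must search, so the final heuristic inspects only a handful of options instead of the full choice set.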

Furthermore, depending on the type and level of supervision, hyperparameters are more, or less, visible. Recall that some are predetermined, given by supervision, and hence immediately visible. Others may be hidden and unsupervised, and therefore wait to be discovered by further processing. In computer science, these questions loom large for the efficiency of artificial agents (Yao et al., 2017). On the one hand, the more prior supervision of hyperparameters, the less is hidden, and the more efficient and predictable metamodeling becomes. For example, in fully supervised machine learning, hyperparameters are predetermined and thus fully visible. However, as a result, there are fewer degrees of freedom: the greater the supervision, the less freedom in metamodeling. On the other hand, with less or no supervision, more is hidden. This entails greater degrees of freedom, to explore and self-generate. This is the case in unsupervised GANs, in which hyperparametric values are largely hidden and await discovery. However, the process of discovery consumes time and resources. To compensate, unsupervised systems also employ hyperheuristics in hyperparameter tuning and pruning, to optimize metamodel discovery and design. That is, they self-supervise their own objective function to maximize fit while also minimizing the processing load (Burke et al., 2019). Similar dynamics occur in the development and functioning of agentic metamodels. Human systems also need to balance metamodel fit and efficiency. But in these contexts, components can be hidden for other reasons, and especially owing to the limitations of human perception and consciousness.

## **The Role of Consciousness**

In premodern cultures, it was assumed that most fundamental principles are accessible to ordinary consciousness, even if they depended on divine revelation and ritual. This included the core categories of reality, truth, and value, about persons, the polis, and the cosmos (Rochat, 2009). However, hyperparameters of this kind are inevitably anthropomorphic, owing to their origins in ordinary experience. This was certainly true for premodernity. Fundamental categories of reality and truth were defined in human terms, that is, in terms which reflected ordinary consciousness. Hence, the gods were superhuman characters and the cosmos emerged through anthropomorphic or animistic stories of creation. By implication, premodern cultures offered few degrees of self-generative freedom in agentic form and function.

By contrast, during post-Enlightenment modernity, the fundamental properties of nature are largely inaccessible to ordinary consciousness. To discover them, one requires specialized technological assistance, or in other words, the methods of modern empirical science. Nevertheless, many continued to believe that the fundamental properties of mind and self are directly accessible to consciousness. Descartes (1998) exemplified this belief when he introspected and famously concluded, "I think therefore I am." The modern mind-body problem was born, and over time modernity bifurcated the sciences. On the one hand, the natural sciences demoted ordinary consciousness and relied on technological assistance to access the hidden, fundamental realities of nature. On the other hand, many human sciences continued relying on ordinary consciousness to access the fundamental properties of mind and self, with or without technological assistance (Thiel, 2011). Some disciplines continue to do so, believing that the hyperparameters of cognitive form and function are directly accessible to ordinary consciousness. Arguably, this is anthropomorphic and erroneous (see Chomsky, 2014).

In fact, owing to digitalization and neurophysiological discoveries, it is becoming abundantly clear that the fundamental realities of mind and self are opaque to ordinary consciousness (Carruthers, 2011). Specialized technologies are required here too. Introspection is a functional approximation, at best. From the perspective of digital augmentation, therefore, no fundamental categories and mechanisms—whether of physical nature or mental phenomena—are directly accessible to ordinary consciousness. Both require technological assistance to observe and analyze them. However, this does not entail the reduction of mind and self to material cause, or the digital dissolution of consciousness. Rather, as I will explain more fully in later chapters, it entails rethinking classic concepts of mind and self in terms of digitally augmented agency and self-generative systems.

Significant implications follow for the supervision of technologically assisted agency, and especially the supervision of digitally augmented agents. Most importantly, if ordinary consciousness is demoted and no longer a reliable source of fundamental reality and truth, then it will require deliberate supervision to ensure that ordinary human inputs are acknowledged and respected. They cannot, and should not, be either foundational or taken for granted. In fact, this problem is already a topic of research in computer science. Artificial agents are designed to recognize and accommodate the ordinary experience of mind and self, when they need to collaborate with humans in behavioral settings (Abbass, 2019), for example, when humans travel in autonomous vehicles. These situations require the systematic incorporation of human perceptions, values, and interests, despite their lack of precision and reliability. In this way, the supervision of augmented agency is humanized.

The earlier Figs. 1.1 and 1.2 illustrate these effects. Recall that these figures depict the core supervisory challenge of technologically assisted humanity, namely, how to combine and coordinate divergent levels of human and technological functionality. In fact, the same factors explain the shifting role of ordinary consciousness in explanatory thought. To illustrate, instead of interpreting these figures as general models of supervision, now assume they depict the supervision of explanatory thought. Next, recall that the small gap between levels of capability L1 and L2 in Fig. 1.1 illustrates modest technological assistance. We can therefore reinterpret this figure to depict forms of science with modest technological tools and techniques. Also, note that segments A and B are both relatively small. Much supervision is achievable using baseline capability L1 and hence accessible to consciousness. In fact, this was the dominant pattern in premodern science (Sorabji, 2006). It persists in some fields of human study, which still derive fundamental categories and mechanisms from ordinary consciousness and introspection.

By contrast, in Fig. 1.2, there is a larger gap between baseline capability L1 and the more technologically advanced capability L3. This figure therefore illustrates forms of science with significant technological input. Segments C and D are large, implying that much is inaccessible to ordinary consciousness and requires supervision at level L3. Modern natural science is certainly like this, as are the human sciences which no longer rely on ordinary consciousness and perception but employ specialized technologies instead. The science of digitally augmented agency will adopt the same approach. However, for this reason, future science confronts a major challenge. It will require strong collaborative supervision to avoid scenarios in which artificial agents overwhelm and ignore human inputs, and/or human supervision imports distorting myopias and bias into artificial intelligence and science.

## **Critical Dilemmas**

Metamodeling therefore plays an important role in agentic thought and action. In practical, behavioral domains, most metamodeling is automatic, implicit, encoded in memory, and heavily supervised by procedural routine and custom. Data are labeled and principles are clear. In fact, in ordinary human experience, metamodeling is only self-generative in the most creative and speculative domains. By contrast, metamodeling by artificial agents is increasingly self-generative and unsupervised. This distinction between human and artificial supervision of metamodeling has profound implications for their collaboration. Digitally augmented humanity must integrate both types of agent and accommodate their different capabilities and potentialities. On the one hand, human inputs will be strongly supervised and replicative, and ordinary human intuitions and priors will often persist. Humans also tend to be comparatively myopic, sluggish, layered, and insensitive to variance. On the other hand, artificial agents will tend toward increasingly unsupervised, self-generated inputs, independent of human intervention. In addition, artificial agents are comparatively farsighted, fast, compressed, and hypersensitive to variance.

Overall, collaborative supervision will therefore be daunting, even for the most developed augmented agents (Cheng et al., 2020). If supervision is poor, the result could be extremely convergent or divergent forms and functions. Regarding over-convergence, one type of agent might dominate the other, resulting in systems which are too digitalized or too humanized. Regarding over-divergence, human and artificial inputs will both be significant but conflicting. A number of divergent dilemmas are possible. First, the human and artificial components of augmented agents could diverge in terms of range, being both farsighted and nearsighted at the same time, looking too far and too near in sampling and search. Second, their processing rates might diverge, being rapid in some respects and sluggish in others, thus cycling both too fast and too slow. Third, artificial processes could be hypersensitive to variance, while human processes are relatively insensitive, thereby admitting too much and too little noise. And fourth, augmented agents might combine overly complex and simplified components, leading to poor integration and coordination. In all these scenarios, outcomes will easily become dysfunctional. The following chapters examine the origins and consequences of these dilemmas for key domains of agentic form and functioning. The final chapter looks forward to the future science of digitally augmented agency.

## **References**


Wang, K., Gou, C., Duan, Y., Lin, Y., Zheng, X., & Wang, F. (2017). Generative adversarial networks: Introduction and outlook. *IEEE/CAA Journal of Automatica Sinica, 4*(4), 588–598.

Wilson, E. O. (2012). *The social conquest of earth*. WW Norton & Company.

Windridge, D. (2017). Emergent intentionality in perception-action subsumption hierarchies. *Frontiers in Robotics and AI, 4*, 38.

Wittgenstein, L. (2009). *Philosophical investigations*. Wiley.



# **2 Historical Metamodels of Agency**

Different agentic metamodels correspond to major historical periods of civilized humanity. Technological innovation is an important, distinguishing feature of this narrative (Spar, 2020). The long-term trend is toward greater agentic capability and potentiality, assisted by more sophisticated technologies: from the static agentic metamodels and simple technologies of premodernity, to the more complex metamodels of modernity, assisted by mechanical and analogue technologies, and now to the increasingly dynamic, digitally augmented metamodels of the contemporary period. In summary, the evolution of agentic metamodels is itself a historical process. Each major period warrants detailed discussion.

## **2.1 Major Historical Periods**

In premodern cultures—prior to the modern period of Enlightenment and industrialization—human agency was popularly conceived in terms of divinely ordained narratives and fixed social orders. Purposive thought and action were supervised by patriarchal authority and supernatural beings, which helped to make sense of intractable fate. For most people, the order of things was not a human composition, but bestowed by agents from above and beyond (Geertz, 2001). Given these assumptions, explanation of the world was teleological and driven by final cause, while categories of reality were defined in terms of essential states and forms. Reflecting this relative lack of capability and potentiality, the dominant metamodels of agency assumed stability (Sorabji, 2006). Normality meant replicating an established, often divinely ordained metamodel of agency, and significant variance was viewed as a sign of weakness or deviance. Hence, trying to amend or circumvent the divine order was fraught with existential risk, as the ancient Greek tragedians understood (Williams, 1993). And for most, human overcoming was not explained by autonomous reasoning and action, but by good fortune and supernatural beneficence.

In like fashion, explanatory thought about agency in premodernity focused on collective norms, compliance with them, and the vicissitudes of fate. Rare opportunities for change consisted in altering position within the established order, shifting from point to point, like transposing scalar values in Euclidean space (Isin, 2002). Not surprisingly, therefore, premodernity did not privilege individual agency, but rather deference and compliance. Granted, some scholars explored alternative conceptions, but as the trial of Socrates illustrates, encouraging autonomous critical thought could be a crime punishable by death (Hackforth, 1972). Agentic performance was assessed in terms of adherence to norms, and the purpose of feedback was to refine replication and correct deviation. Thomas à Kempis (1952) epitomized this perspective in *The Imitation of Christ*, one of the most revered texts of the premodern Christian period. He explains that fulfillment comes from absorbing scripture and imitating the life of Christ, not from autonomous reasoning and choice. The latter perspective had to wait for Martin Luther, who nailed his 95 theses to the door almost a century later. In summary, premodern agentic metamodels prioritized replication, rather than adaptive change or original composition.

## **Replicative Metamodels of Agency**

Figure 2.1 depicts a premodern, replicative metamodel of agency. It builds on the cognitive-affective model of personality developed by Mischel and Shoda (1998). The figure assumes an input-process-output view of personality and the self, situated in context, but with relatively low variability and high stability. Hence, the metamodel illustrates a "persons in context" perspective, albeit with relatively low levels of contextual and system variation. Given these assumptions, the figure includes a sequence of major phases. First, it shows situational input stimuli (labeled SI), which include information about situations and problems in the world. Second, these stimulate sensory perception (SP), which transmits information to the next stage, cognitive-affective processing (CA). Third, the agent then processes information using cognitive-affective processing units (PU), which may include encodings, beliefs, affective states, goals, and values, including reference criteria and core commitments (RC), and self-regulatory schemes. Fourth, the agent generates action plans (AG), which are self-directed and self-regulated, to some degree. Fifth, these plans result in behavioral-performative outputs (BP). Sixth, such outputs trigger evaluation of performance (EP), as agents compare outcomes to aspirations and expectations, conditional on their degree of sensitivity to variance.

Te fgure also depicts three mechanisms of update encoding which fow from the evaluation of performance. Tese are shown by the arrowed

**Fig. 2.1** Replicative agentic metamodel

lines at the base of the figure. The first two are feedback mechanisms (FB) which flow from evaluation of performance (EP). One is an inter-cyclical mechanism which updates cognitive-affective processes; that is, updates occur at the completion of full cycles of processing. A second inter-cyclical mechanism updates input stimuli and the situational context itself. In addition, there is an intra-cyclical feedforward mechanism (FF), flowing from cognitive-affective processing, which updates the situation itself and input processing. It is intra-cyclical, in relative terms, because it occurs during and influences the ongoing process. Natural, human feedforward guidance is neurological and largely unconscious (Basso & Belardinelli, 2006), and was inherently part of human functioning, even in premodern times.

Furthermore, each phase is composed of functional components, illustrated by small circles. These are situational inputs (SI), cognitive-affective processing units (PU), including one dotted circle showing a referential criterion or commitment (RC), and behavior performances (BP) (see Mischel & Shoda, 1995). In the replicative metamodel, many components are invariant owing to stable contexts and behavioral norms. Arrows are dashed, to indicate relatively low variance and potentiality. Similarly, the figure depicts weak mechanisms of feedback and feedforward encoding, also shown by dashed lines. Premodern cultures did not exhibit widespread self-reflective variation, in this regard, but rather compliant imitation. Equally, learning was largely a process of memorization. As noted above, agents were guided by replication and imitative processes.
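The phased cycle just described can be caricatured as a simple processing loop. The sketch below is purely illustrative and not drawn from the source: all names and numbers are invented, and the phases are reduced to scalar arithmetic to highlight one property of the replicative metamodel, namely that weak feedback suppresses deviation from the norm rather than revising the norm itself.

```python
# Hypothetical sketch of the replicative cycle: SI -> SP -> CA -> AG -> BP -> EP,
# with feedback (FB) that corrects deviation so that behavior replicates the norm.

def replicative_cycle(stimulus, norm, drift):
    sp = stimulus              # SP: the situational input (SI) is perceived...
    plan = norm + drift        # CA + AG: ...but the plan is anchored on the norm, not the stimulus
    bp = plan                  # BP: behavioral-performative output
    ep = bp - norm             # EP: deviation from the normative reference criterion (RC)
    drift = drift - 0.9 * ep   # FB: feedback suppresses deviation, refining replication
    return bp, ep, drift

drift = 1.0                    # an initial departure from the norm
for _ in range(10):
    bp, ep, drift = replicative_cycle(stimulus=0.0, norm=1.0, drift=drift)
# Over repeated cycles, drift shrinks toward zero: behavior converges back on
# the norm, while the norm itself never adapts to the situation.
```

The key design choice, mirroring the text, is that the stimulus is perceived but barely shapes action: evaluation compares outcomes to the norm, so learning amounts to the minimization of individual variance.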

## **The Modern Period**

By contrast, during the modern period, agentic capabilities and potentialities expanded, supported by technological innovation and socioeconomic development. For good and ill, new sources of knowledge, production, and mobility disrupted the premodern socioeconomic order. The focus of agency shifted away from patriarchal order and imagined beings, toward reasoning persons in the natural world (Giddens, 1991). Identity and meaning were now contingent on learning and achievement, rather than compliant acceptance of inherited position. That said, docility within social collectives remained hugely important, but it now became a political and philosophical question, rather than one of theological dogma (e.g., Locke, 1967). In this fashion, modern criteria of reality, truth, and reasoning transcend premodern replication. Furthermore, the nature of human empathy and commitment frames modern thought about justice and ethics, rather than divine personalities and their pronouncements.

Consequently, the development of intelligent capabilities and the provision of opportunities for personal development and learning have been central to modern human science and theories of agency. Scholars examined the functioning of mind and personality, and the potential impact of social and cultural forces on human development (Pinker, 2010). Educational and clinical interventions built on such research. In parallel, the modern sciences became deeply dualistic. As noted in Chap. 1, for many scholars, human mind and consciousness were distinguished from material nature and the body. Hence, the human and natural sciences bifurcated into separate systems of study, with different methods of observation and analysis. Most critically, while the fundamental realities of mind and self may be directly accessible to ordinary consciousness and intuition, understanding of the natural world demands specialist technologies in controlled settings.

Reaction to Darwin's theory of evolution epitomizes this divide (see Mayr, 2002). On the one hand, the biological world was reconceived as a fully natural system, requiring scientific methods of observation and analysis, not driven by essentialism or teleology. On the other hand, however, many continued to believe that the fundamental features of mind and self were accessible to ordinary consciousness and irreducible to natural cause. Indeed, they feared that natural mechanisms would erode the ontological status of self-consciousness, and with it, various precepts of identity and faith. Subsequent debates reflect this dualism of modern thought: how to reconcile and integrate material cause and natural evolutionary mechanisms with human consciousness, intentional action, and the interpretation of meaning.

Modern agentic metamodels exhibit the same tension. Most are deeply dualistic and assume problematic relations between the material and conscious aspects of human experience, or in other words, between mind and body. Reflecting this dualism, the major problems of modern agency can be compared to opposing vectors in Cartesian space: material versus intentional cause; natural selection versus preferential choice; biological instinct versus autonomous will (Reill, 2005). These polarities combine, and often clash, in metamodels of evolutionary change and adaptive learning. Modern human science then seeks to resolve the resulting dilemmas. It asks, how do biological evolution and development interact with conscious mind and learning? Nevertheless, both mental and material processes involve change and development, albeit via different mechanisms. In consequence, the dominant agentic metamodels of modernity are broadly adaptive, rather than replicative.

## **Modern Adaptive Metamodels**

Modern adaptive metamodels of agency therefore assume autonomous, reasoned problem-solving, learning, and development, within natural and cultural worlds. Persons are generally described as complex, open, adaptive systems, embedded in context (Shoda et al., 2002). Figure 2.2 illustrates this type of adaptive metamodel. Once again, it includes the same broad phases: situational input stimuli (SI) trigger sensory perception (SP), which in turn stimulates cognitive-affective processes (CA) by interacting processing units (PU), including one dotted circle showing a referential criterion or commitment (RC). These processes lead to action generation (AG), resulting behavioral-performative outputs (BP), and the evaluation of performance (EP), which results in feedforward and feedback encoding (FF and FB, respectively). Importantly, the modern, adaptive metamodel assumes stronger capabilities and more advanced technologies, when compared to the premodern, replicative metamodel in Fig. 2.1.

The figure depicts other important changes, compared to the replicative metamodel. To begin with, the metamodel in Fig. 2.2 is more complex, shown by additional component circles in each segment. The system is also more connected and dynamic, shown by the greater number of arrows, which are now solid rather than dashed, indicating stronger capabilities and potentialities. Hence, Fig. 2.2 shows greater functional intensity overall. Particularly, cognitive-affective processing is a more complex


**Fig. 2.2** Adaptive agentic metamodel

system of interacting units. Some of these units—especially beliefs and values—will also serve as reference criteria and core commitments, which guide action and the evaluation of outcomes. One such criterion or commitment is depicted by a dotted circle (RC). In actual systems, there will be many.

In addition, the adaptive metamodel in Fig. 2.2 has stronger feedback and feedforward mechanisms, indicated by the solid arrowed lines at the base of the figure. These solid lines represent the fact that variance often triggers adaptive learning in modern contexts, as well as updates to the stimulus environment and the system itself. Modern agents therefore exhibit stronger reflexive functioning, compared to agents in premodernity. Inter-cyclical feedback (FB) is more active as well. Moreover, some updates will amend reference criteria and core commitments, although scholars continue to debate which reference criteria and commitments are adaptive, when, why, and to what degree.

## **The Period of Digital Augmentation**

In response to digitalization, agentic metamodels are transforming again. The technological assistance of agency is transitioning to a new level of scale, speed, and sophistication, thus driving a qualitative shift in agentic capability and potentiality. To begin with, recall that advanced artificial agents can compose, decompose, and recompose metamodels in a dynamic fashion, potentially self-generating without external supervision. In addition, they learn with extraordinary speed and precision, including from intra-cyclical feedforward updates. This latter capability is particularly important. Earlier metamodels assume modest feedforward mechanisms, either from unconscious instinct or the effortful guidance of complex processes over time, whereas digitally augmented agents will learn rapidly and constantly in this fashion. As mentioned previously, advanced artificial agents already do. When incorporated into human-artificial collaboration, therefore, rapid feedforward learning, plus the sheer power and reach of artificial agency, transforms agentic functioning. Augmented agency is intensely generative and exhibits near composability. Indeed, these capabilities distinguish digitalization from earlier periods of technologically assisted agency.

All aspects of processing are affected. Augmented agents can sense and sample the world more extensively, organize and process vast amounts of information very rapidly, represent and solve complex problems, and then design and direct responsive action. Augmented agents also achieve unprecedented speed and precision in learning, compared to purely human agents. To illustrate, consider the artificial agency required for autonomous mobility systems: constant real-time sensing of the environment, vehicles, and passengers; rapid complex problem-solving and empathetic interaction; accurate and coordinated action planning and control. Experts therefore apply a framework for autonomous vehicles known as "sense, plan, and act" (Shalev-Shwartz et al., 2017). These systems exemplify how generative, augmented agency constitutes a new metamodel of intelligent agency (see Caro et al., 2014). Its central characteristics include the following: close collaboration between human and artificial agents; high sensitivity to context; intelligent sampling, representation, and resolution of complex problems; compositive methods, based in artificial intelligence; high sensitivity to variance and rapid evaluation of performance; very rapid processing and learning rates; real-time monitoring, self-regulation, and adjustment; and often self-generative operation with minimal external supervision.

This historic transformation has meaningful topographical analogues in formal modeling. First, digital augmentation far transcends the state changes of the premodern period, which are comparable to scalar transpositions, shifting from point to point in Euclidean space. Second, digitalization also transcends the adaptive learning of modernity, which can be mapped as vector transitions in Cartesian space. Now third, and in contrast to both earlier periods, digital augmentation is about generative composition, which can be expressed as multi-vector tensor transformations, curving through Riemannian space (Kaul & Lall, 2019).
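The contrast between these three topographies can be rendered numerically. The sketch below is a loose illustration under invented values: a scalar shift stands for premodern state change, a vector addition for modern adaptive transition, and a rank-2 tensor (a matrix) for generative transformation. Genuinely Riemannian curvature cannot be shown in a few lines; the matrix map merely conveys the idea that whole directions, not just positions, are remapped.

```python
# Illustrative numbers only, not drawn from the source.

# Premodern state change: a scalar transposition, shifting point to point.
position = 2.0
position_after = position + 3.0            # 2.0 -> 5.0

# Modern adaptive learning: a vector transition in Cartesian space.
state = [1.0, 2.0]
delta = [0.5, -1.0]
state_after = [s + d for s, d in zip(state, delta)]   # [1.5, 1.0]

# Digital augmentation: a multi-vector (rank-2 tensor) transformation,
# reshaping the space itself rather than moving a single point.
T = [[0.0, -1.0],
     [1.0,  0.0]]                          # a simple linear map (a rotation)
transformed = [sum(T[i][j] * state[j] for j in range(2)) for i in range(2)]
# state [1.0, 2.0] is carried to [-2.0, 1.0]: directions are remapped, not
# merely displaced.
```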

## **Generative Metamodels**

Figure 2.3 illustrates this kind of generative metamodel of agency. Once again, the system integrates situational input stimuli (SI) which trigger sensory perception (SP), which in turn stimulates cognitive-affective processes (CA) consisting of interacting processing units (PU), again including one dotted circle showing a referential criterion or commitment (RC). As before, processing results in action generation (AG) leading to behavioral-performative outputs (BP), and the evaluation of performance processing and outcomes (EP), which often results in feedforward and feedback encoding (FF and FB, respectively). However, this metamodel is more complex and dynamic, compared to the metamodels depicted in Figs. 2.1 and 2.2. This is shown by the greater number of component


**Fig. 2.3** Generative agentic metamodel

circles within each segment. Some are now shaded as well, which indicates they are digitalized, transformed by the incorporation of digital technology. Importantly, the major phases, shown by large diamond shapes, now partially overlap, integrated by digitalized processes.

As Fig. 2.3 further shows, digitalization occurs throughout the metamodel. First, the generation of situational inputs is increasingly digitalized. For example, through the internet of things and intelligent sensors, situational contexts are increasingly digitalized and connected. In consequence of this development, the figure also shows that sensory perception overlaps with cognitive-affective processing. Both phases are digitally intermediated. Second, cognitive-affective processing is equally digitalized and collaborative, whereby artificial agents interact with the cognitive-affective system. In this regard, recent innovations include cognitive computing, wearable devices, and artificial personality (Mehta et al., 2019). Reflecting this development, the figure shows that action generation is digitalized and overlaps cognitive-affective processing and behavioral-performative outputs. That is, digitalized action generation intermediates cognitive-affective processing and behavior performance. Current examples include artificial assistants and expert decision support systems (Wykowska, 2021). Third, behavioral-performative outputs are themselves digitalized, for example, by the incorporation of artificial agents, collaborative robotics, and intelligent prosthetics (Vilela & Hochberg, 2020). Finally, the figure shows that evaluation of performance is partially digitalized too.

In summary, Fig. 2.3 shows how digitalization is augmenting and transforming all aspects of agency: the stimulus environment and perception of it; processes of reasoning and affect; the generation and performance of self-regulated action; the evaluation of performance; and the encoding of updates as learning. For this reason, the figure includes a stronger, digitalized stream of feedforward encoding (FF), flowing from cognitive-affective processing to update the situational context and processing itself. It is now depicted by a heavier shaded line, indicating that it is digitalized. Via this process, the system updates the context and process itself, intra-cyclically, in real time. Today's most advanced agents already function in this way. Newer technologies, including devices which integrate real-time biometric feedback and augmented reality, will accelerate this trend. Feedforward updating will be constant and ubiquitous. In consequence, augmented agents will be increasingly self-generative, far eclipsing the agentic potentiality of earlier periods. These will be distinguishing features of generative, agentic metamodels.

At the same time, however, owing to their complexity and dynamics, these metamodels will be more difficult to supervise. Artificial and human agents will interact in every phase and function. However, as the previous chapter explains, the two types of agent function in different ways. Much human processing is relatively myopic, sluggish, layered, and approximating, while artificial agents are increasingly fast, expansive, compressed, and precise. In consequence, many artificial feedforward mechanisms are inaccessible to human consciousness, and hence the two levels of processing could easily diverge. If this happens, augmented agents risk dysfunctional combinations of precision and approximation, fast and slow processing rates, sensitivity and insensitivity to variance, layering and compression, and complexity plus simplification. Artificial agency could then outrun, overwhelm, and bypass human inputs. Alternatively, human myopia and bias may infect artificial agents, and digitalization would then reinforce and amplify the limitations of human functioning. In summary, digitally augmented, generative metamodels pose major supervisory challenges.

## **2.2 Agentic Activation Mechanisms**

Agentic activation mechanisms are being digitally transformed as well. To begin with, consider Fig. 2.2 once again, which shows the modern, adaptive metamodel of agency. All components are clearly distinguished and bounded. They are exogenous (external) or endogenous (internal), relative to each other. For example, situational inputs (SI) are exogenous to cognitive-affective processing (CA), whereas processing units (PU) are endogenous to it. Now compare the digitally augmented, generative metamodel in Fig. 2.3. That figure is more highly integrated, with digitalized components connecting the major stages of situational inputs (SI), cognitive-affective processing (CA), and behavior performances (BP). These stages now overlap, owing to the digitalization of activation and intra-cyclical feedforward mechanisms (FF). Via such means, augmented agents will update the system in real time (Ojha et al., 2017). These mechanisms are central to the generative metamodel of agency.

Regarding the first two stages, digitalized mechanisms of sensory perception (SP) join the two large diamond shapes of situational inputs (SI) and cognitive-affective processes (CA). As a result, sensory perception becomes an intelligent process itself, thanks to the rapid intra-cyclical management of attention and sampling (see Fiedler & Wanke, 2009). In fact, environmental sampling and data gathering become deliberate, intelligent, and adaptive activities. Recent evidence supports this shift (e.g., Dong et al., 2020). The internet of things, smart sensors, wearable devices, and automated systems of multiple kinds, all connected to artificial agents, enable intelligent sensing and sampling, which radically complement ordinary sensory perception. However, from a modeling perspective, the digitalized mediation of intelligent sensory perception (SP) is neither exogenous nor endogenous, with respect to situational inputs (SI) and cognitive-affective processing (CA). Rather, intelligent sensory perception is in-between, mediating the boundaries of both the situational context and the cognitive-affective system.

Second, in like fashion, Fig. 2.3 shows that action generation (AG) is becoming behavioral and performative, not simply an antecedent of behavior. Action generation is now digitalized and joins cognitive-affective processing (CA) and behavior performance (BP). This means that action plans can be updated and regenerated during performances, in real time, via intra-cyclical feedforward mechanisms (Heaven, 2020). Artificial agents will process rapidly in the background, to integrate and update each phase of the process. Performances thus become more dynamic, thanks to digitalization. Hence, we can refer to performative action generation in generative metamodels. In fact, this already occurs in the development of agile, self-correcting systems (Howell, 2019). However, performative action generation (AG) is neither exogenous nor endogenous, with respect to cognitive-affective processing (CA) and behavior performance (BP). Rather, the process is again in-between, mediating the boundaries of both the cognitive-affective system and behavior performance.

Third, Fig. 2.3 shows that situational updating from feedback (FB) and feedforward (FF) is becoming intelligent itself. That is, situational contexts are becoming sites of intelligent learning, not simply passive sources of sensory inputs and problems. Once again, enabling technologies include the internet of things, ambient computing, and autonomous agents. All are embedded into the problem context and capable of updating it, often autonomously, in a self-generative fashion. Via these mechanisms, contexts will update and regenerate during problem-solving, not only from inter-cyclical, adaptive feedback. Hence, existing situations will evolve, and new ones emerge, during problem-solving itself. I describe this process as contextual learning, which is neither exogenous nor endogenous, with respect to behavior performance and the stimulus environment. Rather, the process is also in-between, mediating the boundaries of both the evaluation of performance (EP) and situational inputs (SI).

## **Augmented In-Betweenness**

All the mediating mechanisms just described are central to augmented agency: intelligent sensory perception, performative action generation, and contextual learning. They are novel and transformative. Together, they allow augmented agents to learn, compose, and recompose in a dynamic fashion, updating form and function in real time. Augmented agency therefore exhibits near composability, not only near decomposability. Moreover, these mechanisms signal a wider shift, from fixed boundaries and categories to fluid metamodeling. In consequence, however, being inside or outside of system boundaries at any time (endogenous or exogenous, respectively) is often ambiguous and may not apply. This is because digitalized mediators operate at a higher rate and level of sensitivity, monitoring and adjusting system boundaries (see Baldwin, 2018). They are neither endogenous nor exogenous, relative to the boundaries they help to define. Rather, they are consistently in-between, processing potential form and function. These mechanisms will be critical and ubiquitous within digitally augmented agents.

In recent years, scholars have paid increasing attention to such effects. This interest is captured by the growing number of studies about forms of

**Fig. 2.4** Endogenous, exogenous, and in-between

ambiguous meaning, ambivalent value and belief, organizational hybridity, and ambidextrous action (e.g., March, 2010; O'Reilly & Tushman, 2013). In many domains, agents combine alternative, complementary, and sometimes conflicting patterns of thought and action, as they grapple with increasing phenomenal complexity and dynamism. This leads to shifting categorical boundaries, forms, and functions. The prefix "ambi," meaning "both" in Latin, therefore recurs in such descriptive terms. As the reader will see in subsequent chapters, I exploit this prefix to describe other, novel patterns of in-betweenness arising from digitalization.

Figure 2.4 illustrates this type of dynamic mediation. It shows three levels and rates of processing and highlights the potential for divergence within augmented agents. To begin with, the upper third of the figure depicts modern adaptive capabilities at level L2. They cycle at rate R2 over times T1 and T2, and process inter-cyclical feedback at T2. The middle third of the figure depicts stronger digitalized capabilities at level L3, which cycle at rate R3 over times T1.1, T1.2, T2.1, and T2.2. In other words, digitalized processing L3R3 cycles more rapidly, compared to the modern adaptive scenario L2R2. The lower third of the figure illustrates digitalized capabilities at level L4, which cycle at rate R4. These are roughly equal to L3R3 but are labeled differently to distinguish them. All three processes depict the same two subsystems, labeled A and B, and their components. For example, these could be complementary subsystems of a problem-solving process.

However, the three levels of processing produce different patterns, shown by shaded dots and shifting boundaries of A and B. First, consider the upper portion of the figure, depicting L2R2. It shows that some of the components of A2 and B2 transition between times T1 and T2, and hence the boundary line shifts from horizontal to vertical. For example, perhaps some components of solution search at T1 become aspects of problem representation at T2, reflecting adaptive learning which improves attention and problem sampling (Fiedler & Juslin, 2006). Now compare the middle process at L3R3. Components and boundaries shift as well, but in a different fashion. Most notably, the process cycles more rapidly, and as a result, the system changes at time T1.2. One component of each subsystem has moved, along with the system boundary. The light gray dot shows a component at T1.2 which is endogenous to A3 and exogenous to B3, but remains endogenous to B2, while the dark gray dot shows a component at T1.2 which is endogenous to B3 and exogenous to A3, but remains endogenous to A2. Importantly, if we now combine the two subsystems (L2R2 and L3R3) in one augmented agent, some components are regularly in-between, simultaneously endogenous and exogenous, depending on the level of processing.

Note that at time T2, both L2R2 and L3R3 are equivalent again. This could mean that the change at T1.2 has now been incorporated into the system L2R2 via adaptive feedback. The same pattern of processing then occurs over the following cycle T2. Once again, some shaded components are in-between, simultaneously endogenous and exogenous, relative to different levels of processing. But the degree of divergence is modest, only one component of each subsystem at a time. Moreover, L2R2 and L3R3 will synchronize at the completion of each major cycle; although some components are in-between, their boundary conditions remain only briefly ambiguous. In summary, this augmented agent is broadly convergent over time, because digitalized intra-cyclical feedforward updates at L3R3 are incorporated into L2R2 via inter-cyclical adaptive feedback. The agent absorbs updates effectively at both levels. Learning is generative and functional, in these respects.

Next, consider the third process at L4R4. Once again, components and boundaries shift, but now more extensively, compared to the other processes. At time T1.2, components of A4 and B4 have moved, and the boundary between them is now vertical. Once again, the light gray dot shows a component at T1.2 which is endogenous to A4 but remains endogenous to B2, while the dark gray dot at T1.2 is endogenous to B4 but remains endogenous to A2. Moreover, if we combine the two subsystems (L2R2 and L4R4) in one augmented agent, half the components are in-between, simultaneously endogenous and exogenous, relative to different levels of processing.

Furthermore, at time T2, the two processes L2R2 and L4R4 remain divergent, in contrast to the earlier convergent condition. Shaded components are persistently in-between, ambiguously endogenous and exogenous. In other words, the change in L4R4 at T1.2 is not fully incorporated into the system L2R2, probably because the degree of digitalized processing at L4R4 is beyond the absorptive capabilities of L2R2. Moreover, the same pattern of divergent processing occurs again over the following cycle T2. It leads to a compounding effect. By time T2.2, all components of A and B are ambiguously endogenous and exogenous. This augmented agent is therefore increasingly divergent over time, because digitalized intra-cyclical feedforward updates at L4R4 are not incorporated into L2R2 via inter-cyclical adaptive feedback. The agent does not absorb updates effectively across levels and modalities. Learning is digitalized but dysfunctional, in these respects.

In fact, actual systems already exhibit these effects (e.g., Lee & Ro, 2015), for example, in the dynamic adaptation of modular transaction networks and software architecture (Baldwin, 2008). These processes rapidly update modular components, setting and resetting system boundaries. However, challenges escalate when intra-cyclical feedforward processing is fast and constant. Boundaries are consistently in flux. Hence, in highly digitalized systems, there will always be some components which are in-between, ambiguously endogenous and exogenous. The risk is that rapid, intra-cyclical updates will lack coordination with slower, inter-cyclical feedback, as depicted in Fig. 2.4. When this occurs in augmented agents, artificial and human processes will diverge, possibly leading to dysfunctional outcomes. Effective supervision will be critical.
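The contrast between convergent and divergent processing can be caricatured as a toy simulation. Everything below is hypothetical and not drawn from the source: the per-cycle update counts and the absorptive capacity are invented parameters, chosen only to show how a backlog of unabsorbed intra-cyclical updates either clears each cycle or compounds over time.

```python
# Toy model of cycle-rate mismatch: a fast level generates updates each
# slow cycle; slower inter-cyclical feedback can absorb only a fixed number.

def run(cycles, fast_updates_per_cycle, absorptive_capacity):
    """Return the backlog of fast-level updates never absorbed by the slow level."""
    backlog = 0
    for _ in range(cycles):
        backlog += fast_updates_per_cycle             # intra-cyclical feedforward activity
        backlog -= min(backlog, absorptive_capacity)  # inter-cyclical feedback absorption
    return backlog

convergent = run(cycles=10, fast_updates_per_cycle=2, absorptive_capacity=2)
divergent = run(cycles=10, fast_updates_per_cycle=4, absorptive_capacity=2)
# convergent == 0: every fast update is incorporated each cycle
# divergent == 20: unabsorbed updates compound, cycle after cycle
```

In the convergent case the two levels resynchronize at each cycle's completion, while in the divergent case the gap grows without bound, echoing the supervisory risk described above.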

## **Entrogenous In-Betweenness**

Standard concepts fail to capture these novel features of digitalization. Indeed, most human sciences view in-betweenness as transitional, temporary, or paradoxical. The closest concept is liminality, but even it implies being ephemeral, and permanent liminality is viewed as dysfunctional, a sign of faulty, incomplete processing (Ibarra & Obodaru, 2016). Therefore, to capture this novel type of ongoing in-betweenness, another term will be helpful. I propose "entrogenous," which builds on "entre," meaning between in numerous European languages. Applied to the generative metamodel of agency, "entrogenous" and "entrogeneity" refer to the digitalized mechanisms which mediate in-betweenness, and whereby forms and functions develop and transform. Notably, such mechanisms are neither endogenous nor exogenous, relative to fixed boundaries. Rather, they are constantly in-between and mediating potential boundaries. The major risk, as shown by Fig. 2.4, is that poor supervision of entrogenous mechanisms will lead to divergent processes and dysfunctional outcomes. This particularly applies to the novel, digitalized mediators of augmented agency identified earlier: intelligent sensory perception, performative action generation, and contextual learning.

In fact, this puzzle is far from new. In ancient Greece, Heraclitus famously wrote, "You cannot step into the same river twice, for other waters are continually flowing on." Equally important but less well known, he also wrote, "We step and do not step in the same rivers. We are and are not" (Kirk, 1954). In other words, human experience, thought, and action are inherently in-between, constantly in flux. Form and function are relative to the frame of reference. As Heraclitus observed, a river is defined by its banks and flowing waters, and is therefore simultaneously stable and always changing. In-betweenness is then normal, not dysfunctional. This Heraclitian perspective contrasted with the thought of Plato and Aristotle, who favored categorical stability and essential order. Digitalized agentic systems address this ancient dilemma, because they allow for continual composition and recomposition at multiple levels. Hyperparameters and parameters are set and reset, as processes unfold (Feurer & Hutter, 2019). With respect to augmented agency, form and function will stabilize for a time, depending on the context, and recompose as contexts change. This is the core dynamic of generative, augmented agency. By analogy, therefore, entrogenous mediation is Heraclitian rather than Aristotelian. The implication is that any instance of perceived permanence is mediated by some process in constant flux. To cite Herbert Simon (1996) again, science often transforms assumed states into dynamic processes.

At the same time, human and artificial agents possess different inherent capabilities and potentialities. Much human processing is relatively sluggish, parochial, and heuristic, while artificial agents are increasingly fast, expansive, and precise. Therefore, what is entrogenous for artificial agents may appear exogenous or endogenous for humans. As Heraclitus wrote, "We are and are not." These entrogenous dilemmas amplify the risks identified previously. Human agents could import inflexible categories, beliefs, and biases into augmented agency. Endogeneity and exogeneity would be baked in, from a human perspective. At the same time, however, artificial agents could relax categorical boundaries and allow for greater plasticity and variation. The overall result will be divergent, and potentially conflicting, agentic form and function. In fact, these dilemmas are already observed in semi-supervised, collaborative systems (Kouvaris et al., 2015). They will be even more pronounced in larger, augmented communities and organizations. Later chapters will revisit this issue in relation to specific functional domains.

## **Summary of Metamodels**

We can now summarize the foregoing discussion. To begin with, there are major differences between human and artificial agents. Natural human agents tend toward stability and possess limited capabilities and potentialities. As a result, purely human metamodels are relatively stable, weakly assisted by technologies, and not highly adaptive. They often possess deeply encoded hyperparameters, parameters, and variables, and fixed boundary conditions. Figure 2.1 illustrates this type of system, labeled the replicative, agentic metamodel, which was dominant during premodernity. Next, as capabilities and technological assistance advanced, human agents became more autonomous and developmental, as shown in the modern, adaptive metamodel in Fig. 2.2. Granted, significant limits remain. Nevertheless, the adaptive metamodel of modernity affords greater degrees of freedom, compared to replicative metamodels. Boundaries are more adaptive and less fixed.

Digital augmentation now promises far greater capabilities and potentialities. Most notably, digitalization enables augmented agents which combine human and artificial agents in close collaboration. A major feature will be the capability for generative metamodeling, effectively in real time. Digitalized entrogeneity will be fundamental, mediating the dynamic composition of agentic form and functioning. This type of digitally augmented, generative metamodel is shown in Fig. 2.3. If well supervised, augmented agency and humanity will enjoy greater degrees of freedom and potentiality. But there are also major risks and dilemmas to resolve.

## **2.3 Dilemmas of Digital Augmentation**

The greatest benefits of digital augmentation are also its potential weaknesses. As often happens, remarkable strengths easily skew performance outcomes. On the one hand, augmented agents will sample and search ever more widely, process information at increasing speed, scale, and accuracy, and learn at unprecedented rates. On the other hand, thanks to human semi-supervision, augmented agents will often inherit myopias, biases, and parochial commitments. Therefore, digitally augmented agency confronts a fundamental challenge: how to combine and supervise human and artificial capabilities while avoiding excessive divergence, convergence, and distortion? Resolving these questions will be critical for augmented humanity.

## **Problematics and Metamodels**

To clarify these topics further, Fig. 2.5 summarizes the agentic metamodels and problematics already discussed. It shows three agents X, Y, and Z, over three successive time periods labeled 1, 2, and 3. The figure captures the essence of three broad historical periods: premodernity, modernity, and contemporary digitalization. First, recall that in premodern contexts, metamodels of agency assume low complexity, relatively poor capabilities and potentialities, and little variation. Agency tends to be imitative and replicative. Overall agentic functioning is viewed as a collective accomplishment, rather than an outcome of autonomous individuals. Predictably, therefore, premodern problematics focus on the integration of persons within communal narratives, and on how to account for variation in a world of ordained stability and order (Walker, 2000). In Fig. 2.5, these conditions are shown by the segments with only one dot in each, which exclude agent X at time 3. Assume that each dot represents components of some agentic function. As the figure shows, functioning is widely dispersed across the collective (agents X, Y, and Z) over time (periods 1, 2, and 3). Each individual agent is weakly responsible for overall functioning at any time, apart from agent X at time 3. Hence, they are highly dependent on each other. Therefore, if we assume that all the segments with one dot are required to perform a particular agentic process, then efficacious action will require the cooperation of all three agents over the three time periods, and hence the minimization of individual variance. In this way, the segments with one dot expose core features of the premodern, replicative metamodel and its associated problematics. Individuals must conform and cooperate over time to achieve collective outcomes.

Notably, simple models of artificial intelligence and machine learning possess similar features. They, too, reference encoded models to solve predictable types of problems (Norvig & Russell, 2010). Moreover, these simpler model-based artificial agents are frequently embedded within processing networks, just like the agents in Fig. 2.5 with only one major functional role. Functioning is therefore highly distributed. In these respects, simpler types of artificial agent exhibit metamodels which are comparable to those of premodern agency. Such technologies are therefore less genuinely "agentic" and augmenting, and sit at the passive end of the supervisory spectrum. That said, this suggests a promising avenue for research into replicative metamodels. Simpler model-based agents appear well suited to the task.

By contrast, during modernity, human agents develop stronger capabilities for autonomous thought and action. Agentic functioning is both an individual and a collective accomplishment, and numerous technological innovations assist these developments. Modern problematics therefore focus on the reconciliation of individual freedom and collective solidarity, and the means by which humans adapt and transcend their limited capabilities and potentialities (Pinker, 2018). These problematics are illustrated by agent X at time 3, excluding the more densely dotted segment at the base of the figure. Notably, there are now extra dots within agent X at time 3, which shows that this individual is more capable and performs more functions, thanks in part to increased technological assistance. Indeed, agent X contains as many functional components at time 3 as all other agents, which illustrates the agent's capability for autonomous action. Nevertheless, agent X remains reliant on the collective. Significant functions are still distributed, and effective performance will require cooperation with other agents over time. In these respects, agent X at time 3 illustrates core aspects of the adaptive metamodel and problematics of modernity, namely, how to develop individual capability and potentiality while integrating with collective form and function?

Once again, there are strong parallels to artificial agents. In fact, the modern adaptive metamodel corresponds to goal-based and utility-based artificial agents, which are more advanced than the simpler model-based agents discussed above (Norvig & Russell, 2010). First, goal-based artificial agents use encoded preferences to guide problem-solving. Other things being equal, they seek to achieve predetermined outcomes. Second, utility-based artificial agents possess additional rules for the rank ordering of potential outcomes and then seek to maximize utility. Clear parallels exist in modern social and behavioral theories. In many such disciplines, agents are conceived as goal seeking, maximizing preferences and utility (Bandura, 2007; Thaler, 2016). Hence, the architecture of goal-based and utility-based artificial agents is broadly comparable to the adaptive metamodel of modernity. Both entail intelligent agents, working in concert, seeking to achieve goals and maximize preferences.
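The distinction between goal-based and utility-based agents can be made concrete in code. The following is an illustrative analogy only, not drawn from the book or from Norvig and Russell's text; the function names and the toy navigation task are hypothetical.

```python
def goal_based_choice(actions, outcome_of, is_goal):
    """Goal-based agent: pick any action whose predicted outcome satisfies the goal."""
    for action in actions:
        if is_goal(outcome_of(action)):
            return action
    return None  # no available action achieves the goal

def utility_based_choice(actions, outcome_of, utility):
    """Utility-based agent: rank all predicted outcomes and pick the maximizer."""
    return max(actions, key=lambda a: utility(outcome_of(a)))

# Toy task: an agent at position 3 on a line, trying to reach position 4.
actions = [-1, 0, +1]
position = 3
outcome_of = lambda a: position + a
is_goal = lambda s: s == 4           # goal-based: a binary success criterion
utility = lambda s: -abs(s - 4)      # utility-based: a graded preference ordering

goal_based_choice(actions, outcome_of, is_goal)     # → 1 (reaches the goal)
utility_based_choice(actions, outcome_of, utility)  # → 1 (maximizes utility)
```

The design difference mirrors the text: the goal-based agent only needs a yes/no test on outcomes, while the utility-based agent additionally rank-orders them, so it can still choose sensibly when no action fully achieves the goal.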

Contemporary digitalization supports a new, generative metamodel of agency. It assumes high levels of complexity, unprecedented processing capability, and intense patterns of functioning at every level. Figure 2.5 also illustrates the core features of such a metamodel, by showing one component of agent X at the base of time 3, which is very dense with

**Fig. 2.5** Historical problematics of agency

dots. This component is fully digitalized. Indeed, this digitalized component of agent X exhibits as many functions as all other segments of X, as well as the other agents in the figure. In a fully digitalized collective, all agents and components will be equally intense. The increase in functional complexity is exponential, across multiple levels and modalities, and within any time period as well. New problematics thus emerge: how can human beings collaborate closely with artificial agents, while remaining genuinely autonomous in reasoning, belief, and choice; how will human and artificial agents learn to understand, trust, and respect each other, despite their different capabilities and potentialities; and how will augmented agents supervise the dynamic composition and recomposition of metamodels? Not surprisingly, this generative metamodel mirrors the architecture of the most advanced artificial agents, because it assumes participation by such systems. Advanced artificial agents will be integral to augmented agency.

In summary, the different functional patterns in Fig. 2.5 capture the history of both artificial and human agency. The figure shows how the recent evolution of artificial agency shares important features with the long history of human agency, at least in terms of their metamodels. First, model-based artificial agents map to the replicative metamodels of premodernity. Second, goal-based and utility-based artificial agents mirror the adaptive metamodels of modernity. And third, advanced artificial agents instantiate the generative metamodels of digital augmentation. In these respects, the rapid ontogeny of artificial agency over recent decades recapitulates the slow phylogeny of civilized humanity over millennia (see Clune et al., 2012). Or to paraphrase Hegel (1980), the recent history of digital science recapitulates the digitalized science of history. More striking still, both processes converge in the science of augmented agency, because the story of digital science mirrors the science of augmented humanity, producing a historical synthesis Hegel would surely appreciate.

## **2.4 Patterns of Supervision**

Even in a highly digitalized world, however, people will continue to exhibit models of agency which are effectively premodern, in terms of their core components, levels of complexity, and modes of supervision. There will be a spectrum of artificial augmentation. In some contexts, that is, agency will still be governed and supervised in terms of replication and narrative, as in many cultural and faith communities. Similarly, people will continue choosing modern adaptive metamodels which entail less intrusive technological assistance, as in many social and cultural pursuits. Therefore, earlier agentic options will remain feasible and often desirable, as in matters of faith and family. But they will exhibit reduced functionality, compared to fully digitalized, generative options. In fact, engineers plan for these options too, recognizing that people will sometimes wish to control technological functioning for recreational or other reasons (Simmler & Frischknecht, 2021). However, extra problems arise when agents adopt different metamodels at the same time. I will return to this topic in later sections.

In the meantime, as the preceding argument explains, a central feature of any agentic metamodel is the quality of its supervision. That is, how and to what degree the metamodel is copied or composed, self-regulated or externally controlled, and from which source. We can therefore distinguish metamodels in terms of their supervision, and particularly, in terms of human and technological sources of supervision. Nine alternative patterns are depicted in Fig. 2.6. The horizontal dimension of the figure shows the level of human supervision, while the vertical dimension shows the level of technological supervision. Each segment of Fig. 2.6 therefore combines two potential sources of supervision at one of three levels: low, medium, and high. Circles with dashed borders represent technological supervision, while human supervision is represented by circles with solid borders. The circles in each segment overlap because both types of supervision interact. It is important to note that the size of these shapes does not represent the absolute strength or complexity of supervision, but rather their relative significance in any metamodel.
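The nine patterns of Fig. 2.6 are simply the cross product of two sources of supervision at three levels each. A minimal sketch, assuming segment numbers run across human levels first within each technological level (an assumption consistent with the text's descriptions of segments 1 through 9, but not stated in the figure itself):

```python
from itertools import product

LEVELS = ["low", "medium", "high"]

# Segment i+1 pairs one technological level with one human level.
segments = {
    i + 1: {"human": human, "technological": tech}
    for i, (tech, human) in enumerate(product(LEVELS, LEVELS))
}

len(segments)  # → 9 supervisory patterns in total
segments[1]    # low/low: the replicative metamodel of premodernity
segments[9]    # high/high: strong, assertive supervision from both sources
```

Under this numbering, segments 1, 2, 4, and 5 are exactly the low/medium combinations the text assigns to modernity, segment 3 is human-dominant, and segment 7 is technology-dominant.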

First, consider segment 1 in Fig. 2.6. It shows the type of simple supervision in replicative metamodels of agency, which dominated during premodernity. Human supervision is shown by the small circle with a solid border, and technological supervision by the small circle with a dashed border. In this metamodel, therefore, technological and human levels of supervision are both low. Supervision is routine, encoded, and replicative, relying on communal rituals, perhaps simple tools for writing,


**Fig. 2.6** Historical eras of agentic supervision

counting, and communicating, but not much more. In summary, replicative metamodels of agency have relatively low levels of autonomous, technological, and human supervision. The metamodels offer few degrees of freedom. In this regard, segment 1 complements the premodern problematics depicted earlier, by the segments in Fig. 2.5 which have only one functional dot.

Now consider segments 2, 4, and 5 as well. They show metamodels with greater technological capabilities, as in modernity. In fact, segments 1, 2, 4, and 5 represent the patterns of supervision in modern, adaptive metamodels of agency. They complement the earlier depiction of modern problematics in Fig. 2.5, and especially the role of agent X in the collective. Most notably, there are now four options in Fig. 2.6, combining medium or low human supervision with medium or low technological supervision. In other words, both human and technological supervision have advanced. Relevant technologies are largely mechanical and analogue, and provide moderate assistance to the supervision of agency. Human capabilities also advance, at least for some people, but still within constraints. Indeed, the limits of human supervisory capability are a persistent theme of modernity. Hence, the metamodels of agency represented by segments 1, 2, 4, and 5 in Fig. 2.6 bestow greater degrees of freedom, compared to the preceding replicative option.

Regarding the details, segment 5 shows medium levels of human and technological supervision. This encompasses technologically assisted domains, such as surgical practice, in which human and technological supervision are both critical. Segment 2, on the other hand, shows dominant human supervision, similar to premodern contexts. Segment 4 shows the opposite scenario, in which technological supervision dominates, as it often does in mechanical systems which operate independently of human intervention. Many automated processes are like this. In summary, modernity exhibits different levels of human and technological supervision, generating alternative agentic options. For this reason, modernity also presents more frequent choices and dilemmas about which metamodel of agency fits best, when, and why.

Next, by including all nine segments of Fig. 2.6, we have an illustration of digitalized, generative metamodels of agency. Clearly, there are more feasible metamodels and choices. Human capabilities are more developed in segments 3, 6, and 9, as are technological capabilities in segments 7, 8, and 9. Moreover, augmented agents can exploit all these metamodels, potentially in real time. This dynamism will be owing significantly to the entrogenous mediation mechanisms discussed previously and illustrated in Fig. 2.4. But these mechanisms also bring new challenges and risks. If supervision is poor, agents might develop in overly divergent or convergent ways and become dysfunctional, for example, adopting the option in segment 3 when the balanced option in segment 5 is more appropriate. In this respect, the whole of Fig. 2.6 captures a central challenge for digitally augmented agents, namely, the complexity of supervising human-machine collaboration (see Murray et al., 2020; Simmler & Frischknecht, 2021).

Depending on the context, therefore, each metamodel in Fig. 2.6 can be effective and appropriate. To begin with, segment 1 will be largely routine, whereas segment 9 shows the opposite scenario: human and technological supervision are both strong and assertive. This metamodel is best suited to highly digitalized, complex, dynamic contexts. Expert medical practice is a good example, in which artificial and human agents both supervise critical aspects of collaborative functioning. The major risk is conflict within the augmented agent, for example, when the circles in segment 9 overlap less and supervision is poorly coordinated. Other segments show the alternatives in between, combining low, medium, and high levels of supervision. Sometimes human supervision is clearly dominant, as in segment 3. This metamodel will be fully humanized, and supervision will be guided by ordinary values and commitments. However, the risk is that human myopia and bias will intrude and distort the system. Next, there are metamodels in which technological supervision is dominant, as in segment 7. These are highly digitalized, but the risk is that human inputs are excluded inappropriately. People could become digitally docile and overly dependent. Segment 5 is intermediate. It includes moderate supervision of both kinds, but significant freedom as well. This metamodel could be appropriate in exploratory, creative contexts. The risk is that agents may lack enough supervision and tend toward incoherence.

Furthermore, segment 1 in a digitalized world has the same general pattern of supervision as in a premodern context. In other words, the type of routine agency which dominated during premodernity may still occur within a digitalized world. Even in the period of digitalization, that is, people may adopt purely replicative models of agency over digitally augmented options. This may seem counterintuitive, but in fact, it will be widespread. Earlier, I cited cultural and faith communities as examples of such choices. However, this might lead to the reinforcement of human myopias and biases. Next, segments 1, 2, 4, and 5 are equivalent in the adaptive and generative systems. What this means is that modern patterns of supervision can also occur in a digitalized world. People will still exhibit adaptive, modern approaches and eschew high levels of digital augmentation. For example, purely human inputs and adaptive learning will likely remain dominant in some social and professional contexts. Moreover, such metamodels may be fully functional, assuming the choice of metamodel fits the context. The major challenge in all scenarios, therefore, is to develop mutual understanding, trust, and empathy within the augmented agent.

## **2.5 Implications for Augmented Humanity**

Significant dilemmas therefore confront digitally augmented humanity, conceived in terms of purposive agency, primarily because human and artificial agents have different capabilities and potentialities. Compared to artificial agents, humans are often myopic, sluggish, layered, and approximating, while relative to humans, artificial agents are increasingly expansive, fast, compressed, and precise. When combined in augmented collaboration, these divergent characteristics are either complementary or conflicting. If the collaboration is well supervised, they are complementary: human and artificial agents strengthen each other and mitigate the other's limitations. However, if poorly supervised, they are conflicting, and combination leads to poorly fitting metamodels: either overfitting, meaning metamodels admit too much noise and variance, and fail to clarify potential models of interest; or underfitting, meaning they omit too much noise and potential variance, thereby excluding potential models of interest. Metamodels of agency can therefore skew inappropriately, either amplifying human priors, especially myopias and biases, or amplifying digital processes which diminish and override the human. Sometimes both patterns of distortion will occur, resulting in extremely divergent systems. In each scenario, augmented agents will underperform and incur functional losses. These risks already attract significant attention from computer scientists (Gavrilov et al., 2018). Moving forward, they will be major concerns for human scientists as well. The supervision of augmented agency will require new theory and techniques.
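In machine learning, underfitting and overfitting have a precise numerical reading, which the chapter extends by analogy to metamodels of agency. A minimal sketch of that standard sense, using polynomial regression; the data and polynomial degrees are hypothetical, not the author's example:

```python
import numpy as np

# Toy data: a smooth signal plus deterministic, alternating "noise".
x = np.linspace(0.0, 1.0, 10)
noise = 0.2 * (-1.0) ** np.arange(10)
y = np.sin(2 * np.pi * x) + noise

def train_error(degree):
    """Mean squared error of a degree-`degree` polynomial fit on its own data."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

e0, e3, e9 = train_error(0), train_error(3), train_error(9)
# e0: a constant underfits, omitting the signal's variance entirely
# e3: a cubic tracks the broad signal while smoothing over the noise
# e9: degree 9 interpolates all 10 points, admitting the noise (overfitting)
```

The overfit model reproduces its data perfectly yet has absorbed the noise, while the underfit model excludes the structure of interest, which is the numerical analogue of the two failure modes described above.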

## **Implications for Specific Domains**

Hence, the following challenge arises for theory and practice: to understand which aspects of human and artificial supervision should be reinforced, adapted, or relaxed, so that augmented agents maximize the benefits of digitalization, while preserving core values, commitments, and other humanistic qualities. Important legacies are at stake. Many aspects of modernity, and even premodernity, could remain fulfilling and functional, even in a highly augmented world, assuming agents adopt these options appropriately and avoid superstitious thinking and distorting priors. For example, people will continue to find meaning in religious narrative and spiritual commitment, and purely human supervision could be fully appropriate in everyday life. Augmented humanity will benefit by preserving these options. The challenge is dynamically to determine which values and commitments are humanizing, and which are distorting, when and why, and then to supervise them effectively (Sen, 2018).

Collaborative supervision is therefore a major challenge for augmented humanity. It reflects the core problematic of digitalization: how to combine and balance human and artificial capabilities and potentialities? It will impact all aspects of agentic form and function. The following chapters examine important areas of impact. In doing so, they will emphasize different aspects of the generative agentic metamodel in Fig. 2.3:


## **Reality and Truth**

Additional consequences follow for the core criteria of reality, truth, and ethics, or in other words, for core ontological, epistemological, ethical, and cultural commitments. To begin with, recall that dominant agentic metamodels at any time tend to reflect the limitations of capability and potentiality. From this perspective, ideal criteria are extrapolations of human limitation (Appiah, 2017). As an adaptive mechanism, such idealization is explicable and often functional. It bolsters agents' self-efficacy and sense of security, because human capabilities appear to encompass the limits of reality, truth, and value. Ideals also provide predictable meaning and guidance. Of course, empirical science consistently exposes the contingency of such ideals, which partly explains why scientific advance is often controversial. It challenges inflated self-efficacy and identity. Ironically, therefore, the greater the impact of applied science, including digitalization, the more resistance it may provoke.

To illustrate, consider the contemporary period. In much modern thinking, material nature is opposed to conscious mind. Kant (1998) clearly established the distinction: pure abstract reasoning draws from ideal spirit and is categorically different from practical thought in the temporal realm. Rational mind was a category of immaterial reality, while practical reasoning translated mind into the contingent world. From a purely functional perspective, this belief liberates autonomous mind from premodern myth and superstition, while protecting it from mechanistic reduction. In addition, by separating mind and nature, it provides a rationale for human exploration and exploitation of the natural world. However, in the light of artificial intelligence and cognitive neuroscience, many now question these assumptions. They rightly observe that artificial intelligence blends cognitive and material phenomena, while neuroscience promises to explain consciousness in terms of natural and ecological mechanisms (Seth, 2018).

Furthermore, as Chap. 1 explains, the insights of digitally augmented neuroscience imply that the fundamental hyperparameters of mind and consciousness are not directly accessible to reflexive consciousness itself. Instead, researchers use neurophysiological and digital techniques. In fact, advanced artificial intelligence already simulates significant aspects of conscious mental life, including calculative and associative reasoning, intuition, and increasingly, empathy and personality (Mehta et al., 2019). As these technologies mature, traditional distinctions between mind and nature, and between virtual and material, will appear increasingly contingent, more functional than fundamental. This shift is foreshadowed by the generative agentic metamodel in Fig. 2.3, which shows that digitalized processes infuse all areas of agentic functioning.

## **Value and Commitment**

Earlier in this chapter, I also noted that ancient thought remains relevant today. For example, the ancients examined the hedonic nature of life, contrasting pleasure and pain, gain and loss, life and death. Not surprisingly, humans approach the former conditions and try to avoid the latter. These hedonic principles still run deep in Western thinking. To illustrate, Higgins' (1998) Regulatory Focus Theory is explicitly hedonic. It contrasts the prevention of pain and loss against the promotion of pleasure and gain. As another example, Kahneman and Tversky's (2000) Prospect Theory draws fundamental distinctions between potential gains and losses in behavioral decision-making. And to be sure, life was and is deeply hedonic. Human beings naturally seek to avoid pain, loss, and death, while hoping for pleasure, gains, and life.

Going further, however, the ancients also thought about eudaimonic needs, or the human desire for overall well-being and to live a good life (Aristotle, 1980). These aspirations transcend the hedonic avoidance of loss and pursuit of gains. Rather, eudaimonia embraces the whole of experience, and hence, the totality of human purpose and potentiality (Di Fabio & Palazzeschi, 2015). Notably, this concept has gained fresh prominence in contemporary thought, including positive psychology (Seligman & Csikszentmihalyi, 2000), new thinking in economics by Amartya Sen (2004) and others, and value-based models of social organization (Lounsbury & Beckman, 2015).

This conceptual shift may be partly explained by the rapid growth of capabilities and potentialities brought about by modernity and its related changes. Put simply, flourishing is now more feasible for more people (Phelps, 2013). Of course, widespread discrimination, deprivation, and inequality persist, and new divisions have emerged. Nonetheless, owing in part to digital augmentation, the realm of agentic potentiality is expanding. Human beings will be able to curate new metamodels of being and becoming, including dynamic compositions of the self and community. Agentic potentiality will be vastly different in such a world. In fact, this transformation will likely spawn a new science of eudaimonics, integrating value commitments broadly conceived, where commitment in this context is defined as being dedicated, feeling obligated and bound, to some value, belief, or pattern of action. Such a science would complement the existing disciplines of economics, ethics, politics, and aesthetics (see Di Fabio & Palazzeschi, 2015; Sen, 2004). I will return to this possibility in the final chapter.

In the meantime, advances in cognitive neuroscience and computer science reinforce the fact that fundamental properties of mind and consciousness cannot be accessed via ordinary means (Seth, 2018). Introspection and intersubjectivity are no longer enough, nor are the anthropomorphic conceptions which these methods support. New concepts and techniques are required. Yet at the same time, human beings live in and through ordinary consciousness. It is fundamental to being and remaining human. This presents a core challenge for augmented agency, which is to maintain the value and significance of consciousness and mental life, even as science frees itself from anthropomorphic constraints. In fact, this dilemma reinforces the role of commitments in the supervision of augmented agency, because commitments will anchor agents in lived experience. Commitments validate and sustain ordinary consciousness and mind, without claiming scientific status. They are simply and importantly human. Hence, commitments will play a central role in the supervision of augmented agency. They will reinforce humanistic values, helping to preserve the experience of ordinary mind and consciousness in a digitalized world.

## **References**


Vilela, M., & Hochberg, L. R. (2020). Applications of brain-computer interfaces to the control of robotic and prosthetic arms. In *Handbook of clinical neurology* (Vol. 168, pp. 87–99). Elsevier.

Walker, J. (2000). *Rhetoric and poetics in antiquity*. Oxford University Press.

Williams, B. (1993). *Shame and necessity*. University of California Press.

Wykowska, A. (2021). Robots as mirrors of the human mind. *Current Directions in Psychological Science, 30*(1), 34–40.


# **3 Agentic Modality**

Fortunately for humanity, many ecologies are stable and munificent over time. Civilization can flourish, notwithstanding episodic disasters and disruption. Social systems evolve and human beings cooperate in purposive action. These ecologies elicit and sustain different agentic modalities, or expressions of agentic form and function. Three such configurations consistently emerge: individual persons, relational groups, and larger collectives (Bandura, 2006). All three are interconnected within agentic ecology, although explanation of their origins and interconnection is problematic. In fact, persistent questions about the origins of agentic modality are central to human science. Scholars ask to what degree there are stable modalities of human agency, and how such forms and functions originate, interact, and adapt. These puzzles have been deep and widespread, especially since the European Enlightenment (Giddens, 2013). During this period, scholars elevated the status of autonomous, reasoning individuals, as well as democratic institutions, and then worked to integrate these modalities with traditional forms of family and community. This clearly contrasted with the premodern emphasis on patriarchal order and cultural compliance.

Contemporary debates continue regarding the origins and interactions of individuals, groups, and collectives. Competing answers have major implications. For example, if collective forms and functions are foundational modalities, rather than individual persons or relational groups, then collective origins take precedence. Individuals and groups will inherit many of their core characteristics from membership of cultural and social collectives. In contrast, if individual persons and their close relationships are the primitive modalities, then collectives derive from the combination or aggregation of individuals. Collectives would inherit many core characteristics from their members.

These distinctions have been major fault lines in modern thought. On the one hand, some advocate bottom-up explanations, thereby invoking methodological individualism, in which persons assemble, aggregate, or contract into collective agentic modalities. Within theories of this kind, interpersonal comparison and negotiated consensus are frequent concerns, because they mediate a liberal approach to aggregation and combination (e.g., Arrow, 1997; Locke, 1967). On the other hand, there are those who advocate top-down explanations, thus invoking methodological collectivism, in which individuals inherit and instantiate features of the collective (e.g., Marx, 1867). Intercommunal comparison and managed consensus are now typical concerns, because they mediate a cultural process of agentic devolution. Other scholars occupy the middle ground, focusing on the dynamics of relational groups, using either a sociological lens to explain how groups join into larger collectives (e.g., Simmel, 2011), or a social psychological lens to explain how group relationships shape individuals (e.g., Lewin, 1947). In almost all approaches, modern scholars accept a major role for collectives, and then debate their interaction with individuals. As March and Simon (1993, p. 13) explain, "organization members are social persons, whose knowledge, beliefs, preferences, loyalties, are all products of the social environment in which they grew up, and the environments in which they now live and work."

Agentic modalities can therefore be defined in terms of their layers of form and functional mechanisms. Notably, the hyperparameters of agentic metamodels define the same characteristics. Hence, there will be hyperparameters which specify the modalities within a metamodel of agency, including modal layers and their mechanisms of interaction, for example, in hierarchies or networks. Moreover, hyperparameters can be immediately visible, or hidden and require discovery (Feurer & Hutter, 2019). From the "persons in context" perspective, there are both visible and hidden layers and mechanisms. Much is known, but much remains to be uncovered (Cervone, 2005). Variation is contingent on context and individual difference, and perhaps the unconscious. Sigmund Freud certainly thought so, as do many of his postmodern inheritors (Tauber, 2013). In competing theories, more is visible. Persons are conceived in terms of stable, observable traits and states. From this perspective, there are fewer hidden layers and mechanisms, and less inherent variance (e.g., McCrae & Costa, 1997). Agentic modality is more visible and predictable.
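The idea of hyperparameters that "require discovery" can be made concrete with a small computational sketch, in the spirit of the hyperparameter optimization surveyed by Feurer and Hutter (2019). Everything below is a hypothetical toy: the search space, the `fit_score` objective, and all its values are invented for illustration only, treating a metamodel's modal structure (layers, interaction mechanism, compression) as a searchable configuration space.

```python
import itertools

# Hypothetical hyperparameter space for a metamodel of agency.
# All names and values are invented for illustration.
SEARCH_SPACE = {
    "n_layers": [1, 2, 3, 4],                 # modal layers
    "interaction": ["hierarchy", "network"],  # mechanism of interaction
    "compression": [0.0, 0.25, 0.5, 0.75],    # degree of modal compression
}

def fit_score(config):
    """Toy objective: reward moderate layering and moderate compression."""
    layer_fit = 1.0 - abs(config["n_layers"] - 2) / 3.0
    comp_fit = 1.0 - abs(config["compression"] - 0.5)
    bonus = 0.1 if config["interaction"] == "network" else 0.0
    return layer_fit + comp_fit + bonus

def grid_search(space):
    """Evaluate every configuration and return the best-fitting one."""
    keys = list(space)
    best = max(
        (dict(zip(keys, values)) for values in itertools.product(*space.values())),
        key=fit_score,
    )
    return best, fit_score(best)

best, score = grid_search(SEARCH_SPACE)
# best: {'n_layers': 2, 'interaction': 'network', 'compression': 0.5}
```

The point of the sketch is simply that the best-fitting configuration is not visible in advance; it is uncovered by systematically probing the space, which is what "hidden" hyperparameters demand.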

Comparable distinctions apply regarding the hyperparameters of collective modality. Some theories emphasize observable structures, routines, and norms of collectivity, with few hidden layers and mechanisms. In new institutional theory, for example, organizations exemplify the observable forms and functions of institutional fields. Isomorphism, homophily, and imprinting are then predictable, because they reflect hyperparametric transparency and stability (Scott, 2014). However, in other theories, collective modality is less transparent. There are hidden layers and mechanisms which need to be uncovered, explained, and sometimes reformed (e.g., Habermas, 1991). Thinking this way, Friedrich Engels sought to expose the "false consciousness" of capitalism (Augoustinos, 1999). Intermediate processes are possible as well, in which collective layers develop through shared action and sense-making, as iterative cycles of emergence or construction (Giddens, 1984; Weick et al., 2005). In summary, each type of agentic modality entails a debate about the hyperparameters for its fundamental layers, categories, and mechanisms. All theories of agency engage with these debates, in one way or another.

## **3.1 Mediators of Agentic Modality**

Whether explicitly or implicitly, therefore, theories of human agency assume patterns of modal form and function. Reflecting the problematics of modernity, most offer an explanation for the relationship between individuals and collectives. Many posit a major role for procedural action in this regard, especially individual habit and collective routine. As William James (1890, p. 3) remarked, people can be described as "bundles of habits," implying that habit mediates personality. Leading contemporary psychologists agree (Wood & Rünger, 2016). Similarly, scholars view procedural routine as a key mediator of social collectives (Cohen, 2006; Salvato & Rerup, 2011). Indeed, at individual, group, and collective levels of modality, procedural patterns of action support the continuity of identity and organization (Albert et al., 2000). However, the origins of habit and routine remain problematic. At heart, the problem is one of mediated modality, as scholars debate the relationship between different layers of agency and their mechanisms of interaction (Latour, 2005). Many ask: does collective routine evolve bottom-up, from the aggregation of individual habit; or does procedural action originate at the collective level, with individual habit then reflecting routine? Similarly, are habit and routine fixed in memory, as models or templates of action, with performances then instantiating the encoded procedure; or do habit and routine continually emerge as expressions of situated practice and performance (Pentland et al., 2012)?

In fact, the contextual dynamics of human psychology offer a way forward. To begin with, assume that a social ecology is relatively stable and endowed, sufficient to support patterns of recurrent action. As agents then interact, some share common goals and patterns of action. Over time, these patterns may become automatic among groups. In effect, the agents experience the same habituation process (Winter, 2013; Wood & Rünger, 2016). Each member of the group encodes the same triggers, procedures, and expectations of action. Moreover, each agent will encode similar social psychological processes in the performance of action. They rely heavily on collective mind and memory, sensing the same signals from each other and the environment (see Cohen et al., 2014). Furthermore, the process will not trigger significant individual differences. This is possible because we assume that individual personality is inherently open and adaptive, and allows for the upregulation and downregulation of psychological processes (Nafcha et al., 2016). In the case of routine, many personal motivations, goals, and commitments are downregulated and effectively latent. Only a limited subset of common, psychosocial processes is upregulated and active. This subset of active, upregulated processes will often include shared encodings, beliefs, goals, and competencies, while most individual differences of these kinds are downregulated (Silver et al., 2020).

This distinction is important and worth restating. In procedural patterns of action, many individual differences, such as personal values, goals, motivations, and commitments, are downregulated and latent. By contrast, shared characteristics, such as common encodings, beliefs, and competencies, are upregulated and active. In this way, shared patterns of action emerge, which are stored in individual and collective memory, and which invoke equivalent, habitual responses among groups of people, but without activating significant individual differences. As Mischel and Shoda (1998) explain, this is how cultural norms evolve, as common, recurrent psychological processes. Hence, the formation of habit and routine is neither simply bottom-up nor top-down. Rather, it is a process of related agents downregulating their individuality, while upregulating common features of sociality. Habit and routine thus coevolve, within individual and collective modalities, respectively.

Furthermore, given the downregulation of many individual differences in routine, individual persons will be less sensitive to outcome variance in routine performance, compared to more effortful, deliberate action. They are not consciously monitoring precise expectations or aspirations. Indeed, the purpose of much habit and routine is to maintain procedural control, rather than to achieve specific goals or engage in intentional action (Cohen, 2006). That said, routine and habit do adapt, in response to significant contextual change, or a major shift in beliefs or goals, and more frequently, when performance fails to achieve adequate levels of control (Feldman & Pentland, 2003; Wood et al., 2005). In these situations, individual aspirations, goals, and expectations upregulate and drive adaptation. This happens naturally when human agents—whether individual, group, or collective—are viewed as complex, open, and adaptive systems, fully situated in context.

## **Issues of Combination and Choice**

A major consequence of this analysis is that no mechanisms of bottom-up aggregation or top-down devolution are required to explain procedural action and collective modality. Regarding collective routine, particularly, there is no need to aggregate personal motivations, values, goals, and preferences, which is what most aggregation models seek to do (see Barney & Felin, 2013). Only a common subsystem of psychosocial functioning is upregulated, and most individual differences are downregulated. And as stated above, this naturally occurs when individuals are conceived as complex, open, adaptive systems. Different psychological subsystems may activate or not, combine or recombine, depending on the context and stimuli. At the same time, routine action is mediated by common, social-psychological mechanisms, such as social identity, collective memory, and docility. It is via these mechanisms that collective routine emerges as a mediated pattern of action (Winter, 2013). In fact, all types of modality could activate the same pattern of action. What distinguishes them as individual habit or collective routine is the downregulation and upregulation of different psychosocial processes.

It is important to acknowledge, however, that not all personalities or collectives are highly organized, and not all action is habitual or routine. Even if habit serves as a scaffold for personality, and routine serves as a scaffold of collectivity, non-procedural action regularly occurs, especially when novel, complex problems arise, and agents must be creative and innovative, or when important values and interests are at stake. Automatic, procedural routine does not suffice. In these situations, individual differences often upregulate and are salient again (Madjar et al., 2011). Agents must actively seek solutions about how to think and act. To illustrate, assume that members of a collective have strong personal preferences and expectations regarding newly offered benefits, such as access to health care and education. Personal goals and preferences are likely to upregulate in this situation. Individuals will form strong personal preferences, and the collective must negotiate how to allocate benefits among its members. This will entail an effortful process of collective choice, whereby members seek to communicate, compare, and combine their diverse preferences. More often than not, any solution will require truces and trade-offs (Cyert & March, 1992). An effortful method of collective aggregation is now required, and dilemmas of interpersonal comparison and combination quickly emerge. Ultimately, however, if this process succeeds, most members will be content, their personal differences will downregulate once again, and the outcome becomes routine. Mechanisms of routinization thereby mediate social order and organization.

In fact, this type of problem is central to social choice theory, welfare economics, and behavioral theories of organization (Arrow et al., 2010). In these fields, theories highlight the aggregation of choice, in the face of individual heterogeneity and opacity. Often, previously agreed procedures—such as voting and decision routines—allow members to reach consensus and make collective choices. Such methods enable the incomplete, but acceptable, aggregation of preferences, despite contrasting interests and commitments. Scholars then debate which routine procedures should be encoded, and why (Buchanan, 2014). In practical domains, this leads to political debates about the appropriate means of collective decision-making. But importantly, most theories of this kind assume that collective modalities already exist, typically as communities and institutions.

Furthermore, once made, collective choice often becomes routine and no longer requires debate or consensus building. Indeed, as noted earlier, many natural and artificial ecologies are relatively stable and munificent over time. Communities also become accustomed to the order of things, and people value the benefits which institutional order bestows. In these contexts, many people are docile, content with procedural controls, and seek no more. Collective choice is routine, not politicized, and can be accepted, as with the commons (Ostrom, 1990). As a practical matter, therefore, many situations are untouched by the technical impossibility of optimal aggregation (see Arrow, 1997). Collective life proceeds fairly and effectively, without the need to debate or vote, which is good news for social cohesion and civility.

## **3.2 Impact of Digitalization**

As preceding sections explain, artificial and human agents share numerous fundamental characteristics. Both are intelligent, goal-directed types of agent, and can be understood as complex, open, adaptive systems. Both also occur in similar patterns, as individuals, in hierarchies and networks. These similarities mean that human and artificial agents are well suited to collaborating as augmented agents. Furthermore, just like humans, artificial agents are supervised in different ways, some more plastic and self-generative. In fact, in unsupervised forms of artificial intelligence and machine learning, modality is hidden until it emerges through processing (Shwartz-Ziv & Tishby, 2017). Some artificial systems are therefore fully emergent, using highly compositive methods (e.g., Wu et al., 2010). This already happens in virtual domains (Aydin & Perdahci, 2019; Cordeiro et al., 2016). The same will be true of digitally augmented agents. We can expect to see self-generative metamodeling more widely.

However, as the complexity of data and processing increases, so do the time and resources required. Computer scientists therefore develop techniques to reduce the processing load. One major technique is the compression of modalities, that is, reducing the distinction between layers of form and function, so that they are easier to connect and transform (Wan et al., 2017). This entails defining functions, categories, and system boundaries to maximize integration and the ease of interaction. Similar techniques of modal compression and modularization are also applied in organizational settings, especially those which rely heavily on digital platforms and networks (Frenken, 2006). However, these techniques entail costs. The compression of modality often increases hidden complexity, and it then takes more effort to identify and process layers and levels. In computer science, techniques have been developed to manage these challenges, including sparse sampling and partial completion (Wang et al., 2018), plus hyperparameter pruning and tuning (Tung & Mori, 2020). The goal is to generate compressed, well-fitting metamodels, while also reducing the processing load (Choudhary et al., 2020). Resulting processes are more efficient, because they require less data and fewer steps to complete.
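To make the idea of compression concrete, here is a minimal, hedged sketch of magnitude-based pruning, one common compression technique in the family cited above (cf. Tung & Mori, 2020). The function and data are invented for illustration; real systems prune weight tensors inside frameworks such as PyTorch or TensorFlow. The principle is the same: small-magnitude parameters are zeroed out, shrinking the effective model while preserving its dominant structure.

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    `sparsity` is the target fraction of weights to remove (0.0 to 1.0).
    Ties at the threshold may prune slightly more than requested.
    """
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
pruned = prune_by_magnitude(weights, sparsity=0.5)
# → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

The trade-off the chapter describes is visible even here: the pruned model is cheaper to store and process, but information about the suppressed weights is hidden, and recovering it requires extra effort (retraining or tuning).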

# **Persistent Limitations**

By contrast, human beings are limited and constrained in this regard. Their modalities are relatively layered, distinct, and slow to adapt. Indeed, human modalities tend to be stable over time. Apart from anything else, physiological and neurological evolution are relatively glacial, and will probably remain so, at least for the foreseeable future. It takes time for human beings to learn and adapt. Personalities and relationships also tend toward stability, and for good reasons. They anchor the self and group in community. Social and cultural adaptation are sluggish too. Collective norms, organizations, and institutions all evolve relatively slowly, often requiring generational cycles. Therefore, human sluggishness and path dependence are likely to persist. Human modalities will be relatively layered and stable, compared to artificial agents.

In fact, some argue that moderate human sluggishness and path dependence are inherent and desirable in many contexts (Sen, 2018). These characteristics support the continuity of identity and meaning over time, for personalities, organizations, and cultures. They also elicit prosociality, because if human functioning is generally sluggish and incomplete, people must cooperate with each other to achieve shared goals. They cannot do so alone. Similarly, moderate intersubjective opacity often encourages trust and civility. When others are partially unknowable, people need to trust each other (Simon, 1990). By contrast, the absence of such limits (actual or perceived) can lead to the over-activation of individual or group differences. And if people feel separately empowered and independent of others, then antisocial outcomes become more likely, including intolerance and oppression. In these situations, emboldened autonomy can lead to mistrust or worse. Hence, while human limitations are sometimes frustrating, needing each other promotes prosociality and community.

Reflecting these contrasting tendencies, dilemmas arise when human and artificial agents combine in augmented modalities. Their prior dispositions are resilient. Artificial agents tend to compress modality, thereby reducing the distinctions between layers of form and function, while human modalities tend to be layered and uncompressed. When both combine, therefore, artificial components could be highly compressed and flattened, while the human components are uncompressed and layered. For example, in massive online gaming, people compete against each other in a highly individualistic or group fashion, which evidences uncompressed human modality. At the same time, they collaborate with highly compressed artificial agents and avatars which interact and combine with ease (Yates & Kaul, 2019). The virtual world is compressed and flat, while the human players are layered and distinct, as individuals and teams. A risk in this context is extreme modal divergence, where the human players experience strong reinforcement of layered organization and identity, even as their artificial partners further compress. Overall coordination and performance are likely to suffer.

Second, artificial agents are increasingly self-generative, while human agents are less capable in this regard. Hence, augmented modalities might emerge in which artificial components are highly self-generative, while human components are not. Online gaming is illustrative here too. Individual personalities are relatively stable and supervised over time, while artificial agents can be highly dynamic and self-generative (Castro et al., 2018). A major risk in these situations is extreme modal convergence by over-compression. For example, players may immerse themselves too deeply and become socially disengaged, lacking a clear sense of human association and control (Ferguson et al., 2020). In fact, studies suggest that addicted players do become less sensitive to others. In more extreme online situations, people may surrender to artificial supervision and forfeit autonomous self-regulation. Key aspects of their individual functioning are downregulated and latent.

# **Dilemmas of Agentic Modality**

Novel dilemmas therefore arise for augmented modality. These dilemmas derive from different human and artificial tendencies. On the one hand, augmented modalities could be extremely divergent, by combining static human layering with dynamic artificial compression. The topography of such modality would be equivalent to a heterogeneous landscape, covered with irregular peaks and plains. Not an easy terrain to navigate, in terms of processing (Baumann et al., 2019). In such cases, metamodels would be underfitting. That is, they would admit excessive noise and variance, and thus fail to distinguish potential patterns of augmented agency accurately (Goodfellow et al., 2016). But on the other hand, augmented modalities could be extremely convergent, by allowing artificial compression to suppress human layering. This topography would be equivalent to a smooth landscape, arguably too easy to navigate, because metamodels would be overfitting. That is, they would omit too much noise and variance, and thus fail to capture variant patterns of augmented agency accurately. Or vice versa, augmented modalities could be extremely convergent, by allowing human layering to overwhelm and dominate modality. Now the topography would be a predictable landscape which lacks variety.
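The underfitting/overfitting contrast invoked here is standard in machine learning (Goodfellow et al., 2016), and a deliberately simple toy can show it. All data and models below are invented for illustration: an underfitting model admits too much unexplained variance everywhere, while an overfitting model memorizes its training cases and fails on unseen ones.

```python
# Toy data: y roughly follows y = x, with small noise.
train = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.8)]
test = [(5, 5.1), (6, 5.9)]

def mse(model, data):
    """Mean squared error of a model over (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# Underfitting model: ignores structure, always predicts the train mean.
train_mean = sum(y for _, y in train) / len(train)
def underfit(x):
    return train_mean

# Overfitting model: memorizes train points exactly, guesses 0 elsewhere.
memory = dict(train)
def overfit(x):
    return memory.get(x, 0.0)

# Better-fitting model: captures the underlying linear trend y ≈ x.
def fit(x):
    return float(x)

assert mse(overfit, train) == 0.0              # perfect on seen data
assert mse(overfit, test) > mse(fit, test)     # but fails to generalize
assert mse(underfit, train) > mse(fit, train)  # misses the pattern everywhere
```

The middle ground, a model matched to the real structure of the data, generalizes best, just as a well-fitting metamodel of augmented agency must balance artificial compression against human layering.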

Furthermore, these effects suggest poor supervision of the entrogenous mediators discussed in the preceding chapter. Recall there are three such mediators: intelligent sensory perception, performative action generation, and contextual learning, which are critical for augmented modality. However, owing to their inherent dynamism and complexity, these mediators are difficult to supervise. They exploit rapid, intra-cyclical feedforward mechanisms, which typically elude human monitoring. They cycle quickly with high precision, and are largely inaccessible to consciousness. It is therefore difficult to involve human agents in the supervision of entrogenous mediation. Augmented modalities will easily drift toward divergence or convergence.

## **Ambimodality**

To conceptualize this novel feature of digitally augmented modality, I import another term, "ambimodality." It comes from chemistry and refers to single processes which result in different outcome states (Yang et al., 2018). Notably, the term incorporates the prefix "ambi" once again, meaning "both." With respect to augmented agency, ambimodality refers to single processes which lead to different modal outcomes, and more specifically, processes which result in dynamic artificial compression, plus stable human layering. A system is therefore highly ambimodal when it combines both extremely compressed and uncompressed form and function. Alternatively, lowly ambimodal agents will be highly convergent, either fully compressed and dynamic in artificial terms, or fully layered and stable in human terms.

Consider the following examples. Many contemporary organizations are pursuing digital transformation. In doing so, they introduce highly compressed artificial intelligence and machine learning across the organization. However, their human employees remain uncompressed individuals and groups, layered and hierarchical. The overall result is highly ambimodal, making the organization difficult to integrate and coordinate. People and artificial agents often struggle against each other, as humans try to maintain their social identities and commitments, in an increasingly flat and fluid, digitalized environment (Kellogg et al., 2020; Lanzolla et al., 2020). Ironically, members of the organization may be increasingly connected but feel less united. Alternatively, other organizations are becoming fully virtual and digitalized, and human actors are peripheral, perhaps contract "gig" workers. The system is highly compressed and lowly ambimodal, making the organization easier to integrate and control. However, human identities and commitments are largely expunged. In fact, studies already report these effects, albeit without labeling them as ambimodal (e.g., Kronblad, 2020).

At the same time, it must be noted that ambimodal systems are not inherently dysfunctional. Human modality, whether digitalized or not, is a consistent blending of contrasts, combining stability and change, the self and the other, the one and the many (Higgins, 2006). Indeed, moderate levels of ambimodality can be advantageous in volatile, uncertain contexts. This is because, when environments are unpredictable, variable modalities enable a wider range of potential forms and functions, thereby enhancing adaptive fitness. In this respect, moderately ambimodal agents can be more robust and adaptive (Orton & Weick, 1990). In contrast, fully non-ambimodal agents generate far fewer potentials. These systems are uniformly structured and integrated. Sometimes this is beneficial, for example, in stable, technical environments. But otherwise, non-ambimodal systems tend to be inflexible and fragile. This type of risk arises in tightly bound groups (Vespignani, 2010) and in the "iron cage" of bureaucratic institutions (Weber, 2002). A major task for augmented supervision, therefore, is to maximize ambimodal fit by combining appropriate levels of modal compression and layering.

# **3.3 Patterns of Ambimodality**

Based on the foregoing discussion, this section summarizes and illustrates the main features of digitally augmented ambimodality, and especially systems which combine extreme forms of artificial compression and/or human layering. To begin with, it is important to acknowledge that digital augmentation offers many potential benefits, for individuals, groups, and collectives. Augmented agents will possess unprecedented capabilities to compose and recompose new patterns of agency and action. If well supervised, ambimodality therefore increases agentic potentiality. In many task domains, significant benefits are already apparent. However, at the same time, it poses new risks. When human and artificial agents combine, their different characteristics can skew augmented modality. On the one hand, augmented agents could be overly divergent, by combining compressed artificial forms and functions with more layered human forms and functions. On the other hand, agents could be overly convergent, fully dominated by artificial compression, or by human layering. In other words, there are risks of inappropriate high or low ambimodality. Augmented agents of this kind will be less coherent and potentially dysfunctional. Recall the examples given above, of organizations which undergo digital transformation and either alienate or expel people in the process.

## **Low Ambimodality**

In some augmented agents, there will be low ambimodality. The resulting system will be highly integrated and convergent. In fact, this type of augmented agent is like a closely knit group, but the relationships are internal, between human and artificial collaborators. Figure 3.1 illustrates the inner workings of such a system, assuming full digitalization and high modal compression. The figure builds on the generative metamodel of augmented agency, shown in Fig. 2.3. Shaded circles indicate digitalized processes, and unshaded circles are fully human. Adopting this approach, Fig. 3.1 shows two human agents A3 and B3, in the upper and lower portions of the figure respectively, each with three major phases of processing: input stimuli trigger sensory perception (SI and SP); followed by cognitive-affective processing, which leads to action generation (CA and AG); and then behavioral-performative outputs, which stimulate evaluation of performance (BP and EP), conditional on sensitivity to variance. Evaluation may subsequently trigger feedback encoding (FB), while feedforward encoding occurs intra-cyclically (FF). Both agents, A3 and B3, also combine in the

**Fig. 3.1** Low agentic ambimodality

relational group R3, which is shown by three larger, overlapping circles. Relations between phases are mediated by digitalized entrogenous mechanisms: intelligent sensory perception (SP), performative action generation (AG), and contextual learning (from FB and FF). Finally, the agents also form a collective form C3, which spans the center of the figure.

Note that all the small circles in Fig. 3.1 are shaded. Hence, digitalized processes dominate in this scenario, and purely human processes are downregulated and latent. Human modalities are therefore compressed, shown by the lighter boundaries for human agents A3 and B3. Human forms and functions are less distinct. Also recall that lowly ambimodal agents are like closely knit collaborative groups. This feature is shown by the heavy boundaries for the relational group R3, which encompasses all the digitalized processes depicted by shaded circles. Moreover, the main phases of the relationship are mediated by entrogenous mechanisms, indicated by the intersection of the large diamond shapes. In summary, Fig. 3.1 illustrates a lowly ambimodal augmented group which is highly digitalized and compressed overall.

As noted earlier, this scenario poses significant downside risks. Particularly, agentic modalities could overcompress. The downregulation of purely human functioning could go too far. Digitalized routine would overwhelm human relating and communication. Individual distinctions are effectively dissolved. If this occurs, important features of being human may be lost, or at least suppressed in this group, including the sense of autonomous agency and identity, autobiographical narratives, as well as enduring personal commitments. This type of augmented group is therefore potentially dysfunctional, because many human needs and interests will be squashed by the convergent overcompression of modality. Low ambimodality therefore presents a major challenge for the supervision of augmented agency: how to combine human layering with artificial compression, in ways which exploit and enhance the value of both, while maximizing metamodel fit?

## **High Ambimodality**

Other augmented modalities are highly ambimodal. In these scenarios, human and artificial modalities are markedly different, in terms of their compression and dynamism. Human modalities could be hierarchical and layered, while artificial modalities are compressed and flat. Forms and functions are highly distinct and divergent. Now augmented agents are like very heterogeneous groups or families, in which members are closely related but often disagree and fail to cooperate. Figure 3.2 illustrates the inner workings of this kind of system. Once again, there are two human agents, labeled A4 and B4, each with the same three major components: input stimuli which trigger sensory perception (SI and SP); cognitive-affective processing which leads to action generation (CA and AG); and behavioral-performative outputs, which stimulate evaluation of performance (BP and EP), which may subsequently trigger feedback encoding (FB), while feedforward encoding occurs intra-cyclically (FF). Both agents, A4 and B4, also combine in the relational group R4, which is shown by the three large oval shapes. The same entrogenous mediators are central once again, indicated by the intersection of the large

**Fig. 3.2** High agentic ambimodality

diamond shapes. The agents combine in collective form C4, which spans the center of the figure.

Digitalized processes are shaded, as before, and human processes are unshaded. In contrast to Fig. 3.1, however, digitalized processes do not dominate in Fig. 3.2. Human modalities are more distinct and significant. Human differences are upregulated and active. Hence, there are more unshaded circles, showing human processes, compared to the system in Fig. 3.1. Granted, the two individuals, A4 and B4, collaborate within relational group R4 and collective C4. However, individuals and groups retain greater modal distinction, compared to the system in Fig. 3.1. But in consequence, new risks appear. Human components may be highly layered, while artificial partners are highly compressed, requiring extra processing to integrate and coordinate them. At the same time, artificial agents will be highly compressed and require little effort to integrate across layers. Therefore, the combined system will exhibit different forms and functions, between human and artificial components. Overall supervision is divergent and potentially dysfunctional. In fact, as noted above, many contemporary organizations report this type of problem. They are digitally transforming many processes and systems, but their human members remain layered and cannot easily adapt (Lanzolla et al., 2020). Organizational integration and coordination are increasingly difficult to achieve. Individual differences are active, routines are fragile, and the system is harder to control. Once again, important features of being human are at risk, but now for different reasons. The persistent layering of human modality could squander the potential benefits of digital augmentation by reinforcing limiting priors. Augmentation results in ambimodal misfit and dysfunction.

## **3.4 Wider Implications**

Throughout the modern period, scholars have assumed stable agentic forms and functions, and especially individual, group, and collective modalities. There are obvious biological and ecological reasons for doing so. Individuals, familial groups, and populations are the key organizing modalities of mammalian life (Mayr, 2002). Many theories of economics, politics, and institutions also focus on these modal distinctions, often drawing from psychology and sociology to do so. In most of these disciplines, scholars continue to debate how collectivities relate to groups and individuals. Questions remain about bottom-up versus top-down processes, and hence about methodological individualism versus collectivism, although a growing number inhabit the middle ground, theorizing about the coevolution of agentic modalities, often highlighting the role of groups and networks (e.g., Giddens, 1984; Latour, 2005).

Framing all these efforts is the modern, post-Enlightenment elevation of autonomous, rational agency. Reasoning persons took center stage, freed from the premodern strictures of superstition and autocratic order. Against this historical backdrop, the central thesis of this chapter is that mass digitalization is transforming agentic modality yet again. By exploiting digitally augmented capabilities, humanity will compose more variable forms of agentic expression and organization. Augmented modalities will be increasingly compositive and self-generative. It will also be possible to compare, contrast, and adapt modalities in a precise, dynamic fashion (Cavaliere et al., 2019). Apart from anything else, these developments challenge deeply held assumptions about the inherent opacity of reasons, preferences, and commitments (Sen, 1985). Thanks to digitalization, modality will be more transparent and composable, thereby mitigating the risk of agentic opacity for social organization.

However, as earlier sections of this chapter explain, if augmented agency is poorly supervised, modality could skew either toward extremely convergent, low ambimodality, making agents too homogeneous and lacking in diversity, or toward extremely divergent, high ambimodality, making agents too heterogeneous and lacking in coordination. In either scenario, digital augmentation impacts negatively on modality and degrades the efficacy of persons, groups, and collectives. Hence, the problematics of agentic modality expand from modern concerns about reductive individualism versus holistic collectivism, to include (a) concerns about artificial overcompression combined with human overexpansion, (b) the potential suppression of modal diversity and plasticity, and (c) the implications of these distortions for human identity, efficacy, and coherence.

## **Agentic Ambimodality**

Among the top priorities for future research, therefore, is digitally augmented, agentic ambimodality. Recall the definition again: ambimodality refers to single processes which result in different outcome states. With respect to augmented agency, it refers to the combination of dynamic, artificial compression of form and function with stable human layering and distinction. As previously noted, ambimodality is not fundamentally new, even if known by other names. But the property has not been explicitly conceptualized before, probably because its effects have been largely stable and moderate. Indeed, as noted earlier, moderate levels of ambimodality can be advantageous. For example, in highly volatile contexts and uncertain task domains, diverse modalities produce a wider range of agentic potentialities, which enhances adaptive fitness. Likewise, moderate ambimodality strengthens the resilience of personalities (Cervone, 2005) and institutions (Kirman & Sethi, 2016), and most agents benefit from an optimal level of distinctiveness (Leonardelli et al., 2010). If anything, modern scholars explore how to encourage moderate ambimodality by developing loosely coupled, modular systems (Westerman et al., 2006).

Fresh challenges now arise because digital augmentation greatly amplifies these effects. Ambimodal extremes are more likely, as is the dynamic composition of alternative agentic forms and functions. The full range of options was shown earlier in Fig. 2.6, which depicts alternative combinations of human and artificial supervision in augmented agency. A major task, therefore, is the specification of hyperparameters for modal compression and layering, the goal being to determine the appropriate level of ambimodality in any context, and thereby to maximize metamodel fit. Otherwise, agents' inherent tendencies could lead to inappropriate extremes. These should be key topics of future research. Scholars can look to computer science for guidance, where similar topics are already major foci of research (Sangiovanni-Vincentelli et al., 2009). Management scholars are also exploring these topics, in the digital transformation of organizations (e.g., Lanzolla et al., 2020; Ransbotham et al., 2020). Some researchers study how to embed values and commitments in the supervision of digital augmentation, for example, by clearly articulating the human purpose of systems design.

## **Problems of Aggregation**

In numerous fields, theories posit routine as a key mediator of group and collective modalities. But questions remain about the origin and functioning of routine: does it emerge via bottom-up aggregation of habit, does it develop holistically and then devolve top-down, or do both processes occur? These are central questions for behavioral and evolutionary theories of organizations and markets (Nelson & Winter, 1982; Walsh et al., 2006). Furthermore, many scholars in these fields argue that individuals' cognitive and empathic limitations—especially bounded rationality and intersubjective opacity—aggregate to collective limitations, compromises, and constraints. Hence, just like individuals, collectives employ procedural routine in decision-making, problem-solving, and the reading of group mind (Cyert & March, 1992). But exactly how aggregation occurs in these situations also remains a contentious puzzle (see Barney & Felin, 2013; Winter, 2013). Similar questions persist in other fields. For example, in microeconomics, scholars investigate the limits of interpersonal comparison and aggregation in collective choice (Sen, 1997). In legal theory and ethics, scholars analyze how empathic limitation shapes the organization and aggregation of commitments in contractual consensus (Sen, 2009). However, aggregation is typically imputed and not yet adequately explained.

This chapter proposes a solution, by viewing human agents as complex, open, and adaptive systems, which respond to variable contexts. From this perspective, humans naturally experience the downregulation of individual differences in the recurrent, predictable pursuit of common goals. In parallel, they experience the upregulation of collective characteristics, including social norms and control procedures. In this way, it is possible to explain the origin and functioning of individual habit and collective routine without aggregating full personalities, personal preferences, beliefs, goals, and motivations. A common subset of mediating mechanisms does most of the work (Briñol & DeMarree, 2012). And to repeat, no special process of bottom-up aggregation or top-down devolution is required. Rather, many individual differences are downregulated and latent, while common characteristics are upregulated and active. Thus, habit and routine coevolve in procedural action.

These processes warrant deeper investigation, partly because habit and routine are prime targets for digital augmentation, but also because digital augmentation implies more dynamic processes of habit and routine (Bandura, 2007; Davis, 2015). Procedures will need to adapt and recompose in a dynamic fashion, adjusting levels of modal compression and layering. The variable upregulation and downregulation of cognitive-affective processes will be key to these dynamics. In these respects, habituation and routinization will require more deliberate supervision. Recent investigations into the adaptation of habit and routine offer relevant insight (Winter et al., 2012). Part of the solution will lie in identifying and managing the core components of any procedural action, and then upregulating or downregulating other factors, depending on the situation and context, to maximize metamodel fit. Digitally augmented processes will undoubtedly assist (see Murray et al., 2020). However, many questions remain unanswered.

## **Implications for Institutions**

This analysis of routine has additional implications for social and economic institutions. For example, markets and businesses are supported by routines of production, consumption, and transaction; political institutions rely on routines of representation, deliberation, and decision-making; and legal institutions rely on routines of examination, judgment, and sanction. However, as this chapter explains, collective augmented agents could skew toward extreme divergence or convergence. If artificial and human components overly diverge, collectives will be internally conflicted and lack coherence. Whereas if they overly converge, they could be overdetermined by artificial agents, or dominated by inflexible human hierarchy and priors. In the meantime, social networks and virtual power are growing rapidly, but governance and trust are lagging. We see these effects already, for example, where digitalization is destabilizing the administration of politics and justice (Hasselberger, 2019; Zuboff, 2019).

In a highly augmented world, therefore, historic sources of collective coherence and consistency—such as negotiated truces, voting procedures, and routine docility—may be less effective. More will be known, transparent, and communicable, reducing the need for truces, voting, and docility. Entrogenous mediators will play a critical role here. New forms of intelligent sensory perception, performative action generation, and contextual learning will mediate greater transparency and dynamism. Augmented agents will compose and recompose by design, rather than by imitation and other traditional means. In this respect, they will be generative and nearly composable, not only adaptive and nearly decomposable (see Simon, 1996). This contrasts with prior assumptions that collective agency and choice emerge gradually, often through iterative processes of incomplete comparison and negotiation.

Viewed positively, these changes will support more agile organizations and institutions. On the downside, however, augmented collectives could over-compress and squash valued features of human experience. Alternatively, human and artificial agents might diverge and conflict, even as they collaborate more closely. By contrast, for most of human history, agentic modalities have been viewed as layered, stable forms. During premodernity, the dominant layers were communal and patriarchal, whereas, in the modern period, the most important modal layers are individual persons and social collectives. Digital augmentation problematizes these assumptions. Old stabilities and constraints are relaxing. Newer, compositive methods are now feasible, leveraging highly digitalized capabilities and networks. At the same time, fixed modal layers are giving way to more hybrid, self-generative forms. The universe of agentic modality is becoming more pluralistic, and this trend is likely to accelerate. It offers genuine promise but also brings new risks. Effective supervision will be critical.

# **References**



# **4 Problem-Solving**

Throughout the modern period, scientific discovery, widespread education, and industrial and economic development have encouraged commitment to rationality and humanistic values as progressive forces (Pinker, 2018). Assuming such commitments, systematic reasoning and the scientific method promise resolution of increasingly complex and consequential problems. The proper ambition of problem-solving is then to transcend the limits of ordinary capability, even if rational ideals are forever unreachable. Not surprisingly, the advocates of rationality and scientific method are impatient with constraints on problem-solving and view them as challenges to be overcome. And their impatience is reasonable, given the dramatic growth of knowledge and technology during the modern period. Capabilities have greatly expanded, and many assumed limits have receded. Digital augmentation promises to radically enhance and accelerate this trend. Problem-solving continues to advance.

Even so, human capabilities remain limited. People still need to reduce potential complexity and manage cognitive load. They often do this by simplifying problem representation and/or solution search, depending on the relative significance of each activity in any problem context. This entails a series of trade-offs between accuracy and efficiency, which carry potential costs and risks (Brusoni et al., 2007). Most commonly, the simplification of sampling and search admits distorting biases, myopia, and noise into problem-solving (Kahneman et al., 2016; Kahneman & Tversky, 2000). Granted, in some situations, such simplification is warranted and satisfactory. For example, fast and frugal heuristics often work best in uncertain or urgent situations (Gigerenzer, 1996). And when they do, the practical challenge is not to mitigate the distortions of simplification, but to maximize its effectiveness. Either way, people naturally simplify sampling and/or search, resulting in less complex problems and solutions, respectively. They do so for a range of reasons: to maximize limited resources and capabilities; because prior commitments obviate the need for comparative processing (Sen, 2005); to maintain cultural norms and controls (Scott & Davis, 2007); or because heuristics are most appropriate for the problem at hand (Marengo, 2015).

Herbert Simon (1979) was among the first to expose these patterns. He argues that to solve problems with bounded or limited rationality, people simplify different aspects of problem-solving and satisfice at lower levels of aspiration, rather than fully satisfying criteria of optimality. Simon (ibid., p. 498) identifies two broad types of satisficing in problem-solving: "either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world." On the one hand, that is, agents simplify the representation of problems, to reach optimal solutions. In this case, more processing is required, owing to the complexity of solution search. The major risks are myopic sampling and problem representation. Following common naming conventions, I call this type of problem-solving normative satisficing (see Simon, 1959). On the other hand, agents simplify solutions and address more realistic, better described problems. In this case, more processing is required, owing to the complex representation of the problem itself. Now the major risks are myopic solution search and selection. I call this type of problem-solving descriptive satisficing, again following convention.

Notably, the latter approach—accepting satisfactory solutions to more realistic problems, or what I call descriptive satisficing—is the type of problem-solving found in behavioral theories of decision-making, economics, and organizations. Stated in more formal language, it seeks no worse solutions to the best representation of problems. Whereas the former approach—seeking optimal solutions to simplified problems, which I call normative satisficing—is typical of classical microeconomics and formal decision-making (March, 2014). Put more formally, it seeks the best solutions to no worse representations of problems. Hence, as Sen (1997b) explains, satisficing can be conceived as a type of formal maximizing, meaning problems and/or solutions are partially ordered, and agents accept some no worse option as good enough, assuming an aspiration level.

However, while descriptive satisficing is widely studied, normative satisficing is not. Even though Simon explained this important distinction decades ago—that classical theory also satisfices in the normative sense, by seeking optimal solutions for a simplified world—few studies investigate this phenomenon. Levinthal (2011, p. 1517) also observes this oversight, when he writes that all "but the most trivial problems require a behavioral act of representation prior to invoking a deductive, 'rational' approach." Yet despite his astute observation, with a few notable exceptions (e.g., Denrell & March, 2001; Fiedler & Juslin, 2006), most behavioral researchers focus on descriptive satisficing, that is, finding satisfactory solutions for a more realistic world (e.g., Kahneman et al., 2016; Luan et al., 2019). Granted, this is an important topic. However, as a consequence, we still await a full treatment of bounded realism, representational heuristics, and normative satisficing, especially in classical theory (Thaler, 2016). This is another large project, and I will not attempt to fill the gap here.

## **Historical Developments**

These questions have a history worth recounting. For over two centuries, classically inspired economists have idealized Adam Smith's (1950) notion of the invisible hand to explain collective, calculative self-interest (Appiah, 2017). Equally, they idealize his characterization of *Homo economicus* as a rational egoist bent on optimizing utility. Here are the roots of normative satisficing in microeconomics: seeking optimal, calculative solutions to simplified problems of economic utility. However, Smith (2010) also understood that rational egoism is a fictional, albeit functional, ideal. In parallel, he recognized the complexity of human sentiments and commitments. From this more realistic perspective, *Homo economicus* defers to *Homo sapiens*, meaning a richer conception of human agency and psychology (Thaler, 2000). Therefore, Smith also set the agenda for descriptive satisficing: accepting satisfactory solutions to well described, realistic problems, including problems of preferential and collective choice. In recent years, more scholars are embracing this broader conception of economic agency, responding to the increasing complexity and variety of choice (e.g., Bazerman & Sezer, 2016; Higgins & Scholer, 2009), although, as noted above, most research is still framed in terms of descriptive satisficing and largely overlooks the puzzles of normative satisficing, including myopic sampling and representational heuristics. Notable exceptions exist in the literature, but they are exceptional (e.g., Fiedler, 2012; Ocasio, 2012).

By contrast, the problems of both normative and descriptive satisficing are central to modern scientific method. Experimental researchers have consistently refined their methods of attention, observation, sampling, and problem representation. Indeed, the technological enhancement of attentional focus and observation is central to scientific method, along with the enhancement of data analysis and solution search. For example, in the early modern period, the telescope and microscope revolutionized observation in astronomy and biology, respectively. Using these tools and techniques, novel problems emerged which rendered prior explanations obsolete. In parallel, new mathematical and statistical methods enabled deeper analysis. Fast forward to the present, and observational tools include satellites, particle accelerators, and quantum microscopy. At the same time, computing technologies massively enhance the compilation and analysis of observational data. Using these techniques, today's scientists represent and solve increasingly novel, complex, highly specified problems. Natural science continues to transcend the limits of human capabilities and consciousness, especially in the sampling and representation of problems.

Not surprisingly, social and behavioral scientists attempt to do the same (Camerer, 2019). In these fields, however, selective sampling and experimental techniques prompt concerns about oversimplification and validity. Many caution that social and behavioral phenomena are too variable and situated to be reduced to measurable constructs and mechanisms (e.g., Beach & Connolly, 2005; Geertz, 2001). Regarding problem-solving particularly, some argue that this activity is best explained in terms of narrative interpretation and sense-making, rather than rational expectations, preference ordering, and reasoned choice (e.g., Bruner, 2004; Smith, 2008). By implication, determinant models of problem-solving will be overly simplified and mired in assumptions of normality and stability. Others are somewhere in between. They still present formal models and methods, but embrace a broader psychology of commitments, including empathy and altruism (e.g., Ostrom, 2000; Sen, 2000). As a further example, Stiglitz et al. (2009) argue for a richer description of human needs and wants, shifting toward *Homo sapiens*, and demonstrate how these could be measured and analyzed. Their ambition is an economics of human flourishing and well-being, with public policies to match.

Nevertheless, like most, these scholars agree that something must be simplified, to develop useful theories and actionable knowledge. Debate then focuses on what to sample, simplify, and conceptualize, when and how, and with what consequences for problem-solving. As stated above, those who endorse classical theory tend to simplify problems and psychology, seeking to optimize calculative solutions, whereas behavioral approaches seek richer problem representation, and then accept approximating heuristic solutions. The debate exemplifies the modern problematic noted in earlier chapters: to what degree can and should human beings overcome their limits, to be more fully rational, empathic, and fulfilled?

## **Contemporary Digitalization**

Digitalization now brings the advanced capabilities of empirical science and computer engineering to everyday, human problem representation and solution. For example, consider personal digital devices, such as smartphones and tablet computers. They grant individuals access to increasingly powerful and intelligent sampling, search, and computation, far beyond traditionally bounded capabilities. Using such devices, humans become collaborators in digitally augmented problem-solving. Importantly, these capabilities also reduce the need for trade-offs. Less must be simplified. Artificial agents can process the enormous amount of information required to analyze highly complex problems and choices, and at all levels of agentic modality, including collectives (Chen, 2017). In augmented collaboration, therefore, humans will have the potential to behave fully as *Homo sapiens*. In fact, it becomes feasible to pursue highly discriminate problems and solutions in many ordinary contexts, not just in the laboratory (Kitsantas et al., 2019). Thanks to digital augmentation, much human problem-solving will approach scientific levels of detail, precision, and rigor, in both sampling and search.

Yet at the same time, natural human capabilities remain limited, and parochial values and commitments will likely persist. Given these enduring features of human problem-solving, digital augmentation may compound rather than ameliorate behavioral dilemmas. For example, if racial and gender biases are encoded into training data and algorithmic processing, machine learning leads to even greater discrimination. Digitally augmented capability amplifies biased beliefs about gender and race (Osoba & Welser, 2017). As another illustration, consider classically inspired economics, in which problem-solving is often assessed in terms of the rational optimization of self-interested utility. Here too, digital augmentation could lead to increasingly dysfunctional problem-solving, if augmented agents simply reinforce narrow assumptions about self-interest and expectation, and overlook wider ecological, social, and behavioral factors (Camerer, 2019; Mullainathan & Obermeyer, 2017). Digitally augmented capability would thus amplify the idiosyncratic noise which often clouds decision-making (Kahneman et al., 2021). It is therefore appropriate to ask, under which conditions will digital augmentation enable more effective problem-solving, rather than perpetuating the limiting myopias and models of the past; and hence, which additional procedures might help to minimize the downside risks of digital augmentation, while maximizing the upside? These questions are urgent. Already, the speed and scale of digital innovation are transforming much problem-solving. Organizations, institutions, and citizens are struggling to keep up, trying to remain active and relevant in the supervision of these digitally augmented processes.

# **4.1 Metamodels of Problem-Solving**

To analyze the digital augmentation of problem-solving more deeply, we first need to review the dominant metamodels of problem-solving, that is, the major problem-solving choice sets. As the preceding discussion explains, modern approaches combine two main functions: sampling of various kinds, which results in problem representation, followed by solution search and selection. Both functions—problem sampling and representation, and solution search and selection—can be more, or less, specified and complex. In ideal, optimal problem-solving, each would be fully specified and result in the best possible option, although this is rarely achieved and often impossible in practical contexts. In this regard, ideal problem-solving is truly an ideal, whereas people function with limited resources and capabilities. Given these constraints, a few metamodels of problem-solving are possible.

First, as Simon (1979) explains, agents can seek optimal solutions to simplified problems, that is, the normative satisficing of classical theory. Often, such solutions are axiomatic and formalized, while problems are represented in clear, but simplified, terms. Hence, from a critical behavioral perspective, "utility maximization" is a simplified representation of the problem of economic choice. And highly calculative solutions to such problems—such as rational or adaptive expectations—can only aspire to optimality, because of normative satisficing. If people choose this metamodel, more processing is required, owing to the complexity of optimizing the solution. As noted earlier, the major risks of doing so are the distortions which arise from myopic sampling and simplified problem representation.

Second, agents can seek satisfactory solutions to more fully described, realistic problems, that is, descriptive satisficing. Solutions are frequently heuristic and approximate, while problems are represented in a more detailed fashion. In this type of satisficing, solutions are partially ordered, while problem representation is highly discriminated, striving for completeness. Hence, problem representation is optimized, meaning the chosen representation is the best alternative, while the chosen solution is maximal, that is, no worse than the alternatives. If people choose this metamodel, more processing is required, owing to the complexity of the problem itself. Major risks arise from myopic solution search and simplified selection criteria.

Third, it is at least conceivable to seek optimal solutions to realistically represented problems, that is, ideal problem-solving which is not satisficing in either sense. However, as noted earlier, this type of problem-solving is rarely observed in human contexts and is arguably impossible in practical terms. Nonetheless, ideal metamodels are conceivable and play significant roles in abstract thought and formal approaches (March, 2006). Both problem representations and solutions are, in principle, fully ordered, and the best options are selected. Hence, I describe this metamodel as ideal optimizing. If people try to apply it in practice, they typically fall short, owing to high complexity and limited capabilities, which is not to say they should not try. As March and Weil (2009) argue, pursuing unreachable ideals has its place, by helping to inspire and engage agents in the face of uncertainty and resistance.

Fourth, agents can seek satisfactory solutions to simplified problems, which is not satisficing either, because agents do not seek to optimize problem representation or solution. Instead, both are no worse, at best, given some aspiration levels. In fact, this is the most frequent and feasible metamodel of problem-solving in practical terms (Sen, 2005). Much of the time, people solve imprecise problems in imprecise ways, and this is good enough. Hence, we can expand Simon's original analysis. As he correctly explains, there are situations in which agents rightly pursue optimal problem representation or optimal solutions, and one or the other might be attainable, or at least approachable. However, these satisficing options are an important subclass of problem-solving, not the full universe. Much practical problem-solving is not optimizing in either respect. It is fully maximizing instead, often owing to high complexity, or because optimizing is simply unwarranted. Such problem-solving is therefore doubly myopic, in both problem sampling and representation, and solution search and selection. I label this metamodel of problem-solving practical maximizing: choosing incompletely ordered, satisfactory solutions to incompletely ordered, simplified problems. From this perspective, many "fast and frugal heuristics" are instances of practical maximizing (see Gigerenzer, 2000).

It is also important to note that much practical maximizing is procedural, performed as individual habit or collective routine. The everyday world presents many ordinary problems which are appropriately solved in this way. Moreover, as the preceding chapter explains, this type of problem-solving is central to the coherence and organization of agentic modalities. In fact, many modalities come into being as systems of practical maximizing in problem-solving. Fortunately, many problems are recurrent, easily recognized, and require little analysis to resolve. Habitual and routine problem-solving are sufficient. Not surprisingly, agentic modalities cohere around patterns of such procedures. Bundles of habits then mediate personalities, and bundles of routines mediate organizations. There are fewer processing trade-offs in both scenarios because less effort is required. Procedural, practical maximizing is efficient and sufficient. That said, failures still occur and are impactful, because habit and routine often fulfill important control functions. Maximal does not mean minimal or trivial, but rather less than optimal.

Figure 4.1 summarizes the four major metamodels of problem-solving just described. The figure's dimensions are the complexity of problems and the complexity of solutions, both ranging from high to low. The figure also assumes that complexity is proportional to the degree of variation, and hence to the processing required for rank ordering. That is, the more varied the choice set, the more complex it is, and the more processing is required to discriminate between options. Given these assumptions, quadrant 1 depicts the ideal optimizing metamodel, or the best solution for the best representation of problems. Quadrant 2 shows descriptive satisficing, which is seeking satisfactory solutions to the best representation of problems. Next, quadrant 3 shows normative satisficing, or seeking the best solutions to simplified problems. Finally, quadrant 4 shows practical maximizing, which is seeking satisfactory solutions to simplified problems. In this final metamodel, problem representation and solution search are both no worse than the alternatives, and hence maximizing on both dimensions.
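The four quadrants can also be read as combinations of two elementary selection rules: optimizing (fully order the choice set and take the best) and maximizing (accept the first option that is no worse than an aspiration level). The sketch below is purely illustrative and not from the book: the function names, the toy scoring rule, and the numeric values are all my assumptions, introduced only to make the combinatorics of the quadrants concrete.

```python
def optimize(options, score):
    """Fully order the options by score and return the best one."""
    return max(options, key=score)

def maximize(options, score, aspiration):
    """Return the first option that is 'no worse' than the aspiration level.

    If nothing clears the aspiration, settle for the last option examined.
    """
    for option in options:
        if score(option) >= aspiration:
            return option
    return options[-1]

def solve(representations, solutions, rep_score, sol_score,
          optimal_rep=True, optimal_sol=True, aspiration=0.5):
    """Combine the two rules into the four metamodels of Fig. 4.1.

    optimal_rep and optimal_sol         -> ideal optimizing       (quadrant 1)
    optimal_rep, not optimal_sol        -> descriptive satisficing (quadrant 2)
    not optimal_rep, optimal_sol        -> normative satisficing   (quadrant 3)
    neither                             -> practical maximizing    (quadrant 4)
    """
    if optimal_rep:
        problem = optimize(representations, rep_score)
    else:
        problem = maximize(representations, rep_score, aspiration)
    if optimal_sol:
        solution = optimize(solutions, sol_score)
    else:
        solution = maximize(solutions, sol_score, aspiration)
    return problem, solution
```

On this toy reading, practical maximizing (`optimal_rep=False, optimal_sol=False`) never fully orders either choice set, which is exactly why it is the cheapest and most common metamodel, and also why it is doubly myopic.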

# **4.2 Dilemmas of Digital Augmentation**

Many digital processes need to complete as quickly and accurately as possible and must avoid unnecessary processing. For example, consider the artificial agents which manage high-reliability operations, monitor human safety, and mediate online transactions. They must function rapidly, with high accuracy, yet at the same time gather and analyze massive volumes of data. Computer scientists therefore research how to maximize the efficiency of their processing. Adding to the challenge, artificial agents can easily over-sample problems, over-compute solutions, and over-complete rank ordering (e.g., Lee & Ro, 2015). Granted, overprocessing is sometimes beneficial. It can enhance robustness, by generating a richer set of options and thus slack. However, the risk is that processing becomes overly complex and less efficient. These are central issues for the design and supervision of artificial agents.

To mitigate these risks, artificial agents also simplify problem-solving. They accomplish this using algorithmic hyperheuristics and metaheuristics, defined as shortcut means of specifying metamodel hyperparameters and model parameters, respectively (Boussaid et al., 2013). Hyperheuristics are first used to compose choice sets of potential models of problem-solving and thereby to define metamodels of problem-solving, for example, composing sets of calculative or associative approaches (Burke et al., 2013). Next, given the resulting metamodel, metaheuristics are employed to select the appropriate model for solving a particular problem (Amodeo et al., 2018). The chosen model is then applied to resolve the focal problem, for example, using specific heuristic procedures. Importantly, at each level of processing, simplifying heuristics help to manage the complexity of processing. As Chap. 1 also explains, research in artificial intelligence focuses on optimizing such hierarchies of heuristics. They are critical for the efficiency and effectiveness of problem-solving.
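This layered selection can be sketched in code. The following is a minimal, hypothetical illustration, not the cited algorithms: the model registry, function names, and shortcut rules are invented for exposition. A hyperheuristic composes the choice set of models (the metamodel), a metaheuristic selects one model for the focal problem, and the chosen model's heuristic is then applied.

```python
# Hypothetical registry of problem-solving models, grouped by approach.
MODELS = {
    "calculative": {"exhaustive":  lambda p: f"optimized({p})"},
    "associative": {"recall":      lambda p: f"recalled({p})",
                    "fast_frugal": lambda p: f"one_reason({p})"},
}

def hyperheuristic(context: str) -> dict:
    """Compose the choice set of models, i.e. define the metamodel."""
    if context == "familiar":                  # shortcut rule: familiar contexts
        return dict(MODELS["associative"])     # get only associative models
    return {**MODELS["calculative"], **MODELS["associative"]}

def metaheuristic(metamodel: dict, problem: str):
    """Select one model from the metamodel for this problem."""
    # Shortcut rule: brief problem descriptions get the simplest heuristic model.
    if len(problem) < 20 and "fast_frugal" in metamodel:
        return metamodel["fast_frugal"]
    return next(iter(metamodel.values()))

def solve(context: str, problem: str) -> str:
    """Apply the full hierarchy: hyperheuristic -> metaheuristic -> heuristic."""
    model = metaheuristic(hyperheuristic(context), problem)
    return model(problem)
```

Here `solve("familiar", "route home")` applies the fast-and-frugal model, while a long, novel problem falls through to a calculative model; the simplification happens at every level of the hierarchy.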

Human agents do likewise, although often unconsciously and automatically. They use simple, often routine hyperheuristics and metaheuristics. When faced with a new problem, a person may unconsciously deploy an encoded hyperheuristic to specify the appropriate metamodel of problem-solving (Fiedler & Wanke, 2009). For example, the context may be familiar and uncomplicated, suggesting a simplified, heuristic approach. Next, the person will apply a metaheuristic to choose one specific model. Perhaps the focal problem reflects prior experience and can be solved using limited sampling. Fast-and-frugal heuristic procedures could then work very well. Studies of "gut feel" in decision-making exhibit this pattern (Gigerenzer, 2008).

Digital augmentation is now transforming this domain. For example, many experts use real-time decision support systems powered by artificial intelligence (McGrath et al., 2018). Human intuition and calculation are being digitally augmented, and it may no longer be necessary or appropriate to rely solely on human inputs. In fact, Herbert Simon (1979, p. 499) predicted this shift many years ago: "As new mathematical tools for computing optimal and satisfactory decisions are discovered, and as computers become more and more powerful, the recommendations of normative decision theory will change."

The challenge for human agents is learning how to integrate these additional sources of information and insight into problem-solving. However, given the complexity and speed of artificial processes, they are often opaque (Jenna, 2016). In fact, the inner workings of complex algorithms mirror the opacity of the human brain. It may be impossible to know exactly what artificial agents are doing, especially in real-time processing. In these respects, natural and artificial neural networks are deeply alike. Both employ extremely complex, dynamic connections, which are difficult to monitor, supervise, and predict (Fiedler, 2014; He & Xu, 2010). That said, the similarity of both agents increases the likelihood of developing effective methods of integration and supervision. Human and artificial agents are feasible collaborators in augmented agency.

## **Risk of Farsighted Processing**

Furthermore, as collaborative capabilities become more powerful, augmented agents might err toward overly farsighted sampling and search, the opposite of myopia. They could easily over-sample the problem environment, search too extensively, and then over-compute solutions. Farsightedness in this context refers not simply to spatial distance, but to any sampling or search vector. To conceptualize this effect, I borrow a term from ophthalmology, "hyperopia," which means farsighted vision, the opposite of nearsighted myopia (Remeseiro et al., 2018). When hyperopia occurs in problem-solving, agents sample in a farsighted fashion to represent problems in rich detail, or they search for solutions in a farsighted, extensive way. In both cases, hyperopia increases the overall complexity of problem-solving (Boussaid et al., 2013). Computer scientists already research ways to avoid these risks. Better supervision is key.

In contrast, with a few notable exceptions once again (see Denrell et al., 2017; Fiedler & Juslin, 2006; Liu et al., 2017), studies of human problem-solving largely ignore hyperopic risks. This neglect is partly owing to the priorities discussed earlier, namely, that scholars traditionally focus on myopia and limited capabilities. Moreover, when myopia is the primary focus of concern, farsighted sampling and search (that is, hyperopia) may be a welcome antidote. If so, then a modest degree of hyperopia is not a problem, but a potential advantage (e.g., Csaszar & Siggelkow, 2010). Furthermore, when farsighted sampling and search do occur, they are typically viewed as natural characteristics relating to perceived temporal, spatial, and social distance (Trope & Liberman, 2011). For all these reasons, hyperopia is rarely included in studies of human problem-solving, and almost never viewed as problematic. However, in a world of digitally augmented capabilities, hyperopia is likely and potentially extreme. Going too far becomes a significant risk.

That said, neither myopia nor hyperopia is inherently erroneous. Indeed, depending on the problem context, if commitments, values, and interests are well served, and if satisfactory controls are maintained, then myopic or hyperopic sampling and/or search can be appropriate and highly effective (e.g., Gavetti et al., 2005). This is true for human and artificial agents alike. For example, when problems are stable and recurrent, myopic sampling and search may be fully suited to the task (Cohen, 2006). This is often the case in habitual and routine problem-solving. Alternatively, when problems are complex, multidimensional, and not urgent, then hyperopic processes could be more appropriate (Bandura, 1991; Forster et al., 2004). This is often the case in technical problem-solving and replication studies.

## **Dilemmas of Hyperopia**

Notwithstanding these exceptions, humans tend to be myopic in sampling the problem environment and searching for solutions (Fiedler & Wanke, 2009). People must therefore be trained to overcome their natural myopia. Many areas of education and training focus on doing exactly this, developing capabilities and teaching students to sample and search more widely in specific domains. Moreover, if this training is successful, its lessons are deeply encoded. Yet such learning is problematic in digitalized contexts, because digitally augmented capabilities increasingly transcend human limitations. Consequently, hyperopic efforts may be redundant, because this is what digitalized systems are good at. But people may continue striving to overcome their limits and myopias, extending problem sampling and solution search, irrespective of the extra capabilities acquired through digital augmentation. Trained to overcome limits, they continue reaching for hyperopia, trying to be more farsighted despite the fact that digitally augmented processes already do this. The overall result is likely to be excessive sampling and search, or extreme hyperopia. What was corrective in previously myopic contexts becomes a source of hyperopic distortion.

Artificial agents face complementary challenges in this regard. In contrast to humans, artificial agents are built to be hyperopic: to sample and search widely, gather massive volumes of information, and then process inputs at great speed and precision. Hence, artificial agents also tend toward hyperopic sampling and search, but by design. For this reason, they too can go too far and be overly hyperopic, which increases complexity and reduces efficiency. When these dispositions are imported into augmented agency, artificial and human agents easily compound each other. Human agents are trained to go further, and artificial agents go further by design. Hence, computer scientists research how to prevent over-sampling and over-computation, and to limit hyperopia (Chen & Barnes, 2014). This has led to a range of technical solutions, including hyperheuristics and metaheuristics, and algorithmic constraint satisfaction (Amodeo et al., 2018; Lauriere, 1978). In fact, a recent study conceptualizes "constraint satisficing," echoing Simon's work in behavioral theory (Jaillet et al., 2016). Problems still arise, however, when simplifying heuristics are infected by human myopias and other priors (Osoba & Welser, 2017).
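The flavor of such remedies can be conveyed with a toy sketch. The following is my own simplification, not any of the cited algorithms: it imposes a sampling budget (a check on hyperopic over-sampling) and an aspiration threshold (satisficing rather than optimizing), and stops as soon as either constraint binds.

```python
import random

def constrained_satisfice(candidates, score, aspiration, budget, seed=0):
    """Sample at most `budget` candidates; return the first whose score
    meets `aspiration`, or else the best candidate seen within budget.
    """
    rng = random.Random(seed)            # deterministic for illustration
    pool = list(candidates)
    rng.shuffle(pool)                    # a sampling order, not exhaustive rank ordering
    best = None
    for candidate in pool[:budget]:      # the budget caps hyperopic sampling
        if score(candidate) >= aspiration:
            return candidate             # satisfactory: stop searching early
        if best is None or score(candidate) > score(best):
            best = candidate
    return best                          # budget exhausted: best seen so far
```

With, say, a budget of ten, the search inspects at most ten options even if thousands exist, trading possible optimality for bounded, efficient processing.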

## **Myopia with Hyperopia**

For complementary reasons, therefore, both human and artificial agents are trained to supervise the upper and lower bounds of complexity in problem-solving. On the one hand, human agents are naturally myopic and trained to do more. On the other hand, artificial agents are naturally hyperopic and trained to do less, in specific contexts. Therefore, both agents move in opposite directions and are trained to correct in opposing ways, especially in complex problem-solving. Given these divergent characteristics and strategies, if collaborative supervision is poor, they will easily undermine each other and reinforce problematic tendencies. Artificial agents could be overly hyperopic, while their corrective procedures reinforce human myopia (Balasubramanian et al., 2020). At the same time, humans could remain myopic, while their corrective procedures reinforce artificial hyperopia. In this fashion, inadequate supervision of augmented agency could lead to problem-solving that is highly myopic in some human respects and highly hyperopic in artificial ways. Stated otherwise, augmented agents could be extreme satisficers, either seeking overly optimized solutions to overly simplified problems (extreme normative satisficing) or accepting overly simplified solutions to overly detailed representations of problems (extreme descriptive satisficing).

Once again, we see the effects of poorly supervised, entrogenous mediation. Ideally, augmented agents will use these dynamic capabilities to maximize metamodels of problem-solving, adjusting sampling and search to fit the problem context. As noted earlier, however, the supervision of such capabilities will be challenging, given the speed and precision of digitalized updates. Each agent will struggle to monitor the other. Distorted outcomes are likely if the supervision of entrogenous mediation is poor. First, intelligent sensory perception might simply reinforce human myopia. For example, when racial biases guide hyperopic sampling in machine learning, algorithms quickly become discriminatory (Hasselberger, 2019). Second, if the supervision of performative action generation is equally poor, it could reinforce existing procedures, for example, by escalating racially biased behaviors. And when both effects occur, human myopia and artificial hyperopia will compound to produce dysfunctional, highly discriminatory problem-solving.

To conceptualize these effects, I adopt another ophthalmic term, "ambiopia," meaning double vision, also referred to as "diplopia" (Glisson, 2019). In this condition, the same object is perceived at different distances by each eye, one being nearsighted and myopic, the other farsighted and hyperopic, causing the agent to perceive the image with distorted, double vision (Smolarz-Dudarewicz et al., 1980). Moreover, like other novel terms in this book, "ambiopia" includes the prefix "ambi," which is Latin for "both." In the diagnosis of vision, ambiopia refers to the compounding of visual distortions. Analogously, in problem-solving, it can refer to the compounding of nearsighted myopia with farsighted hyperopia in problem representation and/or solution search.

## **Summary of Digitalized Problem-Solving**

Based on the preceding discussion, we can summarize digitally augmented problem-solving. First, like human agents, artificial agents perform two key processes: sampling and data gathering, leading to problem representation; then searching for and selecting solutions to such problems. Humans are limited in these respects and tend to be myopic, while artificial agents are capable of increasingly complex, hyperopic sampling and search. In fact, digital augmentation imports the hyperopic methods of experimental science into ordinary problem-solving. Second, human and artificial agents both use heuristics to choose between alternative logics, models, and procedures of problem-solving (Boussaid et al., 2013). Heuristics are often layered, in a hierarchy of increasing specificity, from hyperheuristics in the specification of metamodels, to metaheuristics about models, and then heuristics for specific solutions. Third, human agents often encode ontological, epistemological, and normative commitments into artificial agents. These commitments then guide sampling and search and help to establish the threshold of sensitivity to variance, although such priors are frequently distorting, when myopia and bias are amplified by artificial means. Fourth, both types of agents need to manage the risks of myopic and hyperopic sampling and search, balancing the demands of speed, accuracy, efficiency, and appropriateness. Otherwise, human myopia and artificial hyperopia will combine to produce dysfunctional, ambiopic problem-solving.

The goal for augmented agents, therefore, is to maximize metamodel fit in any problem context. This will entail adjusting the relative myopia and hyperopia of sampling and/or search, combining both human and artificial capabilities, although, as noted, such supervision will be challenging. Instead, agents' inherent tendencies will often lead to extreme patterns, either combining persistent human myopia with unfettered artificial hyperopia (extreme divergence and high ambiopia) or allowing one agent fully to dominate the other (extreme convergence and low ambiopia). Notably, computer scientists already research these risks in artificial intelligence (Amodeo et al., 2018; Burke et al., 2013). What is not yet adequately understood is how these processes impact problem-solving by human-artificial augmented agents.

# **4.3 Illustrative Metamodels**

Building on the preceding discussion, the following sections illustrate two major metamodels of digitally augmented problem-solving, that is, metamodels which are highly digitalized, similar to the generative metamodel of agency shown in Fig. 2.3. The first scenario illustrated below is a highly ambiopic metamodel of problem-solving, with very divergent levels of complexity and simplification in problem representation and solution search. The second is a non-ambiopic metamodel with very convergent levels of complexity and simplification. While these two scenarios are not exhaustive, they highlight ambiopic risks, associated mechanisms, and their consequences.

## **Highly Ambiopic Metamodels**

Figure 4.2 illustrates highly ambiopic metamodels of problem-solving. The vertical axis shows the level of complexity of problem representation, and the horizontal axis shows the complexity of solution search. Both range from high complexity to relative simplicity. High complexity implies a hyperopic, farsighted process, and low complexity (or simplification) implies a myopic, nearsighted process. In addition, the figure shows two levels of processing capability, depicted by curved lines. One is labeled L2 and represents the processing capability of modernity, which assumes relatively moderate levels of technological assistance. The second is labeled L3, representing the greater processing capabilities of digital augmentation, now assuming high levels of technological assistance.

As the figure shows, the greater the processing capabilities, the greater the complexity of problem representation and solution search. Capabilities and achievable complexity are positively associated. Nevertheless, even digitalized capabilities remain limited to some degree, meaning they need to be distributed. Figure 4.2 depicts this type of distribution. It also shows that combined complexities reach limiting asymptotes, for both problem representation and solution search. These upper limits are almost never reached, in practical terms. But they do play a significant role in formal modeling, by setting the upper bounds of problem representation and solution search.

*Descriptive Satisficing* The next features of Fig. 4.2 to note are the segments within it. To begin with, recall that descriptive satisficing is defined as seeking satisfactory solutions to the best representation of problems (see quadrant 2 in Fig. 4.1). Now consider D2 in Fig. 4.2, which depicts such a metamodel, assuming modern processing capabilities L2. Problem representation is moderately complex, and the solution is simplified. Overall, therefore, D2 is moderately divergent and ambiopic. It also suggests that solution search is anchored (and hence semi-supervised) by human myopia. This type of problem-solving is common in behavioral and informal approaches. Also note that D2 intersects with a small, curved section of L2. This feature illustrates a degree of possible variance in problem-solving, or in other words, the satisficing nature of such problem-solving. By contrast, the segment labeled P2 depicts practical maximizing, given capabilities L2. This type of problem-solving was previously defined as the simplified representation of problems, combined with the search for satisfactory, simpler solutions (see quadrant 4 in Fig. 4.1). Therefore, the levels of complexity are relatively low and roughly equal in P2, meaning this type of problem-solving is non-ambiopic. Neither sampling nor search is particularly hyperopic; rather, both are relatively myopic. In fact, P2 illustrates the actual problem-solving of most agents in a modern, behavioral world. People are not optimizing in either sense; rather, they solve simplified problems in efficient ways, often using heuristic and intuitive means. Practical problem-solving is often like this.

**Fig. 4.2** Highly ambiopic problem-solving

Next, consider the segment labeled D3 in Fig. 4.2, which assumes stronger digitalized capabilities at level L3. Here too, the segment denotes descriptive satisficing. However, D3 shows greater divergence between the complexity of problem representation and the simplification of solution search, and D3 is therefore highly divergent and ambiopic overall. Agents are now digitally augmented and capable of more complex problem representation and satisfactory solution search. They sample in a farsighted, hyperopic fashion, but solution search remains anchored in the same myopias as D2, for example, when racially biased priors are encoded into machine learning algorithms. In consequence, D3 constitutes an extreme form of descriptive satisficing.

Finally, the segment labeled P3 depicts practical maximizing, assuming digitally augmented capabilities L3. That is, P3 depicts incompletely ordered, simpler solutions to incompletely ordered, simpler problems. As in P2, the levels of complexity in P3 are roughly equal, meaning this type of problem-solving is non-ambiopic, but it is highly myopic overall. In fact, P3 almost equals P2. This is because prior anchoring commitments have not shifted but are carried over from modernity to digitalization. This scenario illustrates the persistence of ordinary commitments and procedures. Even with digitalized capabilities, people do not seek to optimize, in either sense, but continue to rely on heuristic and intuitive means. They are persistently human, notwithstanding digital augmentation.

*Normative Satisficing* Now consider the other set of segments in Fig. 4.2. To begin with, recall that normative satisficing is defined as seeking optimal solutions to simplified problems. Segment N2 depicts such a metamodel, assuming modern processing capabilities L2. Within N2, the figure shows moderate divergence between the two dimensions of complexity, problem representation and solution, and therefore N2 is moderately divergent and ambiopic overall. As noted earlier, this type of problem-solving is often axiomatic and calculative, as in classical economics: seeking optimal, calculative solutions to simplified problems of utility. By contrast, the segment labeled P2 again depicts practical maximizing, given modern capabilities L2. It illustrates non-ambiopic, actual problem-solving in the modern world of consumption and exchange. In such a world, most people do not optimize, nor try to. Rather, they solve the ordinary problems of transactional life using habitual or routine, heuristic, and intuitive means.

Next, consider the segment labeled N3, which depicts extreme normative satisficing, assuming stronger, digitalized capabilities L3. In this kind of problem-solving, artificial processes enable hyperopic search, but problem representation remains anchored in the same myopias as N2. Therefore, N3 shows even greater divergence between the two levels of complexity, and N3 is highly divergent and ambiopic overall. In fact, this type of distorted problem-solving is observed in semi-supervised machine learning, when hyperopic artificial intelligence amplifies human myopia and bias (Osoba & Welser, 2017). By contrast, the segment labeled P3 depicts practical maximizing, given capabilities L3. As in P2, the levels of complexity in P3 are roughly equal, meaning this problem-solving is non-ambiopic. In these respects, P3 illustrates the actual problem-solving of augmented agents in the behavioral world. Consider, for example, how many people search the internet or shop online, saving favorites and encoding habits.

*Descriptive and Normative Satisficing* In poorly supervised augmented agents, both types of extreme satisficing (descriptive D3 and normative N3) are likely and will often occur together. Persistent human priors will be myopic, and artificial hyperopia will be largely unchecked. Both descriptive and normative satisficing will then be ambiopic. Hence, the overall problem-solving system is ambiopic as well. This is what Fig. 4.2 depicts. The augmented agent combines both types of extreme satisficing at L3. In consequence, overall problem-solving by this augmented agent is highly divergent and skewed, poorly fitted, and most likely dysfunctional. Digitally augmented individuals, groups, and collectives will be equally vulnerable in this way, if collaborative supervision is poor.

## **Non-ambiopic Augmented Metamodels**

In contrast, Fig. 4.3 illustrates non-ambiopic metamodels of problem-solving. Once again, the vertical axis shows the level of complexity of problem representation, and the horizontal axis shows the complexity of solution search, both again ranging from low to high. The figure also shows two levels of processing capability. L2 represents the processing capability of modernity, as in Fig. 4.2, while the greater processing capabilities of digital augmentation are here labeled L4, to distinguish them from L3 in Fig. 4.2. Apart from this distinction, Fig. 4.3 shares its core features with Fig. 4.2. In fact, the segments D2, N2, and P2 are equivalent in both figures. They again illustrate modern, moderately assisted problem-solving and, therefore, do not require repeated explanation.

**Fig. 4.3** Non-ambiopic problem-solving

But now consider the segment labeled D4 in Fig. 4.3. It depicts descriptive problem-solving, given digitally augmented processing capabilities L4, that is, seeking solutions to richly described representations of problems. However, in contrast to D3 in Fig. 4.2, segment D4 shows no significant divergence between the two levels of complexity. This implies the relaxation of human priors and limited artificial hyperopia. Hence, D4 is neither ambiopic nor satisficing, because it does not trade off simplification for optimization. This scenario is non-hyperopic and non-myopic in problem sampling and solution search. Furthermore, D4 is shown to equal N4. In other words, descriptive and normative methods are conflated. Neither is ambiopic nor satisficing. Instead, digitally augmented capabilities allow agents to heighten both problem representation and solution to equal levels of complexity. By doing so, the agent mitigates myopia and hyperopia. In essence, description becomes highly computational, and normative computation is richly descriptive (Yan, 2019).

For similar reasons, the segment labeled P4, which depicts practical maximizing, is equivalent to D4 and N4 as well. In fact, all three segments overlap. What this illustrates is that the agent fully relaxes prior commitments and forgoes optimization altogether. The result is practical maximizing, highly contextual and generative. Moreover, owing to digitally augmented capabilities, such maximizing may achieve a high level of completeness. Also note that all three metamodels intersect with a curved section of L4. This feature illustrates a degree of possible variance, or in other words, the maximizing nature of such problem-solving. In this fashion, P4 overcomes the traditional polarity between descriptive and normative problem-solving. All problem-solving at level L4 is highly augmented and non-ambiopic in this scenario, although, by the same token, P4 shrinks the role of ordinary human intuition, values, and commitments.

Therefore, human priors are relaxed and artificial hyperopia is controlled. Both problem representation and solution search will be largely free of human myopias and excessive artificial hyperopia. This is what Fig. 4.3 depicts. Problem-solving is non-ambiopic and fully maximizing, from a practical perspective. However, as explained above, this type of problem-solving reduces the role of ordinary human intuition, values, and commitments. Granted, the system achieves greater precision and integration, but it also depletes problem-solving of important human qualities. This approach is also dysfunctional, therefore, when problem-solving warrants the inclusion of humanistic factors and commitments.

## **Moderately Ambiopic Augmented Metamodels**

Other digitalized metamodels will be less extreme, better supervised, and moderately ambiopic. Augmented problem-solving of this kind is more balanced. It includes some human supervision of descriptive and normative satisficing, while also exploiting the benefits of augmented, practical maximizing. Agents accept a modest degree of myopia and hyperopia in sampling and search, often using both structured and unstructured data, given agreed criteria of supervision. In this kind of metamodel, the segments D4, N4, and P4 will be partially distinct and not fully equivalent. The overall system of problem-solving will admit more human inputs, referencing personal and cultural values, goals, and commitments, while avoiding excessive myopia; at the same time, it will allow some artificial agents to operate fully independently of human supervision, while avoiding excessive hyperopia. In this fashion, augmented agents exploit digitalized capabilities, while preserving valued features of human and artificial problem-solving, thereby achieving strong metamodel fit. For this reason, many behavioral and social contexts will favor moderately ambiopic, augmented problem-solving.

# **4.4 Implications for Problem-Solving**

Digital augmentation promises great advances for problem-solving, assuming human and artificial agents learn to function effectively as augmented agents, working together with mutual trust and empathic supervision. To some, it may seem strange to describe artificial agents in this way, almost as if they were human. Some may reject the description as fanciful, and even as dangerous. However, recent technical innovations are compelling. Artificial agents already surpass humans in many calculative functions, and recent developments enable associative and creative intelligence (Horzyk, 2016). In addition, artificial agents are rapidly acquiring empathic capability, which allows them to interpret and imitate personality, emotion, and mood. Many agents also function in a fully autonomous, self-generative fashion. When combined, these capabilities are approaching human levels in significant respects (Goertzel, 2014), at least to the degree required for meaningful collaboration in augmented problem-solving.

At the same time, significant challenges lie ahead, as humans respond to the rapid growth of artificial capabilities. The combinatorics are challenging. On the one hand, human absorptive capacities are limited: people habitually simplify, biases and myopias easily intrude, and learning is often truncated. On the other hand, artificial intelligence and machine learning race ahead at unprecedented speed and scale. Indeed, we constantly see more powerful examples. However, owing to lagging skills of collaborative supervision, these technical innovations could amplify (rather than mitigate) the weaknesses of human problem-solving. Hence, humanity faces a growing challenge: to ensure that augmented problem-solving exploits the power of digitalization, while managing human needs and potential costs.

As this chapter reports, many are working on these questions. Some are optimistic (Harley et al., 2018; Woetzel et al., 2018). They point to positive developments, such as the diffusion of knowledge, greater variety of choice, and the delivery of highly intelligent services, not to mention advances in complex problem-solving. Others are more pessimistic. They highlight the contagion of digitalized falsehood, bias, and social discrimination in problem-solving, plus intrusive surveillance and manipulation, whereby elites seek to control the flow of online information and analysis (Osoba & Welser, 2017; Zuboff, 2019). The mechanisms exposed here help to explain what is occurring in these situations, namely, the deliberate use of myopia, hyperopia, and ambiopia in digitally augmented problem-solving. Whether one is optimistic or pessimistic about the future, these mechanisms warrant urgent attention.

## **Myopia, Hyperopia, and Ambiopia**

Among the most important topics for further research, therefore, are the risks of doing too much and too little. That is, of poorly supervised myopia plus hyperopia in sampling and search, leading to extremely divergent, ambiopic problem-solving (Baer & Kamalnath, 2017). As noted earlier, computer scientists already research similar risks. Many mitigating strategies focus on semi-supervised learning (Jordan & Mitchell, 2015). To date, however, these higher order procedures are not major topics for behavioral and organizational research (Gigerenzer & Gaissmaier, 2011). They should be. Augmented agents will confront these risks as well. Their goal will be to maximize metamodel fit in any problem context. Otherwise, augmented agents face the prospect of dysfunctional problem-solving.

The argument also highlights the role of human commitments, and especially those which serve as reference criteria about what is realistic, reasonable, and ethical in problem-solving. Such criteria often emerge over time, are culturally embedded, and have institutional expression (Scott & Davis, 2007). Such commitments are deeply imprinted in thought and identity (Sen, 1985). For this reason, they are, and often should be, difficult to change and adapt. Indeed, resilient commitments play an important role in sustaining institutions, social relations, and personalities. The risk is that, absent appropriate supervision, inflexible commitments and their escalation can lead to excessive myopia and hyperopia in sampling and search. Overall problem-solving then becomes highly ambiopic for no good reason, and therefore dysfunctional.

## **Bounded and Unbounded**

Myopic risks reflect the natural limits of human capabilities, which are widely assumed in modern thought. Whether in theories of perception, reasoning, empathy, memory, agency, or reflexive functioning, scholars assume limited human capabilities. In relation to problem-solving, Simon (1979) explains the bounded nature of human calculative rationality, and why agents satisfice against relevant performance criteria, rather than fully optimizing. As noted earlier, he formulated two broad methods of satisficing, which I label normative and descriptive. The former seeks optimum solutions for a simplified world, and the latter, satisfactory solutions for a detailed, realistic world. Simon's insights have influenced numerous fields of enquiry, including behavioral theories of problem-solving and decision-making, the management and design of organizations, and branches of economics (Gavetti et al., 2007).

However, cognitive boundedness is significantly mitigated by digital augmentation. Digital technologies massively enhance everyday processing capabilities, especially in complex problem-solving. Humans can perceive, reason, and memorize with far greater precision, speed, and collaborative reach. At least, these extensions are now feasible. In these respects, augmented agents can be bounded and unbounded at the same time. This occurs because human agents will likely retain their natural boundedness, especially in everyday cognitive functioning. At the same time, artificial agents will be increasingly unbounded. When both agents join in collaborative problem-solving, therefore, the resulting augmented agents could be simultaneously bounded and unbounded. In other words, they will exhibit functional ambimodality, as distinct from the organizational types of ambimodality discussed in the preceding chapter.

Satisficing then becomes more complicated, but also more important, because it can help to limit overprocessing, including the tendency toward overly hyperopic sampling and search. The role of satisficing will therefore expand and deepen. Instead of satisficing because of limited capabilities, augmented agents will satisfice because of extra capabilities. Deliberate satisficing will help to avoid unnecessary optimization. Put another way, digitally augmented agents will satisfice, not only in response to limits, but to impose limits. They will choose descriptive or normative satisficing, even when ideal optimization is feasible, or at least approachable. People sometimes do this already when they employ heuristics (Gigerenzer & Gaissmaier, 2011). Artificial agents do as well, when they limit their own processing to improve speed and efficiency. Augmented agents will do the same, by managing myopia and hyperopia to maximize metamodel fit in problem-solving, forgoing possible optimization for good reasons.

This analysis has major implications for the fields mentioned earlier, which assume Simon's analysis of boundedness, including behavioral theories of problem-solving and decision-making, the management and design of organizations, and related fields of behavioral economics and choice theory. Each field will need to revisit its core assumptions, to accommodate less bounded capabilities and intentional satisficing. And when this happens, all of economics starts to look behavioral, as Thaler (2016) predicts. In similar fashion, scholars may need to rethink the assumed opacity of preference ordering, interpersonal comparison, and collective choice (Sen, 1997a). Given the expanded capabilities brought by digital augmentation, it will become feasible to seek comparative transparency, nearly complete ordering, and near-optimization in some digitalized contexts. Granted, this may not be desirable. It could erode human diversity and creativity. But this type of choice will be feasible, nonetheless. Mindful of these risks, augmented humanity will need to monitor and manage the risks of over-completion in preference ordering and collective choice, and often choose to be better rather than perfect (see Bazerman, 2021).

## **Extended, Ecological Rationality**

Another notable implication of digitalization is the extension of systematic intelligence to problem sampling and representation. In the past, everyday problems were taken as given, the intuited products of experience and sensory perception, whereas rigorous problem sampling and representation were the preserve of empirical science. For this reason, most theories of behavioral problem-solving assume that systematic intelligence relates to solution search and selection, but rarely to problem sampling and representation. Rationality has been about finding solutions and making decisions, not about the specification of problems as such. However, digital augmentation upends these assumptions too. New tools and techniques allow augmented agents to reason systematically during problem sampling and representation. In this regard, recall the discussion of feedforward mechanisms and entrogenous mediation in Chaps. 1 and 2. Problem sampling and representation will be updated in a rapid, intra-cyclical fashion, through intelligent sensory-perception. Sampling and representation become reasoned activities. Ecological theory should therefore expand to embrace realism as well as rationality. Both aspects of problem-solving will be contextual and dynamic.

Augmented agents will therefore apply intelligence to problem sampling and representation, not only to solution search and selection. For example, important problems regarding personal health, finances, and consumer preferences will be identified and curated by artificial agents, often in real time. In fact, this already happens via smartphone applications. In the background, systems analyze and update problems in real time. However, this also entails that many processes will not be fully accessible to consciousness. In fact, as in other augmented domains, ordinary consciousness will play a different role in problem-solving. It will be an important source of humanistic guidance, but less significant as a window onto fundamental reality and truth. In all these ways, augmented problem-solving calls for an extended, ecological understanding of realism and rationality (Todd & Brighton, 2016).

This shift has another, profound implication. Important social-psychological distinctions relate to proximal versus distal processing. Construal Level Theory, for example, assumes that humans treat phenomena and problems differently, depending on their perceived spatial, temporal, social, and hypothetical distance (Trope & Liberman, 2011). If close or proximal, they are treated as more practical, short term, parochial, and risky, whereas if distal, they are more exploratory, long term, and expansive. Higgins' (1998) Regulatory Focus Theory assumes comparable distinctions. However, if digitally enabled hyperopia draws everything closer on these dimensions, then what is distal with respect to human experience and capabilities could be proximal in artificial terms. When combined, augmented agents could perceive problems as proximal and distal at the same time. This would likely lead to ambiguous or conflicting construals, and misguided sampling and search. Once again, augmented agents must learn to manage these ambiopic risks.

## **Culture and Collectivity**

Digitally augmented problem-solving has cultural implications as well. To begin with, communities share problems which they represent and resolve at a collective level. Such problem-solving is often divided between the domains of science and technology, on the one hand, and human value and meaning, on the other. In fact, some observers of modernity refer to two dominant cultures (March, 2006; Nisbett et al., 2001). Digitalization problematizes these distinctions. Already, digitalization is transforming the creative arts and entertainment. Artificial agents make meaning and create aesthetic value. In consequence, the two cultures are blending, at least in these domains. Numerous potential benefits accrue, in terms of cultural interaction and understanding. However, it is equally possible that these trends could amplify ambiopic problem-solving and exacerbate cultural divergence and division (Kearns & Roth, 2019). Here, too, the challenges of digital augmentation are far from understood, let alone effectively supervised. It remains an open question whether augmented agency will evolve quickly enough to manage these growing risks.

In the modern period, systematic reason and experimental science support unprecedented problem-solving capabilities. Digitalization extends this historic narrative to everyday problem representation and solution search. Yet in doing so, digital augmentation alters the dynamics of problem-solving itself. Every aspect of problem-solving becomes more intelligent and agile. However, human beings remain limited by nature and nurture, and these human factors will persist. Moving forward, therefore, research should focus on the interaction of human and artificial agents in augmented problem-solving, the novel risks of hyperopia and ambiopia, and how collaborative supervision can mitigate these risks and maximize metamodel fit. Many of these questions already loom large in computer science. They deserve equal attention from scholars in the human and decision sciences.




# **5 Cognitive Empathy**

At the dawn of the European Enlightenment, Descartes (1998) meditated on his own conscious life and concluded *cogito ergo sum*, meaning "I think, therefore I am." He thereby located the core of selfhood in reasoned, reflexive thought. Departing from premodern assumptions, he accorded supernatural forces a minor, ancillary role. For Descartes, and for many who followed him, the exercise of autonomous, intelligent agency was the distinguishing feature of being human, not the replication of some mythical narrative or religious ideal. The experience and explanation of reflexive selfhood were transformed. Understanding of intersubjectivity was equally affected. To understand another person, one must empathize with her or his reasoning and its relationship to the person's speech and action. For this reason, advocates of the Enlightenment look for realism and rationality in other minds and are dissatisfied with superstition and rituals (Pinker, 2018).

Modernity therefore celebrates intelligent, autonomous agency, and assumes that people can and should empathize at this level. This implies that other minds are potentially accessible and explicable to consciousness. Many social and behavioral sciences share this outlook. For example, theories of institutions invoke empathy with other minds to explain collective logics and decision-making (Thornton et al., 2012). For philosophers, it leads to the problem of other minds and how to interpret them (Dennett, 2017). Contemporary psychologists also research intersubjectivity, and especially cognitive empathizing, which is the process whereby individuals represent and comprehend the thoughts and reasoning of others, or read other minds (Decety & Yoder, 2016; Schnell et al., 2011). Likewise, assumptions about cognitive empathy and its limits are central to modern theories of ethics and justice (Sen, 2009), as well as experimental microeconomics (Singer & Fehr, 2005). All these fields recognize the significance of cognitive empathizing, that is, the representation and solving of problems of other minds.

Furthermore, modernity exhibits a steady stream of technological innovations which support cognitive empathizing. Earlier generations exploited the telegraph and telephone, which greatly expanded understanding of other people's thoughts. More recently, digital technologies, including artificial intelligence and ubiquitous online services, allow people to learn more and more about others' beliefs, reasons, and mental worlds (Wulf et al., 2017). Examples proliferate. Social networks show who and what is liked; using their smartphones, people can share experiences instantaneously on a global scale; connected devices enable the real-time sharing of ideas and opinions; data about online behavior are then used to predict personal preferences; virtual assistants such as Apple's Siri and Amazon's Alexa (note the humanizing names) mediate communication like actual persons. Additional capabilities are now emerging, including artificial personality and affective computing, and wearable and potentially implantable devices, which will enable the digitalized interpretation and imitation of human mood and emotion (Poria et al., 2017).

In fact, some artificial agents can already interpret and imitate significant aspects of human facial expression, empathy, and personality. As noted previously, in recent experiments with customer service by telephone, callers could not distinguish between human and artificial agents (Leviathan & Matias, 2018). The artificial agent sounded fully human, in terms of its expressions and empathy. Granted, such innovations are nascent and too often flawed, but they will improve and become ubiquitous. Digital assistants will mediate significant aspects of intersubjectivity and cognitive empathizing. By exploiting such innovations, people can aspire to deeper understanding of each other and themselves. Subjectivity and self-consciousness could be digitally augmented as well. Digitalization will transform cognitive empathizing with others and the self. But this process will take time, and new regulatory systems are needed. In fact, in the short term, radically augmented cognitive empathizing could overwhelm many people and destabilize both personalities and communities (Chimirri & Schraube, 2019). Most people are not equipped to manage highly transparent minds, even with the support of artificial intelligence. People do well to intuit what is hidden and often unformed in other minds, and in their own (Davidsen & Fosgerau, 2015). Apart from anything else, mental noise and nonsense would drown out much of the signal. Empathy can also be emotionally exhausting, at the best of times (Bandura, 2002).

Therefore, human empathic capabilities are likely to remain strictly limited, at least for the foreseeable future. As in other forms of complex problem-solving, habitual and routine procedures will often take precedence. Similarly, biases and myopias will likely continue as well. For these reasons, as in problem-solving generally, digitalization may compound, rather than ameliorate, the traditional dilemmas of cognitive empathizing (Mullainathan & Obermeyer, 2017). For example, if racial and gender biases are encoded into the sampling and representation of others' minds, and then carried over into the training of algorithms, digitalization amplifies discriminatory judgments (Noble, 2018). Too many examples already exist of poorly supervised, biased machine learning imputing erroneous states of mind to racial or gender groups (Eubanks, 2018; Osoba & Welser, 2017). Comparable biases fuel the febrile tribalism of online xenophobia, reinforcing the perceived deviance and irrationality of others.

Yet bias and myopia are not the only dilemmas. As digitally augmented capabilities become more powerful and ubiquitous, augmented agents can err in the opposite direction, employing these capabilities to sample and search too widely, thereby over-sampling and over-searching other minds. Cognitive empathizing could go too far in these respects. Agents might gather too much information about other minds and apply overly complex algorithms to interpret them. In other words, the hyperopic risks of problem-solving which were examined in the preceding chapter also impact cognitive empathizing, when it is conceived as solving problems of other minds. Recent studies already demonstrate these effects. They show the negative consequences of hyperopic sampling and search in obsessive cognitive empathizing, especially when combined with persistent myopias (Lemaitre et al., 2017). Reconsider an example given earlier. Some machine learning agents are trained using racially biased data. The agent then over-samples and over-searches the utterances and behaviors of others, guided by biased supervision. In doing so, it gathers vast amounts of evidence to reinforce the erroneous priors, thereby automating framing and confirmation biases at scale (Baer & Kamalnath, 2017). This results in overly ambiopic cognitive empathizing, defined as the combination of divergent degrees of simplification and complexity in problem-solving about other minds.

We need to ask, therefore: under which conditions will digitalization enable more effective cognitive empathizing, widening the appreciation of others' thoughts, beliefs, and reasons, rather than perpetuating and compounding erroneous myopias and biases or amplifying noise? And which additional procedures might help to reduce the risks for cognitive empathy, while enjoying the potential benefits of digital augmentation? These questions are increasingly urgent (Bolino & Grant, 2016). Organizations and groups all rely on cognitive empathy. However, evidence suggests that the speed and scale of digitalization are outpacing ordinary empathic capabilities (Mullainathan & Obermeyer, 2017). Significant aspects of digitally augmented mentality, both individual and collective, are eluding self-supervision. And myopias and biases are persistent. Not surprisingly, many people already suffer from digitally distorted cognitive empathizing, which leads to misunderstanding and mistrust.

In contrast, artificial agents can sample and search other minds with unprecedented speed and power. Through social networks, messaging applications, and the like, billions of people share extraordinary details of their thoughts, feelings, and personal lives, which only artificial agents have the capability to analyze and aggregate. When these systems combine with ordinary humans, however, the results can be highly ambiopic cognitive empathizing: overly distal and complex in some respects (owing to artificial hyperopia), yet overly proximal and simplified in other ways (owing to human myopia). If this occurs, agents of any modality, whether individuals, groups, or collectives, will tend to misconstrue others' positions and perspectives and be prone to misjudgments and attribution errors. As a result, some other minds will be unfairly perceived as unrealistic, irrational, or deviant, while others will be misperceived as fully rational and realistic. Indeed, studies show that cognitive empathy is already degrading in some digitalized domains, and arguably for these reasons (Miranda et al., 2016).

The digital augmentation of cognitive empathy therefore poses major opportunities and risks for humanity. If well supervised, augmented empathizing could enhance mutual understanding, trust, and cooperation. But if ambiopic tendencies are left unchecked, digitally augmented empathizing can skew in several directions and become overly divergent or convergent. First, if human myopias and biases are encoded into augmented agents, and then amplified by hyperopic processing, the resulting divergence will erode cognitive empathy, heighten mistrust, and fray the coherence of collective mind. Second, if artificial agents dominate cognitive empathizing, they could smother ordinary human intuition and instinct and erode the diversity and delight of human relating. Alternatively, third, if human agents dominate, they could impose myopias and biases which stifle and distort the sampling and search of other minds. The current chapter examines these challenges. As a first step, we need to examine the core mechanisms of cognitive empathizing more deeply.

## **5.1 Theories of Cognitive Empathy**

Jerome Bruner (1996) predicted, a generation ago, that the next chapter of psychological research would focus increasingly on intersubjectivity, being the mechanisms by which people appreciate others' subjective experience of self and the world. As earlier sections of this chapter suggest, his prediction has proven correct. In particular, contemporary psychologists investigate cognitive empathy, defined as reading the thoughts and reasons of others (Liljenfors & Lundh, 2015). Moreover, as Descartes' (1998) meditations illustrate, people also cognitively empathize with themselves, whereby they form a sense of self as a reasoning agent. In fact, acquiring this reflexive capability is an important phase of child development, along with the capability to distinguish other minds as separate from one's own (Katznelson, 2014). Cognitive empathy is therefore critical to a range of psychological and developmental processes.

## **Psychology of Cognitive Empathy**

The core psychological mechanism of cognitive empathy is mentalization, which is the process whereby persons apprehend and form mental representations of their own and others' mental states (Fonagy & Campbell, 2016). Mentalization is both internally and externally focused, on self and others, respectively. It also encompasses affective states, and can be explicit and effortful, or implicit and relatively effortless, as a habit of mind (Liljenfors & Lundh, 2015). Notably, the construct of mentalization is well established in numerous fields. It is the subject of extensive research in clinical and cognitive psychology (Guerini et al., 2015), neuroscience (Schnell et al., 2011), education and learning (Haake et al., 2015), and now affective computing (Varga et al., 2018). Not surprisingly, mentalization also has neurological correlates (Ferrari & Coude, 2018). Moreover, because mentalization enables the perception and comprehension of cognitive states, it encompasses aspects of metacognition as well (Lindeman-Viitasalo & Lipsanen, 2017). Given this connection, its relevance will predictably spread to other fields, including management and professional studies, in which empathy receives increasing attention (Orbell & Verplanken, 2010; Ze et al., 2014). In summary, mentalization is central to the perception of, and empathy with, the subjective life of self and others.

Two types of mentalization support cognitive empathy with other minds. First, there is explicit, external, cognitive mentalization, that is, the deliberate attempt to represent and understand others' cognitive states, including the categories, concepts, beliefs, and logics which others employ in reasoning. Notably, this type of cognitive empathy is often implicit in classical theories of decision-making and formal problem-solving (March, 2014). Such theories assume that it is possible to observe and assess the reasoning of others, albeit about simplified problems and choices. Second, there is implicit, external, cognitive mentalization, being the intuitive, simplified representation and comprehension of others' cognitive states. This type of cognitive empathizing is implicit within descriptive, behavioral, and informal theories of problem-solving. These theories assume that it may be impossible, and sometimes unnecessary, fully to comprehend the thoughts and reasoning of others. For a start, others' motivations and commitments can be opaque and hard to determine. In fact, as Chap. 1 explains, mounting evidence suggests that deeper states of mind are not directly accessible to consciousness, and even less so with respect to other minds. Emotional states also play a role, and they constantly wax and wane. In addition, other persons often use informal, heuristic strategies when making choices and decisions; which is to say that human minds are often murky or muddled, and resist interpretation.

Consequently, like other cognitive functions, including reasoning and attention, the capability for mentalization is limited. In addition, information about other minds is often incomplete, difficult to gather and organize, or simply inaccessible. And because other persons inhabit a plurality of mental worlds, with different positions and points of view, mentalization is further constrained by positional ambiguity (Sen, 1993). Hence, people cannot clearly identify another person's point of view. For all these reasons, mentalization is often approximating. The best one can hope for is to read other minds in a way that is no worse, and no less empathic, than other plausible readings. Indeed, people consistently maximize in this fashion, without negative consequences (Schneider & Low, 2016). Especially within shared cultures, they take much for granted, and reliably intuit each other's mental states. It is important to note, though, that major deficits in mentalization can be symptomatic of clinical disorder (Dimaggio & Lysaker, 2015).

In any case, effortful, precise mentalization is often unnecessary or ineffective. For example, when people are engaged in purely procedural action, there may be no need to grasp the exact details of others' beliefs and reasoning. Effortful mentalization could even interrupt the flow of collective thought and action. In fact, a degree of empathic opacity is inherent, and even helpful, to collective mind. Such opacity invites mutual trust and civility and avoids the dilemmas and discomfort of empathic transparency (Sen, 2017). Indeed, cognitive opacity is often preferable to revealed disorder or deception, about others and oneself. Granted, there are occasions when mentalization needs to be heedful and effortful, striving for precision and transparency (Weick & Roberts, 1993). Agents then upregulate mentalizing functions. But when it can be appropriately imprecise, people downregulate mentalization and simplify in cognitive empathizing. They do so naturally, when capabilities are stretched to the limit, or because habitual and routine mentalization are sufficient to secure desired outcomes (Wood & Rünger, 2016).

## **Empathic Satisficing**

Other minds are therefore complex and often hard to read. In fact, trying to understand other minds is a type of complex problem-solving. And as philosophers attest, the problem of other minds is a wicked one (Parfit, 1984). Empathic capabilities are limited, and trade-offs are frequent, as agents simplify the representation and solution of other minds. As in problem-solving more generally, therefore, people simplify in cognitive empathizing (Baker et al., 2017; Polezzi et al., 2008). In this sense, they cognitively "empathice" about other minds, as a species of satisficing in complex problem-solving. They accept simpler, empathicing outcomes, rather than seeking optimal, empathizing ones.

Like other forms of satisficing, this results in two major patterns of cognitive empathicing. To quote Simon (1979, p. 498) again: "decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world." The same distinction applies to cognitive empathicing, when we view it as a type of complex problem-solving. First, people can simplify the representation of other minds, and seek optimal solutions about them. This results in normative cognitive empathicing, which seeks to optimize solutions. People then look for deliberate, calculative reasoning in others. As an example, consider the analysis of preferential choice within classically inspired microeconomics. Theories of this kind assume: (a) that agents are uniformly self-interested and rational; (b) that most agents prioritize utility maximization; and (c) that choices are made by persons in a rational, calculative fashion. In other words, the problems of other minds are simplified, so that solutions can be optimized.

Second, people describe more complex, realistic problems of other minds, and seek satisfactory solutions. This generates descriptive cognitive empathicing, which prioritizes the realistic representation of other minds. To illustrate this approach, consider the analysis of preferential choice in behavioral economics. Such theories assume: (a) that people are influenced by a wide range of factors, including beliefs, emotions, motivations, and commitments; (b) that agents prioritize a range of process and outcome conditions, including utility; and (c) that they make decisions using various principles and logics. From this perspective, other minds are expressions of *Homo sapiens*, rather than *Homo economicus* (Thaler, 2000). Processing this representational complexity requires more resources. To summarize, in descriptive cognitive empathicing, the problems of other minds are more realistic and relatively complex, and solutions are therefore satisfactory, rather than optimal.

Furthermore, implicit cognitive empathicing is an important mechanism of routine and collective mind, which emerge as people mentalize in purposive action (see Becchio et al., 2012; Schneider & Low, 2016). Through implicit cognitive empathicing, groups of people intuit each other's thinking and develop an effortless appreciation of their common beliefs and patterns of reasoning. People thereby attribute comparable mental states and processes to each other. Granted, these representations are imprecise, as a form of implicit mentalization. But they are frequently reliable enough to maintain procedural thought and action. In this way, routines of cognitive empathicing mediate collective mind and choice (see Sutton, 2008). Moreover, as Chap. 3 explains, these characteristics of collectivity do not require the aggregation of individuals' more complex mental processes and states (see Zhu & Li, 2017). Collective mind and choice are not aggregation puzzles. As in other scenarios of routinization, the problem of aggregation fades away, replaced by the downregulation of individual cognitive differences, and the upregulation of shared patterns of cognition.

## **Practical Empathicing**

To manage within their constraints, human agents therefore develop practical, heuristic methods of representing and solving other minds. For example, they rely on cultural signs and symbols, and use these to infer others' mental states and worlds (Morris et al., 2015). Going further, businesses use information about people's demographic and consumption patterns to predict their future preferences. Sporting teams assume that the opposition knows the rules of the game and will probably adhere to them. Other things being equal, practical empathicing assumes that other minds are realistic, reasoning, and self-regulated, at least to the degree required for organized social life (Bandura, 2002). Although when human emotion and idiosyncrasy intrude, anything might happen.

Modern systems of justice exhibit similar patterns. Courts and juries review information about agents' actions and utterances, predicated on the assumption that people are reasoning agents, and that it is possible to infer and assess their cognitive states and processes. In contrast, demonstrable mental illness or deficits can be a defense. To formalize all of this, some scholars argue that, in principle, it is possible to optimize cognitive empathy about others' motives and reasons. This leads to theories of justice which assume universal principles of optimal cognitive empathy. John Rawls (2001) broadly supports this position. He believes it is possible to attain a view from everywhere, *sub specie aeternitatis*, at least about fundamental features of other minds. Others disagree. Amartya Sen (2009), for example, argues that cognitive empathy is consistently and inherently incomplete. Other minds are forever partially opaque or translucent. From this perspective, justice is deeply contextual, informed by cultural context, position, and commitments. That said, both perspectives assume that agents can be understood as reasoning and self-regulated, at least to a significant degree. They disagree about the limits of cognitive empathy in these contexts and, hence, about how much is accessible to external mentalization.

Cognitive empathicing is equally important for civic and political institutions. Via empathicing, communities build consensus by exchanging ideas and debating policies and principles of governance (Scanlon, 1998). This entails the widespread exchange of opinion, which is common to contractarian and communitarian perspectives. All are modern political visions, in this respect, because they acknowledge the importance of reasoned intersubjectivity. Cognitive empathizing is therefore critical to the functioning of such systems. It allows people to recognize the intelligent agency of others and sustain a sense of collective mind. Liberal democracy is certainly reliant on these mechanisms, and hence vulnerable to their disruption (Bandura, 2006). Not surprisingly, therefore, autocrats often try to subvert these processes. They try to control, rather than liberate, cognitive empathy. In these respects, autocracies are modern too, because they also recognize the force of collective mind and then try to stifle it, perhaps by cultivating "false consciousness," as Marx and Engels once argued (Kołakowski & Falla, 1978). Some worry that digitalization heightens this risk, by giving more power to power (Helbing et al., 2019).

Figure 5.1 summarizes the resulting metamodels of cognitive empathizing. It mirrors the analysis of general problem-solving shown in Fig. 4.1 in Chap. 4. The new figure shows the two major components of cognitive empathizing: the representation of problems of other minds, and the solutions to such problems. For both dimensions, the figure shows their complexity as high or low. First, quadrant 1 summarizes the ideal, optimizing metamodel of cognitive empathizing, consisting of highly complex, best solutions to highly complex, best representations of other minds. Hence, I use the term empathizing here, rather than empathicing. Quadrant 2 then shows descriptive, cognitive empathicing, consisting of less complex, satisfactory solutions to complex representations of other minds. Next, quadrant 3 summarizes normative, cognitive empathicing, which is complex solutions to simplified representations of other minds. And quadrant 4 shows practical cognitive empathizing, which is finding satisfactory solutions to simplified problems of other minds. This metamodel is therefore another type of empathizing, neither optimizing nor empathicing.
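Purely as an illustrative sketch, the four quadrants can be encoded as a lookup from the two complexity dimensions to the metamodel they define. The dictionary, function name, and high/low encoding below are my own assumptions, not part of Fig. 5.1 itself:

```python
# Illustrative taxonomy of the four metamodels of cognitive empathizing.
# Each metamodel is indexed by the complexity (high/low) of its problem
# representation and of its solution search.

METAMODELS = {
    # (representation, solution) -> metamodel
    ("high", "high"): "ideal optimizing empathizing (quadrant 1)",
    ("high", "low"): "descriptive empathicing (quadrant 2)",
    ("low", "high"): "normative empathicing (quadrant 3)",
    ("low", "low"): "practical empathizing (quadrant 4)",
}


def classify(representation: str, solution: str) -> str:
    """Return the metamodel for a given pair of complexity levels."""
    return METAMODELS[(representation, solution)]


print(classify("high", "low"))  # descriptive empathicing (quadrant 2)
```

The point of the sketch is simply that the taxonomy is exhaustive: every combination of representational and solution complexity lands in exactly one quadrant.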

## **Empathizing and Discrepancy**

It is also important to note that the object of cognitive empathicing is frequently a form of satisficing itself. That is, people empathice in understanding others' satisficing. Put another way, people use empathicing heuristics to represent and solve others' cognitive heuristics. Once again, this is common in cultural interactions, in which people often rely on cognitive shortcuts to read other minds (Henrich et al., 2001). It follows, therefore, that cognitive empathicing admits a range of mental performances by others, rather than fixed patterns of belief and reasoning. Empathicing agents are frequently insensitive, therefore, to cognitive variance in others. They neither perceive nor evaluate discrepant reasoning in a determinate fashion (see Wood & Rünger, 2016). Cognitive empathicing thus maximizes and people grant each other mental slack. That said, when others' performances fall below acceptable minima, cognitive empathicing will trigger the perception of significant discrepancy in other minds. Other persons then appear unrealistic, irrational, or deviant in some way. Extreme cases can trigger cognitive antipathy, meaning others are perceived as dangerously deviant or irrational (Nath & Sahu, 2017).

Evidence supports this analysis. Studies show that humans have bounded empathizing capabilities, meaning they are limited in the capacity to monitor and assess others' cognitive states and performances (Fiedler, 2012). Hence, there is always a degree of variability in the perception and assessment of others' cognitive limits and, hence, in perceiving cognitive discrepancy. In fact, to claim that cognitive empathicing references fully determinate aspiration levels is to perpetuate the rationalist ideals of classical theory. Simon (1955, p. 111) made the same point in his original, groundbreaking exposition of aspiration and satisficing: "…there are certain dynamic considerations, having a good psychological foundation … as the individual, in his exploration of alternatives, finds it *easy* to discover satisfactory alternatives, his aspiration level rises; as he finds it *difficult* to discover satisfactory alternatives, his aspiration level falls." The same dynamic process is found in cognitive empathicing.
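Simon's dynamic can be sketched as a simple adjustment rule: the aspiration level drifts upward when satisfactory alternatives prove easy to find, and downward when they prove difficult. The step size, sampling procedure, and "rich" versus "poor" environments below are hypothetical assumptions of mine, intended only to make the mechanism concrete:

```python
import random


def adapt_aspiration(values, aspiration=0.5, step=0.05, rounds=100, sample=5):
    """Adjust an aspiration level as satisfactory alternatives prove easy
    or hard to find, following Simon's (1955) dynamic: aspiration rises
    when a small sample contains a satisfactory option, and falls when
    it does not."""
    for _ in range(rounds):
        alternatives = random.choices(values, k=sample)
        if any(v >= aspiration for v in alternatives):
            aspiration += step  # satisfactory options are easy to find
        else:
            aspiration -= step  # satisfactory options are hard to find
    return aspiration


random.seed(0)
# In a rich environment aspiration drifts up; in a poor one it drifts down.
rich = adapt_aspiration([random.random() for _ in range(50)])
poor = adapt_aspiration([random.random() * 0.2 for _ in range(50)])
print(rich, poor)
```

The simulation settles near the best available alternatives in each environment, which is the dynamic, non-fixed aspiration level the passage describes.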

Granted, some instances of cognitive empathizing approach full determination and fixed aspiration levels, especially when reasoning must be highly systematic. For example, in some highly technical domains—such as the piloting of aircraft—we hope that the responsible agents think clearly and interpret each other almost perfectly. To be sure, classical theories aspire in this direction. Recall the earlier analysis of classical microeconomics. But these situations are an important subclass of the problems of other minds, not the whole universe (Sen, 1997). Many forms of cognitive empathizing are empathicing, entailing a range of satisfactory performances and, consequently, a degree of insensitivity to variance. Empathic translucence, or partial transparency, is often appropriate and effective.

## **Digitalization of Cognitive Empathizing**

Turn next to the impact of digitalization on these processes. As noted previously, digital innovations allow people to share their mental lives in real time, on a global scale, even if much remains opaque. In addition, newer digital technologies can simulate human expression and emotion. Affective computing and artificial personality will soon be commonplace (Poria et al., 2017). In fact, the augmentation of empathy and personality is now a major field of computer engineering, already finding applications in education, automobiles, the office, and the home. For all these reasons, digitalization will transform cognitive empathizing. Over time, augmented agents will be capable of richly descriptive, rapid cognitive empathizing, at every level of agentic modality and mind. Even so, natural human limitations will persist, and augmented agents will often inherit cultural stereotypes and biases about other minds.

Hence, divergence can occur in cognitive empathicing, as in other areas of augmented problem-solving. Humans may remain myopic and not sample or search other minds far enough, while artificial agents may be hyperopic and sample and search other minds too extensively. Moreover, each type of agent could easily reinforce the inherent tendencies of the other. The overall result will be divergent patterns of oversimplification and over-complexity, in the representation and solution of other minds. In short, cognitive empathicing will be highly ambiopic (Liu et al., 2017). Alternatively, the system could be overly convergent. Artificial components might overwhelm the human, or vice versa. Whether by default or design, cognitive empathicing could be hijacked by artificial or human agency.

## **Summary of Augmented Cognitive Empathizing**

Based on the preceding discussion, we can summarize the features of digitally augmented cognitive empathizing, conceived as a type of complex problem-solving. First, empathizing integrates two main processes: the sampling and representation of problems of other minds, and the search for solutions to such problems. Second, these systems often include encoded cultural and other commitments, which guide sampling and search. Third, much cognitive empathizing is empathicing and vulnerable to overly myopic and hyperopic tendencies, resulting in highly ambiopic outcomes. And if this occurs, other minds are more likely to appear unrealistic, irrational, or deviant, and hence less trustworthy. Fourth, implicit cognitive empathicing is central to mental routine and collective mind, which do not require any process of aggregation. Fifth, agents are partially insensitive to variance in others' reasoning, given that empathicing simplifies and approximates. Many of these topics are already foci of research in computer science, for example, in sentiment analysis and artificial personality (Amodeo et al., 2018; Burke et al., 2013). What is not yet adequately understood is how they will impact cognitive empathizing in augmented agency, and especially comprehension and trust between human and artificial agents, which are critical for their collaborative supervision.

# **5.2 Metamodels of Cognitive Empathizing**

This section illustrates representative metamodels of augmented cognitive empathizing. The illustrations adapt the earlier analysis of augmented problem-solving in the preceding chapter. This reflects the fact that cognitive empathizing can be understood as a type of complex problem-solving. Hence, the figures presented below are like those in Chap. 4, although the new figures differ in one obvious, critical respect. Rather than illustrating problem-solving in general, the figures will illustrate cognitive empathizing, including the depiction of empathicing. Also, the following discussion focuses primarily on digitally augmented mentalizing capabilities at level L3, rather than the lesser capability level L2.

## **Highly Ambiopic Empathizing**

Some augmented cognitive empathizing is highly ambiopic. This will be the case when the representation and solution of other minds are very divergent, in terms of their relative complexity or simplification. Figure 5.2 illustrates this type of system, adapting Fig. 4.3 from Chap. 4. In the new figure, the axes show the two major activities of empathizing, being the representation of problems of other minds, and the search for solutions to such problems. Each dimension ranges from low to high complexity. The figure also depicts the natural limits of modern, moderately assisted mentalizing capabilities, labeled L2, and the higher level of digitally augmented mentalizing capabilities, labeled L3. These capabilities again reach limiting asymptotes of high complexity for both problem representation and solution search.

Figure 5.2 depicts alternative metamodels of cognitive empathicing, which optimize on one dimension and simplify on the other. First, the metamodels labeled N2 and N3 are normative cognitive empathicing, exploiting augmented mentalizing capabilities at levels L2 and L3, respectively. These metamodels combine optimizing solutions about relatively simplified problems of other minds. This implies myopic problem representation, plus hyperopic solution search. Notably, problem representation does not change between N2 and N3, which suggests the persistence of human sampling myopia in these metamodels. Prior simplifications persist, that is, in the representation of other minds, despite the increase in capabilities at L3. This results in extreme normative empathicing. Granted, this could sometimes be appropriate, especially when others' cognitions relate to core human values, commitments, or cultural norms. Finally, there is a range of empathicing models, shown by the curved intersections of N2 with L2 and N3 with L3. All options along these intersections are no worse than each other and hence can be maximizing.

**Fig. 5.2** Highly ambiopic empathicing

Second, the metamodels labeled D2 and D3 indicate descriptive cognitive empathicing, again exploiting mentalizing capabilities at L2 and L3, respectively. They combine complex problems of other minds with simpler, satisfactory solutions. This implies hyperopic sampling and problem representation, with myopic solution search. Importantly, the solutions of other minds in D2 are translated to D3 without any increase in complexity. Priors persist in solution search, despite the increase in capabilities. This results in extreme descriptive empathicing. For example, consider the digitalization of collective choice, in which hyperopic sampling results in complex problem representations, which are resolved using simple choice procedures. Once again, a range of possible empathicing models is shown by the curved intersections of L2 with D2 and of L3 with D3. It is also notable that N3 and D3 only partially overlap in practical empathizing P3. Furthermore, owing to the persistence of myopic commitments, these options have not expanded and P2 is equivalent to P3. This may be adequate for everyday life, but neither problem representation nor solution search is well specified.

Now assume a poorly supervised, augmented agent which combines N3 and D3. Myopic human priors remain entrenched, and artificial hyperopia is largely unchecked. Hence, both the representation and solution of problems of other minds will be anchored in human myopias. At the same time, artificial processing is highly hyperopic. In these situations, there are two potential patterns of distortion. Either augmented agents will adopt overly simplified explanations of overly complex representations of other minds, that is, extreme descriptive cognitive empathicing (D3); or alternatively, they will adopt overly complex explanations of overly simplified representations of other minds, that is, extreme normative cognitive empathicing (N3). Moreover, sometimes they might do both at the same time. In all scenarios, agents are more likely to perceive other minds as discrepant, deviant, and irrational. We already see such effects in the rise of online antipathy between different groups, where artificial systems reinforce and amplify encoded biases. Critical questions therefore arise: how can augmented agents supervise the retention or relaxation of human commitments in cognitive empathizing? Relatedly, how can they manage the risks of myopia and hyperopia in these contexts, and maximize metamodel fit? And finally, how can digitalization enhance cognitive empathy and trust?

## **Non-ambiopic Empathizing**

Other agents will be non-ambiopic in cognitive empathizing. Problem representation and solution search will exhibit comparable degrees of complexity. This implies that both myopia and hyperopia are relatively low, and the agent is balanced in this respect. The result is a type of practical empathizing, in which problem representation and solution search are of comparable complexity. Indeed, digitalization makes this kind of empathizing increasingly feasible, because digital augmentation enables detailed, precise, and intelligent sampling of other minds, combined with rigorous solution search. For the same reason, there will be fewer trade-offs. Figure 5.3 illustrates this type of metamodel, building on the illustration of non-ambiopic problem-solving in Fig. 4.3. Once again, the axes represent the two major components of cognitive empathizing, with two levels of mentalizing capabilities, labeled L2 and the higher level, labeled L4. The metamodels at level L2 are the same as in the previous figure, and therefore do not warrant repeated description.

**Fig. 5.3** Non-ambiopic empathizing

Notably, the metamodels labeled D4 and N4 are both non-ambiopic. Hence, neither is clearly descriptive nor normative, in the classic sense. Rather, both are equivalent to practical empathizing P4. In fact, all three metamodels overlap. Cognitive empathizing has conflated into the same set of digitally augmented processes. Such metamodels have been very unusual in ordinary human empathizing, although they are observed among scientists and some expert professionals, who form deep, clear understanding of each other's thoughts and intentions, often doing so with the support of sophisticated technologies. Indeed, a primary goal of the scientific method is to reduce subjective variance and instill precision into the reading of other minds. In these contexts, low ambiopia is appropriate.

Any agents who empathize in this fashion, therefore, will likely view each other as fully realistic, rational, transparent, and trustworthy. By the same token, however, this kind of empathizing will homogenize other minds, by making agents highly commensurable to each other. Some may welcome this development, hoping for better understanding of other minds. It will certainly help in complex, technical task domains. Yet others will regret the trend, concerned that digital augmentation will smother intersubjective diversity and intuition. Indeed, as noted previously, empathic ambiguity and a degree of opacity or translucence are valued in some contexts, especially in creative artistic and innovative pursuits (March, 2006). In these respects, the conflation of D4, N4, and P4 represents the radical transformation mentioned earlier. Subjectivity itself has been fully augmented by digitalized perception, thought, and feeling. As a result, however, ordinary intuition and instinct are bleached.

## **Summary of Cognitive Empathizing**

In summary, there are reasons to hope and to be wary. Fully digitalized empathizing (shown in Fig. 5.3) could be accurate and transparent, but homogenized and lacking human intuition and instinct, whereas humanized empathizing (shown in Fig. 5.2) will be diverse and intuitive, but translucent and less accurate. Figure 5.4 summarizes the overall patterns. The vertical dimension shows the level of cognitive empathy versus antipathy. The horizontal dimension shows the degree of ambiopia in cognitive empathizing, moving toward extreme descriptive empathicing on the left and toward extreme normative empathicing on the right. Six patterns of cognitive empathizing are positioned on these dimensions.

First, the figure shows the two modern metamodels in Figs. 5.2 and 5.3 (labeled D2 and N2), which are moderately ambiopic, descriptive and normative empathicing, respectively. Both exhibit moderate cognitive empathy. Second, the figure also shows the two metamodels in Fig. 5.2 (D3 and N3), which represent highly ambiopic, extreme descriptive and normative empathicing, respectively. In fact, both systems exhibit moderate cognitive antipathy. Third, Fig. 5.4 positions the two metamodels in Fig. 5.3 (D4 and N4), which represent non-ambiopic descriptive and normative patterns of empathizing, respectively. Both metamodels exhibit high potential for cognitive empathy, although, as noted earlier, the risk is less diverse and intuitive empathizing. In summary, there are three broad patterns in Fig. 5.4, from high cognitive transparency and empathy, to balanced cognitive translucence and empathy, to cognitive opacity and antipathy. And depending on the context, each could be a good fit, even antipathy sometimes. However, moderate ambiopia and translucence will often be most advantageous, especially in ordinary life, because they foster reliable cognitive empathicing while also allowing for diversity and intuition.

**Fig. 5.4** Summary of cognitive empathicing

# **5.3 Wider Implications**

For good and ill, digitalization is transforming cognitive empathy. Digital innovations are rapidly augmenting the capability to perceive and read other minds. Yet human limitations and biases persist. As in all complex problem-solving, therefore, augmented agents will simplify when reading other minds. They will cognitively empathice. Two major patterns of distortion are most likely. First, augmented agents could embed myopic human priors, which are then reinforced by hyperopic, artificial sampling and search. This will result in highly ambiopic cognitive empathicing, the misunderstanding of other minds, and in extreme cases, cognitive antipathy. Second, augmented agents could adopt fully artificial supervision, which expunges human priors and intuition altogether. This produces non-ambiopic cognitive empathizing, but it suppresses human intuition and instinct. Granted, other minds would be more transparent and predictable, but valued features of intersubjectivity would be lost.

## **The Understanding of Other Minds**

Digitalization is therefore simultaneously empowering and endangering cognitive empathy. On the one hand, artificial intelligence, affective computing, and social networks allow people to share and comprehend more about each other's mental lives, their patterns of belief and reasoning (Baker et al., 2017). Cognitive empathizing is potentially enhanced. On the other hand, if augmented empathizing is poorly supervised, agents are prone to misunderstanding, misjudgment, and mistrust, or alternatively, bleached of diversity and intuition. Studies already report examples of such effects (Noble, 2018; Osoba & Welser, 2017). At scale, this would undermine social cohesion as well as freedom of thought. However, existing theory does not adequately capture or conceptualize these phenomena. In response, this chapter introduces novel mechanisms and constructs: the hyperopic sampling and search of other minds; ambiopic empathizing, which combines myopic and hyperopic processes; and resulting patterns of descriptive and normative cognitive empathicing. Furthermore, my argument introduces insights from the psychology of mentalization and cognitive empathy. In fact, this work is among the first to import these constructs into behavioral thinking about agency and organization (see Polezzi et al., 2008).

These novel constructs and mechanisms have wide implications. For example, prior research shows that collective action, agency, memory, and mind all rely heavily on the shared assumption that other persons are realistic and reasonable (Bandura, 2007). Agents derive shared meaning and trust from cognitive empathy (Brickson, 2007). Otherwise, civility, comity, and docility are unsustainable. Groups and collectives also rely on cognitive empathy to develop transactive memory systems, shared mental models, and routines (Argote & Guo, 2016). Digital augmentation could radically enhance or erode these collective attributes. In terms of positive enhancement, augmented agents could achieve deeper, more appropriate cognitive empathy. In terms of negative erosion, poor supervision will lead to inappropriate, empathicing extremes. New risks therefore emerge. First, cognitive empathizing could amplify, rather than mitigate, persistent myopias and biases, leading agents to view each other as unrealistic, irrational, deviant, or worse. Second, artificial determination could suppress intersubjective intuition, autonomy, and diversity. Collective mind and memory would be at risk too, making cultural adaptation and re-grounding more difficult, even as such changes are urgently needed. New regulatory systems will be needed to monitor these risks and protect the autonomy of mind.

## **Empathy for Commitments**

My argument further highlights the role of commitments—ontological, epistemological, and ethical—by which agents interpret and assess other minds. As in problem-solving more generally, such commitments emerge over time, are often culturally embedded, and find institutional expression (Scott & Davis, 2007). Many are deeply imprinted in collective patterns of thought and identity (Sen, 1985). Moreover, even if contexts and capabilities change, people tend to retain their prior commitments. That said, it is important to recognize that the resilience of such commitments can have positive effects. Collective mind and identity, shared routine, and cultural norms are all sustained by the continuity of commitments about other minds (see Higgins et al., 2021; Sutton, 2008). Rapid adaptation and full transparency could be destabilizing. In fact, they already are in some digitalized contexts. These risks will grow.

People's empathic commitments play another critical role. They help to anchor the self and community, by defining what it means to be a realistic, rational, and ethical person in the world. Civilized humanity is grounded in such commitments. Hence, they will be important inputs into the collaborative supervision of augmented empathizing. At the same time, however, digitalization could disrupt these commitments, if artificial agency becomes dominant and overwhelming. It will be important, therefore, for augmented agents to respect and incorporate empathic commitments, irrespective of their ontological status. For even if these features of conscious life do not grant access to fundamental reality, they do capture what matters from a human point of view. In this respect, at least, digitally augmented empathizing may support a shared view from everywhere, as Rawls (2001) envisioned, albeit veiled in translucence.

## **Future Investigations**

Jerome Bruner (1996) was certainly prescient. Research into human intersubjectivity has blossomed over recent decades. This includes studies of mentalization and cognitive empathy, using a range of behavioral and experimental methods, such as functional magnetic resonance imaging (fMRI) (Schnell et al., 2011) and neurocognitive techniques (Walter, 2012). Similar methods are employed in the study of behavioral decision-making and neuroeconomics (Camerer, 2017; Park et al., 2017), which is good news, because the same methods can be used to investigate the neurological and behavioral bases of cognitive empathicing, and how it will be supervised in digitalized contexts (e.g., Contreras et al., 2013; Lombardo et al., 2010). It is also important to note that these methods rely on advanced technologies which transcend ordinary introspection.

In like fashion, testing of this chapter's proposed mechanisms could exploit the techniques of artificial intelligence and affective computing. For example, digital simulations might vary and control for encoded myopias and hyperopias in sampling and search, thereby predicting the effects of augmented cognitive empathizing (Baker et al., 2017; Wiltshire et al., 2017). Subsequent studies could then interrogate the massive databases which already exist—including social network data—and use reinforcement learning to test predictive power, asking how and why agents perceive others as cognitively empathic, discrepant, or deviant. Results could provide techniques for enhancing cognitive empathy within digitally augmented teams and organizations, online networks, and other settings (e.g., Decety & Cowell, 2014; Muller et al., 2014). There is much that can and should be done.

## **References**



# **6 Self-Regulation**

From a modern perspective, people should manage their own thoughts, feelings, and actions and thereby exercise autonomous self-regulation. Here again, Bandura (2001) is a leading scholar on the topic. In his social cognitive theory, he explains how individuals, groups, and collectives achieve such autonomy by developing self-efficacy, which is the feeling of confidence to perform in specific task domains through self-regulated action. Self-efficacy thereby strengthens self-regulation, and self-regulation strengthens self-efficacy. Both mechanisms are complementary and reciprocal. Bandura also notes that effective performance is contingent on access to appropriate resources and opportunities, and on the development of capabilities. This helps to explain why Bandura (2007) and other scholars of self-regulation pay special attention to learning, and to the factors which limit the development of self-regulatory capabilities (Cervone et al., 2006; Ryan & Deci, 2006). By focusing on these questions, research into self-regulation reflects the ambition of enlightened modernity, to liberate and empower autonomous human agency.

Bandura and others fueled a blossoming of research on this topic during the late twentieth century. Not by coincidence, their efforts paralleled the rise of cognitive science, neuroscience, cybernetics, and computer science (Mischel, 2004). In many fields, scientists were exposing the deeper mechanisms of intelligent processing. Their discoveries inspired new understanding of human agency, self-regulation, and its functional companions, metacognition and self-supervision. Particularly from a systems perspective, human agents can be viewed as complex, open, adaptive, situated, and responsive, and also as agents which self-monitor, self-regulate, and self-supervise, to significant degrees. Human personality can be understood in this way too, not simply as an expression of fixed traits or conditioned responses. Chapter 1 explains this ecological perspective as viewing "persons in context." Chapter 2 also relies heavily on this perspective, to develop historical metamodels of agency.

## **Social Cognitive Perspectives**

Reflecting the contextual nature of self-regulation, many leading scholars on the subject are social cognitive psychologists, including Bandura. Social cognitive self-regulation allows agents to monitor and adapt to changing contexts and develop domain-specific self-efficacies. Theories therefore integrate major cognitive-affective processes into self-regulatory functioning: encodings and beliefs about the self, affective states, goals and values, motivations, competencies, and self-regulatory schemes (Shoda et al., 2002). Most theories of self-regulation combine these factors, although they do so in different ways. For example, Baumeister (2014) places more emphasis on attention and affective states, as primary sources of self-regulatory strength and capability. In contrast, Carver and Scheier (1998) emphasize the role of goals and control mechanisms. Higgins (2012) and his collaborators put special weight on agents' core motivations and the experience of value. Yet, irrespective of emphasis, all agree that self-regulatory capabilities are critical and rarely develop unassisted. They require effective parenting, education, and social modeling, as well as natural capability. Without such support, self-regulation must rely on instinct and chance, which are poorly efficacious in the modern world.

Each theorist therefore integrates social, cognitive, and affective factors, views self-regulation as fundamental to agentic functioning, and highlights contextual variance. Most also recognize and seek to mitigate the limitations of self-regulatory capability. Indeed, one of the main functions of self-regulation is to manage such limitations, for people can only achieve so much, whichever mechanism is operative. Actual self-regulation regularly falls short of aspirations, and almost never matches ideals (Higgins, 1987). There are consistent trade-offs and compromises: between short- and long-term goals, ideal and actual outcomes, individual and collective priorities and commitments, and schematic complexity and processing rates. Processing rate, in this context, is defined as the number of full cycles of self-regulation which can be completed per unit time. Schematic complexity is defined as the number of distinct steps and interactions required for a self-regulatory process to complete. Importantly, these definitions of processing rate and schematic complexity apply to artificial self-regulation as well (see Den Hartigh et al., 2017). In fact, the rest of this chapter will focus primarily on these two aspects of self-regulation: processing rate and schematic complexity. My analysis is selective, in this respect. The reason is that these characteristics are fundamental to both human and artificial self-regulation and capture important similarities and differences between the two types of agents. That said, I acknowledge that other features of augmented self-regulation will require future investigation.

## **Rate and Complexity**

Owing to their limited capabilities, human agents need to balance self-regulatory processing rate and schematic complexity, because both consume limited resources. Put simply, owing to limited capabilities, the higher the processing rate, the lower the schematic complexity, and vice versa (Fiedler et al., 2020). This results in two major options. Both entail trade-offs, which parallel those in complex problem-solving. First, people can try to optimize self-regulatory processing rates, responding quickly to signals and stimuli. To be sure, fast self-regulation is often advantageous for survival, especially in competitive or threatening situations. Evolution favors this characteristic. When immediate threats erupt, a fast response is typically more important than schematic complexity. But owing to limited capabilities and other inputs, agents must then simplify the self-regulatory scheme. They may need to rely on simple heuristics when fleeing from danger, for example, leave everything behind and run. Second, people can seek to optimize the complexity of self-regulatory schemes, to ensure outcomes are precise and complete, often by employing careful, calculative procedures. This type of self-regulation is important in extended, complex goal pursuit, or when potential gains and losses lie in the future. But in consequence, agents must be content with a slower processing rate. For example, scientific research and vocational training often require complex self-regulatory schemes which take time to complete.
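The inverse relation between rate and complexity can be made concrete with a toy budget model: if a fixed resource budget must cover every step of every self-regulatory cycle, then the more steps a scheme contains, the fewer full cycles complete per unit time. The function and budget figure below are hypothetical illustrations of my own, not drawn from the cited research:

```python
def processing_rate(budget: float, schematic_complexity: int) -> float:
    """Full cycles of self-regulation completed per unit time, assuming
    a fixed resource budget: each cycle costs one resource unit per
    step in the scheme, so more steps mean fewer completed cycles."""
    if schematic_complexity < 1:
        raise ValueError("a scheme needs at least one step")
    return budget / schematic_complexity


# A simple flight heuristic ("leave everything and run") completes many
# cycles; a careful, calculative protocol completes few.
print(processing_rate(100, 2))   # 50.0 cycles per unit time
print(processing_rate(100, 40))  # 2.5 cycles per unit time
```

The design choice is deliberate: holding the budget constant forces exactly the two options the text describes, a fast rate with a simple scheme, or a complex scheme at a slow rate.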

In fact, both scenarios stretch human agents to the limits of their capabilities, whether they seek to optimize the self-regulatory processing rate and adopt a simpler scheme, or seek to optimize schematic complexity and then self-regulate at a slower rate. Both scenarios can be very demanding. Choosing which to employ will depend on the type of agentic modality; the urgency and complexity of the task; its relation to values, goals, and commitments; its potential impact; plus the agent's self-regulatory capabilities. Additional important factors include self-efficacy, goal orientation, and temporal frame, which reflect the desire for development and future gains, and/or to maintain existing conditions and prevent short-term losses (Higgins, 1998).

# **Impact of Digital Augmentation**

Nevertheless, even though humans are limited in self-regulatory capabilities, some become experts in specific task domains, and hence very skilled, highly self-efficacious self-regulators. In the contemporary world, this often entails technological assistance. To illustrate, consider contemporary clinical medicine, in which doctors work with artificial agents to diagnose and treat disease. Granted, doctors also rely on their personal experience and intuition, but human insights are increasingly complemented by artificial agents. As a result, digitally augmented medicine increases overall self-regulatory processing rates and schematic complexity. Clinical practice becomes more timely, precise, personalized, and efficacious. In this fashion, digital augmentation is transforming clinicians' self-regulation. As Bandura (2012, p. 12) observes:

Revolutionary advances in electronic technologies have transformed the nature, reach, speed, and loci of human influence. People now spend much of their lives in the cyberworld. Social cognitive theory addresses the growing primacy of the symbolic environment and the expanded opportunities it affords people to exercise greater influence in how they communicate, educate themselves, carry out their work, relate to each other, and conduct their business and daily affairs.

Central to this transformation are digitalized, intra-cyclical feedforward mechanisms of self-regulation. Via these mechanisms, augmented agents will rapidly update self-regulatory schemes within processing cycles, not only between them, and often in real time. This type of feedforward process is illustrated in Fig. 2.3, which depicts intra-cyclical, feedforward updating in digitally augmented agency. Chapter 2 further explains that these mechanisms involve novel entrogenous mediators: intelligent sensory perception, performative action generation, and contextual learning. Figure 2.4 illustrates the core principles of such entrogenous mediation. In relation to self-regulation, the main mediator of this kind will be performative action generation, whereby augmented agents dynamically update action plans during performances. Feedforward is therefore an important source of self-regulation for augmented agents, complementing inter-cyclical performance feedback.

One major consequence of this shift is that self-regulation will be more prospective, forward looking, proactive, and intelligent (Bandura, 2006). Processing rates and schemes will be subjects of self-regulation as well, adjusting in real time during the generation and performance of action. Autonomous artificial agents already function in this way, especially those which are fully self-generative and self-supervising. Moving forward, augmented self-regulatory processes will be equally intelligent and dynamic. Early evidence of this shift can already be seen in the everyday use of smartphones and digital assistants, which augment the self-regulation of human relationships, preferential choice, goal pursuit, and more.

# **Self-Regulatory Dilemmas**

However, as artificial capabilities expand, and self-regulation becomes more complex and rapid, augmented agents will encounter new tensions and conflicts. Poorly supervised self-regulation could become dysfunctional, especially at the level of intra-cyclical entrogenous mediators. First, artificial and human agents often exhibit different processing rates. In many contexts, humans are relatively sluggish in self-regulation, processing more slowly over cultural and organizational cycles (Shipp & Jansen, 2021). By comparison, artificial agents are increasingly hyperactive, cycling quickly. When combined, these divergent rates could lead to dyssynchronous processing, meaning different aspects of self-regulation process at different rates, and therefore lack synchronization (see van Deursen et al., 2013). For example, in the self-regulation of problem-solving or cognitive empathizing, relatively sluggish human self-regulation of sampling and search could combine with hyperactive artificial self-regulation of the same functions. Compounding this divergence, the fast intra-cyclical mechanisms of artificial self-regulation will often be inaccessible to human consciousness, further impeding coordination. The overall result is dyssynchronous processing, with artificial systems self-regulating rapidly and humans relatively slowly. Comparable problems occur in automated control systems, and also in artificial neural networks, in which some updates lag for various reasons (Zhang et al., 2017).

Second, artificial and human agents exhibit different levels of schematic complexity. Human self-regulatory schemes are frequently simplified and heuristic, often for good reasons: simple schemes facilitate effective functioning in everyday life. By comparison, artificial self-regulatory schemes are increasingly complex and expansive, supervising and regulating massive networks and processes. When these different characteristics combine in augmented agents, self-regulation may be discontinuous, meaning there are gaps in self-regulation at different layers and levels of detail. Human self-regulatory processes will tend toward simpler schemes, while artificial schemes are precise and complex. To illustrate, consider the self-regulation of augmented problem-solving once again. Human self-regulatory heuristics in problem sampling might combine with complex, algorithmic self-supervision of solution search. The outcome will be discontinuous self-regulation of problem-solving, with gaps and possible conflicts in sampling and search.

In summary, depending on the quality of their collaborative supervision, augmented agents may combine relatively sluggish human self-regulatory processing with hyperactive artificial processes, resulting in highly dyssynchronous self-regulation; they may also combine simplified human self-regulatory schemes with complex artificial schemes, resulting in highly discontinuous self-regulation. Moreover, these patterns are another example of poorly supervised entrogenous mediation, and especially of performative action generation. Indeed, it is difficult to integrate the rapid, intra-cyclical feedforward updates generated by artificial processing with the slower, inter-cyclical feedback updates generated by human processes. Overall self-regulation becomes dysfunctional. And as earlier examples show, this would compound the ambiopic distortions discussed in Chaps. 4 and 5, because poor self-regulation increases the risks of extreme myopia and hyperopia in sampling and search.

## **Ambiactive Self-Regulation**

To conceptualize this self-regulatory dilemma, I import another new term, this time from biology: "ambiactive," which refers to processes that simultaneously stimulate and suppress a property or characteristic. For example, microbiologists use this term for processes which simultaneously stimulate and suppress aspects of gene expression (Zukowski, 2012). In this chapter, the term "ambiactive" refers to processes which simultaneously dampen and stimulate the same feature of self-regulation, specifically, processing rates and schematic complexity. Hence, self-regulation by augmented agents will often be ambiactive, because it simultaneously suppresses and stimulates processing rates, and/or suppresses and stimulates levels of schematic complexity, among human and artificial collaborators.

However, ambiactive self-regulation is not inherently dysfunctional. As with ambimodality and ambiopia, a moderate level of ambiactivity is often advantageous in dynamic contexts. This is because, when environments are uncertain and unpredictable, ambiactive self-regulation increases the diversity of potential responses. The agentic system is less tightly integrated, in both temporal and schematic terms, making it more flexible and adaptive (e.g., Fiedler et al., 2012). For the same reasons, moderately ambiactive self-regulation helps to stimulate novelty and creativity (March, 2006). The problem is that digital augmentation greatly amplifies these effects and the potential for ambiactivity. If supervision is strong and appropriate, this will be an advantage and enable more dynamic, effective self-regulation. Otherwise, extremely dyssynchronous and discontinuous self-regulation will become more likely, and even probable in some contexts.

Notably, these potential risks and benefits resemble the conditions identified in earlier chapters: ambimodal agency in Chap. 3, ambiopic problem-solving in Chap. 4, ambiopic cognitive empathy in Chap. 5, and now ambiactive self-regulation in this chapter. The reader will quickly notice the common prefix "ambi," meaning "both," which captures the fundamental combinatorics of human-artificial augmentation. In each area of functioning, digital augmentation presents comparable opportunities and risks. There are opportunities to improve form and function and maximize metamodel fit, by adjusting ambimodal, ambiopic, and ambiactive settings. But there are also new risks of extreme divergence or convergence, if supervision is poor. Regarding self-regulation, the major risks stem from ambiactive rates and schemes.

# **Metamodels of Self-Regulation**

Figure 6.1 summarizes the resulting metamodels of self-regulation, in terms of their hyperparameters for processing rates and schematic complexity. Rates are distinguished as hyperactive or sluggish, where artificial agents tend to be hyperactive, and humans are typically sluggish by comparison. The second dimension distinguishes complex from simplified self-regulatory schemes, where artificial agents are increasingly complex, and humans tend to be more simplifying. Given these hyperparameters, Fig. 6.1 shows four resulting metamodels of self-regulation.



**Fig. 6.1** Metamodels of self-regulation

Quadrant 1 shows hyperactive processing of complex self-regulation, forming an ideal, optimizing metamodel of self-regulation. Artificial agents are more likely to attempt this option, given their greater capabilities, whereas humans are unlikely to do so, owing to lesser capabilities. Quadrant 2 shows sluggish processing of complex self-regulation, which results in a scheme-maximizing metamodel, meaning it prioritizes schematic complexity over faster processing rates. Next, quadrant 3 depicts hyperactive processing of simplified self-regulatory schemes, which results in a rate-maximizing metamodel, prioritizing faster processing rates over schematic complexity. Both types of agent are likely to attempt these maximizing options, whether acting independently or together. Finally, quadrant 4 shows sluggish processing of simplified self-regulatory schemes, which results in a practical metamodel of self-regulation: neither fast nor complex, but adequate for the situation at hand. Humans often exhibit this approach in everyday life.
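The four quadrants just described can be summarized as a simple classification over the two hyperparameters. The following sketch is a hypothetical rendering of Fig. 6.1, not part of the original figure; the normalized scales and cut-off thresholds are assumptions for illustration only.

```python
# Illustrative sketch (hypothetical scales and thresholds): classify a
# self-regulatory metamodel into the four quadrants of Fig. 6.1, given
# a processing rate and a schematic complexity, each normalized to [0, 1].
def metamodel(rate: float, complexity: float,
              rate_cut: float = 0.5, cx_cut: float = 0.5) -> str:
    hyperactive = rate >= rate_cut       # vs. sluggish
    complex_scheme = complexity >= cx_cut  # vs. simplified
    if hyperactive and complex_scheme:
        return "Q1: optimizing"
    if not hyperactive and complex_scheme:
        return "Q2: scheme-maximizing"
    if hyperactive and not complex_scheme:
        return "Q3: rate-maximizing"
    return "Q4: practical"

print(metamodel(0.9, 0.2))  # Q3: rate-maximizing
print(metamodel(0.2, 0.2))  # Q4: practical
```

The classification makes explicit that each metamodel is defined by the joint setting of the two hyperparameters, not by either one alone.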

Not surprisingly, given limits and choices, and the need for trade-offs, theories of self-regulation focus on the maximizing options depicted in quadrants 2 and 3 of Fig. 6.1. Optimal self-regulation is a rare achievement in human activity, reserved for experts in specialist domains, for example, modern empirical science. Human agents are more likely to exhibit practical self-regulation in everyday situations, often as habit and routine.

Furthermore, we can map these metamodels of self-regulation to the historical patterns of agency discussed in Chap. 2. To begin with, in premodern times, when agency was replicative (see Fig. 2.1 in Chap. 2), ideal optimizing metamodels were more feasible (quadrant 1 of Fig. 6.1). This is because, given the relative stability, simplicity, and regularity of agentic life, it was possible to self-regulate in a timely, complete fashion, by the criteria of the time. Because rates were sluggish, by contemporary standards, and schemes relatively simple, optimizing self-regulation was at least feasible in such a world. Technological assistance was also minimal, meaning no more rapid or complex options were possible. Of course, not all self-regulation was, or is, optimal in a replicative metamodel of agency. Most of the time, people self-regulate using scheme- or rate-maximizing options, and especially practical self-regulation.

As modernity unfolded, the replicative metamodel gave way to enlightened, developmental ambitions. From a modern perspective, that is, self-regulation is an adaptive process (see Fig. 2.2 in Chap. 2). Human agents should monitor and manage their own goals and choices, develop their capabilities, seek opportunities and learn, all the while becoming more self-efficacious and autonomous in self-regulation. Indeed, as noted earlier, self-regulatory challenges are central to modernity: how can autonomous individual self-regulation coexist with collective self-regulation and responsibility (Giddens, 2013; Sen, 2017)? Two notable solutions to this question are Adam Smith's (1950) invisible hand of market self-regulation and Thomas Hobbes' (1968) leviathan of sovereign self-regulation. Smith's conception is more rate maximizing, as he seeks to explain market dynamism and efficiency assuming a simplified self-regulatory scheme, whereas Hobbes' is more scheme maximizing, given his interest in the complex functioning of the state over time. Importantly, both conceptions eschewed divine intervention and made simplifying trade-offs.

In a digitalized world, by contrast, the extra power of augmented capabilities means that self-regulation is potentially fast and complex. In fact, optimality is again within reach, not because of relative stability and simplicity, as in the premodern period, but thanks to the speed and scale of digitalized capabilities. Mediated by entrogenous intra-cyclical mechanisms, augmented agents will be capable of composing and recomposing their self-regulatory rates and schemes, in real time, to maintain and maximize fit. Self-regulatory potential greatly expands. In this regard, digital augmentation will enable consistent self-transformation and regeneration. This contrasts with modernity, in which incremental adaptation is typical, but self-transformation is harder to attain.

However, potentiality is one thing, and actuality is another. Digitalized self-regulation will require very sophisticated supervision. Augmented agents will have to marry relatively sluggish, simpler human self-regulation with the increasingly hyperactive, complex self-regulation of artificial agents. The challenge of managing ambiactivity is therefore daunting and already evident. Studies show that many people are poor managers of digitalized self-regulation (Kearns & Roth, 2019). They resist, founder, or float on a rising tide of digital innovation, unable or unwilling to take responsibility for augmented being and becoming. I will return to this question in Chap. 9, which examines the implications of digital augmentation for self-generation.

# **6.1 Dilemmas of Self-Regulation**

Digital augmentation therefore expands self-regulatory capabilities and potentialities. By collaborating with artificial agents, humans can self-regulate more rapidly, with higher levels of schematic complexity. However, major supervisory challenges need to be resolved. First, divergent rates might lead to extremely dyssynchronous processing: sluggish human self-regulatory mechanisms combined with hyperactive artificial rates. Second, divergent degrees of schematic complexity could lead to extreme discontinuity: simpler human self-regulatory schemes combined with more complex artificial ones. When processes diverge in this way, digitally augmented self-regulation will become highly ambiactive and dysfunctional. Third, one agent might dominate the other, and self-regulation will be overly convergent, skewing toward human or artificial control. The following sections discuss these dilemmas in greater depth.

# **Self-Regulatory Processing Rates**

Human self-regulation, as noted earlier, is often relatively sluggish, and for good reasons. Many situations neither benefit from nor deserve rapid self-regulation. For instance, much of everyday life moves at behavioral or cultural speed, where thinking and acting more slowly are appropriate. Slower processing is also advantageous in exploratory learning, where speed can lead to premature, less creative outcomes. Though the opposite is true in competitive, risky situations, where fast self-regulation is often better. Human agents are therefore trained to accelerate self-regulation in some task domains, while keeping it slow in others. When such training is successful, it becomes deeply encoded as self-regulatory habit and routine. However, these procedures tend to persist, even when digitally augmented capabilities transcend prior limits, partly because humans are ill-equipped to monitor and manage this type of adjustment. Therefore, people may continue trying to accelerate self-regulation, even as artificial agents do exactly this. But such striving will be misplaced, and easily go too fast. Humans will remain inherently sluggish, while encouraging artificial acceleration. The result will be dyssynchronous self-regulation.

In contrast, artificial self-regulation is inherently hyperactive, again for good reasons. As noted earlier, one of the great strengths of artificial agency is its capability for rapid self-regulatory processing. However, this becomes a potential source of tension as well, especially if artificial agents cannot accommodate relatively sluggish humans. Hence, it is necessary to moderate artificial processing rates, to be more attuned to slower human processes. For example, consider travel by autonomous vehicles. In these contexts, artificial and human agents will collaborate as augmented agents for the shared purpose of efficient, safe, and enjoyable travel. In order to do so, agents will need to align their self-regulatory processing rates, to ensure adequate synchronization (Favaro et al., 2019). If collaborative supervision is poor, however, human processes may operate beyond the reach of artificial monitoring, or vice versa. Artificial agents may continue accelerating self-regulation, while humans remain inherently sluggish, and overall self-regulation will be even more dyssynchronous. In the case of autonomous travel, human response times could contradict or fail to coordinate with artificial controls, risking the safety and security of both vehicle and passenger.

## **Self-Regulatory Schemes**

Artificial agents are equally capable of complex self-regulatory schemes. Indeed, this is another distinguishing strength of artificial agents: they can monitor and regulate many variables, across multiple levels, with great precision. By comparison, humans often adopt simpler self-regulatory schemes. They are far less capable in these respects, and rely on heuristic and imitative schemes, more suited to behavioral and cultural situations. These opposing tendencies can easily exacerbate each other, especially if human and artificial agents are incapable of monitoring each other's schemes and functions. Self-regulation would be simultaneously simple and complex, and hence discontinuous. Augmented agents must therefore learn how to integrate human and artificial self-regulatory schemes, especially in the entrogenous mediation of performative action generation. Often, this will entail the deliberate simplification of some artificial components, while increasing the complexity of human elements.

The example of autonomous vehicles is instructive once again. If supervision is poor, the automated system could adopt a complex self-regulatory scheme, monitoring and managing multiple parameters, and perhaps presenting too many of these to human passengers. At the same time, passengers may adopt simple, heuristic schemes, as they come to rely on the automated system. As a result, the overall self-regulation of vehicles could be discontinuous, with significant gaps emerging between the artificial and human schemes. This will increase both technical and human risks. Automotive engineers already recognize this problem and are working to resolve it. Many of these efforts also address the complementary problem of synchronization. When both problems combine, dyssynchronous and discontinuous processing will lead to ambiactive self-regulation, that is, augmented self-regulation which simultaneously dampens and stimulates processing rates and schematic complexity.

Figure 6.2 illustrates the dilemmas just described. The horizontal dimension shows sequential cycles of self-regulatory processing. Two longer cycles are labeled 1 and 2, and each is further divided into two subperiods, labeled 1.1 through 2.2. Next, the vertical dimension of the figure shows schematic complexity, ranging from low in the center to high in the upper and lower sections. The figure also depicts three levels of processing capability and associated cycles, labeled L1, L2, and L3. As in previous chapters, these levels represent agentic processing capabilities in the premodern, modern, and digitalized periods, respectively.

Now consider the two curved lines labeled L1 and L2. First, the unbroken line labeled L1 exhibits a relatively slow processing rate and low schematic complexity. This corresponds to premodern, replicative metamodels of agency, with relatively sluggish, simplified self-regulatory schemes. Many culturally based forms of self-regulation continue to exhibit such patterns. Given these characteristics, optimal self-regulation is at least feasible in premodern and cultural contexts. Second, the dashed and dotted line labeled L2 illustrates a modern, adaptive metamodel of self-regulation, which iterates fully during each of the major cycles, and with a moderate degree of schematic complexity. Notably, the pattern depicted by L2 (the modern adaptive metamodel) is not fully synchronized or continuous with the pattern depicted by L1 (the premodern replicative metamodel). Processing rates and levels of schematic complexity both diverge, at least within the major temporal periods, because L2 cycles at twice the rate of L1. It therefore requires effortful supervision to ensure that replicative and adaptive self-regulation are adequately synchronized and continuous. Reflecting this challenge, critiques of modernity often highlight the potential for self-regulatory alienation, owing to the intrusion of technological and other external forces into the ordinary rhythms of cultural life (Ryan & Deci, 2006).

Next, the fully dashed line L3 depicts self-regulation within the digitalized, generative metamodel of augmented agency, illustrated by Fig. 2.3 in Chap. 2. As Fig. 6.2 shows, this type of self-regulation cycles more rapidly and intra-cyclically, relative to the longer cycles of L1 and L2, and with a higher level of complexity. Hence, L3 is only partially synchronized and continuous in relation to L2, and even less so in relation to L1. Partly for this reason, much of L3 is not accessible to ordinary human monitoring. There are higher risks of dyssynchronous and discontinuous

**Fig. 6.2** Dilemmas of self-regulation

processing, and hence of overly ambiactive self-regulation. In these respects, Fig. 6.2 illustrates the self-regulatory challenge for augmented agents: how to synchronize, integrate, and adapt different human and artificial self-regulatory processes?

# **6.2 Illustrations of Augmented Self-Regulation**

The following section illustrates highly ambiactive and non-ambiactive metamodels of self-regulation by augmented agents. First, recall that in relation to self-regulation, ambiactivity refers to processes which simultaneously dampen and stimulate processing rates and/or schematic complexity. In highly ambiactive systems, processes will be extremely divergent and often lack coordination, whereas in lowly ambiactive systems, processes will be highly convergent and suppress one agent or the other. In each of the following illustrations, the vertical axes show the complexity of self-regulatory schemes, and the horizontal axes show the processing rates of self-regulation, both ranging from low to high. The figures also depict the limits of self-regulatory processing capability, maintaining the labeling convention of earlier chapters: moderately assisted, modern capability is labeled L2, and digitally augmented capability is labeled L3. As in earlier figures, capabilities reach limiting asymptotes, in the case of self-regulation, of high schematic complexity and processing rates.

# **Highly Ambiactive Self-Regulation**

Figure 6.3 illustrates highly ambiactive metamodels of self-regulation. Each exhibits an alternative combination of schematic complexity and processing rate. To begin with, the segments labeled D2, P2, and N2 illustrate modern metamodels of self-regulation with moderately assisted capabilities at level L2, while the segments labeled D3, P3, and N3 illustrate metamodels at the higher level of digitalized capability, L3. Note that the symbols are consistent with earlier chapters, for reasons I explain below.

Segments D2 and D3 both define relatively low processing rates, plus higher schematic complexity. In fact, these are examples of the

**Fig. 6.3** Highly ambiactive self-regulation

scheme-maximizing scenario, depicted in quadrant 2 of Fig. 6.1. The symbol D is employed because these metamodels prioritize the descriptive complexity of self-regulatory schemes, rather than processing rates. Both metamodels are therefore ambiactive, because they increase schematic complexity while suppressing processing rates. Notably, D3 is even more ambiactive than D2, because D3 increases complexity while holding the processing rate constant. That is, the metamodel at L3 retains the prior self-regulatory processing rate of L2, even with enhanced, digitalized capabilities. This illustrates the dysfunction explained earlier, in which human and artificial agents reinforce each other's opposing dispositions.

As an example, consider the self-regulatory schemes and processing rates of some expert professions, such as legal practice. In these contexts, patterns of action are frequently regulated and have mandated procedures and processing rates. Nevertheless, this domain is being digitally augmented, gradually shifting to level L3. Consequently, schematic complexity is increasing, but often with no major change to overall processing rates, owing to the persistence of professional regulation and institutional factors. Hence, there is an ambiactive challenge for legal professionals and firms: to ensure that self-regulation remains synchronized and continuous during the process of digitalization.

Next, N2 and N3 reference high processing rates, but low schematic complexity. The symbol N is employed because these metamodels prioritize normative rates and efficiency, rather than the complexity of self-regulatory schemes. In this case, N3 cycles even more rapidly, owing to higher capabilities at level L3. These are examples of the rate-maximizing scenario, depicted in quadrant 3 of Fig. 6.1. Both N2 and N3 are therefore ambiactive, because they suppress self-regulatory complexity while increasing the processing rate. Moreover, N3 is more ambiactive than N2, because it increases the processing rate significantly without increasing the level of complexity: prior self-regulatory schemes persist. As an example of N3, consider the self-regulatory schemes required of students in digitalized examinations. Processing rates may rapidly increase, allowing for real-time testing, evaluation, and feedback. At the same time, however, schematic complexity may be unchanged, owing to the nature of what is being examined and students' natural capabilities. For example, students may still be asked to reason and write about the same problems. This poses a challenge for digitalized education and training: to ensure that self-regulation remains synchronized and continuous during the digitalization of evaluation, the overall goal being to maximize metamodel fit, best suited to the context.

## **Combined Ambiactive Metamodels**

Considered together, the metamodels in Fig. 6.3 constitute self-regulatory dualisms. First, D2 and N2 illustrate the ambiactive self-regulation which is typical of modernity, assuming moderate technological assistance. D2 represents humanistic self-regulation of personal, social, and cultural domains, which is relatively holistic, heuristic, and sluggish, while N2 represents the self-regulation of mechanized, industrialized domains, which are more focused, automated, and rapid. Effective self-regulation in a modern context therefore often requires agents to combine D2 and N2. They must be capable of integrating the detailed human thought and action depicted by D2, as well as the automated domains depicted by N2, for example, self-regulating both behavioral and normative patterns of choice in social and economic life. In organized collectives, this implies a type of ambidextrous capability, meaning agents can adopt and exercise different agentic metamodels at the same time, specifically, exploratory risk-taking along with exploitative risk aversion (O'Reilly & Tushman, 2013). However, as Fig. 6.3 suggests, ambidexterity is challenging and coordination is difficult to achieve.

Second, D3 and N3 illustrate highly ambiactive self-regulation in digitalized contexts. Together they form an extreme type of dualism. D3 represents highly ambiactive self-regulation of digitalized human domains, in which schemes will be overly complex, owing to digital augmentation, but persistently sluggish, owing to human factors. N3, by contrast, represents highly ambiactive self-regulation of digitalized, technical domains, in which artificial processing rates are increasingly rapid, but schemes are persistently simplified. The overall consequence is dualistic, ambiactive self-regulation, combining extremes of artificial complexity and human simplification, with artificial hyperactivity and human sluggishness. As noted in Chap. 3, many contemporary organizations are struggling with this problem owing to rapid digitalization (Lanzolla et al., 2020).

The third set of segments in Fig. 6.3, labeled P2 and P3, are equivalent. They both refer to relatively low rates of processing, plus low levels of self-regulatory complexity. These are examples of practical self-regulation, depicted in quadrant 4 of Fig. 6.1. Such metamodels will be non-ambiactive overall, because they simultaneously suppress both self-regulatory complexity and processing rates. However, the scope of practical self-regulation does not increase, despite the extra capabilities at L3. The segment P3 does not expand but remains bounded by the commitments and procedures of P2. This means that augmented processes remain anchored in human priors. As an example, consider the self-regulatory schemes of everyday habit and routine. Self-regulation in these domains could remain almost unchanged, even as humans collaborate with artificial agents. Anchoring commitments at level L2 persist and might escalate at level L3. Such persistence prevents the expansion of P3, and everyday habit and routine remain the same, although this response could be appropriate and effective, depending on the context (see Geiger et al., 2021).

## **Non-ambiactive Self-Regulation**

It is equally possible that augmented self-regulation will be non-ambiactive, that is, relatively synchronous with respect to processing rates, and continuous regarding schematic complexity. Figure 6.4 depicts non-ambiactive metamodels of this kind, labeled D4, P4, and N4. They reference combinations of complexity and rate, at a higher level of digitalized capability L4, labeled thus to distinguish it from L3 in the preceding figure. The new Fig. 6.4 also includes the same modern metamodels as the preceding figure, D2, P2, and N2, which do not require repeated description.

The most notable feature of the metamodels represented by D4, P4, and N4 is that they are all equivalent. In stark contrast to Fig. 6.3, these metamodels fully overlap. This means that self-regulation has been completely digitalized. The distinctions between human and artificial

**Fig. 6.4** Non-ambiactive self-regulation

functioning have been erased. Therefore, the human commitments which anchored self-regulation at level L2, and which drove high ambiactivity in Fig. 6.3, are now fully relaxed and variable. There is no significant divergence in self-regulatory processing rates and levels of schematic complexity between the metamodels D4, P4, and N4. They are non-ambiactive. As an example, consider the following scenario of autonomous vehicles. It is possible that human needs and interests will be fully known to the system, and artificial agency will be fully humanized and empathic. Likewise, artificial processes will be made fully clear and meaningful to the human passenger. Hence, overall self-regulation will be highly synchronous and continuous, ensuring safety and efficiency. In fact, this is exactly what automotive engineers aim to achieve, even going further to integrate autonomous transport systems with personal experience in the home, office, and community (Chen & Barnes, 2014; Zhang et al., 2018).

Now assume that the metamodels in Fig. 6.4 are the properties of an augmented agent, as in the autonomous vehicle scenario. This conflation also poses major risks. Most importantly, the complete digital augmentation of self-regulation could eliminate aspects of human autonomy and diversity. This may be appropriate in some very technical environments, where ordinary intuition could be dysfunctional—such as the control of autonomous vehicles—but not in other domains. For example, consider the role of self-regulation in many social, creative, and innovative domains. In these contexts, self-regulation benefits from the diversity of human behavior and commitments. Indeed, techniques for creativity and innovation deliberately upregulate such factors, encouraging team members to self-regulate differently from each other, some being fast, others slow, or analytical versus intuitive. If such diversity is lost, then valuable aspects of human experience will be lost as well. Augmented agents must therefore learn to be empathic and know when and how to incorporate purely human self-regulatory processes, as well as purely artificial processes, to avoid over-synchronization and over-integration, for example, admitting some intuitive human self-regulatory processes into the control of autonomous vehicles (Favaro et al., 2019). However, this will pose a further challenge for collective self-regulation and oversight. Societies will have to determine the level of acceptable risk posed by human involvement in collaborative supervision, monitoring overly convergent and divergent approaches.

## **6.3 Wider Implications**

Throughout modernity, scholars have rightly assumed that human freedom and potentiality are enhanced by strengthening self-regulatory capabilities, often through the introduction of technological innovations. Digitalization promises to enhance these effects. Major gains are certainly possible. By collaborating with artificial agents, humans may enjoy greater self-regulatory freedom and control. Contemporary digital innovations, such as smartphones, wearable devices, and expert systems, are only the beginning in this regard. However, as this chapter explains, digitalization problematizes this optimistic prediction because the opposite scenario is now equally possible. In fact, if digitally augmented self-regulation is poorly supervised, it could reduce human freedom and potentiality. This will happen if artificial processes become too complex and go too fast, thereby overwhelming human inputs. Other losses will occur if persistent human processes are too sluggish and simplified. Both scenarios will tend toward very dyssynchronous and discontinuous processing, resulting in highly ambiactive self-regulation. The risks are clear and already topics for research (e.g., Camerer, 2017; Helbing et al., 2019). Other questions also warrant further study, as the following sections explain.

## **Engagement and Responsibility**

People experience engagement and a sense of value if their means of self-regulation align with goals and outcome orientation (Higgins, 2006). On the one hand, when seeking to ensure safety and prevent losses, people should use vigilant avoidance means, whereas when hoping for positive gains, they should employ eager approach means; and the stronger the alignment of means and goals, the stronger the engagement and experience of value. Task engagement also depends on the experience of effortful striving, on a sense of overcoming external obstacles and one's internal resistance. Humans derive satisfaction and self-efficacy from such accomplishments (Bandura, 1997). Indeed, as earlier chapters explain, a central feature of modernity has been human striving to overcome obstacles and limitations. However, digitalization significantly reduces some traditional obstacles and sources of resistance. People will experience fewer struggles, compared to the past, or at least distinctly different challenges. Ironically, therefore, digitalization could result in less engagement and satisfaction from self-regulated goal pursuit.

Moreover, these changes are occurring rapidly, within relatively short cycles, and certainly within human generations. People experience the pace of change more intensely, with each cycle of digital innovation being more rapid and impactful than the last. Indeed, some aspects of augmented experience are already dyssynchronous and discontinuous. In such digitalized domains, the locus of self-regulatory control is shifting from human to artificial agents. Artificial agents are taking more responsibility for self-regulatory persistence and outcomes. Similarly, human agents will exert less control over the entrogenous mediators of augmented agency: intelligent sensory perception, performative action generation, and contextual learning. These mediators will be central to augmented agency, yet less accessible to human consciousness and self-regulation.

As these effects become more pronounced, it could be more difficult for people to sense self-efficacy and meaning over time. The risk is that human beings will feel less engaged, less autonomous, and ultimately less fulfilled, even as efficiency and efficacy increase. Moreover, as Bandura (2016) argues, when agentic responsibility is diffused and distant, individuals and communities become disengaged from each other, and they lose a sense of moral obligation and responsibility. The locus of ethical agency shifts away from the self, spread out across the network or buried in an algorithm (Nath & Sahu, 2017). It becomes too easy, even normal, to avoid responsibility, passing it off to artificial intelligence or the system. Illustrating this effect, concern is growing that highly automated warfare will dull human sensitivity to its ethical and human implications (Hasselberger, 2019). Such effects will have major consequences for civility and good governance, not to mention international relations. It will be important, therefore, to maintain a strong sense of human striving and commitment in augmented self-regulation, and thus a sense of personal engagement and moral responsibility. As noted earlier, societies will have to determine the level of acceptable risk posed by human involvement and exclusion in collaborative self-regulation.

## **Procedural Action**

Additional implications follow for procedural action. In Chap. 3, I proposed a way to resolve nagging questions about the aggregation of procedural action, as a mediator of collective routine and modality. The solution requires that we treat human agents as complex, open, and adaptive systems, which respond to variable contexts. From this perspective, humans experience the downregulation of individual differences in the recurrent, predictable pursuit of shared goals. At the same time, they experience the upregulation of shared norms and control procedures. In this way, it is possible to explain the origin and functioning of individual habit and collective routine, without aggregating personalities and individual differences. Importantly, this implies the downregulation and upregulation of self-regulatory plans and competencies as well. When routines form, personal self-regulatory orientations are effectively latent, and people adopt the shared goals and orientations of the collective, at least within routine contexts (Wood & Rünger, 2016). When routine procedures need adjustment, therefore, self-regulatory processes will require upregulation or downregulation, and perhaps deletion or creation. Via such means, augmented agents will supervise the rate and complexity of self-regulatory processing, to maximize metamodel fit.

However, if supervision falters, and self-regulation is overly ambiactive or non-ambiactive, the management of collective routine will quickly go awry. Self-regulation could become highly dyssynchronous and discontinuous, meaning the augmented agent is fast and complex in some respects, but slow and simple in other ways. These distortions will complicate the development and adaptation of procedural routine. If human action remains sluggish and simplified, while artificial self-regulation becomes fast and complex, the resulting routine will be dyssynchronous, discontinuous, and potentially dysfunctional, whereas fully non-ambiactive self-regulation will exacerbate human docility and dependence because human self-regulation will tend to downregulate. In both scenarios, collective routine encodes ambiactive distortion and dysfunction. Moreover, by doing so, it will also exacerbate ambimodal distortion. That is because the collective agent will be highly compressed in some respects, but layered and hierarchical in other ways. This follows because, as Chap. 3 explains, collective modality relies on routine, and the ambiactive distortion of routine flows through to cause collective ambimodality. Examples already exist in organizations which attempt digital transformation. They introduce highly dyssynchronous and discontinuous procedures powered by artificial intelligence, but in doing so, trigger stress and conflict with preexisting relationships and hierarchies.

## **The Regulating Self**

Other implications follow at the individual level. To begin with, augmented self-regulation could lead to a false sense of autonomous self-efficacy. People might attribute too much to themselves by mistaking artificial capabilities for their own. They would experience a version of what Daniel Wegner (2002) called "the illusion of conscious will," in which consciousness follows, rather than precedes, the neurological triggering of thought and action. But now the illusion of autonomy and control will follow, rather than precede, digitalized triggering which humans neither perceive nor understand. Indeed, as noted previously, the rapid, entrogenous mediation of augmented self-regulation will be largely inaccessible to ordinary consciousness. People could easily experience a digitalized illusion of conscious will. In fact, some powerful actors already understand this trend and see augmented self-regulation as a new means of social manipulation and control, by engineering an illusory sense of self-regulation.

Digitally augmented self-regulation therefore signals a potential shift in agentic locus. In fact, just as autonomous self-regulation was problematic relative to the gods of premodernity, and then problematic relative to collectivity during modernity, so autonomous self-regulation will be problematic relative to artificial agency in the period of digitalization. As artificial agents grow in power and become more deeply integrated into all areas of human experience, the primary locus of self-regulation may shift toward artificial agency and away from human sources. Whether by design or default, humans could become increasingly dependent on artificial forms of supervision and regulation. This prompts additional questions regarding the future role of human intuition, instinct, and commitment in self-regulation. In fact, such questions are not new. They often arise when considering the limits of self-regulatory capability in a social world. In contexts of digitalization, the same topics become pressing for a different reason. Uniquely human sources of self-regulation, such as intuition, instinct, and commitment, will require deliberate preservation, to prevent artificial agents from becoming too intrusive and dominant.

That said, most people will enjoy the self-regulatory benefits of digital augmentation, but many will not realize the price they pay. A self-reinforcing process of diminishing autonomy could occur. The result would be digital docility and dependence (Pfeifer & Verschure, 2018). For Bandura et al. (2003), this raises the question of which type of agent—human, artificial, or both—will be truly efficacious and self-regulating in digitalized domains. Likewise, which agent will set goals and regulate attention, even if persons experience an internal locus of control? And for Higgins and his collaborators (1999), which type of agent will guide self-regulatory orientation, whether toward achieving positive gains, or avoiding negative losses? And if digitalization reduces self-regulatory obstacles and resistance, will augmented agency weaken the human sense of task engagement and value experience? The supervision of digitally augmented self-regulation poses urgent questions for theory and practice.

## **References**


Wegner, D. M. (2002). *The illusion of conscious will*. MIT Press.



# **7**

# **Evaluation of Performance**

Agents consistently evaluate their performances to measure progress toward goals, to assess the efficacy of action, and to learn. Some evaluation criteria focus on the quality of agentic processing itself, for example, regarding its speed and procedural fidelity. Other criteria will be about outcomes or end states, for instance, whether specific goals are met and preferences realized. Almost all theories of agency assume evaluative mechanisms of this kind, including how agents learn from performance feedback and feedforward. The agentic metamodels presented in Chap. 2 include these features as well. They illustrate how agents generate behavior performances (BP) and evaluate such performances (EP). The intra-cyclical evaluation of ongoing performance triggers feedforward updates (FF), while inter-cyclical evaluation of outcomes leads to feedback updating (FB), assuming some evaluation criteria and sensitivity to variance.
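The cycle just described can be expressed as a minimal computational sketch. This is an illustrative reading only, not the author's formal model: the function names, gain values, and sensitivity threshold below are all invented for the example.

```python
# A toy sketch of the evaluation cycle: behavior performance (BP),
# evaluation of performance (EP), intra-cyclical feedforward (FF),
# and inter-cyclical feedback (FB).

def run_cycle(goal, action, steps=5, ff_gain=0.5):
    """BP and EP within one cycle: generate a performance, evaluate it
    against the goal, and apply feedforward (FF) corrections in real time."""
    performance = action
    for _ in range(steps):
        error = goal - performance       # EP: intra-cyclical evaluation
        performance += ff_gain * error   # FF: feedforward adjustment
    return performance

def feedback_update(goal, outcome, sensitivity=0.01, rate=0.1):
    """FB between cycles: adjust the goal only when outcome variance
    exceeds the agent's sensitivity to variance."""
    variance = abs(goal - outcome)
    if variance > sensitivity:
        return goal + rate * (outcome - goal)
    return goal

goal, action = 1.0, 0.0
outcome = run_cycle(goal, action)          # BP, refined by FF within the cycle
new_goal = feedback_update(goal, outcome)  # FB updates the goal between cycles
```

Note that feedforward operates inside the cycle, before the outcome exists, whereas feedback operates only on completed outcomes, and only when the discrepancy exceeds the agent's sensitivity threshold.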

Evaluation of performance is therefore central to theories of agency. Consider social cognitive theories. From this perspective, an agent's evaluation of performance—as a type of self-reaction—is central to learning, future goal setting, task engagement, value experience, and developing self-efficacy in specific task domains (Bandura, 1997). Similarly, the evaluation of performance plays a major role in the detection of self-discrepancy, being a person's sense of whether or not they achieve their preferred or ideal self-states, and what adjustments they make in response (Higgins, 1987). Other psychologists theorize that evaluation of performance is central to planning and goal setting, to a person's self-evaluation, and even to the development of coherent personality and a sense of identity (Ajzen, 2002; Cervone, 2005).

Comparable processes occur at the level of collective agency. Groups, organizations, and institutions all evaluate their processes and outcomes, to assess effectiveness, improve procedures, and formulate plans, as well as to learn and adapt. In addition, evaluative processes support modal cohesion, shared goal setting, interpersonal relationships, and the management of organizations, while a negative evaluation of performance exposes problems and conflicts, triggering adaptation and other corrective actions (Cyert & March, 1992). Equally, within institutions, the evaluation of performance and subsequent feedforward and feedback play critical roles in reinforcing or updating collective procedures and systems (Scott, 2014).

## **Problematics of Evaluation**

An important problematic is shared among these fields of study. Within each discipline, scholars debate the potential variance of evaluation criteria. For example, they debate whether criteria are fixed and stable, or vary from situation to situation, and also which criteria are detailed and specific, versus broad and general. Earlier chapters of this book review similar debates about criteria in problem-solving and cognitive empathizing. In all chapters, my argument defends a "persons in context" perspective, which suggests that evaluation criteria will be contingent and variable to some degree, depending on the context and type of functioning. From this perspective, criteria are activated, chosen, or formulated to fit the situation. They are rarely, if ever, fixed and universal, although this does not imply loose relativism. But it does imply that different criteria are activated or not, then upregulated or downregulated, depending on the context, its problems, and the agent's position and priorities.

Comparable debates occur in other areas of psychology, for example, regarding the evaluation of self-efficacy and self-evaluation. For instance, Bandura (2015) insists that self-efficacy is specific to task domains, and hence some evaluation criteria will be domain specific too. That said, broad criteria apply if goals and actions are themselves broad. For example, an agent could evaluate her or his self-efficacy in life planning, which might cut across numerous other activity domains (Conway et al., 2004). The main significance of these distinctions is that inflexible, limited performance criteria may distort evaluation and impede learning, whereas variable, multiple criteria allow for more flexible and appropriate evaluations, sensitive to context.

Similar processes occur at group and collective levels. For example, studies show that some features of collectives can be relatively stable over time, owing to imprinting and isomorphism within institutional fields, and deeply embedded cultural norms (Hannan et al., 2006; Marquis, 2003). Collectives then reference such criteria in the evaluation of performance. At the same time, studies also show that collectives reference adaptive criteria which reflect changing contexts, goals, and commitments, plus different levels of sensitivity to variance (Hu et al., 2011). Moreover, such variability mitigates the negative effects of low evaluations of performance. Instead of lingering in a state of perceived failure, agents recalibrate their goals and aspirations, thereby enhancing the potential for better evaluations in the future. In fact, studies show that collectives which combine contextual embeddedness with adaptive aspirations—that is, both long- and short-term perspectives—tend to be more successful in sustained goal pursuit (Dosi & Marengo, 2007).

## **Impact of Digitalization**

Not surprisingly, the evaluation of performance is deeply impacted by digitalization. Capabilities are expanding, allowing for more ambitious goals and higher expectations of performance. Digitalization also provides new, more precise means to evaluate performance, including through rapid intra-cyclical, feedforward mechanisms. Performances can be evaluated continuously, in real time, which enables adaptation and enhancement during action cycles, prior to outcome generation. Evaluation is thus partly mediated by entrogenous, performative action generation. To illustrate, every time a person searches the internet, background systems adapt the process in real time, helping to guide search in one direction or another, curating preferences and goals (Carmon et al., 2019). And if preferences and goals shift, so will criteria of evaluation. Comparable processes are critical for digitalized expert systems, in which performances are constantly evaluated and refined. However, as in other domains, there can be unintended consequences. Like self-regulation, the digitally augmented evaluation of performance is vulnerable to extreme divergence or convergence. Digitalization therefore brings significant opportunities and risks to the evaluation of performance.

## **7.1 Theoretical Perspectives**

Evaluation of performance has always been central to the study of human thought and action. Apart from anything else, this reflects the fact that purposive goal pursuit is central to civilized humanity. To achieve goals, it is necessary to monitor and assess performance, issuing rewards and sanctions, while updating goals and strategies. This happens at individual, group, and collective levels. For example, business organizations update their strategies and issue dividends, contingent on the evaluation of performance. Similarly, public institutions embody the collective evaluation of performance in political and legal systems. At the same time, evaluative criteria vary between cultures and periods of technological evolution. Within premodern contexts, for example, evaluation of performance focused on conformity and docility with respect to deeply encoded norms. Criteria were fixed and prescriptive in most contexts. By contrast, modernity elevates autonomous self-regulation at every level of performance. Modern criteria of evaluation are therefore more expansive and adaptive.

## **Evaluation of Individual Performance**

At the individual level, evaluation of performance maps onto the cognitive-affective processing units (PU) incorporated into the metamodels in Chap. 2, which are identified by Mischel and Shoda (1998). First, some criteria reference encodings of self and the world, meaning how phenomena are classified, stored, and processed in memory. These criteria could help to assess the realism and relevance of a performance, for example, whether the problem addressed is adequately representative of observable reality. Second, other evaluation criteria will reference distinct beliefs and expectations, and help to assess the reasonableness and utility of a performance, which are central concerns for microeconomics. Third, criteria may reference agents' goals, values, and commitments, for example, assessing whether an outcome meets standards of efficacy, fairness, and honesty, or conforms to precepts of faith. Fourth, some evaluation criteria will reference affective states, such as the degree of perceived empathy exhibited by a performance, plus the affective state of the assessor, for example, assessing whether a performance makes a person feel happy or sad, calm or anxious. And fifth, some evaluative criteria reference competencies and self-regulatory plans, including core orientation toward achieving gains versus avoiding losses. Evaluation then asks whether the performance aligns with the criteria incorporated into the self-regulatory scheme and the preferred cycle rate. For example, does the performance use vigilance means to prevent pain and losses, or eagerness means to attain pleasure and gains (Higgins, 2005)?

By describing evaluation criteria in these terms, an important feature of human psychology comes to the fore. To begin with, recall the argument presented in Chap. 3, regarding variable upregulation and downregulation of psychosocial processes, in response to internal and external contingencies. That is, different cognitive-affective processes may be more or less salient, upregulated, or downregulated (Mischel & Shoda, 2010). Notably, as the preceding paragraph suggests, the same process can explain the adaptation of evaluation criteria. As different psychosocial processes upregulate or downregulate, so do the associated evaluation criteria. For example, sometimes an agent will reference performance criteria based primarily on goals and values but may not invoke affect. In other situations, the exact opposite could be the case, while in relatively mundane situations, many psychosocial factors and criteria will be downregulated, because the agent is relatively docile and satisfied by procedural controls. Criteria of evaluation will be habitual and routine. Alternatively, an agent may feel weakly motivated and engaged, and superficially committed to the expected outcome. On the other hand, sometimes most types of criteria are upregulated, because the situation engages the agent on many psychosocial dimensions (Bandura, 2016). Now, the agent is highly motivated and engaged, very eager and excited, and committed to the preferred means and outcome.

The main consequence of these distinctions is that evaluation criteria are contextual and variable as well. They can be more or less salient and active, upregulated or downregulated, depending on the task domain, the specific situation, and the agent's psychosocial condition. This further entails the variability of sensitivity to outcome variance. As the agent moves between contexts, different internal processes are activated, evaluation criteria then become more or less salient, and the agent's sensitivity to outcome variance changes in tandem (Kruglanski et al., 2015). For example, in purely routine performance, sensitivity to variance is relatively low, while in very deliberate goal pursuit, sensitivity to variance is high. Clearly, these contextual dynamics play an important role in the evaluation of performance.
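This contextual variability can be sketched in toy form. In the sketch below, the contexts, criterion weights, and sensitivity thresholds are all invented for illustration; they stand in for the upregulation and downregulation of criteria, and for contextual sensitivity to variance, described above.

```python
# Illustrative sketch of context-dependent evaluation criteria.
# Weights model upregulation/downregulation of criteria; thresholds
# model contextual sensitivity to outcome variance.

CRITERIA_WEIGHTS = {
    # routine contexts: most criteria downregulated
    "routine":    {"goals": 0.2, "affect": 0.1, "beliefs": 0.2},
    # deliberate goal pursuit: criteria upregulated
    "deliberate": {"goals": 0.9, "affect": 0.6, "beliefs": 0.8},
}

SENSITIVITY = {"routine": 0.5, "deliberate": 0.05}

def evaluate(context, scores):
    """Weight raw criterion scores by contextual salience, then report a
    discrepancy only if it exceeds the context's sensitivity threshold."""
    weights = CRITERIA_WEIGHTS[context]
    total = sum(weights[c] * scores[c] for c in weights)
    ideal = sum(weights.values())   # a perfect score on every active criterion
    discrepancy = ideal - total
    return discrepancy if discrepancy > SENSITIVITY[context] else 0.0

scores = {"goals": 0.8, "affect": 0.9, "beliefs": 0.7}
# The same raw performance triggers a correction in deliberate goal
# pursuit, but passes unremarked in a routine context.
```

The point of the sketch is that identical performance scores yield different evaluations in different contexts, because both the salience of criteria and the sensitivity to variance shift together.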

## **Evaluation of Collective Performance**

The evaluation of performance is equally important in modern theories of collective agency, particularly regarding institutions and organizations. Moreover, if one assumes a contextual perspective, then collective evaluation criteria will also be activated or deactivated, upregulated and downregulated, in a dynamic fashion, depending on the situation and task domain. Collective sensitivity to outcome variance will be adaptive as well (Fiedler et al., 2011). Studies demonstrate the importance of these effects for adaptive fitness in institutions (Scott & Davis, 2007) and business organizations (Teece, 2014). There is a positive relationship between evaluative flexibility and performance.

Furthermore, the types of evaluation criteria previously identified for individuals also apply to collectives. First, collective criteria reference encoded categories and procedures, especially when representing problems and categorizing features of the world. Second, collective criteria reference shared beliefs and expectations, for example, when assessing causal relationships and consequences. Third, collective criteria reflect goals and values, exemplified by the adaptive aspiration levels of behavioral theories of organization, and reasoned expectations in classical theory (Cyert & March, 1992). Fourth, collective criteria reference shared affect when evaluating emotional climate and psychological safety (Edmondson, 2018). And fifth, collective criteria often reference shared self-regulatory plans, especially in the evaluation of collective self-efficacy and competencies (Bandura, 2006).

Nevertheless, it is important to acknowledge that some social scientists hold different views. Some argue for more stable, universal criteria in the evaluation of collective performance. In effect, they argue or assume that some evaluative criteria are universal and invariant. Karl Marx (1867), for example, defended universal criteria based on class and capital. More recent economists also propose universal criteria, albeit citing different mechanisms, such as rational or adaptive expectations (e.g., Friedman, 1953; Muth, 1961). Likewise in sociology, for example, Lévi-Strauss (1961) argued that collective performance can be universally evaluated in terms of social structure. Rawls (1996, 2001) provides a further example in legal and political theory. He is inspired by Kantian idealism to argue for universal principles of justice as fairness, which he claims all rational persons and communities should adopt.

All the theories mentioned above are strong and influential, providing at least some universal criteria for the evaluation of performance. However, as noted earlier, many regard such assertions as problematic and contingent at best (Giddens, 1984; Sen, 1999). That said, most would accept the practical utility of treating some criteria as if they were universal, in appropriate situations. Normative models of this kind help to clarify evaluation within defined contexts and provide unambiguous guidance. Ideals are practical and useful in this regard. Debate will no doubt continue about their ontological status and degree of variability.

## **7.2 Impact of Digitalization**

Evaluation of performance is deeply impacted by digitalization. Most notably, agents' capabilities expand greatly at every level, allowing for more ambitious goals and expectations, higher levels of sensitivity to variance, and more exacting criteria of performance evaluation, with the potential for deeper, faster learning. Augmented agents will also be capable of the dynamic supervision of evaluation, by upregulating or downregulating different criteria. However, as in other areas of agentic functioning, digitalization could also have negative effects, owing to the potential divergence or convergence of human and artificial processes. Similar to self-regulation, major issues arise regarding the complexity and rate of evaluation, and for the same reasons. Other factors matter as well, but I will focus on rates and schemes once again, because they are central challenges for augmented agents.

## **Complexity of Evaluation**

As noted previously, artificial agents are hypersensitive in evaluation, meaning they can detect very minor variations. Criteria are often precise and exacting. This is critical in complex, technical systems. However, given this capability, artificial agents easily over-discriminate in the evaluation of performance, going beyond what is necessary and appropriate. For example, a simplified heuristic may be perfectly adequate, but the artificial agent applies very discriminating criteria to the evaluation of the performance. This results in wasted time and resources, plus the overfitting of evaluative models, leading to less diverse and less adaptive future performances. For this reason, computer scientists research how to avoid inappropriate complexity and overfitting in the evaluation of performance (Zhang et al., 2018). Once again, adaptive supervision is key.
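The contrast between over-discriminating and heuristic evaluation can be illustrated with a toy analogue of overfitting. The cases, labels, and rule below are invented for the example; the sketch simply shows how an evaluator that memorizes every past case exactly fails to generalize, while a simple rule still works.

```python
# A toy analogue of overfitted evaluation versus heuristic evaluation.

past = [("on_time", "good"), ("late", "poor"), ("on_time", "good")]

def overfit_evaluator(performance):
    """Memorizes every past case exactly; cannot judge anything unseen."""
    table = dict(past)
    return table.get(performance, "unknown")   # no graceful generalization

def heuristic_evaluator(performance):
    """A simple rule that generalizes: anything not late counts as good."""
    return "poor" if performance == "late" else "good"

# A novel performance the agent has never observed before:
novel = "slightly_early"
overfit_evaluator(novel)     # over-discrimination breaks down on novelty
heuristic_evaluator(novel)   # the simple rule still returns a judgment
```

The over-discriminating evaluator is perfectly accurate on its past cases but brittle on anything new, which mirrors the loss of diversity and adaptiveness described above.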

By contrast, human agents are often relatively insensitive and employ heuristic means in the evaluation of performance. This can be for good reasons too. Habitual and routine performances, particularly, may be appropriately evaluated using heuristic means. Likewise, simple rules frequently work best in highly turbulent, dynamic environments, where both information and time are lacking (Sull & Eisenhardt, 2015). In fact, hypersensitivity to variance impedes performance in such contexts, though the opposite is often true in complex, technical task domains, where accurate evaluation of performance is critical. Human agents are therefore trained to be more sensitive to variance in specific domains. Moreover, when this training is successful, procedures are deeply encoded as evaluative habit or routine. Yet these procedures can persist, even when digitally augmented capabilities transcend prior limits. People continue striving for greater sensitivity in the evaluation of performance, even as digitalization delivers exactly that.

In summary, artificial agents must work to avoid overly discriminate, complex, and hypersensitive evaluation, while humans must try to avoid overly indiscriminate, simplified, and insensitive evaluation. However, if supervision fails in these respects, then persistent artificial hypersensitivity may combine with persistent human insensitivity. Evaluation of performance would then be complex and highly discriminating in artificial respects, but simple and far less discriminating from a human perspective. The overall result will be gaps and discontinuities: augmented evaluation of performance becomes discontinuous, also ambiactive, and potentially dysfunctional.
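The sensitivity mismatch just described can be sketched as a toy model. To be clear, the function names, thresholds, and labels below are illustrative assumptions of mine, not part of the text: a hypersensitive artificial evaluator and a heuristic human evaluator judge the same outcome, and their verdicts diverge across a band of small deviations.

```python
def artificial_eval(outcome, target, tolerance=1e-3):
    """Hypersensitive: any deviation beyond a tiny tolerance fails."""
    return abs(outcome - target) <= tolerance

def human_eval(outcome, target, tolerance=0.25):
    """Heuristic: only gross deviations register as failure."""
    return abs(outcome - target) <= tolerance

def augmented_eval(outcome, target):
    """Combine both verdicts; disagreement marks an ambiactive gap."""
    a = artificial_eval(outcome, target)
    h = human_eval(outcome, target)
    return "convergent" if a == h else "ambiactive"

# A small deviation passes the human heuristic but fails the
# artificial criterion, producing a discontinuous, ambiactive verdict.
print(augmented_eval(1.1, 1.0))     # -> ambiactive
print(augmented_eval(1.0001, 1.0))  # -> convergent
```

The point of the sketch is only that disagreement arises structurally, from mismatched tolerances, rather than from any error by either evaluator.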

## **Rates of Evaluation**

Additional challenges derive from divergent rates of evaluation. As in self-regulation, artificial agents can evaluate very quickly, hyperactively, especially in real time. Once again, this is advantageous in complex, technical domains. However, in other situations, it can lead to excessive evaluation, cycling too fast and too frequently. For example, the agent might evaluate and adjust environmental controls at great speed, outpacing human physiology and need. Such evaluations would overcorrect and be an inefficient use of resources. By comparison, human agents are often relatively sluggish in evaluation. They cycle at behavioral and cultural rates, and often appropriately so. Many human performances may neither benefit from nor deserve rapid evaluation. Cycling too fast could truncate exploration and generate outcomes too quickly, leading to premature judgment and less creativity (Jarvenpaa & Valikangas, 2020; Shin & Grant, 2020). In fact, this prompts deliberate efforts to slow or stagger the evaluation of performance in some contexts. The goal becomes delayed or provisional judgment, allowing for iterative exploration and evaluation. Design and innovation processes exhibit this approach (Smith & Tushman, 2005).

The opposite is often true, however, in urgent and competitive situations, where rapid evaluation can be a source of advantage and adaptive fitness. In these domains, human agents are trained to accelerate the evaluation of performance, to become more active. Moreover, when such training is successful, rapid evaluative procedures are encoded as habit or routine. Once again, however, these procedures tend to persist, despite the fact that digitally augmented capabilities transcend prior limits. People continue striving to speed up the evaluation of performance, even as artificial processing accelerates. When this happens in augmented agency, humans may remain relatively sluggish, while artificial processes are increasingly hyperactive. The overall process will therefore be dyssynchronous, again ambiactive, and potentially dysfunctional.
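A minimal sketch of these divergent rates, using purely illustrative numbers of my own: an artificial evaluator cycling every 10 milliseconds against a human evaluator cycling every 2 seconds, with a supervisory throttle that downregulates the artificial rate toward the human one.

```python
def count_cycles(period_s, horizon_s):
    """Completed evaluation cycles within a fixed time horizon."""
    return int(horizon_s / period_s)

# Illustrative rates over a one-minute horizon.
artificial = count_cycles(period_s=0.01, horizon_s=60.0)  # 6000 cycles
human = count_cycles(period_s=2.0, horizon_s=60.0)        # 30 cycles

# Unsupervised, the artificial agent evaluates 200 times per human cycle.
ratio = artificial // human

# Supervisory throttling keeps only every Nth artificial evaluation,
# decelerating the artificial process to match the human rate.
throttled = artificial // ratio

print(artificial, human, ratio, throttled)  # -> 6000 30 200 30
```

Real supervision would of course be adaptive rather than a fixed divisor; the sketch only makes the scale of the dyssynchrony concrete.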

## **Summary of Augmented Evaluation**

Based on the foregoing discussion, we can now summarize the main features of the evaluation of performance by augmented agents, at least with respect to rates of evaluation and the complexity of evaluative schemes and criteria. First, regarding human processes of evaluation, criteria reference cognitive-affective processing units: core encodings, beliefs and expectations, goal and value commitments, affective and empathic states, plus competencies and self-regulatory plans. These criteria can be activated or deactivated, upregulated or downregulated, and precise or approximate, depending on the context and type of performance. At the same time, humans possess limited evaluative capabilities, especially in complex, technical task domains. Trade-offs are therefore common, especially between the rate and complexity of evaluation, because human agents cannot maximize both at the same time. Consequently, human agents either accelerate simpler evaluative processes or decelerate more complex processes. Deliberate effort is required to supervise these effects, especially in task domains which require more rapid or discriminate evaluation. This will typically entail the activation and upregulation of some human processes and criteria, and the acceleration of specific cycle rates, to achieve better fit with the task at hand.

Second, regarding artificial evaluative processes: as noted earlier, these systems are capable of extremely rapid, highly discriminating evaluation, especially using intra-cyclical means, which is fully appropriate in complex, technological activity domains. Artificial evaluation therefore tends toward hyperactivity and hypersensitivity. Trade-offs are less common, because artificial agents can achieve high rates and levels of discrimination, potentially maximizing both at the same time. In consequence, however, deliberate supervision is required to avoid unnecessary overevaluation of performance, especially when collaborating with human agents. This will typically involve the deactivation, downregulation, or deceleration of some artificial evaluative processes, so they are better aligned with human and ecological processes.

Risks therefore arise for the evaluation of performance by augmented agents. If processes are poorly supervised, artificial hypersensitivity and hyperactivity could combine with relatively insensitive, sluggish human processes of evaluation. Evaluation of performance could become highly dyssynchronous, discontinuous, and ambiactive, meaning it simultaneously stimulates (activates or upregulates) and suppresses (deactivates or downregulates) evaluative sensitivities, criteria, and cycle rates. Three main outcomes are likely. First, the combined system of evaluation could be ambiactive and conflicted, with human and artificial processes both upregulated and diverging from each other. Second, one agent might dominate the other, and the combined system will be extremely convergent. In particular, artificial evaluation could outrun and overwhelm human processing, or strong human inputs could distort and interrupt artificial processing. Third, in very complex performances, all three types of distortion may occur, and evaluation will be extremely divergent in some respects but convergent in others. In each scenario, the overall result will be dysfunctional evaluation of performance, undermining adaptive learning, reducing self-efficacy, and weakening other agentic functions which rely on evaluation.

# **7.3 Metamodels of Evaluation**

The foregoing analysis suggests at least four different metamodels of evaluation, in terms of the upregulation or downregulation of human and artificial processes. These are depicted in Fig. 7.1. First, human and artificial evaluative processes may be active and upregulated (quadrant 1). Both agents are therefore stimulated, cycling and discriminating as best they can. Evaluation will be deliberate, effortful, and often precise. However, evaluation is therefore vulnerable to divergence and conflict, because both types of agent are upregulated but have markedly different capabilities. The corresponding pattern of augmented supervision is shown by segment 9 in Fig. 2.6. The resulting evaluations are more likely to be dyssynchronous and discontinuous, and hence highly ambiactive. Second, human evaluative processing may be active and upregulated, while aspects of artificial processing are deactivated or downregulated (quadrant 2). In this scenario, human sluggishness and insensitivity are more likely to intrude, like segment 7 in Fig. 2.6. Evaluations of performance will tend to be moderately dyssynchronous and discontinuous,


#### **Fig. 7.1** Augmented evaluation of performance

and hence moderately ambiactive, although there is a risk of overconvergence when humans dominate. Third, human evaluative processing may be deactivated or downregulated, while artificial evaluative processing is active and upregulated (quadrant 3). Artificial hyperactivity and hypersensitivity are now more dominant, similarly to segment 3 in Fig. 2.6. Resulting evaluations are likely to be moderately dyssynchronous and discontinuous, owing to the greater activation of artificial processes and the relative passivity of human processes. This produces moderately ambiactive evaluations, although there is a risk of overconvergence when artificial agents dominate. Fourth, both human and artificial evaluative processes may be deactivated or downregulated (quadrant 4), meaning both are suppressed. Such evaluations will be purely procedural, habitual, or routine. Evaluative processes are less discriminating, cycle without effort, and focus on maintaining control, like segment 1 in Fig. 2.6. For this reason, evaluations are more likely to be continuous and synchronous, lowly ambiactive, and functional.
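The four quadrants can be restated as a simple lookup. This is a hedged sketch: the function and its encoding are mine, while the quadrant numbering and ambiactivity levels follow the description of Fig. 7.1 above.

```python
def metamodel(human_up: bool, artificial_up: bool):
    """Map regulation states to the quadrants of Fig. 7.1."""
    if human_up and artificial_up:
        return (1, "highly ambiactive")      # divergence and conflict
    if human_up:
        return (2, "moderately ambiactive")  # human processes intrude
    if artificial_up:
        return (3, "moderately ambiactive")  # artificial dominance
    return (4, "lowly ambiactive")           # procedural and routine

print(metamodel(True, True))    # -> (1, 'highly ambiactive')
print(metamodel(False, False))  # -> (4, 'lowly ambiactive')
```

The symmetry of quadrants 2 and 3 in this sketch mirrors the text: both are moderately ambiactive, but each carries its own risk of overconvergence depending on which agent dominates.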

In the following sections, I illustrate the four metamodels summarized in Fig. 7.1 in more detail. As in the previous chapter, the metamodels focus on the internal dynamics of augmented agents, showing the interaction of human and artificial collaborators. Hence, in this section, I refer to human and artificial evaluative processes as distinct inputs. Either type of evaluative process (human or artificial) can be upregulated and active, or downregulated and latent. It is also important to note that any metamodel can be appropriate and effective, depending on the context. The challenge for supervision is to maximize metamodel fit.

## **Overall Downregulated Processes**

In the first of these metamodels, both types of evaluative processes are deactivated or downregulated, as summarized in quadrant 4 of Fig. 7.1. Distinctions are less exacting, and many potential criteria are latent. Sensitivity to variance will be equally subdued. The risk of evaluative divergence is therefore low, because evaluation tends to be procedural, habitual, and routine. Scenario 7.2A in Fig. 7.2 illustrates such a system. Only a subset of processes is shown, however, to highlight the patterns of

**Fig. 7.2** Overall upregulated or downregulated processes

downregulation and upregulation. At this point, the reader should recall the generative metamodel of augmented agency in Fig. 2.3. It consists of three successive phases: situational inputs (SI); cognitive-affective processing units (PU), including referential commitments (RC); and behavioral performative outputs (BP). The same phases are depicted in 7.2A. Within each phase there are processes indicated by small circles. Some are shaded, meaning they are digitalized. Others are not shaded, indicating they are fully human and not digitalized. Notably, in 7.2A, many of the small circles (both digitalized and human) have dashed borders, which means they are downregulated and latent. Only a few have unbroken borders, indicating they are upregulated and active. And this is the case for each major phase of the process, that is, for sensory perception, cognitive-affective processing, and behavioral performance. Therefore, evaluation of performance is based on a reduced set of criteria, most often reflecting procedural consistency and control. Moreover, for this reason, the augmented agent will only be sensitive to variance when minimal criteria are not met (see Wood & Rünger, 2016). Typically, the resulting evaluation of performance will be continuous and synchronous, lowly ambiactive, coherent, and functional.

In summary, when both components of an augmented evaluative process, human and artificial, are largely deactivated or downregulated, their evaluative processes are more likely to be convergent and routine. Benefits follow for characteristics which depend on the evaluation of performance. These include self-efficacy, coordinated goal setting, self-discrepancy, the stability of identity, and general psychosocial coherence. Potential benefits are modest in this scenario, however, owing to the downregulation of many psychosocial processes. In any case, to guarantee these effects, augmented agents will require methods of supervision which appropriately deactivate and downregulate, or activate and upregulate, evaluative processes in specific task domains.

## **Overall Upregulated Processes**

In other cases, evaluative processing is activated and upregulated for both human and artificial agents, as summarized in quadrant 1 of Fig. 7.1 and shown in greater detail by 7.2B. All the small circles now have solid, unbroken borders, which indicates they are active. This includes the human processes, depicted by unshaded circles, and the digitalized processes, which are shaded circles. Therefore, sensory perception of the stimulus environment, cognitive-affective processing, and behavioral performances are all highly discriminated. Many of the artificial processes will be rapid and intra-cyclical as well, although typically hidden from human consciousness. Hence, the evaluation of performance is based on a complex set of performance criteria, often reflecting deliberate, purposive goal pursuit, requiring calculative, effortful means. For the same reason, evaluation will be sensitive to outcome variance, in both human and artificial terms. Augmented agents will therefore struggle to coordinate evaluation, which could be very discontinuous and dyssynchronous, and therefore highly ambiactive.

Significant risks therefore follow. Highly ambiactive evaluations will tend to weaken self-efficacy, undermine future goal setting, and often lead to ambiguous learning and, in extreme cases, to psychosocial incoherence. For example, imagine a clinician who decides to override the advice of an expert system. This might reinforce the clinician's personal self-efficacy, but it would likely erode her trust in the expert system. At the same time, the expert system would report an error or failure because of the override and might flag the clinician as a risk. When combined, their divergent evaluations would likely undermine their future collaboration. Mutual cognitive empathy will suffer as well. To restore trust and confidence, both human and artificial agents would require significant changes to their individual and shared supervisory functions. Not surprisingly, computer scientists are developing such applications already (Miller & Brown, 2018).

## **Upregulated Artifcial Processes**

Other situations will combine the downregulation of human evaluative processes with the upregulation of artificial ones. These scenarios are summarized in quadrant 3 of Fig. 7.1, and further detail is shown by 7.3A in Fig. 7.3. While the human process is minimally activated, the artificial process is highly active. Once again, downregulation is indicated by the dashed borders of small circles, and in 7.3A, more of the unshaded human processes are dashed and latent. Therefore, evaluation will be largely based on artificial processes. Once again, many of the artificial processes will be rapid and intra-cyclical, and thus hidden from consciousness and perception. In consequence, the augmented agent will be hypersensitive and hyperactive from the artificial perspective, but relatively insensitive and sluggish in human terms.

In such a scenario, the risk of evaluative divergence is moderate, because human and artificial processes are less likely to diverge and conflict. The typical result is that the evaluation of performance is only moderately discontinuous and dyssynchronous, and therefore moderately ambiactive. This could occur in autonomous vehicles, for example. Artificial agents will identify risks and evaluate conditions rapidly and

**Fig. 7.3** Partially upregulated and downregulated processes

precisely, in ways which human passengers will habitually accept and may not even monitor (Kamezaki et al., 2019). The artificial components are highly activated, rapidly evaluating, and precise, while the passenger processes information slowly and simply. This will be fully appropriate, given the circumstances.

## **Upregulated Human Processes**

The final scenario combines the upregulation of human processes with the downregulation of artificial processes. This type of system is summarized in quadrant 2 of Fig. 7.1 and detailed by 7.3B in Fig. 7.3. More human processes now have solid borders, and more shaded artificial processes are dashed. In this scenario, the risk of evaluative divergence and ambiactivity is again moderate. This is because human evaluations of performance are likely to be deliberate and detailed, whereas artificial evaluation will be relatively automated and procedural. Similarly, the augmented agent will be sensitive to outcome variance from the human point of view, but relatively insensitive from the artificial perspective. The overall result complements the preceding scenario. For example, consider the situation in which a teacher evaluates students' online assignments. An artificial agent in the learning management system (LMS) may routinely evaluate timeliness and authorship, while the teacher reads the work fully, to form a detailed assessment. The artificial agent could routinely assess an assignment as on time and authentic. However, the teacher might evaluate the work as poor after careful reading, despite it being on time and authentic. The overall result is that processing is moderately discontinuous and dyssynchronous, and hence moderately ambiactive, which may also be fully appropriate, given the circumstances.

## **Summary of Augmented Evaluation of Performance**

Each of the metamodels just presented shows that digital augmentation could significantly accelerate and/or complicate the evaluation of performance, depending on which processes are activated and upregulated, or deactivated and downregulated, and how well they are supervised. If supervision is effective, and the agent maximizes metamodel fit, then evaluation of performance will be timely, accurate, and a valuable source of insight. However, if supervision is poor, there are major risks. First, there are risks of evaluative divergence, ambiactivity, and dysfunction, when both types of evaluative processing are upregulated. Second, evaluation could be overly convergent and dysfunctional, if one type of process inappropriately dominates the other. Moreover, these risks will only increase, as artificial agents become more powerful and ubiquitous.

# **7.4 Implications for Other Fields**

As this chapter explains, the evaluation of performance is fundamental to theories of human agency, at individual, group, and collective levels. When evaluation is positive, agents develop self-efficacy, plus a sense of autonomy and fulfillment. Even when performance falls short, agents can learn, strive to overcome, and feel positively engaged. Not surprisingly, these effects are deeply related to the functions considered in earlier chapters: all agentic modalities evaluate performances; they also evaluate the results of problem-solving and cognitive empathizing; and self-regulatory capabilities develop through evaluative feedback and feedforward, triggered by the evaluation of performance. Other implications warrant consideration as well.

## **Augmented Performance**

In fact, owing to digitalization, the nature of agentic performance itself is changing. As previous sections explain, digitally augmented capabilities will transform performance and its evaluation. Aspirations and expectations will rise, and outcomes will often improve to match them. Yet many artificial processes will be inaccessible to consciousness and beyond human sensitivity. Hence, people could mistake artificially generated success as their own, and develop an illusion of self-efficacy and control. For example, consider the human driver of a semi-autonomous vehicle. The person may feel very self-efficacious, even having a sense of mastery. However, much of the performance will be owing to the capability of the artificial agents which operate beyond the driver's consciousness and perception (Riaz et al., 2018). The person's perceived locus of control may be equally misleading, posing significant operational risks (Ajzen, 2002).

Major questions therefore arise for the evaluation of performance by augmented agents. To begin with, what should be supervised and evaluated in augmented performance, at what rate and level of detail, and by which agent? And more specifically, how will augmented agents supervise the activation and upregulation, or deactivation and downregulation, of different evaluation criteria? Answering these questions will require rethinking agentic performance itself, at least when it is highly digitalized. Most fundamentally, augmented agentic performance must be understood as a highly collaborative process. From this perspective, future research should investigate the collaborative dynamics and consequences of evaluation. It should ask how digitally augmented agents will develop a shared sense of self-efficacy and engagement in this area of functioning.

## **The Nature of Evaluation**

In addition to rethinking the nature of agentic performance, digitalization prompts new questions about widely assumed processes of evaluation. In modern social and behavioral theories, the evaluation of performance is often conceived in linear terms, as the assessment of fully completed processing cycles, that is, as the inter-cyclical evaluation of outcomes and end states (Argote & Levine, 2020; Gavetti et al., 2007). In such theories, sensitivity to variance is triggered at the completion of performance cycles and especially by variation from outcome expectations or aspirations, which then leads to feedback, adaptation, and learning.

However, in a digitalized world, evaluation becomes increasingly dynamic, intra-cyclical, and a source of real-time, intelligent adjustment and learning. Entrogenous mediation comes to the fore, as intelligent sensory perception, performative action generation, and contextual learning. Via these mechanisms, artificial feedforward will rapidly update evaluative criteria during processing, thereby recalibrating the evaluation of performance itself. In highly digitalized domains, therefore, the evaluation of performance will no longer be linear. Rather, it will be increasingly generative, as it already is in deep learning systems and artificial neural networks (e.g., Goddard et al., 2016).
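The shift from inter-cyclical to intra-cyclical evaluation can be illustrated with a toy feedforward loop. The update rule, rate, and starting criterion below are illustrative assumptions of mine: each observation is judged against the current criterion, and the criterion is then immediately recalibrated, so later observations within the same cycle face an already-updated standard.

```python
def intra_cyclical_eval(observations, criterion=1.0, rate=0.5):
    """Evaluate a stream while recalibrating the criterion in real time.

    Each observation is judged against the current criterion; the
    criterion is then nudged toward that observation (feedforward),
    so evaluation and learning interleave within a single cycle.
    """
    verdicts = []
    for x in observations:
        verdicts.append(x >= criterion)       # evaluate now...
        criterion += rate * (x - criterion)   # ...then recalibrate
    return verdicts, criterion

verdicts, final_criterion = intra_cyclical_eval([2.0, 2.0, 0.5])
# The third observation is judged against a criterion already raised
# by prior feedforward, not against the original starting value.
print(verdicts, final_criterion)  # -> [True, True, False] 1.125
```

An inter-cyclical evaluator, by contrast, would judge all three observations against the original criterion and only update afterward, which is exactly the linear pattern the text says digitalization is displacing.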

Further important questions then arise. To begin with, as the evaluation of performance becomes increasingly dynamic, key processes will be less accessible to consciousness. We must then ask, under which conditions will human beings sense self-congruence, self-discrepancy, and self-efficacy? Might these states become increasingly opaque to consciousness, and if so, a potential source of psychosocial incoherence and disturbance, or at least poorly aligned with contextual reality? In parallel, highly digitalized processes could mute the human experience of self-congruence and self-discrepancy, with consequences for self-awareness and self-regulation. If this were to happen, might too much be taken for granted? People may feel less responsible and disengage (Bandura, 2016). Civility and fairness would also be at risk. To sustain engagement, augmented agents may need to simulate the experience of obstruction, incongruence, and discrepancy, deliberately creating positive friction, as some propose (e.g., Hagel et al., 2018).

## **Implications for Collectives**

Related implications arise for the collective evaluation of performance, especially by digitally augmented organizations and institutions. In each context, digitalization accelerates and complicates collective performance and its evaluation. Here, too, there are major risks. Artificial processes might outpace and override established methods of collective evaluation. For example, digitalized methods could displace public debate and negotiated consensus in political assessment. Granted, evaluations may become more precise and prompt. However, such changes would likely erode the bases of communal trust, collective choice, and participatory decision-making. Evaluation would be digitally determined, often hidden from sight, over-discriminate, and over-complete (Chen, 2017). Indeed, studies already point to these effects (see Hasselberger, 2019).

Furthermore, the speed and scale of augmented evaluation could homogenize collectivity, by demoting the role of human traditions and commitments. If this occurs, digitalization will erode cultural diversity and smother alternative ways of assessing the world. Civilized humanity would be depleted and arguably less adaptive, because diverse aspirations feed forward, planting alternative potentials which the future can reap. Having a richer set of possible futures enhances adaptive flexibility, as the cultural equivalent of biodiversity. Others fear that digitalized surveillance will be used to dominate and control, assigning rewards and sanctions based on the invasive evaluation of performance (Zuboff, 2019). For this reason, many oppose the public use of facial recognition technologies and worry about the future misuse of brain-machine engineering. At the extreme, state actors could exploit digitalization to manipulate the evaluation of performance and predetermine outcomes (Osoba & Welser, 2017). All these are examples of the dilemmas illuminated in this chapter: poorly supervised collaboration between human and artificial agents, in which myopic priors and digital capabilities combine to produce ambiactive, dysfunctional evaluations of performance.

All that said, through the evaluation of performance, human beings sense their progress, value, and worth. If things go well, they are engaged, develop self-efficacy, feel a sense of achievement, and plan their next steps. At the individual level, these processes support the development of purposive goal setting, a sense of autonomous identity, adaptive learning, as well as the coherence of personality. Similarly, at the collective level, the evaluation of performance supports collective self-efficacy, identity, and coherence. All these functions could be enhanced or endangered by augmented agents' evaluation of performance. Genuine benefits are possible, if supervision maximizes metamodel fit, balancing the rates and complexity of human and artificial processing. Poor supervision, on the other hand, will lead to ambiactive, dysfunctional evaluations. Learning would suffer too, as the following chapter explains.

# **References**



# **8 Learning**

Intelligent agents perceive conditions and problems in the world, gather and process information, then generate solutions and action plans. And insofar as outcomes add to knowledge and associated procedures, agents learn. In this sense, much learning is responsive and adaptive, the result of cycles of problem-solving, action generation, performance evaluation, and updates. For human beings, inter-cyclical performance feedback is the primary source of such updates, unfolding as patterns of experience through time. Whereas for artificial agents, much learning is intensely intra-cyclical, meaning it occurs during action cycles, often in real time, mediated by rapid feedforward mechanisms. This is possible because artificial agents cycle far more rapidly and precisely, relative to most behavioral and mechanical processes. Indeed, artificial agents achieve unprecedented complexity and learning rates. Owing to this capability, artificial agents are increasingly important in practical problem-solving and process control, especially in environments where real-time adjustments are beneficial. The same capabilities will empower learning by digitally augmented agents.

Yet supervision is challenging here too. Human and artificial agents have noticeably different capabilities and potentialities in learning. On the one hand, as stated above, artificial agents learn at high rates and levels of complexity and precision. They are hyperactive and hypersensitive in learning. For example, it only takes hours or days to train advanced artificial agents to high levels of expertise. On the other hand, human agents exhibit relatively sluggish learning rates and low levels of complexity. As any schoolteacher can attest, it takes years of incremental learning to educate a human being, and many never achieve expertise.

Clearly, these distinctions are like those in Chap. 7, regarding performance evaluation, which is no surprise because much learning is driven by performance feedback. Therefore, in both areas of functioning, the evaluation of performance and learning, human and artificial agents differ in terms of their processing rates, the complexity of processing, and primary mechanisms of updating. As noted, human learning is relatively sluggish, accrues in simpler increments, and derives mostly from inter-cyclical feedback, whereas artificial agents are hyperactive and hypersensitive in learning, rapidly acquiring complex knowledge, including through feedforward mechanisms. It also follows that learning by augmented agents will exhibit the same potential distortions as the evaluation of performance. When combined in augmented agency, human and artificial learning can become divergent, dyssynchronous, and discontinuous, and thus ambiactive in terms of learning rates and levels of complexity.

Hence, learning by augmented agents can also skew, like the evaluation of performance. Three patterns of distortion are possible. First, highly ambiactive learning will combine rapid, complex, artificial updates with far slower, simpler, human updates. For example, digitalized learning systems cycle rapidly, shortening attention spans and compressing content, yet behavioral aspects of education and training require attentive dedication over long periods of time. In consequence, augmented learning could be overly divergent and hence dyssynchronous, discontinuous, and ambiactive. Second, in other situations, learning by augmented agents could be overly convergent and dominated by artificial processes which overwhelm or suppress human inputs. People would be increasingly reliant on digitalized procedures. Third, the opposite is also possible, in which learning is dominated by human myopia, bias, and idiosyncratic noise. Augmented learning would then reinforce and amplify erroneous priors. If any of these distortions occur, learning by augmented agents will be dysfunctional and often highly ambiguous and ambivalent.

# **8.1 Theories of Learning**

Modern theories of learning emphasize the development of autonomous capabilities and reasoned problem-solving, rather than replication and rote memorization. Modern learning also highlights the role of experience and evaluative feedback. Via such mechanisms, agents develop capabilities, knowledge, and self-efficacy. For the same reasons, modern scholarship accords learning a major role in social development and human flourishing. Writing over a century ago, the founders of modern educational psychology espoused very similar principles (e.g., James, 1983; Pestalozzi, 1830). The enduring challenge is to explain and manage the deeper mechanisms of learning. For example, many continue to ask which aspects of learning are predetermined as natural priors, rather than resulting from experience and evaluative feedback. In other words, what is owing to nurture versus nature? And further, in which ways, and to what extent, can natural learning capabilities be enhanced, especially through structured experience and training? More recently, scholars also investigate how human and artificial agents best collaborate in learning (Holzinger et al., 2019).

# **Historical Debates**

Once again, there is an impressive intellectual history. The ancient Greeks made important contributions which remain relevant today. For example, Plato argued that much knowledge was innate, bestowed by nature and inheritance; the challenge was then to release it. Aristotle, by contrast, put more emphasis on nurture and learning through experience, and highlighted the role of memory in the absorption of such lessons (Bloch, 2007). Two thousand years later, Enlightenment scholars explored similar problems and solutions. John Locke (1979) explained learning in terms of the progressive encoding of new knowledge, initially onto a child's blank mind or *tabula rasa*. This complemented Rousseau's (1979) stronger emphasis on learning through the liberation of natural human curiosity, intuition, and inherent capability. Despite their differences, both Locke and Rousseau shared the modern view that all persons can learn and grow. Both elevated the status and potential of the autonomous, reasoning mind.

Later psychologists continued exploring the mechanisms of learning. In the mid-twentieth century, Skinner (1953) proposed a radical form of behaviorism, based on operant conditioning and reinforcement learning, driven by stimulus and response. However, then and now, critics view this approach as overly reductionist and materialistic. Most now agree that human beings are more intentional and agentic in learning. Not surprisingly, Bandura (1997) is prominent in this community. He argues that humans learn much through experience and the modeling of behavior, which strengthen self-efficacy. Jerome Bruner (2004), another leading figure in educational and cognitive psychology, argues that human beings are embedded in culture and learn within it, as they compose and interpret narrative meaning. For Gardner (1983), learning involves multiple intelligences, including rational and emotional, which engage a range of cognitive and affective functions. These processes engage different senses, including visual, auditory, and kinesthetic systems. To summarize, modern theories of learning emphasize the development of intelligent capabilities, the wider role of agentic functioning, the contextual nature of learning, and the importance of performance feedback.

# **Levels of Learning**

At the individual level, contemporary theories of learning prioritize cognitive and affective factors, sensitivity to context, the value of experience, and the need for engagement, although scholars have long disagreed about the mechanisms which explain these aspects of learning. For example, Chomsky (1957) argued for genetically encoded structures which scaffold grammar and the learning of language. Modern cognitive science was also emerging at the time, along with early computer science and neuroscience. Scientists started to explore the neurological systems which underpin human learning, akin to software architecture, and Chomsky's work can be seen in this context. However, his critics saw innate knowledge structures as a throwback to scholastic conceptions, opposed to the liberation of the autonomous mind (Tomalin, 2003). Many therefore resisted any notion of innateness, favoring fully developmental processes instead. For example, also around the mid-twentieth century, Piaget (1972) argued that there are progressive stages of learning through childhood, corresponding to the development of the neurophysiological system, layering more complex concepts, relations, and logical structures. He argued that these structures were cumulative and contingent on early, albeit predictable, developmental processes. Chomsky (Chomsky & Piatelli-Palmarini, 1980) disagreed, arguing in response that deep semantic structures emerge holistically, irrespective of context.

Recent research is more complex and nuanced, including Chomsky's (2014) own. No single psychological, behavioral, or neurophysiological model is fully explanatory. Like the agentic self generally, learning involves culture and context, physical embodiment, experience, and functional complexity. Evidence therefore points to more complex processes of development and procedures in learning (Osher et al., 2018). Reflecting this view, contemporary researchers focus on the variability of contexts and the way in which cognitive and neurophysiological processes interact in learning. They also investigate cognitive plasticity, with some arguing for relatively high levels of flexibility across the lifespan, while others are more conservative in this regard. Notably, recent studies show that the brain remains more plastic than previously thought (Magee & Grienberger, 2020). Many also highlight the importance of model-based learning, by which people master relatively complex patterns of thought and action in more holistic ways (Bandura, 2017). Formal education leverages model-based learning through experiential methods and problem-based instruction (Kolb & Kolb, 2009). The general trend is toward more complex models of learning which engage multiple components of the agentic system, including cognition, affect, different modalities, types of performance, feedback, and feedforward mechanisms (Bandura, 2007).

Similar ideas inform scholarship about learning at the group and collective levels. Theories emphasize functional complexity, integrating cognitive, affective, and behavioral factors, plus sensitivity to social, economic, and cultural contexts, and the potential to develop and grow over time (Argote et al., 2003). However, theories of collective learning also recognize strict limitations. In fact, they share many of the same concerns as individual level theories: cognitive boundedness, attentional deficits, poor absorptive capacity, persistent superstitious tendencies, plus myopias and biases (Denrell & March, 2001; Levinthal & March, 1993). Proposed solutions relate to the development of dynamic capabilities, flexible organizational design, the use of information technologies, and transactive memory systems, in which the storage and retrieval of knowledge are distributed among groups, allowing them to learn more efficiently (Wegner, 1995).

## **Procedural Learning**

Another important strategy is procedural learning, which leverages individual habit and collective routine to reduce the processing load (Argote & Guo, 2016; Cohen et al., 1996). All agents benefit from acquiring less effortful learning procedures. The reader will recall that similar topics are discussed in Chap. 3, regarding agentic modality. In the earlier discussion, I review the debate about aggregation: whether collective routine emerges bottom-up, from the combination of individual habits, or functions top-down, from the devolution of social forms. The same questions arise for learning. Some scholars privilege learning at the individual level and argue that routine learning is an aggregation of habit (Winter, 2013). In contrast, other scholars privilege the holistic origins of collective learning. From this alternative perspective, collective mind and action are the primitives of organizational learning, not the result of bottom-up aggregation. Questions of agentic modality therefore come to the fore once again: does collective learning aggregate individual habits of learning, or vice versa? Similarly, do the limitations of collective learning reflect the aggregation of individual constraints, or do social and organizational factors impose limits on individual learning? Or perhaps all learning combines both types of constraint?

The solution to the aggregation question presented in Chap. 3 also applies to procedural learning. In this type of learning, many individual differences, such as personal encodings, beliefs, and goals, are downregulated and effectively latent, whereas shared characteristics, such as collective encodings, beliefs, and goals, are upregulated and active. Hence, learning habit and routine coevolve and are stored in individual and collective memory, integrated via common storage and retrieval processes. Collectives thus learn without activating significant differences, and without needing to aggregate such differences. Mischel and Shoda (1998) explain that this is how cultural norms evolve, as common, recurrent psychological processes. Hence, routine learning is neither simply bottom-up nor top-down.

That said, collective agents also learn in nonroutine ways, from deliberate experimentation and risk-taking (March, 1991). Increasing environmental uncertainty and dynamism favor these approaches. In organizational life, this has led to an emphasis on continuous learning and methodologies which highlight feedforward processes, such as design thinking, lean startup methods, and agile software development (Contigiani & Levinthal, 2019). All provide ways for teams and organizations to learn in complex, dynamic environments, using intra-cyclical, adaptive means. Via such methods, agents learn despite high uncertainty, ambiguity, and accelerating rates of change. These methods contrast with earlier, linear approaches toward learning, which emphasize slower, inter-cyclical feedback loops (e.g., Argyris, 2002).

Learning is therefore central to modern theories of agency. Apart from anything else, agents develop self-efficacy and capabilities through learning, by trying and sometimes failing, but succeeding often enough. In these respects, developmental learning exemplifies the vision of modernity, which is to nurture autonomous, intelligent agency, and thereby to increase the potential for human flourishing. Modern theories of learning prioritize the capability of agents to learn and grow, especially from the evaluation of performance. Digital augmentation promises major advances in all these functions.

# **8.2 Digitally Augmented Learning**

Artificial agents are quintessentially problem-solving, learning systems. Some are fully unsupervised and self-generative, such as artificial neural networks and evolutionary machine learning. These agents are becoming genuinely creative, speculative, and empathic (Ventura, 2019). They also compose their own metamodels of learning, based on iterative, exploratory analysis, to identify the type of learning which fits best in any context. Artificial reinforcement learning is one recent development, which enables expert systems to learn rapidly from the ground up (Hao, 2019). Other agents are semi-supervised or even fully supervised. In these systems, models are encoded to guide learning and development. In all cases, artificial agents will employ a metamodel of learning, which is defined by its hyperparameters. Among other properties, hyperparameters will specify the potential categories and layers of learning, plus major mechanisms and cycle rates. These will include dynamic, intra-cyclical feedforward mechanisms, as well as longer inter-cyclical feedback processes.
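To make the idea of learning "rapidly from the ground up" concrete, the following is a minimal tabular Q-learning sketch. The corridor environment, reward scheme, and hyperparameters (learning rate, discount factor, exploration rate) are illustrative assumptions introduced here, not details from the text; the sketch simply shows how an artificial agent's learning is defined by hyperparameters and driven by reward feedback alone.

```python
import random

# Minimal tabular Q-learning: an agent learns, from reward feedback alone,
# to walk right along a 5-state corridor toward a goal state.
# States 0..4; actions: 0 = left, 1 = right; reward 1.0 on reaching state 4.

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3  # hyperparameters: learning rate, discount, exploration

def step(state, action):
    """Deterministic transition; the episode ends on reaching the goal."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = rng.randrange(N_STATES - 1), False  # random non-goal start states
        while not done:
            # Epsilon-greedy choice: mostly exploit current knowledge, sometimes explore.
            a = rng.randrange(2) if rng.random() < EPSILON else max(range(2), key=lambda i: q[s][i])
            s2, r, done = step(s, a)
            # Feedback update: nudge Q toward reward plus discounted lookahead value.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max(range(2), key=lambda i: q[s][i]) for s in range(N_STATES)]
print(policy)  # the learned greedy policy should point right (1) toward the goal
```

No model of the environment is encoded in advance; the agent's knowledge accrues entirely from hypersensitive, step-by-step updates, which is the contrast with slower human learning drawn in this chapter.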

Notably, artificial neural networks reflect the fundamental architecture of the human brain. This deep similarity between artificial and human agents facilitates close collaboration between them in augmented systems of learning (Schulz & Gershman, 2019). Both share similar features of neural architecture, and when joined, they can self-generate their own, augmented metamodels of learning. Some will be semi-supervised—such as Semi-Supervised Generative Adversarial Networks or SGANs—which might incorporate human values, beliefs, and commitments into metamodels of learning. Important applications already include artificial empathy and personality (Kozma et al., 2018). However, as in other areas of augmented functioning, the key challenge is to ensure effective supervision and metamodel fit.

# **Ambiactive Learning**

Not surprisingly, the supervisory challenges of learning by augmented agents are like those of the augmented evaluation of performance. To begin with, artificial agents will tend toward hypersensitive and hyperactive learning, meaning they quickly detect small degrees of variance, which trigger rapid, precise updates. As previously explained, this includes intra-cyclical feedforward mechanisms and entrogenous mediators, such as performative action generation. However, these mechanisms can lead to excessive updates and overlearning. Digitally augmented agents might learn too much and too often, thereby wasting time and resources. This could also produce ambiopic problem-solving, by encouraging sampling and searching too widely for problems and solutions, further and faster than required. For similar reasons, computer scientists research how to avoid unnecessary overlearning (Gendreau et al., 2013).
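In machine learning practice, overlearning corresponds to overfitting, and one standard remedy is to stop updating once held-out performance stops improving. The sketch below is illustrative only: the linear model, synthetic data, and patience threshold are assumptions introduced for demonstration, not methods drawn from the text.

```python
import random

# Toy early stopping: keep training only while held-out (validation) error
# improves, to curb the "overlearning" described above.

rng = random.Random(1)
# True relation y = 2x + 1, observed with Gaussian noise.
data = [(x / 10, 2 * (x / 10) + 1 + rng.gauss(0, 0.1)) for x in range(40)]
train, valid = data[::2], data[1::2]  # alternate points: training vs validation split

def mse(w, b, pts):
    return sum((w * x + b - y) ** 2 for x, y in pts) / len(pts)

w = b = 0.0
lr, patience, best, since_best = 0.05, 5, float("inf"), 0
for epoch in range(1000):
    # One gradient-descent step on the training split.
    gw = sum(2 * (w * x + b - y) * x for x, y in train) / len(train)
    gb = sum(2 * (w * x + b - y) for x, y in train) / len(train)
    w, b = w - lr * gw, b - lr * gb
    # Early stopping: halt once validation error stops improving meaningfully.
    v = mse(w, b, valid)
    if v < best - 1e-6:
        best, since_best = v, 0
    else:
        since_best += 1
        if since_best >= patience:
            break

print(round(w, 1), round(b, 1))  # should approximate the true slope 2 and intercept 1
```

The supervisory logic sits entirely in the validation check: updating is deliberately stopped before the model can chase noise in its training data, a simple computational analogue of supervising hyperactive updates.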

By comparison, human agents are frequently insensitive to outcome variance and sluggish in learning (Fiedler, 2012). Human learning is relatively simple and slow, compared to artificial agents. Significant problems therefore arise for augmented agents because they will combine sluggish, insensitive human updates with hyperactive, hypersensitive artificial updates. As noted earlier, such learning could easily become dyssynchronous and discontinuous, and therefore ambiactive, meaning it simultaneously stimulates and dampens different learning rates and levels of complexity and precision. For example, when people use online search about political matters, they trigger rapid, precise cycles of artificial processing, iterating rapidly to guide search and learning. But at the same time, the human agent may input inflexible, myopic priors as search terms. As a result, the learning process uses digitally augmented means to reinforce political bias. In fact, such learning is an expression of confirmation bias at digital scale and speed. And in most cases, it is highly ambiactive and dysfunctional. These scaling effects help to explain the rapid spread of fakery and falsehood on social networks.

As stated earlier, ambiactive learning also increases the risk of ambiguity and ambivalence, owing to its poorly synchronized, discontinuous nature. This is because ambiactive learning easily produces contradictory or incompatible beliefs, interpretations, and preferences. Granted, moderate degrees of ambiguity and ambivalence can be beneficial (Kelly et al., 2015; Rothman et al., 2017). They support creativity and enhance the robustness and flexibility of learning (March, 2010). But digitalization greatly amplifies these effects and extremes become more likely. Metamodel fit will be harder to achieve and sustain. Some people may become overly reliant on digitalized processes and incapable of autonomous, self-regulated learning.

Furthermore, excessive ambiguity and ambivalence can lead to cognitive dissonance and confusion, even triggering psychological and behavioral disorder, especially when they impact core beliefs and commitments (van Harreveld et al., 2015). Indeed, when ambiactive learning is extreme and widespread, people could lose a shared sense of reality, truth, and ethical norms (Dobbin et al., 2015; Hinojosa et al., 2016). Major consequences follow for digitally augmented communities and collectives. For without a shared sense of reality, truth, and right behavior, people are more vulnerable to deception and superstitious learning. Unable to discriminate real from fake, truth from falsehood, or right from wrong, they are more likely to be docile and rely on stereotypes.

## **Summary of Learning by Augmented Agents**

Based on the foregoing discussion, we can now summarize the main features of learning by augmented agents. To begin with, it is important to recognize there are many potential benefits. Augmented agents acquire unprecedented capabilities to explore, analyze, generate, and exploit new knowledge and procedures. In many domains, significant benefits are already apparent. However, at the same time, the speed and scale of digitalization pose new risks. First, augmented agents risk discontinuous updating, because they might skew toward hypersensitive, complex artificial processing, while being relatively insensitive and simplified in human respects. Second, augmented agents risk dyssynchronous updating, because they might skew toward hyperactive, fast learning rates in artificial terms, while being sluggish in human terms. When combined, these divergent tendencies will produce ambiactive, dysfunctional learning, heightening the risks of ambiguity and ambivalence, and in extreme cases, superstitious learning and cognitive dissonance. The corresponding pattern of supervision is shown by segment 9 in Fig. 2.6. Alternatively, artificial agents might dominate learning and relegate human agency to the sidelines, like segment 7 in Fig. 2.6, or human agents could distort semi-supervised augmented learning, by importing myopia and bias, as in segment 3 of Fig. 2.6.

# **8.3 Illustrative Metamodels of Learning**

This section develops illustrations of learning, showing premodern, modern, and digitalized metamodels. Like the earlier discussions of self-regulation in Chap. 6 and evaluation of performance in Chap. 7, the following illustrations highlight internal dynamics, especially the interaction of human and artificial agents in augmented systems. Also like the preceding two chapters, the following illustrations focus on contrasting processing rates and degrees of complexity, but this time in relation to the precision and rate of learning. Once again, the analysis highlights critical similarities and differences between human and artificial agents.

# **Lowly Ambiactive Modern Learning**

**Fig. 8.1** Synchronous and continuous modern learning

Figure 8.1 illustrates the core features of a lowly ambiactive, modern system of learning. That is, learning in which agents are moderately assisted by technologies, and where updates derive mainly from performance feedback and are relatively synchronous and continuous. The horizontal axis depicts two major cycles of learning labeled 1 and 2, which are further subdivided. These cycles encompass processes of feedback generation and subsequent updates to knowledge and procedures. The vertical axis illustrates the complexity and precision of learning updates, which range from low in the center to high in the upper and lower regions. The figure then depicts cycles for two metamodels of learning, shown by the curved lines labeled L1 and L2. Each depicts a full cycle of processing and updates. Metamodel L2 cycles during each period at moderate complexity. However, metamodel L1 cycles only once over periods 1 and 2, at a lower level of complexity. In this respect, L1 represents a slower, simpler metamodel of learning, such as learning in premodern contexts. L2, on the other hand, represents a faster, more complex metamodel of learning, which is typical of modernity. That said, every two cycles of L2 are fully synchronized with one cycle of L1, making the two metamodels moderately synchronous overall. In fact, L2 is intra-cyclical relative to L1, but assuming slower rates for both, synchronization is feasible. Both are of comparable complexity, and hence moderately continuous as well. As a combined system of learning, therefore, these two metamodels are moderately synchronous and continuous. Indeed, modern learning can be exactly like this. Traditional and cultural systems of learning, represented by L1, are often moderately continuous and synchronized with technically assisted adaptive learning, represented by L2. Overall, the scenario illustrated in Fig. 8.1 is therefore lowly ambiactive, often functional, and neither excessively ambiguous nor ambivalent.
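The synchrony just described can be expressed with simple arithmetic: two metamodels whose cycle lengths are commensurate re-align whenever both complete a whole number of cycles, that is, at common multiples of their periods. The periods below are illustrative assumptions, not values from the figure.

```python
from math import gcd

# Two learning metamodels re-align whenever both complete a whole number
# of cycles, i.e., at multiples of the least common multiple (LCM) of
# their cycle lengths. Periods are in arbitrary time units.

def lcm(a, b):
    return a * b // gcd(a, b)

def alignment_points(period_a, period_b, horizon):
    """Times up to `horizon` at which both metamodels complete a cycle together."""
    step = lcm(period_a, period_b)
    return list(range(step, horizon + 1, step))

# L1 cycles every 2 periods, L2 every 1: frequent re-alignment, low ambiactivity.
print(alignment_points(2, 1, 8))   # [2, 4, 6, 8]
# Mismatched periods re-align only rarely: the seed of dyssynchronous learning.
print(alignment_points(7, 3, 40))  # [21]
```

The contrast between the two calls previews the next subsection: when periods share no convenient common multiple, the combined system spends most of its time out of alignment.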

# **Highly Ambiactive Augmented Learning**

Next, Fig. 8.2 depicts highly dyssynchronous and discontinuous, ambiactive learning by augmented agents. Once more, the horizontal axis depicts two temporal cycles of learning, labeled 1.1 and 1.2, while the vertical axis illustrates the complexity and precision of learning updates, from low to high. The figure again depicts two metamodels of learning, this time labeled L2 and L3. The curved line L2 is a modern metamodel of learning which assumes moderate technological assistance. L3 depicts a fully digitalized, generative metamodel with a higher learning rate and greater complexity. Importantly, the two metamodels of learning are now poorly synchronized and connected. As the figure shows, they intersect in an irregular fashion. Updates are therefore dyssynchronous. Furthermore, the two metamodels exhibit different levels of complexity, which means updates will be discontinuous as well. If we now assume that both metamodels combine in one augmented agent—that is, the agent combines modern adaptive learning L2 and digitalized generative learning L3—overall learning will be dyssynchronous, discontinuous, ambiactive, probably dysfunctional, and highly ambiguous and ambivalent as well.

**Fig. 8.2** Dyssynchronous and discontinuous augmented learning

Consider the following example. Assume that L2 in Fig. 8.2 represents a modern metamodel of adaptive learning, in which an aircraft pilot learns from practical performance feedback. Next, assume that L3 represents the generative learning of an artificial avionic control system. Now assume that the pilot and avionic agent collaborate in flying an aircraft. Given their different modes of learning, they will update knowledge and procedures in a dyssynchronous and discontinuous fashion, reflecting the pattern in Fig. 8.2. Overall learning will be highly ambiactive, ambiguous, and ambivalent. Learning will likely be dysfunctional and, in this case, potentially disastrous. Indeed, aircraft have crashed for this reason (Clarke, 2019). Pilot training and artificial systems were poorly coordinated and synchronized, and human and artificial agents failed to collaborate effectively. Pilots could not interpret or respond to the rapid, complex signals of the digitalized flight control system. And the flight control system was insensitive to the needs and limitations of the pilots. The resulting accidents are tragic illustrations of ambiactive dysfunction. Similar risks are emerging in other expert domains, and many more instances are likely.

## **Lowly Ambiactive Augmented Learning**

**Fig. 8.3** Synchronous and continuous augmented learning

In contrast, Fig. 8.3 illustrates lowly ambiactive learning by an augmented agent. Digitalized learning is labeled L4, and modern adaptive learning is again labeled L2. As in the previous figures, the horizontal axis depicts temporal cycles, and the vertical axis again illustrates levels of complexity and precision. In contrast to the preceding figure, however, the two metamodels in Fig. 8.3 are now moderately synchronous and continuous. Cycles are better aligned, intersecting at the completion of major learning cycles, despite their different rates. Their levels of complexity are similar as well. By implication, collaborative supervision is strong. If we now assume that these two metamodels are combined in one augmented agent—that is, the agent combines modern adaptive learning L2 and digitalized generative learning L4—then overall learning will be lowly ambiactive, functional, and not significantly ambiguous or ambivalent. For the same reasons, entrogenous mediators will be adequately aligned, synchronous, and continuous as well. In summary, Fig. 8.3 illustrates a well-supervised system of learning by augmented agents, which is what engineers aspire to build (Chen et al., 2018; Pfeifer & Verschure, 2018). The supervision of learning achieves strong metamodel fit, in this case, with appropriately balanced learning rates and levels of complexity.

## **8.4 Wider Implications**

Highly ambiactive learning therefore poses major risks for augmented agents: extreme ambiguity and ambivalence, incoherent and inconsistent updates, functional losses, and cognitive dissonance. Moving forward, researchers must develop methods of supervision which mitigate these risks and maximize metamodel fit. Fortunately, research has already begun. For example, dissonance engineering explicitly addresses these risks (Vanderhaegen & Carsten, 2017). It seeks to manage and supervise human-machine interaction in learning, and especially the risks of learning conflict within augmented systems. Other researchers are working to develop more empathic interfaces, to facilitate better human-machine communication in learning (Schaefer et al., 2017). However, we have yet to see comparable research efforts in the social and behavioral sciences. The following sections highlight some of the major challenges and opportunities in this regard.

# **Divergent Capabilities**

Many of these problems arise because the speed and scale of digital innovation are outpacing human absorptive capacities and traditional methods of learning. Most human learning is gradual, cycling relatively slowly in response to inter-cyclical performance feedback. Knowledge is absorbed incrementally, often vicariously. Theories therefore assume broadly adaptive processes, driven by experience and performance feedback, iterating in punctuated gradualism over time. Moreover, owing to limited capabilities and behavioral contingency, human learning is often incomplete. In contrast, artificial learning is increasingly powerful, fast, and self-generative. Indeed, today's most advanced artificial agents may take only minutes or hours to perform complex learning tasks which no human agent could ever complete. Compounding the challenge, artificial feedforward mechanisms are largely inaccessible to consciousness, given the relatively sluggish, insensitive nature of human monitoring. When combined, these divergent capabilities produce novel risks for augmented agents. Artificial processes could race ahead, while humans constantly struggle to keep up. Human attention spans may continue to shrink, while digitalized content is compressed and commoditized (Govindarajan & Srivastava, 2020). Recall that similar effects drive entrogenous divergence in Fig. 2.4. Learning could be simultaneously adaptive and generative, fast and slow, over-complete and incomplete. At the same time, human supervision could import erroneous myopia, bias, and noise, and digital augmentation will reinforce and amplify these limitations. In these situations, learning will be dysfunctional and lead to functional losses.

Existing theories of learning are ill-equipped to conceptualize and explain these risks. Theories typically assume the gradual, progressive absorption of knowledge and skills (Lewin et al., 2011). Myopic learning, insensitivity to feedback, and sluggishness are the main foci of scholarly attention, which makes perfect sense in a pre-digital world. For the same reasons, the risks of overlearning and over-absorption receive little attention. It is no surprise, therefore, that existing theories are ill-equipped to explain the effects of digitally augmented hyperopia, hyperactivity, and hypersensitivity, and the resulting risks of dyssynchronous, discontinuous updating, and ambiactive learning. In fact, to date, most of these risks are not even conceptualized in theories of learning.

The opposite is true for research in computer science and artificial intelligence. In these domains, overlearning, hypersensitivity, and hyperactivity already receive significant attention (Panchal et al., 2011), which is fully explicable because the risks are clear and growing. They interrupt and confuse artificial learning, leading to functional losses. Consider the control of autonomous vehicles once again. In an emergency, the artificial components of the augmented system should prioritize rapid approximate learning over slow exact learning. For example, there is no need to distinguish a pedestrian's age or gender before stopping to avoid a collision (Riaz et al., 2018). Overlearning would delay action and be disastrous. Comparable trade-offs will arise in other augmented domains, including the piloting of aircraft, as discussed earlier. Supervised trade-offs will be critical, balancing the complexity of updates against learning rates, given the context and goals. If well supervised, augmented agents can enjoy significant benefits. They will incorporate the best of human experience and commitment, and the best of artificial computation and discovery.
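The approximate-versus-exact trade-off just described can be sketched as a time-budgeted, staged decision: act on the coarsest sufficient judgment when time is short, and refine only when time allows. The stages, costs, and action labels below are hypothetical illustrations, not an actual avionics or vehicle API.

```python
# Staged, time-budgeted decision-making: each stage refines the previous
# judgment but costs more time (in milliseconds). All values are hypothetical.

STAGES = [
    ("obstacle_present", 5),    # coarse: something is in the path
    ("obstacle_class", 50),     # finer: pedestrian vs vehicle vs debris
    ("obstacle_details", 400),  # finest: age, trajectory, and so on
]

def decide(time_budget_ms):
    """Run detection stages until the budget is exhausted; return the most
    refined judgment affordable, plus the action it supports."""
    spent, best = 0, None
    for label, cost in STAGES:
        if spent + cost > time_budget_ms:
            break
        spent += cost
        best = label
    action = "brake" if best else "keep_driving_and_resense"
    return best, action

# An emergency leaves ~20 ms: only the coarse stage fits, and that is enough.
print(decide(20))   # ('obstacle_present', 'brake')
# A routine scan with 500 ms can afford full refinement before acting.
print(decide(500))  # ('obstacle_details', 'brake')
```

The design choice is the point: the supervisory layer decides how much precision a context can afford, so rapid approximate learning governs emergencies while slow exact learning is reserved for unhurried cycles.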

Furthermore, digitally augmented agents will recompose metamodels of learning in real time, to maintain metamodel fit as contexts change. To do so, they will rely heavily on entrogenous mediators, namely intelligent sensory perception, performative action generation, and contextual learning. Every phase of learning will be intelligent and generative, rather than procedural and incremental. In this fashion, digital augmentation enables adaptive learning by design. But this raises additional questions. How much of augmented learning will be accessible to human consciousness and supervision, or will it be an opaque product of artificial intelligence? And if the latter proves true, could this lead to a new type of superstitious learning, in which humans absorb outcomes without understanding how or why they came about? In fact, we already see some evidence of this in the opacity of deep learning and artificial neural networks. These systems are widely applied in augmented settings, but important features can remain hidden and unexplainable, even to their developers (Pan et al., 2019).

# **Problematics of Learning**

When viewed collectively, these developments signal a major change in the problematics of learning. To begin with, recall that modernity problematizes the following: how and to what degree, can human beings transcend their natural limits through learning, to be more fully rational, empathic, and fulfilled? Digitalization prompts additional questions. To begin with, how can human beings collaborate closely with artificial agents in learning while remaining genuinely autonomous in reasoning, belief, and choice? Relatedly, how can humans absorb digitally augmented learning while preserving their natural intuitions, instincts, and commitments? And how can digitally augmented institutions and organizations, conceived as collective agents, fully exploit artificial learning while avoiding extremes of digitalized determinism? Also, how can humanity ensure fair access to the benefits of digitally augmented learning and not allow it to perpetuate inequality, discrimination, and injustice? Finally, how will human and artificial agents trust and respect each other in collaborative learning, despite their different levels of capability and potentiality? The metamodels and mechanisms described here can help guide research into these questions.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

Te images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **9 Self-Generation**

Modern persons are most fulfilled when they freely choose who to be and become in the world. In other words, they flourish best through autonomous self-generation, which is to manage one's own development and reproduction without external direction. At the individual level, people self-generate career paths, social identities, and autobiographical narratives (McAdams, 2013). Options for doing so are found in culture and community, which provide choice sets of possible selves and life courses. Similarly, groups and larger collectives self-generate through organized goal pursuit and the composition of shared narratives (Bruner, 2002). Options at this level emerge from culture, social ecology, and history. These choice sets also comprise metamodels of self-generative potentiality, that is, related sets of self-generative models. Any choice will therefore instantiate one or other agentic metamodel.

Self-generative potentiality also varies from culture to culture, between socioeconomic groups, and across historical periods. Regarding the past, as previous chapters explain, in premodern contexts agentic potentiality was tightly constrained. Metamodels of agency were relatively fixed and stable and provided few degrees of self-generative freedom. For most people in premodernity, life was dominated by tradition and templates for survival. Living a good life meant having physical security, food and shelter, family continuity, and the replication of communal rituals and norms. Similar principles applied at the collective level. Social organization was stable and patriarchal. Collective self-generation referenced embedded norms and established orders. Indeed, these are the core features of the replicative, agentic metamodel which dominated during premodernity.

By contrast, during the modern era, self-generative potentiality expanded greatly, at least for many. As capabilities and endowments increased, people enjoyed greater degrees of freedom and choice, to develop as autonomous, self-efficacious agents. In many societies, cultural norms have shifted in the same direction, to encourage personal ambition and mobility. Reflecting such freedom, the modern period is characterized by self-generative possibility. It aspires to liberate human potential, transcending the premodern focus on survival and fate. Modernity tells a story of progress, reasoning mind, scientific discovery, and innovation, all dedicated to the "social conquest of earth" (Wilson, 2012). And to be sure, progressive social policies and economic growth have expanded self-generative capability and potentiality. Technological innovation, improvements in education, public health, participatory government, and free market economies have combined to lift many (though not all) from historic deprivation and ignorance. For example, the career path of entrepreneurship is now a well-established option in contemporary societies (McAdams, 2006). It incorporates values of autonomy and exploration, a preference for risk-taking, creativity, and organized goal pursuit—all qualities which exemplify the adaptive, agentic metamodel of modernity. Production and consumption have also grown, leading to a predictable emphasis on the acquisition of goods and services, and the enjoyment of their utility.

Nevertheless, self-generation often falls short of aspirations, owing to enduring constraints and deficits. To begin with, options remain limited for many. Survival may be the best a person or community can hope for. What is more, self-generation can also disappoint in relatively abundant environments. Even if better, more varied self-generative options emerge, they might prove difficult to realize, because agents are incapable of choice and lack conversion capabilities, that is, the capabilities required to exploit the opportunities one has (Sen, 2000). Hence, owing to increasing complexity and limited capabilities, people neither discriminate between options nor convert them into reality. The expansion of potentiality overwhelms them. In these situations, many rely on social docility instead. They adopt the career and life path recommended by their community or family. That said, this kind of docility is often satisfying, especially in relatively munificent societies. Living a standard life in a plentiful world can be fulfilling enough.

Contemporary digitalization amplifies these opportunities and challenges. For example, at the individual level, digitalization provides new ways for people to curate and share memories, form new relationships, and choose alternative identities and futures. Digitalization also creates fresh opportunities at the collective level, to organize, collaborate, pursue common goals, and compose new narratives. Artificial agency points in the same direction. Particularly, today's most advanced systems are fully self-generative and globally connected. In these respects, human and artificial agents are increasingly compatible, as intelligent self-generative agents. A pluralistic world of augmented potentiality is fast emerging. However, at the same time, digitalization amplifies the dilemmas of munificence described earlier. Presented with a rapidly expanding range of self-generative options, many are unprepared, resistant, or overwhelmed by the range and complexity of choice. They resist, delay, or retreat from digitalized, self-generative options (see Kozyreva et al., 2020).

Ironically, therefore, and in contrast to earlier periods, digitally augmented self-generation may disappoint because of too many opportunities and resources, rather than too few. To be sure, self-generative potentiality will increase, but if human capabilities lag, the freedom to choose will decline. In fact, recent studies report such effects (e.g., Scott et al., 2017). They show that digitally augmented self-generation can skew toward such extremes. People might resist digitalization on some dimensions, feel blocked and incapable in other ways, or retreat to their priors, while others surrender to digital determination (Collins, 2018). For example, in curating an online persona, some people deliberately avoid information about alternative life choices, yet struggle to search the sources they trust, and therefore rely on artificial agents to determine their choices. In this fashion, digital augmentation might narrow and distort self-generation.

# **9.1 Self-Generative Dilemmas**

In fact, self-generative trade-offs are the norm. All agents compromise to some degree, as they balance self-generative freedom with the need for coherence and control (Bandura, 1997; Schwartz, 2000). One common strategy is to limit the range of options under consideration. As I explained earlier, people often simplify choice by relying on docility within the social world (Ryan & Deci, 2006). They defer to culture and convention, rather than autonomous reflection, when making self-generative choices. They adopt myopic life paths and focus on singular domains of being and doing. To be sure, myopia and social docility simplify choice (Bargh & Williams, 2006). Myopic choices are typically clear and predictable. And docility allows people to find psychosocial meaning and continuity within culture (McAdams, 2001). They choose from a preexisting set of possible futures, confident in their meaning and feasibility. By choosing mimetic life paths, therefore, agents can self-generate with a modest sense of autonomy, while securing coherence and consistency.

## **Sources of Disturbance**

However, self-generative coherence is easily disturbed, especially if the choice set suddenly contracts or expands. Regarding the contraction of choice, a sudden loss of resources or social order will reduce the range of self-generative options. Disease, social disorder, or economic depression may strike, and sometimes all three, as in times of global pandemic. When such events occur, there are fewer opportunities and degrees of freedom for self-generation. Potentiality shrinks and lives are disrupted. In contrast, regarding the sudden expansion of choice, a rapid increase in resources, capabilities, or endowments will enhance options for self-generation. For example, a person may unexpectedly inherit a fortune, or be transported to an abundant environment, or gain access to extraordinary knowledge and capabilities. Similarly, a community might discover vast, untapped resources. Self-generative choice sets rapidly expand.

However, plentiful choice is unusual and presents different challenges. As stated above, some agents struggle to appreciate an expanded range of possibilities and find it hard to discriminate and order preferences. And even if they can choose, they may fail to realize their choice, owing to inadequate conversion capabilities and a lack of requisite resources, especially when self-generative options are novel and complex. Hence, people are myopic and simplify choice. They make singular, predictable life choices. Opportunities are missed, and sometimes intentionally avoided (Bandura, 2006). In any case, self-generative abundance is exceptional. For most individuals and communities, the opposite is true. They endure deprivation as a permanent condition and have few self-generative options at the best of times (Sen, 2000). Not surprisingly, therefore, the dilemmas of munificence are rarely studied, apart from some fictional accounts (e.g., Forster, 1928; Huxley, 1998), and almost never treated as problematic. Rather, scholarly attention rightly focuses on persistent limitations and deprivation.

## **Digital Augmentation of Self-Generation**

Digitalization promises a qualitative shift in this regard. Quite simply, it affords more options for being, doing, and becoming. By leveraging digitalized capabilities, augmented agents will be able to combine different modes of action and becoming, self-generating dynamically in real time. Consider clinical medicine once again. In this domain, augmented human-machine agents (clinicians and computers) will combine empathy and personality, associative and speculative analysis, clinical expertise and robotic capability, plus predictive scenario modeling, all simultaneously in real time. Working together, they will take patient care to a new level. In this fashion, digitalization will enable more dynamic, flexible self-generation by clinicians, as people and professionals. More generally, it will allow augmented agents to function effectively across multiple modes of being and doing (see Chen & Dalmau, 2005), that is, to collaborate in ambidextrous self-generation.

Digitalization therefore continues the narrative of modernity, toward richer self-generative capability and potentiality, but now at great scale and speed (Bandura, 2015). In fact, the digital augmentation of self-generation will constitute a historic transformation, at least for many, toward self-generative abundance and ambidexterity, in contrast to historic patterns of limited, singular modes of activity and self-generation. Already, digital networks allow people to adopt new modes of action and compose alternative narratives within virtual worlds. Similarly, online communities proliferate, while digital platforms support innovative social and organizational forms (Baldwin, 2012). Moreover, future innovations will accelerate these trends. Even at the bottom of the socioeconomic pyramid, digitalization allows a growing number of people to aspire to forms of life and action which were previously inconceivable (Mbuyisa & Leonard, 2017).

Nevertheless, as noted earlier, some people will retreat, resist, feel blocked, or simply be incapable of embracing new possibilities. Indeed, most people are poor at combining new and different modes of action. For example, many cannot synthesize associative and calculative intelligence, nor can they combine creativity and computation (see Malik et al., 2017). Similarly, they struggle to absorb alternative modes of being and doing in social life. Most people are not ambidextrous in these respects. In fact, this limitation is reflected in the classic metamodel of industrial modernity: the strict division of labor, singular domains of training and efficacy, and path-dependent careers. For this reason, contemporary educational and training programs try to develop ambidextrous capabilities, especially in managing opportunity and innovation (O'Reilly & Tushman, 2013).

Other people will retreat from or resist the digital augmentation of self-generation, especially those who are deeply committed to cultural traditions or have inflexible assumptions about the ideal self. For these people, digital augmentation will not expand self-generative potentiality. It will reinforce myopic priors instead. For example, studies document the proliferation of online xenophobia against alternative life choices (Chetty & Alathur, 2018). At the opposite extreme, some people could overly relax and abandon their prior commitments. Instead of maintaining cultural norms and values, and seeking to own their own choices, they may surrender to artificial control and become digitally docile. Their domains of action, even careers and life paths, will be determined by artificial sources. Risks therefore emerge at every extreme. Neither retreat, resistance, blockage, nor surrender is an effective response to the digital augmentation of self-generation.

## **Dilemmas of Augmentation**

New dilemmas thus emerge for augmented agents, as they seek to self-generate. On the one hand, artificial agents are increasingly self-generative, able to combine different modes of action and intelligence in real time, far beyond the reach of human capabilities and consciousness. On the other hand, humans are typically sluggish and myopic in self-generation and tend toward singular modes of being and doing. Therefore, when human and artificial agents collaborate as augmented agents, they bring different self-generative strengths and weaknesses. If poorly supervised, the combined system could be singular and path dependent in human respects, but variable and dynamic in artificial terms. This will result in divergent, distorted patterns of self-generative ambidexterity. Agents will combine singular, exploitative modes of human self-generation with flexible, exploratory modes of artificial self-generation. Alternatively, one agent might dominate the other and self-generation becomes highly convergent, for example, when people surrender to artificial determination.

This presents another supervisory challenge for augmented agents. They must find an appropriate ambidextrous balance, that is, combining human and artificial modes of self-generation to maximize metamodel fit. If supervision is poor, however, the result will be dysfunctional divergence or convergence. Consider the following example. Assume that some years ago, a woman or man trained to be a schoolteacher and learned traditional pedagogical methods. A predictable life course lay ahead. However, more recently, rapid digitalization, pandemic risks, and other social developments require the teacher to master digitally augmented techniques and tools. In other words, the teacher must now collaborate in ambidextrous self-generation. However, she or he may resist or feel blocked, and default to prior knowledge and procedures. The risk, therefore, is that human and artificial self-generation will be divergent and dysfunctional. Digitally augmented self-generation would be a distorted form of ambidexterity, in which both agents are likely to obstruct each other. In fact, recent studies show that this is already happening (e.g., Salmela-Aro et al., 2019).

These dilemmas suggest a major shift in the problematics of self-generation. As noted above, modern scholarship rightly focuses on the persistent deprivations and limitations which constrain self-generation (Sen, 2017a). However, in a highly digitalized world, the problematics of self-generation expand. In addition to overcoming limits and deprivation, humanity must learn to appreciate and absorb digitally augmented potentiality. Lifelong learning and self-regeneration will become the norm. New questions therefore arise: how can human and artificial agents collaborate in dynamic self-generation, learning to be jointly ambidextrous in this regard, while ensuring human coherence and continuity; and what will count as well-being and flourishing in a digitally augmented world?

## **Summary of Augmented Self-Generation**

In summary, whether for good or ill, digitalization is transforming established patterns of self-generation. On the one hand, artificial self-generation is increasingly exploratory and autonomous, as artificial agents compose and recompose themselves. By incorporating these capabilities, augmented agents will be capable of multiple modes of being, doing, and becoming. They will be ambidextrous in this regard, combining both human and artificial modes of self-generation. On the other hand, however, human agents naturally possess limited capabilities and often remain committed to exploiting singular narratives and traditional life paths. They are persistently non-ambidextrous, unless trained to be otherwise. When both types of agent combine in augmented agency, the result could be self-generative divergence or convergence. Regarding divergence, human self-generative functioning will combine and conflict with artificial functioning (e.g., Levy, 2018). And regarding convergence, some people will either overtake or surrender to artificial self-generation. The digital augmentation of self-generation therefore focuses this book's core question: how to be and remain agentic in a digitalized world? The challenge is to supervise self-generation in ways which exploit new capabilities and potentialities, while respecting human choices and commitments.

# **9.2 Illustrations of Self-Generation**

The preceding argument identifies the following principles. Human and artificial agents are situated, complex, open, and adaptive systems. Both exhibit varying degrees of self-generative capability and potentiality. However, humans have limited capability to discriminate, choose, and explore new self-generative options. Instead, they often exploit singular and predictable modes of self-generation, while artificial agents are increasingly dynamic and exploratory. In consequence, many humans will retreat, resist, feel blocked, or simply surrender, in response to the digital augmentation of self-generation. In these situations, digitalization will produce distorted forms of self-generative ambidexterity. Assuming these principles, the following sections illustrate major scenarios of self-generation, including the new patterns emerging in today's augmented world. The first illustration shows the baseline of modernity.

## **Self-Generation in Modernity**

As earlier sections explain, modernity aspires to develop autonomous, reasoning persons who can self-generate their own life path. Contemporary educational and behavioral interventions exemplify these aspirations, as do modern institutions and organizations (Scott, 2004). Figure 9.1 illustrates the self-generative metamodels within such a world. The figure focuses on the core challenge discussed in the previous section, namely, the capability of agents to discriminate and choose between self-generative options—the major risk being that people discriminate poorly, often resist or surrender, and fail to maximize choice. To capture these effects, the figure compares the complexity of self-generative metamodels to the degree of discriminate ranking between them. The figure further assumes capabilities at level L2, with a moderate level of technological assistance. It also assumes that the more complex the self-generative choice set, or metamodel of self-generation, the less discriminated it is likely to be, and vice versa.

The figure then depicts four metamodels of self-generative choice. Quadrant 1 combines complex self-generative models, with highly discriminate ranking of them. This implies that agents can make a best

**Fig. 9.1** Modern self-generation

choice about a complex model. Hence, these choices are optimizing. But they demand strong ambidextrous, self-generative capabilities, which enable agents to discriminate and combine multiple, complex options. Second, quadrant 2 combines complex self-generative models, with less discriminate ranking. Such choices will be maximizing. Agents will incompletely rank complex options and choose one which is no worse than the alternatives. This scenario assumes moderate ambidextrous capabilities and is more feasible in this respect. Indeed, it accords with observed reality: people often choose no worse versions of complex life paths—for example, adopting an entrepreneurial career, in which options are complex and hard to rank. Next, quadrant 3 combines less complex self-generative models with highly discriminate ranking of them. Such choices will also be maximizing, owing to the almost complete rank ordering of less complex options. This metamodel also assumes moderate ambidextrous capabilities. And once again, it accords with observation: many people choose the best version of a simpler life path, for example, striving to achieve elite career status in a highly regulated community or profession. Finally, quadrant 4 combines less complex self-generative models with less discriminate ranking. These choices will be practical, meaning they are feasible and likely to succeed, and adequate for being a self-generative agent in the world. Not surprisingly, this metamodel assumes lesser capabilities and is therefore very feasible. Arguably, many individuals and collectives exhibit this type of self-generation: choosing a no worse version of a simpler life path, making routine, mimetic choices in a modern world, and being adequately fulfilled by doing so.

Figure 9.1 also shows further details. Different metamodels of self-generation, or model choice sets, are shown by the oval shapes N2, D2, and P2. First, it is important to note that these metamodels do not encompass much of the optimizing quadrant 1. Such choices are ideal and inspirational, but difficult to rank and realize, owing to their extreme complexity and the required level of discrimination. Second, N2 primarily overlaps quadrant 3, which combines simplified models with highly discriminate ranking of them. These options will be maximizing, with respect to the complete ordering of simplified, self-generative models. This metamodel therefore assumes moderate capabilities, at best. It is also more normative and calculative, for example, by planning to achieve elite status within a regulated community or profession. Hence, the symbol N is employed. Third, the metamodel D2 primarily overlaps quadrant 2, which combines complex self-generative models with partial, less discriminate ranking. It also assumes moderate capabilities. These options are maximizing, with respect to the incomplete rank ordering of complex models. Hence, the symbol D is used, and the self-generative options in D2 are more descriptive, intuitive, associative, and harder to discriminate—for example, choosing an entrepreneurial career and life path. And fourth, the metamodel P2 largely overlaps quadrant 4, which illustrates a practical self-generated life in the modern world, following a narrow, routine path with modest expectations or aspirations, which is adequate, feasible, and hence the most frequent choice.

Note that the figure also shows another scenario labeled P1. This indicates the practical self-generative choices of a premodern world. Clearly, P1 is even less complex and discriminated than P2, and P2 only partly overlaps P1. This illustrates the fact that much of self-generative practicality in the premodern world is insufficient for modernity. For example, a peasant life may be practical and adequate in premodernity, but inadequate and dissatisfying during modernity. By the same token, much of self-generative practicality in modernity would be exceptional during premodernity. For example, social and economic mobility are widely viewed as feasible and adequate in modern societies but were exceptional and elite in premodern times.

Furthermore, the metamodels N2 and D2 are significantly distinct, shown by their small overlap with P2. Self-generation in the modern world is dualistic, in this regard, and therefore agents must be efficacious in different types of choice, often at the same time, if they hope to embrace both. In other words, they must be ambidextrous, learning to explore and exploit different life paths simultaneously (see Kahneman, 2011). For example, imagine living a typical family life, striving to optimize stability and continuity, while pursuing a highly creative, risky entrepreneurial career. In such a life, integration and coherence are not guaranteed. To manage these dilemmas, modern agents must develop ambidextrous efficacies across diverse modes of being, doing, and becoming.

## **Divergent Augmented Self-Generation**

Now consider the digitally augmented world, in which self-generative capabilities and potentialities are greatly enhanced. Central features include the collaboration of human and artificial agents in systems of augmented agency; highly creative, compositive methods of self-generation; and rapid learning, both intra-cyclical and inter-cyclical. In fact, augmented agents will have the capability to compose and update self-generative models during life phases, and potentially in real time. However, as I explained earlier, despite rapidly expanding capabilities and potentialities, many people will be slow to absorb these developments. Some will resist, retreat, feel blocked, or simply surrender. Figure 9.2 illustrates this type of digitally augmented self-generation. Like the previous figure, Fig. 9.2 shows the complexity of self-generative models

**Fig. 9.2** Distorted augmented self-generation

on the vertical axis, from low to high, and the degree of discriminate ranking on the horizontal axis, also from low to high. Because this scenario is digitally augmented, the figure assumes that capabilities have significantly expanded to L3, compared to the previous figure. Four quadrants then distinguish the same broad options as the preceding figure.

Next, the figure shows different metamodels or model choice sets. First, consider the oval shapes N3, D3, and P3. The shape P3 primarily overlaps the practical choice in quadrant 4. Hence, P3 illustrates the minimal type of self-generation required to live a practical life in a digitalized world. The figure also shows that P3 partially overlaps the earlier metamodels of this kind: it overlaps a small portion of P1 and more of P2. This indicates that practical self-generation in a digitalized world transcends the minimal standards of modern and premodern scenarios, although a limited number of premodern options may continue, perhaps cultural or religious life choices, and a good portion of modern options as well. However, significant aspects of self-generative normality in the digitalized world will be exceptional, relative to earlier periods. For example, thanks to digitalization, global connection and collaboration are standard features of self-generation for many people today, but these attributes were exceptional and elite during much of modernity and would be signs of divinity in premodern societies.

Furthermore, the metamodels N3 and D3 are very distinct, shown by their relatively minor overlap with each other and with P3. Self-generation is therefore highly divergent. The scenarios are skewed toward distorted forms of ambidexterity. In fact, this suggests opposing human and artificial self-generative processes, and self-generation is highly dualistic. Such dualism was less problematic in earlier modern contexts, which are more forgiving in these respects. However, in highly digitalized contexts, extreme self-generative divergence is more likely. There is a significant risk that self-generation will exhibit ambidextrous distortion. Figure 9.2 depicts exactly this. And in such cases, there is a high risk of psychosocial incoherence for personalities, groups, and collectives. Effective supervision will be critical to avoid such extremes.

## **Convergent Augmented Self-Generation**

In other digitalized contexts, augmented agents will be more balanced and maximize metamodel fit. Artificial agents will be empathic and support humans to choose and pursue richer life paths. Human agents will then enjoy more fulfilling, self-generative choices. However, to achieve this, both types of agent need to take significant steps. First, human agents will have to relax some traditional commitments, including fixed narratives, and embrace lifelong learning. Second, artificial agents will have to develop genuine empathy for human needs and aspirations, while resisting distorting myopia and bias. If human and artificial agents can achieve this type of ambidextrous collaboration, the universe of self-generative potentiality will expand dramatically. Figure 9.3 illustrates this type of balanced self-generation by augmented agents.

**Fig. 9.3** Balanced augmented self-generation

Once again, the figure shows the complexity of models on the vertical axis and the degree of discriminate ranking on the horizontal axis. The four quadrants show the same general choice options as the preceding two figures. The notable change is that the metamodels labeled N4, D4, and P4 are more convergent when compared to the divergent set in the preceding figure. All three metamodels now overlap to a significant degree. This illustrates the fact that in this scenario, human and artificial self-generation are broadly convergent, rather than divergent. The augmented agent exhibits strong ambidextrous capabilities.

In contrast to the preceding figure, therefore, N4 and D4 are more convergent, although they retain modest distinction. They do not fully overlap, which shows that self-generation is not fully digitalized. Significant degrees of freedom remain, allowing space for human intuition and imagination as well as purely artificial self-generation in D4 and N4. Hence, these metamodels are less polarized and dualistic, and more continuous and pluralistic. They synthesize human and artificial self-generation in a balanced, ambidextrous fashion. Finally, the practical metamodel P4 overlaps prior scenarios, but is larger than both P2 and P3. What was exceptional or impossible, even in the recent digital past, is now practical and feasible. In summary, the metamodels in Fig. 9.3 achieve strong fit and largely mitigate the risk of psychosocial incoherence. Agents enjoy the benefits of augmented self-generation.

# **9.3 Implications for Human Flourishing**

Throughout recorded history, including the recent past, self-generative options have been strictly limited for most individuals and communities. Choices have been few, owing to limited capabilities, resources, and opportunities. Hence, the dominant concern for modern scholars, policy makers, and practitioners is to empower self-generation by overcoming deficits, growing endowments, and providing opportunities to learn and develop—the ultimate goal being to expand well-being and the prospects for human flourishing (Sen, 2017b). In the contemporary world, digitalization raises additional concerns, for it promises unprecedented self-generative capabilities and potentialities. New opportunities and risks emerge for digitally augmented self-generation.

## **Self-Generative Risks**

First, some people will retreat from or actively resist the digital augmentation of self-generation. These people might be deeply committed to priors about well-being and what counts as a good life, often grounded in cultural traditions. For these people, new versions of the self and alternative narratives will be threatening, seen as a source of disturbance and deviance. Hence, these people will fight back and resist, or flee from digitalization to established life choices. We already see evidence of this among groups which are dedicated to traditional values and norms. Such resistance is not inherently mistaken or destructive, because it can reflect sincerely held values and commitments which are genuinely at risk. However, to retreat or resist means that these groups will not enjoy the potential benefits of digitally augmented self-generation.

Second, poor supervision could also lead to a sense of blockage and existential foundering. Many people are not prepared for a rapid increase in self-generative capabilities and potentialities. Older generations and cultures, especially, are accustomed to slow self-generative cycles, stretching across autobiographical life phases (Conway & Pleydell-Pearce, 2000). At the same time, they may have deeply encoded assumptions about well-being and what counts as a good life. Therefore, they may use digitally augmented capabilities to reinforce myopic priors about the self and world. But such outcomes will be deeply ironic. These agents will enjoy greater self-generative potentiality yet fail to exploit and convert these opportunities. In this sense, augmented self-generation would lead to existential foundering: agents will have more plentiful, varied self-generative options, but they will be incapable of preferential choice. Instead of flourishing, they will feel blocked and founder.

Third, there is an equal risk of existential floating if people overly relax or abandon prior commitments. To begin with, human beings are naturally sociable and docile and often refer to others when making life and career choices. If they are overly docile to artificial influence, however, these systems might take control. This leads to another ironic outcome. Digital augmentation will enhance self-generative potentiality, but may ultimately reduce freedom, if it encourages docility and dependence. Even worse, these effects could be deliberately engineered by powerful actors, as a means of social domination. Evidence suggests that some are attempting this already (Helbing et al., 2019). They encourage and reward digital docility, while penalizing autonomy. In these ways, whether by default or design, augmented self-generation may result in existential floating. People would disengage from autonomous choice, and drift on a rising tide of perceived well-being. Many could also develop a false sense of self-efficacy. But in reality, the locus of self-generative control would shift, away from human and toward artificial sources (Stoycheff et al., 2018). Recognizing this risk, some psychologists are investigating ways to maintain agentic autonomy in digitalized contexts, through the development of self-regulatory skills, the deliberate avoidance of some digital influences, and boosting resilience against manipulation (Kozyreva et al., 2020). In fact, this research illustrates the positive supervision of digitally augmented self-generation.

## **Social and Behavioral Theories**

Agentic self-generation also plays a central role in numerous social and behavioral theories. For example, it has major implications for psychosocial development and biographical decision-making (Bandura, 2006). Collective self-generation is equally important for institutions and organizations. Indeed, collectives can be defined in terms of their self-generative characteristics: goal oriented and purposive, with identities and aspirations, organizing to achieve goals and grow over time (Bandura, 2001; Scott & Davis, 2007). Self-generation is also widely viewed as a necessary precondition of human freedom and flourishing, and increasingly for employee engagement (Sen, 2000). However, as already noted, most prior research has focused on limitations and obstacles to freedom and flourishing. Moving forward, theories will also need to accommodate the digitalized expansion of capabilities and potentialities. The novel problem is having too much, rather than too little. Fresh problematics thus emerge: how to integrate artificial agents into human self-generation, without falling into retreat, resistance, blockage, or surrender; and how to enhance human flourishing through digital augmentation while preserving core human values and commitments.

Furthermore, most self-generative choices reflect cultural narratives of meaning and value. As Nelson Goodman (1978) explains, communities join together in cultural worldmaking, and people's lives unfold within these worlds. In his conception, worldmaking captures the essence of cultural community, including its categories of perceived reality, value, truth, and beauty, which are typically expressed in language, faith, art, and scholarship. Goodman further explains that worldmaking "always starts from worlds already at hand; the making is a remaking" (ibid., p. 6). Like other expressions of self-generation, cultural worldmaking inherits and recomposes. Indeed, he writes that worldmaking emerges through "composition and decomposition and weighting of wholes and kinds" (ibid., p. 14). In premodern times, such worldmaking occurred through shared myth and storytelling, and agentic transformation in this world was a heroic exception. During modernity, by contrast, agentic self-transformation became possible for everyone, thanks to education, enlightened reasoning, social progress, and scientific discovery.

Extending this line of thought, digitally augmented worldmaking promises increasingly dynamic self-generation. Indeed, newly made worlds are proliferating, in online communities and networks, which augment cultural systems of value and meaning. Some are enriching, although many are not. In fact, poorly supervised worldmaking leads to cultural imbalance and distortion. It produces what Goodman calls "conflicting versions of worlds in the making," which undermine cultural coherence. And to be sure, digitalization is no cultural panacea. In fact, it is possible that digitalization—seen in the context of ongoing industrialization and environmental exploitation—will perpetuate unsustainable practices and degrade collective well-being. In these respects, digital augmentation is part of a larger challenge: how to enhance shared meaning and value through collective self-generation, making worlds which are fit, fair, and sustainable for all?

As partners in augmented agency, therefore, human agents can hope for a world which offers better life choices, richer communal narratives, and new cultural experiences. However, to make such a world, human and artificial agents must learn to appreciate and choose maximizing options. They must also develop strong ambidextrous, self-generative capabilities. In the past, this type of self-generation was reserved for the gods and superhuman heroes (see Nietzsche, 1966). Within a highly digitalized world, however, augmented self-generation will empower all persons and communities, at least potentially, to transcend predetermined life choices and fixed narratives, and travel more open, fulfilling paths.

Human self-generation therefore strives to transcend limits, but almost never succeeds. Trade-offs are common: between the desire for freedom and effective control; between being and doing in the present, and future becoming; between individual autonomy and collective solidarity; and between the risk of loss and hope for gain. Against this backdrop, digital augmentation is transforming self-generative capabilities and potentialities. Historic patterns of limitation and deprivation are complemented by new sources of empowerment and possibility. Digitally augmented ambidextrous capabilities are now feasible for all. But this gives rise to novel dilemmas. On the positive side, if augmented self-generation is well supervised, the outcomes will be liberating and enriching. Human agents will enjoy unprecedented self-generative potentiality on a global scale. On the negative side, however, if augmented self-generation is poorly supervised, it could reduce the prospects for human flourishing. People might retreat, resist, feel blocked, or surrender. They could flee augmented self-generation, by fighting back, foundering, or floating, rather than flourishing.

## **References**


Conway, M. A., & Pleydell-Pearce, C. W. (2000). The construction of autobiographical memories in the self-memory system. *Psychological Review, 107*(2), 261–288.

Forster, E. M. (1928). The machine stops. 1909. *Collected short stories*, 109–146.

Goodman, N. (1978). *Ways of worldmaking*. Hackett Publishing.



# **10**

# **Toward a Science of Augmented Agency**

Previous chapters identify major dilemmas for digitally augmented humanity, which this book defines in terms of close collaboration between human and artificial agents. These dilemmas are largely owing to differences between human and artificial capabilities and potentialities, and the resulting tensions in their collaboration. Working together, human and artificial agents must learn to manage these challenges. Joint supervision will be critical. Otherwise, augmented agents will tend toward divergent or convergent, dysfunctional form and function. As preceding chapters also explain, these dilemmas give rise to the following novel problematics: how can human beings collaborate closely with artificial agents, while remaining genuinely autonomous in reasoning, belief, and choice; relatedly, how can humans integrate digital augmentation into their subjective and intersubjective lives, while preserving their identities, commitments, and psychosocial coherence; how can digitally augmented institutions and organizations, conceived as collective agents, fully exploit artificial capabilities, while avoiding extremes of digitalized docility, dependence, and determinism; how can humanity ensure fair access to the benefits of digital augmentation, and not allow them to perpetuate systemic discrimination, deprivation, and injustice; and finally, the most novel and controversial challenge: how will human and artificial agents learn to understand, trust, and respect each other, despite their different capabilities and potentialities?

In fact, comparable challenges have arisen before, albeit in less advanced technological contexts. During each historical period of civilized humanity, there have been new agentic forms, functions, and associated challenges of supervision. In parallel, procedures and institutions have evolved to exploit and govern these transitions. For example, the premodern world produced artisan guilds and councils, while modernity created institutions to regulate markets, industries, professions, and more. However, such historic transitions are problematic too, because major socioeconomic change disrupts established orders and exposes the limitations of existing institutions. Once again, modernity is illustrative. Throughout the modern industrial period, different stakeholder groups have struggled over issues of governance, the distribution of resources, access to opportunities, and the rights and duties of employees versus owners. This has often led to major social and political disruption, and sometimes revolution.

Mass digitalization continues this historic trend, at unprecedented speed and scale. In fact, as earlier chapters explain, digitalization signals a major shift in human experience and organization. Digital technologies reach far more deeply into all aspects of agentic form and function, compared to earlier periods. This prompts some to predict a type of singularity, in which artificial intelligence equals and perhaps surpasses the human, and both then fuse to become effectively one (Eden et al., 2015). Preceding chapters list some of the enabling technologies, including artificial empathy and personality, and brain-machine engineering. In any case, whether singularity happens or not, human and artificial agents are ready subjects for a science of digitally augmented agency.

# **10.1 Science of Augmented Agency**

This science is clearly needed. Augmented agents must know how to supervise their increasingly close collaboration, maintaining appropriate levels of convergence and divergence, and thus maximizing metamodel fit. A science of augmented agency will support many of the required tools and techniques. Without such capabilities, however, poor supervision will result in dysfunctional patterns of ambimodality, ambiopia, ambiactivity, and ambidexterity. Moreover, as this terminology demonstrates, the conceptual architecture of modern human science fails to capture important features of digital augmentation. I therefore import a few concepts from other fields. Table 10.1 lists these conceptual innovations.

The first new concept is ambimodal, which comes from chemistry and refers to transition or transformation processes which lead to multiple outcome states. In this book, the term refers to single processes which generate different modal characteristics, and more specifically, to agentic forms and functions which combine artificial compression with human layering. The second conceptual innovation is hyperopia, which is borrowed from ophthalmology, and refers to farsighted vision, the opposite of myopia. In this book, hyperopia refers to farsighted problem sampling


**Table 10.1** New concepts and terms

and solution search. In fact, the concept is already applied in some social and behavioral sciences. Third is the concept of ambiopia, which refers to double vision in ophthalmology, when one eye is myopic and the other is hyperopic. I use the term to describe processes which combine myopic and hyperopic sampling and search, especially in problem-solving and cognitive empathizing. Fourth, the concept of empathicing is original and refers to satisficing in solving problems of other minds, rather than seeking to optimize in cognitive empathizing. Fifth, the concept ambiactive is borrowed from biology and refers to processes which simultaneously suppress and stimulate the same type of effect. Here the term refers to processes which both suppress and stimulate levels of complexity, sensitivity to variance, and processing rates. For example, an ambiactive system of augmented agency could suppress human sensitivity and processing rates, while also stimulating artificial hypersensitivity and hyperactive rates. This book also employs the established concept of ambidexterity, to describe the combination of different modes of human and artificial self-generation.

As noted previously, the prefix "ambi" is consistent, meaning "both" in Latin. It captures the fundamental combinatorics of augmented humanity, which integrates human and artificial agents. In fact, comparable concerns occur throughout Western thought. During the premodern period, for instance, agentic combinatorics focused on the relationship between human and divine beings. In modernity, by contrast, scholars investigate the combination of autonomous, reasoning persons within social collectives. Both periods emphasize different combinatorics, reflecting the stage of social and technological development at the time. In the period of digitalization, greater focus will be on human-machine interaction. Granted, such combinatorics are observed in every period of civilization, albeit involving lower levels of technological sophistication and capability. Human-machine processing has always exhibited divergent rates, ranges, and levels of complexity, combining fast and slow, near and far, simplification and complexity. However, contemporary digitalization massively expands such effects. The scale, scope, and speed of digital augmentation are transformative. In consequence, augmented humanity will be characterized by dynamic agentic combinatorics. That said, human spirituality and autonomous reason will continue to matter greatly, but in the context of increasingly augmented realities. The major risk will be that combining different human and artificial capabilities could result in distorted agentic forms and functions, which are either too divergent or convergent for a particular context.

Table 10.1 also includes another concept which is original and captures important novelties of the digitally augmented world, namely the concept of entrogenous, which refers to the systematic in-betweenness of digitalized mediators. Chapter 2 identifies three such mediators, which are central to augmented agency: intelligent sensory perception, performative action generation, and contextual learning. Together, they allow augmented agents to learn, compose, and recompose, in a dynamic fashion, updating form and function in real time. Importantly, these mediators are neither endogenous nor exogenous, relative to the boundaries they help to define. Rather, they are consistently in-between, processing potential form and function, and hence entrogenous. Recall that Fig. 2.4 illustrates this type of mediation. It depicts three levels and rates of processing and highlights the way in which human and artificial processes might diverge. The major driver of this effect is that artificial agents are inherently hyperopic, hyperactive, and hypersensitive, while humans are naturally myopic, relatively sluggish, and insensitive. Hence, artificial and human processes could easily diverge in terms of their ranges, rates, and levels of precision and complexity.

## **Dilemmas of Digital Augmentation**

Extreme divergence will manifest in numerous ways. This book exposes a number of critical manifestations. First, ambimodal distortion will create poorly integrated agentic forms and functions, which are overly compressed and layered at the same time. Second, ambiopic distortion will lead to problem-solving which is overly complex and simplified on different dimensions of problem representation and solution. Cognitive empathizing will be equally affected, when viewed as a type of complex problem-solving. Third, ambiactive distortion will produce dysfunctional self-regulation, evaluation of performance, and learning, in which relative human simplicity, sluggishness, and insensitivity diverge from artificial complexity, hypersensitivity, and hyperactivity. As a further consequence, ambiactive distortion heightens the risk of cognitive dissonance, extreme ambiguity, and ambivalence, especially regarding core beliefs and commitments, where commitment in this context is defined as being dedicated, feeling obligated and bound, to some value, belief, or pattern of action (Sen, 1985). Fourth, these distortions compound to produce divergent ambidextrous self-generation, in which augmented agents adopt poorly synchronized, conflicting modes of human and artificial self-generation. In summary, digital augmentation could either enhance or diminish agentic form and function. Table 10.2 summarizes the resulting dilemmas of ambimodality, ambiopia, ambiactivity, and ambidexterity, plus the human and artificial tendencies for each, their potential risks and impact.

To mitigate these risks and maximize the opportunities of digitalization, human and artificial agents must therefore strengthen the supervision of their combinatorics. More specifically, when joined in augmented agency, human and artificial agents must be sensitive to contextual variance and ecological dynamics, while managing their complementary strengths and weaknesses. In doing so, they will regularly compose and recompose metamodels and methods. The primary goal will be to achieve and maintain maximal fit on every dimension. But to achieve this type of supervision, we need to develop the science of augmented agency.

## **10.2 Hyperparameters of Future Science**

Humans are quintessentially agentic when they seek scientific understanding: purposive, forward looking, reflective, and self-directed. This includes the effort to interpret and explain their own patterns of thought and action. In this respect, civilized humans have always been their own object of interpretation and study. The agentic self has been a problem for the self, even in premodern worlds of narrative myth. In like fashion, the scientific study of augmented agency will be a major domain of augmented, agentic activity, which leads to an important insight: the science of augmented agency will exemplify the phenomena examined in earlier chapters. This science will be, itself, an expression of digitally augmented agency and its dilemmas, just as scientific thought and method are subjects of study in modern human science.


**Table 10.2** Risks for digitally augmented agency

The science of augmented agency therefore faces the same challenges as other expressions of augmented agency. That is, issues arise regarding the specification of hyperparameters and metamodeling, plus the activation and upregulation, or deactivation and downregulation, of artificial and human processes. To begin with, metamodeling entails ontological hyperparameters, or the specification of fundamental categories of reality, both visible and hidden. Additional hyperparameters relate to epistemological properties, which specify logics and models of reasoning. Next, there are hyperparameters which define core activation and change mechanisms, including potential sensitivity to variance and cycle rates. The following sections discuss each type of hyperparameter in relation to the science of augmented agency.

# **Ontological Principles**

In the science of augmented agency, the fundamental categories of reality will transcend traditional conceptions of material nature and conscious mind. However, this does not imply the reduction of mind and consciousness to purely material cause. Rather, these categories are reconceived as higher order expressions of generative, augmented systems, which in turn result from complex neurophysiological, symbolic, and digital interactions. In this science, therefore, ontological commitments will be contextual, systematic, and rigorous. The resulting shift is comparable to earlier historical transitions. Just as the ancient concept of soul was demystified and naturalized by modernity, and human psyche then became a topic of science, so conscious mind will be digitally naturalized within the science of augmented agency (see Quine, 1995). In both cases, the shift is from anthropomorphic conceptions based on ordinary experience to a deeper understanding of reality which requires specialized techniques of observation and analysis. This also suggests that a new domain of enquiry may be required, focusing on the study of digitally augmented, agentic combinatorics (see Bandura, 2012; Latour, 2013). Neither the existing human sciences nor computer sciences adequately capture the forms and functions of augmented agency and mind. The recombination of prior fields is not enough. Radically new phenomena of this kind will require fresh concepts and frameworks.

In addition, the science of augmented agency will investigate novel forms of entrogenous mediation—previously defined as digitalized mediators of in-betweenness—which facilitate the dynamic composition and functioning of augmented agents. As noted earlier, this book has identified three such mechanisms: intelligent sensory perception, performative action generation, and contextual learning. It is important to stress, once again, that entrogeneity does not entail unfettered relativism or irregularity. Rather, the science of augmented agency will accommodate the dynamic generation of alternative categories and their boundaries. In this fashion, entrogenous mechanisms will set and reset system boundaries, but they are neither endogenous nor exogenous with respect to these boundaries. Notably, this mirrors the approach to agentic hybridity proposed in numerous behavioral and social sciences (e.g., Battilana et al., 2015; Seibel, 2015). And not by coincidence, hybridity often emerges in digitalized contexts.

## **Epistemological Principles**

Digital technologies also massively enhance intelligent processing capabilities. By leveraging these capabilities, augmented agents will gather and process information with far greater precision and speed. At least, such expansion is feasible, notwithstanding persistent human limitations. In these respects, augmented agents will be bounded and unbounded at the same time. This will occur because human agents retain significant degrees of boundedness, especially in everyday cognitive functioning. Yet at the same time, artificial agents are increasingly unbounded. In effect, augmented agents will exhibit functional ambimodality with respect to rationality, as distinct from the organizational ambimodality discussed in Chap. 3. That is, digital augmentation will combine two different modes of reasoning, thinking far and fast, as well as near and slow. The supervisory challenge, therefore, is to manage the potential divergence or convergence of simultaneously bounded and unbounded, ambimodal patterns of reasoning.

Satisficing then becomes more dynamic and complex, and arguably more important. Most notably, because satisficing both simplifies and maximizes, it helps to mitigate the risks of overprocessing. Satisficing will constrain overly hyperopic sampling and search, and overly hypersensitive and hyperactive responses to variance. Hence, in addition to satisficing because of limited capabilities, as Simon (1955) originally argued, augmented agents will also satisfice to restrain excessive capabilities. Put another way, digitally augmented agents will satisfice, not only because of limits, but to impose limits. They will choose to satisfice, even when ideal optimization is feasible, to avoid unnecessary processing. In fact, artificial systems do this already, when they limit their own processes to improve speed and efficiency. Augmented agents will do the same, choosing to forgo optimization for good reasons, just as humans already do (Gigerenzer & Gaissmaier, 2011). Hyperparameters will specify these epistemic features in metamodels of augmented science.

Another major epistemic shift is the extension of systematic reasoning to problem sampling and representation. This occurs because intelligent sensory perception will allow augmented agents to reason systematically about which problems to sample and how to represent them. In such a world, problems will emerge in an intelligent fashion, similar to the sampling and representation of problems in empirical science. By contrast, even in the recent past, the ordinary sampling and representation of problems were not viewed as reasoned activities. At most, they involve selective attention and observation (Ocasio, 2012). Intelligent sampling and representation consistently occur only in expert domains, such as experimental science. Even behavioral research rarely focuses on the cognitive-affective mechanisms of sampling and problem representation (Fiedler & Juslin, 2006). Similarly, behavioral research largely neglects normative satisficing. Rationality is applied to solution search, not to problem sampling and representation. Digital augmentation upends these assumptions and suggests a fusion of ecological realism and rationality.

Furthermore, digitalization supports the dynamic composition of metamodels of reasoning, using methods which can be described as "compositive" (see Latour, 2010). Such methods do not rely on predetermined models or axioms, nor do they rely on traditional descriptive and normative templates. Rather, compositive methods employ digitalized processes to develop customized metamodels which best fit the problem context. At the same time, compositive methods are systematic and rigorous, neither ad hoc nor idiosyncratic (e.g., Pappa et al., 2014; Wang et al., 2015). In fact, many digital systems already exhibit these capabilities. As noted in earlier chapters, advanced artificial agents are already compositive in this sense, and require minimal or no supervision. Evolutionary deep learning systems and generative adversarial networks (GANs) function in exactly this way (Shwartz-Ziv & Tishby, 2017). Via rapid inductive, abductive, and reinforcement learning, they process massive volumes of information, identifying hitherto undetectable patterns, to compose new explanatory methods and models without external guidance. In this fashion, generative metamodeling will translate the techniques of experimental computer science into all domains of augmented agency.

## **Mechanisms of Adaptation and Change**

Hyperparameters also specify core mechanisms and processing rates. During modernity, the human and natural sciences bifurcated in these respects. Within the biological sciences, the major change mechanisms are organic processes of variation and selection. In contrast, within many human sciences, conscious thought, will, and intention are seen as primary drivers of change. In consequence, modern scholarship often struggles to integrate bifurcated science. Scholars are unsure how to integrate biological processes of random variation, natural or ecological selection, and material cause, with conscious processes of intentional variation, preferential choice, and intelligent cause. Polarizing debates therefore persist about materialism versus idealism, the distinction of mind from body, reductionism versus holism, and positivist versus interpretive explanation. These bifurcations also partly explain the poor integration of ecological and behavioral mechanisms, especially during modern industrialization (Latour, 2017).

Herbert Simon (2000) had foresight on these issues as well. At the dawn of the twenty-first century, in the last year of his life, he proposed three priorities for digitally augmenting humanity. They were his minimal requirements for "designing a sustainable acceptable world." In effect, he described a program of global recomposition, or digitally augmented worldmaking. First, he argued that humanity must learn to live at peace with all of nature, in a sustainable, collaborative way, and overcome the "false pride" of being separate from, and superior to, the rest of the natural world. Second, he argued that humanity must share goods and wealth fairly and productively, so that all persons will enjoy comparable benefits and opportunities. Third, to achieve such fairness, he said humanity must eliminate the divisions which arise from cultural and social antipathy and stop viewing the world in terms of "we versus them." In fact, Simon was rejecting the classic bifurcations of modernity: that mind and consciousness are distinct from nature, that the autonomous self stands apart from the other, and that empathy is inevitably limited and local.

Simon was correct, then and now. Just as he predicted, digitalization problematizes the conceptual architecture of modernity. Augmented agents will better connect material nature and conscious mind. Likewise, the science of augmented agency will synthesize the study of human agency with the natural and computer sciences. Research methods will be contextual and compositive, adapting to maximize and maintain metamodel fit. Entrogenous mediation will be critical, and many polarities will thus resolve, as we incorporate intelligent sensory perception, performative action generation, and contextual learning. In a digitally augmented world, moreover, change will occur through generative variation and intelligent adaptation, not merely through random mutation and natural or ecological selection. Agentic evolution will be experimental and intelligent, similar to Gregor Mendel's cultivation of new plant varieties through guided variation and selection (Levinthal, 2021). It is also likely that in future, advanced digital systems will fully integrate with the biological, geophysical world. When this occurs, digitalization will augment organic variation and selection as well. Augmented agency could become a truly positive force in the natural world, enabling self-generation and renewal, rather than destruction and exploitation. All this is possible, assuming a future science of augmented agency and appropriate supervision of its application. The overall effect would be transformative.

Table 10.3 summarizes the paradigmatic shift just described. It shows three historical periods—premodern, modern, and digitalization—their major ontological and epistemological commitments, plus the dominant mechanisms of change and scientific methods. Most notably, the table summarizes the emerging shift toward generative, augmented pluralism and compositive methods. It is also important to note that all three systems may continue adding value to agentic experience and understanding, assuming appropriate supervision and application.

# **10.3 Domains of Augmented Science**

While the future unfolds, contemporary human science still grapples with the dilemmas of modernity. Numerous dialectics accompany these concerns: explaining the interaction of nature and nurture; how material cause relates to meaning and intention; developing autonomous personality as well as sociable collectivity; seeking order and continuity while embracing change (Giddens, 2013). Reflecting these dialectics, the human sciences divide into separate disciplines, most of which focus on different agentic modalities and functional domains. For example, psychology focuses on the study of mind and behavior within individuals, groups, and collectives. In contrast, sociology focuses on social life and collectivity and then examines the role of individuals in these contexts. Other human sciences, such as education and management studies, combine modalities in particular activity domains. Contemporary research also organizes around multidisciplinary, hybrid approaches to complex problems (Seibel, 2015; Skelcher & Smith, 2015). In this respect, contemporary human science recognizes the increasing integration and interdependency of agentic modalities and contexts. Ecological and environmental factors receive increasing attention as well.

**Table 10.3** Summary of scientific metamodels

Digitalization accelerates these trends. It also generates new questions for human science, especially regarding the dilemmas of augmented combinatorics, or how to combine human and artificial agents. To investigate these questions, the future science of augmented agency will organize around complex problems too. It will be less divided into siloed disciplines, and less oriented toward different modalities (Latour, 2011). Disciplinary categories and boundaries will be more flexible and fluid. In fact, recent scholarship is moving in this direction already, illustrated by ecological theories of social organization, and neurocognitive models of personality and culture (Chimirri & Schraube, 2019; Kitayama & Salvador, 2017). Through this type of research, scholars develop multidisciplinary theories of agentic form and function (Fiedler, 2017). This will be the norm in the science of digitally augmented agency.

Furthermore, the science of augmented agency will treat modality itself as generative and contextual. In fact, some scholars already view agentic modality as epiphenomenal to performance, meaning it is mediated by action in context, rather than expressing autonomous form (Hwang & Colyvas, 2021; Pentland et al., 2012). Collective hybridity emerges in this fashion too. Postmodern thinkers go even further. They view autonomous agency as chimerical, a device which dissolves in the deconstruction of text and context. Many of these thinkers take inspiration from Freud's argument that conscious ego reflects the hidden subconscious (Tauber, 2013). However, my proposals take a markedly different approach. They anticipate a systematic, empirical science of emergent phenomena, with clearly defined metamodels and mechanisms.

To illustrate such a science, consider the needs of autonomous mobility systems, in which human and artificial agents collaborate as augmented agents. These systems will digitalize and integrate every level and modality of agency, both organizational and functional. Smart cities will digitalize the transport infrastructure, to create the necessary environment for immediate contextual learning. Vehicle manufacturers will incorporate intelligent sensory perception which supports fully augmented problem representation and feedforward response. Advanced, empathic artificial agents will be embedded throughout, enabling performative action generation in real time. And network management agents will supervise and govern the entire system, to ensure efficiency, safety, sustainability, and social inclusion. In summary, the augmented science of autonomous mobility systems will be generative, compositive, and integrate multiple disciplines, technologies, and human factors.

## **Science of Consciousness**

These developments impact the role of ordinary consciousness in the science of augmented agency. Many disciplines research such questions, including the philosophy of science, cognitive psychology, and the human sciences more broadly (Metcalfe & Schwartz, 2016). However, the nature and role of consciousness are not yet fully understood. That said, ample evidence shows that ordinary consciousness is an imperfect means of observation in rigorous, scientific pursuits. Unassisted, it often leads to anthropomorphic assumptions which are inherently myopic and misleading. For this reason, ordinary consciousness will have a different role in the science of augmented agency. It will be less a means of access to fundamental reality and truth, and more a source of humanistic reference for augmented agency, which is an equally vital role. Ordinary consciousness will remain important, therefore, but for different reasons, compared to the past.

However, as the history of science shows, humanity always struggles to reset the role of consciousness as a source of reality and truth. Over time, the trend is to expose anthropomorphic assumptions and demote the status of consciousness as such. For example, as noted earlier in this chapter, the ancient soul was naturalized to become a topic of study for modern psychological science. Such shifts often incite trouble, because they threaten embedded narratives and identities. Similar shifts were primary sources of opposition to Copernican cosmology, Galilean mechanics, and Darwin's theory of evolution. Intuitions of the world run deep and are resilient. Natural scientists acknowledged this problem long ago and worked hard to liberate their thinking. They largely succeeded. Every educated person now knows that it takes sophisticated technological means to observe the deeper realities of the physical world.

In the human sciences, by contrast, there are ongoing debates. Some maintain that ordinary consciousness does provide access to the fundamental realities of human form and function. In branches of linguistics and psychology, for example, some rely on self-reports to illuminate core processes of language acquisition and reasoning. By implication, they believe that subjective mental states can be treated as primitive and are not decomposable. Others disagree and argue that ordinary consciousness is not adequate for such purposes (e.g., Wilson & Dunn, 2004). They contend that, just as natural science demoted ordinary consciousness and turned to technological tools and formal methods, the human sciences must do the same. Granted, the humanities will continue to treat mind and consciousness as fundamental. These disciplines are concerned with the interpretation of hermeneutic and cultural phenomena. But any attempt at an empirical science of human agency—and especially digitally augmented agency—must adopt the tools and techniques of neurophysiology, cognitive psychology, computer science, and the like. How phenomena are transformed into consciousness and subjective mental states then remains a question for ongoing research (Sohn, 2019). In like fashion, the future science of augmented agency will investigate the role of consciousness in humanistic supervision, and how best to regulate its influence (see Lovelock, 2019).

## **Generative Commitments**

This also points toward a science of generative commitments. That is, augmented agents will have the capability to relax, update, and recompose their commitments, which constitutes another significant departure from traditional assumptions. Throughout most of history, cultures have assumed that core commitments and reference criteria are fixed, often inviolable. Deviation has prompted sanction and conflict. It still does, in many places. More recently, however, as humanity becomes globally connected and mobile, commitments are more pluralistic and embracing, even if such pluralism sometimes triggers anxiety and antipathy, which is not surprising, given the deep role of shared commitments in culture and identity (Appiah, 2010). To be sure, many traditional commitments warrant preservation. If supervision is flawed, important aspects of human experience could erode, including shared commitments about reality, truth, beauty, and justice. Noting these risks, the science of augmented agency will need to resolve how to supervise generative commitments.

Some already research the closely related topic of holistic value. New theories of economics and management, for example, incorporate diverse concepts of human welfare and socioeconomic value creation (e.g., Raworth, 2017; Sachs et al., 2019; Stiglitz et al., 2009). Similarly, in contemporary theories of agency itself, scholars are expanding their conception of human flourishing and psychosocial well-being to accommodate richer, alternative commitments (Seligman et al., 2013). Some psychologists are exploring new theories of virtue, referencing classic thinking about holistic well-being (Fowers et al., 2021). In addition, as noted earlier, agentic hybridity is increasingly recognized in many fields. The future science of generative commitments can build on these contributions. In fact, this prospective inquiry harks back to Aristotle's (1980) concept of eudaimonia, about living a good life with practical wisdom. From this perspective, a science of generative commitments will be a science of eudaimonics. It will study how to compose and live a complete, flourishing life in a digitalized world. The inquiry would encompass all value commitments, complementing the existing study of specific types of value in economics, ethics, and aesthetics (see Di Fabio & Palazzeschi, 2015).

For example, consider digitally augmented health care. In these contexts, emerging priorities are overall health, well-being, and quality of life, or in other words, eudaimonic concerns. Systems will be designed and evaluated based on holistic human outcomes, and not simply on crude metrics of service delivery. Once maximized in this way, health care will be value based, personal, precise, and fully integrated into social life. Relevant technologies will include wearable and implantable devices. Importantly, this kind of system will require generative commitments, developing and adapting values and goals for both individuals and collectives. The responsible, augmented agents will recognize and/or generate a range of cultural, social, and personal commitments and preferences. There will also be empathic artificial agents enabling performative action generation in real time. Generative commitments will thus guide the design and delivery of services, while network management agents will supervise and govern the entire system. In this fashion, augmented agents in health care will generate commitments.

As Aristotle further understood, shared commitments underpin the good governance of communal life (Nussbaum, 2000). In ancient Athens, this was centered in the polis and its rituals, and celebrated by dramatic chorus. In the modern period, by contrast, good governance calls for reasoned public debate, the fair determination of collective choice, and participatory decision-making. In the period of digitalization, the governance of collective agency will be transformed as well. For example, civic participation could become more inclusive and globally integrated. As in the past, therefore, a new period of agentic experience will require a fresh approach toward collective governance and politics. And if history is any guide, we should expect to see more strife and struggle in this regard, as the impact of digitalization continues to grow. Institutional systems of power and influence are never easy to change, and digitalized societies will be no different. We see evidence of such conflict already, as opposing political and cultural groups struggle to control online social networks.

## **Science and History**

Throughout this book, history is a guide. The argument consistently refers to three major periods of civilized humanity and agency: premodernity, modernity, and the contemporary period of digitalization. Among other key features, each period is characterized by stages of technological assistance: from the primitive technologies of premodernity, to the mechanical and analogue technologies of modernity, to the digital and neural technologies of the contemporary period. Hence, my argument also speaks to the history of human science, conceived as the study of human self-understanding over time. In fact, the agentic metamodels presented in this book constitute a broad framework for reconceiving the history of human science.

Some historians take an equally broad perspective on the past. This is true of the Annales School, founded in the early twentieth century by Lucien Febvre and Marc Bloch (2014). For them, history is a long narrative of unfolding worlds of lived experience and mentality. Politics and princes are then expressions of their periods, not the primary forces of history. Thomas Kuhn's (1970) work on paradigms of scientific investigation and knowledge is equally broad and long term. In fact, his analysis of paradigms could be restated in terms of metamodels and their hyperparameters. Each successive paradigm exhibits major shifts in core ontology, epistemology, and mechanisms of knowledge generation and diffusion. In many ways, Kuhn's view of the past aligns with the historical perspective of this book. Both identify long periods and general frameworks, although my argument articulates alternative processes, mechanisms, and metamodels and tries to bring fresh clarity and organization to this narrative.

More practically, digitalization accelerates historical time. During premodernity, rates of change were slow and often imperceptible. Societies were relatively stable and evolved slowly. For this reason, the premodern concept of historical time was expressed in legend and myth, rather than narratives of social and political change. Then, during modernity, history accelerated. Indeed, for modern societies and persons, historical change is a central feature of communal and autobiographical narrative, not some distant horizon or myth. For many indigenous cultures, however, the modern compression of time has been a source of cultural distress and alienation. They struggle to maintain traditional narratives in the face of imperialistic and industrial forces. In a digitalized world, history accelerates yet again. Now all humanity will share the indigenous struggle to maintain cultural narratives. In these respects, augmented humanity can look to indigenous peoples for lessons about cultural survival in the face of overwhelming social and technological change (Hogan & Singh, 2018).

For without doubt, in a digitalized world, dynamic change will be constant and ubiquitous. This will not be the end of history, by any means, but it does imply significant acceleration. Historical transformation will occur within generations and seasons, not only across the life span. Viewed positively, this will enable a new type of self-generativity, empowering augmented humanity to make and remake the world, while living within it (Latour, 2013). Augmented humanity will move "off the edge of history," as Anthony Giddens (2015) puts it, by compressing and transcending the classic parameters of historical time. Change will be discontinuous, and the past will explain less and less about the future. But what comes next is not yet assured. Moving off the edge of history can be perilous or liberating. Regarding peril, some people and communities might lose their bearings or surrender to artificial control. In terms of liberation, a new period of self-generative freedom and flourishing is possible, assuming humanity meets the supervisory challenge of digital augmentation.

## **Research Methods**

Additional consequences follow for research methods. In standard approaches, researchers in the human sciences gather qualitative data to support descriptive, interpretive models of human experience and behavior, and quantitative data to support calculative, causal models of such phenomena. The former methods focus on rich, holistic description, narratives, and sense-making, hoping to interpret meaning and intention, while the latter methods seek measurable data and discrete mechanisms, to explain causation. Multiple and mixed methods blend these approaches. Scholars debate which approach is more reliable and enlightening: rich, holistic descriptions of experience and the interpretation of meaning, measurable mechanisms of variation in causal explanation, or some combination of the two (Creswell, 2003). Of course, this takes us back to Herbert Simon (1979) again, and the dilemmas of simplification in the modeling of human thought and behavior.

In parallel, scholars debate ontological and epistemological priorities. On the one hand, those who privilege qualitative methods and interpretation typically argue that holistic description, consciousness, and meaning take priority and cannot be reduced to mechanistic cause. On the other hand, those who privilege quantitative methods and causation argue that functional mechanisms and assisted observation take priority, and they reject any reliance on subjective meaning and interpretation. Not surprisingly, many regard qualitative and quantitative methods as deeply incommensurable. That said, a significant research community now advocates for blended, mixed, and multiple methods (Denzin, 2010).

In a period of digitalization, these distinctions will blur even further and faster. For example, it is already possible to achieve machine-based outcomes which were previously deemed impossible, such as automated pattern recognition, associative computation, artificial empathy, intuition, and creativity (Choudhury et al., 2020; Varshney et al., 2015). Quite simply, digital systems are replicating many of the more complex, holistic functions of human cognition. As noted in earlier chapters, within the foreseeable future, there will be no detectable difference between human and artificial agents in these domains, although whether artificial agents should be classified as truly sentient and conscious is another question. Nevertheless, in consequence of these developments, it is feasible to compose blended research methods at massive scale. Different tools and techniques will be combined and recombined to match problem contexts. Studies will apply quantitative techniques to the interpretation of meaning, including self-narratives and sense-making, while also scaling qualitative techniques to predict complex patterns of thought and behavior.
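The idea of applying quantitative techniques to qualitative material can be made concrete with a deliberately toy sketch, which is hypothetical and not drawn from the book: a lexicon-based valence score applied to short self-narratives. The word lists, function name, and example narratives are all illustrative assumptions; real blended methods would rely on trained language models rather than a hand-built lexicon.

```python
# Toy sketch (hypothetical): a quantitative technique applied to
# qualitative self-narratives, as one minimal instance of blended methods.
from collections import Counter

POSITIVE = {"flourish", "trust", "hope", "connect"}
NEGATIVE = {"fear", "alienation", "distress", "conflict"}

def narrative_score(text: str) -> float:
    """Crude valence score: (positive - negative) hits over total tokens."""
    tokens = [t.strip(".,;").lower() for t in text.split()]
    counts = Counter(tokens)
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    return (pos - neg) / max(len(tokens), 1)

narratives = [
    "We connect and trust our neighbours; there is hope.",
    "Distress and alienation follow constant conflict.",
]
scores = [narrative_score(n) for n in narratives]
print(scores[0] > 0 and scores[1] < 0)  # True: valence separates the narratives
```

Even this crude sketch shows the direction of travel: the interpretive object (a self-narrative) becomes the input to a measurable, scalable procedure, while the choice of lexicon remains an interpretive, context-sensitive commitment.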

Using such compositive methods, augmented science will customize different techniques of sampling and search to the phenomena and questions of interest. Here again, entrogenous mediators will play a central role, updating metamodels and methods in real time. Therefore, just as the metamodels of augmented science will be contextual and generative, so will be the methods used to gather, interpret, and analyze information. Methods will be composed to fit the problem space. They will be compositive, as Hayek (1952) originally proposed, not simply qualitative, quantitative, or mixed, in the traditional sense. In fact, advanced forms of artificial intelligence and machine learning already do this (e.g., Mehta et al., 2019). Some social scientists do as well (e.g., Latour, 2011).

## **Future Prospect**

James March (2006), the great scholar of organizations, argues that it requires courage and positive deviance to embrace the ambiguity and ambivalence of exploratory thought. It also requires patience and persistence, to see whether fruits ripen or not. And it should, given the need for rigor and replication. However, the science of augmented agency calls for extra effort and speed, in these respects. Digitalization is rapidly infusing agentic domains, bringing unprecedented gains in capability and potentiality. In consequence, it problematizes the traditional assumptions of modernity, and presents new and urgent challenges. Most particularly, the combinatorics of digitally augmented humanity are transforming and confronting. Human agents will likely remain relatively myopic, sluggish, layered, and insensitive to variance, while artificial agents will be increasingly farsighted, fast, compressed, and hypersensitive. As both collaborate more closely, they risk amplifying the tendencies of the other, leading to internal divergence or convergence and dysfunction.

Novel problematics and dilemmas emerge. Inadequate supervision could produce the following dysfunctions: highly ambimodal systems, resulting in incoherent agentic form and function; highly ambiopic problem solving and cognitive empathicing, skewing judgments of the world and other minds; and highly ambiactive self-regulation, evaluation of performance, and learning, risking incoherence and extreme ambiguity and ambivalence; all contributing to dysfunctional patterns of ambidextrous, human and artificial self-generation. To mitigate these risks, human and artificial agents must develop the capability for collaborative supervision grounded in mutual understanding, trust, and respect. Achieving all this will be contingent on the development of a science of augmented agency. The core features of this science will include the following: digitally augmented mind will be treated as a fundamental category of reality; the science will employ contextually sensitive, compositive methods; its metamodels will be highly generative, rather than replicative or slowly adaptive; problem sampling and representation will be intelligent and reasoned, complementing ecological rationality; augmented agency will rely deeply on the entrogenous mediation of intelligent sensory perception, performative action generation, and contextual learning; and ordinary consciousness and commitments will play important roles in humanizing the science, rather than serving as sources of fundamental insight about the world itself.

Granted, the exact shape of this future science is not yet clear. Much of the current chapter—indeed, this book as a whole—is therefore prospective. It anticipates the future, grounded in the best knowledge currently available, while acknowledging that its proposals will require further elaboration and testing. Nor is this book a comprehensive treatment of the phenomena. Rather, it takes steps toward a science of augmented agency. But the process remains emergent. The trajectory of digital augmentation could change, as the natural, human, and virtual worlds continue evolving, interacting, and often conflicting. That said, we need to move forward. Prospective theorizing helps, by shedding light on unfamiliar territory. The history of science also teaches that radically new phenomena often require fresh conceptual architecture. Existing frameworks rarely suffice, and waiting for certainty and normality is unlikely to succeed. Digitalization is too novel and dynamic. We could wait in vain, while the world moves on. This would be unproductive and arguably negligent, given the accelerating impact of digitalization. The augmentation of humanity has clearly begun. Its dilemmas are present and increasingly urgent. Science must respond with matching speed and purpose.

## **References**




# **Glossary of Defined Terms**


**Hypersensitive** Abnormally or extremely sensitive.

**Maximize** To partially rank order a set, and then choose a member which is no worse than any other member, given some evaluation criteria.

**Metaheuristic** Simplified means of selecting sets of related heuristics or models.

**Metamodel** The common features of a family or related set of models.

**Optimize** To fully rank order a set, and then choose the member which ranks above all others.
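The contrast between these two glossary entries can be illustrated with a minimal hypothetical sketch, not drawn from the book: optimizing presupposes a full ranking of the set, while maximizing only requires finding a member that no other member dominates under a partial order. The function names `optimize`, `maximize`, and `dominates`, and the two-criteria example, are illustrative assumptions.

```python
# Illustrative sketch (hypothetical) of the glossary's "optimize" vs. "maximize".

def optimize(items, score):
    """Fully rank the set by a single score and choose the top-ranked member."""
    return max(items, key=score)

def maximize(items, dominates):
    """Choose a member that no other member dominates, i.e. a maximal
    element of a partially ordered set, per the glossary's weaker criterion."""
    for candidate in items:
        if not any(dominates(other, candidate) for other in items):
            return candidate
    return None  # only possible if the set is empty

# Options scored on two criteria; one option dominates another only if it is
# at least as good on both criteria and strictly better on at least one.
options = [(3, 1), (1, 3), (2, 3)]

def dominates(a, b):
    return a != b and a[0] >= b[0] and a[1] >= b[1]

print(optimize(options, score=lambda o: o[0] + o[1]))  # (2, 3): highest total score
print(maximize(options, dominates))  # (3, 1): no other option dominates it
```

The point of the contrast is that `optimize` requires every pair of members to be comparable under one criterion, whereas `maximize` tolerates incomparable members, which is why it models choice under partial evaluation criteria.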

# **Index**

#### **A**

Adaptation cultural, 160 by design, 241 personality and, 15–18, 78–81 rates of, 12 science of augmented agency and, 279–280 technological innovation and, 12–13 Adaptive aspirations, 201, 205 expectations, 205 feedback, 231, 240 fitness, 13, 92, 204 learning, 236, 237 Affective computing, 11, 140, 144, 151, 161 Agency modal dilemmas of, 84–85 technology and, 29 types of, 6

*See also* Artificial agency; Augmented agency; Human agency Agentic functioning activation mechanisms of, 47–51 combinatorics of, 5, 272–273 complexity of, 31 (*see also* Discontinuous processing) divergence of, 30 modalities of, 31 (*see also* Ambimodality) ranges of, 31 (*see also* Ambiopia) rates of, 31 (*see also* Dyssynchronous processing) sensitivities of, 31 (*see also* Ambiactivity) upregulation and downregulation of, 78 Agentic metamodels digitalized generative, 45–48 modern adaptive, 44–45 premodern replicative, 40 problematics of, 57


Aggregation alternative explanation of, 80 of choice, 81 collective agents and, 76 mechanisms of, 93–95 Ambiactivity definition of, 175–176 evaluation of performance and, 209 learning and, 225–226, 230–232 potential benefits of, 233 self-regulation and, 175 Ambidexterity self-generation and, 253, 260 Ambiguity, 289 empathy and, 157 entrogeneity and, 51–52 of learning, 214, 226, 233 positional, 145 Ambimodality artificial compression and, 82 definition of, 85 functional, 277 (*see also* Bounded rationality) high ambimodality, 87–91 low ambimodality, 86–89 self-regulation and, 191–192 Ambiopia definition of, 117 (*see also* Hyperopia) highly ambiopic metamodels, 119 moderately ambiopic metamodels, 103–104 non-ambiopic metamodels, 123–125 problem solving and, 115–118 Ambivalence, 289 learning and, 233–234

Annales School, 286 Anthropomorphism, 29, 70, 283 Aristotle (384-322 BCE), 55, 227, 285 Artificial agency autonomy and, 10, 51 capabilities of, 45, 225, 230–232 compression of, 82 definition of, 6 different levels of, 9, 58, 63–65 empathic capabilities of, 126 metamodels of, 59 over-processing by, 112 Artificial empathy, 140 *See also* Affective computing Artificial intelligence biases and, 30 capabilities of, 23 compositive methods and, 24, 46 metamodels of, 58–61 mind-body problem and, 11, 68 risks of, 241 Artificial personality, 11, 48, 68, 140, 151, 232 Augmented agency definition of, 6 (*see also* Human-machine) dilemmas of, 65–66, 253–254, 273–274 humanization of, 29 metamodels of, 45–48 Autonomous vehicles collaborative supervision of, 9, 29, 46 evaluation of performance and, 214 learning and, 241 self-regulation and, 180, 181, 188 Autonomy augmented learning and, 242 collectivity and, 265 Enlightenment and, 75, 91, 228 evaluation of performance and, 220 modernity and, 44, 58, 139 risks for, 84, 89, 160, 193, 263 self-generation and, 247 self-regulation and, 169

#### **B**

Bandura, Albert digitalization of agency, 11, 173, 194 psychology of agency, 15, 169, 201, 228 Behavioral theories adaptive aspirations and, 205 of decision making, 69, 128–129 of economics, 106, 147 empathicing and, 144 future of, 129, 279–280 of organizations, 81, 93 Behavior-performance agentic metamodels and, 41–49 augmentation of, 21 evaluation of, 199–200 limited capabilities of, 19 personality and, 11 Bias artificial intelligence and, 10 augmented risk of, 25, 27, 49, 64, 234 behavioral theories and, 19, 104 confirmation, 233 learning and, 226

machine learning and, 108, 141–142 persistence of, 6, 141, 158, 160 reinforcement of, 65, 142 Bloch, Marc (1886-1944), 286 Bounded rationality collective agents and, 93 digitalization and, 21, 128–129 *See also* Satisficing Braudel, Fernand (1902-1985), 286 Bruner, Jerome (1915-2016), 143, 161, 228

**C** Capabilities digitalization of, 14 human-artificial divergence, 31, 56, 65 limitations of, 2, 4, 19–20, 103 Chomsky, Noam, 16, 21, 228 Clinical medicine augmentation of, 64 expert systems and, 21 self-generation and, 251 self-regulation and, 172 Cognitive-affective processing agentic metamodels and, 41–49 augmentation of, 21 limited capabilities of, 19 personality and, 11 Cognitive empathicing collective mind and, 147 definition of, 146 (*see also* Satisficing) heuristics and, 150 potential divergence of, 151–152, 158–159

Cognitive empathy ambiopic, 141–142 digitalization of, 151–152 justice and, 148 limited capabilities of, 141–142 mentalization and, 144–146 metamodels of, 149–150 modernity and, 139–140 Collective agents ambimodality of, 192 culture and, 131 digitalization and, 14 empathy and trust, 147 evaluation of performance and, 200, 204–205 historical models of, 40, 42, 178 learning and, 230–231 metamodels of, 86–89 origins of, 76–79 self-generation and, 247, 264 self-regulation and, 189 Collective choice aggregation of, 81, 94 augmented risks for, 219 behavioral theories of, 129 microeconomics and, 106 science of augmented agency and, 285 Collective mind augmented risks for, 143 cognitive empathicing and, 147 culture and, 12 empathy and, 145 Commitments definition of, 13 future science and, 284–286 importance of, 70, 127, 284 referential, 42, 44

Compositive methods definition of, 23–24 science of augmented agency and, 278, 287–289 self-generation and, 258 Consciousness limits of, 23, 29, 106, 145 role of, 28–29, 43 science of augmented agency and, 282–284 Construal Level Theory, 130 Contextual human sciences and, 15–18 performance evaluation criteria, 204 personality, 15 problem solving, 124 Contextual learning science of augmented agency and, 276, 290 self-regulation and, 191 Culture augmented humanity and, 4, 265 cognitive empathy and, 145, 150 collective agents and, 12, 18, 76, 131, 201 commitments and, 127, 284 indigenous peoples, 287 institutions and, 104 norms of, 79, 252 self-generation and, 247–248 self-regulation and, 182

#### **D**

Darwin, Charles (1809-1882), 43 Deep learning, 24, 218, 241

Descartes, René (1596-1650), 28, 139 Digital assistants, 5, 11, 140, 173 Digital divide, 12 Digitalization economic development and, 252 generative agentic metamodel and, 45–48 historical period of, 5, 10–12 period of, 45–47 problematics of, 269, 279 risks of, 6 Discontinuous processing evaluation of performance, 207 learning schemes, 232–234 self-regulatory schemes, 174 *See also* Ambiactivity Dyssynchronous processing learning rates, 232–234 performance evaluation rates, 208 self-regulatory rates, 174 *See also* Ambiactivity

#### **E**

Economics behavioral theories of, 129 classical theories of, 105, 146 cognitive empathy and, 140 eudaimonia and, 69 evaluation of performance and, 203 flourishing and, 107 future science and, 285 of welfare, 81 Engels, Friedrich (1820-1895), 77 Entrogenous mediation ambiactive learning and, 232–234 ambiactive self-regulation and, 182–183

ambimodality and, 95 ambiopic problem solving and, 115–118 defnition of, 55 evaluation of performance and, 218 science of augmented agency and, 273, 288–289 three main types of, 50–51 Epistemology behavior and, 20 commitments, 67, 160 contextual, 17 of science of augmented agency, 275 Ethics cognitive empathy and, 94 commitments, 20, 67, 160 contextual, 17 future science and, 285 moral disengagement risk, 190–191 theories of justice, 148 Eudaimonia, 69 future science and, 285 Evaluation of performance agentic metamodels and, 41–49 augmentation of, 21 collectivity and, 218–219 criteria of, 205 diferent rates of, 207 digitalization of, 201–202, 206–209 downregulation of, 211–213 metamodels of, 210–211 personality and, 218–219 sensitivity to variance and, 206–207 upregulation of, 213–216

#### **F**

False consciousness, 149
Feedback
  adaptive, 26
  augmentation of, 21
  divergent rates of, 175
  evaluation of performance and, 199, 218
  insensitivity to, 240
  inter-cyclical, 45, 226
  learning updates and, 225, 227
  mechanisms of, 42
  self-regulation and, 173
Feedforward
  augmentation of, 21, 46, 50
  consciousness and, 49, 240
  entrogenous mediation and, 85, 130
  evaluation of performance and, 199, 218
  intra-cyclical, 26, 48
  learning rates and, 19, 225
  organizational learning and, 231
  self-regulation and, 173
Flourishing
  deprivation and, 251, 254
  economics of, 107
  feasibility of, 69
  future science and, 284
  self-generation and, 262
Freud, Sigmund (1856-1939), 77
Functional divergence
  in learning, 240–241
  science of augmented agency and, 273
  upregulation and downregulation, 94, 210
  *See also* Agentic functioning

#### **G**

Gardner, Howard, 228
Generative Adversarial Networks (GANs)
  hyperparameters of, 28
  science of augmented agency and, 278
  self-generation of, 24–26
  semi-supervised versions of, 232
Generativity
  artificial agents and, 10–11
  augmented metamodels, 45–49, 59–61
  of augmented learning, 235–239
  commitments and, 284–286
  entrogenous mediation and, 55–56
  human agency and, 247–249
  human sciences and, 15–17
Giddens, Anthony, 287
Goodman, Nelson (1906-1998), 264

#### **H**

Hayek, Friedrich (1899-1992), 24, 289
Hedonic, 68
Hegel, Georg Wilhelm Friedrich (1770-1831), 61
Heraclitus (?500 BCE), 55
Heuristics
  algorithmic, 25, 27, 112–114
  cognitive empathy and, 145
  cognitive limits and, 19
  evaluation of performance and, 206
  fast and frugal, 26, 104, 110
  representational, 105
  self-regulation and, 172

Higgins, E. Tory, 68, 170, 194
History
  acceleration of, 286–287
  contingency of, 3
  periods of, 2–4
  science of augmented agency and, 286–287
Hobbes, Thomas (1588-1679), 178
*Homo economicus*, 105
Human agency
  contextuality of, 15–18
  definition of, 6
  learning capabilities of, 226
  modalities of, 75–77
  modern problematics of, 14
  sluggishness of, 83
Human-machine
  collaboration, 6
  divergence of functioning, 49, 57
Hyperheuristics
  algorithms and, 116
  problem solving and, 112
Hyperopia
  cognitive empathy and, 142
  definition of, 114
  dilemmas of, 115–116
  problem solving and, 116–118
Hyperparameters
  of agentic modality, 76–77
  definition of, 21
  hyperheuristics and, 25
  of science of augmented agency, 275
  visibility of, 27

#### **I**

Idealization, 20, 67
*Imitation of Christ, The*, 40
In-betweenness
  augmented mediation of, 50–56 (*see also* Entrogenous mediation)
  Heraclitus and, 55
Institutions
  ambimodality and, 95
  fields, 22
  historical change and, 270
  risks of digitalization, 14
Intuition
  empathy and, 144, 157
  self-generation and, 261
  suppression of, 143, 158–160
Irrationality
  perception of, 143 (*see also* Cognitive empathicing)

#### **J**

James, William (1842-1910), 78

#### **K**

Kahneman, Daniel, 68
Kant, Immanuel (1724-1804), 14, 67
Kempis, Thomas à (1380-1471), 40
Kuhn, Thomas (1922-1996), 286

#### **L**

Latour, Bruno, 3, 24
Learning
  ambiactivity of, 232–234
  ambiguous and ambivalent, 237
  highly ambiactive metamodels, 235–238
  individuals and, 228–229
  limited capabilities of, 229
  lowly ambiactive metamodels, 235–236
  new problematics of, 242
  organizations and, 230–231
  superstitious, 234, 241
  theories of, 227–228
Levinthal, Daniel, 105
Lévi-Strauss, Claude (1908-2009), 205
Locke, John (1632-1704), 227
Luther, Martin (1483-1546), 40

#### **M**

Machine learning
  bias and, 27, 108, 122, 141
  supervision of, 27
March, James (1928-2018), 76, 110, 289
Marx, Karl (1818-1883), 4, 205
Maximizing
  artificial agents and, 59
  cognitive empathicing and, 150–151
  evaluation of performance and, 208–209
  metamodel fit, 24, 86, 118, 155, 239, 253
  in problem solving, 26, 110
  satisficing and, 104, 129
  science of augmented agency and, 270
  self-regulation and, 177
Mentalization
  limited capabilities of, 145–146
  practical maximizing of, 146–148
  problem solving and, 144–145
  *See also* Cognitive empathy
Metaheuristics, 26
  in problem solving, 112
Metamodels
  of agency, 22, 56 (*see also* Agentic metamodels)
  definition of, 21
  of learning, 235–238
  supervision of, 31, 62–65
Methods
  compositive, 23–24
  science of augmented agency and, 280, 287–289
  scientific, 106–107
Microeconomics
  aggregation and, 94
  classical theory of, 105
Mischel, Walter (1930-2018), 41, 79, 203, 231
Modernity
  adaptive metamodels, 44
  agency in, 42
  assumptions of, 28, 67
  conceptual architecture of, 21
  problematics of, 11, 20
  problem solving in, 103–104
Myopia
  behavioral theories and, 104
  cognitive empathy and, 141
  historical focus on, 114–115
  human tendency for, 49
  hyperopia and, 114, 116
  learning and, 226, 240
  persistence of, 6
  problem solving and, 19, 104
  reinforcement of, 117
  self-generation and, 250

#### **N**

Neural networks
  opacity of, 113
  self-generation of, 231
Neuro-economics, 161
Neuroscience
  consciousness and, 29, 68, 69
  learning and, 228
  mentalization and, 144
  mind-body problem and, 68
  science of augmented agency and, 284

#### **O**

Ontology
  commitments, 20, 67, 160
  contextual, 17
  of science of augmented agency, 275
Oppression
  augmented self-generation and, 263
  augmented self-regulation and, 193
  digitalization of, 12, 127
  evaluation of performance and, 219
Optimizing
  algorithms and, 25
  cognitive empathy and, 146–147
  empathy and justice, 148
  microeconomics and, 105–107
  problem solving and, 110
  satisficing and, 104–131, 277
  self-generation and, 256
  self-regulation and, 171–172, 178
Organizations
  design of, 128
  digitalization of, 85, 186
  evaluation of performance and, 200
  institutional theory and, 77
  problem of aggregation and, 93–95
  procedural routine and, 111
Other minds
  augmented misunderstanding of, 159 (*see also* Cognitive empathicing)
  problems of, 140

#### **P**

Personality
  agency and, 11, 18 (*see also* Artificial personality)
  cognitive-affective model of, 41
  habits and, 78
  trait models of, 16
Persons
  in context, 15–18, 41, 44, 77
  evaluation of performance and, 200
  self-regulation and, 170
Piaget, Jean (1896-1980), 229
Plato (?429-347 BCE), 55, 227
Positive psychology, 69
Potentialities
  human-artificial divergence, 56
  limitations of, 2
  self-generation and, 248–249
  self-generative risks, 262–264
Premodernity
  agency in, 9
  assumptions of, 28
  historical period of, 2, 39
  replicative metamodel, 40

Problematics
  definition of, 13
  of digitalization, 14, 58, 60, 269
  of modernity, 14, 43, 57
Problem representation, 129–130
  problem solving metamodels and, 111–112
  satisficing and, 103–105
  science of augmented agency and, 290
Problem solving
  ambiopia of, 115–118
  digitalization of, 107–108, 116–118, 129–130
  highly ambiopic, 119
  hyperopia in, 114–115
  metamodels of, 109–111
  non-ambiopic metamodels, 123–125
  routines of, 111
  satisficing in, 103–105
Procedural action, 78–81

#### **R**

Rationality
  cognitive empathy and, 139
  ecological, 15, 19, 129–131
  modernity and, 103
  of sampling and representation, 129–130, 278
  *See also* Bounded rationality
Rawls, John (1921-2002), 17, 148, 161, 205
Regulatory Focus Theory, 68, 130
Rousseau, Jean-Jacques (1712-1788), 227

#### **S**

Sampling
  artificial over-sampling, 116
  cognitive empathy and, 152
  hyperopic, 114
  intelligent, 46
  myopic, 109
  of other minds, 141
  satisficing and, 103–105
  sparse, 82
Satisficing
  cognitive empathicing and, 146
  descriptive, 104, 109, 120–121
  extremes, 117
  future of, 128–129, 277–278
  normative, 105, 109, 121–122
  two types of, 104
Science
  dualism of, 43, 279
  historical metamodels of, 279–280
  methods of, 106
  resistance to, 67
Science of augmented agency
  agentic combinatorics and, 272–273
  compositive methods and, 278, 287–289
  conceptual architecture of, 270–273
  dilemmas of, 273–274
  future prospects of, 289–290
  intelligent sampling in, 278
  problematics of, 279
Search
  artificial over-computation, 112
  hyperopic, 114
  myopic, 110
  satisficing and, 104

Self-efficacy
  digitalization risks for, 209, 214
  false sense of, 193
  illusion of, 217, 263
  self-regulation and, 169
Self-generation
  ambidexterity and, 253
  augmented worldmaking and, 264–265
  digitalization of, 249–252
  digital resistance or surrender, 252–253, 262
  history of, 247–249
  human flourishing and, 265
  modern metamodels of, 255–258
  new problematics of, 254
Self-regulation
  ambiactivity of, 179
  digitalization of, 172–175
  discontinuous schemes, 180–181
  dyssynchronous rates, 179–181
  metamodels of, 176–179
  modernity and, 169–170
  moral disengagement risk, 190–191
  psychological theories of, 170–171
Semi-supervision
  bias and, 57, 234
  learning and, 232
  metamodels and, 25
Sen, Amartya, 17, 69, 105, 148
Sensitivity to variance
  artificial hypersensitivity, 5
  augmented learning and, 225–226
  cognitive empathicing and, 150
  entrogeneity and, 51
  human insensitivity, 5, 19
  referential commitments and, 118
  science of augmented agency and, 276
Sensory perception
  agentic metamodels and, 41–49
  augmentation of, 21
  limited capabilities of, 19
  personality and, 11
Shoda, Yuichi, 41, 79, 203, 231
Simon, Herbert (1916-2001)
  on digitalization, 113, 279
  on human agency, 76
  satisficing and, 104, 109, 146, 288
  on scientific insight, 11, 18
Singularity, 11, 270
Skinner, B.F. (1904-1990), 228
Smith, Adam (1723-1790), 3, 105, 178
Social Cognitive Theory, 169
Socrates (470-399 BCE), 40
Superstition
  digitalization of, 12
  learning and, 230
  premodernity and, 2
Supervision
  of augmented self-generation, 266
  of augmented self-regulation, 194
  challenges of, 7, 61
  dysfunctional, 10, 64
  of science of augmented agency, 282–284

#### **T**

Technology
  assistance of agency, 4
  digitalization and, 48
  history of, 39

Thaler, Richard, 129
Trust
  augmented problematics of, 60
  cognitive empathy and, 143
  human-machine, 15
  potential erosion of, 143, 159
  sociality and, 83
Tversky, Amos (1937-1996), 68

#### **U**

Utility maximization, 109

#### **W**

Wegner, Daniel (1948-2013), 193
Wittgenstein, Ludwig (1889-1951), 20
Worldmaking, 264–265