Philip F. Yuan Hua Chai Chao Yan Neil Leach Editors

# Proceedings of the 2021 DigitalFUTURES

The 3rd International Conference on Computational Design and Robotic Fabrication (CDRF 2021)


Editors

Philip F. Yuan, College of Architecture and Urban Planning, Tongji University, Shanghai, China

Hua Chai, College of Architecture and Urban Planning, Tongji University, Shanghai, China

Chao Yan, College of Architecture and Urban Planning, Tongji University, Shanghai, China

Neil Leach, College of Architecture and Urban Planning, Tongji University, Shanghai, China

ISBN 978-981-16-5982-9    ISBN 978-981-16-5983-6 (eBook)

https://doi.org/10.1007/978-981-16-5983-6

© The Editor(s) (if applicable) and The Author(s) 2022. This book is an open access publication. Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

# Preface

DigitalFUTURES is an annual series of academic events, consisting of a conference, workshops and an exhibition, hosted by the College of Architecture and Urban Planning, Tongji University, Shanghai. The aim of DigitalFUTURES is to encourage international collaboration and interaction and to promote theoretical and scientific research into computational design, robotic fabrication and other areas of architectural intelligence. The "2021 DigitalFUTURES—The 3rd International Conference on Computational Design and Robotic Fabrication (CDRF 2021)" provides an international platform for advanced research addressing Material Intelligence in architecture.

'Materialist philosophers, it is becoming increasingly clear, cannot afford to ignore the basic fact that the study of matter does matter.' — Manuel DeLanda.

Why Material Intelligence? First of all, we must recognize that there is an increasing emphasis on the intelligent use of materials and on the use of intelligent materials in contemporary architectural culture. One of the primary reasons for this has been the question of material performance. Concerns about structural and environmental performance, in particular, have become paramount. These concerns go beyond mere economic considerations, to become an ethical imperative in a world of diminishing resources and global warming. Secondly, we must also recognize that we can now use intelligent computational techniques, such as artificial intelligence, to make our designs ever more materially intelligent. Thus, while intelligent computational techniques remain immaterial, they can be used to inform the intelligent design of a material building. Material intelligence, then, stands not only for the intelligent use of materials and the use of intelligent materials in the construction of a building, but also for the use of intelligent computational techniques to design the material form of that building.

# Organization

# Committees

# Honorary Advisors

Philippe Block, ETH Zurich, Switzerland
Jane Burry, Swinburne University of Technology, Australia
Mark Burry, Swinburne University of Technology, Australia
Ximing Chen, Nanyang Technological University, Singapore
Lieyun Ding, Huazhong University of Science and Technology, China
Jian Gong, Shanghai Construction Group (SCG), China
Guoqiang Li, Tongji University, China
Jiaping Liu, Xi'an University of Architecture and Technology, China
Areti Markopoulou, Institute for Advanced Architecture of Catalonia, Spain
Achim Menges, University of Stuttgart, Germany
Antoine Picon, GSD, USA
Patrik Schumacher, Zaha Hadid Architects (ZHA), UK
Mette Ramsgaard Thomsen, Royal Danish Academy, Denmark
Zhiqiang Wu, Tongji University, China
Yimin (Mike) Xie, RMIT University, Australia
Xianzhong Zhao, Tongji University, China

# Organization Committees


# Scientific Committees

Felix Amtsberg, University of Stuttgart, Germany
Alisa Andrasek, RMIT University, Australia
Nic Bao, RMIT University, Australia
Thomas Bock, Technical University of Munich, Germany
Serban Bodea, University of Stuttgart, Germany
Biayna Bogisian, Florida International University in Miami, USA
Daniel Bolojan, Florida Atlantic University, USA
Matias Del Campo, University of Michigan, USA
Brad Cantrell, University of Virginia, USA
Tengwen Chang, National Yunlin University of Science and Technology, Taiwan, China
Kristof Crolla, University of Hong Kong, China
Benjamin Dillenburger, ETH Zurich, Switzerland
Marcus Farr, Tongji University, China
Melissa Goldman, University of Virginia, USA
Yunsong Han, Harbin Institute of Technology, China
Hua Hao, Southeast University, China
Wanyu He, Xkool, China
Tim Heath, University of Nottingham, UK
Alvin Huang, University of Southern California, USA
Weixin Huang, Tsinghua University, China
Guohua Ji, Nanjing University, China
Gene Ting-Chun Kao, ETH Zurich, Switzerland
Immanuel Koh, Singapore University of Technology and Design, Singapore
Neil Leach, Tongji University, China
Guan Lee, University College London, UK
Hyejin Lee, Tongji University, China
Biao Li, Southeast University, China
Linxue Li, Tongji University, China
Yujie Lu, Tongji University, China
Peng Luo, Tongji University, China
Andrea Macruz, Tongji University, China
Sandra Manninger, University of Michigan, USA
Wes McGee, University of Michigan, USA
Xianchuan Meng, Nanjing University, China
Virginia Melnyk, Tongji University, China/Clemson University, USA
Kris Mun, University of Minnesota, USA
Guvenc Ozel, University of California, Los Angeles, USA
Gilles Retsin, University College London, UK
Klaas de Rycke, Bollinger + Grohmann, Germany
Bob Sheil, University College London, UK
Xing Shi, Tongji University, China
Miroslaw J. Skibniewski, University of Maryland, USA
Roland Snooks, RMIT University, Australia
Satoru Sugihara, Architectural Technology Laboratorial Venture, Japan
Chengyu Sun, Tongji University, China
Kostas Terzidis, Tongji University, China
Oliver Tessmann, Technische Universität Darmstadt, Germany
Kathy Velikov, University of Michigan, USA
Tomas Vivanco, Tongji University, China
Xiang Wang, Tongji University, China
Makoto Sei Watanabe, Tokyo City University, Japan
Dylan Wood, University of Stuttgart, Germany
Jing Wu, Southern University of Science and Technology, China
Leiqing Xu, Tongji University, China
Weiguo Xu, Tsinghua University, China
Michael Weinstock, The Architectural Association, UK
Chao Yan, Tongji University, China
Feng Yang, Tongji University, China
Jiawei Yao, Tongji University, China
Kaiho Yu, University of Applied Arts Vienna, Austria
Philip F. Yuan, Tongji University, China
Xu Zhen, Tianjin University, China

# Contents

### Computation and Formation









# **Computation and Formation**

# **Serlio and Artificial Intelligence: Problematizing the Image-to-Object Workflow**

Jean Jaminet<sup>1</sup>(B), Gabriel Esquivel<sup>2</sup>, and Shane Bugni<sup>2</sup>

<sup>1</sup> Kent State University, 800 E. Summit St., Kent, OH 44242, USA
jjaminet@kent.edu
<sup>2</sup> Texas A&M University, College Station, USA

**Abstract.** Virtual design production demands that information be increasingly encoded and decoded with image compression technologies. Since the Renaissance, the discourses of language and drawing and their actuation by the classical disciplinary treatise have been fundamental to the production of knowledge within the building arts. These early forms of data compression provoke reflection on theory and technology as critical counterparts to perception and imagination unique to the discipline of architecture. This research examines the illustrated expositions of Sebastiano Serlio through the lens of artificial intelligence (AI). The mimetic powers of technological data storage and retrieval and Serlio's coded operations of orthographic projection drawing disclose other aesthetic and formal logics for architecture and its image that exist outside human perception. Examination of aesthetic communication theory provides a conceptual dimension of how architecture and artificial intelligent systems integrate both analog and digital modes of information processing. Tools and methods are reconsidered to propose alternative AI workflows that complicate normative and predictable linear design processes. The operative model presented demonstrates how augmenting and interpreting layered generative adversarial networks drive an integrated parametric process of three-dimensionalization. Concluding remarks contemplate the role of human design agency within these emerging modes of creative digital production.

**Keywords:** Serlio · Artificial intelligence · Language · Design agency

# **1 Influence of the Disciplinary Treatise**

The classical disciplinary treatises of the Renaissance have become a technical-literary genre that is today considered an essential part of the historical development of architecture. From the time of its publication to the present, no treatise has been more influential than Sebastiano Serlio's *Tutte l'opere d'architettura et prospetiva*. Serlio's ambition was not limited to producing an encyclopedic treatise composed of seven volumes. One of his most important objectives was to provide copious illustrations for the first five volumes and the seventh. Each of these volumes has, in fact, abundant architectural drawings, all large woodcuts, sometimes full pages, which were a great challenge and achievement in the art of printing for the period. Certainly, Serlio's architectural treatise was the first to provide a visual dimension to the study of architecture in print, and it did so in a remarkably forceful way. Introducing the visual culture of architecture via drawings and diagrams is perhaps the most important achievement of Serlio's treatise.

Architectural language is typically understood through its coded operations. Serlio was instrumental in developing this code through his canonization of the five orders. These codes, or rules, are laid out in the earlier volumes of his treatise, then applied in the later volumes. What is fascinating about Serlio's experiments is that in applying the codes, he proceeds to vigilantly deviate from them. The results are sometimes defined by the code, where the code and the product are isomorphic—that is, a one-to-one relationship exists between the plan and the section. However, at other times architectural elements are reorganized or misaligned, which suggests that a latent diagrammatic operation other than the code is at work. Architectural language is not the code; the language emerges when the code is scrambled. Thus, in Serlio we find the architectural code (transposition) entrenched within its analogical modulation (transfiguration). These insights into the discordant pairing of the analog and the digital suggest alternative theoretical parallels between brains and computers as well as emerging modes of creative production regarding advancements in machine learning.

# **2 Analogical and Digital Flux**

Language (the possibility of communication) cannot be separated into distinct categories: we do not have one language that is analogical (pictorial and continuous) and another that is digital (coded and discrete). According to Gilles Deleuze, "From one point of view, we think of … analog and digital, as two completely opposite determinations. But from another point of view, we could say that every digital language and every code is deeply embedded in an analogical flux" [1]. For Deleuze, language is defined by the discordant pairing of both analogical and digital modes of communication. A rudimentary understanding of current digital display technology may help to clarify this enigmatic concept. When digital signals are received by a display, they are continuously decoded as a field of light pulses displayed as discrete points of color (pixels). The signal remains coded, but the screen becomes responsive or modulates as the digital code is transplanted into the analogical flux of the pictorial image.

This modulation is where Deleuze locates the function of the diagram in painting and in the aesthetic act itself. "The diagram, the agent of analogical language, does not act as a code, but as a modulator" [2]. According to Deleuze, the intention of the diagram is to remove any predetermined "figurative givens" or predetermined resemblances that might be implied on the canvas or in the artist's mind. Thus, "The diagram is … the operative set of asignifying and nonrepresentative lines and zones, line-strokes and color-patches" [2]. Deleuze locates Francis Bacon's work somewhere between abstract painting (cubism) and abstract expressionism (art informel). The code is prevalent in the former—geometric shapes imply figurative resemblances (optical space of representation)—while the latter is all diagram; the modulating power of the diagram becomes inert as it is deployed across the entire canvas (tactile space of line and color). Deleuze notes, "The manual diagram produces an irruption like a scrambled or cleaned zone, which overturns the optical coordinate as well as the tactile connection" [2]. This scrambling—the continual conversion between analog and digital, figure and figuration, optic and haptic—is the domain of the diagram and that which constitutes the possibility of the aesthetic act.

# **3 Analog-to-Digital Information Processing**

The human brain receives and processes information through both analog and digital means. Cognition is understood as an integrated analog-to-digital conversion process. This prevailing model of information processing gained credibility when neuroscientists in the 1980s demonstrated that neurons exhibit properties of both the analog and the digital. As Shores observes, "[I]f we consider neurons in terms of their being in a 'firing or non-firing state,' then we are examining their digital operation, but if we emphasize the 'ongoing chemical processes' of the brain, then we are looking at their analogical functioning" [3]. In other words, our primary means of acquiring and processing information is a continuous stream of sensory perception (analog); however, to store and retrieve this information, these experiences are encoded into discrete units (digital).

Computational theorists are now developing AI systems that integrate both analog and digital modes of information-processing. The primary task of such systems is to make a computer that better models human intelligence. However, designers are also interested in how these new technologies not only simulate reality but also become creative tools of production, particularly in regard to generative adversarial networks (GANs).

Every GAN has two neural networks—a generator and a discriminator. The generator synthesizes new sample images from random noise, while the discriminator samples from both the initial dataset (input images) and the generator's output. The discriminator compares the generator's output to the initial dataset to determine whether the synthesized image can be considered real or fake. As the generator receives feedback from the discriminator, it learns to synthesize images that better resemble the input images. In addition, progressive training can improve detail and resolution with each successive training.
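
To make the generator and discriminator feedback loop concrete, the following is a minimal, illustrative PyTorch sketch of one adversarial training step; the toy network sizes, image resolution, and variable names are assumptions for illustration, not the platforms used in this research.

```python
# Minimal GAN sketch (illustrative only): a generator maps random noise to
# images; a discriminator scores real vs. synthesized samples. Sizes are
# toy values, not the resolution used in the project.
import torch
import torch.nn as nn

IMG = 64 * 64          # toy image resolution (flattened 64 x 64 grayscale)
Z = 128                # dimensionality of the random noise vector

generator = nn.Sequential(
    nn.Linear(Z, 512), nn.ReLU(),
    nn.Linear(512, IMG), nn.Tanh())            # outputs a fake image in [-1, 1]

discriminator = nn.Sequential(
    nn.Linear(IMG, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1))                         # outputs a real/fake logit

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    """One adversarial update: the discriminator sees real and fake images,
    and the generator is rewarded for fooling the discriminator."""
    b = real_batch.size(0)
    real_lbl, fake_lbl = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator update: classify real as real, fake as fake.
    fake = generator(torch.randn(b, Z)).detach()
    loss_d = bce(discriminator(real_batch), real_lbl) + \
             bce(discriminator(fake), fake_lbl)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: feedback from the discriminator.
    fake = generator(torch.randn(b, Z))
    loss_g = bce(discriminator(fake), real_lbl)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```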

The primary intention of these image-based neural networks is to synthesize artificial images that are indistinguishable from authentic images. However, GANs can also operate diagrammatically. In this sense, the GAN creates an exchange between continuous analogical modulation and codification of discrete digital units. Analog information (image input) and the digital information (noise) are synthesized by the discriminator then fed back into the system as new inputs. This process creates a continuous feedback loop transferring code into the analog pictorial flow of the image in each successive training. In other words, the GAN creates the possibility of continuous modulation of the analog and the digital through pictorial flux and transplanted code.

This is particularly the case with poorly trained AI models that produce *artifacts*: effects or residues made visible by their diagrammatic scrambling. Bacon might call these "involuntary free marks" and Deleuze might describe them as "*asignifying traits* that are devoid of any illustrative or narrative function" [2]. Figure, ground, and contour begin to lose their coherence in the synthesized image, allowing "a form of a completely different nature to emerge from the diagram" [2]. Although the common or crude purpose of GANs is to produce figurative resemblance, their novel intelligence may unlock new avenues for design and creative production. The power of the GAN is not to mislead but to modulate.

# **4 Problematizing the Image-to-Object Workflow**

This research is part of a larger design project that investigates the illustrated treatises of Serlio in parallel with discussions about aesthetics and advancements in artificial intelligence.<sup>1</sup> The intention of these experiments is not simply to synthesize new images that simulate Serlio's illustrations but rather to modulate their qualities and problematize their 2D to 3D translation beyond the rules of representation and orthographic projection.

**Dataset Curation.** The image datasets collected for our initial investigations were retrieved from the Avery Architectural and Fine Arts Library's extensive online holdings of the works of Sebastiano Serlio. Avery's Digital Serlio Project [4] includes full-page digital scans of multiple published and unpublished editions of all seven of Serlio's manuscripts and his *Extraordinario Libro*, as well as several subsequently published manuscripts of collected works. To create our datasets, pages from the manuscripts were downloaded from Avery's repository. Images of individual objects were cropped from these pages in 1024 px × 1024 px format to accommodate various image-based machine learning platforms.
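
As a rough illustration of this preparation step, the sketch below center-crops and resizes page scans to 1024 px squares with Pillow; the folder names are hypothetical, and the actual curation involved manually cropping individual objects rather than automatic center-cropping.

```python
# Illustrative dataset-preparation sketch (not the authors' exact pipeline):
# crop each downloaded plate to a centered square and resize it to
# 1024 x 1024 px so it can be fed to image-based machine learning platforms.
from pathlib import Path
from PIL import Image

SRC = Path("serlio_scans")      # hypothetical folder of downloaded page scans
DST = Path("dataset_1024")      # hypothetical output folder
DST.mkdir(exist_ok=True)

for page in SRC.glob("*.jpg"):
    img = Image.open(page).convert("L")          # grayscale plate
    side = min(img.size)                         # largest centered square
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    crop = img.crop((left, top, left + side, top + side))
    crop.resize((1024, 1024), Image.LANCZOS).save(DST / page.name)
```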

Dataset images were collected based on broad categories of the illustrated objects: columns, porticos, plans, and facades. Although Serlio's treatise is organized based on tectonics (geometry, perspective, orders) and typology (monuments, churches, domestic buildings), we chose instead to curate our datasets by object type. Since GANs require input based on superficial likeness between images, our datasets exploit the self-same repetition inherent to Serlio's drawing tectonics. These broad groupings of drawings were necessary to establish a dialog between the coding of classical objects and the analog-to-digital modulation of image-based neural networks. The intention of this dataset curation and subsequent 2D and 3D experimentation was to explore the capacity of the image and its qualities to suggest alternative ideas about materiality and logics of assembly beyond the techniques of orthographic projection and its related narratives of language and representation.

**Layered Generative Adversarial Networks.** Following image curation, experiments were conducted to train the Serlio datasets using various GAN platforms. Again, the primary purpose of image-based GANs is to synthesize artificial images that maintain fidelity to the dataset. However, as a productive design tool, we were more interested in the latent image qualities that became evident during the training process.

<sup>1</sup> All design research was conducted during the 2021 spring semester at Texas A&M University College of Architecture under the instruction of Jean Jaminet and Gabriel Esquivel and with the assistance of Shane Bugni. Student contributors include Brenden Bjerke, Erin Carter, Nate Gonzalez, Kamryn Massey, Ana Rico, Luis Sanabria, John Scott, Dalton Turpin, Austin White, and Spenser Young.

In our initial styleGAN experiments, the Serlio datasets (input images) were trained against a pretrained model (generator input) by the discriminator. We chose pretrained models (generic datasets of faces, buildings, landscapes, etc.) for the generator input instead of random noise to expedite the training results. In cases where styleGAN platforms required a domain of images to feed into the neural network, other GANs—like sinGAN and style transfer—that only require a single image were deployed. The styleGAN trainings predominantly generated various image distortions. Conversely, sinGAN outputs seemed to break down individual images into smaller fragments. Style transfer output images were used later in the process as texture maps to enhance details of 3D objects.

Distortions that were produced reveal other shapes, profiles, and postures of the objects that move the image away from its original resemblance and semantic content. For example, a regulated facade becomes a cascading field of apertures, or a single arched opening becomes a winding surface of figural voids; both produce estrangement in silhouette and scale. These distortions are based on image values (color, contrast, saturation, etc.) rather than formal complexity and linguistic articulation (line, edge, plane, volume, etc.). Fragmentation, on the other hand, concerns pulling the image apart into components that are detached or incomplete. For example, incongruities in patterning of a segmented masonry arch become reassembled, suggesting a tectonic that is foreign to the initial construction system. This fragmentation of the Serlio objects reveals alternative logics of assembly beyond the classical tectonics suggested by their orthographic projection. These distortions and fragmentations are inherent to the way machine learning interprets data through its analog-to-digital conversion process. Of equal importance is how designers guide machine learning processes through direct manipulation of the code and visual interpretation of the output (Fig. 1).

**Fig. 1.** Fragmented pediment produced by sinGAN (left) and style transfer image projected onto morphed fragment (right). Image by Spencer Young.

Throughout the training processes, the degree of distortion and fragmentation can be controlled by adjusting various parameters of the GAN, including *truncation value* and *scale of manipulation*. The adjustment of these parameters allows designers to claim agency in the machine learning process to control the fidelity to the initial input datasets and the uniqueness of the output. Lower truncation values synthesize images that are more self-similar to the initial dataset, whereas higher values can produce results that deviate significantly. Likewise, adjusting the scale of manipulation adjusts the size of the random noise fed into the GAN. Scaling up decreases the size of the noise and creates highly articulated images (changes the detail but not the figure of the image). Scaling down increases the size of the noise and creates chunky or fragmented images (reduces detail; however, creates figuration). The intention of these methods was not to synthesize columns, facades, or doorways that would be indistinguishable from Serlio's illustrations but to use the digital code of the GAN as a substitute for the architectural or orthographic code to scramble the analog pictorial signal of the image.
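
The role of the truncation value can be illustrated with the standard truncation trick used in StyleGAN-family models, where sampled latent vectors are pulled toward an average latent; the sketch below is conceptual NumPy code with assumed dimensions, not the interface of the platforms used here.

```python
# Conceptual sketch of the truncation trick: sampled latent vectors are
# pulled toward the average latent w_avg. psi near 0 yields safe, self-similar
# images; psi near (or above) 1 allows stronger deviation from the dataset.
import numpy as np

rng = np.random.default_rng(0)
W_DIM = 512                                  # latent dimensionality (assumed)
w_avg = rng.normal(size=W_DIM)               # stands in for the learned mean latent

def truncate(w, psi):
    """Interpolate a sampled latent toward the average latent."""
    return w_avg + psi * (w - w_avg)

w = rng.normal(size=W_DIM)                   # a raw sampled latent
conservative = truncate(w, psi=0.3)          # close to the dataset's average look
adventurous = truncate(w, psi=1.0)           # untruncated, more deviant output
```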

**Integrated Parametric Three-Dimensionalization.** Industry-standard 3D modeling software platforms use mathematical coordinate-based representations to simulate surface and mesh geometry. These models begin as primitive shapes (curve, cube, sphere, etc.) that gain complexity by augmenting the component geometry. Geometry is manually created and manipulated through direct access to its points, curves, surfaces, and polygons displayed as orthographic projection drawings in multiple default windows of the user interface. These platforms give designers the power to create any 3D object by drawing on skills of visual and tactile acuity and knowledge of conventional representational systems.

Other 3D modeling and animation software applications utilize procedural generation tools rather than coordinate-based geometry. *Procedural generation* "is a method of generating data or content algorithmically as opposed to manually that combines human-generated assets and algorithms with computer-generated randomness and processing power" [5]. The gaming industry uses procedural generation tools to create open world games that generate environments in real time, providing an immersive experience of limitless variation based on choices made by the player. Likewise, these powerful tools are utilized in movie editing to create fantasy landscapes and crowd swarms with unprecedented randomness [6].

We utilized these procedural generation tools to proceed with our three-dimensionalization of the GAN outputs. These tools can interpret GAN outputs through voxelization of the image content. Voxelization is similar to creating a heightmap that uses image qualities (color, black and white values, etc.) to determine the 3D coordinates of the 2D image data. However, whereas heightmaps extrude pixel information only along one coordinate axis, voxels can be projected in all directions, thereby creating more complex spatial geometry. It was important that our 3D process follow a similar logic of neural network image generation, wherein the computer procedurally generates the 3D model based on parametric constraints available in the code. However, direct access to the 3D model geometry is limited; instead, design agency is asserted by adjusting the parameters of the script. To accompany this more conceptual description, one example is outlined below that illustrates this image-to-object workflow in more technical detail.
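
A heavily simplified sketch of the voxelization idea is given below: pixel brightness is read as depth and each pixel column is filled with voxels up to that depth. It stands in for the procedural generation tools actually used and, unlike them, extrudes along a single axis only, closer to a heightmap than to full multidirectional voxel projection.

```python
# Simplified voxelization sketch (a stand-in for the procedural tools used):
# pixel brightness of a synthesized image is read as depth, and each pixel
# column is filled with voxels up to that depth.
import numpy as np

def voxelize(image, max_depth=32):
    """image: 2D float array in [0, 1]; returns an (H, W, max_depth) boolean grid."""
    depth = np.rint(image * (max_depth - 1)).astype(int)   # per-pixel depth value
    z = np.arange(max_depth)                                # depth axis
    # A voxel (y, x, z) is occupied when z lies below that pixel's depth value.
    return z[None, None, :] <= depth[:, :, None]

# Toy usage: a synthetic gradient image produces a wedge-shaped voxel block.
img = np.linspace(0.0, 1.0, 64)[None, :].repeat(64, axis=0)
voxels = voxelize(img)
occupied = np.argwhere(voxels)          # (y, x, z) coordinates of solid voxels
print(occupied.shape)
```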

# **5 Operative Model: Portico**

One of Serlio's major contributions to architectural discourse is his canonization of the classical orders. These classical elements are exhibited and analyzed as fragments in his early volumes, then deployed in larger building configurations. Nowhere in his treatise is this mereology more evident than in the *Extraordinario Libro*, where Serlio creates prototype doorways (porticos) in which classical components (columns, arches, pediments, etc.) are deployed and combined with unprecedented variation. These part-to-whole relationships become the subject of speculation in this operative AI model. Neural networks and procedural generation tools are used to speculate about latent assembly logics within the portico images that exist outside classical tectonics and traditional orthographic representation.

In this exploration, the portico undergoes a series of estrangements (neural network image generation), reassembly (procedural generation modeling), and stylization (texture mapping). First, our curated dataset of Serlio's porticos is processed by a styleGAN, the output of which inherits multiple features of individual porticos (image training). Due to the similarity between input images, the most productive synthesized images exhibit distortions of the architectural components (details), while the overall figure (silhouette) remains largely unchanged. Based on these characteristics, a single frame from the resultant latent walk was selected as the input for a succeeding sinGAN training. The intention of introducing the sinGAN was to fragment the portico image into multiple parts (fragmentation) (Fig. 2).

**Fig. 2.** Image-to-object workflow: estrangement (neural network image generation), reassembly (procedural generation modeling), and stylization (texture mapping). Image by Nate Gonzalez.

The sinGAN selects patches of elements it can recognize and repeat to create "diverse samples that carry the same visual content as the [input] image" [7]. This process can be useful in generating images that simulate the randomness in natural environments, cityscapes, and flocks and swarms. However, when trained on an image that depicts a singular object, the sinGAN produces unexpected configurations that have the effect of fragmenting rather than unifying the image. With the portico, the resulting fragmentation creates alternative parts loosely based on the textures of the illustration. The assembly logic of the synthesized image is based on image construction rather than drawing tectonics.

The subsequent 3D reassembly process combines both procedural generation tools and conventional 3D modeling. The fragmented images are voxelized by assigning depth values to the range of black and white pixels (fragment three-dimensionalization). Two voxelized fragments are morphed to produce an object that inherits 3D information from both. This merged fragment is then combined with a 3D model of a recognizable portico fragment (fragment morphing). This novel morphological assembly (object output) simultaneously exhibits the figuration of voxelized fragments (scrambled digital code) and partial reconstruction of the familiar language of architecture (pictorial analog signal). An alternative assembly logic emerges that is a matter of data processing rather than tectonic joinery or orthographic projection.
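
The fragment-morphing step can be sketched, under strong simplification, as blending two voxel occupancy grids as smoothed density fields and re-thresholding the result; the code below is an illustrative stand-in, not the procedural toolchain used in the project.

```python
# Simplified sketch of "fragment morphing": two voxelized fragments are
# smoothed into density fields, blended, and re-thresholded, producing an
# object that inherits 3D information from both inputs. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def morph(frag_a, frag_b, t=0.5, threshold=0.4, sigma=1.5):
    """frag_a, frag_b: boolean occupancy grids of equal shape; t in [0, 1]
    weights fragment B against fragment A."""
    da = gaussian_filter(frag_a.astype(float), sigma)   # smooth occupancy A
    db = gaussian_filter(frag_b.astype(float), sigma)   # smooth occupancy B
    return (1 - t) * da + t * db >= threshold           # re-threshold the blend

# Toy usage with two random fragments of the same resolution.
rng = np.random.default_rng(1)
a = rng.random((32, 32, 32)) > 0.7
b = rng.random((32, 32, 32)) > 0.7
hybrid = morph(a, b, t=0.4)
```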

The final step in the reassembly process deals with creating additional levels of detail. Utilizing a neural style transfer, the UV map of the morphological assembly is blended with a Serlio detail image then remapped to the object (style transfer). Detail is assigned rather than constructed through tectonic joinery (image mapping). This recursive process of estrangement, assembly, and stylization allows an alternative assembly logic to emerge that becomes a matter of image construction and data processing rather than tectonic joinery or orthographic projection.

# **5.1 Intelligence Beyond Serlio**

Beyond the intentions to reinterpret historical artifacts and deploy novel design technologies, the focus of this research is to address the transformative potential of AI in architecture. Artificial intelligence is not simply a toolset to optimize building elements but rather emphasizes architecture's ability to serve as a cultural marker. Mario Carpo observes that in failing to recognize opportunities to expand architectural intelligence through technology, "the design professions seem to have flatly rejected a techno-cultural development that would weaken (or, in fact, recast) some of their traditional authorial privileges" [8]. These assertions radically challenge computational methodologies as tools of expediency and efficiency and more importantly embrace the possibility of using them as strategies of communication between the human mindset and alien intelligence. Serlio's treatise on architecture, although inherently analog, anticipates these contemporary technological circumstances. By cataloging a considerable variety of buildings, details, and typologies in a uniform manner, Serlio, in a sense, created his own data set 400 years before IBM coined the term. The ever-present problem of agency within the discipline necessitates a revisitation of these manuals through a hyper-digital lens.

By surrendering established roles of authorship, alternative design agency in the machine learning process is acquired through visual selection and interpretation, thus fostering a shared ownership between designer and machine. The machine is tasked with indiscriminately processing content through iteration, replication, and complication of data through feedback loops in the computational process. As such, human intervention is required at the level of mundane digital tasks and post-processing image manipulation. As Abrons and Fure note, "Designers can take up this challenge by critically considering the digital processes we take for granted, such as default render settings, photoshop filters, geometric primitives, 'pan' and 'zoom,' extrude commands, and so forth" [9]. These discerning maneuvers can be further integrated in varying degrees at different stages, rather than coming at the end of a more linear design process. Seemingly trivial operations become the primary form of mediation between human perception and machine learning.

Furthermore, the paradigm of drawing has undergone radical change since Serlio and no longer provides a stable reference for the discipline of architecture. What we think are drawings are actually pictures of drawings or simulations of lines on a digital interface. "Images are inherently dynamic, and our tendency to think of them as static or fixed is a result of the psychohistorical residue of drawings…" [10]. Likewise, the facility with which these images can be manipulated suggests that the drawing no longer constitutes an original act of creation. Problematizing the image-to-object workflow through image-based neural networks and procedural generation 3D modeling contests the hegemony of traditional drawing tectonics and assembly logics associated with orthography. These synthesized images and objects are fragmentary, which is a characteristic of the latent diagrammatic operation in Serlio's drawings. However, this kind of machine learning can rework the way we present, learn, and teach architecture because it scrambles the orthographic codes or conventions that have defined architectural language since the Renaissance and have persisted through pedagogies established by the École des Beaux-Arts and the Bauhaus. The continuous modulation of analog and digital information processing defies linear design processes and dialectical translations from drawing to building—signaling a shift away from modern and postmodern notions of consistency, semantics, and representation toward a new paradigm of medium, communication, and agency—thereby creating the possibility for new languages to emerge.

Challenges to architecture in the twenty-first century demand a historical reflection on theory and technology as critical counterparts of architecture's intelligence, particularly in regard to visual/spatial acuity unique to the discipline. Serlio's illustrated exposition serves as a conduit to initiate these discussions about contemporary aesthetic communication and shared design agency that may allow architecture to gain disciplinary perspective on our technological circumstances and stimulate new modes of perception and creative digital production.

### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **A Generative Approach to Social Ecologies in Project [Symbios]City**

Hao Wen(B), Pengcheng Gu, Yuchao Zhang, Shuai Zou, and Patrik Schumacher

Design Research Lab, Architectural Association, London WC1B 3ES, UK
hao.wen@aaschool.ac.uk

**Abstract.** This paper discusses the studio project [Symbios]City, developed as a design research project in the 2020–2021 Schumacher studio on social ecology within the graduate program of the Architectural Association's Design Research Lab. The project aims to create an assemblage of social ecologies through a rich but cohesive multi-authored urban district. The primary ambition is to generate an urban area with a characterful, varied identity that achieves a balanced order between unity and difference, avoiding both the sterile and disorienting monotony of centrally planned modernist cities and the (equally disorienting) visual chaos of an agglomeration of utterly unrelated interventions that we now frequently find. Through a thorough research process, the project evolves mainly out of three principles: topological optimization, phenomenology, and ecology. By "ecology" we mean a living network of information exchange. Every strategy we employ is therefore not merely a reaction to weather conditions but an inquiry into the various ways we can exploit them: a translation of weather conditions into spatial and programmatic properties. [Symbios]City thus aims at developing a multi-authored urban area with a rich identity that achieves a balance between its various elements. The project began formally with topological optimization, developed through studies of ecology, and concluded with our phenomenological explorations, aiming at a complex design that unifies the perception of all scales of design: from the platform to the skyscrapers.

**Keywords:** Structural optimization · Social ecologies · Phenomenology · Generative design

# **1 Introduction**

The studio extrapolates from recent urban concentration processes, assuming a further intensification. Ecological challenges like climate change in general and rising sea levels in particular also point to the wisdom of further urban densification, because a sprawling urbanization is much harder to protect. Densification is also mandated by the transformation of cities from industrial centres to knowledge and innovation centres, where the collaborative integration of self-directed, creative labor processes mandates the full exploitation of colocation synergies in hyper-dense, permeable urban fabrics.

The contributions of Wen and Gu are equal.

The proposal is part of the studio project developed in Patrik Schumacher's studio at the AADRL in 2020–2021. It envisions a new high-rise district between the City of London and Canary Wharf, straddling both sides of the river. Four teams, each with four architects, simulate the potential for a multi-author urbanism that generates a legible order and unity across a highly differentiated 3D urban fabric. The project assumes the hegemony of tectonism as a precondition for achieving the required density of differentiation and correlation. At the same time, the proposal serves as a highly synthesized testing ground for several important bodies of theory in the realm of architecture, including Bendsoe and his predecessors' studies on topological optimization, Lovelock's argument for the concept of Gaia and ecophysiology, as well as Schumacher's parametric semiology.

The development of the project focuses on several important environmental and social parameters. Wind, a strong environmental factor for skyscraper design, is implemented at an early stage of the design. Lateral forces in coordination with the primary wind direction in London crucially influence the shape language of the structural benchmarks. Flooding, another potentially hazardous environmental condition, is addressed with hydraulically generated ground conditions. Sunlight, a parameter that is dynamic throughout the day and the year, shapes the grouping of the building façades and their overall performance. The proposal also aims to optimize within itself and the nearby neighborhoods, creating social clusters for a versatile interior experience and moments of integration and fragmentation according to principles of phenomenology.

Although it is hard to summarize a 12-month project in a 10-page paper, we intend to document the thought process, important milestones and the design development of the project in the following article.

# **2 Topological Optimization as a Method of Parametric Semiology**

As Schumacher argues, "all design is communication design. The built environment, with its complex matrix of territorial distinctions, is a giant, navigable, information-rich interface of communication. Each territory is a zone of communication. It gives potential social actors information about the communicative interactions to be expected within its bounds."<sup>1</sup> Parametricism speaks to this level of semiology through its rule-based generation process and its overall part-to-whole relationships. Architectural elements communicate with one another in the initial geometrical development and are gradually developed into corresponding spatial environments. However, it is crucial to be aware of how a semiotic system should demonstrate both syntax and semantics. As his criticism of the postmodern architectural approach suggests, Schumacher argues, "Eisenman's work had no semantic dimension, and Jencks had no syntax."<sup>2</sup> The topological optimization (TO) approach, on the other hand, naturally fits the agenda of parametric semiology. As our design research suggests in the later sections of this paper, TO processes have the capacity to generate a catalogue of varied yet similar structural benchmarks (varied due to differences of input parameters and mathematical principles; similar because of the geometrical generation algorithms as well as TO's unique design language). These benchmarks are grouped by tectonic similarities and potential spatial behaviors and are later translated into typical building materials of skyscraper design as well as a field of interconnected neighborhoods.

<sup>1</sup> Patrik (2012).

<sup>2</sup> Patrik (2012).

### **2.1 Background**

To this day, in light of the development of computer-aided design and the revitalization of mathematical structural principles since the designs of Antonio Gaudí, simulation and optimization methodologies have become an indispensable part of structural design. TO methods for structural design were first introduced and developed in Bendsoe's *Topology Optimization: Theory, Methods and Applications*. As stated in the book, "The topology optimization algorithm returns a structural layout describing mean load path in the form of a density distribution on the design domain."<sup>3</sup> Different TO methods (SIMP, ESO, BESO and so on) have since been integrated with different CAD software, providing designers with ready-to-use interfaces that allow somewhat intuitive numeric and geometrical input and output benchmark models as readable geometries in the software environment. TO was first applied to the structural design of automobiles, where strong torque forces and tight building spaces demand high levels of structural performance from the material. As it developed, the TO process started to influence the structural design of mega-structures, especially the main structural support of skyscrapers, and thus made its impact on the architectural built environment. Notably, in the CITIC financial center competition entry by SOM and the One Thousand Museum in Miami by Zaha Hadid Architects, a new type of tower design supported by TO algorithms demonstrates a new paradigm of super-tall towers that are lightweight, coreless, structurally expressive and spatially informed.

Our proposal therefore uses TO software as a driver for architectural semiology, achieving a multi-author district that reads as a difficult and complex whole, generating characterful tower identities on the urban, cluster and individual level, and creating interconnected neighborhoods as well as different moments of tower clusters based on principles of phenomenology.

### **2.2 TO Software and Its Potential to Achieve Tower Semiology**

The design research is embedded in three TO Grasshopper plug-ins: Topos by Sebastian "archiseb" Bialkowski, Peregrine by LimitState and Ameba by Mike Yimin Xie. These three plug-ins each offer a unique benchmark generation process based on differences in algorithmic approach and geometry interpolation. Each takes inputs such as the geometric domain, load and support conditions, nodal divisions and material density from the geometry set-up interface, calculates optimized load paths through numerous generations, and then interpolates the load path into meaningful geometry outputs. Yet due to the differences between the plug-ins, even with the same input, the output benchmarks will differ.

<sup>3</sup> Bendsoe and Sigmund (2003).

Topos uses the ESO (SESO) method (evolutionary structural optimization, or single-directional structural optimization), which was first proposed by Mike Xie and G.P. Steven in 1993.<sup>4</sup> During the iterative calculation process, the load path remains static. The more iterations it runs, the more resolution the final load path gains. Topos also uses the density distribution of voxels to translate the calculated load paths into geometrical elements following the marching cubes algorithm.<sup>5</sup> The resulting benchmark model is a holistic mesh with unevenly distributed density values according to the load path.
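
For readers unfamiliar with the marching cubes step, the generic sketch below extracts a triangulated isosurface from a voxel density field using scikit-image; it illustrates the idea only and is unrelated to Topos's internal implementation.

```python
# Generic sketch of the marching-cubes step: a voxel density field is
# converted into a triangulated isosurface mesh.
import numpy as np
from skimage import measure

# Toy density field: values fall off with distance from the grid center,
# standing in for the density distribution returned by a TO run.
z, y, x = np.mgrid[0:40, 0:40, 0:40]
density = 1.0 / (1.0 + ((x - 20)**2 + (y - 20)**2 + (z - 20)**2) / 144.0)

# Extract the isosurface where the density crosses 0.5.
verts, faces, normals, values = measure.marching_cubes(density, level=0.5)
print(verts.shape, faces.shape)   # mesh vertices and triangles
```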

Peregrine also uses the ESO method and, in terms of mathematical principles, differs only slightly from Topos. However, for the geometry interpolation it uses discrete element piping to assign a different thickness value to each load path. At the same time, it recognizes each calculated load path as either an element in tension or an element in compression.

Ameba, on the other hand, uses the BESO method (bi-directional evolutionary structural optimization). It is an extension of the ESO method, also proposed by Mike Xie, which allows efficient material to be added and inefficient material to be subtracted from the geometry domain.<sup>6</sup> Therefore, in each iterative calculation the load path becomes dynamic, differing from the load path of the previous iteration. Finally, like Topos, it uses the marching cubes method to generate the geometrical mesh for its benchmark output.
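
The add/remove logic that distinguishes BESO can be sketched conceptually as ranking elements by a sensitivity number and evolving toward a target volume fraction; the code below uses dummy sensitivities in place of the finite-element results a real solver such as Ameba would compute, so it illustrates the update rule only.

```python
# Conceptual sketch of a BESO-style update (not Ameba's implementation):
# elements are ranked by a sensitivity number (here a random stand-in for the
# compliance sensitivities a finite-element solve would provide); efficient
# elements are kept or re-added, inefficient ones removed, while the design
# evolves toward a target volume fraction.
import numpy as np

def beso_step(design, sensitivity, target_vol, evo_rate=0.02):
    """design: boolean element mask; sensitivity: per-element efficiency."""
    current_vol = design.mean()
    # Move the allowed volume a small step toward the target each iteration.
    next_vol = max(target_vol, current_vol * (1.0 - evo_rate)) \
        if current_vol > target_vol else min(target_vol, current_vol * (1.0 + evo_rate))
    n_keep = int(round(next_vol * design.size))
    # Keep the n_keep most efficient elements; others are removed. Material can
    # also reappear where sensitivities are high -- the "bi-directional" part.
    keep = np.argsort(sensitivity.ravel())[::-1][:n_keep]
    new_design = np.zeros(design.size, dtype=bool)
    new_design[keep] = True
    return new_design.reshape(design.shape)

# Toy usage on a 20 x 40 element grid with random stand-in sensitivities.
rng = np.random.default_rng(2)
design = np.ones((20, 40), dtype=bool)
for _ in range(30):
    design = beso_step(design, rng.random(design.shape), target_vol=0.4)
```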

Through painstaking trial-and-error research, we managed to generate a series of benchmark catalogues that examine the tower structure of a typical 200-m skyscraper under self-weight and dynamic wind load. Although the workflow and design principles of these three algorithms yield a similar process, each plug-in adapts best to different values of the input parameters. For Topos, the structural benchmark becomes most legible when we spread load zones evenly across the levels; for Peregrine, the structural members perform best with load conditions mirrored on the two halves of the tower; for Ameba, we use repetitive surface supports to join the different parts of the mesh results.

As shown in Fig. 1, the differences and similarities of the configurative voids in each structural study can later be interpreted as semiology in architectural language and the built environment.

Depending on their configurative void behaviors, we categorized the benchmarks into four tower clusters: Type Y towers, generated by Topos, have mostly branching structural moments; Type H towers, generated by Ameba, use strong mid-level floor plates to connect the two half-towers; Type X towers use large cross bracing to support the half-towers; and Type Z towers have small cross-bracing moments distributed over the whole tower (Fig. 2).

<sup>4</sup> Xie and Steven (1993).

<sup>5</sup> Lorensen and Cline (1987).

<sup>6</sup> Huang and Xie (2007).

**Fig. 1.** The top part shows the software differences between Topos, Peregrine and Ameba. The bottom part shows the benchmark catalog.

**Fig. 2.** Figure is composed of four diagrams, each studying different aspects of benchmark configuration.

### **2.3 Benchmark Post Processing and Materialization**

The initial benchmark generation only presents optimized structural studies in an ideal scenario. To account for real-life conditions, we employed structural evaluation software to test the benchmarks' performance under such conditions. We used Karamba3D as the primary parametric engineering software. For each benchmark evaluation, we used the anticipated floor load and wind load as the load input and reduced the benchmark to simplified stick models for analysis. The simulation of the stick-model behavior allows us to add structural members to stabilize the overall structure.
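
As a greatly simplified stand-in for this stick-model evaluation (not Karamba3D), the sketch below idealizes a benchmark tower as a vertical cantilever with an assumed equivalent bending stiffness and checks its tip drift under a uniform wind load against a common H/500 drift limit; all numerical values are assumptions for illustration.

```python
# Greatly simplified stand-in for the stick-model evaluation (not Karamba3D):
# the benchmark tower is idealized as a vertical cantilever with an equivalent
# bending stiffness EI, and its tip drift under a uniform wind load is checked
# against a common H/500 drift limit. All numbers are assumed.

H = 200.0            # tower height [m]
q_wind = 15.0e3      # assumed uniform wind load on the stick model [N/m]
EI = 6.0e12          # assumed equivalent bending stiffness [N*m^2]

# Tip deflection of a cantilever under a uniformly distributed load: q*H^4 / (8*EI)
drift = q_wind * H**4 / (8.0 * EI)
limit = H / 500.0

print(f"tip drift = {drift:.3f} m, limit = {limit:.3f} m, "
      f"{'OK' if drift <= limit else 'add bracing / stiffen members'}")
```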

**Fig. 3.** Figure shows the post evaluation process.

During the simulation, we also used real-life material data for typical skyscraper materials, wood, concrete and steel, allowing the structural studies to gain resolution in terms of buildability and materiality. The tri-material approach therefore adds another level of urban reading in addition to the configurative clusters Y, H, X and Z. The resulting structural studies, after their material translation, form an urban matrix of dual readings (Fig. 3). These two layers of clustering intertwine with one another, following the principles of phenomenology, forming a complex and variegated order (Fig. 4).

**Fig. 4.** Figure shows the tower matrix; the horizontal axis represents configurative differences, the vertical axis material differences.

# **3 Ground Design and Flood Simulation**

In order to accommodate the 3-mm sea level rise and the 6-m daily tidal differences in London, the design tries to create a hydraulically driven ground condition. The rise of the water planes acts as a gestalt switch for the ground. At relative sea levels of 0 m, 4 m and 7 m, the shape language of the ground transitions from a continuous landscape to shattered islands. The connectivity between the different tower clusters changes correspondingly from ground-level connections to high-rise bridges. The ground design also implements a hierarchy of river paths which reflects the flooding pattern of the Thames River. These river paths act as a means to drive the water flow, mitigating and containing the flooding.

### **3.1 Flood Simulation**

To study the flooding conditions, we ran a series of simulations in Maya and Houdini using particle systems. Parameters such as particle speed, viscosity, density and friction were set to test different conditions. The outcome that corresponded most closely to reality was finally filtered and adopted.

Two negative factors were found during the simulation experiment. First, in reality there will always be other factors affecting the result that cannot be specifically predicted by software. Second, the specific trails of individual particles are quite random and complex, and therefore hard to use directly. As a solution, we summarize the general movement trend of the particles to obtain a more instructive image result.
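
One simple way to summarize such trails, sketched below with invented stand-in data, is to bin particle positions on a coarse grid and average the velocities per cell, turning noisy individual trajectories into a general flow-trend field.

```python
# Illustrative sketch of summarizing particle trails into a general trend:
# exported particle positions and velocities (random stand-ins here) are
# binned on a coarse grid and averaged, giving a mean flow direction per cell
# instead of the noisy individual trails.
import numpy as np

rng = np.random.default_rng(3)
pos = rng.random((5000, 2)) * 100.0                   # particle x, y positions [m]
vel = rng.normal([1.0, 0.2], 0.5, size=(5000, 2))     # particle velocities [m/s]

CELLS = 10
idx = np.clip((pos / 100.0 * CELLS).astype(int), 0, CELLS - 1)

mean_flow = np.zeros((CELLS, CELLS, 2))
counts = np.zeros((CELLS, CELLS, 1))
for (ix, iy), v in zip(idx, vel):
    mean_flow[iy, ix] += v
    counts[iy, ix] += 1
mean_flow /= np.maximum(counts, 1)     # average velocity per grid cell
```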

#### **3.2 Tower Arrangement**

The advantage of the current arrangement is that, in the second round of simulation with the towers on the site, the particle trails remain more in line with the previous simulation result, which makes further design easier. We therefore run a second iteration of the simulation with the towers as obstacles and obtain a second graph of the particle trend.

Water flow will not directly affect the towers. We use the final pattern for the ground design: the areas close to the towers are raised to block the flood, while the rest of the ground is used as river paths that let the flood pass through, thus protecting the city area. In the outcome, the division logic of the towers is based on the grouping of the ground produced by the operations set out above (Fig. 5).

#### **3.3 Podium Design and Network Theory**

Related theories were studied and applied to the podium design with reference to *The Autopoiesis of Architecture* by Patrik Schumacher, in order to find the best circulation arrangement for the different kinds of functions.

**Fig. 5.** Left top: the 1st iteration of particle trails; Right top: location of towers based on the trails; Left bottom: the 2nd iteration of particle trails; Right bottom: result of the ground operations.

Network theory and space syntax theory explain the relationship between the number of routes and accessibility, and can thus be used to describe the openness and privacy of functions. Set theory quantifies the hierarchical relationship between spaces; a further containment relationship can be set between different types of connections.

The arrangement of towers based on particle trails, described in the previous section, already proposes a network relationship if the towers are regarded as nodes and the produced network (running parallel and perpendicular to the particle trend lines) is regarded as circulation connecting those nodes. Different connectivity prototypes are therefore proposed and applied to different functional areas of the site. The podiums were designed on the basis of these circulation relationships.
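
The comparison of connectivity prototypes can be illustrated with a small graph model in which towers are nodes and circulation routes are edges; the tower names, links, and the use of closeness centrality as an accessibility measure below are assumptions for illustration only.

```python
# Illustrative comparison of connectivity prototypes (tower names and links
# are invented): towers are nodes, circulation routes are edges, and closeness
# centrality stands in for the accessibility / openness of each node.
import networkx as nx

towers = ["Y1", "Y2", "H1", "H2", "X1", "Z1"]

# Prototype A: a loose chain (more private, fewer routes per node).
chain = nx.Graph([("Y1", "Y2"), ("Y2", "H1"), ("H1", "H2"), ("H2", "X1"), ("X1", "Z1")])

# Prototype B: a dense mesh (more open, many alternative routes).
mesh = nx.complete_graph(towers)

for name, g in [("chain", chain), ("mesh", mesh)]:
    closeness = nx.closeness_centrality(g)
    print(name, {n: round(c, 2) for n, c in closeness.items()})
```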

At this stage, the base of the design project is determined. Responses to external environmental conditions such as wind, flood and land topography blossom into various identities and points of exchange. These points of exchange occur where one transitions from one cluster to another, whether from one shape language to another, from one material to another, or through the different water levels of the day. However, spatial and neighborhood experience is still the determining factor of an architectural or urban planning project. From this point of view, the following part focuses on the exchange of neighborhoods and the spatial performance of the design.

# **4 From Programmatic Distribution to Neighborhood Ecologies**

Our proposed neighborhood experience is a criticism of the modernist architectural paradigm raised by Le Corbusier.<sup>7</sup> In the modernist approach to urban planning, program districts are segregated into city zones with stretched-out communication facilities. To accommodate current habits of living and the principles of the 15-min city, we propose a hyper-dense urban area with program overlays and interconnected neighborhoods. We achieve this on several important architectural scales. In the individual towers, coreless voids generated by the TO benchmarks host numerous programmatic sub-environments. On the cluster level, towers are grouped by dual readings of either configurative behaviors or material behaviors, resulting in zones of different program specificities or different levels of activity intensity. On the city scale, our proposed area, together with the other three proposed urban designs in the studio, bridges two separate densified urban developments in London, Canary Wharf and the City of London.

### **4.1 Typical Program Classification and Distribution**

After case studies and research on a catalog of skyscraper design, we categorized the main program functions in a typical tower into four major groups: Residential use which includes apartment housing, student housing, hotels and residential facilities; Corporate use which includes open offices, small offices, conference rooms and rest areas; Commercial use that includes shopping centers, open markets, restaurants and cinemas; Facility use that includes park areas, gymnasiums, libraries and galleries. These programs are further distributed according to the urban matrix of structural studies.

For program distribution, we first assign the four major function groups to the four configurative groups. Y towers have large openings and large floor plates, so they are assigned to commercial usage; H towers have smaller openings and are assigned to residential usage; X towers have the largest voids and are assigned to facility usage; Z towers have small floor areas but many opportunities for interconnectivity and are assigned to corporate usage. Next, a level of hybridity of program overlays is assigned to each material group. Steel clusters and concrete clusters are separated, so they have less hybridity, whereas timber clusters bridge the urban connection between the steel and concrete groups and have more levels of program hybridity. The dual urban reading of the initial tower design is therefore translated into a variegated yet orderly neighborhood ecology.
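
A minimal sketch of this dual-reading assignment is given below, encoding the configurative-type-to-program and material-to-hybridity rules as lookup tables; the tower identifiers and labels are illustrative only.

```python
# Minimal sketch of the dual-reading program distribution: the configurative
# type of a tower selects its dominant program group, and its material selects
# the level of program hybridity. Values are illustrative labels only.
PROGRAM_BY_TYPE = {"Y": "commercial", "H": "residential",
                   "X": "facility", "Z": "corporate"}
HYBRIDITY_BY_MATERIAL = {"steel": "low", "concrete": "low", "timber": "high"}

def assign_programs(towers):
    """towers: list of (name, configurative_type, material) tuples."""
    return [{"tower": name,
             "dominant_program": PROGRAM_BY_TYPE[ttype],
             "program_hybridity": HYBRIDITY_BY_MATERIAL[material]}
            for name, ttype, material in towers]

print(assign_programs([("T01", "Y", "timber"), ("T02", "H", "concrete")]))
```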

### **4.2 Dynamic Programs and Micro-structures**

In addition to the standard static program planning for each tower, we propose a second layer of dynamic programs to accommodate ever-changing activity needs. These programs take advantage of the porous structure that results from TO's distinctive design language. They are supported by a further layer of microstructure attached to the main tower structure. Depending on the time of day or the needs of user activity, these programs have movable parts and can thus change their shape language

<sup>7</sup> Corbusier (1920).

as well as their primary functions. Take the dynamic residential tower as an example (Fig. 6). It is supported by shelf-like structures that inhabit the main void of the residential towers. The shelf structures have removable floor plates and shear walls and can thus accommodate different types of residential use. The combination of the microstructures and the main structures allows the overall urban design to become more adaptable. Neighborhood relationships also become dynamic, changing according to time of day or population needs.

**Fig. 6.** Figure shows the design of microstructures for dynamic living.

# **5 Façade Development and Sunlight Optimization**

The façade system is, on the one hand, the climatic interface of the buildings, providing shading and ventilation; on the other hand, it reflects the internal functional activities.

We first set up specific façade types for different orientations according to their sunshine conditions: the northeast direction (yellow) mostly uses small balconies for residences; the southwest direction (blue) uses a shading-skin structure; and the transitional part (green) hosts public spaces such as outdoor platforms.

In terms of dynamic skin design, we summarized six kinds of façade units supporting different activities and then studied their specific shading capabilities. We then expressed the shading ability of each unit as a value between 0 and 1, which indicates the performance of the different façade systems in a more direct way; the lower the value, the weaker the shading ability, although a low value does not make a unit useless. Another parameter, openness, was also defined to test whether a façade can provide more visual contact between spaces, which can then be used for functions in large spaces such as public stages and indoor gardens (Fig. 7).

**Fig. 7.** Figure shows the final layout of the operable façades.
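Purely as an illustration of this scoring scheme, the catalog of façade units could be held as a small lookup structure and filtered by the two parameters described above. The unit names and all numeric values in the sketch below are hypothetical placeholders, not the figures used in the project.

```python
# Hypothetical lookup table: each unit type carries a shading value in (0, 1)
# and an openness value in (0, 1), as described in the text above.
facade_units = {
    "small_balcony":    {"shading": 0.3, "openness": 0.6},
    "shading_skin":     {"shading": 0.9, "openness": 0.2},
    "outdoor_platform": {"shading": 0.5, "openness": 0.8},
    # the remaining three unit types would follow the same pattern
}

def pick_units(min_shading, min_openness):
    """Return unit types that satisfy both thresholds, e.g. for a public stage."""
    return [name for name, v in facade_units.items()
            if v["shading"] >= min_shading and v["openness"] >= min_openness]

# example: units open enough for an indoor garden but still providing some shading
print(pick_units(min_shading=0.4, min_openness=0.7))
```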

### **6 Conclusion**

Our proposal attempts to challenge the traditional design process of urban development, in which extremes of oversimplification and overcomplication coexist in the urban scene. For instance, fast-developing cities have large zones of copy-paste buildings displaying a highly monotonous identity, alongside a city center where projects by different developers and designers create a highly differentiated scene.

By combining research-based design methodologies using catalogs and simulation with generative design methodologies that allow the TO semiology to develop into an interwoven built environment, we aim to create a new paradigm for urban design. This type of urban development is able to resolve the different design parameters shown above. The identities of the architecture are overlapped and intertwined. A variegated urban order of echoing identities appears to be the middle ground between the monotonous mundane and a jarring, competing complexity.

The final project adopts multiple design principles of parametricism. The ground topography is generated by hydraulic patterns based on the natural river flow and changes its connectivity in reaction to the rising water level. The platforms, as another iteration of the same hydraulic pattern, mediate between the tower volumes and the ground circulation. The skyscrapers are clustered based on the performative qualities of the topological optimization and the material explorations, providing flexible zoning opportunities and coreless programmatic distributions. Finally, the design of the kinetic façade is optimized based on a series of sunlight studies, differentiating the articulation of the south-facing and north-facing openings.

To conclude, [Symbios]City began formally with topological optimization, developed through studies on ecology, and concluded with our phenomenological explorations, aiming overall at a complex design project that unifies the perception of all scales of design (Fig. 8).

**Fig. 8.** Figure shows the final rendering of the project.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Using CycleGAN to Achieve the Sketch Recognition Process of Sketch-Based Modeling**

Yuqian Li and Weiguo Xu(B)

School of Architecture, Tsinghua University, Beijing, China

liyuqian17@mails.tsinghua.edu.cn, xwg@mail.tsinghua.edu.cn

**Abstract.** Architects usually develop ideation and conception by hand-sketching. Sketching is a direct expression of the architect's creativity, but 2D sketches are often vague, intentional and even ambiguous. In research on sketch-based modeling, making the computer recognize the sketches is the most difficult part. With the development of artificial intelligence, and of deep learning in particular, Convolutional Neural Networks (CNNs) have shown clear advantages in feature extraction and matching, and Generative Adversarial Networks (GANs) have made great breakthroughs in architectural generation, making image-to-image translation increasingly popular. Since building images are gradually developed from the original sketches, in this research we develop a system that maps sketches to images of buildings using the CycleGAN algorithm. The experiment demonstrates that this method can achieve the mapping from sketches to images, and the results show that the sketches' features can be recognized in the process. Through the learning and training of the sketches' reconstruction, the features of the images are also mapped back onto the sketches, which strengthens the architectural relationships in the sketch, so that the original sketch can gradually approach the building image and sketch-based modeling becomes possible.

**Keywords:** Sketch-based modeling · Sketch recognition · Image-to-image translation · CycleGAN

# **1 Introduction**

Concept design is the initial stage of architectural design and also the most important part of the whole process: once the concept is determined, the design direction is determined. Architects usually develop ideation and conception by hand-sketching, which is a direct expression of their creativity. With a computer-aided architectural design system, however, converting a sketch into a 3D model takes a lot of time. If the sketch could directly generate a computer-based architectural concept model that the architect can edit and develop, the design process would become far more efficient.

At present, sketch-based modeling is a relatively popular research direction. Compared with the traditional 3D software modeling method, the sketch in sketch-based modeling replaces the "Window, Icon, Menu, Pointer" (WIMP) interaction of traditional 3D software: the sketch expresses the designer's intention and then drives the modeling task. Since sketching is one of the architect's professional competences, this modeling method is very friendly to architects, and because it is easy to operate, the whole modeling process can be completed by one person alone.

However, for a sketch-based modeling system it is very difficult to understand the design intent expressed by a sketch; that is, realizing the feature mapping from 2D sketches to 3D models is one of the core difficulties of such a system. Differences in hand-sketching styles and the ambiguity of the sketch itself increase the difficulty of understanding it, so additional knowledge and corresponding methods need to be added to the modeling process to reduce this difficulty as much as possible. People tend to use simple sketches to express initial ideas and concepts and want to convey information with as few strokes as possible. Therefore, to realize the feature mapping from 2D sketches to 3D models, the first step is to achieve sketch recognition.

With the development of artificial intelligence, and of machine learning in particular, Convolutional Neural Networks (CNNs) have shown clear advantages in feature extraction and matching, and Generative Adversarial Networks (GANs) have made great breakthroughs in architectural generation, making image-to-image translation increasingly popular.

Since building images are gradually developed from the original sketches, in this research we develop a sketch-to-image translation system that maps the images' features onto the sketch; in the process of sketch reconstruction, the architectural relationships within the sketch are strengthened, thereby achieving the sketch recognition step of sketch-based modeling.

# **2 Related Works**

Sketch-based modeling is a research topic in computer graphics, and there are many related results. The earliest sketch-based modeling studies were based on contour-sketch modeling. Igarashi et al. (1999) proposed a method of inferring 3D geometric shapes by recognizing the contour curve of the sketch. Xu et al. (2014) developed the sketch-based True2Form modeling system, which uses selective regularization of 3D shape information such as curvature, symmetry, parallelism and other shape attributes. Bui et al. (2015) developed a method to generate 3D-looking shaded illustrations by recognizing the outline and shadow of the sketch. Xu et al. (2013) proposed the Sketch2Scene framework, which can automatically infer multiple scene objects from a hand sketch to generate a plausible 3D model scene. Huang et al. (2017) developed a deep convolutional neural network in which the features of the 2D sketch are computed as the parameters of the model; these parameters in turn produce multiple sketches similar to the input, and the user can then select an output shape or further modify the sketch to explore other shapes.

The above-mentioned studies put forward a variety of recognition methods for sketch-based modeling, which provide methodological references for our study. However, because of the researchers' computer science background, the results tend to be generic and not directly practical for architectural design. Architects are arguably the most suitable group to develop sketch-based modeling: they are well aware of the logic of architectural design, can understand the design intent of architectural sketches, and also have strong 3D spatial abilities.

Architects and scholars have of course tried to use machine learning and its algorithms for building generation tasks. For example, Matias del Campo used style transfer algorithms to generate building skins (2019) and to plan the urban fabric (2019). Weixin Huang from Tsinghua University and Hao Zheng from the University of Pennsylvania have also studied the generation of indoor units through the pix2pix algorithm (2018). These results have inspired architects' design work.

In this study, we attempt a sketch-to-image translation in order to support sketch-based modeling, which is also a study of architectural generation.

# **3 Methodology**

# **3.1 Network Architecture**

As mentioned above, architects have tried several algorithms for image-to-image translation, such as style transfer and pix2pix. The style transfer algorithm actually developed from the texture generation field combined with deep object recognition, so the core of the algorithm is still texture style. The pix2pix algorithm is an optimized version of the cGAN, and its data requirements are demanding: it needs paired data. In many tasks, however, paired training data are not available. The data in this study—sketches and images of buildings—form an unpaired set, equivalent to two modes of the same scene. For this kind of dataset, the CycleGAN algorithm relaxes the stringent data-pairing requirement of pix2pix (Fig. 1).

**Fig. 1.** The difference between paired data and unpaired data

CycleGAN presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. The goal is to learn a mapping G: X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y, using an adversarial loss. Because this mapping is highly under-constrained, CycleGAN couples it with an inverse mapping F: Y → X and introduces a cycle consistency loss to enforce F(G(X)) ≈ X (and vice versa) (Fig. 2).

**Fig. 2.** The network architecture of CycleGAN
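The objective described above can be summarised in a few lines of code. The following is a minimal PyTorch sketch of the generator-side loss, assuming externally defined generators `G_xy`, `G_yx` and discriminators `D_x`, `D_y`; the least-squares adversarial form and the weight `lambda_cyc = 10` follow the original CycleGAN paper rather than any setting reported here.

```python
import torch
import torch.nn.functional as F

def cyclegan_generator_loss(G_xy, G_yx, D_x, D_y, real_x, real_y, lambda_cyc=10.0):
    fake_y = G_xy(real_x)            # sketch -> building image
    fake_x = G_yx(real_y)            # building image -> sketch
    rec_x = G_yx(fake_y)             # cycle X -> Y -> X
    rec_y = G_xy(fake_x)             # cycle Y -> X -> Y

    # Adversarial terms (least-squares GAN formulation)
    pred_y, pred_x = D_y(fake_y), D_x(fake_x)
    adv = F.mse_loss(pred_y, torch.ones_like(pred_y)) + \
          F.mse_loss(pred_x, torch.ones_like(pred_x))

    # Cycle-consistency terms: F(G(X)) should stay close to X, and vice versa
    cyc = F.l1_loss(rec_x, real_x) + F.l1_loss(rec_y, real_y)
    return adv + lambda_cyc * cyc
```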

### **3.2 Data Preparation**

### **Principles of Data Collection**

Before data collection, we set out some principles. First, each sketch and each image must depict the same building; that is, there is a one-to-one correspondence between sketch and building image in the data. Although CycleGAN does not require this, we believed that such a dataset might improve the effectiveness of model training. Second, all the designs are well known and the sketches were made by the famous architects themselves. Third, the data collected should be extensive: because architects' sketching is subjective and the design techniques of architectural schemes are diverse, collecting a wider range of samples makes the dataset more comprehensive.

### **Data Collection**

Since it is difficult to collect architects' sketches and their corresponding images, the data that can be collected are limited. After screening and processing the collected data, a total of 200 items were selected, namely 100 sketches and 100 images.

### **Data Processing**

First, the collected data were normalized so that each picture is 256 × 256 pixels. After that, 160 items (80 pairs of samples) were used as training data and 40 items (20 pairs) as test data. The sketch dataset is placed in the trainA folder as the source data domain X, which corresponds to the target data domain Y; the image dataset is placed in the trainB folder as the source data domain Y, which corresponds to the target data domain X.
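For illustration, the folder convention described above could be prepared with a short script such as the one below; the directory and file names are assumptions for the example, not paths used by the authors.

```python
from pathlib import Path
from PIL import Image

def prepare(src_dir, dst_dir, size=(256, 256)):
    """Resize every JPEG in src_dir to 256 x 256 and write it into dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for p in sorted(Path(src_dir).glob("*.jpg")):
        img = Image.open(p).convert("RGB").resize(size, Image.BICUBIC)
        img.save(dst / p.name)

# trainA / testA hold sketches (domain X); trainB / testB hold photographs (domain Y).
for split, domain in [("trainA", "sketch_train"), ("trainB", "photo_train"),
                      ("testA", "sketch_test"), ("testB", "photo_test")]:
    prepare(f"raw/{domain}", f"datasets/sketch2building/{split}")
```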

### **3.3 Training Process**

CycleGAN has a ring structure with two generators, G (X → Y) and F (Y → X), and two discriminators, DX and DY. In the generators, because the images in this study are 256 × 256, nine residual blocks are used; in the discriminators, five convolution layers reduce the number of channels to 1, and average pooling finally reduces the spatial size to 1 × 1.
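As a rough sketch of the discriminator outline given above (five convolutions reducing the channels to 1, followed by average pooling to 1 × 1), one possible PyTorch implementation is shown below; the channel widths, kernel sizes and normalisation layers are assumptions, not the authors' exact settings.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        chs = [in_ch, 64, 128, 256, 512]
        layers = []
        for a, b in zip(chs[:-1], chs[1:]):          # four strided convolutions
            layers += [nn.Conv2d(a, b, 4, stride=2, padding=1),
                       nn.InstanceNorm2d(b),
                       nn.LeakyReLU(0.2, inplace=True)]
        layers += [nn.Conv2d(512, 1, 4, padding=1)]  # fifth convolution: channels -> 1
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # average-pool the patch responses down to a single value per image
        return torch.mean(self.net(x), dim=[2, 3])
```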

During training, X denotes the sketch domain and Y the building-image domain. A sketch-domain image is translated by generator G into the building-image domain and then reconstructed back to the original sketch-domain input by generator F; a building-image-domain image is translated by generator F into the sketch domain and then reconstructed back to the original building-image input by generator G. It is worth noting that CycleGAN

**Fig. 3.** The part process of the training

adds an identity mapping part: generator G uses sketches to generate building images, but if the input is itself a building image, then G should return an image that still belongs to the building-image domain. In addition, for training stability, historically generated fake samples, rather than only the currently generated ones, are used to update the discriminators (Fig. 3).
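The two training details mentioned above—an identity term and a history buffer of generated fakes—could be sketched as follows; the weight `lambda_idt` and the buffer size are assumptions for illustration.

```python
import random
import torch
import torch.nn.functional as F

def identity_loss(G_xy, G_yx, real_x, real_y, lambda_idt=5.0):
    # An image already in the target domain should pass through almost unchanged.
    return lambda_idt * (F.l1_loss(G_xy(real_y), real_y) +
                         F.l1_loss(G_yx(real_x), real_x))

class ImagePool:
    """Buffer of previously generated images used to update the discriminators."""
    def __init__(self, size=50):
        self.size, self.images = size, []

    def query(self, fake):
        if len(self.images) < self.size:
            self.images.append(fake.detach())
            return fake.detach()
        if random.random() > 0.5:                    # swap in an older fake half the time
            idx = random.randrange(self.size)
            old, self.images[idx] = self.images[idx], fake.detach()
            return old
        return fake.detach()
```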

# **4 Results**

Figure 4 shows that the training from sketch to building image has completed the sketch recognition, and that through the reconstruction training the features of the building images are mapped onto the sketches. This strengthens the architectural relationships in the sketch and allows the original sketch to approach the building images step by step.

**Fig. 4.** The results of the test training

### **4.1 Recognition of Sketch and Generation of Corresponding Building Image**

First, it can be seen from Fig. 5 that in the generation from sketch to building image the boundary of the sketch has been recognized. The training has also distinguished the buildings' exterior and interior views: the sky of the generated exterior images is rendered blue, while the generated interior images retain the original color state of the building images.

**Fig. 5.** Recognitions of sketches

**Fig. 6.** The building volume relationship

**Fig. 7.** The environmental relationship

**Fig. 8.** The horizontal comparisons

Second, in Fig. 6, the building volume relationship of the building image is well recognized and mapped onto the sketch. In more detail, the virtual-real relationship of the three building volumes has also been learned well.

Third, in Fig. 7, the environmental relationships of the building, such as shadow changes and the light transmission and reflection of windows, are well reflected in the generated image.

Also, a horizontal comparison in Fig. 8 of different sketches and the corresponding generated building images shows that drawing level affects the generation results: the simpler the sketch, the worse the generated building image, and the more detailed the sketch, the better the result.

### **4.2 Sketch Reconstruction**

Since CycleGAN contains an image reconstruction part, this is reflected in the output. By training on the features of the building images, a new sketch is reconstructed on the basis of the original sketch. Figure 9 shows that the reconstructed sketch maps certain features of the building images and strengthens the architectural relationships in the sketch.

**Fig. 9.** The reconstructed sketches

**Fig. 10.** The generations from building images to sketches

### **4.3 Building Images to Sketches**

Figure 10 shows that the generation from building images to sketches is also successful, and even better than the sketch-to-building-image results. The features of a sketch are relatively unified and more obvious—essentially a drawing in a single color. This suggests that if the features of the building images were more uniform, the final sketch-to-image results could also be better.

# **5 Conclusion and Discussion**

This study is a sketch-to-image translation based on CycleGAN. Through training on 160 items and testing on 40 items, the study has completed the mapping from sketches to building images. The results show that CycleGAN can achieve sketch recognition and reconstruction: training maps the features of the building images onto the sketch, which strengthens the architectural relationships in the sketch so that the original sketch gradually approaches the building image. The sketch reconstruction is also very consistent with the architect's cyclical workflow and the way designs are developed in the architectural design process.

Of course, the study still has some limitations. First, the amount of data is not sufficient. Second, the data in this study are complex and heterogeneous; if we added a single style, or a comparison between the sketches of one particular architect and the corresponding building images, we would be able to compare how data of different levels of complexity perform in the generation from sketches to building images.

**Acknowledgement.** This research is supported by the National Natural Science Foundation of China (No. 51538006).

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Exploration on Machine Learning Layout Generation of Chinese Private Garden in Southern Yangtze**

Yubo Liu1, Chenrong Fang1, Zhe Yang1, Xuexin Wang1, Zhuohong Zhou1, Qiaoming Deng1(B) , and Lingyu Liang2(B)

<sup>1</sup> State Key Laboratory of Subtropical Building Science, School of Architecture, South China University of Technology, Guangzhou, China

dengqm@scut.edu.cn

<sup>2</sup> School of Electronic and Information Engineering, South China University of Technology, Guangzhou, China

**Abstract.** Machine learning has recently been proved feasible and reasonable in the architectural field by extensive research, yet its potential is far from being tapped. Previous studies show that training a GAN with labelled samples can enable a computer to grasp the interrelationships of spatial elements and the logical relationship between spatial elements and the boundary. This study sets the learning object as the layout of private gardens in the southern Yangtze region, which has a higher complexity. Chinese scholars usually analyse private garden layouts based on their observation and experience. In this paper, based on the Pix2Pix model, we enable a computer to generate a private garden layout plan for given site conditions by learning from classic cases of traditional Chinese private gardens. In the experiment, taking the Lingering Garden as an example, we continuously adjust the labelling method to improve the learning effect. The finally trained model can quickly generate private garden layouts and aid designers in completing scheme design with a corpus of private garden elements. In addition, the process of training the GAN enables us to discover and verify some private garden layout rules that have not previously received attention.

**Keywords:** Machine learning · Generative design · Private garden in southern Yangtze

# **1 Introduction**

Since the beginning of the 21st century, artificial intelligence has entered a new era of integration. Machine learning, as the core technology of artificial intelligence, is also a focus of architects' attention. Most research applying machine learning to generative design focuses on using generative adversarial networks (GANs) to generate internal layouts for given boundary conditions. Studies in these fields indicate that training a GAN with labelled samples gives it a strong ability to learn the interrelationships of spatial elements as well as the logical relationship between spatial elements and the boundary. However, research on the spatial layout of traditional Chinese private gardens, which have more diverse spatial elements and more complex compositional relations, is still very limited.

Among traditional Chinese gardens, which form an important part of the precious historical and cultural heritage of all mankind, the private gardens in the southern Yangtze region epitomise the artistic achievements of garden making. Many Chinese landscape designers have tried to learn from their typical artistic spatial layouts to create unique experiences. Starting from machine learning methods, this paper therefore takes the private gardens in the southern Yangtze as its research object and explores ideas and methods for generating layouts that inherit their characteristics for given land-use conditions.

# **2 Background**

Private gardens in the southern Yangtze were built by professionals with artistic accomplishment in ancient China. Most of them were ingeniously designed and built on site to meet the spiritual needs of the host according to local conditions. It is the myriad changes in the combination of elements that create the private gardens' complex spatial arrangements. As a result, they have rarely been formally measured and generalized.

Among previous studies, most scholars tended to use descriptive language to analyze private gardens in the southern Yangtze on the basis of their observation and feelings. Chen [1956], in "Classical Gardens of Suzhou", typically represented the integration of literature, history and philosophy with Chinese gardens in a poetic way, and Peng [1986], in "Analysis of Classical Gardens", systematically analyzed the techniques and skills of traditional gardening art to interpret traditional garden space. These treatises extracted the artistic essence of traditional Chinese private gardens and laid an important foundation for further research. However, traditional studies of private gardens barely involve designing a new private garden and can hardly offer specific, effective help for generating new private garden space.

For such difficult-to-define problems, machine learning is able to perform statistics on empirical data and generate results through probability density estimation. Recently, relevant studies have applied GANs to layout generation and proved their effectiveness [1, 3, 5]. In these studies, the colour-block labelling method used to process samples effectively improves the quality of the layout generation.

Specific labelling methods in machine learning coincide with the salient learning points in human learning. From this point of view, improving the labelling method might be one of the breakthroughs that allows machine learning to handle more complex layout problems. Therefore, based on the pix2pix model and data expansion for the small sample set of experimental objects, this paper starts with the sample labelling method and gradually improves it according to the experimental results, so as to realize the intelligent generation of plan layouts with the characteristics of private gardens in the southern Yangtze. In this way, the research hopes, on the one hand, to deconstruct private gardens from the perspective of statistics and, on the other hand, to enhance comprehension of traditional Chinese private gardens through the design process and its verification.

# **3 Research Method**

The main process of exploration on machine learning layout generation of Chinese private garden in southern Yangtze is as follows:


# **3.1 Network Architecture**

The Pix2Pix model [4] used in this paper is a classic model that applies GAN to supervised image-to-image translation. The input of the generator is the private garden site boundary. The private garden layout plan output by the generator and the real plan we collected are then fed to the discriminator, which evaluates the probability that its input comes from the real samples. Through iteration, the generator constantly produces plans closer to the real samples in order to fool the discriminator, and the model converges through this continuous game between discriminator and generator. In the end, the generator is able to generate a layout with traditional private garden spatial characteristics for any given site plan.
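A condensed sketch of one such adversarial training step is given below in PyTorch; the networks `G` and `D`, the optimisers and the L1 weight `lambda_l1` are placeholders for illustration rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pix2pix_step(G, D, opt_G, opt_D, boundary, real_layout, lambda_l1=100.0):
    fake_layout = G(boundary)                       # site boundary -> garden layout

    # --- discriminator: real (boundary, layout) pairs vs. generated pairs ---
    opt_D.zero_grad()
    d_real = D(torch.cat([boundary, real_layout], dim=1))
    d_fake = D(torch.cat([boundary, fake_layout.detach()], dim=1))
    loss_D = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    loss_D.backward()
    opt_D.step()

    # --- generator: fool the discriminator while staying close to the real layout ---
    opt_G.zero_grad()
    d_fake = D(torch.cat([boundary, fake_layout], dim=1))
    loss_G = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) + \
             lambda_l1 * F.l1_loss(fake_layout, real_layout)
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```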

# **3.2 Dataset**

# **3.2.1 Collection**

Considering the sample size and the effect of machine learning, we selected cases of private gardens according to the following rules:


In this research, we collected a total of 35 samples (Fig. 1) of private gardens located in the Yangtze River delta area through the Internet and related books. Among them, 5 samples were reserved for testing the model.

### **3.2.2 Augmentation**

The sample size is too small for training the model, so the dataset needs to be augmented. Since the learning point is the layout relation of the private garden space elements,

**Fig. 1.** Part of selected private garden samples

the remaining 30 samples were flipped in four directions, as researchers have done before [6], to obtain a total of 120 samples for the experiment.
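The four-direction flip augmentation could be reproduced with a few lines of Pillow, as in the sketch below; the file naming is illustrative only.

```python
from pathlib import Path
from PIL import Image, ImageOps

def augment(path, out_dir):
    """Write the original image plus its horizontal, vertical and double flips."""
    img = Image.open(path).convert("RGB")
    variants = {
        "orig": img,
        "h": ImageOps.mirror(img),                     # horizontal flip
        "v": ImageOps.flip(img),                       # vertical flip
        "hv": ImageOps.flip(ImageOps.mirror(img)),     # both directions
    }
    for tag, im in variants.items():
        im.save(Path(out_dir) / f"{Path(path).stem}_{tag}.png")
```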

# **3.3 Processing and Labelling Based on Analysis**

Traditional Chinese private gardens are composed of four major elements—mountains, water, plants and buildings—creating a poetic and pictorial space as the context to express the subjective and objective needs of their owners in ancient China.

In previous studies [2, 7], scholars summarized several key points of the layout relationships among the four major elements. "Subordination and priority" means that in the complex spatial system of a private garden, one part of the space is designed as the main scenic area of the whole garden; "the contrast of space" means that the main scenic spots of a private garden are usually highlighted by first compressing and then releasing space; and "twists and turns", the most prominent feature of private gardens in the southern Yangtze, makes the garden more profound and winding through the connection of buildings and the use of curved corridors. How to accurately extract and highlight these design features from the samples to achieve better learning results is the focus of this research.

### **3.3.1 Sample Processing**

In order to enable the machine to learn the typical layout relationships directly, we redrew (Fig. 2.b) the private garden plan samples, which vary in clarity, on the basis of architectural knowledge in order to extract the key information about their spatial layout. The principles of redrawing are as follows:


# **3.3.2 Sample Labelling**

To label the samples, we first filled the main elements of the private garden with different colours. Then the limiting factors, such as the private garden boundary and the main entrance, were marked. After that, we marked the whole private garden area as either the central area or the site part, in order to distinguish the central area from the other parts of the garden.

The final samples (Fig. 2.c), paired one-to-one with the real site boundaries, were then input to the machine for training.

**Fig. 2.** Processing and labeling based on analysis of private garden layout
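Purely as an illustration of the colour-block labelling described above, the element classes could be encoded as an RGB lookup table used to paint the label images. The colours for architecture, walking flow, central landscape and special landscape space follow those mentioned later in Sect. 4, while the remaining values are assumptions.

```python
import numpy as np

LABEL_COLORS = {
    "architecture":      (0, 0, 0),        # black (Sect. 4)
    "walking_flow":      (255, 0, 0),      # red (Sect. 4)
    "central_landscape": (255, 255, 0),    # yellow (Sect. 4)
    "special_landscape": (128, 0, 128),    # purple (Sect. 4)
    "water":             (0, 0, 255),      # assumed
    "mountain":          (0, 128, 0),      # assumed
}

def paint(label_img, mask, element):
    """Fill the pixels selected by a boolean mask with the element's colour."""
    label_img[mask] = LABEL_COLORS[element]
    return label_img

# usage: label = paint(np.full((256, 256, 3), 255, np.uint8), water_mask, "water")
```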

# **4 Training and Analysis**

### **4.1 First Training**

In the first training, each input and output image pair from the dataset was labeled with color blocks representing certain functional areas (Fig. 3).

**Fig. 3.** Labeling rules for first training. An image pair example from dataset

A total of 30 samples were labeled as the dataset, and 7 of them were flipped horizontally for testing. The generated images were almost identical to the corresponding samples in the dataset—a sign of over-fitting, because the testing samples were too similar to the training samples. We therefore labeled 5 new samples for retesting (Fig. 4). The machine learned that the central landscape area (labeled yellow) was distributed around the water and that the walking flow (labeled red) was distributed around the central landscape area. However, the buildings (labeled black) did not form coherent blocks, their distribution was irregular, and the images were fuzzy.

#### **4.2 Second Training**

#### **4.2.1 Improvement**

Since training artificial intelligence for generation can be regarded as giving architectural education to a specific computer program, the changes in our training strategy drew on patterns from education.

**Fig. 4.** Testing results of first training (real\_A and real\_B are input image pair, fake\_B is generated image)

1. *Enlarge training dataset*

We quadrupled the size of the dataset by flipping the samples horizontally, vertically, and both horizontally and vertically.


We determined the core patterns of private garden layout and clearly defined the key learning points, including: the main entrance of the core region, the walking flow, the main landscape architecture, open space, and the formal and logical relationships with the boundaries.

Based on the core requirement of traditional private gardens to host and create natural views within limited urban space, we drew up the following rules:


In addition, there are mathematical and logical relations between the areas of the mountain, the water and the main landscape architecture, as well as graphical and logical relations between the shapes of the mountain and water and the boundary. The pix2pix model was able to learn these relations by itself according to statistical principles.

We highlighted the training key points in the labeling. Labels for mountains and the main landscape architecture were added. Meanwhile, we connected the architecture in the private garden's central area with the main walking pathway, hoping that the machine could learn the layout of buildings within the central area. A label for the special landscape space (purple) was also added; this is an important space formed by winding corridors and the boundary walls in many cases, so we distinguished these inter-spaces from other landscape spaces.

# **4.2.2 Labeling and Results**

(See Fig. 5.)

**Fig. 5.** Labeling rules for second training. An image pair example from dataset

We relabeled 30 image pairs, generating a dataset of 120 after augmentation, and labeled 5 samples for testing. There were great improvements in the results (Fig. 6): the overall generation was clearer. All 5 output images, however, contained some unreasonable landscape elements because of unclear rules in the dataset.

**Fig. 6.** Partial testing results of second training

1. *Corridors and private garden boundary*

The labeled special landscape space (tagged purple) was not generated. Besides, the main landscape architecture was always oriented north-south instead of east-west. We noticed that most samples in the dataset had no inter-space labeling and that few main landscape architectures were oriented east-west; after learning from many such samples, the machine was more inclined to produce layouts with no inter-space and with the main hall facing north-south.


We analyzed that because the pathways were distributed randomly, the machine may have identified those labels as random red pixels, leading to broken red lines.

The main problem was that the dataset was small while the differences between samples were noticeable—an insufficiency of consistency.

So far, the labeling had been based on real samples. Although we simplified the rules and adjusted the labeling, the diversity in the dataset was still noticeable, and the particularity of each sample could have a great impact on the result. Thus, we began to widen the gap between the real private garden cases and the samples labeled for the machine to learn, hoping to improve the learning results in this way. Real private garden cases represent real-life problems, while the samples prepared for the machine represent the word problems given to students in education, which are derived from real-life problems. Word problems can be modified to strengthen learning performance, and there is no need to insist on consistency with reality. So we began to artificially modify the samples in the dataset for consistency.

# **4.3 Third Training**

# **4.3.1 Improvement**

To solve the above problems, we relabeled the samples with the following rules:


# **4.3.2 Labeling and Results**

(See Fig. 7).

**Fig. 7.** Labeling rules for third training. An image pair example from dataset

In this test, we selected 5 site boundaries, labeled them several times, and generated output images through the model (Fig. 8). The testing results were in line with our expectations: they all learned the special landscape space between the veranda and the wall, and the layout relationship between the main landscape architecture, the mountain and the water was reasonable.


**Fig. 8.** Partial testing results of third training

### **4.4 Result Analysis**

The layout of the generated images was reasonable according to quantitative analysis, with the expected distribution of landscape architecture, mountain and water. Most of them followed the dataset samples, and some were properly adjusted through machine learning.

The main spatial elements in the generated images were relatively complete, although some lacked the main landscape architecture. The machine learned the logical relationship in the area proportions of the key elements (architecture, mountain, water), and it also learned the length proportion of the special landscape space (inter-space). Besides, the machine retained the large depth of view towards the landscape architecture.

# **5 Discussion**

This study implemented a model, based on Pix2Pix, that can generate a private garden layout from an input site plan with certain conditions. Through analysis of the experimental results and adjustment of the labelling method, vital rules of element layout in traditional private gardens were progressively summarized. In the future, the generated layouts can be constructed in three dimensions so as to evaluate the density rhythm of private garden space reflected in section, which would help further improve the labelling rules and learning effect. Meanwhile, more effective data expansion strategies need to be adopted to deal with the small-sample problem of private gardens and reduce the specificity of the generations.

Although this study focuses exclusively on the spatial pattern of the main area of private gardens in the southern Yangtze, and the generated results still remain at the outset of expressing layout relationships due to the large difference in the scale of the learning cases, it reveals the huge potential of machine learning and provides a new line of thought for designers studying the complex layout generation problems of these gardens, which used to be designed mainly by experience and are difficult to summarize with clear rules. More importantly, in the process of constantly revising the sample labelling method, we are also encouraged to discover and verify the private garden elements and layout rules that convey the interest and intention pursued by the ancient literati.

**Acknowledgments.** This research is supported by National Natural Science Foundation of China (No. 51978268, No. 51978269); Natural Science Foundation of Guangdong Province (2019A1515011045); Guangzhou Science and Technology Project; Graduate Education Innovation Program, South China University of Technology.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Command2Vec: Feature Learning of 3D Modeling Behavior Sequence—A Case Study on "Spiral-stair"**

Wen Gao1(B) , Xuanming Zhang2(B) , Weixin Huang1(B) , and Shaohang Shi1(B)

<sup>1</sup> School of Architecture, Tsinghua University, Beijing 100084, China

{gaow19,shish19}@mails.tsinghua.edu.cn, huangwx@tsinghua.edu.cn

<sup>2</sup> School of Software, Tsinghua University, Beijing 100084, China

zhang-xm20@mails.tsinghua.edu.cn

**Abstract.** In this study, we apply machine learning to mine the event logs generated during the modeling process for behavior-sequence clustering. The motivation is to develop cognitively intelligent 3D tools through process mining, which has been a hot area in recent years. We develop a novel classification method, Command2Vec, to perceive, learn and classify different design behaviors during the 3D-modeling-aided design process. The method is applied in a case study of 112 participating students on a 'spiral-stair' modeling task. By extracting the event logs generated in each student's modeling process into a new data structure, the 'command graph', we classified the behavior sequences from the final 99 valid event logs into groups using our novel Command2Vec. To verify the effectiveness of the classification, we invited five experts with extensive modeling experience to grade the results. The final grading shows that our algorithm performs well for groups with significant features.

**Keywords:** Process mining · Design cognition · 3D-modeling · Machine learning

# **1 Introduction**

Since Computer Aided Design was introduced into the design industries, 3D-modeling-aided design has become one of the most important ways of design creation. Through advanced 3D-modeling design tools, designers can create, edit, test and delete 3D geometries freely and rapidly. As a 'visual reasoning' process [1], the cognitive process of design has been greatly augmented, because 3D tools provide accurate 3D visual feedback that helps designers efficiently optimize their designs [2]. With the deep permeation of 3D modeling, design thinking is evolving from purely human behavior into human–3D-tool integrated behavior. Hence, a more cognitively intelligent human–computer interactive system has become an urgent quest of today's designers. The application of artificial intelligence technologies to design has always been a hot area [3]. Although many recent studies applying machine learning (ML) to the design process respond positively to this quest [4, 5], research on applying ML to cognitively intelligent 3D tools is still insufficient. Designing is believed to be a unique process [6], so developing ML that specifically adapts to the 3D-modeling-aided design process deserves more attention.

For an intelligent 3D tool to perceive, learn and classify design representations in a way similar to designers, classification is a primary ability, as it facilitates evaluation, comparison and decision making. In this study, we propose a novel embedding algorithm, Command2Vec (available at https://github.com/initxuan/Command2Vec), that allows a 3D tool to identify and classify different modeling processes. To let the 3D tool first 'perceive and learn' the process, a data structure called the 'command graph' was developed. As a directed graph extracted from the event log, the command graph reflects the relationships between commands and objects during modeling. The command graph was then embedded into vectors based on Word2Vec, a natural language processing algorithm, for further clustering by K-means++. The whole pipeline forms our Command2Vec. An experiment was conducted with 112 junior students on a 'spiral-stair' modeling task to explain and test the performance of Command2Vec. The popular software Rhinoceros 3D (Rhino) was chosen as the 3D tool in this study. Figure 1 illustrates our research workflow.

After the experiment, 99 participants' data were screened as valid for the study. By constructing command graphs from the event logs—with a total of 4728 nodes, 10059 edges and 72 different commands—99 distinct command graphs were obtained. Through Command2Vec, 6 groups were finally obtained as the clustering result. Five

**Fig. 1.** Work flow of research

modeling experts with extensive design experience were invited to rate the classification. High scores were obtained in groups with significant features. Finally, the results were quantitatively analyzed and discussed (Fig. 1).

# **2 Related Work**

Our study falls into the research area of process discovery within process mining. "The goal of process discovery is to learn a model based on an event log" [7]. Event logs are a kind of behavioral record occurring ubiquitously in our digital world; however, learning and predicting human behavior through process discovery is a very complicated research subject. Process mining in the design industries based on event logs has started to attract attention in recent years. Tao et al. pointed out that more attention should be paid to how the big data of cyber-physical models generated in the product design process can better serve design management across the entire product lifecycle [8]. In the design field, several works have mined the event logs generated by such cyber-physical models. Yarmohammadi et al. attempted to characterize modelers' performance over time from BIM logs using a sequence-based mining algorithm [9]. Pan and Zhang used BIM logs as a data source to explore designers' preferences and productivity patterns to improve management efficiency, proposing a novel clustering model for this exploration [10]. Other works by Pan et al. include knowledge discovery and designer grouping using clustering algorithms based on BIM logs [11, 12].

# **3 Methodologies**

### **3.1 Data Preparing**

In order to let computers 'understand' human modeling behavior, a novel data structure is proposed. In this study, we adopt a directed graph to represent the behavior sequence in 3D-modeling-aided design; it is extracted from the event log generated by the user's modeling process. Figure 2 illustrates the proposed data structure. The nodes of the directed graph are of two kinds: command nodes and object nodes. A command node is named by the sequential number of the command together with the command name operated by the user; an object node is named by the sequential number in which the 3D model object is generated. A command node and an object node are connected by a directed edge representing their cause-and-effect relation. For example, as shown in Fig. 2(a), the first command generates the first object; the first object triggers the second command, which then generates the second and third objects, and so on. From such a directed graph, the computer can find the command path of any generated object, as well as the object path leading to any command. This data structure can largely reflect the cognitive activity during 3D modeling (Fig. 2).

In the case study, we observed that when people were asked how to complete a modeling task, they tended to answer by describing the operation process with key commands. On this basis, we simplify the command-object graph (Fig. 2(a)) by extracting the subgraph of command nodes, obtaining the final command graph (Fig. 2(b)). During this extraction, the subgraphs of deleted objects and commands that generate no objects are excluded. The command graph then becomes our data source for the subsequent feature learning and clustering.

**Fig. 2.** The process of event log extraction into command graph (a) command-object graph, (b) extracted subgraph: command graph.
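A sketch of how the command-object graph and its command subgraph could be built with networkx is given below; the event-log field names and node-naming convention are assumptions made for illustration.

```python
import networkx as nx

def build_command_object_graph(events):
    """events: list of dicts like {"index": 3, "command": "Circle",
       "inputs": [obj_ids], "outputs": [obj_ids]} parsed from the Rhino log."""
    g = nx.DiGraph()
    for e in events:
        cmd = f'{e["index"]}_{e["command"]}'
        g.add_node(cmd, kind="command")
        for o in e["inputs"]:          # existing object -> command that uses it
            g.add_edge(o, cmd)
        for o in e["outputs"]:         # command -> objects it generates
            g.add_node(o, kind="object")
            g.add_edge(cmd, o)
    return g

def extract_command_graph(g):
    """Keep only command nodes; connect c1 -> c2 if an object made by c1 feeds c2."""
    cmds = [n for n, d in g.nodes(data=True) if d.get("kind") == "command"]
    cg = nx.DiGraph()
    cg.add_nodes_from(cmds)
    for c1 in cmds:
        for obj in g.successors(c1):
            for c2 in g.successors(obj):
                cg.add_edge(c1, c2)
    return cg
```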

### **3.2 Embedding**

### **3.2.1 Algorithm Comparison**

Having obtained the command graph that represents the designer's modeling behavior, we developed Command2Vec, a novel embedding algorithm based on Word2Vec, to learn features of the command graph.

We first applied Node2Vec [14] and Graph2Vec [15], two graph-based embedding algorithms, to embed the command graphs. However, the clustering results were not explainable, either within the resulting groups or between them. Because these two algorithms are sensitive only to the topology of the graph, not to its semantics and sequence, we turned to the Word2Vec embedding algorithm used in natural language processing (NLP).

However, the results of learning the entire network path with Word2Vec were still inexplicable to the evaluators. Our assumption was that this was caused by highly frequent "noise commands" in the modeling process, such as gumball transform, drag and join, which appeared in almost every participant's command set. We therefore further processed the input command graphs by extracting the top 7 commands by out-degree ranking and forming them into key-command sentences. The key-command sentences performed well, confirming the assumption. The specific algorithm is described below.

### **3.3 Command2Vec**

Let $G = \{G_1, G_2, \ldots, G_n\}$ denote the set of all command graphs, where $G_i = (V_i, E_i)$ is the $i$-th person's command graph, $V_i$ its node set and $E_i$ its edge set. First, $E_i$ is used to compute the out-degree $OD_{v_j}$ of each command node $v_j \in V_i$ in the command graph $G_i$. The command nodes are then ranked by out-degree, and nodes with the same out-degree are ordered by the sequence in which they occur, giving an ordered array $N_i$ containing all command nodes $v_j$. The same processing is performed on all command graphs, yielding the ordered arrays $N = \{N_1, N_2, \ldots, N_n\}$, where $N_i$ corresponds to the $i$-th command graph.

$N_i$ is then screened according to a threshold parameter $\theta$ given as input: the first $\theta$ command nodes $v_j$ ($j \in [1, \theta]$) in $N_i$ are selected to form a new sequence $N'_i$. To avoid interference with the final result, the two commands "Redo" and "Delete" are filtered out during this screening. All new sequences form a new set $N' = \{N'_1, N'_2, \ldots, N'_n\}$.

Since each command node in a command graph represents a specific command of the 3D tool, all elements in the set $N'$ are sequences with specific meaning. When commands with the same name appearing at different positions are regarded as the same command, the number of distinct commands that constitute $N'$ is limited.

Let $C = \{c_1, c_2, \ldots, c_m\}$ denote the set of distinct commands that constitute $N'$, where $c_i$ ($i \in [1, m]$) is a specific command. Let $f : C \to \mathbb{R}^d$ be the mapping function from a command to its feature representation, where $d$ is a parameter specifying the dimension of the feature vector; equivalently, $f$ is a matrix with $|C| \times d$ parameters, whose $i$-th row is the embedding vector of command $c_i$. We use the Skip-gram architecture [16] to learn $f$ from the set $N'$. After learning the embedding vector of each command, the embedding vector of the $i$-th person's command graph is obtained from the commands contained in $N'_i$ using an out-degree-weighted average (Eq. (1)).

$$\mathrm{Vec}_i = \frac{1}{\sum_{c'_j \in N'_i} OD_{c'_j}} \left( \sum_{c'_j \in N'_i} OD_{c'_j} \times f\!\left(c'_j\right) \right) \tag{1}$$

where $OD_{c'_j}$ denotes the out-degree of the command node corresponding to command $c'_j$ in the $i$-th command graph.
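A compact sketch of this pipeline—ranking commands by out-degree, keeping the top $\theta$, training a Skip-gram Word2Vec model (gensim ≥ 4) on the resulting key-command sentences, and averaging the command vectors weighted by out-degree as in Eq. (1)—might look as follows. The node-naming convention (reused from the graph sketch above) and the hyper-parameters are assumptions, not the released implementation.

```python
import numpy as np
from gensim.models import Word2Vec

def key_sequence(cg, theta=7, drop=("Redo", "Delete")):
    """Rank command nodes of one command graph by out-degree and keep the top theta."""
    ranked = sorted(cg.nodes, key=lambda n: cg.out_degree(n), reverse=True)
    names = [n.split("_", 1)[1] for n in ranked]      # "12_Sweep1" -> "Sweep1"
    return [c for c in names if c not in drop][:theta]

def command2vec(command_graphs, theta=7, d=32):
    seqs = [key_sequence(cg, theta) for cg in command_graphs]
    w2v = Word2Vec(seqs, vector_size=d, window=3, min_count=1, sg=1)   # Skip-gram
    embeddings = []
    for cg, seq in zip(command_graphs, seqs):
        # out-degree weight of each kept command (summed over nodes sharing the name)
        od = np.array([sum(cg.out_degree(n) for n in cg.nodes
                           if n.split("_", 1)[1] == c) for c in seq], dtype=float)
        mat = np.stack([w2v.wv[c] for c in seq])
        embeddings.append((od[:, None] * mat).sum(axis=0) / od.sum())  # Eq. (1)
    return np.stack(embeddings)
```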

### **3.4 Clustering**

The purpose of using a clustering algorithm is to test how effective our novel embedding method is. The K-means++ clustering algorithm [17] and t-SNE [18] were applied to cluster the embeddings and visualize the results.
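A corresponding sketch of the clustering and visualisation step, assuming scikit-learn and the six clusters reported later, is given below.

```python
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def cluster_and_project(vectors, k=6, seed=0):
    """K-means++ cluster labels plus 2-D t-SNE coordinates for plotting."""
    labels = KMeans(n_clusters=k, init="k-means++", n_init=10,
                    random_state=seed).fit_predict(vectors)
    coords = TSNE(n_components=2, perplexity=30,
                  random_state=seed).fit_transform(vectors)
    return labels, coords
```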

# **4 Experiment**

An experiment was conducted with 112 junior students majoring in architecture on a 'spiral-stair' modeling task, in order to collect event logs from different individuals pursuing the same goal. The chosen 3D-modeling software was Rhino version 6. The task requirements given to the participants are shown in Fig. 3(a); the 'spiral stair' is a well-defined architectural problem with certain design constraints [13]. The majority of participants were beginners in Rhino. We deliberately defined a clear and short modeling task in order to obtain modeling process data that are not overly complex and remain comparable, and we chose beginners to ensure the authenticity and diversity of the data, since sophisticated modelers tend to converge on similar modeling solutions.

Using the Rhino API, a background plugin recording both instructional and geometric event logs was developed to run on Rhino 6. Event logs were collected command by command during modeling (Fig. 3(b)). Each command record includes basic information such as the command's name, its beginning and ending time, and its result. The GUIDs of all Rhino objects involved in the command's history were also recorded. For instance, the command 'Circle' (center, radius) requires a center coordinate and a radius to generate a circle curve in Rhino; if the user picks the center by clicking on an existing object, that object's GUID is recorded. In this way, the relationships between a command and its related 3D objects can be 'perceived' by the computer.

In addition to the event log, a screenshot of the active Rhino viewport was saved for every user instruction (Rhino command) during each modeling process. The screenshots were collected as a sequence of images named by the command's index and name (Fig. 3(c)).

**Fig. 3.** Requirement information of the experiment (a) 'spiral-stair' task (b) participant modeling (c) records of event log and sequence of screenshots of the active viewports

# **5 Results**

### **5.1 Experiment Results**

In this case study, 99 of the 112 participants were screened as valid for further analysis. Data were excluded if: 1. recording did not start from the beginning or failed entirely; 2. the model was not started from an empty file; or 3. the data were incomplete for other reasons. Using our proposed method, a total of 4728 nodes, 10059 edges and 72 different commands were extracted from the event logs, and 99 distinct command graphs were created. Figure 4(a) shows a variety of command graph examples. Through our novel embedding algorithm, Command2Vec, the behavior sequences of the 99 participants' modeling processes were clustered into 6 groups, as shown in Fig. 4(b).

# **5.2 Evaluation**

In this study, an external analysis method was used to evaluate the effectiveness of the clustering results. Five architectural designers with more than 5 years of design experience were

**Fig. 4.** (a) examples of randomly picked command graphs in the experiment and (b) clustering result of all embedded command graphs from 99 effective data.

invited to evaluate the results. The evaluation criteria cover two dimensions: the similarity within the group being evaluated, and the difference between that group and the rest of the groups. Based on a seven-point grading system, the final ratings are shown in Table 1. Each participant's full set of screenshots from the experiment was compiled into a video for the evaluators to watch and rate. By random sampling, each evaluator only needed to watch a portion of the videos, while the whole set of videos was covered and samples overlapped between evaluators. This method greatly reduced the time needed for evaluation.

**Fig. 5.** Typical example screenshots with key commands in G1, G4, G5 and G6.

According to the evaluation results, the five experts gave high grades to G1, G4, G5 and G6 on the dimension of 'similarity within group' (Fig. 5). Group 1 shows a significant feature of generating steps by sweeping a contour into a surface. Group 4's modeling behavior is characterized by using 'Block' to edit the stair step. Group 5 shows a significant feature of constructing the steps one by one in ascending order. Participants in G6 show strong similarity in construction phasing, moving from all 'stair steps' to all 'platforms' to all 'fences' to all 'handrails'. This indirectly proves that Command2Vec has learned some significant features. Groups 2 and 3, however, are more varied than the other four groups.


**Table 1.** Returned evaluation.

The co-existing commands (Table 2) and their out-degree rankings (Fig. 6) were analyzed for each group. The top three co-existing commands in G1 and G4 reflect modeling behaviors organized around key commands such as 'sweep1' and 'block'. The median out-degree ranking of each group's top three frequently co-occurring commands was also around 3, although with wide ranges.

**Table 2.** Top 3 frequently coexisting commands per group.


**Fig. 6.** Box-plot of out-degree rankings for each group's top 3 frequently coexisting commands.

### **6 Conclusion and Discussion**

This study investigated the application of machine learning to enable 3D tools to perceive, learn and classify modeling behaviors. An experiment was conducted to collect 3D-modeling event logs from 112 junior students. Command and object information was retrieved from the event logs to form command graphs. A novel algorithm, Command2Vec, was developed based on Word2Vec for graph embedding, and the resulting embeddings were clustered into six groups. An evaluation was carried out by five experienced designers, who watched and graded videos compiled from each participant's complete modeling screenshots. Compared with the refined input data, the evaluation videos included trial-and-error sub-processes and 'noise commands', which restored the modeling scenario more realistically and increased the credibility of the external evaluation results.

We chose feature learning methods from NLP for data mining because modeling operations resemble human language to a certain extent, for example in the sequence and hidden structure of the commands issued during modeling. Command2Vec performed well in learning the features of key commands and procedures in the 'spiral-stair' task.

A command-object graph has a higher data dimension than the command graph used in this study; how to embed a binary command-object graph carrying both semantic and geometric information remains a question for future work. In terms of evaluation, a sample of five experts is small, and grading by watching many videos is time-consuming. A better sampling method that allows a larger number of experts to take part in the evaluation is a promising direction for improvement.

### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Exploring in the Latent Space of Design: A Method of Plausible Building Facades Images Generation, Properties Control and Model Explanation Based on StyleGAN2**

Shengyu Meng1,2(B)

<sup>1</sup> University of Innsbruck, Innrain 52, 6020 Innsbruck, Austria Shengyu.Meng@student.uibk.ac.at <sup>2</sup> Guangxi Polytechnic of Construction, Nanning, Guangxi, China

**Abstract.** GANs have been widely applied in research on architectural image generation. However, the quality and controllability of the generated images, and the interpretability of the models, can still be improved. In this paper, by implementing the StyleGAN2 model, plausible building façade images are generated without conditional input. In addition, by applying GANSpace to analyze the latent space, high-level properties can be controlled for both generated images and novel images outside the training set. Finally, the generation and control process is visualized with image embedding and PCA projection, which enables unsupervised classification of the generated images and helps to explain the correlation between the images and their latent vectors.

**Keywords:** Building façade generation · Architectural generative design · Image high-level property control · GAN · GAN explanation · StyleGAN · Latent space

# **1 Introduction**

With the emergence of Generative Adversarial Network (GAN) based image generation methods in recent years, many attempts have been made to apply GANs to architectural image and drawing generation research [1]. However, for the task of generating realistic building façade images, most attempts face challenges such as the quality and controllability of the generated images and the interpretability of the model.

These challenges stem from various limitations, such as the performance of the selected GAN model, the size of the training dataset, and the understanding of the latent space. In this paper, by training the state-of-the-art GAN-based image generation model StyleGAN2 [2] on a high-resolution building façade image dataset, and by exploring its latent space with PCA and GANSpace analysis [3], we address the above challenges to different extents.

In summary, the main functions and contributions of this paper are:


# **2 Related Work**

# **2.1 Image Generation Research via GAN in Computer Science**

Generative Adversarial Networks (GANs) are a neural network architecture consisting of a generator and a discriminator, and have shown the potential to generate novel image instances from the learned distribution of a training set [1]. Recently, research derived from GANs has become a focus of image generation in computer vision. This research can broadly be classified into supervised and unsupervised learning structures. Supervised GANs require conditional input in both training and inference, for example Pix2Pix, Pix2PixHD and GauGAN (which require paired training sets) and CycleGAN (which requires unpaired training sets of similar content) [4–7]. Because supervised GANs need relatively small training sets and fewer training resources, and achieve high-quality output when the inputs are appropriate, most architectural image generation research is based on them. However, their performance and applicability in practical workflows are limited precisely because conditional inputs are required. On the other hand, unsupervised GAN models such as DCGAN, BigGAN and StyleGAN require much larger training sets (normally in the millions) and more training resources, and have therefore been used in fewer studies [2, 8–11]. However, because unsupervised GAN models can generate diverse outputs without conditional input, they have more potential for application in real tasks. In addition, the characteristics of their latent spaces open the possibility of further model explanation and semantic editing of generated images [3, 11].

# **2.2 Plan Drawing Generation Research**

Most research on generating architectural plan drawings is based on supervised GAN models. Hao Zheng is one of the early researchers in this area. In 2018, he applied a conditional GAN, Pix2Pix, to show that building plans, urban plans and satellite images of cities could be generated from conditional inputs such as footprints or color-pattern images [12]. In subsequent research, he generated plausible apartment plans and explained the working principles [13–15]. In 2019, Stanislas Chaillou developed ArchiGAN, also based on Pix2Pix, which could generate entire apartment building plans from building footprints [16]. In 2021, this line of work was extended to larger-scale plan drawings: Liu et al. applied Pix2Pix to generate campus layouts from a given campus boundary and surrounding roads [17], and Pan et al. applied GauGAN to generate community plans from similar conditional inputs [18]. The output images of the above research were generally convincing when appropriate conditional inputs were provided. However, plan drawings are relatively simple to generate compared with complex building façade images.

### **2.3 Building Facades and Other Perspective Architectural Images Generation Research**

As with plan drawing generation, most previous research on building façade and other perspective architectural image generation required conditional inputs. In 2017, in the original Pix2Pix paper, Isola et al. generated novel street scenes and building façade images, but required street views and refined color labels as paired image inputs [4]. In 2019, Kyle Steinfeld developed GAN Loci on both Pix2Pix and StyleGAN; the Pix2Pix version required a depth map as conditional input, while the StyleGAN version was only trained as an unrefined 512-pixel-square instance due to limited computing resources and data [19]. Kelly et al. proposed FrankenGAN, which can generate 3D building models with detailed façade textures but requires a massing model as input [20]. Mohammad et al. attempted to generate novel building elevation designs from AI-generated datasets, but only obtained low-resolution grayscale images [21]. In 2020, Chan et al. attempted to generate building façade images from hand sketches, but only obtained low-resolution output due to a small dataset and a small GAN architecture [22].

In contrast to the previous research, in 2019 Bachl et al. developed City-GAN to synthesize novel city images from random input by learning from a large street-view dataset. City-GAN builds on the unsupervised DCGAN model, feeding in additional label information to control the style of the generated city images and to allow simple interpolation between styles; nevertheless, the generated images were still of limited quality and resolution [23]. Chen et al. proposed another unsupervised model, embedGAN, which explored the properties of the latent space [24]. They embedded an interior image into the latent space as a starting point and then guided the latent walk with a pretrained classification network to regenerate the image with different decoration materials and styles. However, only images from the training set could be used, and the image quality was not good enough.

### **3 Methodology**

In this paper, the state-of-the-art GAN-based image generation model StyleGAN2 [2] is applied in the experiment, trained on a set of 9772 building façade images at 1024 × 1024 resolution. Because StyleGAN2 generates images from randomly sampled vectors in a high-dimensional latent space, dimensionality reduction, clustering and image embedding are used to explore and visualize the relations between the generated building façade images and their latent vectors. Specifically, by applying principal component analysis (PCA) to the intermediate latent space W of the StyleGAN2 model [3], this paper achieves high-level property control of the generated building façade images. In addition, even though StyleGAN2 has no encoder network, novel building façade images (from outside the training set) are projected into the existing latent space with the help of a VGG16 pretrained perceptual network, which locates the latent vector that generates the image most similar to the target [25]. Once the projection is complete, the novel image can be controlled in the same way as a generated one.

# **3.1 Training Building Facades Generation Model by StyleGAN2**

# **3.1.1 Introduction of StyleGAN2**

StyleGAN2 is a state-of-the-art GAN-based image generation model, upgraded from StyleGAN and proposed by Nvidia in 2020 [2, 11]. Its generator structure differs from most GAN models and provides better performance and interpretability. Most GAN models, such as Pix2Pix and CycleGAN, use an image encoder to encode the input image as a latent vector, which is then the direct input of the image synthesis network (decoder) [4, 7]. This structure requires images as input in order to generate other images and potentially limits model performance, because the distribution of the input images may not match that of the output images [11].

The style-based generator of StyleGAN2 avoids using an image as input. Its synthesis network g begins with a learned constant and passes through 18 layers to output a 1024-pixel-square image. In each layer, a noise input and a latent vector w are injected to adjust the style and content of the generated image. The latent vector w is an intermediate output in a 512-dimensional latent space W, converted from a vector z in a randomly sampled 512-dimensional space Z by an 8-layer trainable fully connected network [11].
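
A minimal PyTorch sketch of the mapping structure just described (a random 512-dimensional z passed through 8 fully connected layers to an intermediate 512-dimensional w); this is a simplification of the real StyleGAN2 mapping network, which additionally normalizes z and uses equalized learning-rate tricks:

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Maps a random latent z in Z (512-d) to an intermediate latent w in W (512-d)."""
    def __init__(self, dim=512, num_layers=8):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers += [nn.Linear(dim, dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

mapping = MappingNetwork()
z = torch.randn(4, 512)   # four randomly sampled latent codes in Z
w = mapping(z)            # corresponding intermediate latents in W
print(w.shape)            # torch.Size([4, 512])
```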

The improvements of the style-based generator in StyleGAN2 bring the following new features, which are the foundation of the further research in this paper [11].


# **3.1.2 Training Process**

In this paper, an open-source architectural-style image dataset was first combined with another 6000 building façade photos and renderings downloaded from the internet [26]. Repeated, non-architectural and low-resolution images were then removed, and all remaining images were converted to JPG format with RGB channels at 1024 × 1024 pixels. The final training set contains 9772 architectural façade images of various styles. Training used config-f (1024 × 1024 resolution) with the mirror-augment option enabled, ran on a single NVIDIA Tesla V100 with 16 GB of memory, and continued for about 816 h, up to 12,240 kimg.

# **3.1.3 Generation Examples**

After training, this StyleGAN2 model instance can generate plausible building façade images at 1024 × 1024 resolution from random seeds (Fig. 1). The generated images are similar to those in the training set, but show mixed features from different examples rather than simple repetition (Fig. 2).

However, some details in the generated images are still blurry, mismatched or missing. This may be due to the relatively small dataset, the insufficient original resolution of some training images (which were enlarged), and limited training time.

**Fig. 1.** Example of generated building facades images of the experiment in this paper (curated)

#### **3.2 Exploration and Explanation of Latent Space**

### **3.2.1 Visualizing High-Dimension Latent Space by PCA**

The style-based generator takes an intermediate latent vector w from the intermediate latent space W as input. W is remapped from a 512-dimensional, randomly sampled latent space Z by 8 layers of trainable fully connected networks. To visualize the distributions of W and Z, principal component analysis (PCA) was used to reduce the dimensionality and project the vectors in both spaces onto a 2D figure. PCA first analyzes the distribution of the high-dimensional latent space, then projects the vectors orthogonally along the principal axes into the low-dimensional space, preserving the main features of the high-dimensional space [27].

To explore both latent spaces, 2000 vectors were randomly sampled in Z, remapped into W, and finally used to generate building façade images. The vector distributions in W and Z were projected by PCA onto a 2D figure (Fig. 3). Because the vectors in Z are randomly sampled, their distribution is almost spherical. In contrast, the distribution in W has a distinctive shape, which may reflect the feature distribution of the image contents.
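
A sketch of this visualization step with scikit-learn; the mapping function below is only a stand-in for the trained model's 8-layer Z-to-W network:

```python
import numpy as np
from sklearn.decomposition import PCA

def mapping_fn(z):
    # Stand-in for the trained StyleGAN2 mapping network (Z -> W); not the real model.
    return np.tanh(z @ np.random.randn(512, 512) * 0.05)

z = np.random.randn(2000, 512)                  # 2,000 random samples in Z
w = mapping_fn(z)                               # corresponding intermediate latents in W

z_2d = PCA(n_components=2).fit_transform(z)     # nearly isotropic ('spherical') point cloud
w_2d = PCA(n_components=2).fit_transform(w)     # shaped cloud reflecting the learned distribution (Fig. 3)
```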

**Fig. 2.** Generated building facades images examples comparing with dataset and other models.

**Fig. 3.** Vectors distributions in latent space Z and W

### **3.2.2 Explanation of the StyleGAN2 Model: Image Embedding and Clustering in Latent Space**

To test this hypothesis and visualize the correlation between the images and their latent vectors w in W, 2000 generated image examples were embedded at the projected locations of their corresponding w vectors (Fig. 4). To avoid excessive overlap, only about 10% of the image thumbnails are shown. In addition, the unsupervised K-means clustering algorithm was applied to cluster the w vectors into four types, marked by the colors of the dots and of the thumbnail frames.
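
A sketch of the clustering and embedding step, assuming the 2000 sampled w vectors have been saved to a (hypothetical) file:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

w = np.load("sampled_w_vectors.npy")                             # hypothetical dump of the 2,000 w vectors
labels = KMeans(n_clusters=4, random_state=0).fit_predict(w)     # four clusters, as in Fig. 4
w_2d = PCA(n_components=2).fit_transform(w)

# Scatter the projected vectors colored by cluster; thumbnails of the corresponding generated
# images would be drawn at roughly 10% of these positions to limit overlap.
plt.scatter(w_2d[:, 0], w_2d[:, 1], c=labels, s=4, cmap="tab10")
plt.show()
```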

It can be observed that images in the same cluster share similar features. Moreover, some features change linearly along certain directions; for example, in Fig. 4 the building height descends from top to bottom. This suggests the hypothesis that a generated image's high-level properties can be controlled by moving its vector w along certain principal axes.

### **3.3 High-Level Property Control**

### **3.3.1 GANSpace Method**

The above hypothesis was confirmed by the GANSpace research [3]. In that work, the principal axes Vn are first computed by analyzing the latent space W via PCA (the maximum number of axes equals the dimension of the latent space, 512); the modified vector w' is then computed from the original vector w by Eq. 1 below, where x is a scale parameter chosen by the user [3]:

$$\mathbf{w}' = \mathbf{w} + x \cdot V_n \tag{1}$$
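
In code the edit is a single vector addition. A sketch assuming the principal axes come from scikit-learn's PCA over sampled w vectors (rows of `pca.components_`); the sampled latents below are random placeholders, not the trained model's:

```python
import numpy as np
from sklearn.decomposition import PCA

w_samples = np.random.randn(2000, 512)      # placeholder for w vectors sampled from the trained model
pca = PCA(n_components=10).fit(w_samples)   # first 10 principal axes of latent space W
V = pca.components_                         # shape (10, 512); V[n] is principal axis n

def edit(w, n, x):
    """Move latent w along principal axis n by the user-chosen scale x (Eq. 1)."""
    return w + x * V[n]

w_edited = edit(w_samples[0], n=0, x=10.0)  # e.g. sweep x over [-25, 25] as in Fig. 5
```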

### **3.3.2 High-Level Property Control**

In this paper, the control process along principal axis 0 is visualized by setting a series of equally spaced parameters (Fig. 5). Because axis 0 is the foremost axis of the latent space W, it should capture the most significant source of diversity in the training set, ranging from high-rise modern buildings to low-rise traditional residential houses. The

**Fig. 4.** Image embedding with corresponding vectors in latent space W

modified images show a gradual change as w moves along axis 0, and exhibit features similar to nearby images as they pass them. In addition, when the modified w approaches the border of W, the generated images become implausible, because the StyleGAN2 model has not been trained sufficiently in those regions.

**Fig. 5.** Visualizing high-level property control along principal axis 0 with the scale parameter in the range [−25, 25].

More examples controlled by different principal axes are shown in Fig. 6. Ideally, each axis would control one significant feature. However, in this experiment the features remained only partly disentangled, which may be due to the insufficient quantity and diversity of the training set images.

**Fig. 6.** Examples of high-level property control along principal axes 2 and 5.

### **3.4 Project Novel Image into Existing Model Instance**

### **3.4.1 The Projection Method**

An image's latent vector w in the StyleGAN2 model is needed in order to control its high-level properties. However, because StyleGAN2 has no encoder network, a novel image (outside the training set) cannot be directly encoded as w. To solve this problem, the Image2StyleGAN algorithm employs a pretrained VGG16 perceptual model. In detail, a latent walk starts from the average w; the VGG16 model is then used to compute the loss between the image generated from the current w and the target image; finally, gradient descent guides the latent walk toward the w that generates the image most similar to the target novel image, within the latent space W of the existing model instance [25].
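
A schematic PyTorch sketch of this projection loop, assuming a trained synthesis network `G` that maps w to an image and using torchvision's VGG16 features as the perceptual metric; the real Image2StyleGAN procedure also optimizes per-layer w+ codes and mixes in pixel-level losses:

```python
import torch
import torchvision.models as models

vgg = models.vgg16(pretrained=True).features[:16].eval()   # early VGG16 layers as a perceptual feature extractor
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(a, b):
    """Distance between two (VGG-normalized) image tensors in feature space."""
    return torch.nn.functional.mse_loss(vgg(a), vgg(b))

def project(G, target, w_avg, steps=500, lr=0.01):
    """Find the latent w whose generated image best matches the target novel image."""
    w = w_avg.clone().requires_grad_(True)       # start the latent walk from the average w
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        loss = perceptual_loss(G(w), target)     # compare generated and target images
        opt.zero_grad()
        loss.backward()                          # gradient descent guides the latent walk
        opt.step()
    return w.detach()
```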

### **3.4.2 Projection and Control of Novel Image**

In this paper, a white vacation house from outside the training set was projected into the W space, and the whole process can be observed in the PCA projection (Fig. 7). The step size of the projected vector keeps decelerating, because the loss driving the gradient descent decreases progressively. The final projection result is not identical to the target image, but very close. After that, the projected image can be given high-level property control along the principal axes, just like the generated images (Fig. 8).

**Fig. 7.** Visualizing the novel image projection process in latent W space.

**Fig. 8.** High-level property control of projected image by principal axes 0: from high-rise modern building to low-rise residential house

# **4 Conclusion**

This paper aims to remove obstacles to applying GAN-based image generation models in generative design workflows. By integrating a series of state-of-the-art methods from computer vision, this research improves the quality of generated building façade images and visualizes the correlation between the generated images and their feature vectors in the latent space. In addition, by analyzing and manipulating the latent space of the trained model, high-level property control is achieved for both generated images and novel images.

However, some details of the generated images are still blurry or mismatched, and the property control is not yet completely disentangled. Both issues are likely due to the insufficient quantity, quality and diversity of the training set images: the present training set contains fewer than 10K images, part of which were enlarged, whereas the original StyleGAN2 research used around 200K full high-resolution training images. Better performance may be achieved with a larger training set and longer training time.

**Acknowledgement.** The author would like to express appreciation to Professor Claudia Pasquero for her patient teaching, and to Hao Zheng for his kind help. This research is supported by The Project to Improve the Academic Ability of Junior Faculty in the Higher Education of Guangxi (Grant No. 2021KY1159), and the Innovation and Research Project of Professional Education in Guangxi (Grant No. GXGZJG2019A014).

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Punch Card Patterns Designed with GAN**

Virginia Ellyn Melnyk(B)

Tongji University, Clemson University, 3-156 Lee Hall, Clemson, SC 29634, USA vmelnyk@clemson.edu

**Abstract.** Knitting punch cards codify different stitch patterns into binary patterns, telling the machine when to change color or to generate different stitch types. This research utilizes Neural Networks (NN) and image-based Generative Adversarial Networks (GAN), with an image database of knitting punch cards, to generate new punch card designs. The hypothesis is that artificial intelligence will learn the basic underlying structures of the punch cards and the pattern makeup that is inherent across patterns of different styles and cultures. Different neural networks were utilized throughout the research, such as Neural Style Transfer (NST), AdaIN Style Transfers, and StyleGAN2. The results from these explorations offer different insights into pattern design and various outcomes of the different neural networks. Ultimately, to physically test these punch card designs, the patterns were knit on a domestic knitting machine, resulting in novel fabrication and design techniques that are both digital and craft-based.

**Keywords:** Artificial intelligence · Textiles · Patterns · Human machine collaboration · Craft

# **1 Introduction**

Visual patterns are all around us in nature, mathematics, and textiles. These patterns are made of repetitive shapes and geometries. Patterns have often been associated with textiles specifically, as many pattern designs emerged based on the structure of weaving and knitting (Stewart 2015). Knitting uses a single yarn looped around itself in rows to create a textile. Multiple colored yarns are knit together to make decorative patterns. Similar to the Jacquard loom for weaving, knitting machines use punch cards as a basic binary pattern telling the machine to knit either color "A" or "B". Most domestic punchcard knitting machines come with a set of standard punch cards, and more punch cards can be purchased separately. Images of these punch cards are easily found on the internet, supplying an available data set for this research. See Fig. 1.

Punch cards were initially invented for creating complex weaving patterns on the Jacquard loom. In developing early computers that utilize binary code, Charles Babbage and Ada Lovelace adapted these punch cards to run their Analytical Engine (Essinger 2015). This intertwined history of punch cards, computation, and textiles creates a deep theoretical foundation for continued research into technology and textile design.

Using artificial intelligence to design new punch card knitting patterns combines these old and new textile creation methods. It looks at the heritage of computing as well

**Fig. 1.** Sample portion of punch card data set.

as exploring the future of design with AI, building upon the use of high-tech design tools applied to low-tech fabrication and craft methods.

AI is modeled after the human brain and is successful at understanding patterns in data sets. This research uses a data set of knitting punch card images and hypothesizes that new knitting designs can be generated by neural networks trained on such a database. The new patterns would be representative of a variety of styles, cultures, and histories, and their success is tested by their viability to produce successful knits: since the patterns could exist merely as virtual punch card images, the real test is the physical constraints of the intended materials. Several AI techniques were tested during the research, including Neural Style Transfer, AdaIN style transfer, and StyleGAN2 training. The results are images of new "fake" punch cards, which were then translated into physical punch cards used to fabricate physical test samples. The results begin to reflect on what can be learned from knitting patterns designed with AI. Underlying structures of patterns emerge based on the input dataset, which the human designer curates and thereby biases toward producing workable knit patterns, intertwining our history and future of patterns and the techniques used to produce them. Ultimately, the importance of textiles in computation reaches beyond the textile community, as these results begin to address larger questions of design computation, pattern language, ornamentation, craft, and fabrication.

# **2 Context**

Patterns are repetitive, symmetric, geometric, and balanced; the human brain is attracted to them. Gestalt theory illustrates principles such as the laws of symmetry, figure-ground, similarity, and common fate to describe how our minds understand patterns as a whole before they recognize the specific elements (Koffka 2013). Psychologists are still studying the ways our minds process these patterns. Since neural networks are modeled after how the human brain learns, artificial intelligence should predictably be able to understand the specific rhythms, symmetries, geometries, and spacing that make up these knitting patterns.

**Fig. 2.** Sample of fair isle knit pattern and corresponding punch card.

### **2.1 Patterns in Knitting**

Punch card knitting patterns correspond to a traditional knitting technique called Fair Isle, whose origins are credited to Scotland's Fair Isle, where it remains a popular knitting technique. Fair Isle patterns are recognizable by their basic geometric shapes, small-scale repetition, mirroring, and simple color changes that never use more than two colors per row. See Fig. 2. While one color forms the active stitches, the other colored yarn hangs or floats at the back. In a successful Fair Isle pattern, floats should be no longer than three to five stitches (Pulliam 2004). The switching between active and inactive yarn colors creates pixel-like imagery, as each stitch acts as a pixel of color across the textile design. The use of only two yarns makes this technique ideal for binary codification into punch cards. In contrast to domestic punch card knitting machines, CNC machines and hand knitting allow more complex patterns with more than two colors or multiple stitch types. This use of punch cards is therefore neither entirely low-tech nor high-tech; it was chosen because of the available data set, the historical and theoretical context, the simplicity of binary coding, and easy access to domestic knitting machines.
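
The float-length rule is the main structural constraint on a binary punch card; a small hypothetical check like the following (not part of the cited research) flags rows whose floats would be too long:

```python
def max_run_length(row):
    """Longest run of consecutive identical stitches (0 or 1) in one punch-card row."""
    longest, run = 1, 1
    for prev, cur in zip(row, row[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def is_knittable(card, max_float=5):
    """True if every row keeps its color runs (and hence floats) within the Fair Isle limit."""
    return all(max_run_length(row) <= max_float for row in card)

card = [
    [0, 1, 0, 1, 1, 0, 1, 0],   # short runs: fine
    [1, 0, 0, 0, 0, 0, 0, 1],   # a 6-stitch run: the idle yarn would float too long
]
print(is_knittable(card))        # False
```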

# **3 Computational Textile Design**

Pixels, punch cards, and binary all sound like computer terms, yet they are all used to design and produce textiles. This is the intersection between computation and textile fabrication and design. As computational design methods have developed over the years, so have computational design techniques for textiles, with many examples of computation used to weave, knit, and design patterns. However, there are as yet few examples addressing Fair Isle knit punch card patterns, which creates an opportunity for this research to contribute within the context of computational textiles.

### **3.1 Precedent Examples of Computational Textile Design**

Designers and artists have worked with algorithms and AI for knitting, sewing, and embroidery; these examples show some of the precedents for computational textile design. Genetic algorithms, a metaheuristic precursor to contemporary forms of AI, were used in research on lacemaking pattern design. These algorithms could learn the rule sets required to create a knittable lace pattern: at each generation of designs, the algorithm learned through supervised training which design choices produce a knittable lace design (Ekart 2007).

Neural networks have been used for various textile designs, such as color selection and pattern design for new embroidery samplers. In one precedent, a sentence was input to an entertainment AI that adapted the sentence's content into the color selection and motifs of an embroidery sampler design (Smith 2017). Knitting has also been explored with generative AI through a neural network that generates CNC knitting machine patterns: the network was trained on the structure of several sample knits and their corresponding patterns and generates knitting patterns from images of unknown knit material, resulting in a user-friendly interface called img2prog (Kaspar 2019). Hand knitting patterns are written in a shorthand language referred to as knit-speak; in the Sky-Knit project, a natural language AI was trained on 500 patterns to develop new knit-speak instructions for hand knitting. Using the online community Ravelry.com, these patterns were physically knit by artisans, and the resulting designs were ultimately very strange looking (Shane 2019).

These examples show the development of computation within textiles and design. Eventually, creating actual physical manifestations of the crafted knitwork is essential; currently, many AI designs happen and remain within the computer. Working with textiles allows a design from the computer to be easily fabricated physically and to be tested against the constraints of its materials.

# **3.2 AI Designed Punch Cards**

Domestic knitting machines were a popular craft tool from the 1940s until the 1980s. Unfortunately, machine knitting has since decreased in popularity as a hobby, and many of the companies that sold knitting machines and punch cards no longer produce them. Fair Isle knit patterns have an almost 1:1 relationship with the image of their punch card, making this an easy starting point for AI-designed knit patterns, as one can visually read the potential design in the punch card results. Standard punch cards are 24 dots, or stitches, wide and 60 stitches long. They can be looped in the vertical direction, and patterns are repeated in the horizontal direction to create larger pieces of knit fabric.

There are three main types of Fair Isle patterns: geometric patterns, organic floral patterns, and object-based or image patterns. The database of knitting punch card images was generated by image scraping from Google, and the images were then sorted manually to ensure the best quality for training. See Fig. 1. This sorting could introduce bias, but it also allowed only clear, legible punch card images to be collected, which work best for pattern learning. Google Images may also introduce bias, as the search results reflect what is available on the internet; specific patterns may appear more frequently than others due to trends and popularity. These types of bias could be considered a positive weighting of designs, as they embed some clarity and influence into the data. With this in mind, different deep learning methods were used to design sample punch cards and ultimately to fabricate them as knit designs.

### **3.3 Neural Style Punch Card Designs**

Neural Style Transfer (NST) was developed in 2015 (Gatys 2015). It utilizes a Convolutional Neural Network (CNN) to separate the underlying structure of an image from its style, and it uses only two images to generate a new one. NST has been used to generate AI art, where the style of a well-known artist is applied to another image. For knitting patterns, the goal was to apply the style of one punch card pattern to the underlying structure of another; the expectation was that the results might find a median between the two punch cards and express elements of both designs at once.

In the tests, punch cards with different pattern styles were used to see how organic forms and geometric patterns would combine. When running the neural network, the number of iterations was adjusted to test output quality: a higher number of iterations produced a more refined image, while a lower number resulted in undefined dots and small dots that did not fit the punch card grid. See Fig. 3. Adjusting the image weights then allowed more or less control over the influence of style versus structure. Several versions of the style transfer were run with different knitting patterns. In this process the human-machine collaboration is clear, from adjusting the settings to obtain desirable results to selecting the images used as style and structure inputs. The results shown were those the designer judged most successful at creating a legible punch card while displaying features from both inputs. This is important because the machine generates designs, but ultimately the human and machine collaborate to achieve the best output. The final designs retain the structure of the content punch card while distorting and manipulating the patterns, without any understanding of the cultural significance or meaning behind them. See Fig. 3.
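
These iteration counts and style/content weights correspond to the standard NST optimization loop. A compressed sketch with torchvision's VGG19 features follows; the layer choice, weights and learning rate are illustrative placeholders, not the settings used in this research (real NST typically compares several layers rather than one):

```python
import torch
import torchvision.models as models

vgg = models.vgg19(pretrained=True).features[:21].eval()   # VGG19 feature layers as the backbone
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(f):
    """Gram matrix of a feature map; captures the 'style' statistics."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer(content, style, iterations=300, content_weight=1.0, style_weight=1e4, lr=0.02):
    """Optimize an image to keep the content card's structure and the style card's statistics."""
    x = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(iterations):
        fx, fc, fs = vgg(x), vgg(content), vgg(style)
        loss = (content_weight * torch.nn.functional.mse_loss(fx, fc)
                + style_weight * torch.nn.functional.mse_loss(gram(fx), gram(fs)))
        opt.zero_grad(); loss.backward(); opt.step()
    return x.detach()

# content and style would be (1, 3, H, W) tensors of the two punch-card images;
# more iterations give cleaner dots, and the two weights trade structure against style.
```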

**Fig. 3.** Example result of neural style transfer.

### **3.4 AdaIN Style Transfer Punch Card Designs**

Developed in 2017 as a faster alternative to NST, Adaptive Instance Normalization (AdaIN) uses a single feed-forward neural network to produce similar results (Huang 2017).

This network was used to generate another set of knitting punch card patterns. Like NST, AdaIN uses two image inputs, a style image and a content image. Although it runs much faster than NST, the results were less effective: the dots were more distorted and did not align with the original punch card grid structure. Moreover, AdaIN style transfer offers fewer settings and controls for manipulating the results. The resulting images had an underlying grey shade derived from the input pattern's geometry rather than a discrete dot matrix. See Fig. 4. Additional human manipulation was therefore necessary after the AI production to interpret the design as a usable punch card: the grey shade was removed, leaving minor deviations and subtle shifts from the style image in the pattern. See Fig. 4.

**Fig. 4.** Example results from AdaIN

### **3.5 StyleGAN2 Designed Punch Cards**

StyleGAN2 was released in early 2020 by NVIDIA as an update to StyleGAN, developed in 2018. Generative Adversarial Networks (GANs) consist of two neural networks, one to generate images and one to test them (Karras 2020). StyleGAN2 learns the characteristic features of a data set of images in order to produce new ones. The generator first produces images from a random noise pattern; the discriminator tests them and feeds information back to the generator for correction. With each epoch, the generator gets closer to the desired results until, eventually, the generated images can fool the discriminator into believing they are real.

The results of image scraping from Google were manually sorted into a small data set of about 120 clear-resolution punch cards, curated primarily for legibility rather than pattern content. The data set included various patterns, from floral and geometric designs to object-based designs such as cats and owls. For these first tests, the intention was not to depict one style or type of punch card, but to test the network's ability to understand the underlying structure of the punch cards, such as the Fair Isle constraint that floats be no more than 3–5 stitches. Other design constraints also apply, such as avoiding large areas of a single color for aesthetic reasons; the pattern should also be repeatable, balanced across the card rather than weighted to one side. Furthermore, since the knit is constructed in rows, each row can exist independently, but a successful pattern has vertical and horizontal repetition and geometry.

To generate a more extensive data set from the few quality images collected, the punch cards were broken down into smaller sections. Typical punch cards are 24 dots wide and about 60 dots long; the 24-dot width defines the pattern in the machine, while the varying image lengths would cause issues in training. Cutting the cards into equal-sized images therefore produced a more controlled data set: each image was cropped into several smaller, square-proportioned images of about 24 by 24 dots, with overlaps between them. Mirrored copies were also added, since punch cards do not have a fixed front or back and can be fed into the knitting machine facing any direction. These manipulations multiplied the data set roughly tenfold, yielding a final set of close to 1200 punch card images for training. Since the research focused on the knitting patterns' underlying structures, the overall pattern was less important than the more local relationships of spacing and the localized dot matrix. The 24-dot width was kept consistent so that the results could later be recombined into longer patterns closer to the original proportions.
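
A sketch of this augmentation, assuming each scanned card is a binary numpy array 24 dots wide; the stride and mirroring choices below are illustrative:

```python
import numpy as np

def augment_card(card, size=24, stride=12):
    """Cut a 24-dot-wide punch card into overlapping 24x24 tiles and add mirrored copies."""
    tiles = []
    for top in range(0, card.shape[0] - size + 1, stride):
        tile = card[top:top + size, :size]
        tiles.append(tile)
        tiles.append(np.fliplr(tile))    # horizontal mirror
        tiles.append(np.flipud(tile))    # vertical mirror (cards have no fixed front or back)
    return tiles

card = (np.random.rand(60, 24) > 0.7).astype(int)    # placeholder for one scanned 24 x 60 card
print(len(augment_card(card)))                        # each card yields many 24 x 24 training tiles
```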

**Fig. 5.** Results from StyleGAN

The data set was used to continue training (transfer learning) from a StyleGAN2 base model that had initially been trained on bird illustrations. At around 1500 epochs, the generated images began to look like new punch card designs. Beyond that point the model appeared to suffer mode collapse: the generated punch cards became self-similar and the individual dot matrix was lost, most likely because the original data set was itself too small and self-similar; this could be investigated further. At 1500 epochs, a set of 50 successful sample images was downloaded. The success of these images was judged by the human user on whether they looked like punch cards, no longer referenced the original bird training set, and were distinct enough to give diverse results. Since these images were in the square format of the input images, three images of similar aesthetic quality were selected and combined vertically to produce a punch card of more typical proportions. The generated images varied widely, from dense dot fields to relatively sparse ones. Some patterns seemed very random, while others had clear underlying diagonal patterns embedded in them. Although they appeared random and stochastic at a glance, some underlying structure was revealed upon closer inspection: checkered patterns, diagonal striping and underlying vertical designs emerged, smaller repeated structures that can be seen in many of the input patterns from the data set. See Fig. 5.

# **3.6 Physical Results**

After the designs were digitally generated, physical punch cards were made. The image results from each method were not clean enough to use directly as punch cards and needed processing. Grasshopper for Rhino was used to trace the large, clear dots in the images into vector linework, which was then organized onto the grid structure by moving each circle to the closest grid point. See Fig. 6. The patterns were laser-cut out of thick Mylar to make usable punch cards, which were then knit on a Brother KH836 domestic punch card knitting machine with a standard 4.5 mm gauge. Since the punch card pattern is only 24 stitches wide, it would produce a pattern only about four inches wide; the pattern was therefore set to repeat once in width, creating an eight-inch by eight-inch test swatch that knits once vertically through the pattern.
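
The grid-snapping step can be sketched outside Grasshopper as well (plain numpy, with hypothetical dot centers in grid units): each traced dot center simply moves to the nearest point of the 24-wide punch-card grid.

```python
import numpy as np

def snap_to_grid(centers, spacing=1.0):
    """Move each traced dot center (x, y) to the nearest punch-card grid point."""
    centers = np.asarray(centers, dtype=float)
    return np.round(centers / spacing) * spacing

traced = [(3.2, 7.8), (10.9, 7.1), (11.4, 8.6)]    # hypothetical traced dot centers
print(snap_to_grid(traced))                         # [[ 3.  8.] [11.  7.] [11.  9.]]
```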

One of each of the designs was tested using two different colors of yarn to make the pattern visually and texturally apparent. Physically knitting the patterns gave a better understanding of their successes and failures as Fair Isle punch cards, since they could also be tested against the material constraints of the different yarn types.

# **3.7 Neural Style Transfer Knit**

The NST pattern tested had an evident pattern with large patches of each color. When knitted, it showed a noticeable vertical structure with some organic-seeming variation. Although the pattern appeared intriguing and exciting as a punch card design, it was less successful when knit: the horizontal spacing and gaps produced long floats, which are undesirable in Fair Isle knits because the extra yarn hanging at the back weakens the structure and snags on things. This mainly occurred when the pattern was knitted double-wide; perhaps the network could not anticipate how the pattern would be repeated and did not consider this edge condition. The structure is recognizably vertical, and vertical patterns are usually successful because they do not create long floats, but the vertical proportions of this design may have been too large.

On the other hand, there is evident mirroring in the design, a typical feature of Fair Isle patterns. Finally, the repeat is successful: there is no clear seam line, and the pattern is balanced from left to right, with no noticeable start or end. See Fig. 7.

**Fig. 6.** Translation of AdaIN training to knit results

#### **3.8 AdaIN Style Transfer Knit**

The AdaIN style transfer design ultimately resulted in a more random pattern than the NST. The knitting was more successful because the structure was at a smaller scale and there were no long floats. There was still an underlying diagonal, argyle-like style, although it was disrupted by some non-repetitive structure. When knit and repeated, the pattern showed some shifts at the repeat, so it was not completely seamless, but the edge was not very noticeable in the pattern. See Fig. 7.

**Fig. 7.** Results of NST, AdaIN, and StyleGAN2

#### **3.9 StyleGAN2 Knit**

The StyleGAN2 training produced varied outputs, from very dense dots to very sparse dots. The sparse dot patterns proved to be an issue, as some rows had only one or two color changes, resulting in very undesirable long floats.

The denser StyleGAN2-generated patterns made more successful punch cards, as they had adequate spacing for short floats, all under six stitches. They also had a noticeably clear diagonal structure disrupted by occasional random stitches, possibly because the data set contained many diagonal patterns. However, the structure became less apparent when knitted, because the diagonals were only a single stitch wide; the knit also compresses the pattern, so the result is not as long as the input punch card. At this scale the yarn structure made the diagonals read less as clear lines and more as an overall dot hatch or fill between the two colors. Because the structure becomes nearly invisible when knit, it is difficult to tell that the pattern repeats more than once horizontally, which can be considered a success. The pattern ultimately has a particular movement to it, resonating between the diagonal and checkered designs.

# **4 Conclusion**

Each of the tests developed unique patterns that never existed before. The results varied in success, yet there were clear underlying structures that each training method could understand and replicate, and the different networks consistently produced easily repeatable patterns. The neural networks learned the patterns' underlying structures, resulting in noticeable styles drawn from the existing dataset. These underlying structures create the visual appeal essential to the knit material's tectonics and to the principles of patterns defined in gestalt theory. Ultimately, each of the neural networks had different strengths and weaknesses.

The human and machine collaborate back and forth through the collection and curation of the data images. The human also controls the weighting of the input data and the number of epochs the networks run. Ultimately, the knits are produced using domestic craft techniques, resulting in unique fabrication methods that integrate high-tech and low-tech processes.

For further development of this research, there are opportunities for more control over the data, such as inputting only geometric patterns or testing the knits as tuck or lace patterns rather than Fair Isle. These patterns also have further potential in fashion, décor, or architecture. The new patterns ultimately combine the structures, and mix the cultural meanings behind existing patterns, into something new and computationally designed, reflecting and connecting our technological past with the design potential of contemporary AI and technology.

# **References**


Koffka, K.: Principles of Gestalt Psychology. Taylor & Francis, Oxfordshire (2013)


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Bio-Tile: An Intelligent Hybrid-Infrastructure**

Sara Pezeshk(B)

Florida International University, Miami, USA spezeshk@fiu.edu

**Abstract.** Bio-tile is a multipurpose artifact designed for protecting the coastline from erosion while creating a landscape element and an architectural experience for visitors. Bio-tile performs as a mitigation strategy to slow down erosion while promoting biodiversity. This paper describes the methodology used to develop the bio-tile as the nexus between digital and environmental for resolving coastline challenges through material tectonics. A non-linear algorithm and nature's inherent code are used to develop the Bio-tile, a nature-based hybrid infrastructure. This approach aims to generate a performance-oriented design by using emergence theory to construct shoreline elements adaptive to climatic conditions.

**Keywords:** Hybrid infrastructure · Formation · Materialization · Construction · Biomimicry · Promoting biodiversity · Performance-oriented design

# **1 Introduction**

Coastlines and marine life are under significant climatic pressure due to global climate change and sea-level rise. Erosion and flooding are increasingly destroying marine life in coastal areas, particularly in some coastal habitats such as mangroves and seagrass beds, which remain essential breeding grounds and potential CO2 capture zones. Some of these concerns are "loss of habitat for numerous marine species and wading birds, further erosion of the property and adjacent properties, water quality degradation, and the interruption of natural shoreline processes."1

In response to coastal problems caused by environmental distress, creating "hard infrastructure" is a common and effective technique: a concrete or steel barrier is constructed along the shoreline to protect against storm surge and wave attack. However, this strategy causes tremendous harm to the environment, increasing the vulnerability of coastal areas to hurricanes and damaging coastal habitat.

As a result, eco-engineers and designers have shifted their efforts to projects that have a minimal impact on the ecosystem while contributing to environmental quality and human experience. One of the more sustainable approaches to this global distress is a nature-based technique for protecting the shoreline. This technique, called living or natural (green) infrastructure, is a common management approach because it is cost-effective, preserves biodiversity, and, most importantly, is resilient. Nevertheless, this

<sup>1</sup> Florida Department of Environmental Protection, n.d., para. 4. [2].


**Fig. 1.** Morphological design procedure

approach alone is not enough to protect our shoreline, given the high rate of extreme weather events such as storms, hurricanes, and storm surge.

A more innovative approach is a combination of both built (gray) and natural systems (green), called hybrid infrastructure, which is an alternative approach to coastal protection and resilience.2 This method is more adaptive and responsive. According to Weinstock, "the emerging architecture that relates pattern and process, form and behavior, with spatial and cultural parameters, has a symbiotic relationship with the natural world."3

In this project, a performance-based material system is implemented in combination with restored natural green infrastructure, including salt marsh and mangrove. The project's ambition is to prevent or minimize erosion, maximize biodiversity, and create a healthier environment along the shoreline by designing tiles with bio-enhanced material and surface treatments that host marine organisms (Fig. 1). Process models, simulations, and design experimentation that begin with the relationship between functionality and morphology can be used to investigate new building concepts and geometries.4 Bio-tile is a multipurpose living shoreline, designed using a responsive material system

<sup>2</sup> Sutton-Grier et al. 2015 [5].

<sup>3</sup> Weinstock 27–33 [12].

<sup>4</sup> Ibid. [12].

that aims to protect and maintain the natural environment's health and welfare against anthropogenic harm.

The project focuses primarily on the intelligence of the Bio-tiles' material system, their configuration, interconnectivity, and evolutionary fabrication process. The performance-based hybrid infrastructure is generated through a multi-level (micro, macro, and nano) simulation process, and four tile typologies emerge as a result of this process. All four Bio-tile typologies (solid tile, mangrove pods, rock pool, and seagrass blanket) are responsive to climatic pressures, and each tile has its own performance behavior toward the environment. At the same time, all of the tiles promote biodiversity by integrating a material system consisting of material, geometry, and texture deformation (see Fig. 2). Both the Bio-tile configuration and the fabrication assembly incorporate self-organization operations within the embedded algorithm, resulting in an optimized arrangement in a specific new territory. According to De Landa, "it refers to the integration of a collection of elements into an assemblage that is more than the sum of its parts, that is, one that displays global properties not possessed by its components."5

**Fig. 2.** Rhizomic connectivity

Simultaneously, the multi-performative resilience grounded in the landscape uses a rhizomatic model in which fluid networks (roots and water flow) connect all the elements from the coastline to the sea. Rhizomes metaphorically smooth out space and cut through the boundaries imposed by hierarchical and orderly vertical lines. Deleuze explained, "Smooth and striated space exist only in mixture: smooth space is constantly being translated, transversed into a striated space; striated space is constantly being reversed, returned to a smooth space."6

# **2 Design Methodology**

Climate change is inevitable; however, the coastal area is witnessing the first impacts of marine biodiversity loss and the degradation of the ocean ecosystem and of its ability to respond to this transition.7 Architecture, as a material practice, can demonstrate an alternative solution to this crisis. As Hensel explains, "materials make up our built environment, and their interaction with the dynamics of the environment they are embedded within results in the specific condition we live in" (Fig. 3).8

<sup>5</sup> DeLanda 20 [20].

<sup>6</sup> Deleuze and Guattari 316 [16].

<sup>7</sup> http://ocean-climate.org [1].

<sup>8</sup> Hensel et al. [6].

### **2.1 Inherent Code of Nature**

**Fig. 3.** Bio-tiles configuration rhizomatic connection

According to De Landa, biomimetics is a subfield of materials science that studies biological creatures to extract design concepts that can be applied in a manufacturing setting; he also notes that the intention is not to recreate a material that already exists in nature.<sup>9</sup> In this project, the formation and configuration process constitutes an emergent action, and the whole is generated through an equilibrium process. The procedure is emergent not only through the performance behavior and geometrical configuration of the elements' relationships at the local scale, but also through those relationships at the global scale. According to De Wolf and Holvoet, emergent behavior in a complex system arises from simple interactions among local parts, when coherent emergents at the macro level dynamically arise from the interactions between the parts at the micro level. As Otto points out, "the geometrically exact forms are rare in nature,"10 and as a result we can only recognize the system behind them by comprehending the phenomenon that causes them.<sup>11</sup>

During the fabrication process, the principle of form-finding is used to construct and configure the Bio-tile formation. Here, material systems manifest self-organization in processes that occur in a far-from-equilibrium regime.12 Far-from-equilibrium processes challenge the foundations of condensed matter and materials physics.

<sup>9</sup> De Landa [12].

<sup>10</sup> Otto and Burkhardt [17].

<sup>11</sup> Ibid.

<sup>12</sup> De Landa [12].

These materials have structural properties that indicate that they are liquids under equilibrium conditions, but they can act like solids (Fig. 4).<sup>13</sup>

### **2.2 Rhizomatic Occupied Territory**

**Fig. 4.** Rhizomic layering at multi-level

Several ways can be used to connect occupied points, lines, surfaces, and spaces. The surfaces are occupied by living organisms that want or need to interact with one another to live and survive.<sup>14</sup> In this project, the process of occupying the territory at the macro level consists of connecting the elements at the micro level through a rhizomatic arrangement made up of many physical lines and invisible links between water flow, mangrove, saltmarsh, and seagrass roots. Deleuze claims that "there are no points or positions in a rhizome, such as those found in a structure, tree, or root. There are only lines."<sup>15</sup> Rhizomes connect the space and cut through boundaries imposed by vertical lines of hierarchies and order. The rhizome achieves the sensation of "becoming" and creates a correspondence between the self and the other.<sup>16</sup> Rhizomic connectivity also allows for the flow of energy, which eventually leads to the flow of materials along its paths. Weinstock argues, "the topography of the earth's surface emerges from the interaction of tectonic force that acts on the land from below and the weathering and erosional force that act on it from above."<sup>17</sup> The circulation of energy and material, such as sediment, nutrition, or run-off, follows the shortest path to the sea. Otto refers to this as an "invisible path" and states that "transport paths connect the occupied territories. Neither the occupation nor the transport paths have to be material. Often there are no traces, or only temporary ones."<sup>18</sup>

Each Bio-tile module is designated as a space for organisms, plants, or even people congregating together. Even though the final space is referred to as a "space of place"


<sup>13</sup> Jaeger and Liu [15].

<sup>14</sup> Otto and Burkhardt [17].

<sup>15</sup> Deleuze and Guattari 8 [16].

<sup>16</sup> Leach 90 [10].

it becomes intelligent due to optimal interaction and module combination. The entire system acts and behaves as though it were a set of living organisms wishing to interact with one another. Consequently, "continental flows of energy and material are likely to intensify as intelligent inhabited infrastructures and ecological service systems that unite rather than divide come on-line."<sup>19</sup>

# **3 Material-System and the Increase of Biodiversity**

Urban sprawl is widely considered to have one of the most significant impacts on habitat loss and, ultimately, extinction at local and regional scales. Furthermore, biodiversity loss and the depopulation of marine fauna and flora are caused by high water temperatures (which cause bleaching), poor water quality, overfishing, deforestation of mangroves, and erosion. The interaction of formalization and materialization processes centered on material and environment interaction will affect architecture and our human environment by providing a performative setting for human inhabitation.<sup>20</sup> The relationship between the material system and the environment is a crucial concern in the present climate change context. Any decision we make as architects is critical in avoiding current challenges for future generations. The goal here is to create a material system that develops from interactions between material properties, environmental stimuli, and structural forces (Fig. 5).

**Fig. 5.** Morphological design procedure<sup>21</sup>

<sup>19</sup> Ibid p. 268.

<sup>20</sup> Hensel et al. 35–38.

<sup>21</sup> Based on O'Shaughnessy et al. and https://wgnhs.wisc.edu/wisconsin-geology/fossils-of-wisconsin/coral-gallery/corals/.

### **3.1 Ecological Interventions and Surface Manipulation**

Architects use the mathematics of emergence, which describes our complex natural systems, to create complex forms and effects, or intelligent materials and processes, for the innovative design of active structures and responsive environments.<sup>22</sup> Consequently, in this project, I have explored the possibility of creating a surface deformation inspired by the pattern of fossiliferous limestone, a coral fossil found in the local limestone, to create the proposed morphological intervention (Fig. 5). As Zizek describes, the lesson of ecology is that we can extract the rhythms of patterns that ultimately reference order and stability.<sup>23</sup>

According to studies, increasing texture and surface modulation would increase the abundance of intertidal flora and fauna. In this project, the surface deformation size and morphology are based on a research paper that compares the effectiveness of common eco-engineering approaches and the ecological consequences of adding microhabitats to urban facilities during construction or retrofitting, using a quantitative meta-analysis and a qualitative review of 109 studies. The outcomes are catalogues and tables that represent the effect of various interventions, such as texture, crevices, pits, subtidal holes, small and high elevations, and soft structures, on the abundance of habitat-forming taxa (barnacles, bivalves, branching coralline, canopy algae, and coral).<sup>24</sup> During the formation process, a textured structure is used to generate the desired surface texture pattern on the concrete. This pattern is based on a code extracted from nature and maximizes porosity, crevices, and holes to create microhabitats for marine wildlife. Furthermore, the texture morphology provides self-shading in the Bio-tile, which decreases surface temperatures.

### **3.2 Material Selection**

Given the scarcity of raw materials and the emissions associated with extraction, manufacturing, and transportation, we as designers must devote more time to seeking out how to use our resources more intelligently so as to lower our impact on environmental quality. As Hensel argues, why aren't all materials considered intelligent, given that none are entirely inert in a dynamic environment? Why hasn't materials' inherent responsiveness been recognized and exploited?<sup>25</sup>

This project aims to restore marine biodiversity to the shoreline by selecting the appropriate material for the hybrid infrastructure. As a potential material, bio-enhanced concrete is a form of eco-concrete that minimizes the effect on the environment during its production. Today's "neo-concrete" era can provide the opportunity to process the formation more intelligently. Concrete fascinates like no other material because it can be forced into any conceivable mold in its liquid state. The concept of new materialism, Leach claims, means "that we can open up an inquiry into the non-linear logic and morphogenetic tendencies in the matter and into the capacity of matter to self-organize and play an active role in its formation."<sup>26</sup> Bio-enhanced concrete minimizes the effect


<sup>22</sup> Weinstock [12].

<sup>23</sup> Zizek [8].

**Fig. 6.** Material formation of the bio-tile

on the environment during its construction. The enhancement challenge is to reduce the proportion of cement while preserving flowability, processing time, durability, and consistency, thus reducing greenhouse gases by 30–70 percent.<sup>27</sup> Also, "Admixtures such as slag sand and pulverized limestone are used to reduce the percentage of Portland cement."<sup>28</sup> Additionally, reinforcements such as fiberglass or carbon fiber have been added to compensate for some of the concrete's limitations, such as corrosion from saltwater, poor tensile strength, and weight. Carbon sequestration is a viable option for reducing pollution. As a low-cost mineral, olivine is also an excellent candidate for use as a concrete additive, since it is widely available and can permanently dispose of CO2 in an environmentally sustainable and geologically stable way.<sup>29</sup> The textured bio-enhanced concrete intervention invites microhabitats. At the vertical level and at the base, the sea mattress provides more habitat for marine wildlife;<sup>30</sup> moreover, it ultimately minimizes the ecological footprint (Fig. 6).

# **4 Evolutionary Process of Making and Material Formation**

The form generation methodology in this project is based on a bottom-up approach. According to Leach, "The difference, then, lies in the emphasis on form-finding over form-making, on bottom-up over top-down processes," and on formation rather than form; "formation" itself must in turn be recognized as linked to "information" and "performance".<sup>31</sup> The assembly technique used in this case is site-specific, and since it is not predetermined, it allows for flexibility and shape variation (Fig. 7).

The assembly and configuration of the tiles for construction are based on "random occupations," which at first glance seem to follow no connecting concept. "However, there tend to be no occupation processes without concepts of regulation," according to Otto, but they are difficult to define.<sup>32</sup> The strategies of fabrication assembly are simulated through an evolutionary and optimization engine that produces multi-generation configurations. This project's design approach addresses the intersection of digital and environmental

<sup>27</sup> Knaack et al. [19].

<sup>28</sup> Ibid.

<sup>29</sup> Béarat et al. 4803 [20].

<sup>30</sup> Perkol-Finkel, and Sella [11].

<sup>31</sup> Leach 21[18].

<sup>32</sup> Otto and Burkhardt [17].

**Fig. 7.** Evolutionary process using the Wallacei plugin

problems through informed material tectonics and their connectivity within and across the site. As Picon claims, "It is about how humans are inextricably linked to this dynamic world, and about how materiality, specifically the materiality of architecture, mediates their relationship with it."<sup>33</sup>

The fabrication assembly strategies generate multi-generation configurations using an evolutionary and optimization engine. The close-packed Voronoi formation is generated against a set of specified fitness objectives. The objectives identified in this evolutionary script consist of two critical criteria: 1) the size of the Bio-tile within the 3' by 4' box does not exceed 450 sq. ft, and 2) the height differentiation is maximized so as to maximize the shadows cast on neighboring tiles. We used Wallacei (a Rhino plugin) as an evolutionary solver to find the optimal solution and potential configurations (Fig. 8).
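
The scoring logic behind such a multi-objective search can be sketched in a few lines of code. The Python fragment below is only an illustration of how the two stated criteria (a footprint limit per tile and height differentiation between neighbors) might be evaluated and evolved; it is not the Wallacei definition used in the project, and every name and numeric value in it (tile count, mutation rates, the scalarized ranking) is a hypothetical placeholder.

```python
# Illustrative sketch only: score and evolve candidate tile layouts against
# (1) a per-tile footprint limit and (2) height differentiation between neighbors.
# All parameters are invented placeholders, not values from the project.

import random

MAX_AREA = 450.0        # footprint limit per tile (criterion 1; units as in the text)
TILES_PER_LAYOUT = 12   # number of tiles in one candidate configuration (assumed)


def random_layout():
    """One candidate: a list of (area, height) pairs, randomly initialised."""
    return [(random.uniform(200.0, 600.0), random.uniform(0.5, 3.0))
            for _ in range(TILES_PER_LAYOUT)]


def area_penalty(layout):
    """Objective 1 (minimise): total overshoot beyond the allowed footprint."""
    return sum(max(0.0, area - MAX_AREA) for area, _ in layout)


def height_differentiation(layout):
    """Objective 2 (maximise): mean height difference between adjacent tiles,
    used here as a crude proxy for shadow cast on neighbours."""
    heights = [h for _, h in layout]
    return sum(abs(a - b) for a, b in zip(heights, heights[1:])) / (len(heights) - 1)


def mutate(layout, sigma_area=25.0, sigma_h=0.2):
    """Perturb one candidate's areas and heights slightly."""
    return [(max(1.0, a + random.gauss(0, sigma_area)), max(0.1, h + random.gauss(0, sigma_h)))
            for a, h in layout]


def evolve(generations=50, population=40, keep=10):
    """A toy (mu + lambda)-style loop: keep the best candidates and mutate them.
    A real solver such as Wallacei would keep a Pareto front instead of scalarising."""
    pop = [random_layout() for _ in range(population)]
    rank = lambda l: area_penalty(l) - 100.0 * height_differentiation(l)
    for _ in range(generations):
        pop.sort(key=rank)
        parents = pop[:keep]
        pop = parents + [mutate(random.choice(parents)) for _ in range(population - keep)]
    pop.sort(key=rank)
    return pop[0]


if __name__ == "__main__":
    best = evolve()
    print("area penalty:", round(area_penalty(best), 1),
          "height differentiation:", round(height_differentiation(best), 2))
```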

**Fig. 8.** Form generation technique

As Hensel portrays, "materials enter production and manufacturing processes as raw substances. These strategies are defined by how a material's desired performance is becoming increasingly specific through particular treatment that affects the material."<sup>34</sup> The Bio-tiles are formed in multiuse, flexible molds using malleable plastic

<sup>33</sup> Picon [21].

<sup>34</sup> Hensel et al. 35–38 [6].

sheets that are bent into a circular shape formed by a predefined fixed Voronoi boundary framework; this approach is inspired by the minimal-path soap-bubble experiments and "the flexible territory" carried out by Frei Otto at the Institut für leichte Flächentragwerke. Here the process of formation is a bottom-up approach and is site-specific; since the framework method is not fixed or predetermined, it allows for flexibility and shape variation.

# **5 Broader Consequences**

Multiple fields and disciplines are needed, all of which must be organized concurrently to create a responsive infrastructure. An understanding of the area's ecosystem, the local topography, the surface-water, groundwater, and coastal-water hydrology, and the geological information of the landscape are all considered concurrently as individual units of the complex. The whole complex is situated in the visual space of the structure on a course of "becoming." As Leach explains, ""Becoming" [is] clearly an interactive process… Becoming always involves reciprocity, a mutual interaction."<sup>35</sup>

Now we can ask ourselves whether the rhizome could be a path, especially in relation to transmission: the appropriation and multiplication of our projects by others who participate in them. It is not just networking; it is a "living," open networking. Our architecture builds the conditions of possibility for this rhizome of projects; it is to "make rhizome," to go to others, in a perspective of alliance and of constructing a collective "becoming," a territoriality to share. Because we need to save our planet, which belongs to every socio-spatial entity and to every living being.

**Acknowledgement.** This material is based upon work supported by the National Science Foundation under Grant No. HRD-1547798. This NSF grant was awarded to Florida International University as part of the Centers of Research Excellence in Science and Technology (CREST) Program. This project is also part of the Doctorate Project of the DDes Program at Florida International University's Department of Architecture.

# **References**


<sup>35</sup> Leach 83 [10].


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Bio-Design Intelligence**

Ana Zimbarg(B)

Florida International University, 11200 SW 8th Street, Miami, FL 33199, USA azimb003@fiu.edu

**Abstract.** Architecture has a substantial influence worldwide as it shapes our cities, and it is made to last. Urban areas are also responsible for 70% of the world's carbon emissions. Consequently, architects are responsible for minimising the destructive effects of construction on the environment. How can biological intelligence be inserted in architecture as a possibility to increase environmental performance? Bio-design goes further than biology-inspired approaches. Bio-design refers to incorporating living organisms as an essential component of a system, changing the boundaries between the natural and built environment. It contains living and machine intelligence, whether embedded in the design process or in the building itself. This paper seeks to give an overview of bio-design and how it can be seen as a strategy for thinking of new research pathways.

**Keywords:** Bio-design · Hybrid architecture · Biodigital

# **1 Introduction**

Architecture has an extensive presence in the world. It is made to last a long time and impacts various scales in the urban context as well as the natural environment. Since 2007, more than 50% of the world's population has been living in cities, and this figure is expected to rise to 60% by 2030. Urban areas are also responsible for 70% of the world's carbon emissions (Cities – United Nations Sustainable Development 2021). Consequently, architects are responsible for minimising the destructive effects of construction on the environment.

Current technology and easy access to other disciplines such as biology and computation can be of great value in developing an architecture that can simultaneously benefit humans and the environment. Cities are designed following a linear organisation, meaning that resources come in from one side and waste leaves the system on the other side; this model adds strain to the biosphere, pushing some ecosystems outside of a healthy balance. However, the planet does not function following a linear model; it works as a nonlinear, complex dynamic system with millions of interactions that support life (Poletto 2018). Thus, using design embedded with microorganisms can play a part in creating a circular urban metabolism.

It is necessary to clarify that the work discussed in this paper is not about biomimetics but about embedding bio-intelligence within architecture to expand the space of solutions to given problems. In essence, biomimetics uses biological entities and processes to shape design and the production of materials, structures, and systems. Although mimicry


may be a productive methodology, it implies incorporating nature into existing methods (Rigobello 2019). At one point in the process, biomimetics has an abstraction and detachment element from the biological model to achieve its goals (Primrose 2020).

Therefore, how can biological intelligence be inserted in architecture as a possibility to increase environmental performance? Bio-design goes further than biology-inspired approaches; bio-design refers to incorporating living organisms as an essential component of a system, modifying the boundaries between the natural and built environment (Myers 2018). Additionally, bio-design builds on computational and synthetic biology and is applied through material-driven design, interactive systems, or a combination of both (Gough et al. 2020). This paper seeks to give an overview of how biology can be incorporated into design through a brief analysis of recent projects and research, and of how bio-design can play a part in strengthening relationships between humans and the natural world.

# **2 Natural Intelligence**

The second half of the nineteenth century was marked by great biological discoveries such as Charles Darwin's and Alfred Wallace's theories of evolution, signifying an enormous change in paradigm and in the understanding of the planet's dynamics. These discoveries influenced other disciplines. The impact of biology in architecture appears in the form of biomorphism (Melkozernov and Sorensen 2020), as seen in Antoni Gaudí's work and Frei Otto's light structures.

Frei Otto's theoretical and experimental investigation of multiple objects of different natures has had an extensive impact on the fields of architecture, engineering, and design. Otto's work is referenced in many developments in current building design and scientific research, including architecture, engineering, and biology. His idea of not using bionics but instead looking at and understanding nature itself for technical or architectural applications (Burkhardt 2016; Aldinger 2016) plays an integral part in combining other disciplines in the making of architecture. Frei Otto's ability to use physical models as design tools for form-finding contributed immensely to embedding natural laws into architecture. In a way, Frei Otto inserted nature in his work; the lightweight tensile structures have, within their materiality, the bubble properties he used to experiment with surface tension (Fig. 1).

**Fig. 1. Left:** Frei Otto experiment with soap bubble (Zexin and Mei 2017) **Centre and Right:** La Sagrada Familia, Barcelona, Spain (images by author 2018)

The idea of bringing nature into design is now possible with technological advances. Claudia Pasquero proposes that architects should recognise that architecture, instead of taking from other disciplines, should provide to them, contributing to their realisation (Pasquero and Poletto 2020). Pasquero's work uses architecture as a meta-language to communicate with the non-human by developing systems embedded with biomaterial. Additionally, Marco Poletto points out that our society has established the idea that bacteria and microorganisms are dangerous and should be removed from the urban environment (Poletto 2018). The exclusion of other species from urban spaces is now being questioned, motivated by a frantic race for sustainable solutions to stop environmental change. Although it is unlikely that humankind will find a solution for all the consequences of temperature rise, these problems push and open new research fields.

The current urban setup relies primarily on energy and is consequently responsible for the extreme amount of carbon dioxide emissions into the atmosphere. The fourth U.S. National Climate Assessment (NCA4) has stated that there is strong evidence that these changes are highly likely to result from human activities (Wuebbles et al. 2017). Climate change leads architects and designers to think of solutions to reduce the human impact on the environment and to slow the effects that greenhouse gases have on the biosphere.

Research on algae has been increasing within the architectural community as a means to reduce the consequences of the Anthropocene age. Contemplating embedding living organisms in buildings can often be perceived as a science fiction idea; however, this technology is becoming more accessible and feasible, and there are initiatives from governments and companies. "VIDA\_BIOCAS, a Bio Self-Sufficient City from algae" is a proposal developed by a consortium of companies specialised in energy, in partnership with the Spanish government, that aims to combine the living with the lifeless. Integrating technologies and materials capable of conducting liquids with microalgae, CO2, and nutrients that can be converted through photosynthesis could be a way to reduce the pressure that energy production places on the environment. Such projects can use bioreactors<sup>1</sup> integrated into building surfaces (Pacheco-Torgal et al. 2021). VIDA\_BIOCAS tries to find a system applicable at different scales (from building facades to street furniture).

The significance of such commercial projects is that they illustrate that companies and governments are considering hybrid design<sup>2</sup> as a possible solution, which could encourage designers to develop more sophisticated uses and techniques for integrating natural intelligence in architecture.

<sup>1</sup> Bioreactors are piping systems with water, algae, nutrients, and carbon dioxide inside. These pipes are clear to allow for the algae to effectively capture sunlight to process and convert inorganic matter into sugar. The biomass produced can be valuable to humans (Pacheco-Torgal *et al.* 2021).

<sup>2</sup> Hybrid design here refers to the fusion of technology and living organisms and their relationships as components of a design/architecture system or structure.

### **2.1 BioHybrids**

The application of a hybrid system to construction is relevant, as automation in architecture is growing in popularity, and biological organisms are highly efficient in producing material with limited resources; therefore, investigating biohybrid robotics can be perceived as an emerging trend (Heinrich et al. 2019).

The majority of existing biohybrid construction combines biological organisms with manual manipulation. It is done with mechanical elements, and there is research on how to reduce human interference in the maintenance of such systems and increase automation. Projects such as Claudia Pasquero and Marco Poletto's "BioTechHUT" involve a structure supporting the organisms that live within its interior. There is a symbiotic relationship between the user and the algae: the user nourishes the environment with carbon dioxide, and the symbiosis between bacteria and algae collects these gases to produce oxygen, biomass, and electricity. The researchers developed the system to be integrated within the building's skin; this system produces 1 kg of algae per day, meaning that it can release 10 kWh of energy, which is what an average U.K. home needs to power its systems (Poletto 2018) (Fig. 2).

**Fig. 2.** Industrial microalgae photo-bioreactors, Alga Energy company installations (Provided by Alga Energy in Pacheco-Torgal et al. 2021)

A step further would be taking the biohybrid concept more literally and applying it to produce intelligent material. Material intelligence, in this case, refers not only to the material's performance but also to the intelligence used in the material design process. Developing building components from living material, such as cellulose shaped into membranes produced by bacteria to create intelligent materials like translucent membranes, illustrates this concept.

This partnership between technology and nature touches the realm of creating symbiotic<sup>3</sup> relationships between robots and living organisms in the endeavour to produce architecture. Claudia Colmo's "Restless Labyrinth" is research on producing growing

<sup>3</sup> In biology, symbiosis identifies a continuing relationship between two or more species with different associations: mutualism, where the interaction between species benefits both sides; commensalism, where the interaction benefits only one species but does not injure the other; and parasitism, where one species benefits while the other is harmed (Šijaković and Perić 2018). The most fundamental symbiotic relationship is animals eating plants and animal physiological wastes becoming fertiliser for plants. This relationship between species is most important for

architecture that targets technological breakthroughs such as growing mycelium composites at building scale and manipulating the mycelium network to act like a computationally active material (Rigobello 2019).

Colmo's research contributes to expanding a research field that explores a symbiotic relationship between humans and other species by using biology to produce structures that remediate contaminated sites. The fabrication and growth of these structures have been investigated through experiments aiming to find compostable material compatible with 3D printing. The researcher's findings have proven robust, biocompatible, and symbiotic with the tested organism. The research will increase in scale by implementing lignocellulosic substrates within a 3D printed soil-based composite (Colmo and Ayres 2020) (Fig. 3).

**Fig. 3.** BioTech HUT (photography by NAARO, in Poletto 2017)

"Restless Labyrinth" is successful concerning its material composition as they present themselves as robust, biocompatible, and symbiotic with the introduced organisms. Nevertheless, incorporating living organisms into an artificial system is a slow and challenging process. The project's limitations lie in fabrication. The research demonstrated that there are still many issues to resolve related to scale.

William Myers said that the demand for different design solutions, more integrated with the natural world, is growing and accelerating interdisciplinary collaborations (Myers 2018). The presented projects exemplify many solutions for incorporating biological elements within an architectural structure and creating symbiotic relationships between humans, non-humans, and machines.

Our settlement practices generally do not consider the impact on local ecosystems. Ecosystems and their services are not limited by borders (Sachs 2015, in: Joachim and Gervasi 2020). The design approach used to develop urban organisation ignores existing natural processes and species, indicating that we treat the surrounding environment poorly. As a consequence of human settlement and spread, invasive species naturalise in new areas, causing a negative impact on the original ecosystem (Joachim and Gervasi 2020).

Embedding design with natural intelligence does not mean that living organisms are necessarily part of an architectural system. More importantly, it may be perceived to

most species' survival (Frederick 2015). The symbiotic relationship referred to in this paper is a mutualistic relationship between the parts involved.

include the natural world in the design process instead. Mitchell Joachim and Nicholas Gervasi point out that "our treatment of plants and animals reflects our society's values". Moreover, they reflect human settlement behaviour, where humans choose a specific region to settle where resources can be exploited and converted into economic gains resulting in the decline of the non-human sector.

The concept of smart cities began to emerge about ten years ago, introducing the Internet of Things idea into the urban scale. We now understand better the potential of digital technologies and how artificial intelligence can manage cities. Digital technology has the potential to allow the planning and construction of cities as natural ecosystems (Guallart 2020).

Machine intelligence is an essential component in bio-design, whether it is embedded in the product or in the design process. Introducing life into architectural systems can make use of digital and robotic technology to simulate, predict, and control (through complex mathematical equations) these biological systems (Marcos Cruz 2017). Pasquero and Poletto (founders of EcoLogic Studio) are working on new urban planning strategies – called Deep Green – to contribute to waste management, water conservation, recycling, and energetic circularity. There must be a shift in the perception of urban space to achieve these ambitions, which these two researchers approached by designing the city as a refuge for humans and wildlife. The United Nations Development Programme and EcoLogic Studio are testing artificial intelligence's potential to develop a new green planning interface, using algorithms to analyse high-resolution data on urban landscape and infrastructure to simulate possible sustainable urban development scenarios (Pasquero and Poletto 2020).

By integrating other species into the design process, a connection between humans and wildlife (in the case of Deep Green) is generated through artificial intelligence, demonstrating the wide variety of solutions that can be achieved with bio-design. Such initiatives could generate new relationships with other species resulting in more sustainable and resilient cities.

# **3 Bio-Material Intelligence**

As Mark Weiser said, "The most profound technologies are those that disappear", meaning that they become part of everyday life until they are no longer visible (Pataranutaporn et al. 2020). Human settlement turns other species 'invisible'. Being aware of this invisibility may open paths to integrate and recreate relationships between the human-made and the unseen, resulting in a 'healthier' invisibility.

Biological processes are considered alternatives to traditional technologies for achieving material and energy savings and reducing the carbon footprint. CiTG at the Delft University of Technology in the Netherlands is developing bio-concrete. A specialised bacterium (an *extremophile*<sup>4</sup>) is inserted into the concrete, where the bacteria can thrive and naturally produce limestone. This technique is beneficial for reducing the carbon emissions of concrete production, as the limestone burned in the process to produce calcium oxide releases significant quantities of carbon dioxide into the atmosphere (Myers 2018).

<sup>4</sup> *Extremophiles* are resilient and can survive in harsh conditions (González, Keller and Joachim no date).

Further research in this field found that bacterial spores and calcium lactate can be inserted into the concrete. The spores lie dormant for years until water ingresses into the concrete structure over time. Humidity triggers the bacteria to produce CO2. In concrete's highly alkaline environment, this CO2 combines with calcium ions to form solid calcium carbonate, which can seal cracks up to one millimetre wide, preventing further water damage (Mark Peplow 2020) (Fig. 4).
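
The healing mechanism described above is often summarized in the bio-concrete literature by two steps: bacterial oxidation of the calcium lactate, followed by carbonation of the calcium hydroxide in the cement matrix by the CO2 released. The equations below are given as a commonly cited summary of that chemistry, stated here as an assumption rather than as a reconstruction of the chapter's sources:

$$\mathrm{CaC_6H_{10}O_6 + 6\,O_2 \longrightarrow CaCO_3 + 5\,CO_2 + 5\,H_2O}$$

$$\mathrm{5\,CO_2 + 5\,Ca(OH)_2 \longrightarrow 5\,CaCO_3 + 5\,H_2O}$$

Both steps deposit calcium carbonate, which is the crack-sealing limestone referred to in the text.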

**Fig. 4.** Configuration of the 3D printed single skin prototype before inoculation and transfer to incubation tank (top), and time-lapse of mycelium colonisation over 35 days (Colmo and Ayres 2020)

Many types of bacteria, with innumerable features, have been documented by biologists. *Vibrio Fischeri* is a bacterium that has bioluminescent properties. Its bioluminescence is caused by *quorum sensing*<sup>5</sup>. Therefore, to show its bioluminescent properties, the bacteria need their population density to be high enough to allow cell interactions to induce the enzymes necessary for the bacteria to glow. *Vibrio Fischeri* populations get their nutrients from a symbiotic relationship with animals. In exchange, the bacteria help the animals to find mates, ward off predators, attract prey, or communicate with other organisms thanks to their bioluminescent properties (González, Keller and Joachim, no date). Eduardo Mayoral González performed a series of tests to check their glowing properties and investigate possibilities of using these bacteria to illuminate natural environments, commercial billboards, signposting, or ambient lighting (Fig. 5).

Incorporating living elements in a material generates another form of intelligent material, where the intelligence is found in the outcomes that designs made with living organisms can have (illumination off the energy grid, or self-repairing materials). This invisibility, which brings other species into closer contact with our built environment, is now considered a possibility for mitigating the negative impacts of human settlement. The issue with using living material is that its maintenance needs much more investigation. Living beings have a life span, are susceptible to illness, and are more sensitive to environmental changes, not to mention the ethical issues related to genetics and synthetic biology. Nevertheless, the

<sup>5</sup> Quorum sensing (QS) is a process of cell-to-cell communication that bacteria use to orchestrate collective behaviors in response to changes in cell population density and species composition of the community (Duddy and Basslerid 2021).

**Fig. 5.** Left: The bacteria and its food source, calcium lactate, are packed into tiny capsules that dissolve when water enters the concrete cracks. Once released, the bacteria consume the calcium lactate, creating limestone, filling in the gaps. Centre: bio-concrete healing process. Right: Bio-concrete healed (Greg Beach 2018)

possibility of inserting living bacteria into the material is creating a different relationship with other species. Bacteria are no longer feared and disposed of, as Marco Poletto mentioned, but are part of daily spaces and contribute to more sustainable solutions (Fig. 6).

**Fig. 6.** Vibrio Fischeri tubes (González, Keller and Joachim no date)

# **4 Conclusions**

Since the beginning of the nineteenth century, the great discoveries made in biology have opened new paths for other disciplines. Architecture has referred to natural laws as inspiration as well as a benchmark for performance. With the advance of genetics and technologies, new research fields have been opened for architects and designers. Through technology, projects can refer to natural laws and use biomimetics as a problem-solving framework.

However, the endless search for solutions to anthropocentric problems leads us down non-anthropocentric paths. Our curiosity about the living world has reached a point where thinking of 'living machines and buildings' is no longer merely conceptual. The projects presented in this paper illustrate a growing interest in adding living complexity to architecture to reduce the stress that the Anthropocene era has caused to the planet and, consequently, to create new relationships with the other living beings that share the urban space.

Merging biological elements with architecture can be thought of at multiple scales and through multiple methods, with the assistance of many different technologies. Although the application of bio-design is ample, nature, as complex and compelling as it is, has its downside, as biological processes may take a long time whereas our development pace is incredibly fast. Traditional techniques follow the fast pace of construction and manufacturing; however, the consequences of human activities are growing in speed, forcing us to think of alternative solutions to adapt to the changing environment. Architects are experiencing a critical time in which to consider nonstandard solutions for climate change and sustainability, as the possibilities are infinite.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **A Study of Bio-Computational Design in Terms of Enhancing Water Absorption by Method of Bionics Within the Architectural Fields**

Gang Mao(B)

The Bartlett School of Architecture, University College London, 22 Gordon Street, London, UK g.mao@ucl.ac.uk

**Abstract.** This essay explores an architectural computational design intended to accept and absorb moisture through geometrical and material conditions and, using design strategies, help deliver this moisture upwards through capillary action to areas of cryptogamic growth, including mosses and smaller ferns, on the surface of architecture. The purpose of this research project is to explore the morphology of general capillary systems based on research into the principle of xylematic structures in trees, thereby creating a range of capillary designs using three types of material: plaster, 3D-printed plastic, and concrete. In addition, computational studies are used to examine various types of computational designs of organic structures, such as columns, driven by physical and environmental conditions such as sunshine, shade, and tides, and by other biological processes, to explore three-dimensional particle-based branching systems that define both structural and water delivery paths.

**Keywords:** Bio-computational design · Bionics · Bio-receptive design · Xylematic structure · Parametric design

# **1 Bio-Receptive and Bio-Colonization**

In ecology, the growth of epiphytes on tree bark is a scene that can commonly be observed practically everywhere. This is an example of what is known as bio-colonisation (Cruz and Beckett 2016). Figure 1 illustrates the most typical examples of this natural phenomenon.

These photographs illustrate different trends of green growth on tree bark, showing the importance of direction and the effects of surrounding micro-climatic conditions (Woodell 1979). The growth never encompasses the entire circumference of the tree and also changes with the season. Moisture is a crucial factor in this. Plants tend to grow on the branches and at the base of the host plant. One possible reason for this is that tree trunks and branches tend to have a greater capacity to hold moisture than the other plant parts (Honda 1971). These two photographs show how important water is for growth. A tree can be considered to consist of two distinct parts, each with a different degree of moisture: the tree itself and the transition zone, which is a bio-receptive part defined by the amount of water.

**Fig. 1.** (Left): Common view in London (Right): A tree colonized by micro-plants (Author)

Water has been identified as the critical defining factor in terms of the subsequent area where growth occurs (Wen et al. 2019); this phenomenon can also be seen in relation to architecture, as shown in Fig. 2, which depicts moss coverage on the vertical surface of a building. The notable element here is the fact that the moss is not growing everywhere. This results from water dripping down the wall, which slightly saturates the rock, providing suitable conditions for plant growth. There is also a transition zone seen on building surfaces, not only from plants to the building but also from nature to architecture.

**Fig. 2.** (Left): Common view in London (Right): A tree colonized by micro-plants (Author)

This transition zone and the relationship between these two elements are of particular interest. In fact, the transition zone can be seen as more of a link between architecture and water, since the water can define the area of vegetative growth (Wen et al. 2019). This development is not only created by nature; human beings also attempt to use this method to control the relationship between nature and buildings.

Taking the city of Suzhou (Fig. 3) as an example, a number of traditional buildings have either directly or indirectly sunk into the water. Historically, this was a common design strategy in traditional Chinese architecture for dealing with the link between water and buildings. Many building materials provide an agreeable environment for the growth of vegetation such as moss and algae. With these kinds of plants growing on the surface, a transition zone can occur.

**Fig. 3.** Garden in Suzhou (Liuhe Travel 2017)

# **2 The Introduction of Transition Zone**

The aforementioned examples show that designers in the past tended to form a natural link between architecture and nature. This link can be defined as a transition zone. In addition, as previously mentioned, the transition zone is a place where building materials allow vegetation to grow easily on their surface. The junction area in architecture should also be mentioned: the absorption of water by the surfaces of buildings subsequently creates an environment conducive to the growth of plants. This means that buildings and water are not isolated from each other, as they have a transition zone between them. The transition zone between the two elements can be regarded as a man-made place (Cruz and Beckett 2016), which provides a habitat for plants to grow in; in other words, this zone makes the building surface bio-receptive (Cruz and Beckett 2016). It is worth paying attention to the transition zone, as it is essential for the benefit of our environment.

In this project, the focus is placed on computational designs that can be used to enhance water absorption (extra water is not useful for the building but can still be absorbed by the building material), which makes the building bio-receptive and in turn encourages plants like moss or algae, which benefit the environment, to grow. Building materials can be designed to deliberately absorb water and encourage chosen vegetation to grow on the surface. Therefore, it is feasible for architecture to be designed in a way that encourages the growth of green walls, also known as vegetated walls, to control heat. The temperature of the ambient air, interior air, and exterior walls of buildings can be effectively reduced through the use of a green façade (Jeffrey W. 2010). This has led to an initial design concept that encourages the absorption of more water to encourage the growth of vegetation.

The project presented here is based on the following two points:


With these points in mind, it is important to gain a basic understanding of how water absorption occurs in nature.

### **3 Water Delivery Path in Nature**

As previously mentioned, it is essential to understand the basic principles of water delivery as they exist in nature. Trees are considered first, as they are generally the tallest existing plants and can obtain water from both the air and the soil. Despite the fact that many trees exceed ten meters in height, water can still be successfully transported from the root system directly to the highest point of the tree (Susman et al. 2011). This water transportation ability of trees provides an initial reference for further research into the phenomenon, so as to fully understand the capacity of trees to move water in such a manner. The main reasons for this, which will be explored in more detail, are osmosis, transpiration, and capillary action. As Susman said in 2011, "… 'pumping activity' originating in the 'life form' for the upward flow of water is not necessary…", and thus other less understood mechanisms will not be considered in this work.

The first factor that influences water absorption is capillary action. This is the most common answer to the question of how water can be transported to the top of trees, and indeed, capillary action is one of the mechanisms that enables this upward flow. However, it is not sufficient in itself. Water channels are also used to transport water across membranes; these channels are believed to be involved in many physiological processes, including water transportation in trees. Cohesion, adhesion, and surface tension combine to create capillary action. While some energy is consumed by this method, it supplies a significant source of moisture for vegetation without a large outlay of such energy (Thomas 2015).
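
The limit of capillary action on its own can be illustrated with the standard capillary-rise (Jurin) relation, quoted here as a textbook formula for context rather than from the cited sources:

$$h = \frac{2\gamma \cos\theta}{\rho g r}$$

where $\gamma \approx 0.073\ \mathrm{N/m}$ is the surface tension of water, $\theta$ the contact angle, $\rho \approx 1000\ \mathrm{kg/m^3}$ the density of water, $g \approx 9.81\ \mathrm{m/s^2}$, and $r$ the vessel radius. For an assumed xylem-scale radius of about $20\ \mu\mathrm{m}$ and $\cos\theta \approx 1$, this gives $h \approx 0.7\ \mathrm{m}$, far short of a tall tree's height, which is consistent with the observation above that capillary action alone is not sufficient.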

There are two types of transport tissue found in plants. Water is transported through a complex tube system formed by hollow dead cells, named xylem. The main function of xylem is to transport water from the roots in damp soil to the crowns of trees. The phloem transports nutrition from the leaves to the rest of the tree. The water-transporting cells found in mature xylem are all dead, which highlights the fact that the transport of water is an essentially passive process with only a minuscule active root pressure component. The function of the 'hairs' on the roots is to absorb water, which can then be transported via the xylem vessels to the crown. A further phenomenon that facilitates this movement of water is transpiration, a process in which water evaporates from the leaves of the tree (Fig. 4).

**Fig. 4.** Vertical section through tree (Chahhla Sadek 2015)

# **4 Computational Design Principle**

Based on the results gathered from research conducted into trees, an initial design method can be created, and computational methods can be used to design construction techniques through use of appropriate technology. Architects use a wide range of different design strategies, but this type of computational design is becoming a common trend in the design field and is described as an efficient way to gather and calculate information (Menges 2012). With the assistance of a computational model, it is possible for architects to forecast the form of alternative design options (Menges and Ahlquist 2011). Turing, who is known for his work in computing but who was also a biologist and mathematician, was one of the most important people in the field of computational generation. In his book, *Morphogen Theory of Phyllotaxis,* he explains the process of using geometrical descriptions of patterns. He also implemented mathematical equations to gain an understanding of the reaction-diffusion effect, and a chemical model to solve the equations for small perturbations.

This current project makes use of the methodology used by Turing in his study of phyllotaxis, effectively tackling the problem in two stages. The first stage is an attempt to analyze the structure of plants; following this, a computational design is created to explain the prototype.

The concept of this project originates from the theory of *Water Transport in Plants as a Catenary Process*, written by Van Den Honert, T., in 1948. In this work the author indicates that the transportation of moisture through roots, xylem, and leaves may be considered a catenary process. Thus, the initial particle-based branching system concept can serve as the biological basis of the design. An important feature of the first stage is the examination of the structure of xylem, based on the vital role it plays in water transportation. The xylem system works on a particle system principle, as it is formed by many dead cells; water is therefore transported to the top by means of differences in density. Simulation of this crucial particle system was conducted during this project through use of the Houdini modelling software.

Xylem vessels are composed of dead cells, and the structure of xylem is a tube system with a range of tissues, as introduced by Turing in his theory. A large number of particles can be connected in order to create such a tube, and several tubes can be combined so as to form a xylem system. The impulse count reflects the number of original particles, and different impulse counts create different forms. This concept can be seen in Fig. 5, which illustrates the two- and three-dimensional results obtained from the use of various quantities of original particles.

**Fig. 5.** (Left): 2D branching system (Author) (Right): 3D branching system (Author)

The two-dimensional form can be used to mimic the results of a xylem system; however, if the particles are modelled in three-dimensional space, the resulting outcome will be different. This is just like emergence, where a large archetype appears through interactions between smaller and simpler entities, causing the larger archetype to exhibit properties that the smaller and simpler entities do not show. Particles are the small, simple entities. When a trail follows a growing particle, the smaller piece disappears and a new piece appears. For example, the first particle is given an upward motion, thereby simulating a natural growth pattern; subsequently, the next generation of particles follows this upward motion. If both are kept in the design, the result is a large piece, and even though the small particle is no longer apparent, it is still there. This is similar to a collective intelligence, using the emergence phenomenon to create a new piece.
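
The particle-trail behaviour described above can also be sketched outside Houdini. The Python fragment below is only a minimal illustration of the idea (particles advected upward with slight lateral noise, each leaving a trail, occasionally spawning a branch); it is not the project's Houdini setup, and every parameter in it is an invented placeholder.

```python
# Minimal sketch of the particle-trail idea: particles move upward with a little
# lateral noise, each leaves a trail of points, and occasionally a particle spawns
# a child, so a branching "tube" pattern emerges from simple local rules.
# All parameters are invented placeholders, not values from the project.

import random

STEPS = 60          # number of growth steps (assumed)
BRANCH_PROB = 0.03  # chance per step that a particle spawns a new branch (assumed)
JITTER = 0.15       # lateral randomness per step (assumed)


def grow(n_seeds=3):
    """Return one trail (a list of (x, y) points) per particle, built from local rules."""
    particles = [{"x": float(i), "y": 0.0, "trail": [(float(i), 0.0)]} for i in range(n_seeds)]
    for _ in range(STEPS):
        new = []
        for p in particles:
            p["x"] += random.uniform(-JITTER, JITTER)   # small lateral wander
            p["y"] += 1.0                               # constant upward motion ("phototropism")
            p["trail"].append((p["x"], p["y"]))
            if random.random() < BRANCH_PROB:           # occasionally split into a new branch
                new.append({"x": p["x"], "y": p["y"], "trail": [(p["x"], p["y"])]})
        particles.extend(new)
    return [p["trail"] for p in particles]


if __name__ == "__main__":
    trails = grow()
    print(len(trails), "trails; longest has", max(len(t) for t in trails), "points")
```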

Once the basic simulation system has been tested by mimicking the xylem form, further research is required to enable the generation of more complicated designs. As the xylem tube system has a horizontal component based on branching systems, research into branching systems is therefore required. Through utilization of Turing's method, this analysis can be simplified and adapted to the investigation of several different kinds of trees in the UK, which in turn leads to an understanding of the common features of such trees. After completing the relevant analysis, the results point to the fact that there are fewer branches on the northern side of most trees in comparison to the southern side. Moreover, moss is most likely to grow on the shady side of a tree. Branching systems can play a significant role in the growth of trees. One key point is the growth method at the fork of branches, which represents the major growth factor. This is because the way in which each branch grows is decided by the position of the fork and the amount by which a twig is separated from the fork. This growth can be mathematically determined using the system developed by Aristid Lindenmayer in 1968.

For the purpose of the simulations conducted as part of this study, Lindenmayer's system, also referred to as the 'L-system', was employed. The L-system is both a parallel rewriting system and a type of formal grammar. It is thus necessary to understand how to use a digital language to rewrite generated strings and translate these into geometric structures.
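
As a minimal illustration of such parallel string rewriting, the short Python sketch below expands a standard bracketed branching rule over several generations. The axiom and rule are textbook examples chosen for illustration only, not the rule set used in the Houdini model (which is not specified in this chapter).

```python
# Minimal L-system rewriter: every symbol is rewritten in parallel once per generation.
# The axiom and rule below are a standard bracketed branching example (illustrative only).

RULES = {"F": "F[+F]F[-F]F"}   # classic branching rule (assumed for illustration)
AXIOM = "F"


def expand(axiom, rules, generations):
    """Apply the rewriting rules to every symbol in parallel, once per generation."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s


if __name__ == "__main__":
    for g in range(4):
        out = expand(AXIOM, RULES, g)
        print(f"generation {g}: {len(out)} symbols")
        # In a geometric interpretation, F = draw a segment, +/- = turn, [ ] = push/pop state.
```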

Basic plant models and natural-looking organic forms are relatively easy to define; however, if the recursion level is increased, the form will slowly grow and become more complex. This recursive principle is shown below in Fig. 6:

**Fig. 6.** Growth method of the L-System

**Fig. 7.** Growth method of the L-System in Houdini (Author)

**Fig. 8.** Growth method of the column shape-based L-System in Houdini (Author)

The Houdini software was chosen to simulate this L-system growth process. The first generation is a single pillar; subsequently, the pillar splits at the top in the next generation. Repetition of this process results in the simulated growth of a tree, as illustrated in Fig. 7. This effectively replicates the fundamental growth principle found in nature. The simulation of this natural growth process is a means of developing the design required for the purpose of this project; therefore, the growth method of the column-shape-based L-system in the Houdini software has also been tested, as seen in Fig. 8.

In terms of design, it seems unreasonable to merely mimic a principle from nature; it would be a shame simply to design an artificial tree or some other plant for our world. Simulating the principle of the L-system is nevertheless something worth doing at the beginning of the design process. As the process below shows, 'growth' starts with a grid, which then begins to split and takes on an upward motion representing phototropism in plants. After five generations, numerous points in this object start to split, creating a branching system at the end of the process. This is one example of how the L-system principle is simulated through the use of upward motion. However, roots in nature also use this principle in downward motion, so it is necessary to also design a structure derived from downward motion. Figures 9 and 10 illustrate the results obtained; the natural root system performs two functions, anchoring the structure in the soil and absorbing moisture from the soil. Both of these features can be incorporated into the design for this project.

**Fig. 9.** Design simulation of L-system with upward motion (Author)

**Fig. 10.** Design simulation of L-system with downward motion (Author)

The design simulation of the L-system is shown in Fig. 9 and Fig. 10. At first, only one grid exists. After an upward motion is applied, the grid starts to split. This is similar to the behavior of the particle in the basic L-system principle; however, it does not simply mimic the process of the L-system, as the split points do not occur only at the top of the model but can occur almost everywhere. It is important to note that growth can occur in both upward and downward directions.

# **5 Bionics Within the Architectural Fields**

In general, this simulation clearly shows the mathematical principles of tree growth, providing a theoretical basis for the establishment of a tree model. However, the difference


between this case and the project as a whole is that the project is based on the natural principle of biological simulation, with the purpose of creating a new structure in which certain aspects are based on simulated branch shapes rather than simply copying natural branch forms. As the intention is to formulate a new scenario for the transition zone (i.e., the place where architecture transitions to water), many possible design variants may exist. Some possibilities have been explored, including urban furniture (Fig. 11) and columns of different sizes (Fig. 12). By assigning different numbers of points and different particle sizes in Houdini, a variety of designs can be generated, as shown below.

**Fig. 11.** Designs of different urban furniture (Author)

**Fig. 12.** Designs of different urban columns (Author)

# **6 Fabrication Method**

### **6.1 Material Test**

A variety of tests were conducted in order to determine the optimal manner in which to control the movement of water from the bottom of the structure to the top. The testing of various materials helped determine the most appropriate materials for the design. One of the tests conducted related to ease of fabrication of the structure. Plaster was a prime candidate, as an easy-to-acquire building material that can be quickly formed into shapes; thus, it was considered a potential material for use in this project. However, it is not an ideal material for the structure, since it easily absorbs water, making it fragile. Based on this, the next step was to consider MPC concrete. This is a porous material that is beneficial for vegetation growth. The material tests conducted using MPC concrete revealed that it has the capability to contain water, but also that this ability does not depend on the size of the aggregate. After many material tests, a small-sized aggregate was chosen as one of the materials for this project, as it can be used as a structural system. From examining these two materials, it can be seen that each has advantages and disadvantages. While plaster is capable of absorbing more water, it tends to be quite fragile. Conversely, MPC concrete is very bio-receptive, but it is not an ideal material for absorbing water. The solution to this problem is the introduction of an additional layer consisting of 3D-printed plastic. This layer not only serves as a structural layer but also provides the model with an elegant geometric element. Therefore, it was determined that this proposed combination of construction materials would have the greatest probability of delivering an optimal outcome.

**Fig. 13.** (Left): Three layers of materials. (Middle): Details of the structure. (Right): Left-side is the design, right-side is the section (Author)

### **6.2 Fabrication Test**

As shown in Fig. 13, the model consists of three layers. The inner layer will be filled with plaster that can absorb water; the second layer will be a 3D-printed layer, providing geometry and supporting growth; and the third layer will be concrete, forming a structural layer. However, this three-layer arrangement still poses a problem, as the 3D-printed layer may prevent water from coming out, which would defeat the purpose of the entire project.

The solution is to include several holes in the model that allow water to escape from the inner plaster, thus enabling plant growth. This also gives the designer the ability to decide exactly where plant growth will be permitted. It provides more flexibility, because different conditions exist in different environments, meaning that growth will not always happen in the same place. The holes allow the quantity of water delivered to be adjusted accordingly, thus controlling the precise area of growth. Finally, the results obtained from creating these branching systems will be combined with the inner water-absorbing plaster system and the bio-receptive architectural bark system. Regarding the fabrication method, the idea of spraying concrete is under consideration.

# **7 Conclusion**

This project aims to design and fabricate an organic and flexible construction by simulating the growth processes of natural plants, with particular focus on their capillary systems, using particle-based growth for a transition zone. The design result is thus determined by environmental conditions. The basis for this work is an analysis of xylem and capillary systems, with the design completed through digital models of these two systems built on the conclusions of that analysis. The proposed outcome is a bio-receptive scaffold that can absorb water from the soil and provide a suitable habitat for micro-plants to colonize. The proposed design is specifically tailored to the transition zone. Although not a true plant, it employs a plant-inspired means of water delivery, and although the structure does not follow conventional architectural protocols, it uses building materials that are beneficial for the environment. It could therefore be considered a transitional design between nature and architecture, much like the origin of the design concept, the transition zone.



**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

**Simulation, Prediction and Optimization**

# **Environmental Data-Driven Performance-Based Topological Optimisation for Morphology Evolution of Artificial Taihu Stone**

Z. Feng1, P. Gu2, M. Zheng1, X. Yan3, and D. W. Bao4(B)

<sup>1</sup> Suzhou University of Science and Technology, Jiangsu 215000, China
<sup>2</sup> Architectural Association, London WC1B 3ES, UK
<sup>3</sup> Centre for Architecture Research and Design, University of Chinese Academy of Sciences, Beijing 100190, China

<sup>4</sup> School of Architecture and Urban Design, Centre for Innovative Structures and Materials, RMIT University, Melbourne 3000, Australia

nic.bao@rmit.edu.au

**Abstract.** Taihu stone is the most famous of the four celebrated ornamental stones of China. It is formed by the erosion of water in Taihu Lake over hundreds or even thousands of years, and its porous, intricate forms have made it a common ornamental stone in classical Chinese gardens. Over thousands of years of history it has also become a cultural symbol; later studies examined its spatial aesthetics as well as its structural properties. For example, the openings of Taihu stone caves have been found to have a steady-state effect, and its value has been developed in the theories of the Poros City and Porosity in Architecture, building on the stone's original ornamental and symbolic value. This paper introduces a hybrid generative design method that integrates Computational Fluid Dynamics (CFD) and Bi-directional Evolutionary Structural Optimization (BESO) techniques. CFD simulation enables architects and engineers to predict and optimise the performance of buildings and their environment at an early design stage, while the topology optimisation technique BESO has been widely used in structural design to evolve a structure from the full design domain towards an optimum by gradually removing inefficient material and adding material simultaneously. This research aims to design artificial Taihu stone based on environmental data-driven performance feedback using the topological optimisation method. As a traditional and historical ornamental craftwork in China, the new artificial Taihu stone stimulates thinking about the new value and unique significance of the cultural symbol of Taihu stone in modern society. It proposes possibilities and reflections for exploring the related fields of Porosity in Architecture and the Poros City from the perspective of structure.

**Keywords:** Bi-directional Evolutionary Structural Optimization (BESO) · Intricate Architectural Form · Computational Fluid Dynamics (CFD) · Poros City · Porosity

# **1 Introduction**

Taihu stone (Fig. 1) [3] is a limestone made porous and intricate by the erosion of water in Taihu Lake over hundreds or even thousands of years. It has been a common ornamental stone in classical Chinese gardens and a symbol of Chinese culture for hundreds of years [1]. This research proposes an innovative design methodology combining computational fluid dynamics (CFD) and bi-directional evolutionary structural optimisation (BESO) to design new artificial Taihu stones. The focus of this paper is the experiment of designing a new Taihu stone under different parameters. The research contributes, from a structural standpoint, to Porosity Crafts, *The Theory of Porous City* [2] and *Porosity in Architecture*, as well as to the cultural symbolism of Taihu stone in society.

**Fig. 1.** Taihu stone painting

# **2 Design Methodology**

# **2.1 Principle of Experiment**

The "Taihu stone" experiment follows a cross-iteration process of CFD simulation [4–6] and topology optimisation (BESO) [7, 8]. The whole process consists of three parts: *a. test of Taihu Lake's original fluid condition; b. pressure analysis of the original mesh by CFD simulation; c. topology optimisation based on the pressure analysis*. Each topology optimisation step changes the fluid condition (Fig. 2); the process therefore iterates from *a* to *c* again (the flow chart is shown in Fig. 3), as sketched below.

**Fig. 2.** Boundary Conditions during the process of topological optimisation

**Fig. 3.** Flow chart of CFD simulation and topology optimisation
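As a reading aid, the loop of Fig. 3 can be summarised in the following schematic Python sketch; the stub functions stand in for a real CFD solver, a BESO step and the random "acid corrosion" removal, and the numbers are illustrative assumptions rather than the settings used in the study.

```python
# Schematic cross-iteration of CFD and BESO; all three functions are placeholder stubs.
import random

def run_cfd(density, flow_bc):
    # Placeholder: a real solver would return an element-wise pressure field.
    return [flow_bc * d for d in density]

def beso_update(density, pressure, removal_fraction):
    # Placeholder BESO step: remove the least-loaded live elements (rmin filtering omitted).
    live = sorted((i for i, d in enumerate(density) if d > 0), key=lambda i: pressure[i])
    for i in live[:int(removal_fraction * len(density))]:
        density[i] = 0.0
    return density

def corrode(density, random_ratio):
    # Low-density part directly dissolved by acid, modelled as random removal.
    for i, d in enumerate(density):
        if d > 0 and random.random() < random_ratio:
            density[i] = 0.0
    return density

density = [1.0] * 1000            # full design domain
flow_bc = 1.0                     # perennial bottom-flow condition of Taihu Lake
for _ in range(10):               # iterate a -> b -> c, refreshing the fluid condition
    pressure = run_cfd(density, flow_bc)
    density = beso_update(density, pressure, removal_fraction=0.02)
    density = corrode(density, random_ratio=0.05)
```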

### **2.2 Environment Parameters of Experiments**

Studies [9–11] indicate that Taihu Lake, a dead lake with no natural current, has an average depth of 1.8 m, and that its flow direction and velocity are mainly governed by the speed of the prevailing wind over the lake. These studies show that, at the bottom of Taihu Lake, where the Taihu stone lies, the wind-induced current is more than 90% likely to run opposite to the wind above the lake, and the velocity of the lower flow is lower than that of the wind above.

Therefore, the CFD simulation of water flow in this paper applies data of the perennial water flow direction and velocity at the bottom of Taihu Lake, identified by Suzhou and Shanghai's prevailing wind direction and velocity (called SuHu area) in China.

Among the widely accepted views on the formation of Chinese Taihu stone [1], the acidity of the water is important for the erosion of limestone such as Taihu stone. The part corroded by acid consists mainly of dense calcium carbonate and is easily washed away by water currents. Moreover, the proportion and location of this part are difficult to determine within BESO. This paper therefore treats the part directly corroded by acid and removed by the water flow as a random ratio parameter.

# **2.3 Special Parameters Tested in the Experiments**

### **2.3.1 Percentage of Low-Density Volume Compared with the Whole Volume in Taihu Stone**

It should be noted that the density of each part of the stone also affects the experiment in reality [1]. Thus, a new constant, the 'percentage of the low-density part', is introduced to capture this phenomenon. As a result, a fixed percentage of the mesh is always eliminated in every iteration, bringing the result closer to the actual one.

# **2.3.2 Percentage of Elimination Volume Compared with the Whole Volume During the Process of Topological Optimisation**

In nature, changes in the stone and in the water occur simultaneously, so the interval between iteration steps should be extremely short, which is currently unrealistic in software. We therefore test different volume reductions per step to represent the interval that elapses before the next iteration, in which the fluid condition is refreshed. The experiment catalogue is analysed in the next section.

# **2.3.3 Minimum Radius of Influence Number (Rmin)**

**Rmin** is a BESO parameter [12] that affects the outcome of the calculation. Recent research on topology optimisation shows that the result differs under different settings, including Rmin, which means there is more than one solution.

### **2.4 Self-criticism of the Experiment Methodology**

The whole simulation can be regarded as an idealised condition for the formation of Taihu stone. It should be noted that the principle of BESO does not fully comply with the real volume reduction of Taihu stone, because the change of the stone in the water involves both structural optimisation and erosion/corrosion processes. In a sense, however, the experiment proposed here should be more conducive to structural stability.

# **3 Quantitative Definition of Physical Characteristics of Artificial Taihu Stone with the Parametric Method**

# **3.1 Criteria of Traditional Aesthetics of Chinese Taihu Stone**

According to the literature [13, 14] on the cultural connotation of the artistic symbol of "Taihu stone", the ancient Chinese created a set of theoretical principles and proposed the standard of phase stone. The four elements of "Shou, Lou, Zhou, Tou" were used to judge the value of Taihu stone.

"Shou" refers to the ingenious structure of Taihu stone, supporting the shape with the least amount of material, which is similar to BESO's effect [15–17]. As the number of iterations increases, the volume decreases, but the structure will always be one of the optimal solutions; "Lou" means that most of the holes in the Taihu stone are connected, which is a flowing space. "Zhou" is a judgment of formal aesthetics, which mainly refers to the undulating rhythm of the shape of Taihu stone - pattern in the strange and the rhyme in the difference, which parameters cannot quantify. We adopted the design of the initial prototype of Taihu Stone to reach this standard as much as possible and expressed it in the form of a model in this research. The object of "Tou" evaluation is the material characteristics of Taihu stone, which cannot be achieved by the CFD & BESO method, nor is it the focus of this experiment, but the later construction materials can reflect it.

In conclusion, two aesthetic factors of "Zhou" (i.e., "wrinkling textures and furrows") and "Tou" (i.e., "passing through or transparency") need to be expressed by images, while the remaining two spatial factors of "Shou" (i.e., "leanness or thickness") and "Lou" (i.e., "eyes or hollowness") are expressed in a parametric way.

### **3.2 Parametric Definition of "Shou"**

In this paper, the number of spatial nodes is used to measure the complexity of the spatial topology, which corresponds to "Shou": a higher value indicates a more complex spatial topology and a more ingenious structure.

### **3.3 Parametric Definition of "Lou"**

"Lou", explained by the original Reference, is the interconnection between the holes in the Taihu stone. Combined with the actual survey of Taihu stone, we found that it is unadvisable to use the connectivity rate of holes = n/N (N represents the total number of holes, and n represents the number of interconnections) to prove "Lou" because the actual Taihu stone is not that the more holes connected to the middle, the higher the value "Lou" of the Taihu stone, which is also related to the size of the connecting space of the Taihu stone. The extent of hollowness relates to not only the ratio of interconnections but also the size of connected holes.

Referring to the cell-division analogy in biology, the ratio of surface area to volume is a better indicator of the Taihu stone's connectivity. As an organism grows, its cells divide rather than simply getting bigger: a larger cell struggles to take in the extra nutrients it needs and to expel more waste, because as the cell grows it has less surface area relative to its size, i.e. its surface-area-to-volume ratio decreases. Cell division solves this problem by reducing the cytoplasm volume in the two daughter cells and dividing up the duplicated DNA and organelles, thereby increasing the cells' surface-to-volume ratio. Treating the original Taihu stone as a whole, like a cell, division can be regarded as the connection of holes in the stone: more divisions correspond to a higher hole-connectivity rate, and mathematically, a higher surface-to-volume ratio corresponds to a higher connectivity rate of holes. By this analysis, if the cell's growth rate is stable, the larger the gap between surface area and volume, the higher the division rate, and hence the higher the connectivity rate of holes in the Taihu stone. A small sketch of this ratio measurement is given below.
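As a sketch of how this measurement could be computed, the snippet below evaluates the surface-area-to-volume ratio of a watertight stone mesh with the trimesh library; the file name is hypothetical and the library choice is an assumption, not necessarily the tool used in the study.

```python
# Surface-area-to-volume ratio as a proxy for "Lou" (hole connectivity).
# Assumes a watertight mesh; 'stone_iter_10.stl' is a hypothetical file name.
import trimesh

mesh = trimesh.load("stone_iter_10.stl")
if mesh.is_watertight:
    sa_to_vol = mesh.area / mesh.volume   # higher ratio ~ more connected hollows
    print(f"surface/volume ratio: {sa_to_vol:.4f}")
```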

This research selects the overall optimal Taihu stone solution by weighting the two parameters above at 50% each and combining them with the visual images.


**Fig. 4.** The profile results of 10 iterations of stones with different parameters of "Lou"

# **4 Screening and Evaluation of Artificial Taihu Stone Through Experiment**

# **4.1 The Degree of Complexity of the Spatial Topology**

The experiment uses the same piece of Taihu stone and controls the variables for the main parameters described above (the theoretical range of each parameter was estimated before the experiment). For seven different parameter settings, 70 results are obtained after ten iterations of each, and their cross-sectional forms are recorded as shown in Fig. 4. The setting Volume Fraction (Vf) = 80%, Random ratio (Rnd) = 40%, Minimum Radius of Influence Number (Rmin) = 1 × size is taken to illustrate the iterative calculation process of the experiment (see Fig. 5).

**Fig. 5.** Iterative calculation process in CFD

In this paper, spatial nodes are the joints in the artificial Taihu stone skeleton to show the degree of complexity in Taihu stone (Fig. 6).

The number of spatial nodes of each Taihu stone is counted and plotted as a line chart (see Fig. 7).

**Fig. 6.** Spatial nodes in artificial Taihu stone

**Fig. 7.** Numbers of spatial nodes of each Taihu stone

According to the line chart:


### **4.2 The Degree of Connectivity Between the Holes**

In this paper, the ratio of the difference between the standard value and the experimental value to the standard value is used to express the degree of connectivity between holes, as written out below.
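Written out explicitly (the symbols are introduced here for illustration and do not appear in the original), with $R$ denoting the area-to-volume ratio, this degree of connectivity $C$ reads:

$$C = \frac{R_{\text{standard}} - R_{\text{experiment}}}{R_{\text{standard}}}$$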

The area-to-volume ratio of each experiment is compared with the standard area-to-volume ratio at Vf = 90%, Vf = 70% and Vf = 80%; the data are shown in the charts. To make the data more intuitive, a division level is used to show the connectivity between the holes (see Fig. 8).

**Fig. 8.** Division level of a different situation

**Fig. 9.** A composite indicator of Taihu stone

To conclude the charts above (Fig. 9):


# **5 The Value of Artificial Taihu Stone in Fields of Crafts and Porous Space**

# **5.1 Porosity Crafts**

In this article, a related design is made using new materials, such as stainless steel inlaid with drawn steel wire or glass fibre, and new technologies, such as sound, light and electricity, to extract the beauty of shape from Taihu stone. Using parametric design through CFD & BESO, the Taihu stone structure is improved and can offer more possibilities (see Fig. 10). Its definition as an aesthetic symbol is then examined in the following parts, to discuss how traditional crafts can find a new way of living within contemporary society.

**Fig. 10.** Parametric Taihu stone samples

# **5.2 Porosity Architecture**

The porosity of Taihu stone, expressed in its porous shape and façade, can be used in modern architectural design [18] as a sample of porous architecture. In traditional Suzhou gardens, Taihu stone contributes architecturally to changes of light and to the connection of circulation; these different methods of interpretation can be absorbed and developed in architectural design.

# **5.3 Porosity City**

In today's city organisation, formed as the addition of single building units, the Poros City [2] tries to achieve a holistic result with uniform density. We attempt to derive a series of space prototypes and typical ways of combination from Taihu stone. As Fig. 11 shows, a comparison is made between the diagrams of the Poros City and the diagrams of the porous structure extracted from Taihu stone; the two differ in their methods of organisation.


**Fig. 11.** (a) Diagrams of Poros City (b) Diagrams of Poros structure

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Custom-Fit and Lightweight Optimization Design of Exoskeletons Using Parametric Conformal Lattice**

Fuyuan Liu1, Min Chen1(B) , Lizhe Wang1, Xiang Wang1, and Cheng-Hung Lo2

<sup>1</sup> School of Advanced Technology, Xi'an Jiaotong – Liverpool University, Suzhou, China min.chen@xjtlu.edu.cn

<sup>2</sup> School of Film and TV Arts, Xi'an Jiaotong – Liverpool University, Suzhou, China

**Abstract.** This paper presents an integrated design method for the customization and lightweight design of free-shaped wearable devices, illustrated by a lower limb exoskeleton. The customized design space is derived from the 3D scanning models. Based on the finite element analysis, the structural framework is determined through topology optimization with allowable strength. By means of generative design, the lattice library is constructed to fill the frames under different conformal algorithms. Finally, the proposed method is illustrated by the exoskeleton design case.

**Keywords:** Custom-fit · Lightweight design · Conformal lattice · Generative design

# **1 Introduction**

Civilian wearable robots, particularly exoskeletons, are under vigorous development owing to increasing demand from people with mobility impairments and from the rehabilitation of the elderly [1]. Enhanced strength and endurance are the functional expectations of these devices, while conformability and aesthetic properties are the critical aspects considered by users [2]. To satisfy both functional expectations and user requirements, several challenges must be solved, involving lightweight optimization and adaptation to individual differences [3].

Reduced mass leads to less resistance to motion and also less harm to the human body in the event of a collision [4]. For lightweight wearable structures, there are typically two solutions: choosing a lighter material of sufficient strength, or optimizing the structural design [5]. Currently, aluminium alloy is used in most available exoskeleton products, one key reason being its good balance between mass and cost [6]. As lightweight materials, carbon fibre composites and titanium, both with extremely high strength and low density, are occasionally adopted in special cases [7], but such advanced materials are usually not a good choice due to their high expense. Therefore, using topology optimization to reduce the structural mass is a typical approach to lightweight design, although most cases deal with solid material [8]. With the development of additive manufacturing, there is a further possibility of minimizing mass by using lattice material, which is designed for higher specific strength (the ratio between strength and density) [9]. Although many optimization schemes using predesigned lattice structures have been proposed, they have yet to be applied to the lightweight design of exoskeletons.

Besides lightweight design, customization is another crucial factor for wearable devices. Custom fit here refers to fitting both the physical shape and the customer's preferred pattern. Different users have different physiological conditions, such as height, weight and gait pattern, so the prescribed size, form and trajectories of exoskeletons must match individuals for a snug fit [10]. 3D scanning is an effective way to obtain a user-specific digital model [11], which provides an accurate reference for customizing exoskeletons. Previously, however, it was not feasible to provide varied appearances because of the limitations of manufacturing and of the design approaches. Generative design is a method capable of creating unrepeatable forms under certain principles and mathematical algorithms [12, 13], which can develop both aesthetic and functional performance.

In this paper, conformal lattices based on the generative design method are proposed to serve the lightweight and custom-fit purposes. The cellular structure can be expanded in a novel way through specific rules.

# **2 Multi-dimensional Customized Design Method**

To address the lightweight and custom-fit challenges, an integrated approach to the design of the exoskeleton is proposed here. The overall strategy is illustrated in Fig. 1 and mainly contains three parts:


**Fig. 1.** Flowchart of the integrated design method

### **2.1 Design Domain and Modelling**

For the custom-fit design purpose, the 3D scanning was used to obtain the digital customer models, from which the closely fit design space was derived, as illustrated in Fig. 2. Here a uniform thickness of 5 mm is assumed for the aluminum plate. The material properties are shown in Table 1.


### **2.2 Numerical Analysis Based on Topology Optimization**

**Fig. 2.** Custom-fit initial design domain

**Fig. 3.** Loading and boundary conditions

To achieve an optimized morphology with specific boundary conditions and constraints for the design space of the thigh, finite element analysis is executed at this stage. The load condition corresponds to the upper body being supported entirely by the exoskeleton in the static mechanical model, which verifies the stability of the thigh shell structure. According to GB/T 10000-1988, the allowable force formula is expressed in (1):

$$F_{max} = mg \times n_h \times n_s \tag{1}$$

where *m* is the body mass of the user; *ns* = 1.5 is the safety factor; *g* = 10 m/s<sup>2</sup>; and *nh* = 65.6% is the ratio of upper-body mass to total body mass.

According to this formula, the joint is set as a fixed support and an 800 N force is applied at the joint of the thigh exoskeleton, as shown in Fig. 3; a quick numerical check follows. With this load condition set, mesh sensitivity analyses are carried out to obtain a reasonable element size for an effective simulation at low computational cost.
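As a quick check of Eq. (1), assuming a user body mass of about 81 kg (an assumed figure, not stated in the paper), the formula yields approximately the 800 N load applied in Fig. 3:

```python
# Quick check of Eq. (1); the 81 kg body mass is an assumed value for illustration.
m, g, n_h, n_s = 81.0, 10.0, 0.656, 1.5
F_max = m * g * n_h * n_s
print(f"F_max = {F_max:.0f} N")   # ~797 N, consistent with the 800 N load applied
```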

Table 2 presents the mesh density sensitivity analysis. The maximum stress gradually stabilizes and reaches its peak at a mesh size of 3 mm, while the number of elements grows dramatically as the mesh size decreases, with a particularly sharp increase from 3 mm to 2.5 mm. This means that a mesh size of 3 mm gives a reliable result within an affordable computation time.


**Table 2.** Mesh sensitivity analysis

The conventional topology optimization method, solid isotropic material with penalization (SIMP), was used for the initial optimization in this study. SIMP is an element-density-based optimization method that pushes each element density towards one of two extremes (solid or void).

The element density, a pseudo-density serving as the design variable, was used for topology optimization. Its value lies in the range 0 to 1, representing void and solid elements respectively; penalizing the intermediate values prevents the stiffness from scaling linearly with density during the solution process. Therefore, the stiffness at each iteration is given by the power-law interpolation function (2):

$$P(\rho_i) = \rho_i^{P} \tag{2}$$

where ρ*i* is the pseudo-density design variable and *P* is the penalty factor on stiffness; *P* is set to 3 in this study.

The objective for each element is to find ρ*i* that minimizes the overall compliance of the structure (3):

$$\text{Min: overall compliance of structure} = \frac{1}{2}F^{T}U \tag{3}$$

where U and F denote the nodal displacement vector and the external load vector, respectively.

Meanwhile, the volume constraint and static equilibrium should be satisfied (4):

$$\sum_{i=1}^{n} \rho_i v_i = V \le V_{\mathrm{upper\ bound}}, \quad 0 < \rho_i \le 1, \ i = 1, 2, 3, \dots, n$$

$$F = KU \tag{4}$$

where V is the volume of the target domain, bounded above by V*upper bound*, and *vi* is the volume of the i-th element.
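The following minimal Python sketch illustrates the two SIMP ingredients used above, the power-law interpolation of Eq. (2) and an optimality-criteria style density update under the volume constraint of Eq. (4); the finite element solve is omitted and the sensitivities `dc` are placeholders, so this is a conceptual sketch rather than the workflow actually used in the study.

```python
# Minimal SIMP sketch: penalized stiffness (Eq. 2) and an OC-style update (Eq. 4).
import numpy as np

P = 3.0                                        # penalty factor, as in the study

def penalized_stiffness(rho, E0=1.0, Emin=1e-9):
    # Eq. (2): stiffness scales with rho**P, so intermediate densities are uneconomic.
    return Emin + (E0 - Emin) * rho**P

def oc_update(rho, dc, v, v_upper, move=0.2):
    # Bisection on the Lagrange multiplier so that sum(rho * v) meets the volume bound.
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-4:
        lmid = 0.5 * (lo + hi)
        rho_new = np.clip(rho * np.sqrt(np.maximum(-dc, 0) / (lmid * v)),
                          np.maximum(rho - move, 1e-3), np.minimum(rho + move, 1.0))
        if np.sum(rho_new * v) > v_upper:
            lo = lmid
        else:
            hi = lmid
    return rho_new

# Toy usage: 100 elements of unit volume, 40 % volume bound, made-up sensitivities.
rho = np.full(100, 0.4)
v = np.ones(100)
dc = -np.linspace(1.0, 0.01, 100)              # placeholder for dC/drho from an FE solve
rho = oc_update(rho, dc, v, v_upper=40.0)
```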

The objective function here is the minimal mass, which is equivalent to minimal volume with the same density condition, as shown in Table 3.


**Table 3.** Original domain and its optimizations

However, these topology results are fragmented and irregular structures far from a usable product, and they need further design and modification. Hence, with lightweight optimization as the functional target, the shape is trimmed according to the material distribution, as shown in Fig. 4. To ensure that the lightweight result still follows a scientifically accurate analysis, the strength of the trimmed result is analysed; the maximum stress is 207.23 MPa, below the yield strength of aluminium. This provides an effective design frame for lightweight optimization with a lattice structure.

**Fig. 4.** Topology optimization and remodeling

### **2.3 Conformal Lattice Generative Design**

Conformal lattice filling works for 2.5D non-concave design domains constructed by trimming or extruding. Two conditions are mainly considered: the boundary conformal condition and the surface conformal condition, illustrated in Fig. 5.

**Fig. 5.** Conformal conditions

Four proposed algorithms of filling conformal lattice are described in response to the loading condition, as shown in Table 4.

a. Voxel-like conformal filling.

A series of voxel-like cells is populated over the non-concave design domain and can then be filled with tailored lattices. The algorithm has four steps: adjusting the UV direction, setting a unit cell, tessellating, and trimming. In response to the loading, the UV direction is manipulated by four control points *ao*, *bo*, *co*, *do* located on the boundary of a NURBS surface derived from the non-concave surface. Next, the unit cell is defined by *So*, the length of the lattice, and *To*, the thickness of the lattice, and the NURBS surface is tessellated into voxel-like cells. To trim cells outside the design domain, each voxel cell is assigned a logic value (one or zero), with one denoting a filled cell and zero a void. If the design domain is denoted E, the unit cell F and the filled cells G, the trimming result can be represented by (5).

$$G_n = \begin{cases} 1, & \text{if } (E) \cap (f(F_n)) \\ 0, & \text{otherwise} \end{cases} \tag{5}$$

where *n* is the cell index and *f*(*Fn*) is an operation on *Fn* that determines the location of each cell. Grasshopper's "Morph to Twisted Box" component can be used to perform the lattice filling function; a simplified sketch of the trimming rule is given below.
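The sketch below illustrates only the keep/discard logic of Eq. (5); axis-aligned boxes stand in for the real NURBS cells and design domain, so this is not the Grasshopper implementation itself.

```python
# Sketch of the trim rule in Eq. (5): a cell is kept (1) only if it intersects the design domain.
def boxes_intersect(a, b):
    # a, b: ((xmin, xmax), (ymin, ymax), (zmin, zmax)) axis-aligned boxes
    return all(lo1 <= hi2 and hi1 >= lo2 for (lo1, hi1), (lo2, hi2) in zip(a, b))

def trim(design_domain, cells):
    return [1 if boxes_intersect(design_domain, cell) else 0 for cell in cells]

domain = ((0, 10), (0, 10), (0, 5))
cells = [((i, i + 1), (j, j + 1), (4, 6)) for i in range(12) for j in range(12)]
print(sum(trim(domain, cells)), "of", len(cells), "cells kept")
```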


**Table 4.** Four conformal algorithms

Additionally, to obtain an optimal lattice structure, a fractal filling algorithm is constructed for various applications.

If the fractal parameter is denoted *kh*, the fractal domain D and the fractal cell R, the fractal result can be represented by (6).

$$T = f(R_m, k_h), \quad R_m = \begin{cases} 1, & \text{if } (D) \cap (f(G_m)) \\ 0, & \text{otherwise} \end{cases} \tag{6}$$

where *m* is the cell index, *f*(*Gm*) is an operation on *Gm* that determines the location of each cell, and *f*(*Rm*, *kh*) is an operation on *Rm* and *kh* that determines the fractal quantity of each cell. Grasshopper's "Subdivided Twisted Box" component can be used to perform the fractal function.

This voxel-like filling algorithm retains the structural features of each lattice unit, and the population direction of the lattice can be adjusted in response to the loading. Its disadvantage is that the lattice structure has a 'zigzag' boundary, which may result in stress concentration.

b. Boundary shrinkage conformal filling.

In this algorithm, the contour profile of the filling domain shrinks inward towards the centroid. The population of the lattices is determined by *kb*, defined as the hoop gradient, and *ks*, the radial gradient; these two parameters construct the shrinking cells used to fill the lattice. This method avoids the 'zigzag' boundary but twists the filling cells. Moreover, the shrinking conformal population leads to a dense population along the radial direction and an N-sided blank region at the centre, which cannot be populated with cube lattices.

c. Voronoi conformal filling.

Analogous to boundary shrinkage conformal filling, *kb* (the hoop gradient) and *ks* (the radial gradient) are employed as decisive parameters to construct a dot array, from which the Voronoi diagram is built. The diagram is projected onto the non-concave surface and thickened into a lattice structure. This filling method is aesthetic and natural; however, the mechanical properties of the lattice are difficult to measure and verify.

d. Triangular mesh conformal filling.

Unlike the boundary shrinkage approach, this algorithm constructs the lattice structure from the centroid out to the contour profile through a line frame. *kp*, defined as the scattering parameter, is applied to construct a radial dot array directed outward towards the circumference.

If the number of dots in each boundary shrinkage level is determined by the scattering parameter *kp* and the boundary shrinkage level B, this can be represented by (7).

$$P_b = k_p B, \quad B \in \{0, \dots, k_b\} \tag{7}$$

Grasshopper's "Divide Curve" component can be used to construct the dot array.

A Delaunay mesh is built on the radial dot array; more complex wireframes are then constructed and thickened into the lattice structure. This algorithm leads to a better-proportioned mesh. Although the resulting lattice structure is organic and aesthetic, its mechanical properties are hard to verify and measure. Besides, this algorithm is only appropriate for non-concave spaces with small curvature.

Generally, the lattice can be designed and populated in the design domain within the framework of these algorithms. Based on these four algorithms, the exoskeleton can be optimized into a customized and lightweight form, as illustrated in Fig. 6.

**Fig. 6.** Customized design using lattice library

# **3 Conclusion**

In this paper, an integrated method for free-shape customization and lightweight design is presented in detail through the case of a lower limb exoskeleton. The research results can be summarized as follows:


**Acknowledgement.** The research is financially supported by National Natural Science Foundation of China (51805447), XJTLU Key Program Special Fund (KSF-E-01, KSF-E-27) and Research Development Fund (RDF-17–02-44), Open Project for Vehicle Application Engineering of Beijing Jiaotong University (BMRV20KF03).

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **A POI-Based Machine Learning Method for Predicting Residents' Health Status**

Shicong Cao1 and Hao Zheng2(B)

<sup>1</sup> School of Architecture, Clemson University, Clemson, USA <sup>2</sup> Stuart Weitzman School of Design, University of Pennsylvania, Philadelphia, USA zhhao@design.upenn.edu

**Abstract.** The health environment is a key factor in public health. Since people's health depends largely on their lifestyle, a built environment which supports a healthy living style is becoming more important. With the right urban planning decisions, it is possible to encourage healthier living and save healthcare expenditure for society. However, no quantitative relationship has yet been established between urban planning decisions and the health status of residents. With the abundance of data and computing resources, this research aims to explore this relationship with a machine learning method. The data sources are OpenStreetMap and the American Centers for Disease Control and Prevention (CDC). By modeling the Point of Interest data and the geographic distribution of health-related outcomes, the research quantitatively explores the key factors in urban planning that could influence the health status of residents. It informs how to create a built environment that supports health and opens up possibilities for other data-driven methods in this field.

**Keywords:** Health environment · Point of interest · Local data · Health-related outcome · Machine learning

# **1 Introduction**

# **1.1 Health Environment**

Healthy living is a goal of many 21st-century cities. Since 1999, the WHO Healthy Cities project has set out principles of urban planning that support health, together with example cities whose development can be learned from (Duhl and Sanchez 1999). The approach is mainly narrative and case-study based, which does not provide sufficient ground and guidance for quantitative decision making.

Previous studies suggest three domains in which urban planning can most effectively support health and well-being – physical activity, community interaction and healthy eating – since these domains address some of the major risk factors for chronic diseases (Kent and Thompson 2014). That study used a literature review methodology.

With the ever-rapid changing of the world, it is difficult to understand how those principles work and how much they will actually influence the health status of the residents. Since building a healthier living environment is cooperation across the society, it would be beneficial for stakeholders to share a common ground. Data can be the common ground.

With the availability of health data and the abundance of computing resources, it is now possible to quantitatively evaluate the outcome of a planning decision on residents' health. This research showcases the possibility of using open-source city point-of-interest data to predict the health status of residents with machine learning methods.

# **1.2 Problem Statement**

To estimate the obesity rate, one machine learning model, the Convolutional Neural Network (CNN), has been used to analyze satellite imagery (Newton et al. 2020). Analysis of the convolutional layers suggests which visual features are more important for a low obesity rate. The limitations of the imagery method are the amount of computing it requires and the obscurity of its conclusions, due both to the limits of the dataset and to the black-box effect of the algorithm. Street view imagery has also been used as a data source to measure visual walkability (Zhou et al. 2019). The advantage of this approach is that it considers human perception of the built environment; however, the amount of data processing and redundancy is a problem.

The use of OSM data to generate socio-economic indicators and to estimate urban crime risk has been studied and verified (Feldmeyer et al. 2020; Cichosz 2020). The data processing method can be used for reference, and it demonstrates that POI data can be a good indicator of urban conditions and activities. POI data analysis can also be integrated with other methods of data collection: POI data, location-based service positioning data and street view images have been used in conjunction to measure greenway suitability and to give suggestions on greenway network planning (Tang et al. 2020).

# **1.3 Objectives**

This research aims to use machine learning to analyze the relationship between POI data and residents' health status. By looking into the pattern behind the data, the objective is to test existing healthy city planning principles as well as to discover new relations between the built environment and health. Compared with imagery methods, using POI data from OpenStreetMap provides a more quantifiable result and requires fewer computing resources. Also, the variety of features in OSM makes it possible to search for the most important factors among many aspects of the built environment.

# **2 Methodology**

The workflow of this research follows five steps. First, the POI data from OpenStreetMap for the state of California were collected and some initial data exploration was conducted. Second, the health-related outcome data were collected and spatially joined with the census tract boundaries and the POI data counts. Third, a principal component analysis (PCA) was conducted, and the features that best capture the variance were selected. Fourth, the selected features were used to train the supervised machine learning models. Finally, the models were used to predict the health-related outcomes in the test set, and the results were validated and mapped accordingly.

# **2.1 Data Source**

The data source consists of three data sets: the POI data from OpenStreetMap, the local health data from the CDC, and the place boundary file from the Census Bureau. The test region is the state of California, United States, chosen as a compromise between data availability, handling capacity, and statistical accuracy.

# **2.1.1 Local Health Data from CDC**

Local Data for Better Health is a project that reports county-, place-, census tract-, and ZCTA-level data and uses small area estimation methods to obtain 27 chronic disease measures for the entire United States. The dataset is generated with an innovative peer-reviewed multilevel regression and poststratification (MRP) approach that links geocoded health surveys with high-spatial-resolution population demographic and socioeconomic data. The 27 measures include 5 unhealthy behaviors, 13 health outcomes, and 9 prevention practices. They cover major risk behaviors that lead to illness, suffering, and early death related to chronic diseases and conditions, as well as the conditions and diseases that are the most common, costly, and preventable of all health problems (*Places: Local Data for Better Health*, no date). For this research, the specific dataset used has 18 health outcomes available at the place level. The population size of each place is also included as a column in the dataset.

# **2.1.2 POI Data from OpenStreetMap**

OpenStreetMap is an open-source database built by volunteers mapping geographic elements of the world. It represents physical features on the ground using tags attached to its basic data structures (nodes, ways, and relations). The research uses the Overpass API to query the database by tag and obtain the geographic locations of certain features. To begin with, 54 features among all the primary features were queried; the selection was based on the number of data points available and on their relation to the physical activity, community interaction, and healthy eating of residents, as mentioned in the literature review. The 54 features can be categorized into food, healthcare, transportation, community service, leisure, tourism, building, and nature. The data points of each feature were spatially joined with the place boundary data and the counts were calculated; a sketch of one such query follows.
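As an illustration, the sketch below requests one tag from the Overpass API with Python's requests library; the area filter, tag and endpoint are illustrative assumptions and may need adjustment (a different mirror or timeout) in practice.

```python
# Sketch of an Overpass API query for one POI tag; the area filter and tag are illustrative.
import requests

query = """
[out:json][timeout:120];
area["name"="California"]["admin_level"="4"]->.ca;
node["amenity"="restaurant"](area.ca);
out skel qt;
"""
resp = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
elements = resp.json()["elements"]
print(len(elements), "restaurant nodes returned")
```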

# **2.1.3 TIGER/Line Shapefiles Place Boundary Data**

Since the local health data provided by the CDC can be spatially joined with the TIGER/Line mapping system, the TIGER/Line Shapefile is also used to count the number of POIs in each place. Each shape generates one row of data with the number of POIs and the local health outcomes, giving 1468 rows in total, which is the sample size. The area of each place is calculated from the dataset and added as a column. Figure 1 is a map of the 1468 places in California. A place is defined by the United States Census Bureau as a concentration of population that has a name; it typically has a residential nucleus and a closely spaced street pattern, and it frequently includes commercial property and other urban land uses (*Census Bureau Definition*, no date). It is a geographic level covering most of the population with relatively compact areas. A concentration of places can be seen in the metropolitan areas near Los Angeles, the Bay Area and Sacramento. A sketch of the place-level POI counting step is given below.
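A minimal sketch of the counting step follows, using geopandas to join POI points to TIGER/Line place polygons; the file names and the `PLACEFP` key are assumptions for illustration, not the exact files used in the study.

```python
# Sketch of the place-level POI count: join POI points to place polygons, count per place.
import geopandas as gpd

places = gpd.read_file("tl_2020_06_place.shp").to_crs("EPSG:4326")   # hypothetical file
pois = gpd.read_file("california_restaurants.geojson")               # points from Overpass

joined = gpd.sjoin(pois, places, how="inner", predicate="within")
counts = (joined.groupby("PLACEFP").size()
          .rename("restaurant_count").reset_index())
places = places.merge(counts, on="PLACEFP", how="left").fillna({"restaurant_count": 0})
```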

**Fig. 1.** Sample dataset of places in California

# **2.2 Data Analysis**

After joining the three datasets mentioned above, we get a dataset with 1468 rows and 74 columns. The first 54 columns are the POI counts, followed by the population and area of the place, and the last 18 columns are the health-related outcomes.

In the first step of the data exploration, the population and area of the places are plotted to get an initial idea of the sample. As shown in Fig. 2, most of the places are comparable in size and population, with a few outliers which are possibly denser areas within the metropolitan regions. The one sample with a large population and area is Los Angeles, and the second largest in area and population is San Diego.

The test plot of the POI data in Fig. 3 shows some correlation with area and population. A test plot of the health outcomes shows that most of the sample falls within a certain range with no explicit pattern.

**Fig. 2.** Area and population of the sample places

**Fig. 3.** Number of doctor offices and the crude prevalence of coronary heart disease

#### **2.3 Principal Component Analysis (PCA)**

The next step is to conduct a principal component analysis (PCA) on the POI data to decide which features best capture the variance. This is achieved with the sklearn package in Python. As the cumulative explained variance chart (Fig. 4, left) shows, 10 components explain about 88% of the variance, and 20 components explain about 96%. After calculating the first principal component (PC1), the 15 features with the highest scores were selected. Figure 4, right, shows the features selected among the original 54; note that the building category has no feature selected. A sketch of this step is given below.
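A minimal scikit-learn sketch of this step is shown below; the synthetic data frame stands in for the real 1468 × 54 POI table, and taking the absolute PC1 loadings as the "scores" is an assumption about the selection rule.

```python
# PCA on POI counts, then pick the 15 features with the largest |PC1| loadings.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
poi_df = pd.DataFrame(rng.poisson(5, size=(1468, 54)),        # synthetic stand-in data
                      columns=[f"poi_{i}" for i in range(54)])

pca = PCA(n_components=20).fit(poi_df.values)
print(pca.explained_variance_ratio_.cumsum())                  # paper: ~96 % by 20 components

pc1_loadings = pd.Series(np.abs(pca.components_[0]), index=poi_df.columns)
selected = pc1_loadings.nlargest(15).index.tolist()            # 15 highest-scoring features
```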

#### **2.4 Machine Learning**

With the 15 POI features plus the area and population of each place as input data, several machine learning models were trained on a randomized training dataset to predict the 18 health outcomes. The median accuracy was calculated for each model on the test dataset, and an average accuracy rate was then calculated over the 18 predictions.

**Fig. 4.** Left: number of clinics and the crude prevalence of coronary heart disease. Right: features selected with highest first principal components

We implemented four machine learning models (Fig. 5). A Random Prediction model is implemented with the DummyClassifier of the sklearn package, generating random values within the test data range. A Linear Regression is then conducted as a basic statistical prediction, implemented with sklearn's LinearRegression. Random Forest Regression is preferred since it is a machine learning algorithm based on decision trees and relatively fast to train; it is implemented with sklearn's RandomForestRegressor. The Artificial Neural Network (ANN) is a deep learning method that digitally mimics the human brain to predict values; a 5-layer neural network is used in this research, and 10,000 training steps achieve the best accuracy. The ANN model is implemented with Tensorflow. A sketch of the comparison is given below.
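The comparison can be sketched as follows; synthetic arrays stand in for the joined dataset, the ANN is omitted, and `DummyRegressor` is used here in place of the paper's random-prediction baseline, so this is an illustrative approximation rather than the exact experiment.

```python
# Sketch of the model comparison on synthetic stand-ins for the 1468-place dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.random((1468, 17))                  # 15 POI features + area + population
y = rng.random((1468, 18))                  # 18 normalized health outcomes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
for name, model in [("baseline", DummyRegressor()),
                    ("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=1))]:
    model.fit(X_tr, y_tr)
    print(name, mean_absolute_error(y_te, model.predict(X_te)))
```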


**Fig. 5.** Prediction accuracy for different machine learning models

# **3 Results**

Comparing the Average Median Accuracy and the Mean Absolute Error, Random Forest and ANN have similar accuracy on the test set, while Random Forest is much faster to implement and calculate. A feature importance analysis is conducted with the sklearn built-in function. As shown in Fig. 6, population and area are the two most important features. Among the POI data, the most important features are water, park, platform, and restaurant.

**Fig. 6.** Feature importance of random forest model

The prediction results on the test set are mapped in Fig. 7. The Mean Error is calculated as the average over the 18 health outcomes of the prediction minus the ground truth. The overall range of the Mean Error is between −0.18 and 0.18. Since the test data are already normalized between 0 and 1, the Mean Error can represent the average prediction accuracy as a percentage.

There is a tendency to underestimate the health outcomes for larger and denser places, for example Los Angeles and several places in the southeast. For medium and smaller-sized towns and rural areas, the prediction is closer to the ground truth. Note that the prediction accuracy of the Random Forest model is much better than that of Linear Regression on outliers in metropolitan areas; the latter has a Mean Error of −1.4 in Los Angeles.

# **4 Discussion**

### **4.1 Prediction and Theory Testimony**

This research uses POI data from OpenStreetMap to predict residents' health status. From the hundreds of features on OSM, the research selected 54 features that have a higher density of data points and fall into categories that could relate to healthy city planning. The PCA method reduces the features to 15 items that best represent the variance in the data.

With the permutation feature analysis on the test data, apart from the area and population of the places, water, park, platform, and restaurant are the most important POI features. This corresponds to the three domains - physical activity, community interaction, and healthy eating - that could best support health and wellbeing (Kent and Thompson 2014). Water and park usually correlate with public space for activity, platforms relate to transport accessibility, and the abundance of restaurants (in this case formal eating places) can represent the food availability of a certain area. The data-driven research supports previous qualitative research and corroborates existing healthy city planning theory.

**Fig. 7.** Mean error for 18 health outcomes with random forest model

For model selection, compared with Linear Regression and Random Prediction, the Random Forest and Artificial Neural Network models have better accuracy, while the Random Forest is faster to train and use. For outliers, Random Forest and ANN also work better than Linear Regression because of their non-linearity.

### **4.2 Challenges and Future Research**

Since the research shows that it is possible to predict health outcomes with POI data from OpenStreetMap, the next step could be to enlarge the test area and see whether this methodology works for other regions.

There is a difference between metropolitan and suburban areas in terms of prediction accuracy: for metropolitan places such as Los Angeles there is a tendency to underestimate the health outcomes, while many suburban places are overestimated. Further research could examine whether the deciding factors differ across urban contexts.

Another challenge is that the health outcome data are still from a model-generated source, although according to the CDC the model is based on survey and local data. If more direct health data become available in the future, more linkages between the built environment and health could probably be identified and verified.

# **5 Conclusion**

This research explores a method to predict residents' health outcomes using the POI data available from OSM, thereby exploring the linkage between the built environment and residents' health. Different machine learning methods were used and evaluated; the results show that the Random Forest model offers the best balance between prediction accuracy and ease of implementation.

With the abundance of data and computing resources, the research proposes a way of using data to support urban planning ideas. To improve the health status of society, decisions have to be made, and data can be the common ground. This approach shows vast potential for the future: data could assist decision making towards a healthier built environment.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Research on Epidemic Prevention and Management Measures in University Based on GIS and ABM – Taking South China University of Technology (Wushan Campus) as an Example**

Mingxi Chen(B)

South China University of Technology, Wushan. 381, Guangzhou 510630, China

**Abstract.** Prevention and management of an epidemic is a protracted war. As large communities within the city, universities are key regions during the anti-epidemic period. However, the current epidemic prevention and management measures in many universities are not compatible with the spatial form and the characteristics of the population, which is likely to lead to a waste of resources and to cause conflicts. This research simulates the campus environment by constructing a GIS model and simulates the behavior of the campus crowd using ABM. Under the coupling of the two, the spread of an epidemic in a university can be calculated in real time, making up for the deficiency of the GIS model, which can only perform static data analysis. On this basis, the research takes the South China University of Technology as an example and assumes three epidemic prevention and management measures, i.e. closed-off management, zoning management and self-prevention, to simulate the spread of the epidemic, summarize the results of the different measures and provide suggestions.

**Keywords:** Epidemic simulation · Prevention and management · GIS · ABM

# **1 Introduction**

As large communities gathering many people within the city, universities are among the most important areas during an epidemic, so proper epidemic prevention measures should be adopted during the period of normalized epidemic control. However, most universities fail to formulate effective management measures that take the spatial form and population characteristics of the university into account. On the one hand, this leads to a waste of human resources and materials and to low epidemic-prevention efficiency; on the other hand, it is likely to trigger conflicts, which is not conducive to managing the epidemic.

In view of the above problems, this research takes the South China University of Technology (Wushan campus, hereafter referred to as SCUT) as an example, obtains real-time and dynamic data by coupling a GIS model with ABM1 [1, 3], and simulates the effects of different epidemic prevention and management measures, providing more intuitive guidance on epidemic prevention and management strategies for universities.

<sup>1</sup> ABM (Agent-Based Modeling): a modeling method used to simulate complex systems, such as traffic and crowds.

# **2 Methodology**

# **2.1 Research Scope**

SCUT is located on Wushan Street, Tianhe District, Guangzhou, with a total area of 294 hectares. It is a relatively open campus, adjacent to South China Agricultural University and Wushan metro station to the east, Tianhe Coach Terminal to the north, and several vocational colleges to the west, which leads to a complicated mix of people around SCUT (as shown in Fig. 1).

**Fig. 1.** The scope of SCUT **Fig. 2.** Zoning and access gates of SCUT

# **2.2 Research Data**

The whole model consists of two parts: a spatial model of SCUT built with GIS from geospatial data, and a behavior model of the people living and working in SCUT built with ABM. The GIS model is the workspace of the ABM model; the simulation process of the ABM model is reflected in that space in real time, and the corresponding results are fed back together with the GIS model. The relationship between the two is shown in Fig. 3.

# **2.2.1 Spatial Analysis Based on GIS**

To carry out GIS spatial analysis, a geographic information model is first constructed from spatial data, including the road network, open spaces, buildings, boundary, and access gates.

**Fig. 3.** The relationship between GIS model and ABM model

On this basis, this research analyses static geographical elements with the GIS model and simulates crowd activities through the ABM model. Crowd activity data are processed with kernel density analysis, which reveals areas of high concentration of activity that require more attention.

### **2.2.2 Behaviour Simulation Based on ABM**

ABM is a modeling method used to simulate complex systems. The components of a system, such as people and buildings, are described as "agents", and the interactions between agents represent the relationships between the components of the system.

This research uses the GAMA (GIS and Agent-based Modeling Architecture) platform to build the ABM model. An agent in the ABM model can interact with the environment and with other agents to change its own state. Each type of agent has a consistent list of attributes, goals and behaviors, just as humans share common characteristics, while the attribute values, purposes and behavior of individual agents may vary, just as individuals differ from one another.
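The ABM itself is written in GAMA's modelling language; purely as an illustration of the agent structure described above, the following Python sketch shows how an agent type with attributes, a daily schedule and a simple movement step might be expressed. All names, coordinates and the straight-line movement rule are hypothetical simplifications, not the actual SCUT model.

```python
from dataclasses import dataclass

@dataclass
class PersonAgent:
    """Illustrative agent: attributes, a daily schedule, and a simple move step."""
    kind: str        # e.g. "student", "teacher", "resident" (hypothetical categories)
    home: tuple      # (x, y) of the residential building
    schedule: dict   # hour -> target location (x, y)
    position: tuple = None
    infected: bool = False

    def __post_init__(self):
        if self.position is None:
            self.position = self.home

    def step(self, hour, speed=80.0):
        """Move toward the scheduled target for this hour (straight-line simplification)."""
        target = self.schedule.get(hour, self.home)
        dx, dy = target[0] - self.position[0], target[1] - self.position[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= speed:
            self.position = target
        else:
            self.position = (self.position[0] + speed * dx / dist,
                             self.position[1] + speed * dy / dist)

# Hypothetical usage: a student who attends class at 8:00 and eats at 12:00
student = PersonAgent(kind="student", home=(0.0, 0.0),
                      schedule={8: (500.0, 300.0), 12: (200.0, 100.0)})
for hour in range(7, 13):
    student.step(hour)
```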

In addition to the spatial data of the GIS model, the construction of the ABM model also requires a series of heterogeneous data used to simulate crowd activities [2], such as the types of people, the number of people of each kind (see Table 1), and their schedules and behaviours (see Table 2).

**Table 1.** Simulation number of different kinds of people

**Table 2.** Schedule for various groups of people

The above data can be used to simulate the activities of different groups in SCUT in the ABM model. The crowd distribution at a specific moment can be exported as a SHP file of points and imported into ArcGIS, where kernel density analysis reflects the aggregation of the crowd at different times. In addition, the spread of the epidemic can be simulated in the ABM model and the infection rate shown as a chart.
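In the paper the exported SHP points are analyzed with ArcGIS; the sketch below only illustrates the kernel density idea on synthetic point data using SciPy, under the assumption that the crowd positions are already available as plain coordinates. The extents and point counts are invented for the example.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical crowd positions exported at one moment (x, y in metres).
# In the actual workflow these points come from the GAMA model as a SHP file.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(200, 40, 500), rng.normal(350, 60, 500)])  # shape (2, n)

# Kernel density estimate over a regular grid covering the campus extent.
kde = gaussian_kde(points)
xs, ys = np.meshgrid(np.linspace(0, 600, 120), np.linspace(0, 600, 120))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

# Cells with the highest density mark areas of crowd aggregation.
hotspot = np.unravel_index(np.argmax(density), density.shape)
print("densest cell (row, col):", hotspot)
```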

# **3 Result**

### **3.1 Analysis of General Elements**

### **3.1.1 Analysis of Space Utilization**

The building function layout of SCUT is shown in Fig. 4. Due to the long history of SCUT, its overall form has evolved into a relatively complex state. The campus has formed an obvious axis organization along the East Lake in the north-south direction, and there are large community groups in the southwest, southeast and west sides. Research buildings such as laboratories on campus tend to cluster, while other types of buildings do not have obvious aggregation features.

SCUT interweaves with the surrounding urban areas, presenting an irregular campus boundary, and can be clearly divided into two areas, north and south (see Fig. 2). The campus has 13 access gates, with more of them on the southeast side, where the campus has more contact with the city, as shown in Fig. 5.

There are two types of residential buildings in SCUT, student dormitories and community residences, which are mainly distributed near the campus boundary. The population of each residential building is shown in Fig. 6. On the whole, student dormitories are more densely populated than community residences, while community residences account for a larger proportion of campus area.

# **3.1.2 Analysis of Crowd Activities**

In this research, an ABM model is constructed according to the behavior habits and schedules of different types of people, and it reflects the simulation results in real time. Agents can move to various locations on campus along the roads, as shown in Fig. 7. When an agent is infected, it turns red; if a building contains infected people, a gradient of red reflects the proportion of infected people in the building, as shown in Fig. 8.

**Fig. 4.** Distribution of building function

In the simulation process, the model simultaneously exports the location points of the crowd as SHP files, and the aggregation of the crowd at different times is then analyzed through kernel density analysis. The results are shown in Fig. 9.

### **3.2 Comparison of Management Measures**

# **3.2.1 Without Any Management**

Before the simulation, the model randomly selects one person on campus as the initially infected person. According to academician Zhong Nanshan's research on COVID-19 [4, 5], the probability of infection through close contact (within 2 m) is set to 5%. During the simulation, if a healthy person stays within 2 m of an infected person, the healthy person becomes infected with a probability of 5%.
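A minimal sketch of this contact-infection rule, using the 2 m radius and 5% probability stated above, is shown below. The agent representation and the brute-force neighbour search are simplifications assumed for illustration; the actual model runs inside GAMA.

```python
import math
import random

INFECTION_RADIUS_M = 2.0   # close contact distance from the paper
INFECTION_PROB = 0.05      # 5% per contact event (2% under "wearing a mask")

def infection_step(agents, radius=INFECTION_RADIUS_M, prob=INFECTION_PROB):
    """One simulation tick: every susceptible agent near an infected one may get infected."""
    newly_infected = []
    infected = [a for a in agents if a["infected"]]
    for agent in agents:
        if agent["infected"]:
            continue
        for source in infected:
            if math.dist(agent["pos"], source["pos"]) <= radius and random.random() < prob:
                newly_infected.append(agent)
                break
    for agent in newly_infected:
        agent["infected"] = True
    return len(newly_infected)

# Minimal usage: one infected agent among three people standing close together
people = [{"pos": (0.0, 0.0), "infected": True},
          {"pos": (1.0, 0.5), "infected": False},
          {"pos": (10.0, 0.0), "infected": False}]
print("new infections this tick:", infection_step(people))
```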

In the absence of any management measures, the change in the infection rate is shown in Fig. 10. It took about 3 days to reach a 100%<sup>2</sup> infection rate.

<sup>2</sup> This research focuses on the impact of different management measures on the spread of epidemic, simplifying the relevant epidemiological principles, so it is assumed that the epidemic in this research is not fatal.

**Fig. 7.** People agent move around campus **Fig. 8.** People or building get infected

### **3.3 Closed-Off Management**

Closed-off management restricts people from entering and leaving the campus. Under this measure, entry from outside the campus is prohibited, and all staff, students and community residents live inside the campus.

**Fig. 9.** Kernel density analysis of crowd aggregation

**Fig. 10.** Spread of epidemic without any management

The infection rate under closed-off management over time is shown in Fig. 11, reaching 100% after about 4 days. Without considering medical measures, closed-off management can keep infected people from entering from outside. However, it only slightly delayed the growth of the infection rate inside the campus rather than inhibiting the spread of the epidemic.

**Fig. 11.** The spread of epidemics under closed-off management

### **3.4 Zoning Management**

On the basis of closed-off management, zoning management divides SCUT into two zones, north and south, bounded by the north-south gate (see Fig. 2), which are managed independently. Since there are no residences in the North zone, teachers living in the South zone need to pass through the north-south gate to reach the North zone for classes and are therefore not restricted by zoning management. Other members of the university are restricted to their own zones.

The infection rate under zoning management over time is shown in Fig. 12. The infection rate reaches 100% after about 4 days, which is not very different from closed-off management. This is because teachers, as a group that can travel freely between the two zones, are likely to become carriers of the epidemic, bringing the virus to the other zone.

**Fig. 12.** Spread of epidemics under zoning management

# **3.5 Self Prevention**

In order to simplify the simulation procedure, the research takes "wearing a mask" as the measure of self prevention, which reduces the probability of infection to 2% [4, 5].

The infection rate under self prevention over time is shown in Fig. 13. The outbreak period was approximately 6–9 days after the start of the simulation, and it took approximately 20 days to reach a 100% infection rate.

**Fig. 13.** Spread of epidemic under self prevention

# **4 Conclusion**

According to the simulation of crowd activities, a large number of people flock to certain dining places such as canteens during meal times, which causes a high aggregation of people and cross-infection. Therefore, during an epidemic, universities should set up temporary dining places evenly distributed across the campus to reduce aggregation. In addition, different management measures lead to different effects: 1) Closed-off management cannot stop the epidemic from spreading if infected people are already inside the campus. 2) Zoning management requires that different zones be able to operate independently. 3) Self prevention such as "wearing masks" is the most direct and effective measure of epidemic prevention and management. Universities should strengthen education about the epidemic and enhance self prevention awareness.

# **References**



# **Environmentally Driven Aggregate Façade Systems**

Pablo Cabrera Jauregui(B)

School of Architecture + Design, Virginia Polytechnic Institute and State University, 811 4th Street NW, Unit 620, Washington, D.C. 20001, USA pcabrera@vt.edu

**Abstract.** Even though computer simulation of environmental factors and manufacturing technologies have developed rapidly, architectural workflows that can take advantage of the possibilities created by these developments have been left behind, and architectural design processes have not evolved at the same rate. This paper presents a design-to-fabrication workflow that explores data-driven design to improve the performance of facades, implementing computational tools to handle environmental data complexity and proposing robotic fabrication technologies to facilitate the fabrication of façade components.

**Keywords:** Design computation · Simulation · Fabrication · Design robotics

# **1 Introduction**

Advances in the computer simulation of environmental variables such as light, wind, and sound, driven by significant growth in the computing power of the day-to-day tools available to designers, are expanding the scope of variables that affect design decisions. They reveal, like never before, natural conditions that had remained hidden behind the means of representation traditionally employed by architects and, consequently, are connecting architecture to the natural sciences (Peters and Peters 2018).

Architectural designs that respond objectively to these environmental constraints are increasingly complex (Schwitter 2005), both in design processes and formal manifestations. The design process is oriented toward performance, but it is important to distinguish two types of performance in architecture: "the kind that can be exact and unfailing in its predictions of outcomes, and the kind that anticipates what is likely, given the circumstantial contingencies of built work. The first sort is technical and productive, the second contextual and projective. There is no need to rank these two in a theory of architectural performance; important instead is grasping their reciprocity and joint necessity" (Leatherbarrow 2009, p. 18). In the context of this paper, performance is understood as the second category.

On the other hand, formal complexity is pushing the boundaries of construction and demanding new means of fabrication. Computer-aided manufacturing processes are increasingly being employed in the fabrication of complex forms, from small prototypes to large architectural components, due to increases in the availability and versatility (Willmann et al. 2018) of technologies that until a few decades ago, were the exclusive domain of engineers, such as industrial robots. This versatility plays an important role in the adoption of fabrication technologies in the design and architecture fields.

Even though computer simulations of environmental factors and fabrication technologies have developed rapidly, architectural workflows that take advantage of them have been left behind, and architectural design processes have not evolved at the same rate. Despite some interesting proposals, this is an important area of research that is yet to be fully explored, and now is a good opportunity to create such an integral workflow, from design to fabrication (Hauck and Bergnin 2017), that can negotiate computational frameworks, environmental simulation, and fabrication.

According to the U.S. Green Building Council's "Buildings and Climate Change" (n.d.), the commercial and residential building sector produces 39% of carbon dioxide emissions in the United States, more than any other sector. Most of these emissions come from the combustion of fossil fuels to provide heating, cooling, and lighting. If embodied emissions, which are the first emissions generated from building materials, products, and construction processes, are taken into account, another problem surfaces. Currently, about 5.7 billion square feet of new buildings are erected in the U.S. every year, and their embodied emissions amount to around 300 million metric tons per year (Strain 2016).

**Fig. 1.** Independent variables such as materials, environmental conditions, and structural behavior were taken as a framework in which workflow instances were generated based on dependent variables such as geometry, orientation, and assembly logic. Each design experiment explored a specific material system as a means of fabrication and as a manufacturing constraint.

It is not only the performance of the buildings but their materials and construction processes that need to be more energy efficient and climate friendly. At present there is a disconnect between the performance optimization and the fabrication of buildings and their components. This paper explores workflows designed to reconcile this disconnect by proposing new design processes, material systems, and fabrication methods with the aim of moving toward improved performance. The assumption is that by making the built environment more energy efficient and climate friendly, the building sector can play a major role in reducing the threat of climate change. Through immersive case studies focusing on fabrication, we propose workflows that explore data-driven design to improve the performance of facades, implementing for this purpose computational tools to handle complex environmental data and proposing robotic fabrication technologies to facilitate façade-component fabrication.

### **1.1 Design Experiment: Timber Façade**

The first case study was developed as part of the Eco-Park Learning Center project, a collaboration between the Prince William County Solid Waste Division and the Center for Design Research in the School of Architecture + Design at Virginia Tech. Visitors to the Eco-Park Learning Center are taught about a range of alternative energy sources, including solar, wind, and methane.

**Fig. 2.** From top left Figures (a) through (i) showing steps of the computational framework including geometry generation, environmental and structural simulation and robot toolpathing.

### **1.1.1 Aim**

The shading screen for the PWC project was conceived as a way to reduce the building's solar exposure by means of a facade shading device made from the recycled wood commonly used in construction scaffolding (Fig. 2a).

The aim of the first part of the research process is to develop a computational framework that can produce instances of a geometric system informed by material constraints and environmental considerations. The aim of the second part is to explore the fabrication of the system's notch-based assembly logic, to understand the limitations of the fabrication and the opportunities for a robotic milling workflow to produce wood joints, and to inform the construction of the system as a fabrication constraint.

### **1.1.2 Computational Framework**

Within the computer-aided design environment Rhinoceros, a custom Python script was developed that could generate components through a bottom-up, additive process in which the final morphology is a result of the initial-condition rules.

Figure 2b shows all the variables implemented to control the geometry of the screen. The addition of each wood member is linked to a yearly direct-incidence solar radiation simulation based on the Ladybug plugin (Fig. 2c). The algorithm implemented in the script places a new member at each point the simulation determines to have the highest exposure to the sun, reducing the global facade exposure.
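The actual script runs inside Rhinoceros/Grasshopper on Ladybug radiation results; the standalone sketch below only illustrates the greedy logic of repeatedly placing a member at the most exposed grid cell and damping the exposure it covers. The grid, the shading footprint and the reduction factor are assumptions made for the example.

```python
import numpy as np

def place_members(exposure, n_members, footprint=2, reduction=0.5):
    """Greedy placement sketch: put each new member at the most exposed grid cell
    and reduce the exposure it shades. `exposure` is a 2-D array of annual
    direct-incidence radiation values (e.g. from a Ladybug simulation)."""
    exposure = exposure.astype(float).copy()
    placements = []
    for _ in range(n_members):
        i, j = np.unravel_index(np.argmax(exposure), exposure.shape)
        placements.append((i, j))
        # Damp exposure in the cells the new member covers (simplified shading model).
        i0, i1 = max(0, i - footprint), min(exposure.shape[0], i + footprint + 1)
        j0, j1 = max(0, j - footprint), min(exposure.shape[1], j + footprint + 1)
        exposure[i0:i1, j0:j1] *= reduction
    return placements

# Hypothetical 20 x 20 facade grid with a bright patch toward one corner
grid = np.fromfunction(lambda r, c: 100 + 5 * c - 2 * r, (20, 20))
print(place_members(grid, n_members=5))
```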

Figure 2d shows the assembly logic of the system. Notches are added between intersecting members so that every time a new member intersects an older one, it jumps a layer outward by a proportion of its width and a scale-down variable, which controls the cross-section of every new layer. The potential here is to fade out the facade as it keeps adding layers, by cutting the original wood piece into halves, quarters, and so forth. For the prototypes studied, the same cross-section size was used in all facade layers.

It is also possible to specify, as a variable in the computational framework, the minimum number of intersections before a new member jumps out a layer. Figure 2e shows the case of two intersections.

#### **1.1.3 Manufacturing**

The fabrication workflow involves several manufacturing processes, as shown in Fig. 1. To test the workflow, a mockup instance of the computational framework was developed for fabrication.

The generation process was informed by the direct solar radiation used to position the components and by computational finite-element structural analysis used to determine the stress lines and align the angle of each member.

Figure 2f and Fig. 2g show different support cases, displayed as wire boxes, which generate different stress line patterns. Members are aligned to these patterns so as to provide material continuity (Fig. 2h). The mockup screen is 8 × 8 ft and composed of three layers of 2 × 4 members with a length of 3 feet, responding to spring-summer solar radiation.

A sorting procedure in the form of a Python script was developed that arranges layers and parts and also tags, subtracts, and develops the members generated by the computational framework (Fig. 2i). A second custom Python script takes the output of the sorting procedure and automatically generates toolpaths for the robot (Fig. 3a).
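As a rough illustration of the sorting step, and assuming a much simpler data structure than the Grasshopper environment actually provides, the sketch below orders members by layer, tags them and lists the notch subtractions each one needs before toolpaths are written. Field names and the tagging scheme are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Member:
    layer: int
    axis: Tuple[Tuple[float, float, float], Tuple[float, float, float]]  # centre-line endpoints
    intersections: List[str] = field(default_factory=list)  # tags of crossing members
    tag: str = ""

def sort_and_tag(members: List[Member]) -> List[Member]:
    """Order members layer by layer and give each a stable tag used for
    milling, engraving and assembly (hypothetical naming scheme)."""
    ordered = sorted(members, key=lambda m: (m.layer, m.axis[0]))
    for idx, m in enumerate(ordered):
        m.tag = f"L{m.layer:02d}-M{idx:03d}"
    return ordered

def notch_jobs(members: List[Member]):
    """List the notch subtractions each member needs, one job per intersection."""
    return [(m.tag, other) for m in members for other in m.intersections]

# Minimal usage with two hypothetical members
a = Member(layer=0, axis=((0, 0, 0), (3, 0, 0)), intersections=["L01-M001"])
b = Member(layer=1, axis=((1, -1, 0.1), (1, 2, 0.1)))
print(notch_jobs(sort_and_tag([a, b])))
```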

#### **Robotic Fabrication**

The fabrication of the mockup, made from 2 × 4 pine members, took place in Boston as part of a residency at the Autodesk BUILD Space (Fig. 3b), a research and development workspace focused on innovation in architecture, engineering, and construction. The residency period was divided into two parts, one to develop, produce, and test tools for the fabrication process, and one for the actual fabrication of the complete prototype.

A manufacturing cell composed of an ABB IRB 4600 robot with a spindle tool and a safety guard was used for the fabrication (Fig. 3c). Because the industrial robot is a versatile, open-ended machine, in every fabrication project involving robots all the tools needed must be designed and fabricated, and, from a design point of view, new fabrication skills must also be acquired, from operating advanced CNC equipment to precision machining. In the case of the fabrication of the mockup, the focus was on the work holding.

**Fig. 3.** From top left Figures (a) through (i) showing simulation of robot toolpaths, the robot employed for fabrication at Autodesk BUILD Space, and custom gripper production and early tests.

**Fig. 4.** From top left Figures (a) through (i) showing robot calibration, milling and labeling of a façade component, and assembly of screen.

Fingers to hold the wooden pieces were designed and produced based on a pneumatic gripper. Parts were waterjet-cut from ¼-inch aluminum plate (Fig. 3d) and drilled using a Bridgeport (Fig. 3e). This is an especially critical step for tolerances, so calibration equipment was used to ensure perpendicularity between the gripper's component parts. Finally, the parts were tapped using a Haas machine (Fig. 3f). Figure 3g shows the custom-made parts that compose the work holding.

After the work holding was assembled (Fig. 3h), two problems were detected. The longest aluminum parts were too thin for their length and tended to vibrate when forces were applied at their ends; these were replaced with ¾-inch aluminum parts. The second problem had to do with the grip capacity of the fingers contacting the wood pieces. On bare aluminum the wooden pieces tended to slip, so coarse sandpaper was added as a contact surface (Fig. 3i).

Because the floor of the manufacturing cell was not perfectly horizontal, a calibration procedure was established to read the inclination of the work holding in relation to the robot (Fig. 4a). The toolpath-generator script has the flexibility to read test coordinate points taken from the physical gripper, so the toolpaths generated for the robot can deal with this discrepancy between the digital and physical worlds.
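One way to express this calibration, sketched below under the assumption that a few probe points on the work holding are available as plain coordinates, is to fit a best-fit plane to the points and map design coordinates onto the resulting tilted frame. This is an illustrative NumPy reconstruction, not the script used in the project.

```python
import numpy as np

def fit_plane_frame(probe_points):
    """Fit a best-fit plane to probe points (n x 3) and return its origin and
    an orthonormal frame (x, y, normal) describing the tilted work holding."""
    pts = np.asarray(probe_points, dtype=float)
    origin = pts.mean(axis=0)
    # SVD of the centred points: the last right-singular vector is the plane
    # normal, the first two span the plane.
    _, _, vt = np.linalg.svd(pts - origin)
    x_axis, y_axis, normal = vt[0], vt[1], vt[2]
    return origin, x_axis, y_axis, normal

def to_workholding(point, origin, x_axis, y_axis, normal):
    """Map a point given in the plane's local coordinates onto the measured,
    slightly tilted work holding."""
    u, v, w = point
    return origin + u * x_axis + v * y_axis + w * normal

# Hypothetical probe points read from the physical gripper (mm)
probes = [(0, 0, 0.0), (400, 0, 1.2), (400, 300, 2.1), (0, 300, 0.8)]
frame = fit_plane_frame(probes)
print(to_workholding((100, 50, 0), *frame))
```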

Figure 4b shows the milling process, and Fig. 4c shows the final wooden piece after milling. To aid in the assembly process, information generated by the sorting script (component name, intersecting component at each notch, and the direct-incidence solar radiation value at the time the component was added) was engraved on every component (Fig. 4d) using a laser cutter.

The mockup was assembled in Blacksburg at the woodshop facilities of Virginia Tech (Figs. 4e, 4f, 4g). While the notch logic helped secure every piece in place, 4½-inch structural screws were used to fasten them locally using power screwdrivers (Fig. 4h). Figure 4i and Fig. 5d show the completed mockup.

#### **1.1.4 Results and Findings**

As Fig. 5b shows, the computational framework plus the simulation evaluator can reduce the facade solar exposure in the PWC project from 890.76 kWh/m<sup>2</sup> to 369.32 kWh/m<sup>2</sup> by adding a screen on the southeast-facing facade.

The assembly logic based on notches (Fig. 5a) works for positioning the components. A tolerance of 2 mm was introduced during the milling process to account for calibration errors and material changes, such as wood swelling due to humidity. For this reason, structural screws were also employed to secure each notch.

As a manufacturing proof of concept, part of the facade was fabricated at the Autodesk BUILD Space in Boston and later displayed at the ICFF exhibition in New York (Fig. 5c).

**Fig. 5.** From top left Figures (a) through (i) showing assembled parts of the screen, the architectural target façade, the metal façade fabrication and assembly process, and the proposed fully automated manufacturing scenario.

# **1.2 Design Experiment: Metal Façade**

A parallel fabrication method was tested to explore the consistency of the workflow from design to production. The outcome of the computational framework and the simulation evaluator was diverted to another fabrication method.

# **1.2.1 Aim**

The objective of this experiment was to test the viability of an alternative fabrication method for the outcome of the computational framework and simulation evaluator used for the shading screen.

# **1.2.2 Computational Framework**

In the computational framework developed for the Timber Façade design experiment, a Python script was hooked up to collect the center-line geometric information of the facade components. The number of layers composing the façade was kept at zero. A custom Grasshopper plugin running inside the Rhinoceros CAD environment was then used to export comma-separated value files, which were input into the Howick frame machine (Fig. 5e).
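The export itself is handled by the custom Grasshopper plugin; the sketch below only illustrates the general idea of flattening centre-line segments into a comma-separated file. The column layout is hypothetical and does not reproduce the Howick input specification.

```python
import csv

# Hypothetical centre-line segments of facade components: (x1, y1, z1, x2, y2, z2) in mm
segments = [
    (0, 0, 0, 3000, 0, 0),
    (1000, -500, 0, 1000, 2500, 0),
]

def export_centerlines(segments, path="facade_centerlines.csv"):
    """Write centre-line data to a CSV file for downstream machine processing.
    The column layout here is illustrative, not the Howick input format."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "x1", "y1", "z1", "x2", "y2", "z2", "length_mm"])
        for i, (x1, y1, z1, x2, y2, z2) in enumerate(segments):
            length = ((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2) ** 0.5
            writer.writerow([i, x1, y1, z1, x2, y2, z2, round(length, 1)])

export_centerlines(segments)
```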

# **1.2.3 Manufacturing**

The generated morphology was fabricated using a Howick frame machine, which can bend, cut, and punch out thin metal rolls. Figure 5f and Fig. 5g show the assembly process and the final metal piece.

### **Light-Gauge Steel Framing**

According to Howick's manufacturer information, steel framing machines place all punching and fixing holes using accurate computer control. This allows the frames to be manufactured with high precision and to be self-locating and jigging. All the frame components are produced quickly, with the location dimples and pre-punched screw and rivet holes ready for assembly and clearly marked. No further cutting or post-processing work is needed, so low-skilled local labor can be used to assemble the buildings with little supervision.

### **1.2.4 Results and Findings**

The viability of an alternative fabrication method was demonstrated (Fig. 5h). The addition of custom code made it possible to expand the material alternatives in which the outcome of the computational framework could be fabricated.

# **2 Conclusion**

The contributions of this work are weighted heavily toward the process rather than the products, and central to it is a multivariable design-to-fabrication workflow that interrelates a computational framework, environmental performance simulation, and computer-aided manufacturing. A sub-process of this workflow is a form-finding computational strategy for an environment-driven facade in the form of Python scripts. Instances of the workflow have been used to fabricate prototypes as a proof of concept of the process.

From the case studies presented here, it is possible to state that a robotics-based fabrication method informed by a multi-variable computational framework and a simulation evaluator integrated into a design-to-fabrication workflow is feasible. Instances of this workflow, such as the shading screen for PWC, show a responsiveness to environmental conditions that stems from the logic defined in the workflow. From a representational point of view, when environmental data are made visible as numbers, as in the solar radiation simulation on the shading screen, they can be integrated as a design variable because the designer is objectively aware of their influence in the same way she is aware of a drawing or an area schedule. From a computational point of view, the use of scripts and subroutines for environmental data processing allows larger data quantities to be considered, such as the yearly solar radiation results used in the shading screen. Finally, from a manufacturing point of view, the versatility of industrial robots allows them to be used in a wide range of fabrication scenarios. The definition of a tool through the fabrication process is what gives it specificity, and complex forms can be fabricated with a well-designed robot tool.

# **3 Future Research**

One promising line of research would be the consolidation of separate manual processes (Fig. 1) into a comprehensive robotic fabrication workflow, in which a continuous robotic process manufactures the façade, from material sourcing to assembly (Fig. 5i). Integrated design-to-fabrication workflows can also help the building industry become more energy efficient and climate friendly on two different scales. On a material scale, embodied carbon emissions can be reduced by the use of recycled or self-grown materials. In the case of the shading screen, for instance, the workflow proposes as material variables both rescued timber from construction scaffolding and pine wood from responsibly managed forests that provide environmental, social, and economic benefits. During the manufacturing process, embodied carbon emissions can be lowered through the use of more efficient prefabrication dry methods. For instance, in the case of the shading screen, a manufacturing scenario employing the Howick steel framing machine was proposed because of its use of light-gauge steel, which is a modern form of building that has been proven to reduce environmental impact.

This leads to interesting questions about the role of the architect. The increasing availability of advanced manufacturing technologies means the profession is returning to the consideration of construction and manufacturing as part of the design process, and this returns to the architect control over the fabrication of her work, blurring the line between design and construction that was artificially created by modernism. But it also blurs the traditional role of the architect, confronting her with a multidisciplinary set of new skills and knowledge.

# **References**

Buildings and Climate Change (n.d.). U.S. Green Building Council. http://www.usgbc.org



# **Optimization of Daylight and Thermal Performance of Building Façade: A Case Study of Office Buildings in Nanjing**

Hainan Yan1(B) , Yiting Zhang2, Sheng Liu3, Ka Ming Cheung3, and Guohua Ji1(B)

<sup>1</sup> School of Architecture and Urban Planning, Nanjing University, Nanjing, China jgh@nju.edu.cn

<sup>2</sup> School of Architecture, Carnegie Mellon University, Pittsburgh, USA

<sup>3</sup> School of Architecture, The Chinese University of Hong Kong, Shatin, NT, Hong Kong

**Abstract.** In China's hot-summer and cold-winter areas, the façade design of buildings needs to respond to a variety of performance objectives. This study focuses on the optimization of daylight and solar radiation for the façades of office buildings in Nanjing and proposes a simple and efficient method. The method mainly includes random sampling of design models, simplified daylight performance criteria and the selection of the optimal solution. The results show that the building façade can improve indoor lighting uniformity and reduce the indoor illumination level compared with the unshaded reference building. Besides, with the building façade, the amount of solar radiation received by the office building in summer and winter becomes more balanced. The façade optimization design method proposed in this study can provide guidance for office buildings in Nanjing.

**Keywords:** Daylight · Solar radiation · Office buildings · Building façade · Parametric analysis

# **1 Introduction**

The total building floor area in China has reached 63.487 billion m<sup>2</sup>, of which about 11.506 billion m<sup>2</sup> are public buildings [1]. In terms of energy consumption per unit area, the energy use intensity is the highest for public buildings and has been growing. Among public buildings, office buildings have great energy-saving potential, especially in economically developed regions such as the Yangtze River Delta. Building energy performance is usually affected by various factors, e.g., building envelope design, occupants' behaviour, and heating, ventilation and air-conditioning systems. Among them, almost half of the energy consumption of buildings is directly or indirectly caused by the performance of the building façade [2]. In recent years, architects have often utilized energy efficiency design strategies to achieve net-zero energy buildings with high performance. The building façade plays an important role in the whole building system by regulating the microclimate around the building, which can reduce energy consumption and adverse environmental impacts, see Fig. 1 [3].

**Fig. 1.** A full system of the building.

With the development of technology and the demands of use, building façades have gradually become separated from the structural parts, and façade design has become an important part of the architectural design process. The building façade design process is normally composite and flexible. Tabadkani et al. [4] summed up the concepts of the dynamic façade and the adaptive façade. Many terms have been introduced, including kinetic façade, intelligent façade, interactive façade, responsive façade, and smart façade. These studies on façades mainly focused on improving indoor thermal comfort and reducing building energy consumption. For instance, Ricci et al. [5] proposed a dynamic building façade based on climate adaptability using the parametric performance analysis platform LadybugTools. The dynamic façade was simulated and verified in different climate regions in Europe, and the results showed that the system achieves better building energy savings and indoor thermal comfort due to its dynamic and changeable characteristics. Sabry et al. [6] analyzed the optimal configuration of a composite façade system with the design objective of daylighting performance for office space in hot and arid areas. Sheikh et al. [7] studied the design of an adaptive bionic façade based on Oxalis; the simulation results showed that the proposed bionic façade can significantly reduce the energy consumption of highly glazed buildings with a minimal reduction of visual comfort. To sum up, the methods and objectives of building façade design are diversified and multi-objective, because the design of the building façade is both an expression of modern architectural aesthetics and culture and an important way to adjust the physical environment of buildings.

In terms of research methods, the generation of flexible parametric models of the building façade and the efficient search for the optimal façade solution are the two main concerns of related research [8, 9]. However, the traditional façade shaping and optimization process often relies on global heuristic search algorithms, such as the genetic algorithm, which makes the entire workflow time-consuming and inefficient [10]. Based on this, this study raises the following questions at different stages of the workflow:


The above questions ultimately point to the effectiveness and efficiency of the design workflow, which is also the main content to be explored in this study.

This study aims to develop a novel parametric design method for building façades in the context of the Yangtze River Delta region of China. First, algorithms of parametric design and shape generation are studied, and the parts that can be applied to the shape generation of building façades are summarized. Then, taking an office building in Nanjing as a case study, this study applies this design method to the design of building façades and discusses its application to the optimization of solar and thermal performance. This method enables designers to take into account the form of the building façade and its influence on the internal performance of the building at the early design stage, so as to carry out rapid scheme comparison and selection and improve design efficiency. In addition, the façade generation and optimization method proposed in this study should be universal and applicable to other types of building façade.

# **2 Research Method**

The primary goal of this study is to propose an innovative modular dynamic building façade system. To that end, the Voronoi diagram and its deformations are studied. A Voronoi diagram is a subdivision of the plane characterized by the property that any position within a polygon is closer to that polygon's sample point than to the sample points of adjacent polygons, and each polygon contains exactly one sample point [11]. Grasshopper is a plug-in for the Rhinoceros 3D modelling software that integrates many functions, such as parametric modeling and building performance analysis, and in recent years it has been widely used in the field of digital architectural design. In this study, the building façade module is generated on the Grasshopper platform and its variable types and ranges are set. The study focuses on the climatic conditions of the Yangtze River Delta region of China and takes an office building in Nanjing as an example to apply the variable building façade described above. Meanwhile, a reference building was set up to compare and analyze the benefits of the variable building façade in terms of solar radiation and indoor lighting through simulation data. Figure 2 shows the research process and research methods of this study.
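The façade modules themselves are generated in Grasshopper; as a minimal standalone illustration of the Voronoi subdivision described above, the following sketch uses SciPy to partition a panel around random sample points. Panel size and point count are assumptions made for the example.

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical sample points scattered over a 6 m x 6 m facade panel (metres)
rng = np.random.default_rng(42)
samples = rng.uniform(0.0, 6.0, size=(15, 2))

vor = Voronoi(samples)

# Each finite Voronoi region becomes one facade cell; -1 marks an open (unbounded) region.
for point_idx, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if -1 in region or len(region) == 0:
        continue  # skip unbounded cells at the panel edge
    polygon = vor.vertices[region]
    print(f"cell around sample {point_idx}: {len(polygon)} vertices")
```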

**Fig. 2.** Research processes and methods.

#### **2.1 Shape Generation**

The experimental site is in Nanjing, where summer is hot and winter is cold. An office building on the Gulou campus of Nanjing University is taken as an example. The office building has 8 floors, with spatial dimensions of 60 m (length), 15 m (width) and 34 m (clear height). Field measurement results show that the window-to-wall ratio (WWR) of this office building is about 50% (Fig. 3). The trade-offs between different design performances of office buildings in the Nanjing area are complex, such as the relationship between shading and artificial lighting, between a wide field of vision and personal privacy, and between daylighting and glare. Unlike buildings in other areas of southern China, buildings in Nanjing need shading in summer and daylight in winter, so architects need to conduct performance analysis to guide the shading design.

**Fig. 3.** The setting of office building form and size.

For the office space, the position of the staff is usually fixed, so the indoor lighting and environmental requirements are higher than for other building types. The building façade plays an essential role in providing shading or sufficient daylight for the interior. Based on this consideration, this study applies the Voronoi form to the south façade of the office space. Four types of changes are applied to the façade form: the deformation of façade units, the scaling of façade units, the thickness of façade units and the scaling of the holes. These changes alter the indoor lighting and thermal environment (Fig. 4). In the process of façade sampling and performance simulation, it is necessary to constrain the variable ranges of the parametric façade model and give the change intervals. According to practical façade design and construction experience, it is appropriate to set the façade unit size between 1.0 m and 2.0 m, with a maximum horizontal displacement of the façade unit of 1 m. In addition, because offices have high requirements for natural lighting, the proportion of window openings in the façade should not be too small, and its range is set between 0.5 and 1.0. The thickness of the façade units ranges between 0.2 m and 0.6 m. The variable parameters of the building façade are summarized in Table 1.
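A simple way to picture the sampling stage, assuming plain uniform draws within the intervals just listed, is sketched below; the variable names are descriptive stand-ins for the parameters in Table 1 rather than the actual Grasshopper sliders.

```python
import random

# Variable intervals described in the text (cf. Table 1); metres except the
# hole scale, which is a ratio.
VARIABLE_RANGES = {
    "unit_size_m": (1.0, 2.0),
    "unit_displacement_m": (0.0, 1.0),
    "hole_scale": (0.5, 1.0),
    "thickness_m": (0.2, 0.6),
}

def sample_designs(n, ranges=VARIABLE_RANGES, seed=0):
    """Draw n random facade configurations, one value per variable per sample."""
    rng = random.Random(seed)
    return [{name: round(rng.uniform(lo, hi), 3) for name, (lo, hi) in ranges.items()}
            for _ in range(n)]

designs = sample_designs(500)
print(designs[0])
```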

**Fig. 4.** Analysis on the formation of building façade.

### **2.2 Performance Simulation**

The main purpose of this part is to analyze the influence of the parameterized variable façade model proposed above on solar radiation and natural lighting. To this end, the Honeybee environmental plug-in for Grasshopper was used to investigate the daylight performance of each façade configuration. Honeybee provides an advanced grid-based daylighting mode that allows the designer to evaluate façades according to the amount of daylight on the task plane, set at a height of 75 cm in this research. In addition, the Ladybug plug-in for Grasshopper is used to calculate the solar radiation received by the south façade of the building in winter and summer.


**Table 1.** Variable intervals and variable types of building façade.

This study selects the lighting uniformity, based on a static light-environment evaluation, and the indoor lighting level at a specific time as performance factors. The reason is that the method proposed in this study is mainly used in the design phase and pays more attention to the efficiency of performance simulation; compared with dynamic environment simulation, static light-environment simulation is more time-efficient. In addition, the solar radiation received by the office building in winter and summer is considered. The settings of the performance criteria are shown in Table 2.

In order to verify the simulation results, the initial office building without a building façade is set as the reference building. The reference building also needs to be simulated and calculated independently to obtain the reasonable results.

**Table 2.** Performance criteria setting.


• URD: The daylight factor (DF) is one of the most commonly used building daylighting evaluation indicators [12, 13]. It is easy to calculate but cannot represent varying sky brightness. Based on the "Standard for Daylighting Design of Buildings" published by the Ministry of Housing and Urban-Rural Development of China, this study uses the uniformity ratio of daylight (URD) to evaluate the distribution of indoor illuminance. URD is the ratio of the lowest to the average daylight factor (DF) on the reference plane. The optimization objective is to maximize URD, which is calculated as follows:

$$\text{URD} = \text{DF}_{\text{min}} / \text{DF}_{\text{avg}} \tag{1}$$


• RAD: The difference between the solar radiation received by the south façade in summer and in winter is used to evaluate the seasonal balance of solar heat gain; a smaller RAD indicates a more balanced façade. RAD is calculated as follows:

$$\text{RAD} = \text{Radiation}_{\text{summer}} - \text{Radiation}_{\text{winter}} \tag{2}$$

# **2.3 Data Analysis and Visualization**


# **3 Result Analysis**

**Fig. 5.** Simulation results of daylight and thermal performance of office buildings; full spectrum of the results for 500 iterations (a), best performing design scenarios for the URD (b), best performing design scenarios for the URD & RAD (c) and best performing design scenarios for the URD & RAD & hUDI (d).

This research uses an interactive parallel-coordinate system to filter the target performance criteria of the 500 random samples and obtain several optimal variable combinations. After the numerical filtering of each performance criterion, non-optimal samples are eliminated and the optimal combinations of façade variables are retained. As shown in Fig. 5a–d, the left side of each figure shows the values of the façade variables, and the right side shows the performance criteria. According to Fig. 6(a), the numerical range of each performance criterion can be preliminarily determined. Furthermore, the optimal filtering interval of URD is set to 20%–30%, the optimal filtering interval of RAD is set to 20 kWh/m<sup>2</sup>–30 kWh/m<sup>2</sup>, and the optimal filtering interval of hUDI is set to 30%–40%. Finally, six sets of optimized solutions that meet the filtering requirements are obtained.
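The interactive parallel-coordinate filtering can be pictured as a set of range filters over the simulated results, as in the sketch below. The result table is synthetic and its column names are invented; only the filtering intervals follow the values stated above.

```python
import numpy as np
import pandas as pd

# Hypothetical simulation results for the 500 sampled facades.
rng = np.random.default_rng(1)
results = pd.DataFrame({
    "URD_pct": rng.uniform(8, 26, 500),
    "RAD_kWh_m2": rng.uniform(8, 59, 500),
    "hUDI_pct": rng.uniform(16, 55, 500),
})

# Filtering intervals stated in the text.
optimal = results[
    results["URD_pct"].between(20, 30)
    & results["RAD_kWh_m2"].between(20, 30)
    & results["hUDI_pct"].between(30, 40)
]
print(f"{len(optimal)} candidate solutions remain after filtering")
```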

The process above, based on 500 random samples, can significantly reduce the number of candidates and produce several optimal solutions that meet the requirements. Although this method risks missing better solutions, this risk can be alleviated by increasing the number of samples appropriately. Compared with genetic algorithm optimization, this method obtains optimal solutions more efficiently and quickly, and the simplification of the design process makes it easier for designers to understand and adopt.

Figure 6 (left) illustrates that the overall distribution of hUDI values is between 16.38% and 54.83%, with an average of 39.05%. Compared with the reference building (50.25%), the addition of the building façade lowers the indoor lighting level. Meanwhile, Fig. 6 (middle) shows that the overall distribution of URD values is between 8.16% and 25.05%, with an average of 15.04%; in contrast, the URD value of the reference building is slightly lower (14.48%). Because the façade components block glare near the window and reflect more light into the room, they can improve indoor lighting uniformity. According to the box chart of solar radiation data on the south face of the building (Fig. 6, right), RAD values are generally distributed between 8.41 kWh/m<sup>2</sup> and 58.33 kWh/m<sup>2</sup>, with an average of 32.87 kWh/m<sup>2</sup>. Compared with the reference building (34.60 kWh/m<sup>2</sup>), the difference in solar radiation heat gain between summer and winter shows a certain optimization potential. In general, the façade added to the south face of the building reduces the level of indoor lighting but improves its uniformity. In addition, the amount of solar radiation received by the office building in summer and winter becomes more balanced than in the reference building.

**Fig. 6.** Analysis on the optimization potential of daylight and thermal performance of building façade: optimization potential analysis of hUDI (left), optimization potential analysis of URD (middle) and optimization potential analysis of RAD (right).

# **4 Conclusion**

The design objectives of building façade are often multifaceted, involving lighting, solar radiation, building energy consumption and vision, etc. This research focuses on the optimization of office building façade for daylighting and solar radiation. The innovative design process of building façade in this study can be applied to designing the south face of office buildings in Nanjing. The originality and value of this research method are as follows:


This study only proposes a specific design process and does not consider façades facing other directions. Research on the performance of the building façade in terms of building energy consumption or indoor thermal comfort is also missing. In further research, it is necessary to consider more diverse building façade forms and a more comprehensive process of building performance evaluation and optimization.

**Funding.** This research was funded by the Opening Fund of Key Laboratory of Interactive Media Design and Equipment Service Innovation, Ministry of Culture and Tourism (Project Number: 20204).

# **References**



# **Integration of Algorithm-Based Optimization into the Design Process of Industrial Buildings: A Case Study**

Mirjam Konrad(B) , Dana Saez(B) , and Martin Trautz

Chair of Structures and Structural Design, RWTH Aachen University, Schinkelstr. 1, 52062 Aachen, Germany mirjam.konrad@rwth-aachen.de, saez@trako.arch.rwth-aachen.de

**Abstract.** Algorithm-based optimization is widely applied in many fields, such as industrial production, resulting in state-of-the-art workflows for production process optimization. This project takes the cultural lag of conventional industrial architecture design as a motivation to investigate the implementation of algorithm-based optimization into traditional design processes. We argue that an enhanced way of architectural decision-making is possible. Current approaches translate the whole design problem into a single, overly complicated optimization system. In contrast, this paper presents a novel workflow that defines precise design steps and applies optimizations only where suitable. Furthermore, this method can generate relevant results for factory planning design problems with contradicting factors, making it a promising approach for complex challenges such as resource-efficient building.

**Keywords:** Algorithm-based optimization · Evolutionary Multi-objective Optimization · Industrial architecture · Architectural design · Design methodology

# **1 Introduction on Form Finding**

Designing from the abstract to the concrete implies a series of different transformation, evaluation, and decision-making processes [2]. In traditional design processes, the background or intuition of the designer serves as the basis for the decision-making of the iterative actions that give form to the object of design. Although the design process is generally non-linear, it tends to follow strict, systematic methods that hamper an integral part of the process: experimentation [3]. Nowadays, in the information age, decision-making and all its facets are increasingly challenging. In the past, interdisciplinary approaches to design strategies have proven to be successful. The interactions between computer science, mathematics, and architecture have led to the invention of CAD software, for example, from which *Building Information Modeling* (BIM) or parametric applications have evolved. However, the spread of new technologies, such as form-finding tools, is very slow, and their utilization in the architectural design process is scarce [14]. New digital methods are no longer merely a digitalization of the conventional analog planning process; they could also be implemented into design processes to enhance the decision-making workflow when working with complex programs such as industrial facilities.

### **1.1 Form Finding in Industrial Architecture**

Industrial production is known for its advances in process optimization, economic efficiency, and hierarchical structures. Hence, factory planning requires a different design approach than conventional projects. A specific, clearly defined program, its possible expansions, and detailed data on existing production should be considered. The goal of economic efficiency should be achieved not only by optimizing production processes but also by planning the building structure and equipment [10]. Typically, the production design and the building design occur separately and interact too late, so that the building results in a polygonal envelope around the simplified space program [9]. Concerning architecture, the continuous optimization of production and the extensive collection of data regarding utilization and profitability stand in severe contrast to the planning and design of the factory building [9]. Production technology is undergoing increasingly rapid change. The advent of automation and partial automation of work steps has shown that the demands placed on production spaces can change several times during the building's service life [10]. Current *box-in-box* principles are often used as a reaction but require factory constructions with the broadest possible, column-free enclosed space. That is why space shortage and, above all, sustainability and resource economy represent a considerable challenge for contemporary industrial architecture.

It is worth considering what the digitalization and interdisciplinary aspects of the industry have to offer and questioning the current architectural planning process. This research project pursues an alternative approach, investigating how to integrate evolutionary single- or multi-objective optimizations into the design processes of industrial architecture. The aim is to solve specific design decisions, e.g. the production process layout or the accessibility of factory spaces, based on the provided data.

### **1.2 Architecture Optimized by Algorithms**

Algorithm-based optimization is one of the technologies used to improve architectural designs. An algorithm can be defined as a set of rules consisting of distinct, finite steps to solve a problem [13]. If one considers the design task a problem of partially contradictory influencing factors, which are in equilibrium in an optimal state, an algorithm can be used to find a potential solution. Optimization principles are implemented through parameters and their relations, defined by design factors and constraints. An algorithm-optimized architecture can be achieved by solving the problem with respect to one or more defined goals by optimizing the parameters. Thus, the design derives from its restrictions and demands on the space. This bottom-up process allows architects to influence the design even without knowing the final shape [1]. By generating solutions that are neither known to the designers nor imaginable for them, a new way of decision-making is created [15] (Fig. 1).

**Fig. 1.** Differences in decision-making

# **2 Methodology**

In industrial design projects, data is usually available regarding the production process. Within this project's scope that deals with a new type of serial production of natural fiber composite dinghy hulls, this data was established by the author. It consists of an organizational chart showing the various relationships between the production steps and room data sheets. The latter lists the requirements for the production steps as well as room specifications regarding minimum dimensions, operational safety, and work environment.

**Fig. 2.** Diagram of the overall methodology of *informed design*

In order to implement algorithm-based optimizations, a simple translation of design requirements into a single, overall optimization system was not a viable approach. Therefore, the design steps were formulated and dissected to investigate which could benefit from the optimization application.

Figure 2 shows how the data and the optimizations influenced the so-called *informed design process*. Whether digitally enhanced or not, each design decision formed the basis of the next one and made the most direct use of the data provided. In this way, the method was sensibly developed from the global to the detail. The aim was to find a method in which human design skills and digital tools can work together. The key to the process was to implement specific, readily available tools in a transparent way so that the method can be adapted to other factory design projects.

# **2.1 Overview of the Design Steps**

All optimizations were implemented with the CAD software *Rhinoceros 3D* in the visual programming environment *Grasshopper*. They were partly based on existing plug-ins or, i.e. the volume cluster optimization, explicitly developed for this research (Fig. 3).

**Fig. 3.** Design steps. Digitally optimized decision-making steps in red, conventional design steps in blue

*Process Analysis.* The process structure was developed, employing the *Grasshopper* Plug-In *Syntactic* by Pirouz Nourian, which offered a toolset that generated a function graph via points and lines using a force-based graph drawing algorithm [12]. Production steps and their significance based on the frequency of use and their connection were input into the optimization. The graph was then displayed as a bubble diagram. A catalog summarized the resulting diagram-like solutions to develop a model of the *process structure*.

*Space Groupings.* Based on the *process organization*, the production steps were manually grouped into areas.

*Volume Cluster.* This step will be further explored in Sect. 2.2.

*Access Concept.* Since an efficient production relies on an adequate access concept, the previously generated abstract *volume clusters' arrangement* was optimized. The basis for this optimization was the *Grasshopper* Plug-In *Magnetize*, a project by Egor Gavrilov, which creates an optimal corridor system that links rectangular, two-dimensional spaces based on previously specified, necessary relations between said spaces [4]. The footprints of the abstracted *volume clusters* were arranged by the optimization and provided with optimal access. The generated floor plans were analyzed to form rules, which were then applied to the design.

*Urban Context.* Intuitive but informed decision-making strategies were used to situate the building in the urban context. For this purpose, all findings from the design steps and the SWOT analysis were consulted. By combining these principles, an overall cluster was produced, which can function as the starting point of the formulation.

*Material and Structure.* The final step was the design of the construction and the choice of materials. Following the project, which focuses on resource-saving boat building, sustainable materials were prioritized. Often, however, the function of the room determines the materials used.

**Fig. 4.** Simplified functionality of the *volume clustering optimization*

### **2.2 Volume Clustering Optimization**

Using the *Evolutionary Multi-objective Optimization* tool *Octopus* [16], not all optimization factors had to be combined into a single optimal state; instead, solutions were generated according to several criteria. Thus, it remained comprehensible why options were classified as better, which facilitated the decision process. The data was processed to determine the numerical target values over several steps using the following Grasshopper script [5–8, 11] (Figs. 4 and 5).

**Fig. 5.** Section A. *Volume clustering optimization* script in *Grasshopper/Rhinoceros 3D*

The aim was to optimize the clustering of selected production steps, represented by abstract boxes, into a target box. Consequently, they were simplified by using voxels as a base unit. Section A of the script was responsible for selecting the main settings, such as *voxel size* and *global target dimensions*. It also handled the data import of the production spaces and the processing, mainly splitting the data into identity-related and geometry-related data (Fig. 6).

In Section B, the optimizable parameters were generated, providing the basis for the first modelling. The form of each box was defined by creating dependencies between the imported, fixed values *room height*, *minimum depth* and *volume* of each production space and the variable *width x*, while factoring in the *maximal division value* of the spaces. Then the parameters *x*, *y,* and *z* for positioning the space were defined. Finally, gene pools were set for these parameters, determining the location and the width of each space.
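
A minimal sketch of this dependency, assuming illustrative values and omitting the optional splitting of a space into several boxes (the *maximal division value*), could look like this:

```python
# Hedged sketch of the Section B box parameterisation: a production space with a
# fixed volume, room height and minimum depth gets its footprint proportions from
# a single width gene in [0, 1]. Names and numbers are illustrative, not project data.
def box_from_gene(volume, room_height, min_depth, width_gene):
    footprint = volume / room_height          # fixed floor area of the space
    max_width = footprint / min_depth         # widest footprint still keeping min_depth
    width = min_depth + width_gene * (max_width - min_depth)
    depth = footprint / width
    return width, depth, room_height

# example: a 600 m3 space, 4 m high, at least 6 m deep, width gene = 0.5
print(box_from_gene(600.0, 4.0, 6.0, 0.5))
```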

**Fig. 6.** Section B. *Volume clustering optimization* script in *Grasshopper/Rhinoceros 3D*

The model of the production spaces was generated from the genes. This was scripted as an iterative method that first sorts the boxes in decreasing order of volume and then stacks them on top of one another (Fig. 7).
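
The sort-and-stack step can be sketched as follows, using the same illustrative box representation (not the original Grasshopper definition):

```python
# Hedged sketch of the model generation: boxes derived from the genes are sorted
# in decreasing order of volume and stacked one on top of the other at their
# gene-defined x/y positions.
def stack_boxes(boxes):
    placed, z = [], 0.0
    for b in sorted(boxes, key=lambda b: b["width"] * b["depth"] * b["height"],
                    reverse=True):
        placed.append({**b, "z": z})   # place the box at the current level
        z += b["height"]               # the next, smaller box goes on top
    return placed

boxes = [{"name": "assembly", "width": 15.5, "depth": 9.7, "height": 4.0, "x": 0, "y": 0},
         {"name": "painting", "width": 8.0, "depth": 6.0, "height": 3.0, "x": 2, "y": 1}]
print([(b["name"], b["z"]) for b in stack_boxes(boxes)])
```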


**Fig. 7.** Section C. *Volume clustering optimization* script in *Grasshopper/Rhinoceros 3D*

From this model, the three target values were then determined in Section C, towards which the optimization was aimed. The *best volume layout* objective pursues the most realistic stacking and arrangement of the voxel volumes. It is obtained by adding the normalized and weighted values of several factors, such as the *minimum stacking area*, the volume exceeding the *target bounding box,* the *preferred proximity* of the space to the base plane, and the *actual* and *nominal footprint difference*.

The objective *minimum bounding box volume* numerically represents the size of the bounding box of the generated volume. The objective *best production configuration* represents the sum of the distances of the connections between spaces, weighted by their priority. The evolutionary optimization *Octopus* was executed to manipulate the given gene pools to minimize the target values. After generating the first population of solutions, the following child generations were subsequently derived through selection, crossing, and mutation.
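
The three objectives can be sketched as follows; the box representation matches the sketches above, and the simplified terms and weights are illustrative assumptions rather than the original script:

```python
import math

def min_bounding_box_volume(boxes):
    """Objective 2: volume of the bounding box around the whole cluster."""
    xs = [v for b in boxes for v in (b["x"], b["x"] + b["width"])]
    ys = [v for b in boxes for v in (b["y"], b["y"] + b["depth"])]
    zs = [v for b in boxes for v in (b["z"], b["z"] + b["height"])]
    return (max(xs) - min(xs)) * (max(ys) - min(ys)) * (max(zs) - min(zs))

def best_production_configuration(boxes, connections):
    """Objective 3: priority-weighted sum of distances between connected spaces."""
    def centre(b):
        return (b["x"] + b["width"] / 2, b["y"] + b["depth"] / 2, b["z"] + b["height"] / 2)
    return sum(priority * math.dist(centre(boxes[i]), centre(boxes[j]))
               for (i, j), priority in connections.items())

def best_volume_layout(boxes, target_volume, weights=(1.0, 1.0)):
    """Objective 1 (simplified): penalise volume exceeding the target bounding box
    and reward proximity of each space to the base plane."""
    total = sum(b["width"] * b["depth"] * b["height"] for b in boxes)
    excess = max(total - target_volume, 0.0)
    height_penalty = sum(b["z"] for b in boxes)
    return weights[0] * excess + weights[1] * height_penalty
```

Octopus then minimizes these three values simultaneously over the gene pools, so that no single weighting has to be fixed in advance.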

**Fig. 8.** Section D. *Volume clustering optimization* in *Octopus/Rhinoceros 3D*

In Section D (Fig. 8), a pool of solutions was created by repeating the optimization for a sufficient number of generations. The best solutions were analyzed by using the *pareto* mesh of the three objectives. The individual final clusters and their iterations were examined for their functional principles of spatial arrangement, e.g., vertical stacking or prefixed volumes. Finally, these principles or arrangement schemes were incorporated into the design.

### **3 Results**

This research project integrates algorithm-based optimizations into the design process of an exemplary industrial architectural project to improve the coherence of the production and the factory building. Using the developed design method, an arrangement of symbiotic production rooms in the final architectural design was achieved, providing proof of concept.

Figure 9 (left) shows the design decisions derived from either data processed by optimizations or conventional design techniques. The diagrams on the right display retrospectively the influences of these decisions on the final structure. The results, as shown in Fig. 9, indicate that the production of the factory and its form are connected within all the design steps.

### **4 Discussion and Future Development**

The results indicate the benefits of defining precise design steps and applying optimizations where suitable. Compared to the conventional process, the proposed system is able to utilize the extensive amount of data provided to improve the outcome. It is possible to generate relevant results even for design problems with contradictory factors.

The effectiveness of the suggested method stems from its adaptability. Contrary to many concepts in parametric design, this approach can handle both complicated and straightforward problems. By breaking the design task down into steps, the problem can be simplified, and the solutions can be implemented individually into the design process.

The design method presented in this case study should be understood as a guideline, as it was not intended to create a design generation tool. A proper application of optimizations can only be recommended if their limitations and operating methods are known and not hidden in a *black box* application. The abstract logic inherent in technologies such as optimizations can promote and improve conventional design, as long as their use is transparent and plausible.

**Fig. 9.** Results of the design steps and their representation in the final design. Digitally optimized decision-making steps in red, conventional design steps in blue

Another interesting benefit is the possibility of using the design structures after completion of the planning or construction. An evaluation of built designs through their designing factors could provide crucial information on performance and the degree of capacity utilization. This could become a *digital twin* of the functioning building, making projections for expansions or restructuring more precise.

# **5 Conclusion**

As parametric design has proven to be valuable and effective after years of practical use, optimization should be applied more often in industrial architecture. Optimized and conventional design steps should not be seen as contradictory or in competition with each other. Instead, their differences offer opportunities to tackle problems in different ways. The outcome of this project shows that they work particularly well in combination.

This new design process could be an essential element for responsible and sustainable factory planning. In today's age of information, optimizations offer technology that, using available data, could solve the complex problems of resource-efficient building and should therefore be investigated further.

### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **From Separation to Incorporation - A Full-Circle Application of Computational Approaches to Performance-Based Architectural Design**

Yuhan Chen, Youyu Lu, Tianyi Gu, Zhirui Bian, Likai Wang(B) , and Ziyu Tong

> Nanjing University, Hankou Road 22, Nanjing, Jiangsu, China wang.likai@nju.edu.cn

**Abstract.** In performance-based architectural design, most existing techniques and design approaches to assisting designers are primarily for a single design problem such as building massing, spatial layouts, or facade design. However, architectural design is a synthesis process that considers multiple design problems. Thus, for achieving an overall improvement in building performance, it is critical to incorporate computational techniques and methods into all key design problems. In this regard, this paper presents a full-circle application of different computational design approaches and tools to exploit the potential of building performance in driving architectural design towards more novel and sustainable buildings as well as to explore new research design paradigms for performance-based architectural design in real-world design scenarios. This paper takes a commercial complex building design as an example to demonstrate how building performance can be incorporated into different building design problems and reflect on the limitations of existing tools in supporting the architectural design.

**Keywords:** Performance-based architectural design · Computational design · Building performance · Design optimization · Research design

# **1 Introduction**

Performance-based design has become a trend in architecture and has been widely applied to building massing, floor plan layouts, and façade design. When confronting the complex challenge of designing a high-performance building, computational design approaches and tools are becoming an indispensable component of performance-based architectural design. Over the past decade, a fast-growing number of design tools and methods have been proposed by researchers and developers, such as Galapagos (Rutten 2013) and Octopus (Vierlinger 2013), to assist architects in the building design process. However, the applications or studies of these tools are often too narrowly targeted: they typically focus on one specific design problem, such as massing generation (Wang *et al*. 2020a), floor plan layouts (Dino 2016), or facade design (Wright *et al*. 2014). While the relevant studies show that computational optimization or other computational techniques such as simulation can help to improve the performance of a building design, there have been few attempts to incorporate these computational techniques into real-world architectural design scenarios and to reflect on the gap between research and practice.

Since architectural design is a process that needs to synergize different design problems, exploring the use of different computational design tools, from beginning to end, in all aspects and elements of architectural design is essential to making performance a driving factor that remains consistent throughout the whole design circle. In this regard, it is pertinent to examine how to incorporate these existing computational design techniques and approaches into a complete design circle. Hence, this paper presents a full-circle application of different computational design approaches based on an undergraduate design studio project, intending to explore a new design paradigm for performance-based architectural design.

In addition, as an initial attempt at combining different computational techniques and approaches in a complete architectural design task, the study bears many limitations from a practical point of view. However, it should be stressed that the contribution of this paper is to demonstrate an example of how computational design techniques and approaches can be applied to promote a research design (design by research) paradigm in architectural design, where these techniques and approaches serve as a means of systematic inquiry into the design problem, helping designers overcome data-poor situations and, eventually, synthesize building performance into the design process. At the same time, while the study does not advance computational design approaches from a technical perspective, the example presented also aims to provide an opportunity for the research community to inspect the gap that needs to be filled between research and practice. Thus, in the conclusion of the paper, we discuss the deficiencies in current computational design approaches and techniques that we identified during the presented design process.

# **2 Method**

In order to make building performance a driving factor in architectural design, we apply computational design techniques and approaches to different design stages/problems and try to improve the performance using different computational design methods. The design process still follows the ordinary architectural design process, from stages of building massing design, to spatial layouts, and, finally, to facade design. In each design stage, different computational tools are used (Fig. 1). Beyond performance concerns, we also try to incorporate other architectural design intentions such as functions, aesthetics, and building codes or regulations into the computational design.

First, for the building massing design stage, EvoMass (Wang *et al*. 2020b), a plug-in for agile building massing generation and exploration, and DIVA, a Radiance-based performance simulation tool in the Rhino-Grasshopper environment, are used. Unlike typical applications of performance-based design that consider only performance factors, architectural design intentions are also transformed into constraints and objectives for the optimization.

Second, for floor plan design, a performance simulation based on DIVA is first carried out to evaluate the indoor environmental quality in the different sections of the building. On this basis, different functional spaces, such as exhibition and office rooms, are arranged primarily according to the environmental quality requirements of these spaces.

**Fig. 1.** The proposed design process

Finally, the building's facade design is generated by a combination of methods of generative design and performance simulation to achieve an adaptive building skin that can well respond to the surrounding urban environment as well as the local climate condition.

# **3 Case Study**

To illustrate the efficacy of the proposed design process, a case-study design of a commercial complex building in Shanghai is presented to elaborate on how the different computational tools are used (Fig. 2). The site has a park to its north and is surrounded by several high- and mid-rise buildings on its south, west, and east sides. The surrounding urban environment poses a considerable challenge to achieving favorable performance in the design. Thus, the primary objective of the case study focuses on the daylighting performance of the building and its impact on the adjacent public park. The remainder of this section introduces how multiple computational tools were applied to each of the design stages/problems leading to the final design.

### **3.1 Building Massing Design**

At the outset of building massing design, design intentions are specified to define the overall properties of the building massing and then guide the optimization's search for satisficing solutions. These properties include the range of the building's overall size and its maximal gross floor area. The intentions consider the site and the program (Fig. 3): First, the building massing will be developed into an office complex and needs to meet basic functional requirements, since we conceive of the final design as a practical project. Second, the performance of the building is expected to be maximized, and its negative impact on the park minimized. Third, other requirements and intentions related to the formal characteristics of the building are also considered.

**Fig. 2.** Site overview

**Fig. 3.** Intentions in relation to the settings for optimization

In the building massing design optimization process, functional requirements, performance requirements, and formal characteristics are all transformed into quantifiable indicators that can be encapsulated into the fitness function.

For functional requirements, although building performance is our primary design goal, functional requirements remain the most important concern in architectural design. In this case study, the functional indicators evaluated against each generated building massing design include total area, building density, floor area ratio (FAR), and the number of floors (area: 15,000 m², density: 0.5, FAR: 5, number of floors: 8).

For performance requirements, because the building plot is situated in a complex urban environment, this study focuses on the daylighting and thermal comfort quality of the interior space of the generated building design and on its impact on the park. The evaluations were carried out with the DIVA simulation tool on the Grasshopper platform. We took the average annual natural lighting value of the building and the average solar heat radiation value of the park as the two fitness-related values affecting the generation and optimization of the building massing, intending that the generated volume achieve a higher overall fitness.

For formal characteristics, we determined that, as urban architecture and a public building, the design should be spatially and formally interesting, friendly to use, and open to the urban space. Considering that the building's entrance faces the main road, we hope that the entrance can be set back to form a semi-outdoor public space for pedestrians. In addition, we hope that the volume of the building will display richness in shape so that we have more chances to provide shared and activity spaces.

Note that, in a similar way to ordinary architectural design, our design process was also an iteratively reflective process (Schön 1992), in which we found unintended consequences in the optimization results and thereby added new intentions to the optimization. In other words, these intentions were iteratively included in the design process alongside the use of optimization, and it took us several iterations to reflect on the in-progress optimization results, reformulate the optimization problem (fitness functions), and gradually reach the final design.

In the optimization stage, all the above-mentioned fitness-related values are used for optimization. The final fitness function consists of multiple variables and can be expressed as:

*Fitness* = *SI* × *sDA* × *p_area* × *p_den* × *p_entrance* × *num_roof*

Among these parameters, *SI* and *sDA* respectively represent the solar irradiation received by the park and the daylighting value of the building, *p\_area* represents the penalty value of the total area, *p\_den* the penalty value of the density, *p\_entrance* the penalty value of the projected area of the entrance, and *num\_roof* the number of roofs.

As shown in the fitness function, we use penalty terms to control the result of the optimization. If the building area, building density, or entrance area exceeds its preset value, the penalty function punishes the excess, resulting in a lower fitness value. In this way, the more a value exceeds its expected value, the lower the corresponding variable in the formula becomes. As such, we were able to steer the optimization process by controlling the significance of the parameters contributing to the fitness value.
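
A minimal sketch of this penalty logic is given below; the preset area and density limits follow the text, while the entrance-area limit, the ratio-shaped penalty curve, and the example inputs are illustrative assumptions:

```python
# Hedged sketch of the penalty-based fitness used in the massing optimisation.
def penalty(value, limit):
    """1.0 while the value stays within its preset limit, decaying as it exceeds it."""
    return 1.0 if value <= limit else limit / value

def fitness(SI, sDA, num_roof, area, density, entrance_area,
            max_area=15000.0, max_density=0.5, max_entrance=200.0):
    # SI: solar irradiation received by the park, sDA: daylighting of the building,
    # num_roof: number of roofs; the three penalties damp fitness for exceeded limits.
    return (SI * sDA * num_roof
            * penalty(area, max_area)
            * penalty(density, max_density)
            * penalty(entrance_area, max_entrance))

# example: a candidate massing that exceeds the allowed floor area by 10 %
print(fitness(SI=0.8, sDA=0.6, num_roof=3, area=16500.0, density=0.45, entrance_area=150.0))
```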

We used the optimization algorithm embedded in EvoMass, called SSIEA (Wang et al. 2020c), for running the optimization. SSIEA is a diversity-guided algorithm that can produce optimization results with variants showing large design differentiation. The diversity in the optimization, on the one hand, helps us to extract more information from the optimization result and obtain a better understanding of the design problem. On the other hand, the design diversity in the optimization result also provides more optional design solutions that we can choose from when considering other unquantifiable concerns. Figure 4 shows the optimal design variants found by the last iteration of design optimization that can generally satisfy all our intentions and performance concerns, and we selected one design variant for further design development.

**Fig. 4.** Final building massing design optimization results (red-dotted rectangle: the selected design)

### **3.2 Floor Plan Design**

In conventional floor plan design, the relationship among different functional spaces is the priority, but in this study we treat performance as equally important: we started from the analysis of natural light accessibility simulated by DIVA and allocated each of the functional spaces according to each volume's characteristics.

Based on the design variant selected in the building massing design stage, we first adjusted the volume to make the spatial logic clearer. The selected design solution consists of five interlocking blocks: three large volumes that enclose the plot and rotate clockwise, located at the corners of the building plot; one small vertical volume inserted into the former three; and a final horizontal volume connecting all the vertical ones.

According to the daylighting simulation results, we arranged the function of each block based on its daylighting quality as well as its functional requirements. (1) The small vertical volume has the worst daylighting quality and is situated at the intersection of all the other blocks; it is therefore most suitable to serve as the vertical circulation. (2) Among the three large volumes, the southern one has better daylighting and is therefore used as a shared functional space. (3) The northern one has its lower part directly accessible from the urban street interface and was therefore chosen for cultural functions such as a bookstore and coffee shops facing the city. (4) The western one, relatively independent and with an unfavorable daylighting condition, was used as the office area. (5) As for the horizontal volume, its large, unobstructed plan and northward lighting make it suitable for the exhibition, which requires a coherent and flowing space. In this way, several large functional spaces were sorted out, and other subordinate spaces and functions, such as fire escape stairs, restrooms, and receptions, were then placed (Fig. 5).

**Fig. 5.** Development of floor plan design

### **3.3 Facade Design**

Facades are an important element in architectural design and play a critical role in indoor thermal comfort and daylighting. Hence, we developed an algorithm that can generate a facade based on the solar irradiation received by the surface of the volume. Based on the solar irradiation intensity, the algorithm calculates the specific window-to-wall ratio of each facade surface, so as to make the facade more responsive and adaptive to the environment (Fig. 6).

**Fig. 6.** The generation process of the façade pattern

First, based on the building volume and the interior functions defined so far, each facade surface is assigned a facade scheme according to the function or thermal requirements of the space behind it. For instance, for the three-story cantilevered large space housing assembly functions such as exhibition and viewing, fully glazed curtain walls are used. In contrast, for the facades of the commercial and office spaces on the south side of the plan, the relationship with thermal comfort has to be considered.

Second, we used DIVA to calculate the total solar irradiation on each facade surface in summer and winter, computed the difference, and visualized the results as an RGB image, using the color value to express the difference: the darker a part is, the smaller the difference; the brighter, the greater the difference. Because the received solar irradiation in summer is always higher than in winter, a small difference in heat radiation between summer and winter means that this part of the facade surface is less prone to overheating in summer while still receiving solar heat in winter, so a large window-to-wall ratio is favorable. If the heat radiation difference is large, a large window-to-wall ratio is not suitable, as the window may allow excessive solar irradiation to enter the building in summer while its role in passive heating in winter is trivial.

Finally, with the data controlling the window-to-wall ratio of each facade surface, we modularized the facade and set each facade unit to 3.7 × 1.0 m. To avoid repetitive facade patterns, we assigned a random number to each facade unit and compared this number with the window-to-wall ratio to decide whether the unit becomes a window or a wall.
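
A minimal sketch of this final step is shown below; the mapping from the irradiation difference to the window-to-wall ratio and the numeric inputs are illustrative assumptions, while the 3.7 × 1.0 m module and the random window/wall assignment follow the text:

```python
import random

def window_wall_ratio(summer_kwh, winter_kwh, wwr_min=0.2, wwr_max=0.8):
    """Small summer-winter irradiation difference -> large WWR, and vice versa."""
    diff = max(summer_kwh - winter_kwh, 0.0)
    norm = min(diff / max(summer_kwh, 1e-9), 1.0)   # 0 = small difference, 1 = large
    return wwr_max - norm * (wwr_max - wwr_min)

def assign_units(surface_units, wwr, seed=42):
    """surface_units: number of 3.7 x 1.0 m modules on one facade surface."""
    rng = random.Random(seed)
    return ["window" if rng.random() < wwr else "wall" for _ in range(surface_units)]

wwr = window_wall_ratio(summer_kwh=180.0, winter_kwh=120.0)   # illustrative values
print(round(wwr, 2), assign_units(12, wwr))
```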

# **3.4 Final Design Work**

Figure 7 shows the final design work of this case study. The final design presents many underlying architectural implications related to building performance. For example, with regard to the building massing design, the building shows a strong tendency towards strip-shaped volumes, which are more favorable for daylighting due to their shallow floor plans. In addition, the building massing has a lower height at the northwest corner, which allows more sunlight to reach the park. With regard to the floor plan and facade design, the design of the facade surfaces (fully glazed curtain walls or fenestrations) reflects the function, daylighting, and thermal comfort requirements of the space behind them.

**Fig. 7.** Artistic impression of the final design work (left: pedestrian perspective, right: bird view)

Beyond the performance considerations, the design work also shows architectural features reflecting our design intentions. The multiple building volumes allow for more rooftop terraces that provide space encouraging social events, viewing, and relaxation. The entrance with a large overhanging structure shows a friendly gesture to pedestrians on the streets and in the park and welcomes them to enter the building. The building facade, serving as the interface between the building and the urban environment, not only ensures that each section of the building has desirable daylighting and thermal comfort conditions but also satisfies the visual requirements of different functional spaces, for example, higher openness for the exhibition area but more privacy for the office rooms.

### **4 Discussion and Conclusion**

The preceding sections illustrate an example of how different computational design techniques and approaches can be applied to an architectural design task from beginning to end, using building performance as a driving factor. While the example shows the potential of existing techniques and approaches to assist designers in overcoming data-poor situations and achieving a performance-informed design, it is more important to reflect on this process and identify the gap between the functionality of these techniques and approaches and the needs of design processes.

The most significant issue we identified when reflecting on our example is the detachment of design objects and design stages. During the design process, the three design objects were designed separately, and the interaction among them was ignored, despite its importance, which has been demonstrated recently (Zhang et al. 2021). On the one hand, this is because the tight design schedule did not allow us to undertake further iterations to bring the new design information found in the later design stages (floor plan and façade design) back to the preceding design stage (building massing design). On the other hand, there is also a lack of applicable approaches that can incorporate these design objects into an integrated design process. Thus, linking these design objects in one design generation process is a possible way to address this issue. The second issue is the lack of handy design tools. Apart from the building massing design, the other design stages required us to spend a great amount of time and effort establishing design simulation and design generation workflows from scratch, which not only slowed down the design process but also made design exploration inefficient due to the tedious and repeated workflow set-up. Thus, transforming the state of the art into agile and flexible design tools could be a crucial step in bridging the gap between research and practice.

To conclude, this paper demonstrates the usage and efficacy of computational design techniques and approaches in performance-based architectural design and in promoting a research design process. While the potential of computational design techniques and approaches in supporting sustainable building design has been clearly shown in this study, we also argue that there is still a large gap between the state of the art and practice. Hence, this study also attempts to provide an opportunity to inspect and evaluate existing techniques and tools from a designer's perspective and to understand what designers expect in the future.

### **References**


for Computer-Aided Architectural Design Research in Asia (CAADRIA) 2020, pp. 385–394 (2020a)


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Interaction and Perception**

# **4D Soft Material Systems**

Giulia Grassi1(B) , Bjorn Sparrman2, Ingrid Paoletti1, and Skylar Tibbits2

<sup>1</sup> Material Balance Research, ABC Department, Politecnico di Milano, via Ponzio 31,

20133 Milan, Italy giulia.grassi@polimi.it

<sup>2</sup> Self-Assembly Lab, Massachusetts Institute of Technology, 265 Massachusetts Avenue, Cambridge, MA 02139, USA

**Abstract.** This work introduces multi-material liquid printing as an enabling technology for designing programmed shape-shifting silicones. The goal of this research is to provide a readily available, scalable and customizable approach to producing responsive 4D printed structures for a wide range of applications. Hence, the methodology allows customization at each step of the procedure by intervening either on the material composition and/or on the design and fabrication strategies for the production of responsive components. A significant endeavour is initiated to develop and engineer two different material systems that enable shape-shifting: silicone-ethanol composites and polyvinyl siloxane swelling rubbers. The printed samples successfully exhibit the expected swelling behaviour across a variety of printed test patterns.

**Keywords:** 4D printing · Responsive material systems · Shape-shifting silicones

# **1 The Dance of Agencies**

The Polish scientist and philosopher Ludwik Fleck (Fleck 1979) introduced a vision of research practice in which the active part of the researcher deals with setting up the material assemblage, and the passive part consists of observing what the materials will do and how they will perform. These phases are repeated by the researcher in a loop where the steps of human passivity can be seen as material activity, in a *"dance of human and non-human agency in which activity and passivity on both sides are reciprocally intertwined."*

Material agency denotes the possibility that things can act on their own, which contributes to a broader challenging of the boundaries between ontological categories (Van Oyen 2018). In recent years we have experienced a fertile generation of architecture focused on material systems; this interplay of material innovation, advanced material processes and emerging fabrication technologies is increasingly expanding our understanding of material practice (Perez 2011). A material is nowadays perceived as an active generator of design (Grassi et al. 2021), made possible through techniques like 4D printing, which allow designers to fully exploit material engineering and fabrication techniques to produce responsive material systems.

In this research, 4D printing design strategies, such as shape-shifting multi-material bi-layers, are coupled with Rapid Liquid Printing, a printing technique that entails physically drawing three-dimensionally in a gel suspension, with the aim of investigating soft responsive material systems. Contemporary explorations of the aesthetics of soft spaces and architectures have been using silicone as a soft, transparent and flexible rubber, e.g. for inflatable structures such as "Liquid Printed Pneumatics" (Sparrman et al. 2019). Hence silicone is a valuable material for this investigation: its elasticity enables kinetic, morphable shapes. By taking advantage of the inherent material properties and distributing the actuation throughout the surfaces, the self-transforming process eliminates the need for external forces, actuators, or human/robot intervention during the shape-shifting process. Moreover, silicone possesses strong bonding characteristics, flexibility (a wide array of available Shore hardnesses), bio-compatibility, versatility, fire resistance and durability.

# **2 4D Rapid Liquid Printing**

Previous research on 4D printing typically relies on the use of high-end multi-material printers (such as the Stratasys Connex 500), which employ their own proprietary materials, or on custom lab-engineered material properties. As a result, experimentation on active materials is difficult for designers to access and to scale towards applications. A novel printing technique called Rapid Liquid Printing (RLP) (Hajash et al. 2017) spatially extrudes two-part liquid materials immersed within a tank of gel, avoiding the need for scaffolding. In this study we demonstrate that, by coupling such technology with multi-material extrusion, it is possible to print silicone-based responsive material systems that can self-transform.

This project investigates two main research questions:


As shown in Fig. 1, a three-axis gantry-style CNC machine has been equipped with a two-part pneumatic deposition system and a tank of gel that serves as a suspension medium. The pneumatic deposition system consists of two cartridges filled with two-part liquid material (1:1 ratio) that is pushed out by a compressor through a static mixer. Different nozzles can be employed depending on the printing diameter (or line spacing) and printing speed. These factors are influenced by the viscosity of the liquid printing material and its curing time (and thus the printing time for the overall piece or set of prints). Multi-material Rapid Liquid Printing was achieved by swapping material cartridges during the printing process.

**Fig. 1.** Machine setup for RLP

# **3 Shape-Shifting Silicones**

By exploiting bi-layer compositions and multi-material printing, it is possible to couple a silicone matrix mixed with a responsive material to a passive silicone layer. We analyzed the state of the art and possible applications of two material systems with shape-shifting abilities. These were tested for feasibility, first in terms of the desired adaptive behavior and then in terms of machine compatibility. The two material systems are:


Here we consider a transformation active when the shape-shifting is triggered by a change in the external environment, whereas a passive transformation is generated by internal forces due to chemical reactions.

### **3.1 Ethanol-Based Active Responsive Material System**

The first material system tested for printing was a compound of silicone and ethanol. When undergoing a phase change from liquid to gas at its boiling point, ethanol expands. This composite material has been shown to combine high actuation stress with expansion of up to about 900% (Miriyev et al. 2017). After activation, the component reverts to its previous state once the heat source is removed. Previous studies include projects developed at IAAC (Institute for Advanced Architecture of Catalonia), So.ar (Abasova et al. 2019) and Pneu.flex (Jose et al. 2018), which investigated the potential of cast mixtures of silicone and ethanol heated through a coiled Nichrome wire for fabricating responsive skins. Other significant studies have been conducted at the Creative Machines Lab at Columbia University, where composite materials have been 3D printed to create soft actuators with custom shapes (Miriyev et al. 2019), although at a small scale (the build plate is 406 × 406 × 76 mm). Moreover, that system does not allow the print to grow in height, because of the liquid nature of silicone, despite UV curing. RLP, on the other hand, allows a larger scale, due to the bigger size of the tank and thanks to the support given by the gel.

# **3.1.1 Material Testing**

For this material system we tested different compositions, starting from cast samples. As described in the literature (Miriyev et al. 2017, 2018a, 2018b; Jose et al. 2018), 20% vol. ethanol in the mixture allows for the optimal expansion rate of the compound. We tested 10, 20 and 30% by weight (slightly higher than the % in vol., because the specific gravity of ethanol is 0.8); at 30% the specimens remained wet without exhibiting an increase in expansion. We cast several bi-layer samples made of a passive layer (plain silicone) and an active layer composed of silicone and ethanol, which starts responding to heat at 40 °C. Upon heating, the stress mismatch between the two layers initiates the shape change. The samples were heated with a heat gun and responded within approximately one minute. Two different silicones were employed as a matrix: Smooth-On Sorta Clear and Polytek PlatSil Gel-25. The cast samples with the Polytek silicone achieved a more dramatic bending radius because of the lower Shore hardness of the material.
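
For reference, the weight-to-volume conversion can be written out; the ethanol specific gravity of 0.8 follows the text, while the silicone density of roughly 1.1 g/cm³ is an assumed value (it varies by product). Under these assumptions, a 20% by weight mixture corresponds to roughly 26% by volume:

$$\phi_{\mathrm{EtOH}} \;=\; \frac{w/\rho_{\mathrm{EtOH}}}{w/\rho_{\mathrm{EtOH}} + (1-w)/\rho_{\mathrm{Si}}} \;=\; \frac{0.20/0.8}{0.20/0.8 + 0.80/1.1} \;\approx\; 0.26$$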

Figure 2 displays a printed bi-layer strip (10 cm length, 3 cm width), which is actuated with a heat gun within 1 min and 40 s. The two layers are printed starting from the lower one and swapping cartridges to change material. In this phase it is crucial to carefully consider the height of each layer, and thus the distance between the two layers, in order to print the second layer on top of the first. To ensure adhesion, when the first layer starts to cure (but is not yet fully cured), the second has to be pushed into the first by setting the layer distance lower than the layer height. This approach takes into account the viscosity and thixotropy of the material and, most importantly, the curing time.

**Fig. 2.** Printed bi-layer activation; the transition from state 0 to 3 took 1 min and 40 s.

### **3.1.2 Thermo-Responsive Apertures**

By exploiting the successful bending behaviour of activated bi-layers, more complex and larger geometries have been printed, such as the star-like aperture shown in Fig. 3. The sample demonstrates the ability to print interesting kinetic geometries that can be exploited for environmentally adaptive products and structures. The star-like aperture has a diameter of 15 cm and was printed in less than 5 min. It consists of two layers, where the first is plain silicone and the second is a silicone-ethanol mixture. The planar line spacing is 0.75 mm, and the printed path has been optimized ("spiralized") with a script so that each layer results in a single continuous printed line. The layer height is 3 mm, hence the distance between the two layers was set 1 mm apart in order to achieve 0.5 mm of superimposition by pushing downwards to ensure adhesion.
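
Such "spiralizing" of a layer's infill can be sketched as follows; the disk radius (75 mm) and line spacing (0.75 mm) follow the text, while the Archimedean-spiral formulation and sampling density are assumptions about how such a script might work, not the original Grasshopper definition:

```python
import math

def spiral_path(radius=75.0, spacing=0.75, points_per_turn=90):
    """Sample points along one continuous Archimedean spiral covering a disk."""
    pts, b = [], spacing / (2 * math.pi)     # radial growth per radian
    theta, theta_max = 0.0, radius / b
    while theta <= theta_max:
        r = b * theta
        pts.append((r * math.cos(theta), r * math.sin(theta)))
        theta += 2 * math.pi / points_per_turn
    return pts

path = spiral_path()
print(len(path), "points, outer radius", round(math.hypot(*path[-1]), 1), "mm")
```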

Several tests of the material composition were performed to obtain the right viscosity, printing time and cure time for the overall objects. Table 1 indicates the percentages by weight of silicone, ethanol, thickener and retarder for the active layer. The active layer is printed by mixing the material pushed out of the two cartridges, each filled with part A and part B respectively; the additives are included in the same ratio in both to obtain the same viscosity.



This composition has to be customized with respect to the printing strategy adopted, especially regarding the retarder, which influences the cure time. For instance, if the same print consists of two layers of two different materials, a higher percentage of retarder keeps the material in a fluid state (not fully cured) while the second layer is printed, which guarantees bonding. We observed that more than 5% retarder resulted in weakened material properties or uncured parts.

The transition of the responsive element is quick at high temperatures (approximately two minutes at 70 °C); however, if we imagine such systems being activated by solar heat, we would have to integrate an additive to render the composite more heat-conductive and thus make it transform faster. This issue has been addressed by Xia (2020), who used a diamond nanoparticle-based thermally conductive filler to improve the actuation speed.

### **3.2 PVS-Based Passive Responsive Material System**

Passively actuated adaptive systems represent another possibility within shape-changing systems. Expanding upon the work of Pezzulla et al. (2015), we conducted a series of experiments aimed at testing the swelling capacity of polyvinylsiloxane (PVS) bi-layers. As investigated by Prof. Holmes from the MOSS Lab at Boston University, harnessing anisotropic swelling allows for precise control over the curvature in bilayer structures.

**Fig. 3.** Star-like shape growing in height upon heating

### **3.2.1 Material Testing**

PVS is a two-part silicone mixed in a 1:1 ratio. The tested specimens were cast as bilayer disks with two different silicones (PVS Zhermack Elite Double 32, green, and Zhermack Elite Double 8, pink), in which one layer expands relative to the other. The differential swelling of the two silicones, which have different Young's moduli (the pink silicone has a lower Shore A hardness than the green, 8 versus 32), is caused by the residual polymer chains left in portions of the cured elastomers. Consequently, the system exhibits a physical transformation in response to its internal micro-behavior and the induced stresses. We will refer to the green PVS 32 as the passive material and to the pink PVS 8 as the active material.

Following the Timoshenko bimetal model, we produced cast circular disks of 10 cm diameter with different thickness ratios between the two layers. Once the two-part material had cured, the samples were removed and their transformation was examined. Their final curvature was affected both by the total thickness and by the relative thickness of one layer to the other.
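
For reference, the classical Timoshenko expression for the curvature of a two-layer strip can be written with the thermal mismatch replaced by a differential swelling strain ε; this substitution is our adaptation for the swelling case, not the authors' exact derivation:

$$\kappa \;=\; \frac{6\,\varepsilon\,(1+m)^{2}}{h\left[\,3(1+m)^{2} + (1+mn)\!\left(m^{2} + \dfrac{1}{mn}\right)\right]}, \qquad m = \frac{h_1}{h_2},\;\; n = \frac{E_1}{E_2},\;\; h = h_1 + h_2$$

The expression makes explicit why both the total thickness h and the thickness ratio m govern the curvature observed across the cast disks.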

**Fig. 4.** Tests arranged in order of magnitude of curvature, from left: 14, 11, 10, 13, 18, 0, 17, 15, 12, 16

As shown in Fig. 4, the final curvature achieved after curing decreases as the overall thickness of the disks increases. For a fixed quantity of green material, the highest curvature was achieved with the lowest quantity of pink; overall, however, the highest curvature was achieved with a higher quantity of pink (by weight) relative to the green layer. These tests showed that the best results can be achieved with a ratio of 2.5 between the green and pink layer thicknesses. Assuming that the same ratio of 2.5 also applies to the volume when printing, the results were achieved either by using different layer heights or by changing the patterns (and thus the surface area). Viscosity was measured and tuned with the thickener to a range of 800,000–1,000,000 cP to allow printability and to influence the print speed, which, for instance, has to be faster for a more liquid paste. Furthermore, a retarder was added (1–4%) in order to tune the cure time according to the design and fabrication needs.

# **3.2.2 Shape-Shifting Swelling Rubbers**

Initial printed experiments were conducted with two-layered simple geometries such as rectangles and circles, varying the layer thicknesses and patterns to achieve a gradient of curvatures in accordance with the previous tests. An interesting finding, shown in Fig. 6 (bottom left), was that, by taking advantage of the liquid phase of the silicone (increasing the amount of retarder according to the printing time), it was also possible to print a layer of PVS 8 (pink) underneath, on top of, or inside the green layer. While the viscosity of the PVS 32 was still low, the nozzle could pass through it and print underneath it, simplifying the multi-material printing process. Indeed, in order to print a three-layer structure made of two materials, instead of swapping the cartridges twice, it was possible to swap them only once. Regarding pattern actuation, the main direction of the material expansion/shrinkage, and the resulting bending orientation, depended on the main direction of the active material. For instance, as shown in Fig. 5, concentric circles create a positive Gaussian curvature. Other printed objects demonstrated the ability of lattice structures to constrain the deformation of a surface (Fig. 7). In Fig. 8, the local curvature induced by the swelling creates a surface change in the regular dotted pattern on the other side of the piece, generating a morphable fur.

**Fig. 5.** Left: from left to right - ratio values of the green layer area over the pink are 1, 2 and 3. Right: the ratio of the green layer over the pink is 3 and the overall thickness is reduced.

**Fig. 6.** Printing patterns of the active layer: from one side to two sides stripes and full layer

**Fig. 7.** Lattice structures used to constrain the actuation of a surface

**Fig. 8.** Morphable fur

# **4 Discussion and Conclusions**

Throughout our experiments, silicone exhibited extreme flexibility in terms of material testing, enabling optimization both of the mix design and of the fabrication process. Indeed, the methodology applied in this research allows for customization at each step of the procedure by intervening either on the material composition and/or on the design and fabrication strategies for the production of active components. This expands the current domain of 4D printing research, which has been constrained to proprietary materials and machines or to complex laboratory processes that are difficult to scale. Our approach exploits off-the-shelf two-part silicones that are widely available and have a wide range of material applications, combined with a novel form of printing to create complex and precise structural transformations. Applications of tunable 4D printed silicones can include active hybrid material systems as sensorially responsive environments (Ahlquist 2019), soft shape-changing tangible interfaces (Ou et al. 2016), and wearable emotional interfaces (Farahi 2018).

Both the silicone-ethanol mixture and the PVS bi-layers have proven to be feasible materials for RLP. The main challenges for silicone-ethanol were related to silicone porosity and ethanol volatility, which jeopardize the durability of the material system. The PVS bi-layers, on the other hand, are not reversible once actuated. Therefore, future work can focus on improving the durability of the material systems presented here and on designing more complex structures.

Finally, 4D soft material systems have proven to enable a wide array of kinetic designs with an easily available material such as silicone.

# **References**


**Open Access.** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Material Response: Technology, Material Systems and Responsive Design**

Marcus Farr1(B) , Andrea Macruz1(B) , and Alexandre Ulson2(B)

<sup>1</sup> Tongji University, Shanghai, China marcusfarr@mac.com, andrea.macruz@uol.com.br <sup>2</sup> Belas Artes, Sao Paulo, Brazil

**Abstract.** This paper investigates the role technology and materials play in making meaningful connections between people, architectural space and the workplace. It indicates that design can synergize with responsive technology and material systems to leverage new power for future workplace interaction design. We have created a spatial prototype paired with a series of simulations that act as a proposal to stimulate workplace interaction. The project employs a responsive ceiling that combines a fluid computational pattern with temperature-responsive bi-material laminates with thermochromic coatings and electrically programmed micro-controllers. The project is then connected to a computer code that computes readings based upon ongoing interactions with humans wearing body sensors. The methodology categorizes the simulation results into aroused states and calm states. As the computational patterns and colors change, we are made aware of the relationships between space, technology, and the human sensorium. This conversation brings insight into how we can design more effectively for workplace interactions.

**Keywords:** Technology · Responsive · Interactive · Materials · Humans

# **1 Introduction**

In the 1950s, Turing proposed a test for machine intelligence, arguing that if a machine can make humans believe it is human, then it has intelligence. Shortly after, artificial intelligence, industrial robots and chatbots were developed, which started a dialogue between designers and technology that has expanded exponentially. Currently, there are numerous technological tools that connect people with one another. Nevertheless, because of COVID and other issues, isolation and polarization are becoming more and more common in the everyday life of people. The link between technological innovation and social segregation presents an ongoing issue that can be addressed by architects. Is there a way that technology can help us to strengthen verbal and non-verbal communication in the workplace to allow for better understanding between colleagues? Can this enhance the experience of designed spaces and generate stronger emotional connections?

This paper and the corresponding project explore the consequential role technology and materials play in making meaningful connections between people, architectural space, and the workplace. It indicates that design can synergize with responsive technology and material to leverage new power for future workplace interaction design. We have created a spatial prototype paired with a series of simulations that act as a workplace critique. The project employs a responsive ceiling that combines a computational pattern with temperature responsive bi-materials, which are coated thermo-chromically, and electrically programmed with micro-controllers. This is then connected to a computer code that makes readings based upon human interaction with wearable technology. The methodology categorizes the simulation results into "Aroused States" and "Calm States". As the computational patterns and colors change, we are made aware of the relationships between space, technology, and the human sensorium. This conversation brings insight into how we can design more effectively for workplace interactions.

The project and its simulations examine ongoing contextual awareness and demonstrate how a workplace can be designed more intelligently depending on the input and biological data of the user. It examines the role of the human brain in the interaction between the user, technology, and the environment. The project makes use of technology to capture the body's information using biosensors, creating a synergistic relationship between this technology and an optical surface material (thermochromic paint and a bi-material that changes rotation in the Z-axis in response to electrical input), which is connected to a piece of interactive, intelligent furniture that presents this information back to the users in real time. As users actively engage with the workspace and ceiling, moods, emotions, and senses of well-being are brought to the forefront of the experience and user awareness.

This information is displayed not virtually but physically, to stimulate non-verbal communication between people as a means to communicate more effectively, understand one another, and promote new experiences and meaningful encounters. Understanding is the key to empathy, and this results in better communication between people in the workplace.

# **2 Forms of Communication: Verbal, Non-verbal, Sensorium**

Neuroscientists are continuously exploring the connection between the brain and the sensorial channels through which we understand and perceive the world. For example, touching something with a texture can change a person's mood and influence the decisions a person makes.<sup>3</sup> Touch also seems to be very important to a human's well-being and has been found to convey compassion from one human to another. But perhaps the most interesting topic related to this theme is how the human sensorium can intensify emotional connections. To move further into the understanding of the sensorium, we can look at light quality and shadow. Technology has introduced both positive and negative effects into our lives. On one hand, it has had a positive impact in keeping us more informed and connected. On the other hand, it has created new disorders and diseases, such as internet depression, FOMO (fear of missing out), and diminished comprehension and retention, affecting people's morality, among other things4 (Fig. 1).

Our human anatomy is becoming more and more in sync with technological devices. We have a multitude of options to help us through our daily lives in the workplace and beyond that offer conveniences and make us almost constantly connected to technology in one form or another. According to Andy Clark, a professor of philosophy and Chair in Logic and Metaphysics at the University of Edinburgh, *"we are already cyborgs" or "human-machine hybrids",* which means the physical merging of flesh and electronic circuitry, without the need for wires, surgery, or bodily alterations. He argues that it is arbitrary to say that the mind is contained only within the boundaries of our brain, because it has always collaborated with external, nonbiological sources to solve the problems of survival and reproduction in humans. He states that, "*with the advent of texts, PCs, coevolving software agents, and user adaptive home and office devices, our mind is just less and less in the head. In other words, the separation between the mind, the body, and the environment are seen as an unprincipled distinction.*"5 Following this line of thought, if we are already cyborgs and technology is increasingly contributing to that, we have to enjoy the positive aspects and place boundaries on, reduce, or eliminate the negative ones. Moreover, if these opposites enable us to enhance experience and add value to a situation, it is interesting to design a product that builds on that, through technology, by filtering the received input data and balancing the output outcomes (Fig. 2).

**Fig. 1.** Simulations of ceiling surface changes with multi-person non-verbal interaction using Houdini

**Fig. 2.** Operational diagrams of responsive material behaviour including process of mood application, wearable sensors, algorithm, human interaction, and subsequent physical material change.

# **3 Materials and Methods**

A process that uses computational logic and new media as real-time predictive or interactive tools can be a valuable method for designers interested in workplace design. This process of simulation also aids in creating a more robust prototype. Using digital tools such as micro-controllers, Rhino, Grasshopper, and Houdini, paired with physical testing, it interrogates a workflow between computational design, production methods and material logics. Through this process, it manifests a methodology that categorizes the test results into two separate morphological conditions, "Aroused States" and "Calm States", as diagnosed by the wearable Upmood sensors. As users engage with the space, the computational patterns and colors change, and we become aware of the relationships between space, technology, and the sensorium.

The results explore a range of scenarios, such as one-person and group interactions, which produce insight into how the project could work at a larger scale in the built environment. As the simulations for the project began to expand, it became necessary to break them down into a series of categories that demonstrated how the results could impact the overall design and how the different simulations could affect our decision-making process for future projects. In doing this, we were interested in using the computer to assist in making decisions relative to the physical prototype. Trying to simulate a person's state of being (excited, calm, etc.) is difficult to do with a physical model, but much easier and more accessible with a series of digital models. To explore this further, we used a Houdini model to generate patterns responsive to color that could be simulated based upon information gained from the Upmood algorithm. It was then paired with the Rhino model to explore the capacity for responsive geometry across the different material states.
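
A minimal sketch of this categorization step is given below; the reading fields, thresholds, and output parameters are hypothetical placeholders (the actual Upmood data format and the Houdini/Rhino coupling are not reproduced here):

```python
# Hedged sketch: classify a wearable reading as "aroused" or "calm" and map the
# state to target values driving the thermochromic/bi-material ceiling simulation.
def classify(reading):
    """reading: e.g. {'stress': 0-100, 'bpm': beats per minute} (hypothetical fields)."""
    aroused = reading["stress"] > 60 or reading["bpm"] > 95
    return "aroused" if aroused else "calm"

def ceiling_targets(state):
    # illustrative mapping: aroused states push the surface harder towards the
    # 30 C thermochromic activation point and a more turbulent pattern
    if state == "aroused":
        return {"heater_duty": 0.9, "pattern": "turbulent"}
    return {"heater_duty": 0.4, "pattern": "smooth"}

print(ceiling_targets(classify({"stress": 72, "bpm": 88})))
```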

Designing this method was something we had not previously attempted, but it proved beneficial because the final version of the prototype was very expensive and time-consuming to construct, and therefore required simulation to support decision-making. It required an intense amount of electronic manipulation and conversion of algorithmic data. The digital workflow allowed us not only to visualize before building, but also to decide how we wanted this expensive prototype to be built, which differed from a traditional furniture prototype. The interface was likewise difficult to test physically during the design process. For this project, we felt that in some ways designing the "process" was more important than designing the finished product.

Along with the simulations, we started testing several different versions of bi-materials, including combinations of metals paired with natural materials, such as metal and paper together. The bi-metal material explored here is composed of two separate metals joined in layers that vary in thickness and material properties. Bi-materials are useful because they convert a temperature change into a recognizable form displacement and then, because of the differing material properties, revert to the initial position once the temperature drops again. The embodied energy of the materials becomes visible as energy is released and absorbed; this displacement is evident and therefore very useful when working with temperatures and electronic inputs.
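As a rough illustration of this behaviour, the curvature of an idealized two-layer strip under a uniform temperature change can be estimated with Timoshenko's classic bimetal relation. The sketch below assumes equal layer thicknesses and stiffnesses, and the numbers are illustrative assumptions rather than measurements from this project (the final metal and paper laminate would behave differently):

```python
# Rough estimate of how a two-layer (bimetal) strip deflects with temperature.
# Simplified Timoshenko relation for equal layer thickness and stiffness:
#   curvature k = 3 * (alpha_2 - alpha_1) * dT / (2 * h)
# where h is the total strip thickness. The tip deflection of a cantilevered
# strip of length L is then approximately d = k * L**2 / 2.
# All numbers below are illustrative assumptions, not project measurements.

def bimetal_tip_deflection(alpha_hi, alpha_lo, delta_t, thickness, length):
    curvature = 3.0 * (alpha_hi - alpha_lo) * delta_t / (2.0 * thickness)
    return curvature * length ** 2 / 2.0

# Example: aluminium (23e-6 /K) over steel (12e-6 /K), 0.5 mm total thickness,
# a 100 mm long leaf, heated 20 K above ambient.
deflection = bimetal_tip_deflection(23e-6, 12e-6, delta_t=20.0,
                                    thickness=0.5e-3, length=0.1)
print(f"Estimated tip deflection: {deflection * 1000:.1f} mm")  # ~3.3 mm
```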

It became apparent that certain bi-metal materials were not going to work for this particular project because we were unable to laminate the materials together in a consistent way. The two coefficients of expansion of the bimetals were attractive at first due to the obvious visual response the material provided: bending when heated and returning to its original state when cooled. In this respect the bi-metals behaved as expected; however, they did not fall within the temperature ranges around which we were designing the heating interface. In the end, the thickness of the various bi-metals we explored proved problematic, because the required thickness was not achievable at the scale of the ceiling installation in this particular setting.

It became clear that extremely thin metal combined with a paper-based product offered the most performative value at this scale and was consistently the most successful combination. This was a significant breakthrough: the metal provided enough rigidity, while the paper-based material allowed a more direct relationship between the thermochromic paint and the temperature differences. The resiliency of the material palette, both in terms of the thermochromic coatings and the responsiveness of the bi-material, was a critical factor. Our studies indicated that 500 W of heating power was enough to achieve the desired outcome, but the speed and rate of thermal expansion needed additional ranges to portray the full range of options the installation offered for human interaction. At 500 W the bi-material starts to move, but at a slower rate; the higher the temperature, the more the metal deforms and the more responsive it becomes. The activation temperature of the thermochromic paint is 30 °C (Fig. 3).

We coated the materials with a Leuco water-based dye, a finish that allows colors to change with temperature variation, a phenomenon known as thermochromism. The process is reversible up to 60 °C and irreversible beyond that point. It allows shades to be mixed between paints of the same turning point, although in this case no mixing was used. The color intensity depends on the designer's needs, but the weight of the covering (paint coat) should be 2 to 3 times that of normal paint. The dyes are rarely applied to materials directly; they usually come as microcapsules with the mixture sealed inside. The dyes most commonly used are spirolactones, fluorans, spiropyrans, and fulgides. The acids include bisphenol A, parabens, 1,2,3-triazole derivatives, and 4-hydroxycoumarin; they act as proton donors, shifting the dye molecule between its leuco form and its protonated colored form, and stronger acids make the change irreversible. We found that leuco dyes are available for temperature ranges between about −5 °C (23 °F) and 60 °C (140 °F).

**Fig. 3.** Material tests of individual units based on temperature and heat-responsive inks. The decision was made to utilize the marked morphologies, which bent most successfully with heat. Testing indicated 40 s for the leaves to bend and 2 min for them to return to the initial position.

# **4 Responsive Technologies**

To monitor and evaluate how people perceive certain criteria, users wore emotion-sensing bracelets when they visited the project. In order to predict the potential overlap between the architectural and the sensorial, we worked with Upmood technologies to begin understanding how to measure and estimate the feelings people have in response to a given environment. The bracelets collect biodata from the user and resolve it into 11 different emotional states: calm, pleasant, zen, unpleasant, happy, sad, excited, anxious, confused, challenged, and tense. This data was continuously fed into an app that revealed the different states back to the user. The evaluations and the use of these overlapping technologies acted as a way to gain insight into a more profound human experience. Through this process, the project addressed user insight relative to emotional patterns and management (Fig. 4).

Homeostasis, the tendency towards a relatively stable equilibrium between interdependent elements, especially as maintained by physiological processes, is a big part

**Fig. 4.** Plan drawings of the surface incorporating color change based on temperature

of human survival and is relevant to the theories of Antonio Damasio. In the paper *The Nature of Feelings: Evolutionary and Neurobiological Origins*, Damasio writes, "Survival depends on a homeostatic range", and "feelings and experiences facilitate the learning of the conditions for homeostatic imbalances plus the anticipation of conditions. Feelings are mental experiences that accompany a change in the body state." He goes on to write that "*external changes - displayed in the exteroceptive maps of vision or hearing (sensorium) - are perceived, but largely not felt. They can trigger drives or emotions, causing a change in the body state, which is subsequently felt.*" Human survival depends on homeostasis, the regulation of the body's self-repair and defense. The body can regulate itself without the person having a feeling, or "conscious experience". However, when the person does have a feeling, and is therefore aware of it, this facilitates learning about the change in body state, enabling better prediction of future situations and increasing behavioral flexibility. With these concepts in mind, this installation tries to increase felt experiences, using different stimuli to heighten the senses while also offering a record of what a user felt, to foster potential self-awareness.

Wearing Upmood bracelets allowed us to monitor different emotional states in real time as we interacted with the project. We found that the technology can be sometimes accurate and sometimes surprising, indicating that our heart rates change in different ways and can be highly situational. It was an interesting part of our experiment because it allowed us to interact with a user in a way not normally accessed in architectural projects. The findings from the Upmood wearables indicate that stable peaks, highs, and lows in experience are crucial to accuracy, and that results can vary from person to person. There can also be discrepancies between what users "thought" they were feeling and what the technology actually indicated. Fluctuation in experience will also affect the results.

A relay module and an LCD display are connected to a microprocessor with WiFi, which is in turn connected to a heating element, the surface hardware, and other auxiliary components. To receive the information, the ESP32 microprocessor from Espressif must be connected to the internet through a WiFi network, which is used to communicate with the Upmood server and to receive firmware updates over the air (OTA). Once emotions are received from the server by the microprocessor, they are shown on the display, and the heating element is activated or deactivated by the relay module. For testing and demonstration purposes, the surface also has a manual override mode, in which the heating element can be activated or deactivated regardless of the information received from the server. The assembly described will be referred to as MoodSpace (Fig. 5).

**Fig. 5.** Upmood bracelet extracted from https://techmash.co.uk/2018/08/29/upmood-wearable/#jp-carousel-99635. Testing setup, putting technology into play.

MoodSpace changes its behavior according to the emotions captured by the Upmood bracelet, reacting to two possible states: Calm and Aroused. In order for MoodSpace to react to emotions, the Upmood bracelet captures the heartbeat of the person wearing it; the Upmood App then receives this data and sends it to the Upmood Server, where it is analyzed by the algorithm and translated into emotions. In turn, MoodSpace communicates with the Upmood Server through a REST API (see reference images and diagrams), which returns the information on the linked people and their emotional state in JSON format. The entire process happens automatically: as long as MoodSpace is connected to the internet and the App is receiving data from the bracelet and sending it to the server, the surface will react to the emotions of whoever is wearing it. The two states, "Calm" and "Aroused", which are responsible for activating or deactivating the surface, are obtained by grouping the various emotional states captured by the bracelet. The emotional states are as follows:

**Calm:** Calm; Pleasant; Zen; Sad.

**Aroused:** Happy; Excited; Unpleasant; Anxious; Confused; Challenged; Tense (Fig. 6).

Therefore, when the emotion captured by the bracelet is defined as "Zen", the project enters the state of "Calm", and so on. Although a person's emotions change quickly depending on the situation, from this segmentation by two states, it was possible to have a uniform and consistent response between all emotions within both states.
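The control loop can be sketched as follows. This is an illustrative Python sketch, not the device firmware: the endpoint URL, JSON field names, and the decision that "Aroused" activates the heating are assumptions, since the actual Upmood REST API and relay wiring are not documented here.

```python
# Illustrative sketch of the MoodSpace loop: poll an emotion server, group the
# reported emotion into "calm" / "aroused", and switch the heating relay.
# Endpoint URL, JSON fields, and the aroused-activates-heating rule are
# assumptions for illustration only.
import time
import requests

CALM = {"calm", "pleasant", "zen", "sad"}
AROUSED = {"happy", "excited", "unpleasant", "anxious",
           "confused", "challenged", "tense"}

API_URL = "https://example-upmood-server/api/linked_users"  # hypothetical endpoint

def classify(emotion: str) -> str:
    """Group a raw emotion label into one of the two surface states."""
    if emotion.lower() in AROUSED:
        return "aroused"
    if emotion.lower() in CALM:
        return "calm"
    return "calm"  # unknown labels default to the passive state

def set_heating(on: bool) -> None:
    # On the real device this would drive the relay module (e.g. a GPIO pin
    # on the ESP32); here we simply log the state change.
    print("Heating element", "ON" if on else "OFF")

if __name__ == "__main__":
    while True:
        data = requests.get(API_URL, timeout=10).json()  # assumed JSON response
        emotion = data.get("emotion", "calm")             # assumed field name
        set_heating(classify(emotion) == "aroused")
        time.sleep(5)                                     # polling interval
```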

# **5 Conclusions**

In this project, we explored the potential of workplace design relative to digital technologies and material interactions in the field of bi-material, thermal properties, and the human sensorium. We designed a small, spatial workspace project which cultivated a sensory experience between users with a goal of facilitating better non-verbal communication between people in the workplace. Throughout the process, we asked questions

**Fig. 6.** Operational diagram illustrating process of QR code, wearable, sensors, algorithm, server, human interaction.

about the capacity to design with technology and its subsequent impact on human beings. The work started by employing typical parametric and computational software, considering the potentials between the digital and the real, and incorporated this potential by embracing material manipulation and response to temperature as a way to interact with architectural space and people. The purpose was to engage people in sensory experiences that prompt us to re-perceive our physical world. The outcomes imply that, through research and design, stronger sensorial experiences can be used to increase awareness and perceptibility and to create new design conversations. The paper is meant to document and critique the process.

What we have learned offers a great deal of input relative to humanizing the design of everyday objects to allow a more heightened experience and relationship between human and object. Although there is much work to be done and this paper only documents one sample series of prototypes, the use of computer simulations allowed us to begin to predict and forecast how our design could respond to specific levels of human interaction. The use of bi-materials allowed us to create physical and visual responses that relate directly to human interaction, and the role of sensors and wearable technology allowed us to further dial in and program the project in a way that illustrates human connectivity and interaction in real time.

This project takes a sensory approach to emphasize the positive aspects of technology and reduce the negative ones. It enhances the sensorial channels to provide users with new experiences and to build emotional connections with one another. Furthermore, through non-verbal communication, the surface helps people to be open, honest, and integrated, stimulating real and meaningful rapport. In this way, one can understand the other better and communicate more effectively.

# **References**


**Open Access.** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Diversifying Emotional Experience by Layered Interfaces in Affective Interactive Installations**

Sijia Gu1(B) , Yue Lu1, Yuwei Kong1, Jiale Huang1, and Weishun Xu2(B)

<sup>1</sup> Zhejiang University, Hangzhou, Zhejiang, China 3170103514@zju.edu.cn <sup>2</sup> Zhejiang University, Office 311, Yueya Building, No. 866 Yuhangtang Road, Hangzhou, Zhejiang, China xuweishun@zju.edu.cn

**Abstract.** This paper aims to improve users' experience in affective interactive installations through the diversification of interfaces. With logically organized hierarchical experience, diverse interfaces with emotion data as inputs enhance users' emotional interaction to be more natural and immersive. By using facial affect detection technology, an installation with diverse input interfaces was tested with an organic formal setting. Mechanical flowers and support structure based on the organic form were deployed as its physical output for a multitude of sensorial dimensions. With actions of the mechanical flowers, such as blooming, closing, rotating, glowing and blinking, a layered experiential sequence was created and the atmosphere of the installation was evaluated to be more engaging. In this way, the layered complexity of information was transferred to users' immersive emotional experience. We believe that the practices in this work can contribute to deeper emotional engagement with users and add new layers of emotional interactivity.

**Keywords:** Affective interactive installations · Diverse interfaces · Experience

# **1 Background**

Affective interactive installations have become an exploratory field with the development of emotion recognition technology. With technologies like EEG, facial affect detection, and body gesture recognition, affective interactive installations are widely used in many fields. In the media field, detected emotions can be reflected in media content in real time (Altieri et al. 2019). In the healthcare field, affective interactive installations can be used to treat children with ADHD (Adina et al. 2020). In the rescue field, they can be essential for appearance-constrained robots used in search and rescue (Bethel and Murphy 2010). Affective interactive installations are used to meet human spiritual needs and explore the future (Bialoskorski et al. 2010), leading to a more colorful world.

The boom of affective interactions has in turn called for greater diversification of interfaces. For example, the installation "Mood Swings" uses luminous orbs to present changes in users' moods (Bialoskorski et al. 2010). In another instance, detected moods are reflected on a digital board (Altieri et al. 2019). Other forms of interfaces, such as an interactive landscape with oriented screens (Herruzo and Pashenkov 2020), a virtual model of Kenji Miyazawa electronic portraits interacting with users, and smart textiles combined with an affective interaction installation (Jiang et al. 2020), show the potential of interfaces.

Affective interactive installations with unimodal interfaces have therefore proactively contributed to exploring the potential of interactivity. However, a unimodal interface limits the composition of multi-layered interaction, which plays an important role in interactive installations. The research of L. Mignonneau and C. Sommerer indicates that multi-layered interaction can raise users' engagement with installations, which develops through the interactive process (Mignonneau and Sommerer 2005).

Based on their research, we believed that layered interaction provides users with a smooth and immersive experience in affective interaction installations by setting up a structured interactive system, which calls for diverse interfaces to provide richer hierarchical interaction.

One of the convincing models for structural emotional engagement between human and computer is the theory put forward by John McCarthy and Peter Wright (McCarthy and Wright 2004). In their theory, a good setting of "four threads" helps users get better experience. The four threads, named as "compositional thread", "sensual thread", "emotional thread" and "spatio-temporal thread", are respectively related to the composition of experiential hierarchies, user's preference, user's impression of the installation, and atmosphere of the installation. Their theory provides a potentially viable model of building layered interactive system and diverse interfaces to improve users' experience in an affective interactive installation.

# **2 Proposal**

To provide an immersive experience for users in affective interactive installations, the diversification of interfaces could be set to complete the "four threads", which can be achieved through multi-dimensional input data and output media. At the same time, elements of various inputs and outputs should be organized as an organic system, thus composing a layered experiential sequence.

Among the "four threads", the "compositional thread" can be constructed by advancing the depth of interaction. Constructing an experiential sequence leads to layered experience hierarchies, which encourages users to proactively reflect the clues in the process and spontaneously interact with installations. The "spatio-temporal thread" can be strengthened by mobilizing an engaging atmosphere and creating a mentally allenvoloping environment that redefines users' experience of the world. The experience for "sensual thread" and "emotional thread" can be enriched by creating an engaging atmosphere through the architecture of physical space.

Both composition of experience hierarchies and atmosphere call for diverse interfaces of physical and computational setup. With diverse actions of interfaces, layered experiential hierarchies are possible, which creates the narrative of the experiential sequence. Moreover, multiplicity of physical interfaces leads to richer actions of the installation, which helps to create an engaging atmosphere in a physical-digital space, so that users are able to interact with the installation in a more immersive and more playful way.

The diversification of interface derives from two aspects. One is the increased dimensions of input data, and the other is the diversification of output media. For input data, enriching the sources increases its diversity. Besides human emotions, human behaviors could be a source of data. Performing multi-dimensional analysis to decompose existing data is also a viable way. For output media, adding diverse actuators to the installation and increasing the possibility of their actions lead to dynamic behaviors of the installation, thus making the experience richer.

The various input data and output media should be organized hierarchically. To integrate the various elements into the reaction system, a mechanism is proposed as shown in Fig. 1.

**Fig. 1.** An illustration of reaction mechanism for our installation

# **3 Reaction Mechanism**

Following the proposal, we created an affective interactive installation that provides an immersive experiential sequence of emotional interaction for users by deploying diverse interfaces, for which a reaction mechanism was set (Fig. 1).

To create smooth and dynamic behaviors of the installation, which are meant to take on different forms in response to different mood states of users, an organic form was chosen for the installation (Fig. 2).

Diverse input data and output media are deployed in the installation. Multi-dimensional data are collected by sensors, analyzed by an emotion recognition system, and directed to actuators as outputs, which perform a variety of actions in response to different human behaviors. The actions are integrated into a logically organized hierarchical experience.

A typical interaction experience process includes four parts according to the depth of interaction.

**Fig. 2.** The form generation process of prototype


# **4 Prototype**

### **4.1 Integrated Physical-Digital Setup**

According to the proposed mechanism, we aimed to create an affective installation that provides an immersive experiential sequence of emotional interaction for users. In order to provide a mentally all-enveloping environment and highlight the interactivity of the installation, the overall structure should create an immersive atmosphere while also becoming one of the emotional inputs for users. Thus, a form in which emerging and dispersing spheres are distributed in space was created (Fig. 3).

The overall installation is composed of four parts: the support structure, the transparent glass bodies (M1), the frosted glass bodies (M2), and the mechanical flowers. Among the flowers, those placed directly on the structure (M3) are interactive. Their positions are shown in Fig. 3.

The support structure serves as a carrier for the other components. Instead of using a solid model, we chose layered polyethylene plates in different shapes to increase its visual transparency, which allows users to perceive the dynamic behaviors of the whole installation (Fig. 4). M1 are hanging or supported transparent glass bodies, which contain mechanical flowers used to express the other six emotions. M2 are suspended or supported frosted glass bodies, in which mechanical flowers that serve as light fixtures are contained. M3 are interactive mechanical flowers placed directly on the structure, which are used to express one's dominant emotion. Through the arrangement of the glass bodies, the interactive mechanical flowers are distinguished from the others.

**Fig. 3.** Schematic diagram of prototype composition

**Fig. 4.** The scene of interaction

Seven mechanical flowers are installed as an interactive group, which includes one mechanical flower presenting the dominant emotion and six mechanical flowers presenting the other six emotions. Five sets of mechanical flowers are placed to simultaneously provide interactive experience for multiple people.

### **4.2 The Design of Mechanical Flower**

In order to create an engaging atmosphere through the decoration of the physical space, the actuators of the installation are designed as mechanical flowers (Figs. 5 and 6).

There are two types of mechanical flowers: one is positively placed and the other is inversely placed. Both types are composed of a control shaft, a circular ring, petals, and a component box containing a servo and a stepper motor.

The petals consist of outer petals and inner petals. The outer petals, made of translucent silica gel, serve as the main image of the flower, while the inner petals, made of wound aluminum wire, can be actuated by the circular ring, driving the outer petals to open or close. An RGB LED is placed in the center of the petals.

The control shaft contains a main shaft and a circular ring. Fixed on the component box, the main shaft supports the double-layer petals, a thread (or strip), and an RGB LED. The servo in the component box controls the circular ring. In addition, the stepper motor inside the component box drives the rotation of the main shaft, thereby controlling the overall rotation of the mechanical flower (Fig. 7). The component box is designed to hide inside the support structure, leaving only the main shaft and petals visible.

When the flower is positively placed, the ring is on the outside of the petals. The ring is driven down by the strip connected to the servo, and the petals open under their own weight. When the flower is inverted, the ring is inside the petals; when the ring is pulled up by the wire, the inner petals are opened by the ring.

**Fig. 5.** Construction of positively placed flowers

**Fig. 6.** Construction of inverted flower

**Fig. 7.** The mechanism of rotating and opening of positively placed and inverted flowers

### **4.3 The Process of Emotional Interaction**

We used a camera and infrared sensors to capture data, programs written in the Arduino IDE and PyCharm as the control system, and servos and stepper motors connected to the mechanical flowers as actuators. In this way, we set up a series of actions for the mechanical flowers and finally integrated the actions into a layered experiential sequence (Fig. 8).

a. *The installation is awakened.* When a user approaches an interactive mechanical flower and the infrared sensors detect that the distance between the user and the flower is less than 1 m, the program in the Arduino IDE enables the servo on the mechanical flowers that present the user's emotion to rotate. A ring with a

**Fig. 8.** Illustration of the interaction process between mechanical flowers and users

strip or thread is connected to the servo, so the servo can drive the ring on the flower to lift or lower, controlling the opening or closing of the petals.

b. *The user's emotions are reflected.* When the petals open, the camera is activated and captures an image of the user's face. The image is transmitted to an emotion recognition program in PyCharm, based on facial recognition and neural network training, which analyzes the characteristics of the image and outputs the percentages of seven representative emotions in the image (angry, disgusted, scared, happy, sad, surprised, neutral). The percentages are sent to the Arduino IDE, where the program controls the mechanical flowers' actions (the hand-off is sketched below). The emotion with the largest percentage is selected as the dominant emotion, which is presented by the interactive mechanical flowers placed on the structure, while the data of the other emotions are expressed by the mechanical flowers placed in the transparent glass bodies.
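A minimal sketch of this hand-off is given below. It is illustrative only: the serial port name and the message format are assumptions standing in for the authors' actual PyCharm program and Arduino parser.

```python
# Illustrative sketch: pick the dominant emotion from the recognizer's output
# percentages and forward them to the Arduino over a serial link.
# Port name and message format are assumptions for illustration.
import serial  # pyserial

EMOTIONS = ["angry", "disgusted", "scared", "happy", "sad", "surprised", "neutral"]

def send_emotions(percentages, port="/dev/ttyUSB0", baud=9600):
    """percentages: dict mapping each of the seven emotions to a 0-100 value."""
    dominant = max(percentages, key=percentages.get)
    # Encode as a simple comma-separated line an Arduino sketch could parse,
    # e.g. "happy,3,2,4,61,8,12,10\n"
    values = ",".join(str(int(percentages[e])) for e in EMOTIONS)
    line = f"{dominant},{values}\n"
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(line.encode("ascii"))
    return dominant

# Example output from the facial-emotion network (values are illustrative):
example = {"angry": 3, "disgusted": 2, "scared": 4, "happy": 61,
           "sad": 8, "surprised": 12, "neutral": 10}
print("Dominant emotion:", send_emotions(example))
```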

The actions of the mechanical flowers in this experience phase include glowing, rotating, and blinking. The glowing action is applied to all seven mechanical flowers. Corresponding to the seven emotions, we set seven colors for the RGB LEDs placed in the mechanical flowers. When the Arduino receives the emotional data, each mechanical flower presents its corresponding color (Fig. 9). The correspondence between emotion and color is as follows (Table 1).


**Table 1.** The relationship between different emotions and colors

**Fig. 9.** Colors of mechanical flower physical model (R, G, B were selected)

In addition, we set the brightness of the LEDs according to the percentage of emotion. The higher the percentage, the greater the brightness.

Rotating and blinking are applied only to the mechanical flower that presents the dominant emotion. After the Arduino receives the emotional data, it outputs a command to make the stepper motor rotate. The stepper motor is fixed to the flower; when it rotates, it drives the flower to rotate at a speed controlled by the percentage of the dominant emotion. The blinking frequency is likewise controlled by the percentage of the dominant emotion. The relationship among the dominant emotion's percentage, rotating speed, and blinking frequency is shown in Table 2; a simple linear approximation is sketched after the table.

**Table 2.** Relationship between emotions' percentages, rotating speed and blinking frequency
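Since the cells of Table 2 are not reproduced here, the sketch below simply assumes a linear mapping from the dominant emotion's percentage to LED brightness, rotation speed, and blinking frequency; the numeric ranges are hypothetical placeholders, not the values the authors used.

```python
# Illustrative mapping from an emotion percentage (0-100) to actuator values.
# The linear ranges below are assumptions standing in for Table 2.

def scale(pct, lo, hi):
    """Linearly map a 0-100 percentage onto the range [lo, hi]."""
    pct = max(0.0, min(100.0, pct))
    return lo + (hi - lo) * pct / 100.0

def actuator_commands(dominant_pct):
    return {
        "led_brightness": int(scale(dominant_pct, 0, 255)),    # PWM duty 0-255
        "rotation_rpm": round(scale(dominant_pct, 2, 10), 1),  # hypothetical rpm range
        "blink_hz": round(scale(dominant_pct, 0.5, 2.0), 2),   # hypothetical Hz range
    }

print(actuator_commands(61))  # e.g. brightness 155, ~6.9 rpm, ~1.4 Hz
```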



# **5 Experiment and Results**

In order to verify whether diversified inputs and outputs indeed help to improve users' experience, we conducted an evaluation experiment in VR with a focus group of 12 architecture students aged between 20 and 22, who experienced the installation with two different interaction settings. In the first setting, the prototype recognized the types of emotions and turned them into different colors of the RGB LEDs. In the second setting, we also detected people's locations, so the petals would open or close, and we detected both emotion types and percentages; correspondingly, the LEDs changed color and the mechanical flowers changed their speed of rotation.

After 5 min of interacting with the installation, the subjects were interviewed about which experience was more immersive and why. Ten of the twelve agreed that "the progressive experience is more attractive", that "the rich feedback makes them more immersed in it", and that "the diverse expressions of flowers make them more interested in changing facial expressions to observe changes." Through diverse interfaces, the installation became more impressive in the "sensual thread", "emotional thread", and "spatio-temporal thread", and the arrangement of the experiential process made the "compositional thread" richer.

In addition, we noticed in the experiment that, after a series of interactions, 75% of the subjects' facial expressions gradually tended towards "happy", achieving two-way feedback and influence between users and installation.

### **6 Conclusion and Discussion**

In this paper, we aimed to improve users' experience in affective interactive installations with diverse interfaces. Based on our proposal and the theory of the "four threads", we created an installation with diverse interfaces by diversifying the detected emotional data and directing different input data to diverse output media. The installation was built with an organic form, and mechanical flowers that could interact with users were integrated into it. With the actions of the flowers, layered experience hierarchies were created and the atmosphere of the installation became engaging. In this way, users gained an immersive interactive experience of emotion.

To explore the feasibility of our proposal, an experiment was conducted. The positive feedback from the subjects on the rich experience shows that diversified inputs and outputs can indeed improve users' experience. We hope that this conclusion can be applied to future installation design, inspiring designers to introduce diversified inputs and outputs into affective interactive installations, thereby providing a more engaging and immersive experience for users.

Additionally, the two-way feedback and influence can be widely used in the process of guiding the user's emotions to achieve a better installation experience. We believe that this subtle guidance will also inspire future design and application of affective interactive installations.

# **References**


McCarthy, J., Wright, P.: Technology as experience. Interactions **11**(5), 42–43 (2004)

Mignonneau, L., Sommerer, C.: Designing emotional, metaphoric, natural and intuitive interfaces for interactive art, edutainment and mobile communications. Comput. Graph. **29**(6), 837–851 (2005)

**Open Access.** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Measuring Human Perception of Biophilically-Driven Design with Facial Micro-expressions Analysis and EEG Biosensor**

Andrea Macruz1(B) , Ernesto Bueno2, Gustavo G. Palma3, Jaime Vega4, Ricardo A. Palmieri5, and Tan Chen Wu6

<sup>1</sup> Tongji University, Shanghai, China

andrea.macruz@uol.com.br

<sup>2</sup> Universidade Presbiteriana Mackenzie, Sao Paulo, Brazil

<sup>3</sup> Centro Universitário Belas Artes, Sao Paulo, Brazil


<sup>6</sup> Instituto do Coração – InCor – HCFMUSP, Sao Paulo, Brazil

**Abstract.** This paper investigates the role technology and neuroscience play in aiding the design process and making meaningful connections between people and nature. Using two workshops as a vehicle, the team introduced advanced technologies and Quantified Self practices that allowed people to use neural data and pattern recognition as feedback for the design process. The objective is to find clues to natural elements of human perception that can inform the design to meet goals for well-being. A pattern network of geometric shapes that achieve a higher level of monitored meditation and point toward a positive emotional valence is proposed. By referencing biological forms found in nature, the workshops utilized an algorithmic process that explored how nature can influence architecture. To measure the impact, the team used FaceOSC for capture and an Artificial Neural Network for micro-expression recognition, together with a MindWave sensor manufactured by NeuroSky, which documented the human response further. The methodology allowed us to establish a boundary logic, ranking geometric shapes that suggested positive emotions and a higher level of monitored meditation. The results pointed us to a deeper level of understanding relative to geometric shapes in design. They indicate a new way to predict how well-being factors can clarify and rationalize a more intuitive design process inspired by nature.

**Keywords:** Algorithmic design · Neuroscience · AI · Biosensor · Biophilia

# **1 Introduction**

For some two million years, nature guided our behavior in outdoor environments. Over the past 12,000 years, we started producing food, changing our surroundings, and building shelters - indoor environments [11]. Currently, however, we spend ninety percent of our time in indoor environments [22], so there is a discrepancy between the types of spaces we were used to experiencing in the past and the ones we experience now. What is the impact of that on our bodies and minds? Suppose the environment is a source of information that guides our behavior, such as temperature, light, and seasonal changes. What kind of information are we managing to extract from the spaces that we are creating? With contemporary advances in neuroscience, advanced technologies, and Quantified Self (QS) practices, is it possible to further understand how our current interactions with the built environment might be affected by the natural setting?

This paper investigates the role technology and neuroscience play in aiding the design process and making meaningful connections between people and nature. As of the date of this paper, two workshops were used as a vehicle, totaling 47 participants, 57% men and 43% women. The participants, undergraduate and graduate students of architecture and design, had an average age of 23 years, with a minimum of 19 and a maximum of 42. During the workshops, the team introduced advanced technologies and QS practices that allowed the use of neural data and pattern recognition as feedback for the design process. The objective is to find clues to the natural elements of human perception that can inform design to meet goals for well-being. The means developed toward this objective include proposing a pattern network of geometric shapes that achieve a higher level of monitored meditation [10] and point toward a positive emotional valence [18]. By referencing biological forms found in nature, the workshops utilized an algorithmic process that explored how nature can influence architecture and design.

To measure the impact, the team used interfaces designed for this purpose based on low-cost technology and open-source software, such as FaceOSC and Processing, which documented the human response further. The participants started by creating geometries inspired by nature through algorithmic modeling, which allowed specific parameters to be enforced (further details are presented from Sect. 3.1 onwards). The projects most aligned with biophilic design were compiled into a seven-minute presentation, which each participant watched while recording a video-selfie. An inter-individual analysis was made to classify the emotional valences provoked by each instance. The participants' facial expressions were then analyzed and ranked using FaceOSC and an Artificial Neural Network (ANN) for gesture recognition [5]. This process was trained to correlate with micro-expressions using the Geneva Affective Picture Database [7]. In parallel, an intra-individual analysis was used, and the designs were tested with a commercial electroencephalography (EEG) sensor. The same designs were then observed one more time to estimate brain modulations in the observer's attention and meditation levels (AT, MT). These datasets were used to redesign the initial drawings based upon the degree of positive emotional valence.

Upon review of the drawings as evidence, the low-cost interface (FaceOSC and Max MSP) and MindWave (explained in Sects. 3.2 and 3.3 of this paper) allowed us to establish a boundary logic, ranking geometric shapes that pointed to positive emotions and a higher level of monitored meditation. The results pointed us to a deeper level of understanding relative to geometric shapes in design. They indicate a new way to predict how well-being factors can clarify and rationalize a more intuitive design process inspired by nature. This investigation seeks a better qualitative understanding of the fundamental preferences we all share, densely overlaid with individual and cultural interactions that add endless complexities and variations. It does not intend to be universally applicable; these are initial steps towards developing designs that align with our preferences.

# **2 Biophilic Design Framework**

The term biophilia was coined in 1964 by the psychologist and philosopher Erich Fromm, meaning "love of life" [8]. In 1984, it was popularized by biologist Edward O. Wilson [25]. The biophilia hypothesis suggests that humans have an innate tendency to seek connections with the natural world. In that same year, Roger Ulrich released a landmark paper establishing the healing power of nature, in which he compares the recovery rates of patients who do and do not have a view of nature [24]. In 1986, Gordon Orians and Judith Heerwagen posited the Savanna Hypothesis, arguing that we are genetically predisposed to prefer particular types of natural scenery, such as the savanna, due to our evolution in East Africa during the Pleistocene [16]. This hypothesis was a turning point for biophilia and much of the recent research in that area. Evolutionary psychology and related research suggest that human beings prefer colors that refer to the savanna, especially colors found in healthy vegetation such as blue, green, and earth tones. Colors commonly found in healthy natural landscapes indicate clean water, nutrient-rich vegetation, fruits, and flowers.

In 1993, Stephen Kellert and Edward Wilson published the book The Biophilia Hypothesis [13]. According to them, the mid-range fractal dimension of a savanna landscape provides survival advantages such as the effortless conveyance of basic structural information. Environments with higher fractal dimensions, such as forests, can hide predators and thus present more danger, while environments with much lower fractal dimensions are too open and too exposed to offer protection and food sources. That is why humans have an innate preference for the mid-range environments, leading to natural comfort and well-being.

The transition from the biophilia hypothesis to biophilic design was the topic of a 2004 conference on the built environment and a later book on biophilic design, Biophilic Design: The Theory, Science, and Practice of Bringing Buildings to Life [12]. Biophilic design is based on the biophilia hypothesis. It encourages natural systems and features in the creation of built environments, taking advantage of our affinity for nature to create spaces that improve people's physical and mental well-being.

In 2014, Terrapin Bright Green released The 14 Patterns of Biophilic Design: Improving Health and Well-Being in the Built Environment [9]. In 2015, this evolved into five principles and 24 strategies with the manual The Practice of Biophilic Design by Stephen Kellert and Elizabeth Calabrese [11]. Recently, many other architects, mathematicians, neuroscientists, and psychologists have been revising these theories and adding significant contributions to biophilic design. Relative to the geometrical approach and the analysis and composition of architectural forms, the papers of Nikos Salingaros [21] and Omid Kardan [6] are very relevant, along with the books by Christopher Alexander [1] and by Sussman and Hollander [23]. In Cognitive Architecture: Designing for How We Respond to the Built Environment, Sussman and Hollander show how people are visually oriented and why, for biological reasons, they prefer bilaterally symmetrical forms [23].

The biophilic design parameters used for the development of the workshops' projects, based on [3] and the references above, were:


The works created in the workshops are not full biophilic design samples, since that would require alignment with most of the principles and strategies related to biophilic design. Only the parameters mentioned above were chosen, because they relate to vision, the most developed of the human senses.

# **3 Materials and Methods**

# **3.1 Algorithmic Modeling of Biophilically-Driven Geometry**

Once the biophilic design concepts had been introduced and determined, the algorithmic modeling started. The workshops then focused on the modeling methods that had the most potential for applying these concepts and biophilic patterns. The modeling tools used were the NURBS modeling software Rhinoceros, with the graphical algorithm editor Grasshopper and plug-ins such as Mesh Edit, Mesh Tools, Pufferfish, and Weaverbird [19]. The training stage was short and the participants did not know how to program, so it was decided not to work with code programming routines and to discard the use of other algorithmic plug-ins that would otherwise be relevant, such as Anemone [9].

The participants were instructed to recreate the geometric basis of the biological system they had chosen. The similarities and differences between organic forms and NURBS surfaces were considered, and their attributes were discussed to define strategies for subdividing and populating surfaces [4]. Congruently, the focus was on constructing surfaces by edge curves combined with extrusion and loft operations [4].

Algorithmically, it was demonstrated how to define transformations in sequence, forming hierarchical sets or progressive growth, by setting lists of transformation vectors. Unlike precedents [9], the generative growth was presented with examples that exhibit a recursive logic without the need for loops or recursive functions. For this, the golden ratio was used for incremental scale and rotation, and the Fibonacci sequence was used as a variable for the translation. This was possible due to the recursive nature of these mathematical constants, which have been found in patterns in nature [15] (Fig. 1). Another transformation relevant to biophilia was the Mirror Cut function, supported by Pufferfish. This function allows composing a symmetrical shape out of any shape by joining it with the mirrored part, merging the joints. It is advantageous with polygon meshes due to its efficiency in both computation and visualization.
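The same recursive growth logic can be expressed outside Grasshopper. The sketch below generates per-step scale, rotation, and translation values from the golden ratio and the Fibonacci sequence, analogous to the transformation lists demonstrated in the workshop; it is a plain Python illustration, not the actual Grasshopper definition.

```python
# Generate per-step transformations for a progressive "growth" sequence:
# scale and rotation are driven by the golden ratio, translation by the
# Fibonacci sequence. In Grasshopper these values would feed Scale, Rotate
# and Move components applied to successive copies of a base geometry.
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def fibonacci(n):
    seq, a, b = [], 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def growth_transforms(steps):
    fibs = fibonacci(steps)
    transforms = []
    for i in range(steps):
        transforms.append({
            "scale": PHI ** i,                       # incremental scaling
            "rotation_deg": (360.0 / PHI ** 2) * i,  # golden-angle rotation (~137.5 deg per step)
            "translation": float(fibs[i]),           # Fibonacci step length
        })
    return transforms

for t in growth_transforms(5):
    print(t)
```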

Concerns about efficiency are relevant to visualizing complex shapes through fast rendering and augmented reality in the subsequent experiments. For this, ways of discretizing NURBS surfaces as meshes were taught, as well as how to apply the interpolation necessary to create base shapes with continuous and organic curvatures. Taking advantage of the work with these meshes, subdivision and smoothing functions were added (such as the Catmull-Clark and Charles Loop algorithms implemented by Piacentino in Weaverbird [19]), along with maelstrom deformations and mesh face reductions by Boolean patterns of repetition and pseudo-randomly sorted values. Adaptations of one algorithmic definition to different base geometries were demonstrated to promote design alternatives [17]. Next, the definitions were shared with the participants to build upon or use as a reference for developing their own design ideas and applying productive creativity [9]. Later, the participants were directed and advised in the development of their proposals.

**Fig. 1.** Students' work: natural reference and geometric constructions of natural patterns

### **3.2 Facial Micro-expressions Analysis**

This part of the methodology was based on an inter-individual analysis to classify the emotional valences provoked by the projects. From the nature-inspired geometries created by the participants, the projects most aligned with biophilic design were selected and compiled into a seven-minute presentation. Each participant recorded a video-selfie while watching this compilation, to be used as a source for reading their facial micro-expressions. These expressions were registered using the open-source software FaceOSC, from which the data was sent to an interface that uses an Artificial Neural Network (ANN) implemented in Max MSP by IRL Labs, the Machine Learning Library for Gesture Recognition [5] (Fig. 2). Based on a previous version, the Interface for Capturing and Identifying States of Poetic Presences (ICISPP), which classifies states of Poetic Presence in the performing arts [18], this interface correlates facial expressions with emotional responses found in the participants' video-selfies. For this purpose, the ANN was trained using an actor's facial expressions in reaction to the Geneva Affective Picture Database (GAPED) [7]. This database was designed to trigger the viewer's emotional responses according to three emotional valences: Neutral, Negative, and Positive [7]. Therefore, the interface can correlate participants' expressions with the emotional valence category elicited by each project using their video-selfies. It delivers graphics that allow us to infer subjective relations, correlating the seven-minute presentation with the intensity of the valences over time. Subsequently, a discussion was established about specific aspects of these projects (lines, shapes, perimetry, surface patterns, depths, symmetries, focus of attention, the overall organization of the gaze, and others) which may relate to the state indexes and classification of valences raised before.
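The capture side of this pipeline can also be approximated in Python. The sketch below listens for a few typical FaceOSC gesture messages and passes them to a placeholder classifier that stands in for the Max MSP ANN trained on GAPED-labelled expressions; the port number and OSC addresses follow FaceOSC's usual defaults, and the classification thresholds are arbitrary illustrative values, not the trained network.

```python
# Illustrative receiver for FaceOSC gesture features (python-osc), feeding a
# placeholder classifier that maps a feature vector to Neutral/Negative/Positive.
# OSC addresses and port follow FaceOSC's usual defaults; the classifier below
# is a stub with arbitrary thresholds, standing in for the trained ANN.
from pythonosc import dispatcher, osc_server

features = {"mouth_width": 0.0, "mouth_height": 0.0,
            "eyebrow_left": 0.0, "eyebrow_right": 0.0}

def classify(feat):
    # Arbitrary illustrative thresholds, not the trained network's behaviour.
    if feat["mouth_height"] > 3.0 and feat["mouth_width"] > 14.0:
        return "Positive"
    if feat["eyebrow_left"] + feat["eyebrow_right"] < 14.0:
        return "Negative"
    return "Neutral"

def update(name):
    def handler(address, *args):
        features[name] = float(args[0])
        print(name, "=", features[name], "->", classify(features))
    return handler

disp = dispatcher.Dispatcher()
disp.map("/gesture/mouth/width", update("mouth_width"))
disp.map("/gesture/mouth/height", update("mouth_height"))
disp.map("/gesture/eyebrow/left", update("eyebrow_left"))
disp.map("/gesture/eyebrow/right", update("eyebrow_right"))

server = osc_server.ThreadingOSCUDPServer(("127.0.0.1", 8338), disp)
server.serve_forever()  # FaceOSC sends to this port by default
```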

**Fig. 2.** Screen captures of the ICISPP setup with an ANN trained for micro-expressions analysis of recorded video-selfie at two different moments

# **3.3 EEG Biosensor and Customized Data Capture Tool**

In addition to the above, an intra-individual analysis was used to detect the levels of attention required by each project. Participants who had never seen the projects watched the seven-minute presentation, and their neural states were collected using a commercial EEG: the MindWave biosensor, manufactured by NeuroSky [14]. First, a digital toolset was built to collect data from the brainwave sensor [10] and convert these numbers into graphics for further analysis and comparison with data captured from other devices and biofeedback techniques developed by our team. The idea for this application came from the simplicity and flexibility of coding in the open-source development environment Processing [20]. With this technological strategy, a process was designed using the following workflow to deploy the toolset: NeuroSky MindWave [10, 14] → ThinkGear Connector → TCP/IP commands → ThinkGear Java socket [2] → Processing 2.2.1 [20] → Timer → Suggested Image → generate a TXT file with all the data collected at the end of the process.

With this customization, developed using the Processing IDE [20] as the main management tool, it was possible to display the workshop's images while the software simultaneously recorded the biosensor data and wrote it to a text file. With a configurable timer determining how long users were exposed to the analyzed image, the MindWave sensor collected and filtered four different kinds of data from the ThinkGear Java Socket [2]: Attention Level, Meditation Level, Poor Signal Quality, and EEG Events (Delta, Theta, Low Alpha, High Alpha, Low Beta, High Beta, Low Gamma, and Mid Gamma waves).
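An equivalent logger can be written directly against the ThinkGear Connector's local TCP socket. The sketch below (plain Python rather than the Processing toolset described above) requests JSON output and appends attention, meditation, and signal-quality readings to a text file; the field names follow the commonly documented ThinkGear JSON format and should be treated as assumptions here.

```python
# Illustrative logger for NeuroSky MindWave data via the ThinkGear Connector,
# which serves newline/CR-delimited JSON on localhost:13854. Configuration keys
# and field names follow the commonly documented format (assumed here).
import json
import socket
import time

HOST, PORT = "127.0.0.1", 13854  # ThinkGear Connector's default local socket

with socket.create_connection((HOST, PORT)) as sock, open("eeg_log.txt", "a") as log:
    # Ask the connector for JSON-formatted, non-raw output.
    sock.sendall(json.dumps({"enableRawOutput": False, "format": "Json"}).encode())
    buffer = b""
    while True:
        buffer += sock.recv(4096)
        *packets, buffer = buffer.replace(b"\n", b"\r").split(b"\r")
        for raw in packets:
            if not raw.strip():
                continue
            try:
                packet = json.loads(raw)
            except ValueError:
                continue  # skip partial or malformed packets
            esense = packet.get("eSense")
            if esense:
                log.write("{:.0f}\t{}\t{}\t{}\n".format(
                    time.time(), esense.get("attention"),
                    esense.get("meditation"), packet.get("poorSignalLevel")))
```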

**Fig. 3.** Graphic plot of the readings of average attention levels (AT) and meditation levels (MT) from all the perceptions of one of the projects from workshop 1 (shown in Fig. 4, upper right) over 60 s

In addition, the toolset was able to detect potential interference or signal noise. By monitoring the level of these signals emitted by the hardware, the software allowed the user to check, through its interface, whether the communication was full, noisy, or null. After finishing the data collection generated during image visualization, it was possible to extract the data using spreadsheet software to create tables and graphics like the one in Fig. 3. These graphics were necessary to understand and compare all the numbers collected by the application.

# **4 Quantified Self Practice Results**

The inter- and intra-individual analyses were compared, verifying possible patterns in the degrees of attentional excitement, relaxation, and valence that could be associated with the projects. In the first workshop, the results that pointed to positive emotions and a higher level of monitored meditation were the ones that presented symmetric shapes, fractal geometries, and soft colors, blue or green (Fig. 4). In the second workshop, they were the ones with circular forms, growing sequences, or harmonic, continuous curves (Fig. 5). Designs that had fewer curves, or whose curves were discontinuous, showed significantly lower meditation levels.

**Fig. 4.** Designs that achieved a higher level of monitored meditation and pointed towards a positive emotional valence in workshop 1

**Fig. 5.** Designs that achieved a higher level of monitored meditation and pointed towards a positive emotional valence in workshop 2

# **5 Conclusions**

This research explores the potential of technology and neuroscience to help the design process and make meaningful connections between people and nature. The team used two workshops as a vehicle to incorporate experimental technology and QS practices, which enabled people to use neural data and pattern recognition as design feedback. It started with research into biophilic design and employed algorithmic modeling software to develop the projects, resulting in two-dimensional images. The projects most aligned with biophilic design were tested with two methodological strategies: inter- and intra-individual analyses. The first used the ICISPP instrumentation to acquire and analyze facial micro-expressions, and the second used a customized EEG data workflow to verify the intensities of brainwaves. Their results were divided into two categories: emotional valence and arousal, and level of monitored meditation. It was important to have both analyses working together to cross data and better understand the human feedback towards the drawings. These specific toolsets and methods were chosen because they were both low-cost and accessible during the workshops. The outputs were used to rethink the designs that had been produced.

The objective was to inform design for well-being by identifying clues to the natural elements of human perception. For this, a pattern network of geometric shapes that point toward high visual quality and appealing features was proposed. By referencing biological forms and conditions found in nature, the works utilized an algorithmic process to explore how these attributes can be applied to architecture and design. Upon review of the drawings as evidence, the two methodological strategies mentioned above allowed us to establish a boundary logic, ranking geometric shapes that pointed to positive emotions and a higher level of monitored meditation. The results pointed us to a deeper level of understanding relative to the perception of geometric shapes in design. They indicate that the characteristics of our surroundings, natural and artificial, may bear on some of our innate behaviors in natural settings, and they point to a new way to predict how well-being factors can clarify and rationalize a more intuitive design process. This paper can also be read as a reflection on how designers in the age of AI may use science to design spaces that help us develop humanistic traits and abilities to enhance the built environment.

What we have learned offers input relative to using biological and emotional data as feedback to support a more heightened experience of the relationship between humans and nature. Although there is much work to be done and this paper only documents two sample series of projects, the methodology allowed us to begin to predict and forecast how our designs could respond to human interaction. The use of algorithmic design allowed us to create complex drawings. The use of a biosensor and a customized interface for facial micro-expression analysis allowed us to understand biological feedback and to synthesize geometries that trigger an autonomic response. In the future, continued research will expand the analyzed data for stronger conclusions, including new methodological procedures that take individual differences into consideration. The following steps might include augmented reality and interactive holographic experiences in expanded immersive spaces, to test the morphology of projects that move from the canvas to physical space and the possible changes in perception.

### **References**

1. Alexander, C.: The Nature of Order, Book 1: The Phenomenon of Life, 1st edn. Center for Environmental Structure, Berkeley (2002)


**Open Access.** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Subjectively Measured Streetscape Qualities for Shanghai with Large-Scale Application of Computer Vision and Machine Learning**

Waishan Qiu1(B) , Wenjing Li2, Xun Liu3, and Xiaokai Huang4

<sup>1</sup> Department of City and Regional Planning, Cornell University, Ithaca, NY 14850, USA

<sup>2</sup> Center for Spatial Information Science, The University of Tokyo, Tokyo 113-8654, Japan

<sup>3</sup> School of Architecture, University of Virginia, Charlottesville, VA 22904, USA

<sup>4</sup> Graduate School of Design, Harvard University, Cambridge, MA 02138, USA

**Abstract.** Recently, many new studies have emerged that apply computer vision (CV) to street view imagery (SVI) datasets to objectively extract view indices of streetscape features such as trees as proxies for urban scene qualities. However, human perceptions (e.g., imageability) have a subtle relationship to visual elements that cannot be fully captured using view indices. Conversely, subjective measures using survey and interview data explain more of human behavior. However, the effectiveness of integrating subjective measures with SVI datasets has been less discussed. To address this, we integrated crowdsourcing, CV, and machine learning (ML) to subjectively measure four important perceptions suggested by classical urban design theory. We first collected experts' ratings on sample SVIs regarding the four qualities, which became the training labels. CV segmentation was applied to the SVI samples to extract streetscape view indices as the explanatory variables. We then trained ML models and achieved high accuracy in predicting the scores. We found a strong correlation between the predicted complexity score and the density of urban amenity and service Points of Interest (POI), which validates the effectiveness of subjective measures. In addition, to test the generalizability of the proposed framework and to inform urban renewal strategies, we compared the measured qualities in Pudong to five other renowned urban cores worldwide. Rather than predicting perceptual scores directly from generic image features using a convolutional neural network, our approach follows urban design theory and confirms how various streetscape features affect multi-dimensional human perceptions. Its results therefore provide more interpretable and actionable implications for policymakers and city planners.

**Keywords:** Subjective measure · Human perception · Street view image · Computer vision · Global comparison

# **1 Introduction**

Urban design qualities such as enclosure directly affect a person's appreciation of a place (Ewing and Handy 2009). Recently, with the prevalence of Street View Imagery (SVI) data in environmental auditing (Yin and Wang 2016), computer vision (CV) has been widely applied to extract streetscape features, making large-scale urban scene understanding possible (Yin et al. 2015). However, studies have been limited to objective measures: only the view indices of individual features such as trees and buildings are analysed, while the viewers' overall perceptions are ignored. Human perceptions have subtle relationships that cannot be fully represented by individual view indices or a simple combination of them (Ewing and Handy 2009; Lin and Moudon 2010).

Conversely, the "subjective measure" which refers to evaluative scores collected from surveys questions can capture more subtle relationships (Lin and Moudon 2010). It is more user centered (Naik et al. 2014), although the definitions of perceptual qualities are inconsistent across studies (Ewing and Handy 2009). However, few studies have addressed subjective measures' effectiveness in capturing more subtle perceptions using SVI data.

To bridge this gap, we took Shanghai as an example and applied CV and ML to subjectively measure four perceptual qualities, namely **enclosure, human scale, complexity, and imageability**. These perceptions have been identified as important in affecting pedestrians' behaviors, residents' mode choices, and home buyers' willingness to pay (Ma et al. 2021). Our work enriches subjectively-measured urban perception studies. It is also the first cross-study of global cities of this kind. Urban renewal implications are derived for policymakers based on the global comparison. Furthermore, we contribute to future studies by proposing a framework that integrates AI applications with classical urban measurement frameworks.

# **2 Literature Review**

#### **2.1 Objective and Subjective Measures**

The street environment significantly affects people's appreciation of a place, as well as residents' physical activities, mode choices, and willingness to pay (Ewing and Cervero 2010). Street qualities have mostly been measured using objective quantities such as building height, street width, and number of trees (Cervero and Kockelman 1997). However, physical features alone cannot represent people's overall perception, which has more subtle relationships (Ewing and Handy 2009).

Conversely, subjective measures often derive from interviews and surveys. They explain people's behavior more completely, as behavior is mediated by the "cognitive map" of the environment (Lynch 1960). Conventional approaches that rely on interviews or telephone surveys to collect people's overall perceptions have several problems (Ewing and Handy 2009). First, the consistency and reliability of the operation can be questioned due to individual differences. Second, survey-based measurement is time-consuming and expensive; such low-throughput methods limit the application of subjective measures to larger geographic contexts (Naik et al. 2014). Third, the results are difficult to interpret, hence providing less instructive implications for policymakers (Lin and Moudon 2010).

Nevertheless, subjective and objective measures can be integrated. Ewing and Handy (2009) reviewed 51 subjective perceptual qualities from the urban design literature. They statistically correlated subjective scores, rated by experts watching street view video clips, with objectively quantified elements such as people and trees from field surveys. In this way, they successfully operationalized the objective measurement of five seemingly subjective perceptions.

### **2.2 Computer Vision and Machine Learning in Street Measures**

Recently, new studies have emerged that take advantage of open-source big data and AI algorithms. First, SVI data has covered a growing number of cities and spread to new cities rapidly since 2007; it can be used to measure street-level, human-eye views that are inaccessible from a bird's-eye view (Li et al. 2015). A few recent studies measure the built environment using SVIs. For example, Rundle et al. (2011) used Google SVI to manually audit neighborhood environments. Later, with the advance of AI techniques such as CV and ML, automatically extracting features from images became possible. Yin and Wang (2016) applied ML to measure visual enclosure from SVI. Their results showed that the ML algorithms performed well in recognizing and calculating sky areas, allowing the measurement to be done reproducibly. Other studies have measured pedestrians, trees, sky, buildings, façades, etc., respectively (Chen et al. 2020; Li et al. 2015; Ma et al. 2021). However, as discussed above, these objective view indices cannot represent viewers' overall feelings about the street scenes (Qiu et al. 2021).

Besides open-source SVI data, integrating crowdsourcing with AI has become a viable way to uncover large-scale public perceptions (Naik et al. 2014). Online data collection allows a greater number of participants to evaluate perceived qualities from images, largely increasing the accessibility of urban perception data (Naik et al. 2014; Salesses et al. 2013). Naik et al. (2014) collected perceived safety online by asking participants to rank pairwise street photos. These preferences were converted to ranked safety scores and became the training data for ML models that predict perceived safety scores for 21 cities worldwide. The method was also applied to investigate the correlation between urban appearance and neighborhood income as well as housing prices (Glaeser et al. 2018).

Despite the effectiveness of subjective measures in incorporating more subtle human perceptions, most studies using SVI data are limited to objectively extracted visual elements. Little has been done to construct global maps of the subjectively-measured perceptions for the many perceptual qualities identified by classical urban design studies, such as imageability and complexity (Ewing and Handy 2009).

Therefore, our work sets out to enrich the subjective measures of urban perceptions. It contributes to analytical frameworks by extending the classical urban design framework with AI and big data (Fig. 1a). While Ewing and Handy (2009) relied on human labor to manually count physical features from video clips, we applied CV to extract the pixel ratios or counts of each important feature. While Naik et al. (2014) only mapped perceived safety scores, we measured four important qualities identified in the urban design literature and validated the scores with objective POI data. Furthermore, it is the first cross-study of several global cities with the application of CV and ML, which sheds light on urban renewal implications for global studies.

**Fig. 1.** Analytical framework (a) based on the urban design quality literature; and (b) the selection of the four perceptual qualities and their contributing features

# **3 Data and Methods**

### **3.1 Study Area and Data Preparation**

Pudong District in Shanghai is the financial center of China. Since the housing reform in 1998, Pudong has become one of the most expensive and vibrant housing markets in China (Chen et al. 2020). An empirical analysis of street quality across Pudong would provide essential implications for urban renewal. The data include (1) SVIs collected from the Baidu Street View API, (2) POI data from DaZhongDianPing and AutoNavi Map, and (3) a shapefile of road networks from OpenStreetMap (OSM).

### **3.2 Calculating Subjective Qualities**

### **3.2.1 Downloading Baidu SVIs**

SVIs were downloaded from the Baidu Street View Static API with consistent camera settings. The 'heading' was set using the street angle; the image size was 600 × 300 pixels. The FOV (horizontal field of view) was 120°. The 'pitch', which specifies the up or down angle of the camera, was 0°. To ensure our training images would cover most urban area types, 300 images were randomly sampled across the Shanghai region (Fig. 2).
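A minimal sketch of such a download loop is shown below. The endpoint URL, key name, and parameter names are assumptions for illustration; the actual Baidu Street View Static API may use different names, although the camera settings match those described above.

```python
import requests

# Hypothetical endpoint and key name; the actual Baidu Street View Static API
# URL and parameter names may differ from this sketch.
API_URL = "https://api.map.baidu.com/panorama/v2"
API_KEY = "YOUR_KEY"

def download_svi(lon, lat, heading, out_path,
                 fov=120, pitch=0, width=600, height=300):
    """Request one street view image with the camera settings used in the study."""
    params = {
        "ak": API_KEY,
        "location": f"{lon},{lat}",
        "heading": heading,   # aligned with the street angle
        "fov": fov,           # horizontal field of view, degrees
        "pitch": pitch,       # up/down camera angle, degrees
        "width": width,
        "height": height,
    }
    resp = requests.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)
```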

**Fig. 2.** Downloading Baidu SVIs (a) A typical SVI downloaded for this study. (b) The camera settings were controlled by "heading", "FOV", "pitch" and "resolution". (c) SVI training samples.

# **3.2.2 Collecting Public Perceptions as Training Labels**

To collect people's preferences on street scenes as the training labels, we developed an online questionnaire platform on which people select the preferred image in pairwise comparisons for each of the four perceptual qualities (Fig. 3a). Over a one-week period, we collected 3,120 valid entries from 23 volunteers, mostly architecture students in Shanghai. On average, each image was compared with 10 other images, which is sufficient for the results to converge (Naik et al. 2014).

These preferences were then translated into ranked scores with the TrueSkill algorithm (Microsoft 2005), which has also been applied to rank perceived safety (Naik et al. 2014). The ranked scores were normalized to a 0–10 scale. People seemed to favor streetscapes with less sky exposure, more trees, and more pedestrians (Fig. 3b). These 300 labelled images became our training data.
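As an illustration of this ranking step, the sketch below uses the open-source `trueskill` Python package; the paper names the TrueSkill algorithm but does not specify its implementation, so the package choice and the min-max rescaling to 0–10 are assumptions. Each pairwise choice updates the two images' ratings, and the final rating means are rescaled.

```python
import trueskill

def rank_images(image_ids, comparisons):
    """comparisons: list of (winner_id, loser_id) pairs from the pairwise survey."""
    env = trueskill.TrueSkill(draw_probability=0.0)
    ratings = {i: env.create_rating() for i in image_ids}
    for winner, loser in comparisons:
        ratings[winner], ratings[loser] = env.rate_1vs1(ratings[winner], ratings[loser])
    # Normalize the rating means to a 0-10 scale
    mus = {i: r.mu for i, r in ratings.items()}
    lo, hi = min(mus.values()), max(mus.values())
    if hi == lo:
        return {i: 5.0 for i in mus}
    return {i: 10.0 * (mu - lo) / (hi - lo) for i, mu in mus.items()}
```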

**Fig. 3.** Collecting the collaborative image of streetscape with an online survey platform. (a) Our online survey system asking participants to click on one of pairwise SVIs in response to evaluative questions. (b) High score, low score example images, and the histogram of score distribution, for each of the four perceived street qualities.

# **3.2.3 Physical Feature Classification**

The Pyramid Scene Parsing Network (PSPNet) is an image segmentation algorithm that produces reliable results on the scene parsing task (Zhao et al. 2016). We used PSPNet to extract and calculate the pixel ratios of individual features as view indices from SVIs; 35 kinds of streetscape elements were detected (Fig. 4a). For cars, people, signs, and street furniture, the pixel ratio is less meaningful, so we applied Mask R-CNN (He et al. 2017) to count the number of instances (Fig. 4b).
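A sketch of how the two kinds of explanatory variables might be derived from the model outputs, assuming a PSPNet label map (one class id per pixel) and a Mask R-CNN detection list are already available; the dictionary structure of the detections is an assumption for illustration.

```python
import numpy as np

def view_indices(label_map, class_names):
    """Pixel ratio of each semantic class in a PSPNet label map (H x W array of class ids)."""
    total = label_map.size
    return {name: float(np.sum(label_map == idx)) / total
            for idx, name in enumerate(class_names)}

def instance_counts(detections, countable=("car", "person", "sign", "street furniture")):
    """Count Mask R-CNN instances for classes where pixel ratio is less meaningful."""
    counts = {name: 0 for name in countable}
    for det in detections:            # each det assumed as {"class": str, "score": float, ...}
        if det["class"] in counts:
            counts[det["class"]] += 1
    return counts
```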

# **3.2.4 Predicting Subjective Scores**

We then applied several ML algorithms such as K-nearest neighbors (KNN), support vector machine (SVM), and random forest (RF) to predict the four perceptions. Mean Absolute Error (MAE) was set as the loss function, resulting in best models with an average MAE of 1.83, which is acceptable: with a scoring system of 0–10, an error of 1.83 will not alter the interpretation of a quality. We then applied the best-performing models to all 14,274 downloaded Baidu SVIs and derived the four subjective scores for the Pudong area.
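A sketch of the model selection step with scikit-learn, assuming `X` holds the view indices and instance counts per image and `y` holds one of the four perception scores; hyperparameters here are illustrative, not those used in the study.

```python
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def best_perception_model(X, y):
    """Pick the regressor with the lowest cross-validated MAE and fit it on all data."""
    candidates = {
        "KNN": KNeighborsRegressor(n_neighbors=5),
        "SVM": SVR(kernel="rbf"),
        "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    }
    scores = {}
    for name, model in candidates.items():
        mae = -cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_absolute_error").mean()
        scores[name] = mae
    best = min(scores, key=scores.get)
    return candidates[best].fit(X, y), scores
```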

**Fig. 4.** CV segmentation results (a) Pairwise PSPNet semantic segmentation results with its raw input (b) Mask R-CNN instance segmentation results counting objects

### **3.3 Correlation Test and Result Validation**

Meanwhile, a logistic regression analysis was conducted among the four qualities to check their correlations. The result shows that the degree of 'imageability' is significantly and positively correlated with 'enclosure' and 'complexity' (Fig. 5a). Furthermore, we cross-referenced the complexity score with POI density (using food & beverage, entertainment, and recreation POIs). Higher complexity scores are correlated with more POIs, indicating that the predicted complexity score effectively captures the impact of urban amenities and services (Fig. 5b).
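The validation idea could be reproduced along the following lines, assuming per-location predicted scores and POI counts joined in a pandas DataFrame with illustrative column names. This sketch uses a simple correlation matrix and a Pearson test rather than the exact regression procedure described above.

```python
import pandas as pd
from scipy import stats

def validate_scores(df: pd.DataFrame):
    """df columns (illustrative): enclosure, human_scale, complexity,
    imageability, poi_density, one row per sampled street location."""
    # Pairwise correlations among the four perceptual qualities
    quality_corr = df[["enclosure", "human_scale", "complexity", "imageability"]].corr()
    # Cross-reference the complexity score with POI density
    r, p = stats.pearsonr(df["complexity"], df["poi_density"])
    return quality_corr, r, p
```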

**Fig. 5.** Validation of results using (a) correlations test between four scores and (b) cross-reference to actual POIs density (c) Cognitive maps of four perceptual qualities

### **3.4 Global Comparison with Other Cities**

To validate the generalizability of our framework and to inform what kind of environment facilitates urban innovation, we selected five renowned innovation districts as benchmarks, namely Cambridge Kendall Square, the London Knowledge Quarter, Manhattan Wall Street, San Francisco Downtown, and Seattle South Lake Union. The scores of Zhangjiang High-Tech Park were compared to those of the five benchmarks. Implications for urban design and renewal in Pudong and Zhangjiang were discussed based on the comparison results.

# **4 Results and Findings**

# **4.1 Spatial Distribution of Perception Qualities**

Figure 5c provides the first comprehensive cognitive maps for Pudong District. The distributions of the four perceptual qualities are heterogeneous, with the downtown area (i.e., Lujiazui) perceived highest. The result indicates that when allocating renewal resources, more could be invested in peripheral residential areas and industrial parks where street qualities are perceived as low but which have large residential populations and employment, such as Zhangjiang High-Tech Park.

# **4.2 Comparison with Other Cities**

Pudong's street qualities fall behind global best practices. First, Zhangjiang has the lowest average score compared to the five other benchmarks (Fig. 6b), indicating that more urban design interventions could be considered to improve the overall appreciation of the street environment, as a good street environment facilitates innovation. Second, the five global districts have smaller variances in their scores, while the scores in Zhangjiang are highly polarized, implying uneven development (Fig. 6a). This suggests that future studies should investigate whether such an uneven distribution has posed equity issues for specific population segments (Salesses et al. 2013). Last, the result confirms that our method is applicable to a wide range of regions.

**Fig. 6.** Comparing six cities' perceptual qualities. (a) Score distributions (b) Averaged scores

# **4.3 Cross-reference with Zoning Metrics**

To provide actionable policy suggestions, we cross-referenced perception scores with objective metrics of urban form and density, such as average block size and floor area ratio (FAR) (Fig. 7). Zhangjiang has the widest roads but the lowest measured density, which explains its lowest perceived enclosure, since lower building heights and wider streets lead to less enclosure (Yin and Wang 2016). Less enclosure limits neighborhood walkability and results in less walking behavior, which is confirmed by the pedestrian counts from SVIs.

**Fig. 7.** Comparing (a) urban fabrics and block metrics, (b) development density and metrics

# **5 Conclusion**

### **5.1 Effectiveness of Subjective Measures Using SVI and AI**

While this method may not immediately replace long-established techniques in urban environment auditing, it offers many merits: it is closely tied to the pedestrian's perspective, low-cost, requires no proprietary software or methods, and can be applied wherever an SVI dataset is available. The proposed method therefore provides a useful alternative for planners and policymakers.

First, the cross-study of six global urban cores including Pudong District confirms the generalizability of our proposed framework. The method is reproducible and consistently predicts perceptions from widely available open-source SVI datasets. Second, subjective measures capture more comprehensive and subtle human perceptions than individual view indices. All four important human perceptions suggested by urban design theory have been operationalized, and the accuracy has improved compared to prior work (Ewing and Handy 2009). Third, although measured simply from images, the perceptual scores capture many urban space qualities and characteristics that are traditionally only obtainable through objectively measured urban metrics, such as FAR, street width, building height, block size, and amenity density. We find a significant correlation between the complexity score and POI density, as well as between enclosure and urban form and density metrics including FAR and street width. While objective urban metrics must be measured from massive POI and urban 3D model data through complicated workflows in ArcGIS and Rhino, our framework can stand alone without any licensed programs or software; all the information needed is open-source. Therefore, compared to objective measures of urban form, our proposed framework is more accessible and offers higher throughput. Lastly, the cross-study indicates polarized and uneven urban development in Pudong District. Unlike the other benchmark districts, Pudong has large variances and lower average scores across all four perceptions, which suggests a more equitable allocation of urban design efforts and investment resources.

### **5.2 Limitations**

First, our segmentation only used pre-trained models. Future studies could train task-specific models, for example to extract façades and windows, which significantly affect many perceptions. Second, our training data was limited by the scarcity of volunteer raters, and the raters were not randomly selected. Third, further investigation could address the divergence and coherence between subjective and objective measures of urban perceptions.

# **References**


Zhao, H., Shi, J., Qi, X., Wang, X., Jia, J.: Pyramid scene parsing network (2016)

**Open Access.** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **From Visual Behavior to Signage Design: A Wayfinding Experiment with Eye-Tracking in Satellite Terminal of PVG Airport**

Chengyu Sun, Shuyang Li(B) , Yinshan Lin, and Weilin Hu

Tongji University, Yangpu 200092, Shanghai, China

**Abstract.** Passengers principally rely on signage to make wayfinding decisions in transportation buildings. Most existing research focuses on the analysis of wayfinding trajectories, with less attention to the process of how passengers make wayfinding decisions. As a result, it is hard to accurately locate the causes of wrong wayfinding decisions. Taking the Satellite Terminal of Shanghai Pudong International Airport (PVG Airport) as an example, we adopted eye-tracking technology and recorded the eye-tracking data of passengers observing the signage and making wayfinding decisions. We then compared and analyzed the data and presented it through data visualization. This study identified the causes of wrong wayfinding decisions and three patterns of visual behavior in wayfinding: reconfirmation behavior, priority of attention, and clockwise observation. Finally, corresponding suggestions for signage design optimization are put forward for several wayfinding decision points. As a result, the optimized signage system in the satellite terminal was welcomed by passengers two months later, according to monthly questionnaires.

**Keywords:** Signage design · Data visualization · Visual behavior · Eye-tracking · Wayfinding · Transportation building

# **1 Introduction**

Transportation buildings are an important building type in wayfinding research. Passengers usually have a clear wayfinding purpose in transportation buildings, so it is necessary to design a well-conceived signage system to improve wayfinding efficiency.

Wayfinding is a process from environmental perception to decision-making. Eye-tracking technology is used to record this wayfinding process, and the specific causes of wrong wayfinding decisions can be accurately located by analyzing the eye-tracking data. We propose a data visualization based on the HSB color code to analyze the relationship between the interior space, the signage, and wayfinding behavior. In this way, we can intuitively see the gaze paths and observation areas of different passengers, which helps in summarizing the visual behavior of wayfinding. These visual behaviors can provide designers with more references when designing a signage system.

# **2 Background**

The Satellite Terminal of Shanghai Pudong International Airport (hereinafter PVG Airport) is a typical mega-scale transportation building with 134 boarding gates and approximately 3,600 signboards. The construction area of the Satellite Terminal is about 670,000 m². Inside the building, the interior spaces along the walking paths are extremely similar. Moreover, the passenger paths in the Satellite Terminal are intricate, with paths from entrances to gates taking 30 minutes on average.

Considering the unaffordable cost and difficulty of an on-site wayfinding experiment, a virtual wayfinding experiment platform was developed. Based on the method of "high-resolution panorama + low level-of-detail 3D model", the interior space of the Satellite Terminal was reproduced as a virtual reality scene (Fig. 1). Participants were assigned wayfinding tasks randomly, and their wayfinding trajectories were recorded by the program. As a result, a total of 3,382 virtual wayfinding experiments (175 people) were completed in three weeks.

Through the analysis of the wayfinding trajectories (Fig. 2), we can quickly find the wayfinding decision points at which participants make wrong decisions. However, the trajectory alone is not enough to explain why participants make wrong decisions at a given decision point. Therefore, a further experiment was needed to explore the specific causes of the wrong decisions.

**Fig. 1.** Virtual wayfinding experiment platform

**Fig. 2.** Trajectories of virtual wayfinding experiment

# **3 The Necessity of Exploring Visual Behavior**

Wayfinding behavior is related to human cognition. In transportation buildings, passengers need to find their destination quickly and accurately, so the signage system is remarkably influential (Arthur and Passini 1992). Tzeng and Huang (2009) discovered that signage plays a more important role in enclosed spaces and open L-shaped spaces. Xu et al. (2010) studied the impact of the distance, position, and color of signboards on wayfinding efficiency. They concluded that signboards at exits and turning points attract the most attention, and that passengers will repeatedly follow the direction indicated by signboards even when it is wrong.

The above studies showcase the relationship between signage and users. However, designers tend to ignore passengers' wayfinding behavior when they design a signage system and wrongly assume that wayfinding is synonymous with signage (Carpman and Grant 2002). As a result, the signage system fails to satisfy users.

Through eye-tracking technology, passengers' visual behavior can be recorded, which provides an opportunity to study how passengers observe the surrounding environment and how they make wayfinding decisions (Till and Babcock 2011). Schrom-Feiertag et al. (2017) developed a virtual wayfinding experiment platform with eye-tracking technology to explore attention maps and objects of interest for passengers. Wiener et al. (2012) conducted a series of eye-tracking experiments to study visual behavior and decision making in wayfinding. They found that participants tend to choose the path with a longer line of sight and that gaze bias effects are a general phenomenon.

From the eye-tracking data, we can analyze the visual behavior of wayfinding in a more targeted way, which supports the improvement of signage system design and interior design.

# **4 On-site Wayfinding Experiment with Eye-Tracking**

### **4.1 Setting and Participants**

In this experiment, we selected 8 wayfinding decision points where participants frequently made wrong decisions in the virtual experiment as the trial group. 2 wayfinding decision points with no record of decision error were added as a control group. The red fan-shaped area (Fig. 3) represents the initial viewing angle of the participants before the experiment started. There were 8 wayfinding tasks at each wayfinding decision point, including searching for the boarding gate, VIP lounge, currency exchange, etc., for 80 wayfinding tasks in total.

8 people between 18 and 46 years old participated in this experiment, with an approximately equal number of men and women. All participants had experience of air travel and knew the boarding process, but none had ever been to the Satellite Terminal of PVG Airport. Participants were equipped with a mobile eye-tracking device (Dikablis Glasses 3). This device has three independent cameras: a front camera capturing what the wearer is observing, and two cameras recording the wearer's eye movements.

**Fig. 3.** Wayfinding decision points of the on-site experiment

### **4.2 Procedure**

In order to eliminate the influence of sunlight, the experiment was carried out at night with sufficient and stable lighting. Participants calibrated the eye-tracking device with the help of the experimenter and observed aimlessly for 3 minutes to adapt to it. Then, the experimenter brought the participants to a wayfinding decision point and randomly selected a wayfinding task. After the experimenter had explained the wayfinding task, the participants began to observe the surrounding environment and signboards. Participants informed the experimenter orally when they made a wayfinding decision. Finally, the experimenter interviewed the participants and recorded their feelings and doubts while they were trying to find the way (Fig. 4).

### **4.3 Data Analysis**

### **4.3.1 Data Visualization of Single Wayfinding Task**

Eye-tracking data is shown on the panorama of the wayfinding decision point. Fixation duration is presented as a heatmap (observations longer than 0.2 s were considered fixations). The gaze path is based on the time sequence of the fixation areas.
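A minimal sketch of the fixation-threshold idea, assuming gaze samples have already been mapped onto panorama pixel coordinates with timestamps; the data structure, the dispersion window, and the grouping logic are illustrative assumptions rather than the exact processing pipeline of the eye-tracking software.

```python
def detect_fixations(samples, min_duration=0.2, max_dispersion=30):
    """samples: list of (t_seconds, x_px, y_px) gaze samples in time order.
    Groups consecutive samples that stay within max_dispersion pixels of the
    running centroid and keeps groups lasting at least min_duration seconds."""
    def centroid(group):
        return (sum(p[1] for p in group) / len(group),
                sum(p[2] for p in group) / len(group))

    def flush(group, out):
        if group and group[-1][0] - group[0][0] >= min_duration:
            cx, cy = centroid(group)
            out.append((group[0][0], group[-1][0] - group[0][0], cx, cy))

    fixations, group = [], []
    for s in samples:
        if group:
            cx, cy = centroid(group)
            if abs(s[1] - cx) <= max_dispersion and abs(s[2] - cy) <= max_dispersion:
                group.append(s)
                continue
            flush(group, fixations)
        group = [s]
    flush(group, fixations)
    return fixations  # each entry: (start_time, duration, x, y)
```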

We divide the view into different observation areas and display the eye-tracking data on each observation area respectively. The fixation area and gaze path change when head turning occurs, so different observation areas should be differentiated for more accurate analysis. The observation areas are also sorted by time sequence, and the order is represented by red numbers (Fig. 5).

**Fig. 4.** Participant in the on-site wayfinding experiment with mobile eye-tracking

**Fig. 5.** Data visualization of single wayfinding task

# **4.3.2 Data Visualization Based on the HSB Color Code**

Taking a wayfinding decision point as the research object, we try to find common rules of visual behavior when participants search for wayfinding information at this decision point. A coordinate system is established on the panorama of the wayfinding decision point, and the regions of interest are coded in the form of coordinate points. The horizontal axis of the coordinate system is divided into 360°. The vertical axis is divided into 180°, with the sightline at eye height (1.65 m in general) defined as 0°.

HSB (Hue-Saturation-Brightness) colors were used to label the participants' areas of interest: we define the horizontal coordinate as the hue value and the vertical coordinate as the brightness value. The depth of space could be represented by the saturation value, but this study does not yet involve depth. To make the color label more intuitive, we cut out a part of the whole chromatography as a new chromatography, in accordance with the horizontal axis of the coordinate system. The colors in the bar chart correspond to the HSB colors and represent the regions of interest, while the horizontal axis of the bar chart represents the proportion of fixation duration. Considering that the fixation duration of each participant is different (from 4 to 30 seconds) and that we are more interested in how participants allocate their observation time, the proportion of fixation duration is more informative than the absolute value. Meanwhile, fixation duration is also presented on the building layout, where the radius of each sector represents the cumulative observation time of the 8 participants.
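A sketch of how an interest-area coordinate could be mapped to an HSB color label as described, assuming the horizontal angle maps to hue within a cropped part of the spectrum and the vertical angle to brightness; the exact chromatography crop used in the study is not specified, so the hue range here is an assumption.

```python
import colorsys

def hsb_label(h_deg, v_deg, hue_range=(0.0, 0.83), saturation=1.0):
    """Map an interest-area coordinate to an RGB color via HSB.
    h_deg: horizontal angle 0-360, mapped to hue within hue_range (assumed crop);
    v_deg: vertical angle -90..+90, with 0 = eye height, mapped to brightness."""
    hue = hue_range[0] + (h_deg / 360.0) * (hue_range[1] - hue_range[0])
    brightness = 0.5 + (v_deg / 180.0)      # -90 deg -> 0.0, +90 deg -> 1.0
    brightness = min(max(brightness, 0.0), 1.0)
    return colorsys.hsv_to_rgb(hue, saturation, brightness)
```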

**Fig. 6.** Data visualization of wayfinding experiments at V153 wayfinding decision point

### **4.4 Results**

### **4.4.1 Visual Behavior of Wayfinding**

• The Reconfirmation Behavior

To save energy and get to the boarding gate in time, participants exhibit a reconfirmation behavior when they search for wayfinding information in this huge interior space. When they notice information about the target place on a nearby signboard, participants then follow the indicated direction and search for the same target information on the next, more distant signboard (Fig. 5). If they cannot find it, they feel anxious, which leads to wayfinding mistakes.

• The Priority of Attention

At special places such as escalators, doorways, and other locations where the building space changes, passengers subconsciously expect signboards or other important information (Fig. 6). Meanwhile, passengers tend to ignore narrow passages, or are unwilling to choose such paths.

Color and luminescence are also important factors in the priority of attention. Lamps, LED electronic display screens, and luminous shop fronts attract participants' attention more easily than the signboards. When observing a signboard (whose panel is blue), participants were first attracted to the yellow fonts rather than the white fonts.

• The Clockwise Observation

When there is no signage in the visual field, or no needed information on the signboards, participants usually observe in the clockwise direction (to the right side). In the bar chart of Fig. 7, the color of each line changes from yellow to orange, then to purple and blue. This means that most people turn to the right to search for information, which may be related to the fact that most of the participants were right-handed, whereas a left-handed participant consistently observed in the counterclockwise direction.

**Fig. 7.** Data visualization of wayfinding experiments at V140 wayfinding decision point

### **4.4.2 The Causes of Making Wrong Wayfinding Decisions**

Combining the participants' oral statements and eye-tracking data, the underlying causes of the wayfinding problems were analyzed to guide subsequent design optimization (Table 1):

# **5 Optimization of Signage Design**

# **5.1 Guidelines of Optimization**

After sorting out the results of the wayfinding experiment, we propose a guideline for optimizing the signage design:


**Table 1.** Causes of wayfinding problems


# **5.2 Cases of Optimization**

# **5.2.1 Set Signboards at Specific Positions**

Where the sightline is blocked so that passengers cannot see distant signboards when the interior space changes, signboards should be placed at these positions. At the V153 decision point, two redundant signboards were removed and two signboards were added to ensure the continuity of observation (Fig. 8).

# **5.2.2 Adjust Wayfinding Information Based on the Reconfirmation Behavior**

The wayfinding information on the signboards should be coherent so that the information can echo from one signboard to the next. As shown in Fig. 9, four redundant signboards were removed, and a crossroad signboard (pointing in three directions) was added. All the wayfinding information appears twice, on the hanging signboards and on the crossroad signboard, so passengers can easily reconfirm it.

# **5.2.3 Adjust the Content by Priority**

Process wayfinding information (departure, arrival, boarding gate) is the most important, followed by information about functional facilities (restaurant, shop, toilet). Wayfinding information can be differentiated by whether it is luminous, as well as by color and size (Fig. 10).

**Fig. 8.** Optimization of signage design at V153 wayfinding decision point

**Fig. 9.** Optimization of signage design based on the reconfirmation behavior

**Fig. 10.** Optimization of signboards design

# **6 Summary**

As a powerful supplement to conventional wayfinding research, eye-tracking technology has great potential. Taking the Satellite Terminal of PVG Airport as an example, the eye-tracking data can be visualized on the panorama of wayfinding decision points, so the specific causes of wrong wayfinding decisions can be accurately located. Finally, suggestions for signage design optimization were put forward for several wayfinding decision points. The optimization project, which started in October 2020, is estimated to take 9 months to complete. After that, a systematic evaluation will be carried out to assess the performance of the optimized signage. Up to now, at the multi-directional wayfinding decision points where the optimized signage has been installed, the signage effectively guides and disperses passengers and is welcomed by them according to monthly questionnaires.

The approach to data visualization based on the HSB color code proposed in this paper provides a new perspective for analyzing eye-tracking data. Taking the wayfinding decision point as the coordinate origin, three-dimensional coordinates of the areas of interest are established and mapped to HSB values. This helps us directly analyze the relationship between visual behavior, the signage, and the interior space. The method helps researchers form hypotheses and provides a clear direction for the quantitative analysis of eye-tracking data. It can be used in wayfinding research, as well as in environmental psychology and environmental behavior research.

(Note: all the charts in this paper were taken or drawn by the authors).

**Acknowledgements.** The research was supported by a project of Natural Science Foundation of China titled "An internet-plus-based approach of crowd simulation for public buildings" (no. 51778417).

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **A Framework for Cypher-Physical Human-robot Collaborative Immersive MR Interaction – Beaux Arts Ball 4.0**

Risu Na(B) and Haocheng Dai

University of California Los Angeles, Los Angeles, CA 90095, USA narisu960209@ucla.edu

**Abstract.** In this paper, we present a human-robot collaborative mixed reality application, Beaux Arts Ball 4.0, in which a real-time interactive hybrid physical-digital architectural environment was designed and experienced through the tools and techniques of mixed reality, cyber-physical systems, teleoperation, telepresence, and automation. The application engaged the user and observer in a continuous loop of architectural transformation, in which every type of sensory input was blurred between physical and digital perception.

**Keywords:** Mixed reality · Avatars · Robotics · Human-robot collaboration

# **1 Introduction**

In reference to the new virtual environments of Industry 4.0, we propose a contemporary interpretation of the Beaux Arts Ball, a historic social event whose participants included not only people but also iconic buildings, in the form of costumes and other kinds of stage design elements. Beaux Arts Ball 4.0 is an interactive mixed reality environment, in the tradition of the Architectural League of New York's ball, within the context of the fourth industrial revolution.

We explored how to bridge the physical and digital worlds with a system of sensors in a spatial context, expanding upon current forms of mixed reality experience. In this process, both the human body and the robots were designed as 'aggregated' characters, whose behaviors and performances help build the 'aggregated' environment. The process culminated in an architecturally augmented robotic performance. An observer's position and point of view are tracked in real time to reveal the augmented environment, complete with avatars of the tele-present participant and digitally augmented physical robots. The digital avatar and the augmented KUKA robots are actors in the synthetic scene; they interact with each other based on participants' input and on distinct behavioral patterns derived through machine learning (Fig. 1).

The research was based on the HTC VIVE VR system, a consumer-grade VR device, and two KUKA 6-axis KR150 articulated robotic arms. The main platforms and programs used were Unity 3D, SteamVR, and Autodesk Maya, in addition to robot drivers that use UDP/IP (User Datagram Protocol) to exchange data with the robots.

**Fig. 1.** Telepresence composition diagram

# **2 Related Works**

Originating with the École des Beaux-Arts ball in France, the ball included costume design, cross-dressing acts (often from human to building), iconic floats on the river Seine, and other antics that questioned and re-interpreted the presence of the human and required the participation of design forms in a social space (Fig. 2). The design elements in the form of costumes and sets were intended to be participants, contributing to the complexity of social interactions through the creation of an alternative reality that was visually and experientially different in its interpretation of the human body and its adjacency to other bodies. The spirit of the Beaux Arts Ball and other historic costume parties, in which human bodies were altered and merged with conceptual design content, serves as a precedent for many social VR platforms today. These platforms provide humanoid avatars for tele-present human participants in a virtual environment and automated non-physical elements, in the form of simulation and interface, to help streamline the whole communication process.

Virtual reality has been used by architects for design concept presentation. In their paper, Schnabel, Wang, and Kvan stated that the virtual environment gave architects an opportunity to express and explore their imagination more easily [1]. Beyond adding a virtual entity to the real-world view, VR technology also enhanced collaboration between design team members [2]. The collaborative interface allowed users to share the virtual space with the other party, promoting collaboration. Some collaborative interfaces even used different viewports in VR and AR to support different collaborative roles [3]. However, much research in the VR and AR space has focused only on collaboration between users in either VR-only or AR-only environments. The most recent work combining VR and AR space for a collaborative experience was CoVAR [14], which used the depth data of the physical environment captured by AR devices to construct a VR environment for other users.

**Fig. 2.** Photo from 1931's Beaux arts architect ball [15]

The most common multi-user virtual environment approaches focus on representation and interaction purely within the virtual space. With the launch of low-cost head-mounted displays (HMDs), networked mixed reality environments have gained popularity in the remote collaboration field. MR occupies the continuum between purely real environments and purely synthetic virtual environments, merging them together. Strauss et al. presented "Murmuring Fields" as a mixed reality shared environment installation for several users based on a decentralized network. In this installation, "Murmuring Fields" is a soundscape reacting to the users' body movements: movement triggers sound in the virtual space which can be heard in physical space [4]. Georges and Cédric introduced a setup and framework for an avatar-staging theatrical mixed reality application. Their research focused on the relationship between performer, avatar, and audience in an environment mixing 3D digital space and physical 'traditional' staging space [5]. However, there has been very limited research on the mixed reality collaborative experience in which users and agents work on the same 'project' at the same time.

Human-robot collaboration is a new trend in industrial robotics as part of the Industry 4.0 strategy, whose main purpose is to create an environment in which humans and robots work collaboratively. Even though the technology is still in its infancy, some researchers have developed applications for human-robot collaboration. Exquisite Corpus was an installation by Sougwen Chung, in which she painted with her three robotic collaborators, exploring a process of human and robotic co-creation [6]. In a series of papers, Dragone and Holz raised the idea that displaying a humanoid avatar on the robot platform could help people understand the robot's status more effectively [7]. Similarly, Dragone and Holz presented a novel methodology that combines the physical robot body with a virtual character displayed through a mixed reality overlay [8]. Our experiments complement that work, further exploring the mixed reality robot/avatar design space.

In this paper, we established a human-robot collaborative mixed reality interaction to let users experience the real-time construction process of an 'aggregated environment'.

# **3 Methodology and the Main Procedure**

Each participant wore an HTC VIVE headset, which determined the exact x, y, z position and rotation of the participant's field of view within the ecosystem of sensors and streamed a virtual-world camera to screen projections. The screens and projections represent a portal from the physical world to the digital world. Additionally, the robots were augmented as avatars; they react in real time in the scene and act as separate entities.

### **3.1 Virtual Aggregated Environment**

In nature, large masses of granular substances such as sand, stones, and gravel form different kinds of landscapes and objects through processes of erosion and accretion. We imitated this natural behaviour in architecture and created our own configuration [9].

In this application, we explored the construction of aggregated spatial enclosures from designed granular materials consisting of a large number of particles. Three types of particles with different behaviours were defined: convex spheres, which can flow; cubes, which can be used to form edges; and convex hexapods, which can interlock into self-supporting structures. These particles were not bonded together; they interact only through frictional contact. Such unbound granular structures show the unique property of being both stable like a solid material and reconfigurable like a fluid.

**Fig. 3.** Avatar behaviors and control diagram

The construction event was executed with the participation of an avatar and two augmented robots. During the experience, users could control the avatar and robots to generate and shoot particles into pre-made transparent boundary container molds. We selected a few typical architectural elements, such as columns, hexagonal wall structures, and landscape, to visualize the whole process (Fig. 3).

### **3.2 Avatar**

The avatar is a remote, spatial, and abstract representation of the user/participant in this event. It trans-locates the physical user from their actual location into the scene. The avatars were designed according to the formal and behavioral characteristics of the 'aggregated' environment. During the 'construction scene' experience, users could use physical gestures such as shaking off, jumping, and hitting to interact with the particles, or could use pre-programmed buttons to execute a specific task, such as generating particles on their own body or shooting particles into the scene (Fig. 4).

**Fig. 4.** Avatar behaviors and control diagram

The system transforms the sensor data obtained from the user's head-mounted display (HMD) and two hand controllers in the HTC VIVE lighthouse tracking system, which detects the exact location of the devices in a room-scale environment. All the sensor data were translated into position and orientation information in world coordinates with the commercially available Unity 3D plug-in OpenVR. The position and orientation of the user's head and two hands were updated every frame to determine the velocity, direction, and other information needed for motion reconstruction.

A process was presented within which an individual avatar was created using a simple 3-point tracking system and moved with inverse kinematics. Considering that in our application all interactions were implemented using the two hand controllers, users pay more attention to the motion of the upper limbs and to communication latency than to full-body reconstruction accuracy. We therefore selected the Limb IK algorithm, which comes with the commercially available Unity 3D plug-in Final IK. It has a shorter communication time, a better Mean Per Joint Position Error (MPJPE), and a better Mean Per Joint Rotation Error (MPJRE), per Table 1 [10].


**Table 1.** Comparison between different IK algorithms
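For intuition only, the sketch below shows the kind of closed-form, law-of-cosines solve that a limb IK performs for a two-segment chain in the plane. The actual Final IK Limb solver used in Unity is more general (full 3D, bend goals, and constraints); the function, its names, and its planar simplification are illustrative assumptions, not the plug-in's implementation.

```python
import math

def two_bone_ik(l_upper, l_lower, target_dist):
    """Planar two-bone IK: given upper and lower segment lengths and the distance
    from the root (shoulder) to the target, return (shoulder_bend, elbow_interior)
    angles in radians via the law of cosines."""
    # Clamp the target distance to the reachable range of the two-segment chain
    d = max(min(target_dist, l_upper + l_lower - 1e-6), abs(l_upper - l_lower) + 1e-6)
    # Interior angle at the elbow (opposite the root-to-target side)
    cos_elbow = (l_upper**2 + l_lower**2 - d**2) / (2 * l_upper * l_lower)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Angle at the shoulder between the upper segment and the root-to-target line
    cos_shoulder = (l_upper**2 + d**2 - l_lower**2) / (2 * l_upper * d)
    shoulder = math.acos(max(-1.0, min(1.0, cos_shoulder)))
    return shoulder, elbow
```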

During the experiment, we instructed the user to take a standard T-pose to capture the world positions of the left hand, right hand, and head, from which the height of the virtual body was estimated. We performed a random dancing experiment to verify the validity and effectiveness of our method. As shown in Fig. 5, random poses from a free dancing process demonstrate that our method can estimate feasible and stable behaviors. For the upper body, the IK works well to reconstruct the arm and upper-body motions from the 3-point tracking data. For the lower body, the legs and hips are centered beneath the head and chest, which helps to obtain a stable and natural lower-body orientation.

The goal of this method was to reconstruct visually appealing motion in real time from 3 tracking points; the precision of the result was not our priority, since we had no tracking data for the lower body. In the application, the user's view was blocked by the HMD, so they could not see the difference between the avatar and themselves as long as the reconstructed motion was natural and stable. It was therefore worth minimizing the number of tracking devices rather than improving precision.

**Fig. 5.** Motion reconstruction capabilities, reconstruction results of random dancing

### **3.3 Robotic Movement**

Two augmented robots were the remote and abstract representations of the physical robots in this event. The augmented robots were also designed to work in connection with the aggregated environment, as they were set to be assistants to the avatars in the 'construction scene'. During the experience, users could use a controller button to let one robot generate and shoot particles into the scene to help construction, while the other robot was set to perform one specific pre-programmed job, such as sweeping the particles already generated by the robot and the avatar to arrange the scene (Fig. 6).

A complete 3D model of the KUKA robots was assembled and imported into Unity as a .fbx file. The model was imported as a tree of connected joints with constraints between them, so that each part is a child of the previous part starting from the base. After being imported into Unity, each part of the robots, including the aggregated elements, was assigned to a different game object and sorted into a hierarchical order corresponding to the DOF links of the real-world robots (Fig. 7).

**Fig. 6.** Robot behaviors and control diagram

**Fig. 7.** Virtual 'Robot Cage' arrangement

There were two game object groups set up for each robot: 'Robot\_GRP' (the actual game object seen in the scene) and 'Robot\_Ghost' (invisible in the scene). Each group contained the same hierarchical tree structure of joints and game objects. In our case, the tracking data was extracted from the left-hand controller of the HTC VIVE once per frame in Unity. These data were used to generate the robot parameters that define the actual robot's movement. We therefore used a data filtering process to eliminate outliers, which helps the robot move more smoothly and avoids collisions caused by unstable controller movement. This process takes place in 'Robot\_Ghost'. The 'clean' tracking data is then sent to 'Robot\_GRP' to calculate the joint rotation data, which is sent to the actual robot over a UDP connection. In this way, one more layer of movement protection is added between the actual robot and the hand controller, and the movement of the actual robot and the virtual robot avatar match perfectly. The data filter is given by the following equation:

$$R_{A_i} = a \cdot R_{A_i} + b \cdot R_{A_{i-1}} \tag{1}$$

where $R_{A_i}$ is the current position of the target object, $R_{A_{i-1}}$ is the previous (filtered) position, and $a$ and $b$ are float weighting factors that help find an optimized, smoother path between position samples.
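A sketch of this smoothing filter applied per frame to the controller position stream; the assumption that $b = 1 - a$ and the value of $a$ are illustrative, since the coefficient values used in the installation are not given.

```python
def filter_position(raw_pos, prev_filtered, a=0.6):
    """Blend the current raw target position with the previous filtered one (Eq. 1).
    raw_pos, prev_filtered: (x, y, z) tuples; a is the float weighting factor,
    with b assumed to be 1 - a so that the weights sum to one."""
    b = 1.0 - a
    return tuple(a * r + b * p for r, p in zip(raw_pos, prev_filtered))
```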

By implementing IK (inverse kinematics) [11] based on the transformation matrix of the target object of 'Robot\_GRP', we computed a set of generalized parameters for each joint [12]. We attached update scripts to every joint so that the position and orientation of every joint are updated every frame. The generalized joint parameters of the virtual robot 'Robot\_GRP' are then sent to the actual robot over an Ethernet connection (UDP/IP).
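The transport step could look like the sketch below, sending the six joint values to the robot controller over UDP each frame. The controller IP address, port, and JSON message format are assumptions for illustration; the actual KUKA driver used in the installation defines its own message layout.

```python
import json
import socket

ROBOT_ADDR = ("192.168.1.50", 30000)   # assumed robot controller IP and port

def send_joint_parameters(joint_angles_deg):
    """Send one frame of generalized joint parameters (A1-A6, degrees) via UDP."""
    assert len(joint_angles_deg) == 6, "expects six axis values for a 6-axis KR150"
    message = json.dumps({"axes": list(joint_angles_deg)}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, ROBOT_ADDR)
```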

# **3.4 Portal**

The projection and screen represent a portal from the physical world into the digital world, providing a third-person view of the performance of the operator/avatar and the robots.

As shown in Fig. 8, an HTC VIVE tracker was mounted above the projection screen, defining the orientation of the screen in Unity 3D. During the experience, users held another HTC VIVE tracker. The orientation parameters of both trackers define the camera parameters in the virtual world, allowing users to peek into the virtual world through a portal.

**Fig. 8.** Portal – view from the physical world into the scene

# **4 Conclusion and Discussion**

In this paper, we proposed an immersive mixed reality human-robot collaboration experience that imitates the construction process of large-scale aggregated spatial enclosures. The application opens up new possibilities for architectural immersive experience, in which sensory qualities including color, depth, materials, and geometry are constantly blurred between the physical and digital worlds.

Beaux Arts Ball 4.0 envisions and plans for future cyber-physical social environments in which the participants are not limited to humans physically present in a particular space but also include robots, artificially intelligent beings in the form of sounds and simulations, digital and robotic avatars of other tele-present humans, and sensor-enhanced smart AI environments that are responsive to and actively engaged with the social life of their context.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Materialization and Construction**

# **Towards Self-shaping Metamaterial Shells: A Computational Design Workflow for Hybrid Additive Manufacturing of Architectural Scale Double-Curved Structures**

E. Özdemir(B) , L. Kiesewetter(B) , K. Antorveza(B) , T. Cheng, S. Leder, D. Wood, and A. Menges

Institute for Computational Design and Construction, Keplerstr. 11, 70174 Stuttgart, Germany {icd.mail,e.ozdemir,laura.kiesewetter,icd.mail, k.antorveza}@icd.uni-stuttgart.de

**Abstract.** Double curvature enables elegant and material-efficient shell structures, but their construction typically relies on heavy machining, manual labor, and the additional use of material wasted as one-off formwork. Using a material's intrinsic properties for self-shaping is an energy and resource-efficient solution to this problem. This research presents a fabrication approach for self-shaping double-curved shell structures combining the hygroscopic shape-changing and scalability of wood actuators with the tunability of 3D-printed metamaterial patterning. Using hybrid robotic fabrication, components are additively manufactured flat and self-shape to a pre-programmed configuration through drying. A computational design workflow including a lattice and shell-based finite element model was developed for the design of the metamaterial pattern, actuator layout, and shape prediction. The workflow was tested through physical prototypes at centimeter and meter scales. The results show an architectural scale proof of concept for self-shaping double-curved shell structures as a resource-efficient physical form generation method.

**Keywords:** Self-shaping wood · 3D printing · Robotic fabrication · Mechanical metamaterials · Material programming

# **1 Introduction**

Shell structures are advantageous in architecture because the geometric stiffness resulting from double curvature makes them highly material-efficient [1]. Recent advancements in computational modeling and simulation technology give architects and engineers sophisticated tools to design elegant and geometrically performative structures with ease [2, 3]. However, constructing double-curved designs with conventional construction techniques is laborious and requires complicated machining or large formwork with excessive waste production. On the construction site, forming or using large formwork depends highly on skilled manual labor and intricate scaffolding [4].

This research proposes an alternative to contemporary shell construction practice: By using a material's intrinsic properties, forming instructions can be embedded into the

© The Author(s) 2022

material system and conventional forming methods can be replaced with self-shaping. The structure is made of a hybrid material system combining self-shaping wood actuators with a tunable 3D-printed metamaterial patterning (MMP). It is designed to be fabricated in a flat configuration, reducing the complications and excess of 3D forming. It then autonomously self-shapes on-site into the pre-programmed double-curved geometry (Fig. 1). The shape can then be locked by constraining the edges or interior parts of the structure to avoid further deformation.

**Fig. 1.** (a) A lightweight, self-shaped double-curved shell structure manufactured additively from 3D-printed bio-composite metamaterial patterning (MMP) and integrated wood actuators. (b) Detail of the stiffness-tunable patterning and actuator connection

# **1.1 Self-shaping Structures**

The development of self-shaping systems is a growing field of research in material science and engineering. Geometric self-shaping mechanisms in planar lattices have been demonstrated on smaller scales [5, 6]. At a similar scale, prestressed reinforced elastic membranes have been used for deployable elements that spring from flat to curved when released [7]. These systems are scalable but require control of high stresses at deployment.

**Self-shaping Wood.** In the form of a bilayer, wood becomes a natural actuator that generates curvature. It is both hygroscopic and anisotropic, which makes it respond to its surrounding relative humidity (RH) with shape changes depending on its grain direction and wood moisture content (WMC) [8, 9]. When used for shaping large curved timber elements, wood bilayers are fabricated flat and self-shape during drying [10]. Although single curvature can be achieved from wood bilayers, the creation of double curvature in solid wood plates is only achievable to a limited extent and difficult to predict [11]. Self-shaping wood gridshells have also been investigated with finite element (FE) models and tabletop prototypes but have limited design freedom so far [12]. The principle of self-shaping wood actuators combined with a 3D-printed structure for single-curved geometries has been studied with prototypes on a similar scale [13].

# **1.2 Large Scale Additive Manufacturing (LSAM)**

Large Scale Additive Manufacturing (LSAM) is a high-resolution additive fabrication method that enables tuning on a local level. Lightweight, filigree structures can be designed and printed with specific local bending properties and bent into shape. These properties allow for the creation of shell structures with little to no waste production but still require extensive and coordinated physical shaping on-site [14].

### **1.3 Bending-Active Metamaterial Surfaces**

Auxetics are a class of metamaterials that show unusual behavior when stretched or compressed due to their engineered geometry that causes negative Poisson's ratio [15]. Unlike most other bending-active plate materials that create single curvature when bent [16], these materials can create double curvature [17]. By carefully designing these MMPs, the resulting bent geometry can be pre-programmed to exhibit anticlastic, single curved, and synclastic areas [18]. The potentials of this principle for building structures have so far only been explored on a small scale due to the significant forces required to bend larger structures.

# **2 Methods**

Our approach focuses on the computational design and fabrication of a 3D-printed auxetic MMP with integrated wood bilayer actuators to achieve self-shaping shell structures at a large scale. The concept is investigated through the development of a computational design workflow (Fig. 2), validated with a 1:10 scale physical prototype, and a full-scale demonstrator.

### **2.1 Design and Analysis**

The computational design workflow starts with the design of a double-curved shell geometry in which surface curvature is adjusted within a range related to the bending stiffness of the materials. The design intent is evaluated through curvature and structural analysis: While the structural analysis is used to determine where to reinforce the geometry with increased stiffness through a gradient in structural depth, the curvature analysis plays a key role in designing the MMP and determining the actuator locations and orientations.
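As a minimal illustration of how such a curvature analysis can drive actuator placement, the sketch below ranks cells by the magnitude of their first principal curvature and returns the strongest-curvature cells together with the in-plane direction an actuator would follow. The `Cell` container, its fields and the selection rule are hypothetical stand-ins for the project's actual workflow, not its implementation.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    id: int
    k1: float            # first principal curvature of the goal surface at the cell
    k2: float            # second principal curvature
    k1_dir: tuple        # in-plane direction associated with k1

def place_actuators(cells, n_actuators):
    """Rank cells by |k1| and return the cells with the strongest curvature
    demand, together with the orientation an actuator should follow."""
    ranked = sorted(cells, key=lambda c: abs(c.k1), reverse=True)
    return [(c.id, c.k1_dir) for c in ranked[:n_actuators]]

cells = [Cell(0, 0.8, 0.2, (1.0, 0.0)),
         Cell(1, -0.5, 0.4, (0.0, 1.0)),
         Cell(2, 1.2, -0.9, (0.7, 0.7))]
print(place_actuators(cells, 2))   # -> [(2, (0.7, 0.7)), (0, (1.0, 0.0))]
```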

Since predicting the behavior of a hybrid material system is complex, physical tests are used to understand the relationship between the wood actuators and the 3D-printed MMP.

**Fig. 2.** Computational design workflow overview: defining the system parameters, hybrid additive manufacturing (AM) of the prototypes, self-shaping through drying and evaluation

A simulation is also developed as a design tool allowing for quick design iterations with the feedback of both structural and shaping parameters. In parallel, scaled physical prototypes (1:10) are produced to test the simulation accuracy and to validate the self-shaping of the structure. The scaled physical prototypes are geometrically compared to the outcome of the simulation, and the self-shaping parameters are calibrated until satisfactory results are achieved. Based on a chosen configuration, the wood actuators are produced and the fabrication instructions for the LSAM are generated.

# **2.2 Physical and Digital Testing**

While physical tests (Fig. 3) are essential to study the complex relations between different design parameters, they are time- and material-consuming. The simulation tool developed for this research combines Timoshenko's bimetal theory [19] and a lattice-based model with varying expansion coefficients [20, 21] using an incremental solver for large deformation analyses in Karamba 3D [22]. Restrictive and active layers of the actuators are represented by a lattice of beams, rigidly connected in each node. The external stimulus for shrinkage in the active layer is represented by changes in the thermal load with the corresponding thermal expansion coefficient. The radius achieved by the bilayers in the simulation is compared and calibrated to actuated physical wood bilayers. The 3D-printed pattern is modeled as a system of vertically oriented shell elements with mechanical properties matching that of the printed material, rigidly connected to the lattice structure of the bilayers.
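For reference, the curvature that such a bilayer develops can be written in the form of Timoshenko's classical bimetal result, restated here in our own notation for a hygroscopic bilayer (the differential shrinkage strain between the active and passive layer plays the role of the thermal strain used as the stimulus in the simulation):

$$
\kappa = \frac{1}{r} = \frac{6\,\Delta\varepsilon\,(1+m)^{2}}{h\left[\,3(1+m)^{2} + (1+mn)\left(m^{2}+\frac{1}{mn}\right)\right]},
\qquad m=\frac{h_{1}}{h_{2}},\quad n=\frac{E_{1}}{E_{2}},
$$

where $h = h_{1}+h_{2}$ is the total bilayer thickness, $h_{1}, h_{2}$ the layer thicknesses and $E_{1}, E_{2}$ their moduli of elasticity; the curvature scales inversely with the total thickness $h$.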

# **2.3 Hybrid Additive Manufacturing**

First, the wood actuators are produced from passive layers and active layers that were equalized at 92% (RH) for a month. Subsequently, they are CNC milled into their final shapes and once again put back into the moisture chamber until the start of the hybrid additive manufacturing process.

**Fig. 3.** Initial cm scale tests showing variable MMPs' influence on curvature. (a) Physical tests with variations in cell geometry to produce anticlastic, single curved, and synclastic bending with a desktop fused deposition modeling (FDM) printer, polylactide (PLA) filament, and a wood veneer bilayer actuator. (b) Initial digital tests with variations in cell geometry (T = angular parameter determining concavity)

To 3D-print the MMP, a single continuous polyline is created for the G-Code generation [23]. The base layers are the first to be 3D-printed using robotic LSAM, leaving pockets for the actuators. Then, the actuators are inserted into the MMP, and enclosure layers are printed on top.
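A minimal sketch of this kind of conversion, under stated assumptions: a planar polyline, and a feed rate and extrusion factor that are placeholder values. It omits the layer sequencing, actuator pockets and enclosure layers handled by the actual G-Code generation [23].

```python
import math

def polyline_to_gcode(points, feed=1200, extrusion_per_mm=0.05, z=2.0):
    """Turn a single continuous polyline (list of (x, y) in mm) into extrusion
    moves; extrusion is accumulated proportionally to segment length."""
    x0, y0 = points[0]
    lines = [f"G0 X{x0:.2f} Y{y0:.2f} Z{z:.2f}"]          # travel to the start point
    e = 0.0
    for x1, y1 in points[1:]:
        e += extrusion_per_mm * math.hypot(x1 - x0, y1 - y0)
        lines.append(f"G1 X{x1:.2f} Y{y1:.2f} E{e:.3f} F{feed}")
        x0, y0 = x1, y1
    return "\n".join(lines)

print(polyline_to_gcode([(0, 0), (100, 0), (100, 50)]))
```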

### **2.4 On-site Self-shaping**

Once the fabrication is completed, the structure is wrapped in airtight packaging and flat-transported to the deployment site. There, it is allowed to dry in ambient humidity conditions, gradually decreasing the WMC and self-shaping into the final structure.

# **3 Prototyping and Results**

Before upscaling the system, the developed workflow was tested using a small-scale physical prototype (1:10) to ensure that the design parameters could meet the geometrical and behavioral expectations. After this preliminary study, the design was scaled up ten times and a full-scale demonstrator was produced to prove the scalability of the material system.

### **3.1 Small Scale Prototype**

A shell geometry was designed to self-erect from the ground as much as possible from a 60 cm × 20 cm flat pattern while exhibiting areas of anticlastic, single, and synclastic curvature (Fig. 4). The pattern was printed in three pieces and embedded with eleven actuators made from 3 mm thick beech-maple bilayers. The three pieces were assembled by using glue and were left in interior ambient conditions (40% RH) to self-shape as a single structure in 36 h (Fig. 4c). The resulting self-shaping geometry achieved reasonable approximations to the simulation prediction, although continuous curvature was not achieved most likely due to the gradient being interrupted between the pieces.

**Fig. 4.** Small scale prototype. (a) Gaussian curvature values of the goal geometry are used to adjust the MMP, actuators are placed on the cells with the highest first principal curvature. Depth gradient is applied to ensure curvature continuity. (b) FE simulation of self-shaping structure. (c) Results of self-shaping in 1:10 physical prototype

### **3.2 Large Scale Demonstrator**

The significant shift in the scale and self-weight required recalibration of the parameters in the computational design workflow. The cell sizes and actuator placement were adapted for the workspace (3.0 m × 1.5 m table) of a robotic LSAM setup incorporating a 6-axis industrial robot (Kuka KRC420) equipped with a Fused Granular Fabrication extrusion system (CEAD GB.V.). To achieve a stable geometry, local utilization values from the structural analysis were used to inform the depth gradient that reinforces the structure. The simulation was also adjusted to the increased cell size and pattern depth, as well as the new nozzle diameter (3 mm) and printing material, a cellulose-filled plastic bio-composite (UPM Formi 3D 20/19 - UPM). Two 3D-printed MMP specimens were subjected to a deflection test and compared to the simulation for calibration. The same approach was used to calibrate the model to the wood bilayers. A 16 mm thick bilayer, consisting of 12 mm beech conditioned to a WMC of *>*25% and 4 mm white wood was considered most suitable. For calibration, a sample actuator was left to dry unbound in indoor ambient conditions (ca. 25 °C/50% RH). The final actuator placement was based on the comparison of simulation outcomes with the design intent.

The demonstrator was robotically fabricated in four sequential steps (Fig. 5a) within which the actuators were placed and encased during 15 consecutive hours.

The demonstrator with the overall dimensions of 250 cm × 150 cm was produced in one piece using eleven embedded actuators (Fig. 5). The completed demonstrator was then left to self-shape in indoor conditions (ca. 21 °C/RH 35%–47%). After 72 h, only minimal shape change was achieved due to the large self-weight of the prototype. To aid in the shaping process it was then supported from its center of gravity. In the next 80 h, the hanging demonstrator bent to the intended geometry and remained in the

**Fig. 5.** Demonstrator fabrication. (a) MMP structure and sequential printing steps. (b) Detail of variations in the MMP and associated printing tool paths. (c) LSAM of the base layers. (d) Integration of the wood actuators within the tuned MMP before printing the enclosure layer

shaped configuration under its self-weight (Fig. 6a). Structural supports were added at the ends of the structure post-shaping to prevent any further movement in the structure due to ambient condition changes. The shape retention was verified by 3D scanning the demonstrator (Fig. 6d) after over four months in ambient conditions and comparing the point cloud to the simulation.
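A minimal sketch of how such a scan-to-simulation comparison can be quantified, assuming both the laser scan and the simulated geometry are available as point sets; the brute-force nearest-neighbour metric below is an illustrative stand-in, not the evaluation pipeline used in the project.

```python
import numpy as np

def deviation(scan_pts, target_pts):
    """Mean and maximum distance from each scanned point to its nearest
    point on the simulated geometry (both given as N x 3 arrays)."""
    d = np.linalg.norm(scan_pts[:, None, :] - target_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.max()

scan = np.random.rand(200, 3)                              # stand-in for the 3D scan
target = scan + np.random.normal(0.0, 0.01, scan.shape)    # stand-in for the simulation
print(deviation(scan, target))
```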

**Fig. 6.** Demonstrator. (a) Self-shaping sequence of the large-scale demonstrator over 152 h. (b) Predicted geometry from the simulation. (c) Self-shaped structure in the actuated state. (d) 3D laser scan of the self-shaped geometry with areas of synclastic (+/+), anticlastic (+/-) and single curved (0) bending

# **4 Discussion**

The large-scale demonstrator showcases the potential of scaling up self-shaping wood and 3D-printed MMP to achieve double-curved shell geometries. We developed a computational design workflow for pre-programming self-shaping surfaces with goal-geometry-oriented auxetic pattern generation and actuator orientation, resulting in a structure with embedded self-shaping information. Our simulation proved to be capable of predicting both the self-shaping process (Fig. 4) and the final geometry (Fig. 6b, d), serving as a useful design tool. Finally, we implemented an optimized printing sequence for the auxetic metamaterials, used during the robotic LSAM process. While the pre-programming of geometry with the developed workflow and the upscaling of this material system were successful, the self-shaping process did not occur entirely as intended. The most notable issue resulted from the separation of the actuators and the MMP, which caused stress-related failures and deviations from the model. This could have been avoided if the prototype had been supported from the start of actuation, and by better integration of the actuators.

To improve the accuracy of the model, future research could incorporate moisture dependency of the mechanical properties of wood. The presented model was developed by relying on the inherent structural benefit of double-curved surfaces, therefore further work is needed to verify that the self-shaping is compatible with structural design requirements. Form stability after actuation is a known requirement of self-shaping processes applied in construction. The addition of supports at the edges of the structure or the usage of tension cables to lock areas of the geometry as used in gridshell structures could increase the structural stability after self-shaping. Another interesting approach is to lock the geometry using integrated snap-fit locking mechanisms (Fig. 7) that could be included in the printing of the MMP.

**Fig. 7.** Initial development of an integrated 3D-printed snap-fit locking mechanism. (a) Flat state diagram. (b) Bent state diagram. (c) Physical prototype flexible state. (d) Physical prototype locked state. (e) Locking mechanism detail

# **5 Conclusion**

A computational design workflow was developed for designing and predicting self-shaping hybrid structures. The framework was tested through two physical prototypes at different scales, demonstrating how MMP can be used for self-shaping architectural scale double-curved structures. This system can be adopted for the construction of shells that are typically structurally well-performing, but which are usually avoided due to the difficult production and erection processes. We envision a construction approach that achieves complex 3D geometries from simplified 2.5D fabrication, with the potential of reducing carbon footprint through flat-packed transportation and decreasing labor and scaffolding by self-shaping on-site (Fig. 8). With further improvements to the fabrication setup, the research presented offers a low-impact solution for the construction of complex architectural structures. In this way, it contributes environmentally and technologically to the way we build, taking a step towards more sustainable construction methods.

**Fig. 8.** Concept for the self-shaping of a component based double-curved shell structure

**Acknowledgements.** This research was conducted as an M.Sc. thesis in the framework of the Integrative Technologies and Architectural Design Research (ITECH) program at the University of Stuttgart, led by the Institute for Computational Design and Construction (ICD), and the Institute of Building Structures and Structural Design (ITKE). The research was supported by the Sino-German Center for Research Promotion: DFG and NSFC through the project Performative Design Methodology based on Robotic Fabrication for Sustainable Architecture (GZ 1162). T. Cheng, S. Leder, D. Wood, A. Menges acknowledge the support of the German Research Foundation under Germany's Excellence Strategy - EXC 2120/1 – 390831618. Printing materials were sponsored by UPM Formi. The authors especially thank Riccardo La Magna for his advice on auxetic metamaterials, Michael Schneider and Michael Preisack for their help in the faculty wood workshop, Urs Basalla and the Institute for Engineering Geodesy for laser scanning, Zied Bhiri, and BEC GmbH and Lucas Janssen of CEAD GB.V. for their extended efforts in robotic extrusion integration.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Additive Manufacture of Cellulose Based Bio-Material on Architectural Scale**

Yimeng Wei1,2(B) , Areti Markopoulou1, Yuanshuang Zhu2, Eduardo Chamorro Martin1, and Nikol Kirova1

<sup>1</sup> Institute for Advanced Architecture of Catalonia, Barcelona, Spain yimeng.wei@iaac.net

<sup>2</sup> Shanghai Digital Fabrication Engineering Technology Center, Shanghai, China

**Abstract.** There are severe environmental and ecological issues when the architecture industry is evaluated with LCA (Life Cycle Assessment), such as the CO2 emissions caused by the high temperatures necessary for producing cement and the significant amounts of Construction and Demolition Waste (CDW) from deteriorated and obsolete buildings. One way to address these problems is bio-material. Cellulose and chitin are the first and second most abundant substances in nature (Duro-Royo, J.: Aguahoja\_Programmable Water-based Biocomposites for Digital Design and Fabrication across Scales. MIT, pp. 1–3 (2019)), which implies significant potential for architectural-scale production. Meanwhile, their renewability and biodegradability make them well suited to addressing the current problem of construction pollution. The purpose of this study is to explore cellulose-based biomaterial and bring it into architectural-scale additive manufacture, with attention in the material development to the time of solidification, the control of shrinkage, and mechanical strength. So far, the experiments have proved the possibility of developing a cellulose-chitosan-based composite into a 3D-printing construction material (Sanandiya, N.D., Vijay, Y., Dimopoulou, M., Dritsas, S., Fernandez, J.G.: Large-scale additive manufacturing with bioinspired cellulosic materials. Sci. Rep. **8**(1), 1–5 (2018)). Moreover, the research shows that the characteristics of the composite (such as waterproofing, bending, compression, tensile strength and transparency) can be enhanced by different additives (such as xanthan gum, paper fiber and flour), which means it can be customized into various architectural components based on performance-directional optimization. This solution has a positive effect on reducing environmental impact and is of great significance in moving the architectural construction industry towards a more environment-friendly and smart state.

**Keywords:** Chitosan · Cellulose · Biomaterial · Performance directional optimization · Robotic construction · Additive manufacture

# **1 Methodology and Proposal**

### **1.1 Methodology\_A Multi-Scalar Approach**

This thesis conducts material research and testing at three different levels: MICRO, MESO and MACRO, and finally applies the results at architectural scale [4].

The MICRO research will focus on the microstructure of the crystallization reaction of chitosan-cellulose mixed materials, as well as the intermolecular reaction of other additives.

The MESO research will focus on the testing of various properties of the mixed material, including the mechanical properties of the dried sample and the printability of the mixed material in wet state.

In the MACRO research, new structural forms suitable for additive manufacturing and robotic construction are studied, and the cellulose-chitosan-based bio-material is used to complete architectural-scale printing work (Figs. 1, 2, 3).

**Fig. 1.** MICRO: Intermolecular Mechanism and Reaction: (a) Cellulose with chitosan (1g chi in 1%w/w acetic acid). (b) Paper fiber with chitosan (1g chi in 1%w/w acetic acid).

**Fig. 2.** MESO: Material Innovation and Properties experiments: (a) Sample of Chitosan-Cellulose based Biomaterial in the dry state. (b) 3D-Printing accumulation test of Chitosan-Cellulose based Biomaterial.

### **1.2 Proposal**

Looking from the current issues in the construction industry to the current academic research directions in each field, it is not difficult to find an intersection: new materials (such as bio-materials), grafted onto new means of production (robotic construction) and new construction methods (additive manufacturing), in order to practice new design concepts (BESO).

### **1.2.1 Proposal 1: Chitosan-Cellulose Based Biomaterial (Fig. 4)**

Cellulose and chitin are the first and second most abundant substances in nature, which implies significant potential for architectural-scale production. Meanwhile, the mechanical properties can be enhanced with certain additives, such as pectin, xanthan gum, or even recycled paper fiber. Moreover, the printability of the mixture makes it possible to work with robotic 3D printing.

### **1.2.2 Proposal 2: Large Scale Robotic 3D-Printing (Fig. 5)**

A proper fabrication method is strongly required once we have a suitable biomaterial for architectural application. The approach this thesis focuses on is large-scale robotic 3D printing. This kind of biomaterial can also be used in traditional manufacturing ways, such as model casting, but with the help of robotic 3D printing we can significantly increase construction efficiency and achieve more organic forms.

# **1.2.3 Proposal 3: Bidirectional Evolutionary Structural Optimization [3] (Fig. 6)**

Once the material and the fabrication method are decided, we need to think about how to design accordingly. This thesis explores the application of the biomaterial at architectural scale, which means it should be a structural part, like a beam or pillar. With the idea of BESO, we can work out a reasonable and organic result.

**Fig. 4.** Proposal 1 **Fig. 5.** Proposal 2 **Fig. 6.** Proposal 3

# **2 Experimental Procedure**

# **2.1 Micro\_ Intermolecular Mechanism and Reaction**

# **2.1.1 Cellulose Based Biomaterial Innovation**

From the case study, we learned that chitosan binds all fiber-like substances, such as cellulose, wood fiber or paper fiber, during the crystallization reaction. However, even at the most suitable ratio, such as 1:8 (chitosan : pure cellulose), the chitosan and cellulose mixture does not have adequate mechanical strength for architectural structural application. In this MICRO-level research, we explore several additives to improve the pure chitosan-cellulose composite. The aim is to enhance the strength and reduce the shrinkage.

# **2.1.2 Intermolecular Mechanism\_Optical Microscope Observation Research**

We made a chitosan solution (1 g chitosan in 1% w/w acetic acid) as the base reference for the MICRO-level research. At first the molecules are all separate (Fig. 7), but once cellulose and chitosan are brought together (Fig. 8), the molecules start to combine (Fig. 9).

From this we can see that the chitosan solution binds all the fiber-like substances during the crystallization reaction, much like concrete and rebar. This is the basis of the remarkable mechanical properties of the chitosan-cellulose-based biomaterial.

# **2.2 Meso \_ Material Innovation and Properties Experiments**

# **2.2.1 Additives Research**

In order to strengthen specific characteristics of the cellulose-chitosan composite, additives are necessary. The current problems of the pure composite are mainly low mechanical strength

**Fig. 7.** Cellulose only (1g chi in 1%w/w acetic acid)

**Fig. 8.** Cellulose with chitosan (1g chi in 1%w/w acetic acid)

**Fig. 9.** Paper fiber with chitosan (1g chi in 1%w/w acetic acid)

and heavy drying shrinkage. To address these problems, we have to look into certain additives [5].

### **The Optional Additives**

Gelatine: to increase the mechanical strength and speed up the solidification process.

Xanthan Gum: to make the composite more stable and homogeneous, and more suitable for printing.

Glycerin: to enhance flexibility and generate a bio-plastic; it may be suited not to structural members but to other architectural parts.

Flour: to decrease the drying shrinkage.

Paper Fiber: to decrease the drying shrinkage and enhance the mechanical strength.

Pine Resin: to speed up the solidification process.

### **Experiment Recording (Fig. 10)**

Sample 01: 1g cellulose + 1g chitosan (1g chitosan in 1%w/w acetic acid)

Sample 02: 1g cellulose + 1g gelatine + 1g chitosan (1g chitosan in 1%w/w acetic acid)

Sample 03: 1g cellulose + 1g glycerin + 1g chitosan (1g chitosan in 1%w/w acetic acid)

Sample 04: 1g cellulose + 1g Xanthan Gum + 1g chitosan (1g chitosan in 1%w/w acetic acid)

Sample 05: 1g cellulose + 1g Pine Resin + 1g chitosan (1g chitosan in 1%w/w acetic acid)

Sample 12: 1g cellulose + 1g flour + 1g paper fiber + 1g chitosan (1g chitosan in 1%w/w acetic acid)

### **Summary**

By setting up a control-group experiment, we found that some additives can greatly improve the properties of the chitosan-cellulose composite. Xanthan gum makes the mixture more stable and easier to print. Flour enhances the mechanical strength and reduces the drying shrinkage. Paper fiber can significantly reduce the drying shrinkage since it offers a macro-level fiber system that complements the micro-level one (cellulose).

**Fig. 10.** Sample 01–12

# **2.2.2 Ratio Research**

A proper ratio is necessary for this kind of composite to reach its best operating state. From the case study we learned that the most reasonable ratio of chitosan : cellulose is 1:8. However, according to our experiments, this number changes significantly if the type of cellulose is different.

### **The Optional Cellulose**

Carboxymethyl cellulose: 1:12

Microcrystalline cellulose: 1:10

Nano fiber cellulose: 1:8

Pure fiber cellulose: 1:8

Lignocellulosic: 1:6

### **Observation**

To reach a similar density and viscosity, different celluloses require different ratios [6].

As mentioned above, the ratio can differ significantly once the type of cellulose changes. Basically, the proportion of cellulose grows higher as the fiber size grows smaller.

# **Experiment Recording (Fig. 11)**

Sample 13: 10g cellulose + 1g flour + 1g Xanthan Gum + 10g paper fiber + 1g chitosan

Sample 14: 8g cellulose + 1g flour + 1g Xanthan Gum + 8g paper fiber + 1g chitosan

Sample 15: 6g cellulose + 1g flour + 1g Xanthan Gum + 6g paper fiber + 1g chitosan

Sample 16: 8g cellulose + 1g flour + 2g Xanthan Gum + 8g paper fiber + 1g chitosan

Sample 19: 8g cellulose + 3g flour + 1g Xanthan Gum + 8g paper fiber + 1g chitosan

Sample 20: 8g cellulose + 2g flour + 1g Xanthan Gum + 8g paper fiber + 1g chitosan

(Samples 13–20: 1 g chitosan in 3% w/w acetic acid)

### **Summary**

According to the sample testing results, the best ratio so far is 8 g cellulose + 2 g flour + 1 g xanthan gum + 8 g paper fiber + 1 g chitosan. In this state the shrinkage is minimal and the printability is the best.

**Fig. 11.** Sample 13–24

# **2.2.3 Properties Test**

**Lightweight Test Based on Water Absorption and Evaporation (Fig. 12):** When the material-making process is finished, the result is a wet, printable composite. Here we test how much the weight reduces after it dries. We made six groups of samples, weighed each before and after drying, and calculated the weight change.

**Conclusion:** About 70% of the weight is lost after drying. This is a significant advantage for architectural application: it is a very lightweight material.

**Shrinkage Test (Fig. 13):** Shrinkage is a problem for all biomaterials. After drying, the cell structure loses water and the original form can never be fully maintained. Here we also created six groups for this test. We measured and recorded the size in three dimensions before and after drying, and calculated the shrinkage rate.

**Conclusion:** Around 5% size shrinkage occurs after drying. This is an average value; during the testing we noticed that the shrinkage direction and magnitude can differ slightly across positions on a single sample. Meanwhile, the mold material and the drying environment significantly influence the result, which means a proper drying process is the key to controlling shrinkage and the final form.
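The bookkeeping behind these two tests reduces to simple percentage calculations; the sketch below illustrates them with placeholder numbers rather than the measured data.

```python
def weight_loss_percent(wet_g, dry_g):
    """Percentage of weight lost through drying."""
    return 100.0 * (wet_g - dry_g) / wet_g

def linear_shrinkage_percent(dims_wet, dims_dry):
    """Average linear shrinkage over the three measured dimensions."""
    rates = [100.0 * (w - d) / w for w, d in zip(dims_wet, dims_dry)]
    return sum(rates) / len(rates)

print(weight_loss_percent(100.0, 30.0))                            # ~70 % weight loss
print(linear_shrinkage_percent((50, 50, 20), (47.5, 47.4, 19.1)))  # ~5 % shrinkage
```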

**Fig. 12.** Lightweight test based on water absorption and evaporation

**Fig. 13.** Shrinkage test

**Fig. 14.** Moisture test

**Fig. 15.** Compression test

**Fig. 16.** Bending test

**Moisture Test (Fig. 14):** Here we also created four groups for moisture testing. We measured and recorded the size in three dimensions for all samples and put them in the testing box, with the temperature set to 27 °C and the humidity to 80%, for two months. Every 15 days we measured the size and calculated the deformation ratio.

**Conclusion:** The deformation is 0%. According to the testing results there is no significant deformation of the testing samples. This means that, at least for form keeping, this kind of material is quite humidity-resistant.

**Compression and Bending Test (Fig. 15/Fig. 16):** Here we used an air-pressure device to do the compression and bending testing. This device was developed by IAAC and its maximum air pressure is 8 bar. We put one fully dried cubic sample under compression and bending testing. When the air pressure reached the maximum, nothing happened to the sample. After that we tried putting weights on the sample to reach the compression/bending limit, but even with 100 kg on it there was not a single crack on the surface.

The air-pressure testing calculation is based on the following formulas: the piston force is F = P × A, where P is the pressure in bar and A is the piston area in cm² (A = π × 1.6² ≈ 8.03 cm²), so F ≈ (P × 8.03) kg; the flexural strength is then R = (3 × F × L) / (2 × b × h²).
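As a hedged worked example, assuming the 8 bar device limit and the piston area given above:

$$
A = \pi \times 1.6^{2} \approx 8.03\ \mathrm{cm^{2}},\qquad
F_{\max} \approx 8.03 \times 8 \approx 64\ \mathrm{kg} \approx 630\ \mathrm{N},
$$

and the flexural strength then follows from the specimen span L, width b and height h via R = 3FL/(2bh²).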

**Conclusion:** The compression/bending resistance is remarkable. According to the formula, the calculated value is >10 N/mm², which is a significant advantage for application in architecture, especially for structural parts.

# **2.2.4 Data Recording**

Here is the data recording for all the sample testing. From Sample 2 we found that xanthan gum is very important for viscosity. From Sample 3 we found that flour strongly influences consistency. From Sample 4 we learned that paper fiber is the key to controlling the shrinkage.

As for the ratio testing, from Sample 13 we found that the best ratio of chitosan to microcrystalline cellulose is 1:10. From Samples 14–17 we learned that the ratio of xanthan gum to flour is 1:2. Finally, with Sample 20, we obtained the best ratio for this kind of chitosan-cellulose-based biomaterial composite.

# **2.2.5 Summary of Each Additive's Function**



**Fig. 17. Sample 20 parameters:** Drying time: 48 h (80 °C)/7 days (25 °C); Weight reduction after drying: 60%–70%; Shrinkage: 5%; Humidity resistance: 0% deformation (80% humidity, 2 w); Compression resistance: >10 N/mm²; Bending resistance: >10 N/mm²; Biodegradability: 100%; Recyclability: 100%

### **2.3 Macro\_Architectural Scale Application**

### **2.3.1 Geometry Design–Research Strategy**

This part focuses on the application of the biomaterial to architectural structure. Here we assumed a specific pavilion structure and used Ameba to run the BESO analysis to find the most reasonable state of all the load-bearing elements, thereby generating the skeleton of this pavilion (Fig. 20).

This process focuses on combining the advantages of BESO and additive manufacture (such as robotic 3D printing at architectural scale), and on the possibility of an advanced architectural design-construction workflow that is very different from traditional construction methods.

BESO is a form-finding method developed by the research team of Yi Min Xie at RMIT; Ameba is a plug-in based on Grasshopper that implements it.

In the following chart (Fig. 18), we can see the iteration information and the analysis state: the number of finite elements grows fewer and fewer while the total energy grows higher and higher. The result of the analysis is displayed in real time whenever there is an updated calculation (Figs. 19 and 20).

**Fig. 18.** Iteration information. **Fig. 19.** MISES diagram.

**Fig. 20.** Displacement diagram.

**Fig. 21.** Generation process

### **2.3.2 Generation Process**

Here we show the BESO analysis iterations (Fig. 21); there are 45 iterations in total, recorded every 5 iterations. BESO is an analysis system based on finite elements: in each iteration it automatically removes some low-efficiency elements and adds elements in the weak parts.

The remaining elements grow fewer and fewer as the iterations build up. Usually, in architectural analysis, the most reasonable state is reached at around 40–60 iterations.
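A minimal sketch of the removal loop behind this idea (an illustration with placeholder sensitivities, not the Ameba implementation): the elements with the lowest strain-energy sensitivity are deleted at a small rate each iteration until a target volume fraction is reached; in a real run the finite element model would be re-analysed and the sensitivities updated between steps.

```python
import random

def beso(sensitivities, target_fraction=0.4, removal_rate=0.02):
    """sensitivities: element id -> strain-energy sensitivity (higher = keep).
    Returns the surviving elements and the number of iterations used."""
    alive = dict(sensitivities)
    n_target = int(target_fraction * len(sensitivities))
    iterations = 0
    while len(alive) > n_target:
        n_remove = max(1, int(removal_rate * len(alive)))
        # remove the elements that contribute least to stiffness
        for eid in sorted(alive, key=alive.get)[:n_remove]:
            del alive[eid]
        iterations += 1
        # a real BESO run would re-run the FE analysis here and update
        # the sensitivities before the next removal step
    return alive, iterations

elements = {i: random.random() for i in range(1000)}   # placeholder sensitivities
kept, iters = beso(elements)
print(len(kept), iters)
```

With a 2% removal rate, reaching a 40% volume fraction takes roughly 45 removal steps, in line with the 40–60 iteration range mentioned above.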

In this tree pavilion design (Fig. 22), we obtained the ideal analysis result at iteration 45. The next step is to modify this basic form accordingly and optimise it into a more logical, smoother state that can be fabricated.

# **3 Additive Manufacture**

### **3.1 PC - Extruder - 3D Printer Workflow and Extruder Fabrication**

In the early stage of the biomaterial printability tests, we developed a small-scale printer suite based on the ANYCUBIC KOSSEL 3D printer. This series of processing equipment fully meets the printing requirements of pneumatic/motorized extrusion and continuous breakpoint printing, and its small size makes it suitable for a home office.

**Fig. 22.** Tree pavilion

### **3.1.1 Delta Extruder (Fig. 23)**

This delta extruder was developed for small-scale 3D printing. The main function of this device is to make sure that there are no bubbles inside the material, rather than to push the material out.

To make sure the material comes out in the right direction instead of going up, we placed two bearing washers to clamp the bearing.

# **3.1.2 Robotic Extruder (Fig. 24)**

This robotic extruder was developed for large-scale 3D printing. The length of the material tube is 60 cm, which gives a considerable printing reach.

To make sure the stepper motor outputs the right speed to match the motion of the robot arm, we added a gearbox to match the speed.

# **3.2 Biomaterial Printability Testing**

This part covers the printing tests with the biomaterial. There are three testing aspects: first, the extrudability of the biomaterial; second, its accumulability; and third, the decline-angle limitation.

# **3.2.1 Prototype 3D Printing Test Summary**

The operation process is smooth and the stability of the prototype is good. Quantitative analysis of the load-bearing capacity will follow. Based on the current research results, it is initially determined that the cellulose-chitosan-based biomaterials are feasible for 3D-printing structural members in the construction field (Figs. 25, 26 and 27).

**Fig. 23.** Delta extruder

# **3.2.2 Form Optimization for Large Scale 3D Printing**

### **Optimization 1 \_Opening Printing**

Limited by the conditions of the on-site equipment, it was not possible to do opening printing this time. However, based on previous printing experience, opening printing can be used for this kind of bio-material (Fig. 28).

### **Optimization 2 \_Decline Angle Limitation**

The angle setting for this test was relatively conservative, and a 40-degree safety angle was selected. However, based on the previous testing results, the upper limit can reach about 60 degrees, thereby creating a more efficient structure (Fig. 29).

### **3.2.3 Large Scale 3D Printing Test Summary**

This version is a buildability test conducted in the middle of the design process. The selected form is relatively conservative, with a maximum tilt angle of only 40 degrees. In addition, due to the site and equipment conditions during the epidemic and the difficulty of material transportation, the test material this time was a cellulose-PLA-based ecological material provided by a third party. This material is essentially the same as the early composition in this research, and its characteristics are also relatively close, so using it as a substitute for the final material keeps the deviation of the test results comparable (Fig. 30).

**Fig. 24.** Robotic extruder

**Fig. 25. Extrudability testing:** The extrudability of this kind of material is quite good. According to the air-pressure-powered extrusion testing, the operating gas pressure is 2 bar, which means it is easy to extrude and the operating conditions are quite safe.

**Fig. 26. Accumulability testing:** This test focuses on the accumulability of the composite. According to the test result, the accumulability is significant: we accumulated more than 100 layers and the stack remained very stable.

**Fig. 27. Decline angle limitation testing:** The decline angle limitation testing shows that, while the material is in the wet state, the angle limitation is around 20 degrees. But once we blow it with hot air to speed up the water evaporation, the angle limitation improves significantly.

**Fig. 28.** Large scale form optimization **Fig. 29.** Printing process

**Fig. 30.** Printing parameters. Prototype height: 1 m; Max decline angle: 40°; Prototype weight: 3.25 kg; Total routine length: 202 m; Nozzle diameter: 4 mm; Material amount: 8080 cm; Printing time: 6 h 25 min
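A rough figure implied by these parameters, ignoring travel moves, pauses and acceleration, is an average path speed of

$$
v \approx \frac{202{,}000\ \mathrm{mm}}{385\ \mathrm{min}} \approx 525\ \mathrm{mm/min} \approx 8.7\ \mathrm{mm/s}.
$$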

# **4 Conclusion**

# **4.1 Combination: Biomaterial + Additive Manufacture + Beso**

This thesis research focuses on the economic problems and environmental issues caused by today's construction industry [7], and proposes a possible solution combining biomaterial, additive manufacture and Bi-directional Evolutionary Structural Optimisation (BESO).

### **4.1.1 Biomaterial**

Cellulose and chitin are the first and second most abundant substances in nature, with natural characteristics such as biodegradability and renewability. Moreover, this thesis proved that the mechanical characteristics [8] can be enhanced or directionally optimised with specific additives [9], which means the material has great potential for architectural application.

### **4.1.2 Additive Manufacture**

With the cooperation of a new, booming construction paradigm, robotic labor and data-oriented management, the "printed house" is coming to the real world step by step. Additive manufacture is certainly one of the best ways to work with biomaterial. This thesis proved the possibility of 3D-printed biomaterial at architectural scale, which can be a meaningful inspiration for further research.

### **4.1.3 BESO**

In this thesis, we use BESO as a theory to generate structures. What should be stressed is that this is just one idea among many: how we use new materials, as well as new fabrication tools and methods, is always a multidisciplinary innovation. It can be BESO, it can be FGM [10], it can be any possible or seemingly impossible way. We tried BESO in this thesis because we wanted to confirm that this way of thinking is possible and, more importantly, to share an inspiration for further innovation.

**Acknowledgments.** I would like to express my gratitude to my thesis advisor Areti Markopoulou. This thesis would not have been possible without her guidance, dedication and inspirational insights on the intersection of architecture, biology and additive manufacture. I would like to thank Eduardo Chamorro Martin, a genius mechanist, and Nikol Kirova, a warm-hearted supervisor and friend. Their support and remarks were essential for the development of the thesis.

I would also like to thank Ricardo Mayor for his guidance on digital fabrication, robotics and computation. I would like to thank Ankita Alessandra Bob, Surayyn Selvan and Megan Yates Smylie. Their willingness to give time so generously has been very much appreciated.

Next, I would like to thank Anton Koshelev for his support on the robotic extruder. And Seçil Afşar, for her generous sharing of chitosan-cellulose work experience. Also, I would like to express my special gratitude to my classmate Doruk Yildirim for sharing the in-house knowledge on robotics and additive manufacturing with clay.

Finally, I will be forever grateful to my family and friends who have supported me for the last 9 years throughout different countries and continents, their encouragement has made all this possible.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Imprimer La Lumiere – 3D Printing Bioluminescence for Architectural Materiality**

Mette Ramsgaard Thomsen1(B) , Martin Tamke1, Aurelie Mosse2, Jakob Sieder-Semlitsch1, Hanae Bradshaw2, Emil Fabritius Buchwald1, and Maria Mosshammer<sup>3</sup>

<sup>1</sup> CITA, Royal Danish Academy, Copenhagen, Denmark {mette.thomsen,martin.tamke,jsie,efab}@kglakademi.dk <sup>2</sup> ENSAD, Paris, France {aurelie.mosse,hanae.bradshaw}@ensad.fr <sup>3</sup> University of Copenhagen, Copenhagen, Denmark maria.mosshammer@bio.ku.dk

**Abstract.** 'Imprimer la Lumière' examines the making of a bioluminescent micro architecture. The project positions itself inside a sustainability agenda. By exploring the use of light-emitting bacteria as a material for architecture, it asks what concepts, methods and technologies are needed for designing with living materials. The project devises new means by which to design with the luminescent Vibrio fischeri bacteria in a 3D printing manufacturing process based on extrusion principles. By combining the study of these living organisms and their appropriation through advanced robot-controlled 3D printing technologies, we establish a conceptual, material and technological framework for a bio-controlled bacteria growth and 3D extrusion process and a printable material based on agarose and gelatine.

**Keywords:** New materiality · Architecture · Bio-design · Robotic fabrication · Bioluminescence · 3D printing

# **1 Introduction**

As we enter an era of resource scarcity, architecture and design communities are urgently rethinking their material practices. New conceptual frameworks allow us to consider a bio-based material paradigm as an inspirational model (biomimetics), a co-worker (biodesign) or a technological platform (bio-technology) that allows the living to be reprogrammed (Collet 2013). Whether natural or synthetic, the development of biological manufacturing processes plays an increasing role in this context (Franklin and Till 2018; Terranova and Tromble 2017; Brayer and Zeitoun 2019), leading to the conceptualisation of a metabolic-driven material paradigm that shifts the perception of our surroundings from something inert to something essentially living (Ramsgaard Thomsen 2019) (Fig. 1).

'Imprimer la Lumière' is an interdisciplinary research enquiry sitting at the intersection of architecture, design and microbiology exploring the use of light-emitting bacteria

**Fig. 1.** Micro-architecture of 'Imprimer la Lumière'.

as architectural materiality. By investigating the 3D printing of bioluminescent bacteria, we question how architecture can be a host for an ecology of species in symbiotic coexistence (Beesley 2014). Although widely occurring in marine life and in some mushrooms and insects, bioluminescence remains an under-explored territory of investigation in architecture and design, used to date essentially to imagine and probe more sustainable ways of lighting. In this paper, bioluminescence is used as a means to question the critical thinking and appropriation of bacteria as an architectural material and how this changes the practice of architectural design and fabrication.

Our aim is to investigate the critical thinking and appropriation of living bacteria as an architectural materiality. To do so, we explore bioluminescence: a chemical form of light produced by many marine organisms and by some insects and mushrooms. While bioluminescent genes are used as markers and for imaging in biology and medicine, recent experiments in the bio-design community have explored bioluminescence as an alternative to public and domestic lighting (Myers 2012; Brayer and Zeitoun 2019). Where these experiments have predominantly focused on bioluminescent algae (Van Dongen 2014; Rodriguez 2016; Douenias 2015), we extend this research to bioluminescent bacteria.

In 'Imprimer la Lumière' we use bioluminescent bacteria to examine the metabolism of a living architecture. As any living organism, luminescent bacteria have a limited lifespan. Their appropriation into the built environment therefore induces an intrinsically temporal dimension to the conception, fabrication and experience of architecture, which implies not only the development of a new conceptual framework but also new processes, tools and know-how adapted to microbiologic life that are inseparable from a set of ethical challenges.

As a consequence, this paper asks what the conceptual, material and technological frameworks for bio-based architecture are, and how designing for living systems challenges our current models of design, registration, specification, fabrication and inhabitation. What are the new models by which we can support the design of metabolistic systems of living organisms and capture their limited lifespans, and how do living and nurturing become part of a new vocabulary for how we build?

The project is an interdisciplinary collaboration between architecture, design and marine biology undertaken by: CITA (Centre for IT and Architecture, KADK), Soft Matters group, Ensadlab (ENSAD) and Department of Marine Biology (Copenhagen University) (Fig. 2).

**Fig. 2.** Micro-architecture performance: the light emitted by the bacteria.

# **2 New Methods for a Living Architecture**

'Imprimer la Lumière' presents a set of 3D printed structures acting as a constellation of self-illuminating micro architectures. These micro architectures act as design probes (Ramsgaard Thomsen and Tamke 2009; Mossé 2018) that reflect on the actualisation of architecture as a host for an ecology of species in symbiotic coexistence. The method of the design probe positions the research across conceptual and technological investigations challenging the way we understand the material and performance of contemporary architecture while simultaneously questioning the technologies by which the design and fabrication of living systems can be studied.

Design can be understood in a generic sense as a cultural process of technological appropriation, meaning 'a way of adopting technology in our culture by accepting its influence as well as by influencing it' (de Winter 2002). To question how microbiology can become a new technological platform for architecture, we adopt a practice-based approach informed by the design probe culture (Gaver and Dunne 1999). In this context, a variety of design probes are developed to critically question how to create acceptance for new technological evolutions and discuss whether these changes are desirable. The micro-architectures discussed in this paper sit more specifically at the intersection of conceptual and material probes (Mossé 2018). As a conceptual experiment, they are probed to develop a speculative inquiry or narrative addressing the implications of future desirable cultural patterns. As a material probe, their primary role is to explore the crafts, materials and techniques through which this cultural pattern can be shaped, as well as their performance as an architectural materiality (Fig. 3).

**Fig. 3.** Vibrio fischeri cultures in nutritional solution over time from initial activation.

# **3 Developing 'Imprimer La Lumiere'**

The project is conceived over a series of experiments appropriating techniques for growing luminescent bacteria and developing the technologies for 3D printing the extrusion of their medium. 3D printing is explored as a means of liberating the forming processes of the medium to investigate how topology and surface treatment can drive the life cycles and therefore the light performance of the bacteria. In 'Imprimer la Lumière' we use a collaborative robot with a bespoke micro dispenser. This allows us to address an architectural scale of fabrication distinct from dedicated bioprinters that operate at smaller scales. The building of new 3D printing methods for collaborative robots also allows us to interface with programmable design environments, allowing a higher degree of control and steering of both the design and fabrication process.

# **3.1 Ethical Considerations**

With biodesign, ethical considerations become an intrinsic part of the design project. Designing with bacteria as a material for architecture means designing for and with the life cycles of living organisms, and allowing cohabitation within our environment. Where bacteria do die as part of this new architecture, they are also nurtured and invested into a new habitat. In 'Imprimer la Lumière' we work with biosafety level 1 bacteria, meaning that they are not genetically modified, do not pose a danger to their surroundings and are fully biodegradable.

In designing with Vibrio fischeri, we relocate a marine bacterium to the new host environment of the medium. In doing so we provide a monocultural environment, which optimises its living conditions but also makes it less resilient to change.

### **3.2 Designing the Medium for 3D Printing**

The first task is to understand the optimal living environment for the bacteria and design the medium in which it lives. The medium provides nutrition and holds oxygen for the bacteria to metabolise. In 'Imprimer la Lumière' the design of the medium is tailored to the fabrication process and incorporates the requirements for 3D printing. Through experimentation we developed a recipe to control the rheological properties of the medium, consisting of agar, gelatine, glycerine and nutritive media (3.2%, 1.8%, 2.8% and 6.5% respectively), with water making up the remainder. The recipe is developed in order to control the viscosity of the medium during the printing process and in its final state, and for use in fusion deposition. Agar is used as the basis of the medium for its melting and gelation properties; the amplitude between the two temperatures gives the medium durability and firmness (Whistler 1985). Due to this, it is widely used as a microbiological medium (Dhanapal et al. 2012). We use a common household cooking agar, which enables us to print with additional height compared to the lab-grade agarose media otherwise used in direct ink writing technologies (Kilian et al. 2017) (Fig. 4).

**Fig. 4.** Fabrication process in 'Imprimer la Lumière'.

In 'Imprimer la Lumière', we require the medium to be kept above the gelation point at all times during printing. To keep the medium cartridge that stores the material at a constant temperature before the 3D printing process, we developed a self-regulating heating unit. Gelatine is added to lower the gelation temperature of the mixture to 40 °C. This reduced temperature conserves energy, enables faster gelation and smooths the surface for a more homogeneous appearance. Additionally, glycerine is used as a plasticizer that increases the flexibility of the intermolecular connections between the agar's polymer chains (Arham et al. 2016). The medium is extruded through a high-precision micro-dispensing unit (ViscoTec ecoPen 700) with a self-sealing rotor-stator arrangement. The dispensing unit is carried by a 6-axis collaborative robot (UR5e). This allows the robot to be used inside laboratory conditions and to be moved to sterile environments for inoculation.

Through this design process, we built a strong understanding of the requirements for the medium and the 3D printing process. The medium needs to perform as a hybrid of scaffold and nutrition where the containing walls allow dispersion of nutrition and oxygen. During experimentation, we learnt that controlling the surface of the medium was crucial to promote the light-emitting capacity of the cultures. By having a high surface to volume ratio the bacteria is better exposed to oxygen, propagates better and its emitted light becomes easier to perceive. The parallel effort to control the structural capacity of the medium in a gelled state allowed us to develop design criteria for the probes as having a large surface, interior cavities and to build high (Fig. 5).

**Fig. 5.** Changing the composition of medium radically affects the living conditions of the bacteria, the structural capacity and surface quality of the structure

### **3.3 Designing a Living Architecture**

The design of the micro-architecture probes employs a differential curve growth algorithm. This allows us to maximise the surface to volume ratio of the structure and optimise the bacteria's light emitting capacity. The line differential growth algorithm also allows us to generate the robot print path as an intrinsic part of the design process allowing greater unity between processes of design and processes of making.

The generative algorithm employs strict form-generating rules to generate complex topologies. In 'Imprimer la Lumière', we adjust the rules to address structural and fabrication-based constraints such as maximum overhang, interconnectivity, layer heights and structural performance as a result of wall undulation.

The algorithm follows the logic of differential curve growth. Here, a base curve is iteratively subdivided in selective increments. The curve sections are form found using a physics engine to simulate forces between the subdivision points on the curve. On the subdivision points, sphere colliders force these segments to elongate, resulting in the differential growth of the geometry.

In order to achieve a suitable degree of surface complexity and to ensure a balanced distribution of complexity, we employ the differential growth algorithm in two stages: firstly to generate a satisfactory base geometry and secondly to grow the topology in three dimensions. Narrow passages, emerging from the generative process are detached allowing the probe geometry to branch. The splitting is conditioned by a straight skeleton calculation evaluating local surface area. The single branches are then calculated within the incremental growth process as individual objects within a unified system. This process ensures that the overall forces driving the form finding process apply equally to the topological whole and that structural and fabrication requirements are fulfilled. The process can be locally varied to account for structural support and gravity.
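A minimal two-dimensional sketch of the differential-growth idea described above, under stated assumptions: the subdivision threshold, repulsion radius and step count are placeholder values, and the project's implementation works in three dimensions with sphere colliders inside a physics engine.

```python
import math

def grow(points, steps=50, max_seg=0.4, repel_radius=0.8, repel=0.02):
    """Differential growth of a closed 2D polyline: long segments are subdivided
    and nearby points repel each other, so the curve elongates and folds."""
    for _ in range(steps):
        # 1. growth: subdivide segments that have become too long
        grown = []
        for i, p in enumerate(points):
            q = points[(i + 1) % len(points)]
            grown.append(p)
            if math.dist(p, q) > max_seg:
                grown.append(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))
        points = grown
        # 2. repulsion: the 'sphere collider' effect pushing the curve apart
        moved = []
        for i, p in enumerate(points):
            dx = dy = 0.0
            for j, q in enumerate(points):
                d = math.dist(p, q)
                if i != j and 1e-9 < d < repel_radius:
                    dx += (p[0] - q[0]) / d * repel
                    dy += (p[1] - q[1]) / d * repel
            moved.append((p[0] + dx, p[1] + dy))
        points = moved
    return points

# start from a small circle of 12 points and let it grow
circle = [(math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12)) for k in range(12)]
print(len(grow(circle)))   # the curve gains points as it elongates
```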

The branching structure generates a complex interior section with distributed hollows that act as vessels for bacterial growth. The complexity of the micro-architecture probes allow us to steer the emission of light through topology.

# **3.4 The Inoculation Process**

Lifecycle optimisation and the strict environmental requirements of the cultures demand new fabrication processes bringing new challenges and opportunities (Dade-Robertson 2019; Colette 2007; Pasquero and Poletto 2014). In 'Imprimer la Lumière', the manufacturing process requires the nurturing of the bacteria culture and its shielding from impending contamination. Activated from the lyophilized state, the culture is pre-grown in a nutritive media suspension before the printed probe is inoculated with a micro-dispenser. This allows for a stronger, recognisable glow, which can be attributed to the reintroduction of oxygen through diffusion. The inoculation process uses the robot for precise positioning: a digital model of the probe is used to compute the access path for the robot, and the bacteria suspension is then injected into the probe, filling its vessels (Fig. 6).

**Fig. 6.** Bacteria culture growing on medium over time with luminescence analysis.

During tests we observed an average activation time of three days. For the colony to survive, a sterile environment containing sufficient oxygen and nutrition needs to be ensured. Prior experiments have shown that the gelation of the agar prohibits diffusion of additional oxygen, which renders active air exchange obsolete. To understand the life cycle of the probes, the timeframe of the culture's growth and propagation needs to be considered.

### **3.5 Evaluation**

The probes are evaluated using two processes: observation and oxygen sensing. The first process uses imaging to register the bacteria's growth and propagation patterns across the surface over time. The images are taken in the dark to allow the best understanding of the bacteria's light-emitting properties. The colony is observed across 12 days, in which we observe firstly the growth of the colony around the inoculation point and the strengthening of the emitted light (day 2–4), and secondly a move of the emitted light away from the inoculation point (day 5–12). Our assumption is that the colony propagates into new medium as nutrition and oxygen are consumed.

To test this assumption, a second evaluation is undertaken using optical oxygen sensing based on the dynamic quenching of an oxygen-sensitive indicator dye by oxygen. Here, an optical oxygen meter (FireStingO2) with a retractable needle-type sensor (OXR230) from Pyroscience (pyroscience.com) is used for measurements of dissolved oxygen. Oxygen is measured at the surface of the bacterial culture and at different depths through the culture and the medium towards the bottom of the petri dish. Additional measurements are taken at the interface of the bacterial culture and medium, and stepwise further away from the bacterial culture at approximately 1 mm depth within the medium. A clear gradient in oxygen concentration became apparent, as oxygen depleted gradually from the medium (approx. 100% air saturation) towards the bacterial culture, which was completely anoxic (0% air saturation). An oxygen profile measured from the surface of the culture through the medium to the bottom of the petri dish showed similar results. The bacterial culture proved to be anoxic, and the medium below increased in oxygen concentration the further away from the bacteria the measurements were conducted (Fig. 7).

**Fig. 7.** Particle system simulating the progress of bacteria propagation through the medium.

### **3.6 Simulation of Bacteria Behaviour**

In order to understand the lifecycle of the bacteria, we develop a simulation of propagation through the medium. For this, we use a simple particle engine to approximate the printed geometry. The particles are equipped with a density attribute computed from the proximity of their neighbours. The particles closest to the inoculation points are activated as live particles. This attribute is used to control the propagation to neighbouring particles by counting up a 'lifetime' attribute at each time-step of the calculation. Neighbouring particles are then 'activated' in subsequent iterations. When a particle's 'lifetime' surpasses a defined maximum value, it transitions into a 'dead' particle, simulating the state where all oxygen and nutrition is consumed. No additional particles can be activated from this point. The simulated propagation speed and lifespan are calibrated using the observed imaging data from the physical probes.
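The sketch below illustrates this particle logic in minimal form. The neighbour radius, maximum lifetime, random point cloud and inoculation point are illustrative assumptions, not the project's calibrated values or scanned geometry, and the density attribute is omitted for brevity.

```python
# Minimal sketch of the particle-based propagation model described above.
import math
import random

NEIGHBOUR_RADIUS = 1.5   # distance within which particles can activate each other
MAX_LIFETIME = 10        # steps a particle stays 'live' before it 'dies'

class Particle:
    def __init__(self, position):
        self.position = position
        self.state = "dormant"      # dormant -> live -> dead
        self.lifetime = 0

def neighbours(p, particles):
    return [q for q in particles
            if q is not p and math.dist(p.position, q.position) < NEIGHBOUR_RADIUS]

def step(particles):
    newly_live = []
    for p in particles:
        if p.state == "live":
            p.lifetime += 1
            # Dormant neighbours are activated in the next iteration.
            newly_live.extend(q for q in neighbours(p, particles)
                              if q.state == "dormant")
            # Once oxygen and nutrition are notionally consumed, the particle
            # dies and can no longer activate others.
            if p.lifetime > MAX_LIFETIME:
                p.state = "dead"
    for q in newly_live:
        q.state = "live"

# Approximate the printed geometry with a random point cloud and inoculate
# the particle closest to an (assumed) inoculation point.
cloud = [Particle((random.uniform(0, 10), random.uniform(0, 10), random.uniform(0, 5)))
         for _ in range(500)]
inoculation = (5.0, 5.0, 4.0)
min(cloud, key=lambda p: math.dist(p.position, inoculation)).state = "live"

for _ in range(40):
    step(cloud)
```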

# **4 Conclusion**

'Imprimer la Lumière' sits at the intersection of biodesign and digital fabrication. It opens up new perspectives for the design and appropriation of bioluminescence as an architectural materiality. At a conceptual level, the project asks what happens when the material of architecture becomes living: what are the new concepts, methods and technologies that are needed to design for and with living materials?

The first step in 'Imprimer la Lumière' is to link bio-design agendas with digital fabrication and 3D printing. Our application of collaborative robots for the 3D printing of microbiological organisms allows us to extend the scale of fabrication and speculate upon the design criteria. In 'Imprimer la Lumière', the probes are informed by both the optimal living environment for the bacteria and the fabrication criteria for 3D printing. By controlling the rheological properties of the medium and its gelling, we can steer the light-emitting performance and design its structural capability to build complex topologies with interior cavities and vessels that optimise the living environment for the bacteria.

The second step is to build assumptions about how the geometry of the micro-architecture affects performance. These assumptions are made through empirical observation studies and evaluated through localised oxygen sensing. The assumptions are used as a basis to build prototypical models that can simulate and capture the light-emitting performance and propagation of the bacteria as it moves through the medium. 'Imprimer la Lumière' speculates on what the nature of future representations can be. Where traditional architectural representation emphasises the description of form through the geometry of extension, 'Imprimer la Lumière' develops volumetric models that change across time steps. These descriptions allow architects and designers to specify and describe the design and steering of material lifecycles and their associated performances.

The probes in 'Imprimer la Lumière' look like towers or dense cities in small scale. Conceived as micro-architectures, their aim is not to solve an architectural performance such as public or domestic lighting, but instead to probe what a living architecture could mean.

**Acknowledgment.** 'Imprimer la lumière' is a cross-disciplinary collaboration between CITA (Centre for IT and Architecture, KADK), Soft Matters group, Ensadlab (ENSAD) and Department of Marine Biology (Copenhagen University). The project has benefited from the support of the IFD sciences programme of Institut Français du Danemark (2018) and subsequent funding from the Danish Arts Foundation (2019). We thank Leuchtlabor GbR, Weiherhammer, Germany for their kind support. We thank the students of CITA Computation in Architecture (Aroni Roy, Tessira Reyes Crawford, Claudia Colmo, Izabella Banas, Kawtar Al Akel, Ke Lin, Nikhila Vedula, Youcheng Li, Carolin Feldmann) for their contribution to observational studies.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Bio Scaffolds**

N. Alima1(B) , R. Snooks1(B) , and J. McCormack2(B)

<sup>1</sup> School of Architecture and Urban Design, RMIT University, Melbourne 3000, Australia {natalie.alima,roland.snooks}@rmit.edu.au <sup>2</sup> Sensi Lab, Monash University, Melbourne 3000, Australia

Jon.McCormack@monash.edu

**Abstract.** 'Bio Scaffolds' explores a series of design tectonics that emerge from a co-creation between human, machine and natural intelligences. This research establishes an integral connection between form and materiality by enabling biological materials to become a co-creator within the design and fabrication process. In this research paper, we explore a hybrid between architectural aesthetics and biological agency by choreographing natural growth through form. 'Bio Scaffolds' explores a series of 3D printed biodegradable scaffolds that orchestrate both Mycelia growth and degradation through form. A robotic arm is introduced into the system that can respond to the organism's natural behavior by injecting additional Mycelium culture into a series of sacrificial frameworks. Equipped with computer vision systems, feedback controls, scanning processes and a multi-functional end-effector, the machine tends to nature by reacting to its patterns of growth, moisture, and color variation. Using this cybernetic intelligence, developed between human, machine, and Mycelium, our intention is to generate unexpected structural and morphological forms that are represented via a series of 3D printed Mycelium enclosures. 'Bio Scaffolds' explores an interplay between biological and computational complexity through non-anthropocentric micro habitats.

**Keywords:** Mycelium · Feedback systems · Material agency · Adaptive fabrication

# **1 Introduction**

In this paper, we present a novel cybernetic relationship entwining robotics and Mycelium growth. Through an adaptive feedback system of biological and computational agencies we explore the robotic infusion of Mycelium into biodegradable scaffolds. To orchestrate natural growth and decay over time, we posit a strategy where the degradation of form is catalyzed by robotic interaction between material, natural and computational agency. Our aim is to develop design techniques where biological materials influence the robot's movements and thus become a co-creator within the design process. We therefore examine the design tectonics that occur when enabling nature to co-direct the construction of architectural form. The methodology presented here draws from the adaptive processes of biology within a broader ambition to consider how our buildings can grow, adapt, self-repair and biodegrade. As László Moholy-Nagy stated,

**Fig. 1.** Universal Robot injecting mycelium liquid culture into 3D printed biodegradable scaffolds.

"architecture will only be brought to its fullest realization when the deepest knowledge of biological life is available" [2]. Within this context, we examine natural materials and demonstrate new relationships between form and environment to speculate on their architectural potential. Our research is therefore interested in the biological process of formation and ways in which it can contribute to design. We encourage other designers to rethink current relationships with nature by enabling a negotiation between biological growth and architectural intention. To demonstrate such a dialogue, here we present *BioScaffolds*: a sustainable approach and demonstration of novel feedback strategies for architects working with living materials (Fig. 1).

The integration of biological systems within architectural design and construction processes has a significant lineage and historical precedent. Many disciplines, from engineering through to design and computer science, have drawn on natural systems and processes as a rich source of design inspiration. Common methodologies of incorporating nature into design include biomimetics, biophilia and sustainability. Often these methodologies result in a mimicry of nature's forms or its simple application onto existing building structures, without engaging with the agency of material throughout the design process. In this paper we argue that working with nature requires a radical shift toward a new era in which nature is incorporated in both the design and construction processes [11]. An era in which natural growth becomes the catalyst for robotic intervention, 3D printing and computational design.

Within the innovative field of design, Mycelium is increasingly being used for the fabrication of products and buildings. Mycelium is the vegetative part of a fungal colony and is characterised by mass branching of growth [16]. Mycelium's long, branching filaments, known as *hyphae*, are collections of one or more cells surrounded by a cellular wall [3]. Mycelium is often referred to as 'the web of life' as it plays an important role in the decomposition of plant material and pollutants, making fungi important ecosystem engineers. As Paul Stamets asserted, "I see the mycelium as the Earth's natural Internet, a consciousness with which we might be able to communicate" [4]. We too view mycelium as an integral part of our planet and have therefore become fascinated with enhancing our relationship to nature through design, specifically through the creation of form and robotic orchestration.

Today, mycelium is increasingly being used in the fabrication of products and sustainable alternatives to building materials. Mycelium's chemical characteristics include its ability to remove toxins from water, act as a natural binder, perform as a good insulator, and serve as a moldable and biodegradable substrate [5]. Due to its proven compressive abilities, projects such as Philippe Block's *Mycelium Tree* demonstrate structural use of the material, despite its modest compressive capacity of around 30 psi [16]. Designers including Phil Ross have demonstrated that mycelium can be grown and transformed into building blocks of different shapes that are 100% organic and compostable [7]. In David Benjamin's Hy-Fi tower, mycelium bricks were fabricated to showcase sustainable solutions for the design and construction industry. When a Mycelium mixture is inserted into a mold, it hardens over time, taking the shape of the desired form. It is then dried to become inactive and no longer a living material [8]. Companies such as 'Myco Composite' are currently utilizing the material as an agricultural bio-product for packaging by moulding the material into predetermined forms to be utilised for human purposes [6].

While Mycelium is increasingly being used in the field of architecture and design, its application is generally subservient to a priori form. What has yet to be explored within the field of design is utilising Mycelium in its live state and exploring its complex patterns of growth. We are therefore interested in the organism's ability to consume and eventually decompose an organic substrate in order to receive its nutritious properties [8]. This chemical reaction occurs as Mycelium grows by releasing enzymes from the hyphal tips to absorb and digest the surrounding nutrients. As a result, fungi attain energy from their surroundings by branching out and building filamentous Mycelial networks [16]. Because the material biodegrades a range of substances in order to receive its nutrients and fibrous minerals, our original contribution to this field is to harness the organism in its live state so that natural patterns of growth and agency contribute to the design and fabrication process. Rather than molding Mycelium to pre-existing architectural forms and drying it out [18], this research exploits natural growth, enabling the living material to become a co-creator within the design process.

'Bio Scaffolds' examines Mycelium's ability to biodegrade and destroy architectural forms and host systems. By exploring the potential of natural Mycelium growth within the architectural context, our ambition is to engage nature and material agency within the design process rather than deferring material behavior directly to form. We therefore present a series of design methodologies that emerge from the interaction of living (material) and non-living (machine) behaviors, which currently remains underexplored within the field of Architecture. 'Bio Scaffolds' explores the fusion between computational and biological intricacies, resulting in a series of non-anthropocentric Mycelia enclosures. Hacking into the degradation rates of natural material, sacrificial frameworks act as architectural habitats for Mycelium to grow in and eventually biodegrade. In this research paper, we explore robotic intervention as a tool for maintaining and choreographing the organism's homeostasis and patterns of growth. In order to manipulate natural growth, we explore a dynamic feedback system in which machine, nature and computational form are in constant dialogue with one another. As the organism begins to grow within the designed geometry, the robot detects and responds to this data by injecting additional mycelium culture into the sacrificial formworks. This cyclical process occurs over a seven-day period, resulting in a computationally and robotically orchestrated process of natural growth.

By fusing technologies and existing processes from the natural, robotic and computational fields, this research invites a multidisciplinary approach to architecture. Architect and academic Marcos Cruz has said that "A notion of design is emerging whereby interdisciplinary work methodologies is traded between designers, engineers and biologist; giving rise to hybrid techniques, new materials and hitherto unamenable living forms" [10]. In order to truly rethink our relationship with nature, 'Bio Scaffolds' adopts a series of trans-disciplinary techniques from the architectural, medical and scientific disciplines.

### **1.1 Medical Bio Scaffold**

This ability to intertwine robotic 3D printing with living materials in the fabrication of biodegradable forms was originally derived from the medical bio-scaffold. Our research draws on techniques from the field of biomedicine, such as tissue engineering, 3D printed artificial organs and bio-scaffolds. Specifically, we have been investigating the processes of medical bio-scaffolds where cells are implanted in order to adopt the geometry of the scaffold. This process occurs by 3D printing a structure, then seeding it with native cells and proteins to encourage cell adhesion and tissue generation [12]. Being a biocompatible and bioresorbable material [13], the scaffold is designed to degrade over time (see Fig. 2). Our research explores how these characteristics of biodegradability and biocompatibility may occur within an architectural context.

### **1.1.1 Mycelium: Bio Scaffold**

Adopting an approach similar to the medical bio-scaffold, our research explores the decomposition process of Mycelium. However, rather than allowing this process of growth and biodegradability to occur naturally and randomly as it would in nature, we conducted a series of experiments to influence and disrupt the Mycelium's existing behavioral characteristics through robotic intervention, form and materiality. Using computer-designed geometries, we direct the growth of mycelia by hacking into its existing patterns of growth through complex, nutrient-rich scaffolds. Mycelium excretes enzymes to break down resources in its surroundings and assimilate the nutrients to build up its fungal network [17]. This absorption process occurs when fibrous, organic substrates are fed to the organism, including wood chips, coffee grounds, sawdust, biodegradable plastic, cardboard and paper [17]. With this knowledge we tested the organism's ability to biodegrade a wood-plastic composite material, comprised of corn starch and wooden fibres. The organism was attracted to the fibrous properties contained

**Fig. 2.** Far left image (sourced from Computer-Aided Designed, 3-Dimensionally Printed Porous Tissue Bioscaffolds For Craniofacial Soft Tissue Reconstruction Journal) showcases a medical bio scaffold before implantation. Middle image and far right showcases computational forms designed by authors incorporating this time based process of decay

within a wood-based PLA filament, and was therefore capable of growing on scaffolds composed from this material. The following experiments explore 3D printed, biodegradable, nutrient-rich scaffolds which manipulate the growth of mycelium through form and materiality. As shown in Fig. 3, this 3D printed geometry demonstrates a successful process of degradation. Mycelium was injected into the wood-based plastic composite forms, eventually degrading the structure and adapting to the set geometry provided. As the Mycelium grows along the scaffold it decomposes and consumes the geometry, replacing the structure with a network of fibrous growth. Growth occurs largely at the tips of the hyphae [5], allowing the Mycelium to spread directly over complex geometric structures as it consumes the scaffold.

**Fig. 3.** Far left image showcases growth of the organism, middle image showcases the bio scaffold printed out of wood-based plastic composite material before mycelium inoculation. Far right image showcases mycelium biodegrading the wooden scaffold.

### **1.2 Designing with Living Matter**

In order to integrate this unique form finding method with biological growth, our research aims at applying scientific knowledge of Mycelia growth to achieve novel architectural forms. Bio scaffolds entails a layered iterative design process that combines techniques of digital software, fabricated experiments and robotic feedback. Through this fusion of operations, a true hybrid between computation, nature and machine is achieved, as the architect can orchestrate natural growth through form and robotic intervention. Through this nonlinear framework, a singular design process becomes undetectable as the architect's aesthetic and the organism's natural growth become intertwined. Rather than treating nature as a decorative element applied to objects, our approach explores a design and robotic response to Mycelia's natural growth.

Utilising accessible Mycelia cultures, we experimented with the organism and studied its natural growth patterns in order to manipulate this process through design. This raised the question of how robotic interference and architectural aesthetics would modify the growth of mycelium to achieve a specific growth aesthetic, and what types of geometries would result in the selected biological patterns of growth. Our objective was therefore to study how mycelium propagates over specific forms and surfaces that arise from computational design and robotic fabrication.

During this process, a series of computational experiments were devised to control Mycelium's patterns of growth and behavioral growth characteristics. Each computational form was digitally fabricated and tested by infusing Mycelium culture at key points over the scaffold. This process began with a series of computational heterogeneous skins that selectively encourage and hinder the growth of the organism. These features varied in the porosity, density and shape of internal chambers for the mycelium to seep through. Each geometry was 3D printed using a wood-based PLA filament, composed of approximately 20% sawdust. Once the scaffold had been fabricated, liquid culture mycelium was robotically injected into it at specific insertion points. We studied the organism's compatibility and rates of degradation in order to gain a comprehensive understanding of its process of growth and decay. Each experiment was judged and reflected upon in order to examine the ideal forms for successful Mycelia growth. Through these designed experiments, we discovered that Mycelium grows at its most rapid pace along smooth porous surfaces that provide a series of micro valleys for the organism to seep through (shown in Fig. 6). Through this process, we cataloged the inherent qualities of mycelia growth, which included characteristics of branching, bridging and web-like strands (see Fig. 4). Each experiment explored a series of intricate weaves and designed obstacles that the Mycelium would grow around, resulting in growth patterns that would otherwise not occur in nature. Through this interface between machine- and biological-fabricated forms, Mycelia growth therefore became the performative aspect within design, resulting in a series of non-anthropocentric biological enclosures.

However, in order to enable a true co-design between designer and nature, the architect's aesthetic must remain an important contributor to the design process. The preservation of design identity showcases a true duality between nature and architect. Through a series of digitally designed geometries, the designer aims at creating unusual forms that showcase biological material in new and innovative ways. Whilst the geometry has

**Fig. 4.** Computational forms: smooth surfaces designated from mycelium growth

evolved from an understanding of the relationship between digital tools and Mycelium growth, the designer's vocabulary of creating 'mystical creatures' remains an integral part of this process. This co-existence between the architect's design vocabulary and biological agency ultimately exposes a new relationship between computational and biological complexity.

Whilst this research encouraged Mycelia growth through digital tools and robotic fabrication, it also explores an intervention and orchestration of growth. We therefore attempt to also hinder the growth of Mycelium in particular sections of the geometry through a series of articulated forms. Rather than allowing the organism to grow wildly and degrade the entire form, we are interested in a controlled process in which sections of the geometry are designed to remain or fossilize. In order to achieve this gradient of growth, the scaffolds are comprised of heterogeneous skins that both encourage and hinder life through form. Whilst mycelium latches onto smooth surfaces, it is repelled from climbing up vertical antenna lattice systems (showcased in Fig. 6). We therefore tested a series of scaffold designs that would hinder the growth of mycelium and allow sections of the scaffold's geometric integrity to remain. Through a time-based process of decay, we designed fractal components that would remain uneaten by the organism once the majority of the scaffold had degraded (Fig. 6).

A series of self-organising generative algorithms were deployed in the design of the scaffolds, including a behavioral design strategy that draws on the logic of swarm intelligence. This multi-agent process, based on the Behavioral Formation approach developed by Roland Snooks, encodes design intention within a population of agents that interact to generate a self-organised design intention and emergent formal assemblages. This algorithmic logic distributes a series of components that create complex topologies and intricate, heterogeneous surface articulation [19].
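For readers unfamiliar with such multi-agent logics, the sketch below shows a generic cohesion/separation agent system whose settled positions could seed component placement. It is only a schematic stand-in, not the authors' Behavioral Formation implementation, and all weights, radii and agent counts are assumptions.

```python
# Generic multi-agent sketch illustrating self-organising component distribution.
import math
import random

class Agent:
    def __init__(self):
        self.pos = [random.uniform(0, 10), random.uniform(0, 10)]
        self.vel = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]

def simulate(agents, steps=300, radius=2.0, cohesion=0.01, separation=0.05):
    for _ in range(steps):
        for a in agents:
            near = [b for b in agents if b is not a and
                    math.dist(a.pos, b.pos) < radius]
            if near:
                # Cohesion: drift towards the local centre of neighbours.
                cx = sum(b.pos[0] for b in near) / len(near)
                cy = sum(b.pos[1] for b in near) / len(near)
                a.vel[0] += cohesion * (cx - a.pos[0])
                a.vel[1] += cohesion * (cy - a.pos[1])
                # Separation: push away from neighbours that are too close.
                for b in near:
                    d = math.dist(a.pos, b.pos) or 1e-9
                    a.vel[0] += separation * (a.pos[0] - b.pos[0]) / d
                    a.vel[1] += separation * (a.pos[1] - b.pos[1]) / d
            a.pos[0] += a.vel[0]
            a.pos[1] += a.vel[1]
    # The settled agent positions could then seed the placement of components.
    return [a.pos for a in agents]

component_anchors = simulate([Agent() for _ in range(100)])
```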

This contrast between the fibrous intricate details that hinder the growth of mycelium and the smooth valleys that encourage Mycelia growth, enables the designer to orchestrate both the growth and decay of nature. This juxtaposition between designed intricacy and simplicity exposes a contrast between computational and biological complexity. Whilst the smooth surface offers a blank canvas for the Mycelium to spread its growth,

**Fig. 5.** 3D printed scaffolds from wood based biodegradable plastic infused with corn starch and natural fibers.

the intricate scaffold details enable the architect's aesthetic to be preserved, showcasing a true hybrid between biological and design agencies. As showcased in Fig. 6, nature works with and against the geometry. Whilst it follows the architectural pathways provided, it also separates from them, affirming its own distinction, independence and individuality. This duality between surface and component enables an ecosystem of interactive geometries, counterbalancing both the biological and computational intricacies.

**Fig. 6.** Image on far left showcases the computational model with a smooth surface for mycelium to inoculate and designed intricacy that would hinder the growth of mycelium. Middle image showcases this computational model printed out of a wood-based bio-plastic material. Far right image showcases mycelia growth being orchestrated through these designated forms.

**Fig. 7.** Universal Robotic customized tool rotating to extract data from mycelium. This tool contains a robotic syringe and Arduino moisture sensors on the opposite end effector.

### **1.3 Technical Workflow: Robotic Intervention**

In order to detect and respond to the organism's variations in growth, robotic vision systems were implemented that combine form generation, digital fabrication, and material computation into a seamlessly integrated process. In addition to orchestrating biological degradation through form, robotic intervention enables us to further intervene on the 'living' with immediate response [1]. As a result, we explored methodologies in which complex fungi behaviors and material data informed robotic feedback. As Mycelium growth may often take place over months or years, robotic intervention seems ideal, since a robot never tires and can act over timescales inconvenient or impossible for human designers. These long-term temporal interventions react to the organism's unpredictability and change over time. During the course of the mycelium's development, the battle and symbiotic tension between machine, organism and form become apparent. As the mycelium begins to spread throughout the sacrificial framework, multiple computational behaviors mutually negotiate to orchestrate the organism's growth. As a result, the robot detects the organism's properties, including variations in colour and moisture, and responds through systematic feedback loops.

Mycelium, similar to some plants, displays distinct signs when in a flourishing live state in contrast to its dead, dried-out state. Visually, these signs include its colour and its moisture content. According to Mitchell P. Jones, if the organism is dying it will display signs of a brown exterior with minimal moisture [8]. In order to keep the organism alive, it requires moisture that maintains its white fibrous exterior. Consequently, we have developed an approach where colour, moisture data and patterns of growth are extracted from the mycelium through robotically controlled scanning and sampling, which then influences the robot's behaviour. The robot reads and responds to this data, establishing a symbiotic feedback system where fabrication techniques, robotic intervention and organic development all contribute to the overall design process. Our initial experiments, which hack into nature via robotic intervention, began by examining Mycelium's patterns of growth and its ability to biodegrade through a range of complex scaffolds.

The technical components of this research include a computer vision system, a 3D scanning process, sensing and feedback control, 3D printing, and the robotic infusion of liquid culture, all of which are integrated into the co-creation process between human, machine, and mycelium intelligence. Equipped with customised sensors that track the organism's moisture qualities, patterns of growth and colour variation, the technical workflow for *Bio Scaffolds* involves a feedback system between the robot, customised tools, material behavior, the vision system and computational form. This series of design experiments used a Universal Robots UR10 controlled through Grasshopper, 3D printed wood-plastic composite scaffolds, an Arduino moisture sensor, a webcam and a customised multi-tool including a robotic syringe. These techniques were combined into a single end-effector to scan, read and respond to Mycelium growth in a unified way (see Fig. 7).

**Fig. 8.** Technical workflow showcasing the robot scanning, extracting and computationally recognizing this data from the organism

### **1.3.1 Feedback System: Computation, Robotic and Material Agencies**

In order to maintain the mycelium's homeostasis, the UR robot operated on the organism over an extended period of time, a process which we refer to as 'slow-botics'. This long-term intervention offers an alternative to the common conceptions of robotics that concentrate on speed and efficiency. Over a 7-day period, the robot waits patiently to receive new data and then acts accordingly [9] (see Fig. 8). Rather than treating the robot and organism as two separate mediums, the robot remains a permanent fixture within this system, where the behavior of organism and machine support growth and coexistence. The following experiments demonstrate processes in which the robot interpreted data provided by the organism, identifying the contrasting living conditions of the material, its quality of life and areas of nourished or dying mycelium. This feedback system occurred through the following process (see Fig. 9). **Process:**


**Fig. 9.** Technical cyclical feedback workflow showcasing the robot scanning, extracting and computationally recognizing this data from the organism

6. According to the data received from the moisture sensor, a set of computational rules and constraints are implemented. The algorithm instructed the robot how to respond to the organism (see the computational *Logic* below). **If** the robot detected that the area was lacking in nutrients and therefore 'barren', it would respond by rotating the singular tool, infusing 30 ml of mycelium. In contrast, **if** the robot extracted 'living' data and the area was therefore in a nourished, moist state, it would 'do nothing' and proceed to take the next value of its neighboring cell.

During the scanning process, an algorithm is implemented instructing the robot how to react.

*Logic:*

**If** the value received was between 0 and 50, **then** infuse 30 ml of Mycelium.

This indicated that the mycelium was extremely dry and was in need of additional moisture.

**If** the value received was between 50 and 80, **then** infuse 10 ml of Mycelium.

This indicated that the mycelium was in a stable condition and required minimal moisture to maintain this condition.

**If** the value received was between 80 and 120, **then** do nothing.

This indicated that the mycelium was extremely moist; no additional moisture is deposited in order to prevent drowning the organism and bacteria.
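The rule set above can be summarised in a few lines of code. The sketch below is a hypothetical formulation of that threshold logic; the function names, the 0–120 reading scale and the way robot actions are passed in as callables are assumptions, not the project's actual Grasshopper or Arduino implementation.

```python
# Sketch of the moisture-threshold feedback rule described above.
def infusion_volume_ml(moisture_value):
    """Map a moisture-sensor reading to the volume of liquid culture to infuse."""
    if 0 <= moisture_value < 50:       # barren / extremely dry
        return 30
    elif 50 <= moisture_value < 80:    # stable, needs minimal moisture
        return 10
    elif 80 <= moisture_value <= 120:  # already moist, avoid drowning the organism
        return 0
    raise ValueError("reading outside the assumed 0-120 range")

def feedback_step(read_sensor, rotate_tool, infuse):
    """One cycle of the scan-decide-act loop; robot actions are injected so the
    rule stays independent of the specific robot controller."""
    volume = infusion_volume_ml(read_sensor())
    if volume > 0:
        rotate_tool()          # swap from sensor end to syringe end of the multi-tool
        infuse(volume)         # inject mycelium liquid culture at the sampled point
```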

**Fig. 10.** Results showcasing the orchestration of mycelia growth through form and robotic intervention.

# **2 Conclusion**

Resulting from this symbiotic feedback system and form-finding process, this research explores the ability to orchestrate natural growth through robotic intervention and architectural form. Through this time-based process of growth and decay, Mycelia growth is showcased in new and innovative ways that would otherwise not occur in nature. Designing through the interaction of natural systems and computational behaviors creates a complex feedback process that privileges volatility and the unknown. By researching through design, we examined various types of forms which encouraged biological growth. As demonstrated in Fig. 10, skins that contain dense porous areas enable the organism to seep through the built framework and degrade it at rapid speed. Similarly, geometries that were based on a branching system encouraged the organism to extend its patterns of growth along the designated surface area. This research has established an approach where the interaction of these two domains, the physical and the digital, has the potential to generate new and unexpected structural and morphological formations. The interaction of these agencies offers the potential of creating new architectonic approaches, or as Philip Glass states, "A new language requires a new technique" [15].

In this paper we have demonstrated an approach to growing Mycelium through a set of hybrid digital and biological behaviors that interact with computationally generated scaffolds. We are interested in exploring how computational and material agency may work together through robotic feedback systems. 'Bio Scaffolds' examines the design tectonics and adaptive living forms that may emerge from this process. Through this integration of a unique form-finding technique with biological growth, this research aims at merging robotic technologies with living matter until the two become indistinguishable. Whilst the Mycelium enclosures presented are not yet architectural buildings or at architectural scale, they represent progress towards a developed system that engages with materials and addresses sustainability within Architecture.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Di-terial – Matching Digital Fabrication and Natural Grown Resources for the Development of Resource Efficient Structures**

Felix Amtsberg1(B) , Caitlin Mueller2(B) , and Felix Raspall3(B)

<sup>1</sup> ICD, University of Stuttgart, Keplerstrasse 11, 70174 Stuttgart, Germany felix.amtsberg@icd.uni-stuttgart.de <sup>2</sup> Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA caitlinm@mit.edu <sup>3</sup> DesignLab, Adolfo Ibanez University, 2700 Diagonal Las Torres, Santiago, Chile

felix.raspall@uai.cl

**Abstract.** The research presented in this paper focusses on the concept of "Di-terial", which aims to merge digital design and fabrication technology with natural materials such as bamboo poles and raw timber. It proposes a digital workflow that uses sensing techniques to gain individual material information on natural, unprocessed construction resources and to identify their individual strengths and characteristics, and therefore their potential in load-carrying structures. This information is then used to develop bespoke designs and fabrication concepts, bridging the gap between unprocessed material and automated fabrication setups. Two case studies, developed to prove this concept, are described and compared. Both cases focused on the development of spatial structures using node-bar combinations of local resources.

**Keywords:** Visual sensing · 3D-printing · Robotic fabrication · Node design · Material analysis · Bamboo structures · Timber structures

# **1 Context and Relevance**

Natural grown resources such as raw timber and bamboo poles are important construction materials with a long history in architecture, but they have been relegated in favor of today's dominant materials, concrete and steel. The building sector is the major contributor to climate change, accounting for over 30% of global greenhouse gas emissions, 40% of global energy use and 50% of global waste [1]. Due to the climate crisis, the dominance of concrete and steel is being contested. Rising prices on the commodity markets make regrowing, local resources an important factor for the near future in Architecture, Engineering and Construction (AEC).

Currently, the construction industry is based on standardization. This determines how natural materials are processed and used: irregular logs and poles, for example, are usually cut into slats or strips, which are then re-joined via gluing. The resulting semi-finished goods include glue-laminated bamboo panels or timber, chipboards or LVL, among others. This standardization paradigm presents several drawbacks. First, during processing, the unique and highly efficient individual characteristics that trees and bamboos develop while growing in a specific natural environment are largely lost. Second, the processing of materials is machine- and labour-intensive and discards a significant amount of material as offcuts. Third, the gluing results in a worse energy balance of the material: for Glulam, LVL, plywood and OSB, resin production accounts for 8, 16, 19 and 28% of total energy consumption, respectively [2]. Finally, the standardization of components influences, at least to some extent, the standardization of architectural concepts and may result in similar construction and building typologies.

# **2 Material Scanning in AEC Industry and Research**

Scanning natural materials to identify their geometry and properties already exists in the AEC industry today. The timber/sawmill industry implements high-performance analysis tools such as 3D scanners or computer tomography for logs. These are used to gain information about each individual piece of lumber and to generate cutting plans that optimize the production of construction elements according to standards and cut out defects such as knots, holes or pitch pockets [3]. In the research community, the use of material information obtained through computer vision has been tested in a large variety of applications: it has been used to upcycle scrap material and inform robotic assembly sequences [4], to assemble small-scale structures or establish material libraries of natural grown resources such as timber crotches [5], and to identify the best-fitting element and use it, processed only at the connection points, to build a large-scale structure for a barn. These and other examples show the immense capacity of material scanning for smart (re)use in digital fabrication processes.

# **2.1 Di-terial**

The concept "Di-terial" investigates smart design-to-fabrication systems that bring together cutting-edge digital technologies and raw, natural-grown resources. Aiming to develop highly efficient loadbearing structures in architectural scale, they use local resources. Di-terial uses material scanning as a tool to understand the individual material properties that each natural element has and take advantage of its unique qualities (Fig. 1).

### **2.2 The Central Role of the Node**

This research evaluates the concept of Di-terial in the design of space frames. Space frames combine bars (linear elements) and nodes (points where bars converge) into a three-dimensional structural system. They are notable for being lightweight and extremely efficient structures. The separation of the structure into relatively small nodes and bars simplifies prefabrication in a controlled environment using technology unsuitable for onsite use and eases transportation to the construction site. Space frames can

**Fig. 1.** Detailed individual material information investigated in this research: section geometry (bamboo) and fiber orientation (timber)

be configured to achieve structures of almost any shape, size and use, and to be dis- and reassembled for a second life cycle if needed. The node design is crucial for a space frame's flexibility and efficiency. The joining concept with the bar, the valence of the system, the structural design of the system and the corresponding node performance are the main questions to answer for a successful node-bar system. During the 20th century, the influence of mass- and prefabrication concepts for standardized construction produced various node concepts. Above all, Konrad Wachsmann's developments, the General Panel System for housing and the USAF node for industrial construction, sought to enable the greatest possible freedom with standard components. However, the nodal joint was too complex to be economically successful [6]. The simpler MERO node, which was developed in 1937 and is still in use today, enables multiple configurations of space frames but remains limited by the geometric constraints of the node.

The digital design-to-fabrication methods of recent years have introduced new ways for the efficient production of one-off components. CNC technologies such as industrial robots have been used for custom node fabrication, and recent advancements in additive manufacturing have enabled an even higher degree of customization of structural connectors [7].

# **2.3 Case Studies**

Two separate case studies were developed with researchers from Singapore University of Technology and Design (SUTD) in Singapore and Massachusetts Institute of Technology (MIT) in Massachusetts using natural grown resources from the local environment (bamboo in South East Asia and timber in New England). Both projects developed customized node-bar systems based on the concept of Di-terial. It was important for this research to enable a maximum variety of design possibilities, deriving from the individual structural strengths of each material.

With a node-bar system identified as the construction system, the natural materials were used for the bars and for the nodes in Singapore and Cambridge, respectively. Both projects developed design-to-fabrication processes that use the material with as little processing as possible and find its best possible fit in a complex but efficient structural system. Digital design methods were used to produce the bespoke node geometries (Fig. 2).

**Fig. 2.** 36/12 bespoke and digitally fabricated nodes merge the individual strength of natural grown resources in spatial frameworks

### **2.3.1 Case Study "bamboo3"**

The project "bamboo3" investigates the combination of 3D-printing technology and unprocessed bamboo. Forming hollow straight tubes by nature, bamboo poles appear to be the ideal base material for trusses and bar-node systems. A simple 2D- scanning process is used to gain precise inner and outer contour curves of the bamboo section, which was then used to inform the geometry of the bespoke connectors (Fig. 3).

**Fig. 3.** Collected bamboo, scanning of the section geometry and digital representation

As mentioned before, 3D-printing enables the fabrication of almost any shape, but the chosen method, FDM printing of biodegradable PLA, results in limited structural performance. Thus, it was decided to focus on structural systems where the node must only react to normal forces and not to bending moments. Examples (Fig. 4) are trussed beam structures (a) and triangulated grid shells (b), but also spatial grids like tetrahedral meshes (c), which was chosen for the final design (d). These systems, especially if irregular, require the efficient fabrication of one-off nodes due to unique angle combinations and varying node valences (3 to 9 in the case of the realized structure).

**Fig. 4.** Spatial structures in 2D (a: trussed beam/valence 2–5, b: triangulated grid shell/valence 2–3) and 3D (c: tetrahedral mesh/valence 2-n) were investigated and led to the final design (d: Sombra Verde/valence 3–9)

A Grasshopper© script using Karamba© [8] was written to compare the load-carrying capacity of the bamboo poles with the structural requirements of the design and to identify the required bamboo thickness (Fig. 5a). The results were categorized into bamboo diameters >30, 40 and 50 mm and used to inform the node diameter at the node-bar connection (b). Combined with the information on mesh angles and valence, this input is used to automatically generate the node design (c) and print (d) the nodes.
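A schematic version of the categorisation step might look as follows. The thresholds mirror the >30/40/50 mm categories named above, while the mapping from category to connector socket size and the clearance value are illustrative assumptions rather than values from the project's Karamba script.

```python
# Hedged sketch: bin each scanned pole by outer diameter, then derive the
# connector socket diameter at the node-bar connection from the bin.
def diameter_category(outer_diameter_mm):
    if outer_diameter_mm > 50:
        return 50
    if outer_diameter_mm > 40:
        return 40
    if outer_diameter_mm > 30:
        return 30
    return None  # pole too slender for the structural requirements

def connector_socket_diameter(pole_diameter_mm, wall_clearance_mm=2.0):
    """Socket diameter of the printed connector for a categorised pole."""
    category = diameter_category(pole_diameter_mm)
    if category is None:
        raise ValueError("pole below the minimum usable diameter")
    return category + 2 * wall_clearance_mm

# Example: a scanned pole of 43 mm falls into the >40 mm category.
print(connector_socket_diameter(43))   # -> 44.0
```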

**Fig. 5.** Structural analysis of the designed bamboo gazebo (a) led to the node and dowel (red) geometries (b), their implementation in the structure (c) and production via 3D-printing of a node

Conventional PLA-based 3D-printers were used to produce the 238 unique connectors and 36 nodes that joined the 117 bamboo poles. The fabrication time varied between 21 and 128 h for the node elements and 1.5 to 4 h for the dowels, respectively. These elements formed "Sombra Verde", a shade-providing structure in Duxton Plain Park, Singapore. The three-legged structure, spanning 6 by 8 m, was installed for three months during the "Urban Design Week 2018" (Fig. 6).

### **2.3.2 Case Study "Structural Upcycling"**

The project "Structural Upcycling" investigates the use of robotically subtractive fabricated timber crotches. The typically Y-shaped crotches of deciduous trees form natural

**Fig. 6.** Installed bamboo gazebo "Sombra Verde" and node close-ups

cantilevers and present a complex fiber orientation. For this reason, they are usually discarded during the fabrication of standard timber products, despite their potential in structures with structural requirements similar to those of the original crotches [9]. This case study takes advantage of this natural structural design, using the crotches as bending-stiff nodes in node-bar systems.

Low-cost consumer-grade 3D-Scanners are used to generate a 3D-mesh of the crotch surface. The meshes are simplified and the original branch vectors determined. A material library, an inventory of scanned crotches, is established and used to match a developed design concept (Fig. 7).
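As an illustration of how such a library and matching step could be organised, the sketch below stores branch vectors per crotch and greedily assigns the crotch with the smallest angular mismatch to each designed node. The data structure, names and scoring function are assumptions for illustration; the project's actual matching algorithm is documented in a separate publication.

```python
# Illustrative material-library record and greedy best-fit matcher based on
# branch-vector angles (not the authors' published matching algorithm).
import math
from dataclasses import dataclass

@dataclass
class Crotch:
    ident: str
    branch_vectors: list            # unit vectors of the scanned branches
    assigned: bool = False

def angle_between(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))

def mismatch(node_vectors, crotch):
    """Sum of angular differences between designed node axes and branch axes."""
    return sum(min(angle_between(nv, bv) for bv in crotch.branch_vectors)
               for nv in node_vectors)

def assign(nodes, library):
    """Greedily assign the crotch with the smallest mismatch to each node.
    The library must contain at least as many unassigned crotches as nodes."""
    plan = {}
    for name, node_vectors in nodes.items():
        candidates = [c for c in library if not c.assigned]
        best = min(candidates, key=lambda c: mismatch(node_vectors, c))
        best.assigned = True
        plan[name] = best.ident
    return plan
```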

**Fig. 7.** Sourced crotches are 3D-scanned to establish the material library

Since the design system is based on the use of predominantly 3-valence bending-node crotches (Fig. 8), honeycomb meshes (a), further patterns such as Voronoi (b) and certain Archimedean solids (c) were investigated to establish the final design (d).

It is also essential for the morphology of the nodes in the structure to match the branch vectors as closely as possible, to benefit from the fiber orientations in the crotches. Large mismatches between the designed structural node and the assigned crotch result in a decrease of the structural performance of the node [10]. Therefore, a matching algorithm was developed to assign the best-fitting crotch to each node in the structure and to adapt the structure's morphology to the inventory of available crotches (Fig. 9 a). This matching concept has been explained in a separate publication [11]. In theory, the crotch achieves the best structural performance unprocessed, when no fibers are cut. However, this results

**Fig. 8.** Spatial structures in 2D (a: hexagonal grid shell, b: Voronoi) and 3D (c: truncated Icosahedra) were investigated and led to the final design (d)

in several disadvantages when the unprocessed nodes are joined with the standard linear elements. Therefore, the node design was developed based on the logic of a convex hull (Fig. 9 b, c), a geometry which represents the smallest convex structure covering the contact faces and which generates a triangulated node. The convex hull has two key benefits: it creates a geometry that is compatible with the standard lumber sections of the bars converging in the node, and it creates a geometry without valleys, which can be manufactured using simple planar cuts with a band saw.

A second algorithm generates a node geometry which fills the scanned crotch mesh and expands its form to maximize the use of the material available in the crotch. The rectangular contact faces fit the corresponding bars and establish a well-balanced compromise between maximized material use, minimal fabrication time and the matching of one-off and standard components. For the fabrication, a robotic work cell was developed, in which an industrial robot holding a crotch was located in front of a standard bandsaw (d). The script automatically generates the cutting sequence, resulting in a time-saving production of one-off components with just a few straight cuts (between 12 and 25 in the case study). Thus, fabricating the convex hull took between 13 and 35 min, depending on the dimensions of the node, the number of faces, the cutting depth and the wood hardness.
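The convex-hull step can be sketched with standard geometry tools. The example below uses SciPy's ConvexHull on hypothetical contact-face corners and treats each triangular hull face as one planar band-saw cut; it is a simplified stand-in for the project's node-generation algorithm, and the input dimensions are assumptions.

```python
# Minimal sketch of the convex-hull node logic: collect the corner points of
# the rectangular contact faces (one per converging bar) and take their convex
# hull, yielding a triangulated, valley-free geometry cut by planar passes.
import numpy as np
from scipy.spatial import ConvexHull

def node_hull(contact_face_corners):
    """contact_face_corners: list of (4, 3) arrays, one per bar contact face."""
    points = np.vstack(contact_face_corners)
    hull = ConvexHull(points)
    # Each simplex is a triangle of the hull; its plane is one straight cut.
    cut_planes = []
    for simplex in hull.simplices:
        a, b, c = points[simplex]
        normal = np.cross(b - a, c - a)
        normal /= np.linalg.norm(normal)
        cut_planes.append((a, normal))   # point on plane, unit normal
    return hull, cut_planes

# Three hypothetical 38 x 38 mm contact faces around a 3-valent node.
face = np.array([[0, 0, 0], [38, 0, 0], [38, 38, 0], [0, 38, 0]], float)
faces = [face + [0, 0, 120], face + [120, 0, -40], face + [-120, 40, -40]]
hull, planes = node_hull(faces)
print(len(planes), "planar cuts")
```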

**Fig. 9.** Matching crotches were placed in the designed mock-up (a) a convex node geometry (b), their implementation in the structure (c) and production via robotic band sawing of a node

A full-scale mock-up was developed and installed at the MIT School of Architecture and Planning. The prototype demonstrated that the workflow, from the scanning of crotches, to node generation and allocation in the structure, to robotic fabrication and assembly, is fully functional. The results of the mock-up also served as preparation for a permanent structure which is currently in the planning phase. The crotches were sourced from trees that had been felled during a renovation process in an urban environment. The pieces not collected by the researcher had no value and were chipped immediately. 12 crotches were processed and joined with 19 bars of 1½ × 1½ inch section to form this 4 m × 2 m prototype (Fig. 10).

**Fig. 10.** Exhibition of the mock-up to test the case study "Structural Upcycling"

# **3 Comparison, Conclusion and Outlook**

As described, both case studies focused on specific implementations of Di-terial, but, in retrospect, they would significantly benefit from each other's innovations.

*Bamboo3:* The creation of an inventory of all scanned bamboo poles was used solely to create bespoke connectors. However, this information could be used for alternative strategies, such as optimizing the structure based on the actual moments of inertia or cutting longer poles into shorter pieces. Additionally, the node printing orientation was determined to minimize printing time, but it could have been optimized for mechanical strength.

*Structural Upcycling:* Due to time constraints, the bar elements in the project had a fixed section. Applying different section geometries would result in a higher efficiency of the material use. The algorithm developed to generate the nodes already includes this functionality. Additional information like material hardness could generate more effective libraries. Alternative geometries beyond the convex hull could exploit more efficient material performance. Additional studies investigated the matching potential of different tree species for designed geometries, based on their typical ratio of branch angles [12].

The projects presented in this paper show two different, successful approaches to Di-terial. They analyze and identify individual strengths of *natural grown local resources* using *material sensing*. A *Structural system* which takes advantage of these specific strengths is defined and designed, and structural analysis software calculates the designed structure. That way, the *Material placement* happens where needed. *Digital Fabrication Concepts* are developed to join natural-grown construction materials seamlessly in a well-balanced, resource-efficient system.

One of the major challenges of the 21st century is the design, planning and construction of sustainable buildings and structures. Natural-grown resources show an immense capacity for application in digital fabrication and sustainable architecture. As data acquisition, storage and processing become increasingly affordable and ubiquitous, new digital workflows such as those described in this paper can be developed to take advantage of non-standard materials with more complex structures. In this way, the individual characteristics of each naturally grown material component are computationally analyzed and its features are considered an opportunity rather than a disadvantage.

**Acknowledgements.** We would like to thank our sponsors, contributors and researchers:

*Bamboo3:* Urban Redevelopment Authority and Lope Lab, AIRLab, Yuxin Hu, Sourabh Maheshwari, Jenn Chong, Anna Toh, Aurelia Chan, and Sihan Wu.

*Structural Upcycling:* MIT School of Architecture Woodshop, Autodesk Build Space Boston, Yijiang Huang, Kevin Moreno Gata, Daniel M. Marshall.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Application of 6-Dof Robot Motion Planning in Fabrication**

Hao Wu, Ming Lu, XinJie Zhou, and Philip F. Yuan(B)

College of Architecture and Urban Planning, Tongji University, 1239 Siping Rd., Shanghai, China philipyuan007@tongji.edu.cn

**Abstract.** In practical robotic construction work, such as laying bricks and painting walls, obstructing objects are encountered and motion planning needs to be done to prevent collisions. This paper first introduces the background and results of existing work on motion planning and describes the two most mainstream methods, the potential field method and the sampling-based method. How to use the probabilistic roadmap approach for motion planning on a 6-axis robot is then presented. An example of a real bricklaying job is presented to show how to obtain point clouds and how to increase the speed of computation by customizing which collisions are calculated and which are ignored. Several methods of smoothing paths are presented, and the paths are re-checked to ensure their validity. Finally, the flow of the whole work is presented and some possible directions for future work are suggested. The significance of this paper is to confirm that relatively fast motion planning can be achieved by an improved algorithmic process in Grasshopper.

**Keywords:** Motion planning · Robot · Fabrication · Brick · Grasshopper

# **1 Introduction**

### **1.1 Motion Planning**

With the increasing variety of architectural forms, more and more buildings need to rely not only on procedurally generated forms but also on the assistance of machines or robots to do bricklaying and wall painting. In the case of bricklaying, for example, many successful projects have been built, such as the curved facade of the West Coast Pool House (Fig. 1). Since most of these projects are built on-site, robot obstacle avoidance, or robot motion planning, needs to be considered. Robot motion planning is a longstanding problem that is still being studied today, and a balance between computational efficiency and reliability needs to be found in practical projects.

In traditional robot construction, robot path planning can be taught manually point by point, but this is very time-consuming and labor-intensive. Nowadays, the mainstream approach is to simulate the robot's motion, add auxiliary points in the program to avoid the obstacles, and then generate the offline program. However, in this case, the user still needs to manually add the auxiliary points and check whether the whole process encounters collisions.

**Fig. 1.** XiAn Chi She brick wall

A better approach is therefore to automate the motion planning.

In building construction, real-time performance has a great impact on efficiency, but motion planning is a very time-consuming and complex computational task. A large number of papers study motion planning algorithms, and many software packages and libraries implement most of the mainstream algorithms.

# **1.2 Previous Work**

Motion planning algorithms can be divided into sampling-based and potential-field-based methods. Most motion planning libraries use sampling-based algorithms. The most famous work is the OMPL library, which is based on PRM [1], RRT, EST, SBL, KPIECE, SyCLOP and several variants developed from them. The OMPL library can also work with software such as Openrave and MoveIt. Similar to OMPL, there are also planners such as CHOMP and STOMP. All of the above need to be used on Linux systems and require a certain knowledge base.

Vrep has the OMPL library as a built-in plug-in, and RoboDK comes with its own PRM motion planner. However, most architects are not familiar with these sorts of software; instead, they use Rhino as a robot simulation environment, program in Grasshopper, and use software such as Kuka PRC and FUrobot to control the robot. The work in this paper implements path planning in the FURobot environment.

# **2 Research**

### **2.1 Potential Field**

Since the obstacles and target poses are known, a potential field can be formed by setting the obstacles to exert repulsive forces on the robot and the target pose to exert an attractive force on it; this is the potential field method. The method computes the gradient of the field from the external situation and descends step by step, guided by the gradient, to the lowest point, which is the target. The advantage of the potential field method is that it does not require extensive calculation, so it can run in real time.
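As a minimal illustration of this idea (not the implementation used in this paper), the following Python sketch builds an attractive–repulsive potential over the six joint angles and descends its numerical gradient; the gains, influence radius, obstacle model, and step size are all illustrative assumptions.

```python
import numpy as np

def potential(q, q_goal, obstacles, k_att=1.0, k_rep=1e4, d0=30.0):
    """Attractive term toward the goal plus repulsive terms near obstacle poses."""
    u = 0.5 * k_att * np.sum((q - q_goal) ** 2)
    for q_obs in obstacles:
        d = np.linalg.norm(q - q_obs)
        if 1e-9 < d < d0:               # repulsion only inside the influence radius
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def descend(q_start, q_goal, obstacles, step=0.5, iters=2000, eps=1e-3):
    """Numerical gradient descent on the potential; may stall in a local minimum."""
    q = np.asarray(q_start, dtype=float).copy()
    path = [q.copy()]
    for _ in range(iters):
        grad = np.zeros_like(q)
        for i in range(q.size):         # central-difference gradient per joint
            dq = np.zeros_like(q)
            dq[i] = eps
            grad[i] = (potential(q + dq, q_goal, obstacles)
                       - potential(q - dq, q_goal, obstacles)) / (2 * eps)
        q = q - step * grad / (np.linalg.norm(grad) + 1e-9)
        path.append(q.copy())
        if np.linalg.norm(q - q_goal) < 1.0:
            break
    return np.array(path)

# Example: 6-axis joint poses in degrees, one obstacle configuration (placeholders).
q_start = np.zeros(6)
q_goal = np.array([40.0, -30.0, 60.0, 0.0, 45.0, 90.0])
obstacles = [np.array([20.0, -15.0, 30.0, 0.0, 20.0, 45.0])]
trajectory = descend(q_start, q_goal, obstacles)
```

The three coefficients discussed below (attraction, repulsion, and descent rate) correspond to `k_att`, `k_rep`, and `step` in this sketch.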

However, actual experiments show that the potential field method needs at least the coefficients of attraction, repulsion, and descent rate to establish a suitable potential field, so setting these three coefficients is the key. For ordinary path planning with a two-link robot arm, the configuration space is two-dimensional, so the potential field is three-dimensional, and its shape can be seen clearly in order to adjust the three coefficients. However, in a high-dimensional configuration space, such as the six-dimensional configuration space of a 6-degree-of-freedom robot, the generated potential field is seven-dimensional, so it is difficult to adjust the three parameters to obtain a suitable potential field.

Finally, the potential field method suffers from local minima, which can prevent it from reaching the target pose. In summary, the sampling-based PRM is chosen in this paper.

# **2.2 PRM**

PRM [1] first creates a series of random poses, eliminates those that collide with external objects, and keeps the collision-free ones. After establishing a certain number of sample points, PRM searches for neighboring sample points around each sample point and connects each pair it finds. If the motion path between two sample points is collision-free, their connection is added to the sample point map. Once the sample points are connected, the A* algorithm can be used to find the shortest route.
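The roadmap construction and graph search described above can be sketched compactly as follows; the collision checks are placeholders (in practice the simplified cylinder and point tests of Sect. 3.1 are used), and the sample and neighbour counts are illustrative. The start and goal poses would additionally be connected to their nearest roadmap nodes before searching.

```python
import heapq
import numpy as np

JOINT_LIMITS = np.array([[-180.0, 180.0]] * 6)   # degrees, assumed limits

def is_collision_free(q):
    """Placeholder: replace with the simplified cylinder/point test of Sect. 3.1."""
    return True

def edge_free(qa, qb, n_checks=5):
    """Sparse collision checks along the straight segment between two poses."""
    return all(is_collision_free(qa + t * (qb - qa))
               for t in np.linspace(0.0, 1.0, n_checks))

def build_roadmap(n_samples=300, k=8, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = JOINT_LIMITS[:, 0], JOINT_LIMITS[:, 1]
    samples = rng.uniform(lo, hi, (n_samples, 6))
    nodes = [q for q in samples if is_collision_free(q)]
    edges = {i: [] for i in range(len(nodes))}
    for i, q in enumerate(nodes):
        dists = np.array([np.linalg.norm(q - p) for p in nodes])
        for j in np.argsort(dists)[1:k + 1]:          # k nearest neighbours
            if edge_free(q, nodes[j]):
                edges[i].append((int(j), dists[j]))
                edges[int(j)].append((i, dists[j]))
    return nodes, edges

def astar(nodes, edges, start, goal):
    """A* over the roadmap with a Euclidean heuristic to the goal node."""
    h = lambda i: np.linalg.norm(nodes[i] - nodes[goal])
    frontier = [(h(start), 0.0, start, [start])]
    visited = set()
    while frontier:
        _, g, i, path = heapq.heappop(frontier)
        if i == goal:
            return path
        if i in visited:
            continue
        visited.add(i)
        for j, w in edges[i]:
            if j not in visited:
                heapq.heappush(frontier, (g + w + h(j), g + w, j, path + [j]))
    return None

nodes, edges = build_roadmap()
route = astar(nodes, edges, start=0, goal=len(nodes) - 1)
```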

Since rotating 0 degrees is the same as rotating 360 degrees (the angles are in degrees), the following distance function is needed:

$$\mathrm{Dist}(\theta_{goal} - \theta_{start}) = \min\big(|\theta_{goal} - \theta_{start}|,\; 360 - |\theta_{goal} - \theta_{start}|\big)$$

Because the energy required to rotate each axis is different, distance weights can also be applied to the six joints (each pose Ang consists of six joint angles), with weight values $C_i$ that can be set according to the energy consumption and range of influence of each rotation:

$$\mathrm{Dist}(Ang_{goal} - Ang_{start}) = \sum_{i=1}^{6} C_i \cdot \mathrm{Dist}\big(\theta_{i,start} - \theta_{i,goal}\big)$$
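In code, these two distance definitions amount to only a few lines; the weights below are illustrative placeholders rather than values used in the paper.

```python
# Wrap-around distance per joint and weighted distance per pose (angles in degrees).
def joint_dist(theta_goal, theta_start):
    d = abs(theta_goal - theta_start) % 360.0
    return min(d, 360.0 - d)

def pose_dist(ang_goal, ang_start, C=(1.0, 1.0, 1.0, 0.5, 0.5, 0.25)):
    return sum(c * joint_dist(g, s) for c, g, s in zip(C, ang_goal, ang_start))
```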

# **3 Implement**

### **3.1 Collision**

As mentioned above, when building the map of sampled points, collision detection needs to be performed for each sampled point. In practice, using Rhino's mesh-to-mesh collision detection is too time-consuming, so the method needs to be simplified. Specifically, the robot parts are simplified to basic geometry, such as cylinders or spheres, and the external collision objects are simplified to multiple points; these basic geometric objects are then used for collision detection, which greatly improves the detection efficiency.

The robot's A2, A3, A4, and A6 joint positions are extracted and connected to form three axes, and by setting a safety radius, such as 200 mm, for each axis, a corresponding cylinder can be formed. Choosing cylinders speeds up the collision calculation: a collision is detected by checking whether the shortest distance between the external collision point and the cylinder axis is less than the sum of the cylinder radius and the radius of the external collision point [2]. The A1 joint is not included because, in general work, the robot base is stationary and the range of motion of the A1 joint is small, so its detection can be ignored.
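A minimal sketch of this simplified test is shown below, assuming placeholder joint positions and radii in millimetres; a pose is rejected as soon as any of its axis cylinders collides.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Shortest distance from point p to the segment a-b (a cylinder axis)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def axis_collides(obstacle_points, axis_start, axis_end,
                  safety_radius=200.0, point_radius=25.0):
    """True if any obstacle point intrudes into the safety cylinder of one axis."""
    limit = safety_radius + point_radius
    return any(point_segment_distance(p, axis_start, axis_end) < limit
               for p in obstacle_points)

# Example with placeholder joint positions: the axis between A2 and A3.
a2 = np.array([0.0, 0.0, 400.0])
a3 = np.array([0.0, 100.0, 1100.0])
cloud = [np.array([50.0, 150.0, 800.0]), np.array([900.0, 0.0, 400.0])]
print(axis_collides(cloud, a2, a3))
```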

In addition to the robot's own parts, the tool head also needs collision detection against external objects, but its shape is rather irregular. By default, we use a line segment from the flange plate to the tool head coordinates as the axis and set a relatively large safety radius. However, this is not a good approximation, and it is more appropriate to assign multiple cylindrical axes with corresponding safety radii to the tool head (Fig. 2).

**Fig. 2.** Using different cylinders as collision detection objects for robots

After the sample points are created, it is necessary to find the neighboring sample points of each sample point and then test the connections between them for collisions. It is not practical to run a large number of collision tests along the entire line segment; instead, as few intermediate poses as possible should be tested while still ensuring that there are no collisions. When the distance between sampling points is small, such tests may not even be necessary.

Once the sample points are created, they can be saved, since they remain valid as long as no new obstacle objects are added and only the robot's start and target poses change. In our tests, the time required to create the sampling points is typically three to five times longer than the time required to compute the shortest path.

### **3.2 Trajectory**

### **3.2.1 By Grasshopper**

The shortest path derived by the A* algorithm is not smooth; we can use a polynomial least-squares fit or Rhino's spline curve to smooth it.

The specific steps to use Rhino's spline curve are:

1. Take the values of the first three axes A1, A2, A3 of the robot arm as the x, y, z components of a first 3D point, and the values of the last three axes A4, A5, A6 as the x, y, z components of a second 3D point.


The fitted path changes the poses generated along the shortest path (Fig. 3), so it is better to check for collisions again.

This trajectory smoothing approach is relatively easy to implement but requires the use of Rhino's existing commands, so in general situations other approaches may be chosen.

**Fig. 3.** Smooth path after fitting

# **3.2.2 Polynomial**

After verification, it is feasible to use polynomials to find a smooth path. As an example, suppose we first obtain 5 poses by PRM; we can then list the following polynomial constraints and combine them into a matrix.

$$X = \begin{bmatrix}
1 & t_0 & t_0^2 & t_0^3 & t_0^4 & t_0^5 & t_0^6 \\
0 & 1 & 2t_0 & 3t_0^2 & 4t_0^3 & 5t_0^4 & 6t_0^5 \\
1 & t_1 & t_1^2 & t_1^3 & t_1^4 & t_1^5 & t_1^6 \\
1 & t_2 & t_2^2 & t_2^3 & t_2^4 & t_2^5 & t_2^6 \\
1 & t_3 & t_3^2 & t_3^3 & t_3^4 & t_3^5 & t_3^6 \\
1 & t_4 & t_4^2 & t_4^3 & t_4^4 & t_4^5 & t_4^6 \\
0 & 1 & 2t_4 & 3t_4^2 & 4t_4^3 & 5t_4^4 & 6t_4^5
\end{bmatrix}$$

We divide the time equally among the 5 poses and assume that the robot takes 1 s to go through the 5 poses, so we have:

$$t_0 = 0,\quad t_1 = 0.25,\quad t_2 = 0.5,\quad t_3 = 0.75,\quad t_4 = 1$$

$t_i$ represents the time at which the robot reaches the $i$-th pose. Rows 2 and 7 of $X$ represent the joint velocities at the starting and final poses respectively; the robot is stationary at both, so the corresponding entries are 0. The remaining rows correspond to the joint angle values at the respective poses, so we have:

$$A = \begin{bmatrix} A_0 & 0 & A_1 & A_2 & A_3 & A_4 & 0 \end{bmatrix}^T$$

Set the polynomial coefficients:

$$B = \begin{bmatrix} B_0 & B_1 & B_2 & B_3 & B_4 & B_5 & B_6 \end{bmatrix}^T$$

Since $XB = A$ and $X$ is a square matrix, $B$ can be solved directly by inversion; this is one of the reasons why the $X$ matrix is designed in this way. In this case the number of known poses is 5, so the dimension of the $X$ matrix is the number of known poses plus 2 (the two velocity constraints), and the same construction applies for any number of poses obtained from PRM.
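The construction can be reproduced numerically in a few lines for a single joint axis; the pose values below are placeholders.

```python
import numpy as np

t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])            # times of the 5 PRM poses
A_poses = np.array([10.0, 25.0, 40.0, 30.0, 50.0])    # joint angle at each pose (deg)

n = len(t) + 2                                        # 5 poses + 2 velocity constraints

def pos_row(ti):                                      # [1, t, t^2, ..., t^6]
    return np.array([ti ** k for k in range(n)])

def vel_row(ti):                                      # [0, 1, 2t, ..., 6t^5]
    return np.array([k * ti ** (k - 1) if k > 0 else 0.0 for k in range(n)])

X = np.vstack([pos_row(t[0]), vel_row(t[0]),
               *[pos_row(ti) for ti in t[1:]],
               vel_row(t[-1])])
A = np.array([A_poses[0], 0.0, *A_poses[1:], 0.0])

B = np.linalg.solve(X, A)                             # polynomial coefficients
theta = lambda ti: pos_row(ti) @ B                    # smooth joint trajectory
```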

In the case of real-time control of the robot, the advantages of this method over the previous ones are:


**Fig. 4.** Polynomial, PRM and curve interpolate

Taking Fig. 4 as an example, it can be seen that the curve interpolation does not conform well to the poses from the PRM, while the polynomial interpolation does.

# **3.3 Check**

After path planning, the path can be checked once more depending on the situation. Global collision detection can be performed, or points that may cause problems can first be identified and detection performed only on those points. For example, for every sampled pose, the distances between the endpoints of the substitute collision cylinder axes of the robot body (including the tool head) and the point cloud of all surrounding obstacles can be computed, and collision detection is then performed only on the sampled poses with the smallest distances. If a problem is detected, the random seed used to generate the map of sampled points is changed, a new calculation is performed, and the new shortest path is generated and checked again.

# **4 Example**

Here is an example that briefly introduces the workflow for bricklaying. The robot lays bricks and picks up bricks at the same time. If there is already an obstacle, path planning is needed; but even without obstacles, the height of the brick wall constantly increases during the process, so the wall itself can be considered a changing obstacle. It is of course possible to manually write a program that follows the height of the brick wall with preset paths, but this increases the programming workload, so motion planning can be used to automatically generate the brick-moving motion (Fig. 5).

**Fig. 5.** Automatic generation of brick picking and bricklaying movements through motion planning

### **4.1 Component**

The most important part of the whole process is the motion planning component of FURobot, which has the following input interface (Fig. 6):


**Fig. 6.** Motion planning component of FURobot

# **5 Future Work**

In the Grasshopper and FURobot environments, using the improved method described above, a near-real-time path planner can be made (as fast as 200 ms, and as slow as 1–2 s when the sampled map has to be recalculated, on an i7 2.7 GHz CPU) and can be used in offline or online construction projects.

Next, the motion planning component can be encapsulated into a single instruction, which becomes a motion instruction alongside the existing straight-line instruction, point-to-point instruction, etc. The difference is that the motion planning instruction appears in the actual program as a sequence of angular instructions that guide the entire obstacle-avoidance process. This significantly reduces the amount of work and the number of components in the Grasshopper program.

The PRM algorithm may not be the best choice. In future work, the RRT algorithm, or other algorithms that can be processed in parallel, can be tried, and the creation of the sampled maps can also be parallelized.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Reinventing Staircases for Thermoplastic Additive Manufacturing**

Mirko Daneluzzo1(B) and Michele Daneluzzo<sup>2</sup>

<sup>1</sup> Dubai Institute of Design and Innovation, Ras Al Khor Road, Bldg 4 Dubai Design District, Dubai, UAE mirko.daneluzzo@didi.ae

<sup>2</sup> NYXO Visionary Design, In5 Design, Ras Al Khor Road, Bldg 4 Dubai Design District, Dubai, UAE

michele@nyxostudio.com

**Abstract.** The paper presents an ongoing project focusing on the application of additive manufacturing technologies to the design of staircases. Additive digital fabrication allows architects to reinvestigate materials and processes, and creates new design opportunities to explore novel aesthetic and functional expression in architecture, enabling a reinterpretation of the staircase typology using thermoplastic materials. This paper reviews the opportunities and challenges of using 3D printing to fabricate custom stairs with complex geometries in two studied configurations.

**Keywords:** 3D printing · Additive manufacturing · Staircase · Architectural components · PET-Carbon

# **1 Additive Manufacturing in Architecture: From the Large-Scale of the Structure to the Medium-Scale of Interior Products**

The usage of additive manufacturing (AM) in architecture has shifted from producing scale models to full-scale end-products, and this is usually referred to as large-scale 3D printing. As the term suggests, it is a special type of AM that specializes in erecting large-scale, heavy, and often permanent structures (Al Jassmi 2018). While these technologies have been successful for small-scale purposes, scaling up 3D printing for construction, and replacing conventional building methods, is still a challenge (ibid.). The companies investing in the field focus their attention on the architectural envelope, that is, on how to build the main body of the building, usually incorporating structure and cladding. Although some printing technologies use, for example, locally sourced clay, like those of the Italian company WASP, the state of the art of large-scale AM in architecture is largely dominated by concrete-based research efforts. Concrete is the world's most widely used engineered material (Ashby 2012), and its low cost, compressive performance, and versatility have made it the material of choice.

It is possible to identify two main approaches to the application of AM in architecture: the first is the in-situ fabrication of the whole building (or at least its printable part) in a continuous fashion; the second is the in-factory prefabrication of components. Researchers at ETH Zurich apply the latter approach to the design of staircases, using prefabricated 3D-printed formwork for the manufacturing of custom concrete stairs (Jipa 2019). With the same logic of 3D-printed formworks, the Dutch company Aectual showcased a monolithic concrete staircase at Dubai Design Week 2020. The projects analyzed in this paper operate in the same domain of prefabrication but suggest a different approach, in which no formwork is used to shape the product. The numerically controlled fabrication process materializes the digital model using only the material needed, embedding the architectural and structural parts in a single body, without the need to cast other materials.

The research is under development as a project of Nyxo Studio, the practice co-founded by the authors. The project was initiated in 2019 to address alternative applications of 3D printing at the architectural scale and involves the authors and Cristian Li Voi as assistant designer. The research practice of the authors aims to have a direct impact on the market, so these projects evaluate both technical and economic feasibility. The projects illustrated in this paper are part of a series of reflections on the application of AM in architecture, shifting the attention from the large-scale approach to a medium-scale one. These medium-scale applications range between the scale of furniture and that of a room. The authors believe that the exploration of this spectrum is just beginning and that it offers great potential for expansion in the near future.

# **2 Fused Granular Fabrication for Medium-Scale Objects**

Large-scale robotic thermoplastic printing for manufacturing finished objects is still a young technology. The main products realized so far with these technologies are furniture pieces, such as chairs and vases, printed with different thermoplastics like PLA, ABS, PETG, TPU, and PET. The projects presented here are inspired by these applications in the furniture field. The staircase was indeed conceived, from the beginning, as a discrete object in which each unit has the size of a furniture piece. This analogy helped channel the characteristics of the unit as a lightweight object, something mobile and easy to handle as a single piece. 3D printing is used to build the entire body of the module in an integrated way, avoiding any additional components. This analysis suggests avoiding materials with a very high unit weight, such as concrete, while still requiring decent tensile and compressive strength. Dirk Van der Kooij's exploration with his Endless Chair (2010) is one of the first examples of the evolution of the Fused Deposition Modeling (FDM) prototyping technique to a fabrication scale, using thermoplastics in the form of pellets. More commonly used in injection molding, pellets are not only less expensive but come in a wider range of materials than filament, including a long list of recycled and sustainable products. Pellet-based 3D printers can print large objects more quickly than filament-based printers because a high volume of pellet material is fed into the extruder through a larger nozzle. This process is usually referred to as Fused Granular Fabrication (FGF), and it is setting a standard for printing large objects with thermoplastic materials. FGF is a very attractive and promising technology for medium-scale architectural components, due to its speed, its capacity to print different materials, and its availability at large scales. FGF is also spreading in the industry because of the relatively low cost of the raw materials: thermoplastic pellets can cost as little as \$2.5 per kg.

FGF is available in two main configurations: gantry-based and robotic-arm-based. In the first category, examples are the MarkI and MarkII printers by the Belgian company Colossus; the Delta 3MT by the Italian company WASP can also be placed in this group. Even though the Delta printer is equipped with a three-arm stabilization system and the machine's degrees of freedom are potentially higher than those of a gantry system, the printing process is the same, characterized by a horizontal stratification of the layers.

In the family of robotic-arm printers, there are companies such as the UK-based Ai Built, the Spanish Nagami, the Dutch Aectual, and the American company Branch Technology. The advantage of working with a robotic arm is the possibility of printing with the Tangential Continuity Method (TCM) (Al Jassmi 2018), which overcomes the horizontal layering and also makes it possible to print on pre-defined molds, as in the case of Zaha Hadid's Bow Chair and the Rise Chair manufactured by Nagami. Another approach offered by the robotic-arm technique is the definition of spatial 3D lattice structures. Both cases make use of the increased degrees of freedom of a 6-axis robotic arm to generate a building path that can go beyond the horizontal stratification of a 3-axis machine.

For the project discussed in this paper, the choice fell on a gantry-based system because of its relatively lower and simpler operational costs compared with a robotic-arm printing system, also considering the possibility for a hypothetical company to invest in the acquisition of the machine.

# **3 Why 3D Printing a Staircase?**

Non-standard stairs have an important role in architecture, but their complex details pose significant fabrication challenges. One of the preferred materials for custom stairs is steel, which can be shaped in many ways, but at high cost and with many components. 3D printing can unlock an entirely new vocabulary of shapes, previously unavailable with traditional systems and materials like steel, wood, and concrete. Only a minimal amount of 3D-printed plastic is required to deliver a very thin, stable shell. Complex topologies can be achieved with less effort; such elements can optimize structural performance or improve functional aspects, as well as introduce a radically different aesthetic.

The market for interior non-standard staircases, leaving aside in-situ concrete stairs, goes in two main directions. On the one hand, there is the umbrella of bespoke products; on the other hand, there is the family of industrial in-kit products. The first case is generally defined by a specific design adapted to the context in every single component, which is also one of the main factors that raise the prices of these products. The overall result is usually intended to be organic and coherent as a whole.

The in-kit products are instead meant to be mass-manufactured to reduce design and manufacturing costs. In this staircase typology, adaptability is shifted from the design phase to the assembly. The interaction between the standardized components is designed to guarantee a range of configurations. This advantage has a side effect from the aesthetic perspective: it is difficult to achieve a result that seems designed for the specific case, and these products usually manifest this adaptation through discontinuities.

AM could bridge the two typologies with an affordable on-demand customized fabrication, creating staircases that can adapt to a specific context without losing the overall aesthetic quality.

As indicated in the CSC Leading Edge Forum (2012), the advantages of using 3D printing for such a product for interiors and medium-scale architectural components can be listed as follows: affordable customization; manufacture of more efficient designs; lighter, stronger parts with less assembly required; one machine, unlimited product lines; efficient use of raw materials (less waste); pay by weight; complexity is free; batches of one, created on demand; printing at the point of assembly/consumption; new supply chain and retail opportunities. The same report highlights areas still in need of further development: in particular, printing large volumes economically, expanding the range of printable materials, using multiple materials in the same printer, and improving the durability and quality of the final result. All these benefits make it worth exploring AM technologies for real applications. Architects are offered the chance to reinvent architectural components and the ecology of materials of the built environment by exploring forms and processes previously deemed impractical or inconceivable.

With innovative construction materials and methods and better decision-making systems, not only are projects getting smarter, but there is also an opportunity to build our environment more sustainably (Beyhan 2018).

# **4 Main Characteristics and Goals**

One of the first goals considered in the design process of the staircase was the reduction of the number of components, to speed up and simplify assembly operations and, as a direct consequence, to shorten the supply chain. A major problem with in-kit staircases is indeed the large number of components, which require specialized labor and a long time to assemble into the final product. The staircase is therefore conceived as a discrete assembly with components no bigger than small furniture, to avoid logistic problems. The module thus incorporates riser, tread, and stringer in one single element. This logic makes the module lightweight, easy to transport, and quick to install. The connection between the components is guaranteed by standard mechanical hardware usable for every size or configuration. In summary, the characteristics driving the design process are: integration of the components (riser, tread, stringer); lightweight; easy assembly; quick manufacturing; adaptability to specific configurations (with an organic overall aesthetic).

For structural integrity, and due to the use of FGF technology, the module, which is a single step, is described as an open volume, or an inverted vase, with a section that allows a continuous printing path. The step is thus a hollow volume in which the mechanical stress is handled by twisted surfaces operating as struts thanks to their triangular configuration. The dynamic section of the single module is designed to optimize the printing time while guaranteeing the required mechanical strength. Adaptability is achieved through a parametric model of the module, and adjustments to the module affect production costs only through the quantity of material used.

# **5 BRep Stereotomy: Conceptualization of the module's Geometry**

The design process used to define the geometry of the modules is inspired by the art of stereotomy in gothic stone spiral staircases. Instead of working by subtraction, AM requires a digital model with a specific articulation of the surfaces defining the volume. This is an example of extending the use of computer-aided design (CAD) from a medium of representation to a medium of design and manufacturing (Celani 2002). Future architects are expected to become robotic-aware, in other words, able to consider the robotic arm's constraints in the design of a given building element (Al Jassmi 2018). In this spirit, the proposed design process shows that understanding the fabrication process, with its limits, and the material behavior is important to define the physical articulation of the object as a synthesis between aesthetics and mechanical performance. This methodology is compatible with most CAD packages that describe geometric objects as collections of individual surfaces joined along their edges. Boundary representations (BRep) are highly efficient and offer a lot of flexibility in terms of design. The combination of BRep design and 3D printing for the construction of hollow components reduces manufacturing-related resource inputs because it requires only the amount of material that ends up in the print, without too many losses (Reeves 2008). In addition, it makes the component lightweight and therefore easy to handle.

# **6 Design Configuration 1: Oblique Interlocking**

The volume of the step is here defined by four main surfaces: the tread, the riser, and two opposite surfaces generating a quadrangular shape in the end sections. The staircase slope is used to define the perpendicular connection surfaces between the modules. This inclination consequently determines the riser and its opposite back face. The riser and the back face are articulated on the outer shell so as to obtain a rebated joint; chemical adhesives are additionally used to bond the modules together (Fig. 1).

**Fig. 1.** Configuration 1, (a) rendering of two disjointed modules. (b) Diagrams showing the articulation of the inner and outer surface and relative areas of connection.

The inner surface is designed as a twisted element that interpolates the lateral section into the middle one, exactly where the step needs structural reinforcement. The tread, for example, needs to be a large flat surface to guarantee comfortable usage. As a single thin layer of material subjected to the direct stress of the weight of the user, it needs reinforcements. The inner surface indeed articulates to distribute the loads through triangulations (Fig. 1b, Fig. 3a). The combination of the box-like outer shell with the closed triangulated geometry in section, and the twisted surface configuration, guarantees mechanical efficiency with thin layers, avoiding deformations due to compression and tensile forces, and torsions of the planar surfaces.

**Fig. 2.** Configuration 1, rendering of two versions of the module: Closed Shape (a) and open shape (b).

In an attempt to save material, a version was developed without the surface opposite the tread, leaving a connection between the riser and the back face only in the central part (Fig. 2b). This has been evaluated only in a scale model, showing potential for further investigation.

**Fig. 3.** Configuration 1, photos of the prototype. The module is printed in PLA with a 5 mm nozzle. The layer height is 1 mm for a total mass of 12.5 kg.

# **7 Design Configuration 2: Horizontal Stacking**

The second configuration (Fig. 4) is based on the horizontal stacking of the components, relying on mechanical fasteners to allow easy assembly and disassembly (Fig. 5b). To maximize the mechanical performance of the module, the extremities are based on triangular shapes. The linear connection between the two sections is expanded on the back to form an arc, which defines the boundary surface of connection between the modules. The outer surface of the module is simply the interpolation of the riser, the tread, and the curved surface of the back: the analysis of the sections reveals the transition from a triangular shape at the ends to a trapezoidal one in the middle sections. The inner surface traces the triangular shape on one side and then interpolates it to the riser profile on the opposite side (Fig. 5a). This leaves one side closed and the other open, to insert the fastenings for the mechanical connections and for inspection. In the spiral version, the open side is the inner one, where the triangle is smaller. As in the previous case, the articulation of the inner surface is necessary to reinforce the structural integrity of the whole step. The flat surface of the tread is indeed larger and needs a strut to avoid deflection under load. The inner shell is thus used to generate two twisting surfaces that reinforce the tread. The area between the two curves has a double layer, but the loads are distributed down to the base of the step (Fig. 5a).

**Fig. 4.** Configuration 2, Photograph of the scale models. The module is developed in a spiral (a) and linear (b) version.

A series of steps has been produced with the printers manufactured by the Belgian company Colossus, using a carbon-fibre-filled (10–20%) recycled PETG (80–90%) material. The steps have been printed with a 5 mm nozzle and a layer height of 1.8 mm. Printing a single step at that resolution takes between 4.5 and 5.5 h. The process is slowed down by the high resolution and precision required by the fact that this is a final product. It must also be considered that the printing speed is adjusted during the print process: the step is printed with a faster speed at the base, where the shape is larger, and with a slower speed at the top, where the contour lines become smaller.

**Fig. 5.** Configuration 2, diagrams of the Spiral module. (a) Development of the section using the printing orientation. The design takes into consideration the limitation of the cantilever walls. (b) Assembly of the modules using mechanical fasteners.

This is related to the temperature of the material and its solidification: if the speed is not reduced, the extruded part remains too hot for the next layer, causing unwanted deformations. Particular care must be taken in the definition of the edges. Sharp edges are difficult to print because the robot cannot change its direction of movement while keeping the printing speed constant. This is, however, an advantage, because filleting the edges increases the solidity of the connection between the surfaces.

**Fig. 6.** Configuration 2, Photographs of three modules prototype. (a) Front, (b) Side, (c) 2452 N payload test.

The final mass of the single step is about 10.8 kg. In the authors' experience, the average mass per step for a custom staircase with a steel structure and sheet-metal treads is between 17 and 30 kg. An empirical series of tests was carried out to verify the integrity of a single step under a distributed load of 5394 N. Three mechanically fastened steps were also tested with a payload of 2452 N on the first and second steps (Fig. 6c). The promising results of these initial verifications open the possibility of a more accurate series of tests to verify deflections and breaking load, as well as the testing of an entire staircase.

# **8 Conclusions and Further Development**

This paper proposes an approach to FGF design that balances process efficiency with a strong design focus, aiming to articulate a performative yet unique aesthetic quality for staircase design. The combination of BRep design and AM possesses a wide range of architectural qualities that remain to be explored, and it has the potential to radically change design and construction processes, and with them the language and identity of contemporary architecture, through functional, hollow structures.

After this first phase of testing, which demonstrates the viability of the approach and sets a reference point, much further research has to be done on aspects such as structural and mechanical stability, material life, fire resistance, the toxicity of materials, etc. In particular, regarding the material and printing process, there are opportunities to explore other recycled materials or bio-plastics. Another study concerns the adoption of TCM processes using robotic-arm-based printing systems, especially in the case of spiral staircases. Finally, new fabrication methods also suggest the definition of custom computational tools embedding the printing and material properties.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Joint Descriptive Modeling (JDM) for Assembly-Aware Timber Structure Design**

Ayoub Lharchi(B) , Mette Ramsgaard Thomsen, and Martin Tamke

Centre of Information Technology and Architecture, Copenhagen, Denmark alha@kglakademi.dk

**Abstract.** Joint design is an essential step in the process of designing timber structures. Complex architectural topologies require thorough planning and scheduling, as it is necessary to consider numerous factors such as structural stability, fabrication capabilities, and ease of assembly. This paper introduces a novel approach to timber joint design that embeds both fabrication and assembly considerations within the same model, to avoid mistakes that might cause delays and further expenses. We developed a workflow that allows us to identify the fundamental data needed to describe a given joint geometry, machine-independent fabrication procedures, and the assembly sequence. Based on this, we introduce a comprehensive descriptive language called the Joint Descriptive Model (JDM) that leverages industry standards to convert a joint into usable output for both fabrication and assembly simulations. Finally, we suggest a seed for a joint library containing some common joints.

**Keywords:** Design for assembly · Joints design · Timber structure · Assembly information modeling

# **1 Introduction**

With the rapid advances in computer-aided design tools and parametric modeling in architecture, designing, manufacturing, and assembling buildings with complex geometry is becoming more accessible, especially within the timber construction field. Simultaneously, the proliferation of digital fabrication machines is pushing toward an uninterrupted design-to-fabrication chain (Beorkrem 2017). Complex structures can then be described with an exhaustive parametric model that breaks the complexity down into several separate, relatively simple elements, making it more manageable.

While these parametric models can provide an extensive overview of the design, the seamless translation from design model to fabrication files to assembly instructions is still limited to research and academic contexts, where the global design parameters are known in advance (Stehling et al. 2014). In a large-scale project, it is very difficult (and often impossible) to have an overview of the different components, fabrication techniques, and logistic considerations during the early design phase. This is because not all stakeholders are known initially, in addition to other aspects such as administration or tendering that are not decided until later stages. This requires multiple iterations of the design, particularly in the case of timber structures, where slight changes to the geometry significantly impact the connections, which constitute a decisive part of the structure (Willmann et al. 2016), and therefore the fabrication and assembly strategies as a whole. Concepts and tools that support such an integrative design procedure are missing.

In this paper, we present the Joint Descriptive Model (JDM). This expressive model is intended to represent timber joints regardless of their position in the structure and to facilitate the production of machine-independent fabrication data. Finally, we enrich this model with assembly instructions for a comprehensive design process.

# **2 Background**

### **2.1 DfMA in Architecture**

Design for X (DfX) is a philosophy that has been around for several decades (Boothroyd 1987; Eastman 2012). It aims, in general, to ensure the quality of products or services and, at the same time, to optimize the manufacturing procedure and minimize life-cycle costs (Gatenby and Foo 1990). DfX is a generic term where X stands for any critical consideration during design that profoundly affects the outcome: for example, manufacturing (Design for Manufacture – DfM), assembly (Design for Assembly – DfA), testability (Design for Testability – DfT), and many others. DfM mainly looks into the optimization of methods and procedures for making or fabricating individual parts, while DfA is concerned with how those parts are put together to constitute the final product. As most products are complex and made of several intricate elements, these two disciplines are often considered together and constitute Design for Manufacture and Assembly (DfMA) (Bogue 2012).

Recently, several researchers (Gao et al. 2020; Tan et al. 2020) looked into applications of DfMA in architecture, and while it is present to some degree, it is still arguably yet to be implemented efficiently. Such an approach would be particularly beneficial in timber construction, where it is essential to deal with fabrication processes (both analog and digital) and off/onsite assembly strategies.

# **2.2 Joints in Timber Architecture**

Timber structures are usually composed of multiple elements that are put together onsite with or without additional fasteners. It is essential to think about the connections between these elements. As mentioned earlier, joints play a crucial role; therefore, a particular focus has been given to the design of connections both in academia and industry. In general, we distinguish between two types of joint systems:


**Fig. 1.** Example of a finger joint with additional fasteners (left) and integral joints (right). Source: ICD, University of Stuttgart (left).

While these two types of joints require different design approaches, they are fundamentally similar as they both need extensive planning for fabrication and assembly. Within the scope of this work, we focus on integral joints.

### **2.3 Assembly Information Modeling**

This research builds upon previous work on design for assembly in an architectural context. It is developed as an extension of an existing framework, Assembly Information Modeling (AIM) (Lharchi et al. 2019). AIM is a digital framework designed to integrate assembly considerations into the design process. Based on principles from DfA, it aims to include all the data necessary to describe an assembly sequence precisely (including the geometry, directions, sequence, and environment). The flexibility of the model allows many applications, such as cloud collaboration or augmented-reality-assisted assembly (Lharchi et al. 2020). The assembly data is stored in a single model called the Assembly Digital Model (ADM). In this research, we use the ADM as an intermediary container to enrich a classic joint model with assembly data.

The use of AIM allows a description and simulation of the assembly process, and thereby the evaluation of the chosen construction and assembly strategies. However, fabrication is inherently linked to these strategies, as it dictates the nature of the joints that can be fabricated. Methods to embed fabrication information in a single model alongside other data (geometry, assembly, logistics) are still missing.

### **2.4 Practices in Timber Fabrication**

Several methods have been suggested to exchange fabrication data within the timber industry. This is achieved exceptionally well by the Building Transfer Language initiative.

Building Transfer Language (BTL) is an open standard that is developed and maintained by SEMA and CadWork. It provides a parametric description of the geometry of a timber building component (Al-Qaryouti et al. 2019). There are two available variants: BTL and BTLx. The latter is an improved version of the former, and offers a modern XML-based syntax. BTL is purposely not machine-specific, and the processing is defined through the results rather than the actual machining process (Stehling et al. 2014). This machine-agnostic approach releases the design from requiring a precise machining environment at the time of design (Fig. 2).

**Fig. 2.** Machining operations as described by BTL. Source: https://design2machine.com.

Many CAM software packages adopt BTL, but very few CAD modeling tools offer usable interfaces to analyze geometric features and export them to BTL. Woodpecker (Stehling et al. 2014) is a free plug-in for Grasshopper<sup>1</sup> used for BTL export. Overall, the design process should focus on ensuring a seamless data flow from CAD to CAM (Tamke and Ramsgard Thomsen 2008).

# **3 Methodology**

To provide a generic model capable of accurately describing the geometry of a joint, independently of machine specifications, while also including assembly information, we looked at existing practices within timber fabrication to identify a base for our proposed language. Based on standard industry representations, we defined a set of extendable specifications. Finally, we provided a seed for a joint catalog that can be used to speed up the design process for timber structures.

# **3.1 Timber Machining Methods**

We classify machining methods in timber into three categories:


<sup>1</sup> https://www.rhino3d.com/.

To support the export of machining operations, we considered the AutoCAD DWG format and the AutoCAD Drawing Exchange Format (DXF). While DWG is more advanced and supports more features, it is proprietary and requires specific licensing; the DXF format was therefore chosen to represent 2D and 2.5D operations. All other machining operations are exported using the BTL format presented earlier.

### **3.2 JDM Implementation**

The proposed JDM implementation is divided into two parts. The first part, "JDM.Core", is a tool that analyzes intersections in the structure (curves and vectors) and is capable of generating the joint geometry. The second part analyzes the geometric features of that joint and generates a corresponding BTL file.

The JDM.Core library was written in C#<sup>2</sup>, an object-oriented programming language that leverages the .NET Core framework. While the core library can run on different operating systems (Windows, Linux, macOS), we focused on the Windows platform and the Grasshopper environment to demonstrate the potential of the proposed method.

To generate BTL files, we used the Grasshopper plug-in "Woodpecker" presented earlier (Stehling et al. 2014). As it does not expose an Application Programming Interface (API), we used an intermediary interface exposed by Grasshopper, "Ghpythonlib.Components". This interface allows direct interaction and scripting with installed plug-ins even when no API is exposed.
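The fragment below sketches this mechanism in GhPython; the component name and its arguments are hypothetical placeholders and do not reproduce the actual Woodpecker interface.

```python
# GhPython sketch: components installed in Grasshopper are exposed as callable
# functions through ghpythonlib.components, even when a plug-in offers no API.
# "WoodpeckerBTLExport" and its arguments are hypothetical placeholders.
import ghpythonlib.components as ghc

beam_brep = x          # beam geometry wired into this GhPython component
features = y           # machining features detected by JDM.Core

# Hypothetical call: a Woodpecker-style component writing a BTL process
# from a part geometry and its machining features.
btl_part = ghc.WoodpeckerBTLExport(beam_brep, features)

a = btl_part           # returned to the Grasshopper canvas as output "a"
```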

The model specifications and the implementations sources and binaries are available online on this repository: https://github.com/ALharchi/JointMaker.

### **3.3 Joint Library**

One of the goals of this research is to provide a base that helps designers with the design of timber structures. This can be achieved by providing a catalog of timber joints that can be used within a parametric model. The joints can be adapted to the structure and can incorporate various limitations, such as maximum angles and minimum sizes.

**Fig. 3.** Different joints types implemented in the library

<sup>2</sup> https://docs.microsoft.com/en-us/dotnet/csharp/.

Users can specify nodes within the structure and then choose an appropriate joint from a given catalog. Adding more joints to the library is possible using a JDM definition.

We defined three joint types to demonstrate this feature: Half Lap, Cross Half Lap, and Corner Half Lap (Fig. 3).

Nodes in the structure can be replaced with joints that are already enriched with fabrication and assembly data. Export as BTL, NC, or DXF for fabrication, or as ADM for assembly simulation, is then possible. Figures 4 and 5 illustrate an overview of the process and the final result as viewed in the BTL Viewer software.
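As an illustration of how such a catalog might be queried when a node is replaced, the sketch below encodes the three joints as entries with simple constraints; the field names and constraint values are hypothetical and do not reproduce the actual JDM implementation.

```python
from dataclasses import dataclass

@dataclass
class JointType:
    name: str
    min_member_width: float   # mm, smallest member section the joint allows
    max_angle: float          # degrees, largest angle between members allowed
    valence: int              # number of members the joint connects

CATALOG = [
    JointType("Half Lap", 40.0, 120.0, 2),
    JointType("Cross Half Lap", 40.0, 120.0, 2),
    JointType("Corner Half Lap", 40.0, 95.0, 2),
]

def pick_joint(member_width, angle, valence):
    """Return the first catalog entry whose constraints fit the node, else None."""
    for jt in CATALOG:
        if (member_width >= jt.min_member_width
                and angle <= jt.max_angle
                and valence == jt.valence):
            return jt
    return None

print(pick_joint(member_width=60.0, angle=90.0, valence=2).name)  # "Half Lap"
```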

**Fig. 4.** Automatic Joints generation from nodes in a wireframe structure

# **4 Result and Discussion**

The outcome of this work is a novel joint representation focused on timber construction. This representation allows both assembly and fabrication considerations to be embedded in one single model. The implementation of the method within CAD software (Fig. 6) offers an effective tool for designers to iterate quickly over different design options and to form realistic expectations about fabrication and assembly when assessing design choices.

Further development of this work should focus on the expansion of the joints catalogue and the integration of structural properties.

**Fig. 5.** BTLx as viewed in BTL-Viewer<sup>3</sup>

**Fig. 6.** Integration of fabrication and assembly to inform the global design

# **5 Conclusion**

This paper has demonstrated that the combination of machine-independent fabrication representations and assembly digital models constitutes a powerful tool to move rapidly through design iterations, simulate the assembly and digital fabrication, and finally prototype and fabricate timber joints. We showed potential usages of the JDM-powered joint catalog and workflows from design to assembly, and how they can allow a seamless transfer of DfMA principles between the different stakeholders.

**Acknowledgements.** This project was undertaken as part of the Innochain Early Training Network. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 642877.

<sup>3</sup> https://design2machine.com/btl/viewer.html.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Process and Evaluation of Automated Robotic Fabrication System for In-Situ Structure Confinement**

B. Bala Murali Kumar1, Yun Chung Hsueh1, Zhuoyang Xin1, and Dan Luo2(B)

<sup>1</sup> School of Civil Engineering, University of Queensland, Brisbane, Australia <sup>2</sup> School of Architecture, University of Queensland, Brisbane, Australia d.luo@uq.edu.au

**Abstract.** The additive manufacturing process is gaining momentum in the construction industry with the rapid progression of large-scale 3D printing technologies. An established method of increasing the structural performance of concrete is wrapping it with Fibre Reinforced Polymer (FRP). This paper proposes a novel additive process to fabricate an FRP formwork by dynamic layer winding of FRP fabric with epoxy resin, paired with an industrial-scale robotic arm. A range of prototypes was fabricated to explore and study the fabrication parameters. Based on this systematic exploration, the limitations, scope, and feasibility of the proposed additive manufacturing method are studied for large-scale customisable structural formworks.

**Keywords:** Robotic fabrication · Customisable formworks · FRP layer winding

# **1 Introduction**

Recent studies on topology optimisation (Ning Gan 2021) have found that material efficiency can be significantly improved by using irregular sections to replace conventional sections in structural members. The optimized structures also tend to have changing cross-sections along the member span or height (Lloret Fritschi 2017), such as the tree-like structure used at the Qatar National Convention Centre and the Art Nouveau Apartment by Flying Concrete in San Miguel de Allende, Mexico. Conventional concrete casting and steel manufacturing processes are highly efficient and less time consuming for standardised cross-sections but have limitations with irregular shapes and sizes.

Additive Manufacturing (AM), on the other hand, is known for its flexibility and effectiveness in fabricating customised parts with complex geometries. AM techniques are used in various industries to create physical prototypes as well as to manufacture end parts. The construction industry has started to adopt the AM process and has progressed from labs to printing full-scale 3D structures (Souza 2020) in various parts of the world. AM gives architects and engineers the geometrical freedom to produce highly efficient, non-standard building components without a significant increase in cost and time.


However, as additive manufacturing differs from common construction processes, it has always been a challenge to make additive manufacturing compatible with common structural systems and standard assembly processes in construction.

**Fig. 1.** Robotic fabrication.

Combining the concept of additive manufacturing with the established application of FRP in structural reinforcement and formwork, our research explores the potential use of robotic technology for the fabrication (Fig. 1) of structural members with greater formwork flexibility, and aims to increase the material and structural efficiency of the building structure.

# **2 Background**

The various AM techniques can be broadly divided into seven categories (ISO/ASTM 2015): binder jetting, material extrusion, powder bed fusion, directed energy deposition, sheet lamination, vat photopolymerization, and material jetting. AM systems offer flexibility in the fabricated geometries according to the degrees of freedom of the system: a gantry robot has three degrees of freedom, while systems based on industrial robotic arms have six, which can be further expanded with a mobile base. Furthermore, these systems can also be enhanced with both terrestrial and aerial collaborative robots (Lloret Fritschi 2017; Keating 2017).

The most prevalent AM method in construction is based on material extrusion, where fresh cementitious material is deposited from a nozzle along a horizontal layer following a predefined path. In the concrete extrusion process, AM systems are categorized based on the layer thickness, the size of the printed object, the printing environment, the assembly strategy, the use of support structures, and the robot complexity. Several objects have been realized with these systems, namely the Curved Bench by Loughborough University (Lim et al. 2012), the Complex Wall by XtreeE, and the building components printed by WinSun for the Dubai Future Foundation (Anon 2018).

Other innovative AM systems, such as the particle-bed process FreeFAB, Smart Dynamic Casting, and Mesh Mould, increase the accuracy and ease the printing of complex shapes and geometries of architectural building components that are otherwise not possible with traditional AM construction systems. The Smart Dynamic Casting system by ETH Zurich prints complex-shaped columns with changing cross-sections: the formwork is printed while the concrete extrusion process takes place simultaneously along a trajectory followed by a robot arm, and the hydration process is monitored with sensors to control the addition of admixtures and the printing speed. In the Mesh Mould method, an in-situ fabricator, an autonomous mobile robot, bends and welds steel to create a mesh, which acts as both formwork and reinforcement for the concrete.

Current AM systems focus on traditional concrete structures, which face the challenges of scaling up from small to large components and of integrating steel reinforcement into the system. By comparison, the large-scale additive manufacturing of FRP-concrete composites may have particular advantages in resolving these issues.

# **3 Fabrication Methodology**

# **3.1 Robotic System Design - End Effector**

The end effector is a customised robotic tool developed to wind FRP fabric additively into the desired formwork. The design consists of two support rollers, a guide roller and the FRP fabric support holder, as shown in Fig. 2. The support rollers are mounted on a steel frame linked with springs, which allows them to apply adequate pressure against each other and maintain their spatial position during the fabrication process. The FRP weave is positioned between these two rollers, and because the rollers are passively driven, this pressure maintains the hollow formwork geometry. The guide roller is responsible for the smooth transition from the FRP fabric roll to a straight fabric while maintaining tension to prevent wrinkles in the additive layers. The end effector's rollers are 51.5 mm in diameter *d* and 200 mm in length *l*; the inner mould radius *Ri* is 81 mm and the outer radius *Ro* is 329 mm, as shown in Fig. 2b.

# **3.2 Material Parameters**

A commercially available FRP fabric, 150 mm in height and installed directly onto the support holder without any pre-processing, is used for this additive manufacturing process. Every FRP fabric layer must bond with the preceding and succeeding layers so that the formwork gains the strength and stiffness needed to resist the hydrostatic pressure of the concrete. This bond is achieved with an epoxy resin adhesive.

The epoxy resin was a two-part mixture of Amperg 22 and fast hardener mixed in the ratio 100:40 specified by the manufacturer. The working time of the epoxy resin is only 15 min, beyond which it begins to cure and exhibits an exothermic reaction. The curing time of the adhesive-impregnated FRP fabric was different: applied to the fibres, the epoxy resin took approximately 90 to 120 min to cure completely. However, it was found that when an external heating apparatus was used, such as a heat gun at 200 °C, the curing time was reduced to approximately 20 min for every 2.5 layers.

**Fig. 2.** (a) End effector design: A – the supporting rollers, B – the guiding roller, C – the FRP fabric holder, D – the steel frame system of the rollers. (b) Fundamental dimension of end effector design.

#### **3.3 Additive Fabrication Process**

The design stage of the additive process, including the complex geometries, is carried out in the Rhino/Grasshopper software environment. The surface of the geometry is extracted to generate the path planning algorithm that moves the robot's end effector along the designated path. To detect any potential collisions and deviations in the path logic, the algorithm is tested and visually inspected in the Rhino environment. Next, the algorithm is given as input to KUKA's own robot plugin for Grasshopper. The script generated by the plugin is then copied into the manual controller of the KUKA robot to start the fabrication process. A flowchart of the fabrication process is shown in Fig. 3.

The additive manufacturing of FRP formwork for different cross sections, shapes and non-linear geometries depends on the stiffness, strength and torsional resistance of the preceding layer. The first layer of the FRP formwork relies on an external support, termed the "base mould", which serves as the foundation for succeeding layers. The base mould is 3D printed in polylactic acid (PLA) and bolted to the fabrication table to prevent any lateral or slip movement during the fabrication process. The dimensions of the base mould used in this paper are shown in Fig. 4b.

The end effector is then mounted on the robot's sixth axis and tested. Because the fabrication process follows a spiral path, the configuration data of the robot's sixth axis were set to allow rotation beyond ±360° in an endless-spindle manner. The end effector is calibrated to the spatial location of the mould with respect to the KUKA robot's position in 3D space.
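The spiral winding path itself can be prototyped outside the Grasshopper definition. The following Python sketch is a hypothetical illustration, not the authors' script: it assumes a circular hollow cross section of diameter *d* and a constant rise per revolution, and it accumulates the rotation angle beyond 360° in the endless-spindle manner described above; the function and parameter names are ours.

```python
import math

def spiral_winding_targets(d, rise_per_rev, n_layers, steps_per_rev=36):
    """Generate targets along a continuous spiral (helical) winding path.

    Assumes a circular hollow section of diameter d (mm) wound bottom-up
    with a constant rise_per_rev (mm) for every revolution. Each target is
    (x, y, z, a), where a is the accumulated rotation in degrees; values
    beyond 360 are intentional and rely on the endless-spindle setting.
    """
    r = d / 2.0
    targets = []
    for i in range(int(n_layers * steps_per_rev) + 1):
        angle = 2.0 * math.pi * i / steps_per_rev     # accumulated angle (rad)
        z = rise_per_rev * angle / (2.0 * math.pi)    # constant climb per turn
        targets.append((r * math.cos(angle), r * math.sin(angle),
                        z, math.degrees(angle)))
    return targets

# Illustrative call (values are assumptions, not the experiment's settings):
path = spiral_winding_targets(d=162.0, rise_per_rev=60.0, n_layers=10)
print(len(path), path[0], path[-1])
```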

From experience, it is recommended to verify the generated path planning script by letting the robot run a dry pass without fabric or epoxy resin. After successful completion of the dry run, the experimental preparation and the precautions for handling the epoxy resin follow. The end effector is lowered to the bottom of the mould and the fabric is attached to the mould. The epoxy resin is prepared

**Fig. 3.** Fabrication process

in a mix ratio of 100:40 as per the manufacturer's specification and applied to the fabric as described in the previous section. An external heat source is applied after a set number of revolutions for the calibrated duration; in this experiment, a heat gun was used for 30 min after every 2.5 layer revolutions. The layer winding, the preparation and application of epoxy resin on the fabric, and the heat application are repeated until the desired fabrication height is achieved, as shown in Fig. 4a. The fabricated FRP formwork is then demoulded by disconnecting the initial connections between the base mould and the fabric. A time lapse of the formwork fabrication process is shown in Fig. 5.

# **3.4 Parameters Influencing the Additive Process**

The parameters that influence the additive manufacturing process and determine the stiffness, strength, flexibility and feasibility of scaling up the process are summarised as follows.

### **3.4.1 Epoxy Curing Parameters**

The adhesive property of the epoxy resin is achieved after curing. The curing time is influenced by two major factors: temperature and the number of layers. Temperatures above 200 °C can damage the FRP fabric; therefore, care must be taken not to apply

**Fig. 4.** (a) The robotic fabrication system. (b) The base mould.

heat to one area for a prolonged time. In addition to the temperature magnitude, the curing time is influenced by the number of layers exposed to the heat source at a given time. A number of trials were conducted to find the optimum temperature that does not damage the layers and the number of layers that determines the curing time. In this paper, all the prototypes were subjected to a temperature of 200 °C for 30 min after every 2.5 revolutions. The external heat source used was a heat gun; other possible curing methods, such as UV-enabled resin curing, will be explored in the future. The waiting time for manually applying the epoxy resin was set to 20 s for every 1/8 of the circumference of the cross section.

### **3.4.2 Fabrication Time Estimation**

The theoretical time required to complete the prototype fabrication can be estimated with Eqs. (1), (2) and (3) below. The total theoretical time depends on the height of the prototype geometry, the height of the FRP layer, the number of overlapping layers, the time required to apply the epoxy resin and the FRP winding time. The total time is an estimate; the experimental time may differ owing to environmental factors influencing the fabrication process.

$$t\_{\text{total}} = \frac{H n\_{OL}}{h\_{frp}} (t\_1 + t\_2) \tag{1}$$

$$t\_1 = \frac{\pi d}{V\_{\rm winding}}\tag{2}$$

$$t\_2 = n\_{resin} t\_{resin} \tag{3}$$

**Fig. 5.** Time lapse of FRP formwork fabrication.

*t*total – the estimated total time for prototype fabrication; *H* – the total height of the designed prototype; *n*OL – the maximum layer overlap number; *h*frp – the height of one layer of FRP fabric; *t*1 – the robotic winding time for each circumferential motion; *t*2 – the epoxy resin application time for each circumferential motion; *d* – the diameter of the hollow cylinder section; *V*winding – the constant winding velocity; *n*resin – the number of resin applications for each circumferential motion; *t*resin – the time of one epoxy resin application.
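Read directly, Eqs. (1)–(3) amount to a short calculation. The Python sketch below is a minimal illustration of that reading: the winding speed and resin-application timing echo Sects. 3.4.1 and 3.4.3, while the height, overlap number and diameter are assumed example values, not reported measurements.

```python
import math

def estimate_fabrication_time(H, n_OL, h_frp, d, v_winding, n_resin, t_resin):
    """Estimate the total fabrication time from Eqs. (1)-(3).

    H          total prototype height (mm)
    n_OL       maximum layer overlap number
    h_frp      height of one FRP fabric layer (mm)
    d          diameter of the hollow cylinder section (mm)
    v_winding  constant winding velocity (mm/s)
    n_resin    resin applications per circumferential motion
    t_resin    duration of one resin application (s)
    """
    t1 = math.pi * d / v_winding           # Eq. (2): winding time per revolution
    t2 = n_resin * t_resin                 # Eq. (3): resin time per revolution
    return (H * n_OL / h_frp) * (t1 + t2)  # Eq. (1): total time in seconds

# Illustrative call; note that, like Eqs. (1)-(3), this estimate excludes the
# intermittent heat-curing time discussed in Sect. 3.4.1.
t = estimate_fabrication_time(H=450.0, n_OL=2.5, h_frp=150.0, d=162.0,
                              v_winding=10.0, n_resin=8, t_resin=20.0)
print("estimated winding + resin time: %.1f min" % (t / 60.0))
```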

### **3.4.3 Fabrication Speed**

The fabrication speed of the additive process depends on factors such as the curing time, the layer winding speed and the area of the column covered by the end effector in a single revolution. The fabrication speed is a trade-off between the minimum time required to complete the additive process and the quality of the structural component, where quality is measured as the deviation between the designed cross section and the fabricated component. In this paper, the layer winding speed was set to 1 cm/s and the curing time was 30 min at 200 °C for every 2.5 layers.

# **4 Results and Discussion**

A preliminary study was conducted to assess the feasibility and flexibility of the proposed additive process for the large-scale production of architectural elements. Initially, two trial experiments were conducted, as shown in Fig. 6 (A)–(B), to calibrate the end effector, the epoxy resin mixture, the optimum number of layer revolutions per application, the estimated time for each operation and the Rhino/Grasshopper script, as summarised in Table 1. The experience gathered from these two trials established the procedure for the subsequent additive fabrication of the prototypes, shown in Table 2. After the initial trials, three preliminary prototypes were fabricated, as shown in Fig. 6 (C)–(E), to study the challenges and issues associated with the proposed additive process. The objective of this preliminary study is to identify the parameters and challenges that must be controlled or solved to make the proposed technique feasible for the large-scale additive production of architectural elements.

**Fig. 6.** The FRP formworks - (A) trial 1 (B) trial 2 (C) 10 layers (D) 15 layers (E) 20 layers

### **4.1 Accuracy of the Proposed Additive Process**

Physical environmental factors play a key role in determining the accuracy of the fabricated component. To compare the prototype fairly with the digital design, and to exclude the portion wrapped on the base mould, the prototype was subdivided into three segments, of which the middle segment is 150 mm tall.

The accuracy of the additive process relative to the digital design is assessed by 3D scanning the fabricated prototype. The 3D scanning process extracts the surface details as three-dimensional spatial coordinate points, which are meshed together to reconstruct the object geometry in the digital design space. During scanning, data points that do not belong to the prototype may be collected because of environmental factors and can affect the accuracy estimation. Therefore, the data are pre-processed by setting a threshold on the data points of a maximum radius of 10


**Table 1.** Estimated time analysis for the prototypes for 2.5-layer additive process

**Table 2.** Trials and prototypes performed for the experimental study


mm, ensuring that the collected data points represent only the prototype. The design geometry and the meshed geometry are then overlaid for comparison. The coordinates of the data points are averaged over the thickness axis to obtain the average thickness of the prototypes.

With the average thickness estimated in the spatial coordinate space, the deviation of the fabricated prototype from the software design can be quantified. The three prototypes, with 10, 15 and 20 layers, were compared in this way. As shown in Fig. 7, the accuracy is approximately 97%, 98% and 98% for 10, 15 and 20 layers respectively. The comparison between the 3D scan and the digital design is shown in Fig. 7(b)–(d), and Fig. 7(e) indicates the measurements of the prototype section used to calculate the average deviation.
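The thresholding, averaging and deviation steps described above can be expressed compactly. The following numpy sketch is a hypothetical illustration, not the scanning software used in the study: it assumes the scan has already been registered so that the prototype axis coincides with the z axis, and the function name and synthetic test data are ours.

```python
import numpy as np

def scan_accuracy(points, design_radius, threshold=10.0):
    """Threshold a scanned point cloud and compare it with the design radius.

    points         (N, 3) array of scanned x, y, z coordinates, with the
                   prototype axis assumed to lie on the z axis
    design_radius  nominal radius of the designed cross section (mm)
    threshold      points whose radial distance deviates from the design
                   radius by more than this value (mm) are treated as
                   environmental noise and discarded
    """
    radii = np.hypot(points[:, 0], points[:, 1])            # radial distance of each point
    kept = radii[np.abs(radii - design_radius) <= threshold]
    mean_radius = kept.mean()                               # averaged over the thickness axis
    accuracy = 1.0 - abs(mean_radius - design_radius) / design_radius
    return mean_radius, accuracy

# Synthetic check: a noisy cylinder of 81 mm radius and 150 mm height.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 5000)
r = 81.0 + rng.normal(0.0, 1.5, 5000)
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta),
                       rng.uniform(0.0, 150.0, 5000)])
print(scan_accuracy(pts, design_radius=81.0))
```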

**Fig. 7.** Accuracy comparison between the physical prototype and rhino design

### **4.2 Discussion**

The fabrication system proposed in this research integrates emerging advanced manufacturing technologies with architectural design and structural engineering. The system is designed for the rapid fabrication of topologically optimised generative forms with customisable FRP formwork. The major parameters influencing the proposed additive process were identified and investigated: the selection and design of the epoxy resin, the end effector, the base mould and the FRP fabric were all examined during the additive fabrication of the prototypes. The layer winding speed and the number of layers greatly influence the total fabrication time. From these observations, it is worth seeking alternative means of FRP fabric adhesion and alternative ways of heating the epoxy resin to lower the fabrication time.

From the current study, it is evident that the proposed additive process has the potential to change how structural components are fabricated, owing to its strength, design and geometric flexibility and cost effectiveness, although it still has disadvantages such as the long fabrication time and the manual application of epoxy resin. Further investigation of the methodology and of techniques for implementing large-scale, non-uniform and complex geometries that integrate FRP and concrete is therefore required.

# **5 Conclusion**

Additive manufacturing is gaining momentum in the construction industry. Despite rapid developments in 3D concrete printing, from the perspective of structural engineering we require material- and cost-effective systems that are compatible with current construction processes. This paper explores the feasibility of an additive process that dynamically winds FRP formwork for the concrete casting of customisable structural components in large-scale construction. The proposed process winds FRP with a KUKA robot equipped with a customised end effector, using epoxy resin as the laminating adhesive. Three prototypes and two trial experiments were conducted to study the influencing parameters and to record the deviation of the fabricated components from the original design. From these preliminary studies, it can be concluded that the deviation between the fabricated components and the digital model is less than 4%, and further improvement in subsequent experiments is highly likely. The proposed additive process therefore has the potential to manufacture large-scale structural components of topologically optimised shape with irregular cross sections and non-uniform geometry.

# **References**

Anon: 3D printing with concrete: state of the art. 2018(4), 275–287 (2018)


XtreeE: XtreeE. http://www.xtreee.eu/. Accessed 24 June 2019

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Mass Customization: The Implication on Development of Aluminum Joint**

Jiabei Ye1(B) and Xiaoxi Guo2

<sup>1</sup> The Bartlett School of Architecture, 22 Gordon Street, Bloomsbury, UCL, London, UK

<sup>2</sup> College of Architecture and Urban Planning, Tongji University, Shanghai, China

**Abstract.** The production of standardized prefabricated components is highly efficient and met the demand for the mass production of standardized architecture after World War II. However, over-standardized architecture cannot always satisfy the demand for uniqueness in an architectural project, and bespoke components began to be used to counter the aesthetic over-simplification of architecture. With the help of digital fabrication, bespoke components can also achieve mass customization in architecture. This research designs two types of joint, prefabricated aluminum joints and bespoke aluminum joints, with the aim of developing bespoke joints into aluminum components with ornamental character that become part of the architecture with both practical and ornamental functions. Furthermore, in the process of generating the bespoke joints, the deficiencies encountered in lost-foam casting are addressed.

**Keywords:** Prefabricated aluminium · Lost foam casting · Mass customization · Architecture joint · Bespoke component

# **1 Mass Production and Prefabrication**

The mass production of buildings through prefabrication responded to the massive demand for housing after the war. Designing and constructing houses based on prefabrication was a short-term solution to the housing shortage and promoted the development of modern architecture. However, the standardized production of buildings is now controversial: standardized production methods make buildings resemble generic products rather than unique works.

Under these circumstances, prefabrication and the aluminum joint are combined to design a building structure with the efficiency of prefabrication and custom aesthetics (Fig. 1).

### **1.1 The Performance of Prefabricated Aluminum**

Prefabricated production is economical and has significant advantages in speed, quality and flexibility. The project uses one type of prefabricated component, extruded aluminum, to create standard structural joints, and chooses aluminum as the primary material because of its low melting point, durability, corrosion resistance and lightness.

**Fig. 1.** Aluminum extrusion joints and components

Aluminum extrusion is the technique used to transform aluminum alloy into defined cross-sectional profiles. The extrusion process makes the most of aluminum's unique combination of physical characteristics, especially its ductility, which allows it to be pressed and formed into complex shapes even after extrusion. With roughly one third of the density and stiffness of steel, the resulting products still offer exceptional strength and stability. In this research we selected T, L and U sections of extruded aluminum, which are the most widely available on the market (Fig. 2), and these extrusions can be combined to create structural joints.

**Fig. 2.** Extruded aluminum profiles with different section and cut results

After deformation and connection tests of the extruded aluminum, we chose to use L-section aluminum. For the connections, a combination of bolts and rivets is used. Bolted connections are suitable at the junction of the cast joint and the extruded aluminum because they allow the two parts to be assembled and disassembled, whereas rivets are more suitable for attaching extruded aluminum to extruded aluminum because they leave a more elegant trace on the smooth profiled surface. We assembled the cast joint and the hybrid physical model with the L-section extruded aluminum.

### **1.2 Standard Joints Design by Extruded Aluminum**

Joints can be flexibly and conveniently assembled from extrusions. This approach fully embodies the advantages of prefabricated components and can be mass-produced. The research creates these joints simply by cutting corners of the extrusions and riveting them together to produce a series of fixed-angle joints.

In this way, the research designs a series of extruded joints. These standard joints are limited to commonly used fixed angles such as 45, 60, 90 and 120 degrees (Fig. 3). Because only these fixed angles can be produced by extrusion, there are significant limitations in designing the extruded column; we therefore designed columns with simple shapes by connecting regular geometric grids (Fig. 4).

**Fig. 3.** Extrusion joints **Fig. 4.** Extrusion columns

The research chose to use prefabricated aluminum to design standard extrusion joints. For modelling, 80 extruded joints and 450 extruded aluminum panels were used to construct the building structure to reflect the structural flexibility of the prefabricated parts (Fig. 5).

However, the extrusion joints we designed still have many limitations. The more angles a joint must accommodate, the more difficult it is to produce. In addition, the building structure simulated in the software uses only prefabricated joints made of extruded aluminum; such joints look like standardized building products and lack the uniqueness and beauty sought in the design.

# **2 Bespoke and Handcraft Manufacture**

### **2.1 Lost-Foam Handcraft Casting**

Lost-foam casting technology is currently used to manufacture a wide variety of ferrous and non-ferrous metal components in the catering and automotive industries (Shivkumar et al. 1990)1. The lost-foam casting process involves pouring

<sup>1</sup> Shivkumar, S.,Wang, L., & Apelian, D. (1990). The lost-foam casting of aluminum alloy components. JOM, 42(11), 38–44.

**Fig. 5.** Architecture structure of extrusion

liquid metal directly into a foam block buried in loose sand. The foam block undergoes thermal degradation and is gradually replaced by the molten metal, which solidifies and produces the casting (Fig. 6). This method is applied in the research to produce bespoke aluminum joints (Fig. 7).

**Fig. 7.** Process of lost-foam casting

### **2.2 Design Bespoke Joint by Foam Prototype**

The Learning Hub at Nanyang Technological University, designed by Heatherwick Studio (Fig. 8), in which rubber moulds were used to customize the concrete pillars, inspired this research. It also motivated the search for a more suitable foam material to serve as the prototype for customized aluminum joints in lost-foam casting.

**Fig. 8.** Adjustable silicone moulds in NTU ([Adjustable silicone moulds] n.d. [image online] Available at: *<*http://www.designcurial.com/news/dim-sum-towers-heatherwick-studioslearning-hub-in-singapore-4593669/4*>* [Accessed 16 June 2018].)

Since most foams deform easily at high temperatures, the choice of foam is key to lost-foam casting. Sheets of foam and rubber of various thicknesses were tested for burning and deformation behaviour; both are easy to fold, but because of the high toughness of rubber its deformation under the same tensile force is large and it is not easy to cut. When burnt by hot liquid aluminum, the plastic foam shows better combustion performance (Fig. 9).

**Fig. 9.** Foam sheet test and foam joints prototype

After a series of tests, we decided to use Plastazote, a plastic foam that is flexible, stable and easy to melt. More importantly, it can be made thin enough for us to form various shapes (Fig. 9). Because the model is folded from foam, it can be bent in any direction during construction (Fig. 10).

**Fig. 10.** Angle morphology

### **2.3 Improve Lost-Foam Casting Technology**

To control the accuracy of the angles of the bespoke joints, a wooden frame was made by laser cutting. The section and direction of the model are determined by wood slices with specific incisions and frames that form a specific angle. The model is fixed into the frame before casting and then placed in the sand (Fig. 11). Unique bespoke joints can be built in this way with the lost-foam casting method. However, it is difficult to mass-customize joints with this method, and they can only be partially customized, because every wooden frame burns out during casting.

**Fig. 11.** Wood frame and the joint after casting

In addition, we changed the number and locations of the guiding foam sprues to direct the flow of the liquid aluminum during casting. To allow more liquid aluminum to flow down, we increased the guiding foams at the top of the model from the original two to three. We also tried horizontal and vertical orientations when burying the foam nodes in the sand (Fig. 12); the horizontal casting method has a higher success rate. We then built a basic physical structural model by assembling prefabricated rods and custom nodes (Fig. 13).

By comparison, we conclude that the integrity of the casting result depends on the quantity and location of the guiding foam. With more guiding foams, the flow of the liquid aluminum is more even, and the completeness of the aluminum joints

**Fig. 12.** Change casting direction

**Fig. 13.** Connection of extrusion and casting joint

is higher. Using lost-foam casting, we customized many joints that can turn in different directions and have varied aesthetics (Fig. 14). Based on this logic, we generated artistic pillars composed of custom nodes (Fig. 15).

**Fig. 14.** Custom joints **Fig. 15.** Custom columns

# **3 Ornamental Component and Mass Customization**

### **3.1 The Construction of Aluminum Component**

With the introduction of contemporary design and manufacturing techniques, designers have unprecedented opportunities to combine the structural logic of a building with expressive detailing, and decoration is reinvented to explore the interplay between function and decoration, volume and surface, structure and envelope. A positive example is the sculptural gate of the apartment building at 40 Bond Street in New York, designed by Herzog & de Meuron with computer-aided technology (Fig. 16).

**Fig. 16.** Sculptural Gate at 40 Bond Street in New York ([Aluminum components] n.d. [image online] Available at: *<* https://www.exyd.com/40-bond.html*>* [Accessed 06 July 2018])

In the gate manufacturing process, every component of the gate mould is made of expanded polystyrene. The shape of the moulds is designed in computer software, and the foam moulds are then cast into prefabricated aluminum components in the factory. After the aluminum components are transported to the site, they are quickly assembled by the workers.

### **3.2 Design Facade with Ornamental Component**

The sculptural gate inspires us to customize prefabricated components with ornamental features, combining the aesthetics of customization with the rapid assembly of prefabrication. Using this language, we vary the component geometry to obtain triangular, quadrilateral, pentagonal and hexagonal components, Type A. This set of components has many circular holes, left by the guiding foam, and is linear, axial and bifurcated; the arcs and composite curves form a mixed, flower-like path within the rigid linear geometry. By changing the folding method, we obtain another series of components, Type B (Fig. 17).

Using components from the two systems, A and B, we connect them point to point, combine them with each other (Figs. 18 and 19), and fill the gaps between the geometries with strips. In this way we design a rectangular facade with decorative features. Such an aluminum facade has a strong aesthetic presence and can function as an exterior wall of the building (Fig. 20).

**Fig. 17.** Prototype of ornamental component A&B

**Fig. 18.** Component A&B with different geometries

### **3.3 Mass Customization with 3D Print Metal Joints**

To achieve mass customization, after a series of experiments and case studies of 3D printing, computational design and digital manufacturing must be applied to the project.

**Fig. 19.** Component overview

**Fig. 20.** Facade design of symmetry

With the 3D-printed component prototype method, designing 100 different component models is as fast as designing 100 identical ones, which increases the efficiency of making components. The cast aluminum components are then used to design the building facade.

ETH Zurich researchers designed the metal Deep Facade, which consists of 26 separate components and stands 3.5 m high.2 It combines "3D printed geometrical freedom and structural properties of cast metal" to achieve new building possibilities (Fig. 21). Each significant metal component of this facade was designed with computational techniques.

<sup>2</sup> Rima, S. (2018). ETH Zurich casts intricate metal facade in a 3D-printed mould. Available at: *<*https://www.dezeen.com/2018/06/22/eth-zurich-metal-facade-3d-printing-mould-technology/*>* [Accessed 16 June 2018].

The components are designed to be flexible and are as time-efficient as prefabricated aluminum; because they are cast in a customized way, they have custom aesthetics. The goal of our project is therefore to realize custom aluminum joints and components with the help of computer technology (Fig. 22) and to build an ornamental and functional facade.

**Fig. 21.** 3D printing sand mold casting **Fig. 22.** 3D print joints

# **4 Conclusion**

The design goes through three steps, from standardized extrusion joints to custom cast aluminum joints that eventually develop into customized building components. Pressed aluminum extrusion joints can be mass-produced for the flexible construction of building structures, but such a design loses the uniqueness of the architectural project. We therefore designed other, customized joints, cast with the lost-foam method; the joints cast in this way have an aesthetic appearance and can be connected with the extrusion bars to become hybrid furniture (Fig. 23).

**Fig. 23.** Furniture Joints

**Fig. 24.** Hybrid building structure generation

Recognizing the importance of ornament in architecture, we then developed the building joint into an ornamental architectural component. The cast aluminum components are used to design the building facade. The components are designed to be flexible and time-efficient (Fig. 24), like prefabricated aluminum, and because they are cast in a customized way they have custom aesthetics. Our project therefore aims to realize custom aluminum joints and components with the help of computer technology, to build an ornamental and functional facade, and to construct buildings aesthetically.



**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# Author Index

#### A

Alima, N., 316 Amtsberg, Felix, 330 Antorveza, K., 275

#### B

Bao, D. W., 117 Bian, Zhirui, 189 Bradshaw, Hanae, 305 Buchwald, Emil Fabritius, 305 Bueno, Ernesto, 231 Bugni, Shane, 3

#### C

Cabrera Jauregui, Pablo, 158 Cao, Shicong, 139 Chen, Min, 129 Chen, Mingxi, 148 Chen, Yuhan, 189 Cheng, T., 275 Cheung, Ka Ming, 168

#### D

Dai, Haocheng, 263 Daneluzzo, Michele, 349 Daneluzzo, Mirko, 349 Deng, Qiaoming, 35

#### E

Esquivel, Gabriel, 3

#### F

Fang, Chenrong, 35 Farr, Marcus, 211 Feng, Z., 117

#### G

Gao, Wen, 45 Grassi, Giulia, 201 Gu, Pengcheng, 13, 117 Gu, Sijia, 221 Gu, Tianyi, 189 Guo, Xiaoxi, 380

#### H

Hsueh, Yun Chung, 368 Hu, Weilin, 252 Huang, Jiale, 221 Huang, Weixin, 45 Huang, Xiaokai, 242

#### J

Jaminet, Jean, 3 Ji, Guohua, 168

#### K

Kiesewetter, L., 275 Kirova, Nikol, 286 Kong, Yuwei, 221 Konrad, Mirjam, 179 Kumar, B. Bala Murali, 368

#### L

Leder, S., 275 Lharchi, Ayoub, 359 Li, Shuyang, 252 Li, Wenjing, 242 Li, Yuqian, 26 Liang, Lingyu, 35 Lin, Yinshan, 252 Liu, Fuyuan, 129


Liu, Sheng, 168 Liu, Xun, 242 Liu, Yubo, 35 Lo, Cheng-Hung, 129 Lu, Ming, 340 Lu, Youyu, 189 Lu, Yue, 221 Luo, Dan, 368

#### M

Macruz, Andrea, 211, 231 Mao, Gang, 102 Markopoulou, Areti, 286 Martin, Eduardo Chamorro, 286 McCormack, J., 316 Melnyk, Virginia Ellyn, 69 Meng, Shengyu, 55 Menges, A., 275 Mosse, Aurelie, 305 Mosshammer, Maria, 305 Mueller, Caitlin, 330

#### N

Na, Risu, 263

#### O

Özdemir, E., 275

#### P

Palma, Gustavo G., 231 Palmieri, Ricardo A., 231 Paoletti, Ingrid, 201 Pezeshk, Sara, 80

#### Q

Qiu, Waishan, 242

#### R

Raspall, Felix, 330

#### S

Saez, Dana, 179 Schumacher, Patrik, 13 Shi, Shaohang, 45 Sieder-Semlitsch, Jakob, 305 Snooks, R., 316 Sparrman, Bjorn, 201 Sun, Chengyu, 252

#### T

Tamke, Martin, 305, 359 Thomsen, Mette Ramsgaard, 305, 359 Tibbits, Skylar, 201 Tong, Ziyu, 189 Trautz, Martin, 179

#### U

Ulson, Alexandre, 211

### V

Vega, Jaime, 231

### W

Wang, Likai, 189 Wang, Lizhe, 129 Wang, Xiang, 129 Wang, Xuexin, 35 Wei, Yimeng, 286 Wen, Hao, 13 Wood, D., 275 Wu, Hao, 340 Wu, Tan Chen, 231

### X

Xin, Zhuoyang, 368 Xu, Weiguo, 26 Xu, Weishun, 221

### Y

Yan, Hainan, 168 Yan, X, 117 Yang, Zhe, 35 Ye, Jiabei, 380 Yuan, Philip F., 340

### Z

Zhang, Xuanming, 45 Zhang, Yiting, 168 Zhang, Yuchao, 13 Zheng, Hao, 139 Zheng, M., 117 Zhou, XinJie, 340 Zhou, Zhuohong, 35 Zhu, Yuanshuang, 286 Zimbarg, Ana, 92 Zou, Shuai, 13