Benedikt Fengler

**Band 81**

**Manufacturing-constrained multi-objective optimization of local patch reinforcements for discontinuous fiber reinforced composite parts**

### **Karlsruher Schriftenreihe Fahrzeugsystemtechnik Band 81**

Edited by

### **FAST Institut für Fahrzeugsystemtechnik**

Prof. Dr. rer. nat. Frank Gauterin
Prof. Dr.-Ing. Marcus Geimer
Prof. Dr.-Ing. Peter Gratzfeld
Prof. Dr.-Ing. Frank Henning

The Institut für Fahrzeugsystemtechnik comprises the sub-institutes for rail system technology, vehicle technology, lightweight design technology, and mobile machines.

An overview of all volumes published in this series so far can be found at the end of the book.

# **Manufacturing-constrained multi-objective optimization of local patch reinforcements for discontinuous fiber reinforced composite parts**

by Benedikt Fengler

Karlsruher Institut für Technologie
Institut für Fahrzeugsystemtechnik


Dissertation, accepted by the KIT Department of Mechanical Engineering of the Karlsruhe Institute of Technology (KIT) in fulfillment of the requirements for the academic degree of Doktor der Ingenieurwissenschaften (Dr.-Ing.)

by Benedikt Fengler

Date of the oral examination: 04 December 2018
First reviewer: Dr.-Ing. Luise Kärger
Second reviewer: Prof. Dr. Andrew N. Hrymak
Third reviewer: Prof. Dr. Frank Henning

**Impressum**

Karlsruher Institut für Technologie (KIT)
KIT Scientific Publishing
Straße am Forum 2
D-76131 Karlsruhe

KIT Scientific Publishing is a registered trademark of Karlsruhe Institute of Technology. Reprint using the book cover is not allowed.

www.ksp.kit.edu

*This document – excluding the cover, pictures and graphs – is licensed under a Creative Commons Attribution-Share Alike 4.0 International License (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/deed.en*

*The cover page is licensed under a Creative Commons Attribution-No Derivatives 4.0 International License (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/deed.en*

Print on Demand 2020 – Printed on FSC-certified paper

ISSN 1869-6058
ISBN 978-3-7315-1006-2
DOI 10.5445/KSP/1000104717

# **Kurzfassung**

This work contributes to the optimization of local continuous fiber reinforced patches under consideration of manufacturing constraints.

The combination of discontinuously and continuously fiber reinforced polymers offers a wide range of applications: the high design freedom of discontinuous fiber reinforced polymers (DiCoFRP) can be combined with the high specific material properties of continuous fiber reinforced polymers (CoFRP). This approach, however, requires specific optimization strategies. Therefore, a multi-objective optimization strategy that accounts for manufacturing constraints has been developed for the positioning and sizing of local continuous fiber reinforcements.

The multi-objective optimization approach developed in this work uses an evolutionary algorithm. Since this class of algorithms requires a large number of function evaluations, efficient methods for evaluating the fitness values are necessary. For this reason, a kinematic draping simulation is used to predict the patch geometry. To demonstrate the capability of the kinematic draping approach, a comparison with a mechanical approach is performed. This comparison shows the advantage of the kinematic draping simulation, its low computation time, as well as its disadvantage, a less accurate prediction of the forming behavior.

Both structural and process-related objectives are used as objective functions for the multi-objective optimization. The component stiffness, determined by a linear-elastic structural simulation, serves as the structural objective, while a warpage simulation is performed for the process-related objective. To determine the process-related objectives, part warpage and residual stresses, the curing behavior has to be modeled. For this purpose, an Abaqus subroutine of type UEXPAN is used, with which the curing behavior and the resulting strains can be determined efficiently. Besides the warpage objective, further relevant manufacturing influences and constraints from the semi-finished product, draping, and co-molding processes are discussed and considered during the optimization. The optimization results are visualized by means of heat maps, which highlight the areas with the largest influence on the optimization objectives. Furthermore, inaccuracies of the manufacturing process are taken into account by a robustness evaluation. For this purpose, a workflow has been developed to compute the two robustness measures, degree of robustness and robustness index, subsequently to the optimization. The calculation uses a computationally efficient approach based on a Kriging metamodel built from the data gained during the optimization.

The developed optimization strategy is demonstrated on a series of application examples, ranging from simple 2D geometries to a complex 3D example. The influence of different settings of the optimization algorithm is discussed. The influence of the number of objective functions on the performance of the optimization approach is also demonstrated with the 3D example structure.

# **Abstract**

This work contributes to the optimization of local continuous fiber reinforcement patches under consideration of manufacturing constraints.

The combination of discontinuous and continuous fiber reinforced plastics offers a wide range of applications. Thereby, the high degree of design freedom of discontinuous fiber reinforced plastics (DiCoFRP) can be combined with the high specific material properties of continuous fiber reinforced plastics (CoFRP). This approach, however, requires specific optimization strategies. Therefore, a multi-objective optimization strategy for the placement of local reinforcement patches, under consideration of manufacturing constraints, has been developed.

The multi-objective optimization approach developed in this thesis uses an evolutionary algorithm as the optimization algorithm. Since this class of algorithms requires a large number of function evaluations, efficient methods for the evaluation of the fitness values are necessary. Therefore, a kinematic draping simulation is used for the prediction of the final patch geometry. To demonstrate the capability of the kinematic draping approach, a comparison with a mechanical approach is performed. This comparison shows the advantage of the kinematic draping simulation, its low computational time, as well as its disadvantage, a less precise prediction of the forming behavior.

During the multi-objective optimization, structural and process-related objectives are considered. As the structural objective, the component's stiffness is taken from a linear-elastic simulation, while for the process-related objective a warpage simulation has to be performed. To predict the warpage-related objectives, distortion and residual stress, the curing behavior has to be modeled. An Abaqus subroutine of type UEXPAN is used for the efficient modeling of the curing behavior and the resulting strains. Besides the warpage objective, further relevant manufacturing constraints, arising from the semi-finished product, the draping process, and the co-molding process, are discussed and considered as constraints during the optimization. The optimization results are visualized with heat maps, which highlight the areas in which a reinforcement has the largest impact on the objectives. Furthermore, inaccuracies from the manufacturing process are taken into account in a robustness evaluation. For this purpose, a workflow has been developed to calculate the two robustness measures, degree of robustness and robustness index. These measures are computed with a computationally efficient approach, using a Kriging meta-model built on the data gained during the optimization.

The developed optimization strategy is demonstrated with a number of application examples, starting with simple 2D geometries up to a complex 3D example. Thereby, the influence of different settings of the optimization algorithm is discussed. The effect of increasing the number of objective functions on the optimization performance is demonstrated with the 3D example structure.




# **1 Introduction**

### **1.1. Motivation**

Optimization tools are often part of the product development process, where problem-specific strategies are necessary to find the best solution. The range of optimization strategies varies from tools used for the initial geometry concept creation, such as topology optimization, to methods used in later steps of the design process, such as shape optimization. When using composite materials, the number of design variables is significantly increased, since the material composition itself can serve as an optimization variable. In addition, the high anisotropy of this material class has to be considered during the optimization process to fully exploit the excellent weight-specific stiffness of composite materials. For this purpose, optimization strategies specifically developed for composite materials are necessary.

Furthermore, the part's material cost can be reduced by combining discontinuous fiber reinforced polymers (DiCoFRP) with local reinforcements, such as continuous fiber reinforced patches (CoFRP), to create load-carrying paths. A commonly used optimization objective is therefore to achieve the highest performance in terms of maximum stiffness while using a minimum amount of reinforcement material. In addition, the concurrent use of continuous and discontinuous fiber reinforced composites allows the advantages of both material classes, i.e. DiCoFRP and CoFRP, to be combined. While the excellent weight-specific properties of the continuously reinforced material are utilized in highly loaded areas, the discontinuously reinforced composite material offers a wide range of design freedom. To achieve an advantageous combination of both material classes, an optimization strategy dealing with the positioning of the continuous fiber reinforcements (patches) is necessary.

In general, the use of virtual optimization tools reduces the number of expensive prototypes, since the product quality can be increased in an early phase of the design process. Therefore, a number of different optimization approaches have been developed in the past, mostly focusing on isotropic materials. The methods presented for composite optimization usually focus on either continuous or discontinuous reinforcements, while the combination of both material classes has not been a research focus. Hence, a method should be proposed to combine both material classes, utilizing the advantages of each. In addition, the quality and feasibility of optimization results should be increased by incorporating manufacturing constraints during the optimization. An additional improvement of the optimization result can be achieved by performing a robustness evaluation. Thereby, further influences of the manufacturing process can be incorporated to ensure that the predicted optimum remains an optimal solution in the real process.

### **1.2. Objectives of the thesis**

The global objective of this thesis is the development of a strategy for the optimization of position and size of continuous fiber reinforced patches, under consideration of manufacturing constraints and warpage.

Therefore, a multi-objective optimization method has to be implemented, which allows a parametric optimization of the patches and the determination of the final Pareto front, where a Pareto front defines a multi-objective trade-off curve. In addition to the Pareto front, a second tool should be provided to support the designer in positioning the patch in the final part. Manufacturing constraints, such as the maximum patch size or feasible patch geometries, shall be considered during the optimization. For this purpose, an efficient draping simulation has to be implemented in the optimization workflow. Furthermore, a warpage objective should be integrated in the optimization workflow to limit arising distortions and internal stresses. Therefore, curing and warpage simulation models have to be implemented. The draping method and the curing simulation need to be combined in a CAE chain that models the entire process from starting material to final product, which is finally used for the optimization. In addition, a method for the visualization of the optimization results has to be developed, which can assist the designer in finding the area in which the patch reinforcement has the largest impact.
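As a concrete illustration of the Pareto-front notion used here, the short sketch below filters a set of candidate objective vectors down to their non-dominated subset. The objective values are invented for illustration only; the thesis' actual objectives are stiffness- and warpage-related measures.

```python
def dominates(a, b):
    """True if candidate a is at least as good as b in every objective
    (minimization) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (compliance, warpage) pairs -- both to be minimized.
designs = [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
front = pareto_front(designs)   # -> [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0)]
```

The surviving points form the trade-off curve: improving one objective along the front necessarily worsens the other.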

Since the robustness of an optimization result is an important objective, a method should be proposed, which allows for evaluation of the final front's robustness. Therefore, a meta-model based robustness evaluation method has to be implemented, which uses the results from the multi-objective optimization as input information.
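To make the meta-model idea tangible, the following is a minimal, self-contained sketch of a Kriging-type (Gaussian-process) interpolator. The squared-exponential kernel, the length scale, the zero-mean simple-Kriging form, and the test function are illustrative assumptions, not the models actually used in this thesis.

```python
import math

def kernel(x, y, length=0.2):
    """Squared-exponential covariance (an illustrative kernel choice)."""
    return math.exp(-((x - y) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kriging_predict(xs, ys, x_new, nugget=1e-10):
    """Interpolating prediction at x_new (simple Kriging with zero mean)."""
    n = len(xs)
    K = [[kernel(xs[i], xs[j]) + (nugget if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    weights = solve(K, ys)
    return sum(w * kernel(x, x_new) for w, x in zip(weights, xs))

# Train on a few samples of a hypothetical response and query the surrogate.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [math.sin(2 * math.pi * x) for x in xs]
y_hat = kriging_predict(xs, ys, 0.5)   # reproduces the training value there
```

Once such a surrogate is fitted to the optimization data, robustness measures can be evaluated with many cheap surrogate calls instead of expensive simulations.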

### **1.3. Structure of the thesis**

The thesis is organized as follows: An overview of the state of the art regarding the methods used and developed in this thesis is given in Chapter 2. There, the patch optimization method is introduced, and an introduction to the field of evolutionary algorithms is given, including basic definitions regarding optimization.

The kinematic draping simulation method used in this work is presented in Chapter 3. In addition, a comparison of kinematic and mechanical draping methods is given there. The approach of the proposed curing simulation is introduced in Chapter 4. Besides the curing method, the approach used to calculate the residual strains occurring during cure is given there.

The optimization along the CAE chain developed within this work is presented in Chapter 5. In addition, the constraints resulting from the manufacturing process, which are to be considered within the optimization, are discussed in this chapter. The developed optimization approach is applied to a number of application examples, numerically verified, and discussed in Chapter 6. To further include manufacturing inaccuracies in the optimization process, a robustness evaluation method is developed; its workflow is presented in Chapter 7. First, an overview of robustness evaluation in the context of multi-objective optimization is given. Furthermore, the surrogate-model approach used for the robustness evaluation is presented. Finally, the developed method is applied to an application example introduced in the previous chapter.

This thesis is concluded with Chapter 8, summarizing the findings and giving an outlook for further research in the field of composite optimization.

# **2 State of the Art**

## **2.1. Overview of the SMC process with continuous fiber reinforcements**

The optimization strategy of this thesis focuses on the compression molding process of sheet molding compounds (SMC) with over-molding of continuous fiber reinforced patches. Therefore, a brief introduction to the processing of patch-reinforced SMC parts is given here.

The first step is the preparation of pre-impregnated semi-finished products (prepregs) with discontinuous and continuous fiber reinforcement (DiCoFRP and CoFRP), cf. Figure 2-1 (a) and (b). After the resin is applied to the bottom foil, either continuous or discontinuous fibers are added, and finally a resin-coated top foil is applied. Within GRK2078, the project frame of this thesis, an unsaturated polyester polyurethane hybrid (UPPH) resin system is used for both prepregs [1]. Once the fabrication of the semi-finished products is complete, the materials are cut and stacked. After a near net-shape draping process of the CoFRP material, cf. Figure 2-1 (c), the material is positioned in the press and co-molded, cf. Figure 2-1 (e). The last steps of the process are the demolding and deburring of the final parts, cf. Figure 2-1 (f). The reader is referred to Sharma et al. [2] for further details on the SMC process, and to Bücheler et al. [1] for the specific requirements of combining SMC with local patch reinforcements. The manufacturing of components with multiple fiber architectures, like SMC with unidirectional reinforcements, and their behavior during the manufacturing process was also discussed by Corbridge et al. [3].

The lightweight potential of combining continuous and discontinuous fiber reinforced materials has been demonstrated in several industrial applications. Akiyama [4] presented a trunk lid and an engine cover, while Bruderick et al. [5] showed a more holistic approach with the design of the Dodge Viper, where several SMC components are used in combination with CoFRP.

Figure 2-1: Process for continuous-discontinuous-fiber reinforced thermoset components, used within GRK2078; discontinuous-fiber prepreg preparation (a), continuous fiber prepreg preparation (b), handling and near net-shape draping (c), position and orientation control of the continuous reinforcement (d), co-molding (e), and final deburring (f) [1]

### **2.2. Structural optimization**

In product development, optimization methods have gained an important role, since they allow a designer to improve a component's performance in early design phases. Therefore, several different approaches have been developed in the past. Structural optimization methods can be classified into the three main groups sizing, shape, and topology optimization, for which the degree of design freedom increases from sizing to topology optimization. In a size optimization, only a given set of dimensions is subject to the optimization, such as the cross-section thickness of beam elements, cf. Figure 2-2. Shape optimization, in contrast, is a more flexible approach, which also allows changing the shape of the geometry. Here, usually a set of control points for morphing is defined to describe a flexible geometry; these points then act as degrees of freedom for the optimization. In the example given in Figure 2-2, the shape of the holes would be subject to the optimization process. The approach with the highest degree of freedom regarding geometry changes is topology optimization. In a product development process, topology optimization is usually used at an early stage to create a conceptual design [6]. While size and shape optimization start from an almost final part design, topology optimization starts from a rough design space with a uniform material distribution. Therefore, usually a volume constraint is used to describe the maximum allowed amount of material [7]. For a detailed overview of topology optimization methods, the reader is referred to Eschenauer and Olhoff [8]. A comprehensive overview and evaluation of structural optimization methods in general was published by Saitou et al. [6]. The approach presented here is a combination of shape and topology optimization: the change of the patch size is a kind of shape optimization, while the positioning of the patch can be assigned to the field of topology optimization.

Figure 2-2: Comparison of structural optimization approaches [9]
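The sizing case from Figure 2-2 can be illustrated with a one-variable toy problem: minimize the mass of a square-section cantilever subject to a tip-deflection limit. All numbers, and the bisection solver, are illustrative assumptions rather than any method from the cited literature.

```python
# Toy sizing problem: minimize the mass of a square-section cantilever
# (mass ~ a^2) subject to a tip-deflection limit; all values are illustrative.
F, L, E, delta_max = 1000.0, 1.0, 70e9, 0.005   # N, m, Pa, m

def deflection(a):
    I = a ** 4 / 12.0                  # second moment of a square section
    return F * L ** 3 / (3.0 * E * I)  # Euler-Bernoulli tip deflection

# Mass grows monotonically with a, so the optimum sits where the deflection
# constraint becomes active; locate that point by bisection.
lo, hi = 1e-4, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if deflection(mid) > delta_max:
        lo = mid        # too flexible -> need a larger section
    else:
        hi = mid
a_opt = hi

# Closed-form solution of the active constraint, for comparison.
a_exact = (4.0 * F * L ** 3 / (E * delta_max)) ** 0.25
```

The same active-constraint logic carries over to multi-dimensional sizing problems, where gradient-based solvers replace the simple bisection.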

Since for most applications the quality of optimization results can be increased by incorporating manufacturing constraints, this topic has been a field of research for some time. One of the first constraints to be included in topology optimization was the consideration of an extrusion direction, which ensures that the resulting geometry can be manufactured by an extrusion process [10]. In the same work, the authors developed an approach for considering a mold opening direction in the topology optimization. Using a cast part as an application example, Zhou et al. [10] demonstrated the influence of manufacturing constraints on the quality of the optimization result.

Zuo et al. [11] introduced further machining and manufacturing constraints for topology optimization. To ensure post-processing capability, such as milling, they proposed a minimum hole size control, which allows them to remove small holes from the resulting structure. Furthermore, they integrated a symmetry constraint in the optimization, which is able to ensure demoldability for injection molded parts.

The approaches of Zhou et al. [10] and Zuo et al. [11] were integrated in the so-called Solid Isotropic Material with Penalization (SIMP) model, where a virtual density is used as the optimization parameter. In both approaches, the density distribution over the thickness of the model is used as a constraint, such that no change of the gradient in the thickness direction is allowed. For a detailed explanation of the SIMP approach, the reader is referred to Harzheim [7].
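The core of the SIMP model is the penalized interpolation between void and solid stiffness, which can be sketched in a few lines; the material values and the penalty exponent below are illustrative textbook choices.

```python
def simp_modulus(rho, E0=210e9, E_min=1e-4 * 210e9, p=3.0):
    """SIMP interpolation: effective stiffness of an element with virtual
    density rho in [0, 1].  Intermediate densities are penalized (p > 1),
    which pushes the optimizer toward a clear solid/void layout.
    E0, E_min, and p are illustrative values."""
    return E_min + rho ** p * (E0 - E_min)

# Penalization makes "gray" material inefficient: half density yields far
# less than half stiffness, so the optimizer prefers rho near 0 or 1.
half = simp_modulus(0.5) / simp_modulus(1.0)   # ~0.125 for p = 3
```

The small but nonzero `E_min` keeps the stiffness matrix non-singular for void elements, a common numerical device in density-based topology optimization.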

The importance of manufacturing constraints for industrial applications is demonstrated by Gruber et al. [12]. They used topology optimization with integrated minimum member size control to generate design proposals for an aircraft structure. Liu et al. [13] discussed the influence of different manufacturing constraints on topology optimization results. For this purpose, they compared different constraints like minimum hole size (to prevent small holes) or length control methods (to prevent checkerboarding). Focusing on injection molded parts, they demonstrated the capability of a rib thickness gradient method to ensure the same thickness along a rib, which is useful to prevent problems with warpage. They also demonstrated the usefulness of the extrusion and tool opening direction constraints.

Usually, gradient-based optimization approaches are used for topology optimization. Chapman et al. [14] showed in their work the integration of an evolutionary algorithm into topology optimization. In their approach, they use the elements as design variables, which results in a large number of design variables even for small models. Thereby, every element corresponds to an entry in the vector of design variables, which in the context of evolutionary algorithms is also called the gene string. For larger structures, they proposed an approach for building sub-groups as well as a connectivity check. Within their work, they focused on 2D structures only.

Even though size and shape optimization are usually used in later steps of the design process, those methods can also be used to provide input for initial design concepts. For this purpose, a truss structure consisting of beam elements is used as a representation of the geometrical optimization problem. Such representations are typically used in civil engineering optimization problems [15].

Like topology optimization, shape optimization has been a research topic for a while. Haftka and Grandhi [16] give a good introduction to the applied methods. Furthermore, they describe some recurring problems, like stress localization due to sharp edges generated by the change of shape, or the need for mesh refinement during the optimization. An overview of the latest developments and trends in shape optimization methods, with a focus on aerodynamic applications, was given by Skinner and Zare-Behtash [17]. It has been proven by several authors that including manufacturing constraints in the optimization workflow significantly increases the quality of the results [11–13]. In addition, it could be seen that evolutionary algorithms are able to deal with complex optimization problems, like topology optimization [14].

Beyond the described methods, approaches integrating surrogate models or reduced-order models have been used for structural optimization [6]. Additionally, an approach using particle swarm optimization (PSO) as the algorithm for a structural optimization problem has been presented by Perez and Behdinan [18]. A PSO approach for 2D topology optimization has been presented by Luh et al. [19], where they used the elements as discrete design parameters. Another field gaining attention for the development of optimization methods is additive manufacturing, since this technology is able to realize relatively complex structures that closely follow the result of the optimization. Liu et al. [20] addressed this topic by considering deposition paths during the optimization run. Contrary to evolutionary algorithms, PSO is usually used for single-objective optimization only; furthermore, a rather complex modeling of the optimization problem is necessary as input for the optimization process.
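A minimal single-objective PSO of the kind referred to above can be sketched as follows; the inertia and acceleration coefficients are common textbook values, and the sphere test function is chosen purely for illustration.

```python
import random

def pso(f, bounds, n_particles=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal single-objective particle swarm optimizer (illustrative)."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    P_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: P_val[i])
    G, G_val = P[g][:], P_val[g]               # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity update: inertia + pull toward personal/global best.
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < P_val[i]:
                P[i], P_val[i] = X[i][:], val
                if val < G_val:
                    G, G_val = X[i][:], val
    return G, G_val

sphere = lambda x: sum(v * v for v in x)       # toy objective, minimum at 0
best, best_val = pso(sphere, [(-5, 5), (-5, 5)])
```

Note that, unlike an evolutionary algorithm maintaining a Pareto set, this basic PSO tracks a single global best, which is why it suits single-objective problems.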

### **2.3. Composite optimization**

Since composite materials offer several additional degrees of design freedom, such as fiber or matrix type, fiber volume content, or the stacking sequence of laminates, special optimization strategies are necessary to fully exploit the abilities of this material class. Common topology optimization approaches are not able to cover all possible degrees of design freedom. Therefore, several approaches have been developed to extend the capability of topology optimization for composite applications.

#### **Topology and structural optimization for composite applications**

In their work, Blasques and Stolpe [21] utilize the classical laminate theory (CLT) within a topology optimization. Based on a predefined set of layer types, a list of possible layups, each consisting of several layers, is created a priori. Those materials were then assigned during the topology optimization; thereby, the material assignment represents an additional parameter compared with common topology optimization approaches. Dai et al. [22] extended this approach by integrating the CLT directly into the optimization loop, considering 2D geometries only but offering more degrees of freedom for the optimization: not only the predefined layups, but all possible layer combinations can be used.
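The CLT building block used in such approaches can be sketched for the membrane stiffness of a laminate; the ply properties below are illustrative, and only the A11, A22, A12, and A66 terms are computed (the standard transformation formulas for a rotated ply are used).

```python
import math

# Illustrative UD ply properties (GPa) and plane-stress reduced stiffnesses.
E1, E2, G12, nu12 = 135.0, 10.0, 5.0, 0.3
nu21 = nu12 * E2 / E1
Q11 = E1 / (1 - nu12 * nu21)
Q22 = E2 / (1 - nu12 * nu21)
Q12 = nu12 * E2 / (1 - nu12 * nu21)
Q66 = G12

def q_bar(theta_deg):
    """Reduced stiffnesses of a ply rotated by theta (standard CLT transform)."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    Qb11 = Q11 * c**4 + 2 * (Q12 + 2 * Q66) * s**2 * c**2 + Q22 * s**4
    Qb22 = Q11 * s**4 + 2 * (Q12 + 2 * Q66) * s**2 * c**2 + Q22 * c**4
    Qb12 = (Q11 + Q22 - 4 * Q66) * s**2 * c**2 + Q12 * (s**4 + c**4)
    Qb66 = (Q11 + Q22 - 2 * Q12 - 2 * Q66) * s**2 * c**2 + Q66 * (s**4 + c**4)
    return Qb11, Qb22, Qb12, Qb66

def a_matrix(angles, t_ply=0.25):
    """Membrane stiffness terms A11, A22, A12, A66 of a laminate (GPa*mm)."""
    A = [0.0, 0.0, 0.0, 0.0]
    for th in angles:
        for k, q in enumerate(q_bar(th)):
            A[k] += q * t_ply
    return A

A_cross = a_matrix([0, 90, 90, 0])   # balanced cross-ply: A11 == A22
```

Evaluating such closed-form laminate stiffnesses is cheap, which is exactly why embedding the CLT directly in an optimization loop is feasible.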

A more holistic approach is presented by Liu et al. [23], who use three different objectives, resulting from a static load case, a frequency analysis, and a crash simulation. The model is divided by a user-predefined segmentation, and each of these segments has a specific layup and thickness, where the thickness serves as a design parameter. For these parameters, a specified number of simulations is performed to create a Kriging surrogate model, which is then used as input for a particle swarm optimization algorithm.

As the fiber orientation has a major influence on the mechanical properties, orientation optimization strategies have been addressed by several authors [24–28]. Savic et al. [27] used an improved hit-and-run algorithm for the optimization of fiber orientations in I-profiles. Therefore, they subdivided the profile into several regions and used the local fiber orientation of each region as a design parameter. Focusing on 2D structures, Legrand et al. [28] proposed a method for optimizing the element orientation. Here, they integrated a genetic algorithm into a finite element simulation, with the global strain energy as the objective function. Extending this approach, Vosoughi et al. [25] used a genetic algorithm combined with a particle swarm method to maximize the buckling load of a given structure.

#### **Process parameter optimization**

Since the mechanical properties of composite materials are strongly dependent on the manufacturing process, it is natural to study the potential of virtual process optimization. An approach for minimizing the shear angle is presented by Skordos et al. [29], who used a combination of an evolutionary algorithm and a kinematic draping simulation, with the orientation of the layers as an optimization parameter. In addition, an optimization of the draping process with a mechanics-based draping simulation was proposed by Chen et al. [30], who used a series of springs to model and optimize the gripping system used during the draping process. A two-loop CoFRP optimization workflow, considering a process and a structural simulation, has been developed by Kärger et al. [31]. In their work, they used the process simulation to optimize the draping process, while the structural performance is evaluated subsequently by a structural simulation, considering the results from the process optimization. A further method to consider draping effects in the optimization was introduced by Kaufmann et al. [32]. Since they focus on weight optimization, they create an a-priori draping database, which can then be used during the optimization loop; the draping simulation itself is thus not part of the optimization loop.

With the minimization of residual stresses, Sonmez and Akbulut [33] focus on the optimization of the tape laying process itself. Therefore, they included a thermoplastic tape laying process simulation in the optimization loop while searching for the optimum process parameters. In the optimization workflow, the tape laying process simulation is combined with a residual stress analysis, including a degradation model, and a simulation of the bonding process. They used process parameters such as temperature, roller velocity, and heated length as design variables. With this, they performed a single-objective optimization to identify the optimal process window in terms of residual stress minimization, with the degree of bonding and the thermal degradation as constraints.

#### **Optimization of layup and stacking sequence**

A topic covered by several authors is the search for the optimum layup configuration. This was first addressed by Le Riche and Haftka [34], who used an evolutionary algorithm to optimize the buckling load of a structure. An extended approach was introduced by Zehnder and Ermanni [35] with their patch-based optimization of a boat structure. In their context, a patch is defined as an area with a constant layup and thickness within a structure. These patches have to be defined a priori, and their shape remains the same during the optimization, while the stacking sequence in each patch is subject to the optimization.
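The flavor of such stacking-sequence searches can be sketched with a toy genetic algorithm. The fitness surrogate (0° plies weighted by the squared distance from the midplane) and the ply-count constraint are deliberate simplifications for illustration, not any cited author's formulation.

```python
import math
import random

ANGLES = (0, 45, 90)
N = 6                      # number of plies in the toy laminate
MID = (N - 1) / 2.0

def fitness(layup):
    """Toy bending-stiffness surrogate: 0-deg plies count most, and plies far
    from the midplane are weighted quadratically (illustrative only)."""
    if layup.count(0) > N // 2:        # toy manufacturing constraint
        return 0.0
    return sum(math.cos(math.radians(a)) ** 4 * (i - MID) ** 2
               for i, a in enumerate(layup))

def ga(pop_size=50, gens=80, p_mut=0.3, seed=3):
    """Elitist genetic algorithm over discrete ply angles."""
    rng = random.Random(seed)
    pop = [tuple(rng.choice(ANGLES) for _ in range(N)) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = [best]                                           # elitism
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 2), key=fitness)          # tournament
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, N)
            child = list(p1[:cut] + p2[cut:])                  # crossover
            for i in range(N):
                if rng.random() < p_mut:
                    child[i] = rng.choice(ANGLES)              # mutation
            nxt.append(tuple(child))
        pop = nxt
        best = max(pop, key=fitness)
    return best, fitness(best)

layup, fit = ga()
```

As expected for such a surrogate, good solutions place the stiff 0° plies at the outermost positions, mirroring the tendency of real stacking-sequence optimizations.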

A more recent development has been presented by Albanesi et al. [36], who increased the optimization quality by including manufacturing constraints, besides mechanical constraints, to minimize a component's weight. Therefore, they combined a genetic algorithm with a Laplace crossover and a power mutation approach, using ply order, ply number, and ply drop as design parameters. They used a wind turbine blade as a reference model to demonstrate the proposed approach, cf. Figure 2-3.

Figure 2-3: Example of a stacking sequence optimization for a wind turbine blade [36]

Since composite materials consist of several constituents, some work has been done on the optimization of the material composition. Sadagopan and Pitchumani [37] suggested a method to minimize manufacturing cost while maintaining mechanical and thermal properties. In their work, they compared a genetic algorithm and a simulated annealing approach while searching for the optimum material composition in terms of fiber architecture, fiber and matrix type, and fiber volume fraction.

A special case of reinforcing structures is covered by Jansson et al. [38] with the optimization of local unidirectional reinforcement tows, cf. Figure 2-4. The authors proposed a method for the combined optimization of the layup and tow thickness, while the geometry remains unchanged.

Figure 2-4: Example of a local tow reinforcement [38]

Rettenwander et al. [26] introduced an iterative approach for the optimization of local part thickness and fiber orientation. Here, thickness and fiber orientation are not modified at the same time, but optimized sequentially, and this loop is repeated several times. The optimization of local thickness and fiber volume content is addressed by Qian et al. [39] with an element-based approach. To consider manufacturability, they included a combination step, in which small regions with similar thickness are merged into larger regions with equal thickness. Thereby, areas which are too small to manufacture are removed. Using a spare wheel housing as a case study, the authors demonstrated the capability of their approach [40].

A method to consider uncertainties during a laminate optimization procedure was presented by Kalantari et al. [24], who used fiber orientation, fiber volume fraction, local thickness, and stacking sequence as parameters. In the proposed method, they incorporated a lower and an upper bound to account for scatter in the three parameters fiber volume fraction, fiber orientation, and local thickness. In each iteration, an intermediate step is included, in which a worst-case scenario for the three scattering parameters is assumed, while the total thickness and volume fraction remain constant. The result of this worst-case scenario is then used during a stacking sequence optimization.

Among commercially available solutions, the most common method is the implementation in Altair's software OptiStruct, cf. Figure 2-5. Here, a procedure consisting of the three main steps free-size optimization, composite size optimization, and composite shuffling is implemented. The results of the free-size step are the required thicknesses of each layer with a defined fiber orientation. Based on this result, the size optimization is used to calculate the necessary thickness of each ply, whereby discrete thicknesses are considered to ensure manufacturability. In the final step, the stacking sequence of the laminate is optimized.

Figure 2-5: Composite optimization procedure implemented in OptiStruct (Source: altairhyperworks.com)

#### **Optimization of local patch reinforcements**

In their work, Mathias et al. [41] focused on the optimization of composite repair patches for aluminium structures. They used a spline-based representation of a 2D patch as a reinforcement for damaged structures, cf. Figure 2-6. Besides the control points, they used the global patch orientation as well as the local fiber orientation as optimization parameters. Zehnder and Ermanni [35] proposed a similar approach, using a parametric CAD model to describe the patch geometry for the optimization.

Figure 2-6: Example of a spline-based definition of a local repair patch [41]

Rettenwander et al. [42] introduced a method for the optimization of local unidirectional reinforcement patches. The patches are positioned with respect to a stress field resulting from a finite element simulation, such that they follow the main stress paths. This procedure is based on the CAIO method (Computer Aided Internal Optimization), where tailored fiber rovings are oriented along the principal stress trajectories. However, they focused on 2D structures and did not include a draping step. Another approach for the optimization of the fiber orientation based on the CAIO method was proposed by Klein et al. [43]. In their work, they also focused on 2D structures only. Focusing on repair strategies, Kashfuddoja and Ramji [44] used a genetic algorithm for the optimization of the shape of local patches, where the dimensions of patches with predefined geometries are used as parameters. The optimization is performed with four predefined layups. The design parameters of the composite patch, like fiber orientation or fiber volume content, remain unchanged.

A different definition of a patch is given by Zhang et al. [45], in whose approach a patch refers to an area with a uniform stacking sequence. They subdivided the initial geometry a priori into patches, used the ply orientation as parameter, and combined the two objectives compliance and eigenfrequency in a weighted sum.

Optimization of local reinforcements for composite parts has shown its potential in several different applications. In most cases, however, manufacturing effects are not considered during the optimization process, although these effects can strongly influence the optimization result and should therefore be included in the workflow. Furthermore, several approaches are only applicable to 2D applications and do not consider draping effects in the structural optimization. In addition, the presented approaches focus mostly on the optimization of single objectives, from either a process or a structural point of view. Here, a more holistic approach is necessary to fully exploit the advantages of composite materials.

### **2.4. Multi-objective optimization**

In general, a multi-objective optimization problem can be formulated as

$$\text{Minimize } \mathbf{F}(\mathbf{x}) = [F_1(\mathbf{x}), F_2(\mathbf{x}), \dots, F_k(\mathbf{x})]^\mathrm{T}, \quad \mathbf{x} \in \mathbf{X}, \tag{1}$$

subject to

$$g_j(\mathbf{x}) \le 0, \qquad j = 1, 2, \dots, m, \tag{2}$$

$$h_l(\mathbf{x}) = 0, \qquad l = 1, 2, \dots, n, \tag{3}$$

where $k$ is the number of objective functions $F_i(\mathbf{x})$ and $\mathbf{x}$ represents the vector of design variables in a feasible design space $\mathbf{X}$. Furthermore, the constraints are given by the inequality constraints $g_j(\mathbf{x})$ and the equality constraints $h_l(\mathbf{x})$. The objective of the optimization is to find a solution which minimizes all objective functions simultaneously. In general, a minimization formulation is sufficient for all problems, since a maximization can be formulated as

$$\text{Maximize } \mathbf{F}(\mathbf{x}) = \text{Minimize } [-\mathbf{F}(\mathbf{x})]. \tag{4}$$

For most engineering applications, the objective functions are conflicting, which means that improving one objective will impair another one. To handle these conflicting objectives, Vilfredo Pareto described the idea of finding a set of optimal solutions, the so-called Pareto set or Pareto front [46–48]. Pareto optimality is defined as a set of solutions $\mathbf{x}$ for which no point $\mathbf{x}^*$ exists with $F_i(\mathbf{x}) \ge F_i(\mathbf{x}^*)$ for all objective functions $F_i(\mathbf{x})$ and $F_i(\mathbf{x}) > F_i(\mathbf{x}^*)$ for at least one function $F_i(\mathbf{x})$, where $i = 1, \dots, k$ and $k \ge 2$ is the number of considered fitness functions. Several different approaches exist in the literature to solve this problem. Thereby, two quality measures are pursued to evaluate the optimization algorithms: the first is to ensure convergence in the direction of the best solutions, and the second is to maintain a certain diversity along the front, to obtain a better representation of the front, cf. Figure 2-7. Additionally, it should be mentioned that it cannot be ensured that an optimization converges to a global optimum rather than a local one. This applies to multi-objective as well as single-objective optimization problems.

Figure 2-7: Main objectives of a multi-objective optimization: Convergence to an optimal front and a high diversity along the final front
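
The dominance relation defined above can be sketched in a few lines (an illustrative sketch, not part of any of the cited works): for minimization, a solution dominates another if it is no worse in every objective and strictly better in at least one, and the Pareto front is the set of non-dominated solutions.

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Example with two minimized objectives: (3, 3) is dominated by (2, 2)
points = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
print(pareto_front(points))  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```

The remaining three points are mutually non-dominated and together form the discrete Pareto front of this example.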

For the most common multi-objective optimization method, the weighted sum approach, different strategies are presented in the literature [48, 49]. The idea of this method is to combine all objective functions into a single objective function, such that only one function has to be minimized. In the simplest case, the combined objective $U$ is defined as

$$U = \sum_{l=1}^{k} w_l \, F_l(\mathbf{x}), \qquad F_l(\mathbf{x}) > 0 \ \forall l, \tag{5}$$

in which $w_l$ represents the individual weighting factor for each objective function $F_l$. Several studies have been conducted with these methods, and several variations have been published. For the creation of a Pareto set, the optimization has to be performed with several different sets of $w_l$. Furthermore, those methods are susceptible to user influence, since the different sets of $w_l$ have to be selected by the user. Therefore, no non-problem-specific strategy exists, which means that the user has to have good knowledge of the behavior of the objective functions to define a reasonable set of $w_l$ [48].
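
A minimal sketch of the scalarization in equation (5) illustrates why several optimization runs with different weight sets are needed: each set of $w_l$ yields a different compromise point. The two analytic objectives and the grid search below are purely illustrative assumptions.

```python
def weighted_sum(weights, objectives, x):
    """Scalarize multiple objectives F_l into one function U, cf. eq. (5)."""
    return sum(w * f(x) for w, f in zip(weights, objectives))

# Two illustrative conflicting objectives with minima at x = 0 and x = 2
F1 = lambda x: x ** 2
F2 = lambda x: (x - 2.0) ** 2

def minimize_on_grid(weights, lo=-1.0, hi=3.0, n=401):
    """Brute-force minimization of the weighted sum on a 1D grid."""
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(xs, key=lambda x: weighted_sum(weights, (F1, F2), x))

# Each weight set yields a different Pareto-optimal compromise:
for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    print(w, round(minimize_on_grid(w), 2))
```

Tracing the whole front this way requires one run per weight set, which is the drawback discussed above.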

In addition, several other strategies, like the weighted global criterion or the lexicographic method, have been developed with the aim of finding the Pareto optimal set. All of those methods have in common that multiple optimization runs, each with adjustments made by the user, are necessary [48]. An exception are genetic algorithm methods, with which it is possible to calculate the complete Pareto front within one optimization loop. These methods are therefore insensitive to user influence, since no weighting vector has to be defined. In the following, an overview of engineering applications of multi-objective optimization is given. Despite the drawback described above, in most cases a weighted sum approach is implemented to solve these problems.

Walker and Smith [50] used a genetic algorithm for the simultaneous optimization of mass and deflection. They used a weighted sum method, while a Tsai-Wu failure criterion is incorporated as a constraint. The fiber orientation of each ply and the laminate thickness are implemented as discrete design parameters. Almeida and Awruch [51] investigated the influence of different methods for the representation of the design variables, while using a genetic algorithm with a weighted sum. Hemmatian et al. applied a gravitational search algorithm [52] and an ant colony optimization method [53] to solve a composite optimization problem, with the objectives of total weight and cost, while combining different material types.

A patch optimization procedure is presented by Mejlej et al. [54], where a combination of two different genetic algorithms is used. In their work, they covered the stiffness and weight optimization, combined in a weighted sum, of locally reinforced metallic components, but did not include a forming or warpage simulation to predict the realized patch position or the resulting warpage.

Although most references use a weighted sum method, some attempts have been made to apply Pareto optimization approaches in engineering optimization. Vo-Duy et al. [55] used the conflicting objectives of weight minimization and natural frequency maximization to demonstrate the ability of genetic algorithms to find the Pareto front. They also showed the influence of the number of optimization design variables by varying them from one to three. Ermakova and Dayyani [56] applied a weighted sum method and a genetic algorithm to a shape optimization problem. In their work, they pointed out the problems with the random character of genetic algorithms, which in their case resulted in non-smooth results.

Another field of application for multi-objective optimization techniques are optimization procedures that combine several different simulation methods for the calculation of the fitness functions. Eck et al. [57] introduced a method to consider a process cost objective, resulting from the resin transfer molding (RTM) process, within the optimization. They used a surrogate model tool, which they called a process estimator, to predict the mold filling time and total process cost depending on the stacking sequence. A simultaneous structural and aerodynamic optimization approach is introduced by Dal Monte et al. [58], where the stiffness and the generated power of a wind turbine blade are optimized concurrently. Akmar et al. [59] suggested a multiscale approach for the optimization of hybrid composite layup structures. Their approach consists of a genetic algorithm, which is used to optimize the weave pattern by using a representative volume element (RVE), and an ant colony method for the optimization of the stacking sequence.

Pohlak et al. [60] developed an approach for the optimization of large composite parts, where cost and time are used as objectives, while stress, displacement, and maximum thickness constraints are considered. The optimization is performed on a meta-model, which is trained with input data from a free-size optimization.

Several authors have proven that evolutionary algorithms are able to solve complex engineering optimization problems. Nevertheless, most approaches utilize a weighted sum approach rather than a true multi-objective optimization, which results in several optimization runs, each of which is subject to user influence. Furthermore, the combination of different simulation methods has proven to be a promising field. Therefore, in this work a structural simulation will be coupled with process simulation, warpage prediction, and kinematic draping simulation to form a multi-objective optimization approach. In addition, the use of meta-models has proven useful for composite optimization problems. Therefore, a meta-model approach will be used for the evaluation of robustness.

### **2.5. Draping simulation**

Since the fiber structure determines the mechanical properties of the final part, it is important to predict the fiber architecture of continuous fiber reinforcements by draping simulation. In principle, the available draping simulation approaches can be separated into two fields: kinematic (also known as mapping) approaches and mechanical methods [61]. Both approaches have been developed with a different focus. Mechanical approaches are based on constitutive models and offer higher prediction accuracy of the actual material behavior [62–65]. For this, however, good knowledge of the material behavior is essential. In contrast, kinematic approaches offer only an approximate prediction of the material behavior and, thus, do not require material characterization. Moreover, kinematic approaches are usually very efficient. Due to the lack of exact modeling of the material behavior, it is not possible to predict wrinkling by kinematic draping simulation. Nevertheless, van West et al. [66] proposed a knowledge-based prediction of wrinkling by selecting the initial paths in a way that the shear angle can be used as an indicator. For this purpose, they placed the initial paths close to the point of interest. However, this approach is only feasible for simple geometries, since the initial paths have to be placed individually for each potentially problematic geometry feature. Long and Rudd [67] used the kinematic draping simulation to predict hotspots of fiber misalignment. Thereby, they used different angles for the initial paths to represent different fiber orientations of the undraped fabric. Since the draping simulation used here is a kinematic method, this chapter will focus on kinematic draping simulation.

The basic assumption made for kinematic draping simulation of unidirectional fiber reinforcements is a simple shear deformation of the material, where the length of the fibers and the spacing between the fibers are assumed to be constant during the forming process, cf. Figure 2-8. The kinematic draping approach was first introduced by Mack and Taylor [68] for the forming behavior of woven textile fabrics. Woven textiles can be assumed to deform in pure shear, where the length of the fibers in both directions is kept constant. The following assumptions are made for kinematic draping simulations:


- The fibers are inextensible, i.e. the fiber length remains constant during forming.
- The spacing between adjacent fibers remains constant.
- The fibers rotate at their crossover points without slipping.
- The fabric is in full contact with the tool surface.

Those constraints serve as the basis for every implementation of a kinematic draping approach, and are therefore also applied here.

Figure 2-8: Assumptions of the simple shear deformation behavior of unidirectional fiber reinforcements, used for the kinematic draping simulation, where fiber length and distance between the fibers remain unchanged during the draping procedure [61]
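
The constant-length and constant-spacing assumptions of Figure 2-8 can be checked numerically for a 2D simple-shear deformation (an illustrative sketch; the fibers are assumed to run parallel to the x-axis):

```python
import math

def simple_shear(point, gamma):
    """Apply a 2D simple-shear deformation with shear angle gamma (radians).
    Fibers are assumed to run along the x-axis."""
    x, y = point
    return (x + y * math.tan(gamma), y)

gamma = math.radians(30.0)

# A unit fiber segment along the fiber direction keeps its length ...
a = simple_shear((0.0, 0.0), gamma)
b = simple_shear((1.0, 0.0), gamma)
length = math.dist(a, b)

# ... and the perpendicular distance between two neighboring fibers
# (the lines y = 0 and y = h) is also unchanged.
h = 0.5
p = simple_shear((0.0, h), gamma)  # any point on the sheared upper fiber
spacing = p[1]  # fibers stay parallel to the x-axis, so spacing = y-offset

print(round(length, 6), round(spacing, 6))  # 1.0 0.5
```

Both invariants hold for any shear angle, which is exactly what the kinematic algorithm exploits when mapping the fiber grid onto the tool surface.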

The drawbacks and advantages of kinematic methods have been extensively studied by multiple authors. A main disadvantage of kinematic draping simulation is the dependence of the simulation result on the choice of the starting point and the initial path of the kinematic algorithm. Van West et al. [66] demonstrated the influence of the initial path for the draping of a corner bend, where the results depend heavily on the selected starting point. Vanclooster et al. [69] systematically studied the influence of different orientations of the initial path and different positions of the starting point.

Sharma and Sutcliffe [70] used a doubly curved structure to compare the results of a forming experiment with the results of a draping simulation. They showed that the kinematic draping simulation has difficulties predicting the shear angle in areas with rapidly changing curvature. Pickett et al. [71] used a rear seat bench and wheel arches of an automotive structure to show that a kinematic draping simulation can be used to get a first impression of the draping behavior in an early stage of product design, since it is reasonably robust and requires only a minimum of input data. Nevertheless, they also pointed out that the starting point and initial path have a significant influence on the result of the simulation.

Several authors have proven that a kinematic draping simulation is sufficient to give a coarse representation of the patch's geometry. Nevertheless, a comparison of the results gained with an FE-based draping simulation and a kinematic draping simulation is performed within this thesis. Furthermore, the kinematic draping simulation has significant benefits in terms of computational time. Since the proposed optimization procedure requires a large number of evaluations, a kinematic draping approach is used in this work.

### **2.6. Curing and warpage simulation**

The causes of warpage during composite manufacturing processes can be split into a thermal and a chemical part. The thermal contribution results from the different coefficients of thermal expansion of fiber and matrix, while the source of the chemical contribution is the creation of covalent bonds during the curing of the matrix. The distance between the monomer molecules in the liquid state is dominated by Van der Waals interactions. Since the specific distance of covalent bonds is about an order of magnitude smaller, the curing of the matrix results in a shrinkage of the matrix material. The amount of shrinkage is thereby influenced by the type and concentration of matrix, hardener, inhibitor, and fillers [72].

The model used in this thesis to describe the curing behavior was first introduced by Kamal [73] and is a further development of a model proposed earlier by Kamal and Sourour [74], including additional fitting parameters. The governing equations are presented in Section 5.1. A model for the curing of sheet molding compound (SMC) during the compression molding process was proposed by Lee [75]. The approach proposed by Lee covers the kinetic mechanism with governing equations for initiation, inhibition, and propagation of the curing process. Furthermore, Lee presented a method for the simulation of the heat transfer within the SMC material during the compression molding process and compared his simulation results with experimental data. Since the chemical composition of the SMC material used in this thesis is not available, the method from Lee could not be used here and, thus, the Kamal model is chosen. However, the choice of the curing model does not affect the principal goal of the thesis, since the main focus is not the curing simulation itself, but its implementation in the optimization workflow.
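
The Kamal model is commonly cited in the autocatalytic form $\dot{\alpha} = (k_1 + k_2 \alpha^m)(1-\alpha)^n$ with Arrhenius-type rate constants; the exact form and parameters used in this thesis are given in Section 5.1. A minimal sketch with purely illustrative parameter values:

```python
import math

# Kamal autocatalytic cure model (commonly cited form):
#   dalpha/dt = (k1 + k2 * alpha**m) * (1 - alpha)**n
# with Arrhenius-type rate constants k_i = A_i * exp(-E_i / (R * T)).
# All parameter values below are purely illustrative, not fitted data.
R = 8.314  # universal gas constant, J/(mol K)

def cure_rate(alpha, T, A1=1e5, E1=60e3, A2=1e6, E2=55e3, m=1.0, n=2.0):
    k1 = A1 * math.exp(-E1 / (R * T))
    k2 = A2 * math.exp(-E2 / (R * T))
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

def integrate_cure(T, t_end, dt=0.1, alpha0=0.0):
    """Explicit Euler integration of the degree of cure at constant temperature T."""
    alpha = alpha0
    for _ in range(int(t_end / dt)):
        alpha = min(1.0, alpha + dt * cure_rate(alpha, T))
    return alpha

# A higher mold temperature accelerates curing:
print(integrate_cure(T=413.15, t_end=120.0) > integrate_cure(T=393.15, t_end=120.0))
```

The rate vanishes at full cure ($\alpha = 1$), so the explicit integration remains bounded; in practice a dedicated ODE solver would be preferred for stiff parameter sets.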

An overview of different kinetic models is given by Halley and Mackay [76], who present the typically used approaches, like first and nth order, polynomial, or autocatalytic models. Furthermore, they present the common methods for chemo-rheological characterization, as well as for the cure-dependent modeling of the viscosity.

In their work, Tseng and Osswald [77] presented an FE-based method for the prediction of shrinkage and warpage of short fiber reinforced thermoset composites during the compression molding process. Furthermore, they considered the fiber orientation from the process simulation for the shrinkage calculation. A separation of the different effects leading to warpage was given by Radford and Rennick [78]. According to their work, the most important influences are material anisotropy, the temperature gradient through the part's thickness direction, and part-tool interactions.

Bogetti and Gillespie [79] presented an approach for the prediction of process-induced stresses and deformations within thick-section thermoset composite laminates. For this purpose, they used a one-dimensional cure simulation to calculate the gradients of degree of cure and temperature over the thickness. A micro-mechanical model is used to calculate the temperature- and cure-dependent material properties, while an incremental laminated plate theory is used to calculate the laminate properties and perform the warpage calculation. In addition, they presented a method to calculate the development of the Young's modulus depending on the degree of cure. Based on this work, Svanberg and Holmberg [80–82] developed an approach for the calculation of the strains resulting from curing. In their approach, they split the occurring strains into a chemical and a thermal component. The governing equations are presented in Section 5.2. The presented approach is computationally relatively inexpensive and is therefore used within this thesis.
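
The strain split described above can be sketched as an incremental sum of a thermal part, driven by the temperature change, and a chemical part, driven by the change in degree of cure. The coefficient values below are purely illustrative; the actual governing equations are given in Section 5.2.

```python
def strain_increment(dT, dcure, cte=5e-5, chem_shrinkage=-0.02):
    """Split of a process-induced strain increment into a thermal part
    (CTE times temperature change) and a chemical part (shrinkage
    coefficient times change in degree of cure). Illustrative coefficients."""
    return cte * dT, chem_shrinkage * dcure

# Accumulate strain over a simple two-phase cycle:
# (1) isothermal cure, (2) further cure during cool-down
total = 0.0
for dT, dcure in [(0.0, 0.8), (-120.0, 0.1)]:
    eps_th, eps_ch = strain_increment(dT, dcure)
    total += eps_th + eps_ch
print(round(total, 6))  # net shrinkage strain of the cycle
```

Keeping the two contributions separate is what allows the thermal and chemical sources of warpage, introduced at the beginning of this section, to be quantified individually.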

Focusing on unsymmetrical laminates, Wisnom et al. [83] analyzed the basic mechanisms causing residual stress and distortion, and especially their development during cure. They showed that after vitrification, the main cause of distortion is thermal stress, while before vitrification both thermal and chemical stresses have to be considered. In their work, Wisnom et al. focused on experiments and did not perform simulations. Hossain [84] proposed modeling the material's stiffness increase during curing in combination with a phenomenological viscoelastic curing model, to achieve a better prediction of the residual stresses and distortion than the approach presented by Svanberg, but at greater computational cost.

Warpage optimization was addressed by Gao and Wang [85]. Their approach consists of a design of experiments (DOE) to create training points for a meta-model, which is then used to link warpage and process parameters. Since they focused on injection molding, they used Moldflow during the DOE to create the necessary sample points for the meta-model. A Kriging model was used to link the resulting warpage with the design parameters injection time, mold and melt temperature, packing time, and packing pressure. Oliaei et al. [86] presented a method for the optimization of warpage and volumetric shrinkage, utilizing Moldflow and an artificial neural network, using a parameter set similar to that of Gao and Wang.

The curing model presented by Kamal has been shown to sufficiently predict the resin's curing behavior with a reasonable number of fitting parameters and limited experimental effort. Furthermore, the approach presented by Svanberg is an efficient and robust method for the prediction of the occurring warpage. As for the draping simulation, computational efficiency is preferred over accuracy in this work, since the optimization workflow requires numerous simulation runs. A detailed process simulation has to be performed after the optimization to account for further process influences.

### **2.7. Robustness studies in context of optimization**

The idea of incorporating uncertainties from manufacturing in the product development process traces back to Taguchi [87, 88], where the topic is covered from the point of view of final product quality. The first integration of a robustness criterion into an optimization workflow was proposed by Conceição [89], who used a robustness measure as a constraint for a weighted sum optimization.

The first implementation of a robustness measure as an objective in single-objective optimization was presented by Jin and Sendhoff [90]. They proposed an expectation-based and a variance-based method for the prediction of the robustness. For the calculation of the robustness value with the expectation-based method, a number of random sample points in the neighborhood of the current solution is created, and their mean fitness value is used as the fitness of the current design. The variance-based method utilizes a standard deviation around the current solution as an estimate of its robustness. The standard deviation is calculated a priori for different conditions, based on a reference set.
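
The expectation-based measure can be sketched as follows (an illustrative implementation; neighborhood radius, sample count, and the two test functions are assumptions made for demonstration):

```python
import random

def expected_fitness(fitness, x, radius=0.05, n_samples=20, rng=None):
    """Expectation-based robustness measure in the spirit of Jin and Sendhoff:
    the mean fitness over random sample points in the neighborhood of x is
    used instead of the fitness at x itself (real-valued design vector)."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    total = 0.0
    for _ in range(n_samples):
        neighbor = [xi + rng.uniform(-radius, radius) for xi in x]
        total += fitness(neighbor)
    return total / n_samples

# A deep but narrow optimum scores worse than a flat one under this measure
# (both functions have their minimum 0 at x = 0; minimization assumed):
sharp = lambda x: min(abs(x[0]) * 100.0, 1.0)  # narrow valley
flat = lambda x: x[0] ** 2                     # wide, shallow valley
print(expected_fitness(sharp, [0.0]) > expected_fitness(flat, [0.0]))  # True
```

The measure thus penalizes optima that deteriorate quickly under small design perturbations, which is exactly the behavior relevant for manufacturing scatter.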

For multi-objective optimization, the first implementation of a robustness consideration was presented by Deb and Gupta [91], who presented two different approaches. The first method is to consider an effective fitness function rather than the real fitness function; for this, they proposed to include deviations from manufacturing or other sources into the fitness evaluation step. The second method proposed by Deb and Gupta is to introduce a robustness-based fitness function, which is then used as a constraint for the optimization. Sun et al. [92] included a variation of relevant design parameters in a crash simulation. Utilizing a particle swarm optimization (PSO) method, they included surrogate models in the optimization loop to avoid computationally expensive simulation runs.

Several benchmark problems for robustness in optimization runs were proposed by Gaspar-Cunha et al. [93]. In their work, they included the robustness as an objective of the optimization. The robustness of a solution is thereby evaluated by a number of test points, randomly created within the neighborhood of the current solution. An overview of applicable robust optimization methods was given by Jin and Branke [94]. Furthermore, Beyer and Sendhoff [95] presented an extensive overview of robust optimization, in which they focused on robustness in terms of changes in the production process, model sensitivities, and parameter drifts during operation time. A brief survey on the consideration of uncertainties in optimization was given by Schuëller and Jensen [96], who suggested including an uncertainty analysis in the product development process to improve the performance and quality of the developed product. A method for the consideration of manufacturing inaccuracies in topology optimization, to improve the robustness of a solution, has been proposed by Troll [97]. There, the focus was on the SMC process with varying fiber orientations in the SMC material, but without consideration of local reinforcements.

Since it is important for engineering applications to consider the robustness of an optimization result, the robustness should be included as an additional decision-making criterion for the design engineer. In the context of this work, the robustness evaluation is performed in a post-processing step of the optimization to further account for manufacturing influences. The aim of the evaluation is to predict a process range which has to be maintained to ensure that the optimum predicted by the optimization can actually be achieved by the manufacturing process.

### **2.8. Meta-models in the context of optimization**

In general, a meta-model is used as an approximation of a detailed physics-based simulation. It is more efficient to run, which means that meta-models trade accuracy for efficiency. In the context of an optimization, a meta-model thus builds the link between the design parameter vector and the fitness of the design. If the simulation model $s(\mathbf{u})$ is defined as

$$F = s(\mathbf{u}), \tag{6}$$

then the meta-model $m(\mathbf{u})$ can be described as

$$\hat{F} = m(\mathbf{u}) \tag{7}$$

and

$$F = \hat{F} + \varepsilon, \tag{8}$$

where $\varepsilon$ represents the error of approximation [98].

According to Simpson et al. [98], meta-models are especially useful in early stages of product design, since there is large uncertainty regarding the constraints, design parameters, and manufacturing process. In addition, meta-models are useful for optimization-based robustness studies, since production errors cannot be predicted precisely in an early design stage and have to be estimated based on expert knowledge. Simpson et al. [98] give a good overview of available meta-modeling strategies.

An extremely flexible but complex modeling approach is the one proposed by Krige [98, 99], which is especially well-suited for deterministic problems. The approach from Krige consists of a known approximation function to model the global behavior, and a realization of a stochastic process to correct the trend function and ensure the interpolation of all design points. The model proposed by Krige is referred to as the Kriging model, and was originally used to create maps of underground geologic deposits from irregularly spaced borehole data. A meta-model-based optimization approach for engineering applications was presented by Park and Dang [100], where an FE simulation is used to create training points for a meta-model, which is then used for the optimization. The approach was extended by Rouhi et al. [101], who implemented a loop with several meta-model creation steps. Thereby, the bounds of the sampling space for the meta-model are tightened around the current optimum, to achieve a better representation of the most promising area. This is achieved by a refinement of the constraints for the sample point selection. Furthermore, a good overview of meta-modeling in an engineering context was given by Wang and Shan [102], who focused on real-world applications, the role of meta-models in engineering, and the applied model types.
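
The interpolation property that makes Kriging attractive here can be illustrated with a minimal zero-trend variant using a Gaussian correlation function (a didactic sketch, not the model of the cited works; the correlation parameter `theta` and the sine test function are assumptions):

```python
import math

def rbf(a, b, theta=10.0):
    """Gaussian correlation between two 1D sample locations."""
    return math.exp(-theta * (a - b) ** 2)

def solve(A, y):
    """Solve A w = y by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return w

def kriging_fit(xs, ys):
    """Weights of a zero-trend Kriging predictor from the correlation matrix."""
    K = [[rbf(a, b) for b in xs] for a in xs]
    return solve(K, ys)

def kriging_predict(x, xs, w):
    return sum(wi * rbf(x, xi) for wi, xi in zip(w, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
w = kriging_fit(xs, ys)
# The meta-model interpolates the training points exactly:
print(round(kriging_predict(1.0, xs, w), 6), round(math.sin(1.0), 6))
```

Because the predictor reproduces all training points exactly, its accuracy is highest close to them, which motivates reusing the evaluated designs from the evolutionary algorithm as training data.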

Due to its flexibility, the Kriging model is used as meta-model for the robustness evaluation within this thesis. Furthermore, this type of model provides a high prediction quality in areas close to the training points. This characteristic is beneficial in this work, since the robustness evaluation focuses on the vicinity of the found optima. Thereby, the results gained by the evolutionary algorithm during the optimization can be used as training data.

# **3 Patch optimization method**

### **3.1. Definition of patch optimization problem**

Some of the content in this section has been published in [103].

A general definition of a multi-objective optimization problem is given by equation (1) in Section 2.4. Based on this, the patch optimization problem can be formulated as

$$\text{Minimize } \mathbf{F}(\mathbf{u}) = [F_1(\mathbf{u}), F_2(\mathbf{u})]^\mathrm{T}, \quad \mathbf{u} \in \mathbf{U}, \tag{9}$$

where $F_1$ and $F_2$ are the objective functions, $\mathbf{U}$ represents the design space, and $\mathbf{u}$ the vector of design variables, given by

$$\mathbf{u} = \{\mathbf{x}, \mathbf{y}, l, \boldsymbol{\varphi}\}^{\mathrm{T}}.\tag{10}$$

Here, the position of a patch center point is defined by the set of coordinates $(x, y)$. Furthermore, $\varphi$ represents the orientation of the patch, and $l$ the patch length. In the proposed approach, the width of the patch is kept constant according to usual manufacturing conditions and is therefore not an optimization parameter. The described patch definition is visualized in Figure 3-1.

Figure 3-1: Visualization of the patch optimization problem, with the patch center point, its coordinates $(x, y)$, the patch orientation $\varphi$, the length $l$, and the constant patch width

Furthermore, constraints for the design variables have to be defined. Based on equations (2) and (3), the patch optimization problem is constrained by

$$\begin{aligned} x_{\min} &< x_i < x_{\max}, \\ y_{\min} &< y_i < y_{\max}, \\ l_{\min} &< l_i < l_{\max}, \\ \varphi_{\min} &< \varphi_i < \varphi_{\max}. \end{aligned} \tag{11}$$

Thereby, the design variables are modelled as discrete design parameters with defined step sizes, which restricts the design space to practical solutions.

### **3.2. Basic definitions of the optimization space**

A definition of the optimization space is given to clarify the difference between the search, design, and solution space. These three domains are used for the definition of the optimization workflow, as well as for the definition of robustness. The basic correlations are visualized in Figure 3-2. Here, the design space $D$ corresponds to the set of all possible designs, represented by all possible combinations of the design variables $\mathbf{u}$, and the solution space $\mathbb{S}$ to the set of all possible solutions, represented by all possible fitness values. For an unconstrained problem, it holds that

$$D, \mathbb{S} \in \mathfrak{R}.\tag{12}$$

The subset $F \subset \mathbb{S}$ corresponds to the fitness values of all solutions that are feasible for the optimization problem, constrained by the specifications from equation (11). Furthermore, the set of feasible designs $\mathcal{U}$ can be defined by

$$\mathcal{U} = \{\mathbf{u} \in D \mid f(\mathbf{u}) \in F\},\tag{13}$$

where $f(\mathbf{u})$ is the function used to map between design and solution space, which in the context of this thesis is equal to the fitness evaluation.

Figure 3-2: Definition of search space, design space, and solution space for an optimization problem

In addition, a search space is defined. The search space is the representation of the design values used by the optimization algorithm to perform the search operations. Therefore, a mapping between the search and the design space is necessary, where $g(\mathbf{u})$ is the function used to map the design parameters, in both directions, between design and search space. In the presented approach, a binary representation is used for the search space, cf. Chapter 3.3; therefore $g(\mathbf{u})$ represents a transformation from integer values to a binary representation. Furthermore, it should be clarified that for optimization approaches with real-value search methods, search space and design space coincide.
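The integer-to-binary mapping described above can be sketched as follows. This is an illustrative Python snippet, not the thesis implementation (which is embedded in the MatLab workflow); the function names are assumptions:

```python
def encode(value: int, n_bits: int) -> list:
    """Map an integer design value to its binary search-space representation."""
    return [(value >> k) & 1 for k in reversed(range(n_bits))]

def decode(bits: list) -> int:
    """Inverse mapping: bit string back to the integer design value."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

# A discrete design variable with 16 admissible steps needs 4 bits:
gene = encode(11, 4)   # -> [1, 0, 1, 1]
```

The search operators (crossover, mutation) then act on the bit string, while the fitness evaluation works on the decoded integer value.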

### **3.3. Evolutionary algorithm**

Some of the content in this section was already published in [103].

The idea of an optimization algorithm based on Darwin's principle of the survival of the fittest was first introduced by Holland [104]. From this, the field of evolutionary algorithms evolved, which consists of several different strategies, like genetic algorithms, evolutionary strategies, genetic programming, and evolutionary programming [105]. Today, the most popular type of evolutionary algorithm is the genetic algorithm, which is why both terms are often used synonymously. Contrary to other evolutionary algorithms (e.g. evolutionary strategies), for genetic algorithms mutation is not the primary search operator, but a crossover step is included to propose solutions based on prior solutions [106]. Since a genetic algorithm is applied here, this section will only focus on this method.

The general workflow of a genetic algorithm is shown in Figure 3-3. First, an initial population, consisting of $n$ individuals, is created randomly within the design space. In the context of evolutionary algorithms, an individual corresponds to a design vector $\mathbf{u}$. After a fitness value is assigned to each individual (cf. Section 3.4), a selection (cf. Section 3.3.3) and a crossover step (cf. Section 3.3.1) are performed to create an offspring generation. In addition to the crossover operator, a mutation step (cf. Section 3.3.2) is applied to increase the diversity in the population.

Once the fitness calculation for the offspring is done, the new population is formed by a selection procedure. Thereby, a distinction is made between the generational and the steady-state approach. The generational approach considers only the offspring for the next generation, while the steady-state approach considers both parents and offspring [107]. Here, the steady-state approach is applied, since it is more likely to converge.

After the new population is formed, again a crossover is performed to create new individuals. This procedure is repeated until either a defined number of generations is reached or a convergence criterion is met, cf. Section 6.3. The result of the optimization is given by the final population, which represents the Pareto-optimal front.

In addition, it should be mentioned that the size of the population has an important influence on the optimization's convergence behavior: a population that is too small does not provide enough diversity for the crossover to be useful, while a population that is too large leads to a slower convergence rate.

Figure 3-3: General workflow of an evolutionary algorithm with the main steps fitness evaluation, crossover, mutation, and selection.
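The loop of Figure 3-3 (initialization, fitness evaluation, crossover, mutation, selection) can be sketched as a compact, runnable example. This is a deliberately simplified single-objective Python stand-in, not the thesis implementation; the multi-objective domination-based fitness of Section 3.4 is replaced here by a scalar objective, and all parameter values are illustrative assumptions:

```python
import random

def evolve(fitness, n_pop=20, n_bits=8, n_gen=50, seed=0):
    """Minimal steady-state evolutionary loop: the new population is
    selected from parents AND offspring, so the best individual never
    gets lost. `fitness` maps a bit string to a scalar to be minimized."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_pop)]
    for _ in range(n_gen):
        # crossover: one-point, with randomly drawn parent pairs
        offspring = []
        for _ in range(n_pop // 2):
            p1, p2 = rng.sample(pop, 2)
            cut = rng.randrange(1, n_bits)
            offspring += [p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]]
        # mutation: flip each bit with a small probability
        for ind in offspring:
            for i in range(n_bits):
                if rng.random() < 0.05:
                    ind[i] ^= 1
        # steady-state selection: keep the best of parents and offspring
        pop = sorted(pop + offspring, key=fitness)[:n_pop]
    return pop[0]

# Example: minimize the decoded integer value of the bit string
best = evolve(lambda b: int("".join(map(str, b)), 2))
```

With the steady-state selection, the population fitness is monotonically non-increasing over the generations, which illustrates why this variant converges more reliably than the purely generational approach.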

There is a large variety of different algorithms proposed in literature [48, 108–111]. Since the focus of this thesis is not the development of an optimization algorithm, but an optimization strategy applying existing algorithms, one algorithm is selected and used to perform all calculations. The algorithm used within this thesis is the non-dominated sorting genetic algorithm II (NSGA-II) developed by Deb et al. [112], which is an improved version of the NSGA, proposed by Deb and Agrawal [113]. The peculiarity of the NSGA-II is that the fitness calculation is based on a domination value, cf. Section 3.4.

### **3.3.1. Crossover**

The purpose of the crossover phase is to create new possible solutions (offspring) based on existing individuals (parents). In general, a distinction can be made between real-value and binary crossover operators [113]. For real-value operators, the search space is equal to the design space, cf. Figure 3-2. In contrast, binary operators use a mapping function $g(\mathbf{u})$ to transfer the real-value design parameters into binary representations and, thus, use a binary search space for the crossover, cf. Figure 3-2. A binary representation is used in the proposed algorithm, since it is the most common approach in literature [114]. Therefore, the crossover operators used within the scope of this thesis are of the binary type, and only this type will be further discussed.

The first crossover type investigated in this study is the multi-point parametrized binary crossover, where each design variable is treated individually. Therefore, a binary representation of the design vector is created, and a bit-switching operation is performed on each parameter individually, cf. Figure 3-4. Here, two parents are used to create two children. Thereby, the parents are selected randomly within the current population, and the crossover process is repeated until the desired population size is reached. The behavior of the crossover operator can be controlled with two parameters: the crossover probability $\vartheta_{\mathrm{c}}$ and the number of points. The number of points sets how many bit-switching operations are performed per parameter during the crossover process; in Figure 3-4, an example with one crossover point is given. The probability parameter $\vartheta_{\mathrm{c}}$ sets the crossover probability for each parameter and is defined by

$$
\vartheta\_{\mathbf{c}} \in [0, 1]. \tag{14}
$$

where 0 represents a crossover probability of 0 % and 1 corresponds to 100 %. The crossover probability is used to ensure that an offspring can also consist of unchanged parts of the parents, which leads to a more pronounced local search around the parents.

Figure 3-4: Example for a multi-point parametrized binary crossover with number of points = 1

The second crossover operator is the multi-point binary crossover. Here, all design parameters are combined in one gene representation, cf. Figure 3-5. The bit-switching operation is then performed on this gene, which has the peculiarity that one design parameter can be changed at multiple positions, while other design parameters may not change at all. The crossover process is controlled with the same parameters, number of points and $\vartheta_{\mathrm{c}}$, as the multi-point parametrized crossover.

Both binary crossover methods share the characteristic that the average value of the decoded gene stays the same during the crossover process, since the bits are purely exchanged, cf. Figure 3-6 [113]. This can be proven by taking the mapping equation

$$g_l(\mathbf{u}) = \sum_{k=0}^{m} b_{l,k} \cdot 2^k, \quad l \in \{\text{parent 1}, \text{parent 2}, \text{offspring 1}, \text{offspring 2}\},\tag{15}$$

where $m$ represents the maximum number of bits used for the representation and the bit string is defined as

$$b_l = [b_{l,0}, b_{l,1}, \dots, b_{l,m}], \quad \text{with } b_{l,k} \in \{0, 1\}. \tag{16}$$

The mean value of parents and offspring is defined by

$$\frac{g_{\text{parent 1}} + g_{\text{parent 2}}}{2} = \frac{g_{\text{offspring 1}} + g_{\text{offspring 2}}}{2}. \tag{17}$$

With equation (15), and the fact that the number of ones and zeros at each bit position has to be equal before and after the crossover, it can be seen that the mean value stays constant during the crossover process.

Figure 3-6: A special behavior of the binary crossover operator is that the decoded average of both parent and offspring strings stays the same

In addition, the definition of the crossover spread factor $s$, taken from Deb and Agrawal [113], is introduced by

$$s = \left| \frac{g_{\text{parent 1}} - g_{\text{parent 2}}}{g_{\text{offspring 1}} - g_{\text{offspring 2}}} \right|,\tag{18}$$

with

$$\begin{aligned} s &< 1: \text{ contracting crossover (local search)}, \\ s &> 1: \text{ expanding crossover (global search)}, \\ s &= 1: \text{ stationary crossover}. \end{aligned}\tag{19}$$

Since the mean value is equal before and after the crossover, cf. equation (17), a stationary crossover means that parents and offspring are identical, which of course is undesirable.
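The mean-preservation property of equation (17) can be checked directly with a small sketch. This is illustrative Python, not the thesis code; the function names are assumptions:

```python
import random

def decode(bits):
    """Decoded gene value g(u), cf. equation (15)."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def multipoint_crossover(parent1, parent2, n_points=1, rng=None):
    """Multi-point binary crossover: after each crossover point the bits
    of the two strings are exchanged position-wise, so the bit sum at
    every position, and hence the decoded mean, is invariant."""
    rng = rng or random.Random()
    child1, child2 = list(parent1), list(parent2)
    points = sorted(rng.sample(range(1, len(parent1)), n_points))
    swap = False
    for i in range(len(parent1)):
        if points and i == points[0]:
            points.pop(0)
            swap = not swap
        if swap:
            child1[i], child2[i] = child2[i], child1[i]
    return child1, child2

p1 = [1, 0, 1, 1, 0, 0, 1, 0]
p2 = [0, 1, 1, 0, 1, 0, 0, 1]
c1, c2 = multipoint_crossover(p1, p2, n_points=2, rng=random.Random(1))
# decode(p1) + decode(p2) == decode(c1) + decode(c2), cf. equation (17)
```

Whatever the chosen crossover points, the two offspring always decode to the same sum as their parents, which is exactly the stationarity risk discussed above when the exchanged segments happen to be identical.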

#### **3.3.2. Mutation**

Besides the crossover, a mutation step is performed as a secondary search operator. While the primary driver of the optimization is the crossover, the mutation is used to increase the diversity within the population, cf. Figure 2-7. In general, mutation is applied after the crossover step on the offspring population, cf. Figure 3-3. Similar to the crossover, a probability for the mutation can be defined as

$$
\vartheta\_{\rm m} \in [0, 1]. \tag{20}
$$

The mutation operation is applied on the binary representation, and switches one or more randomly selected bits from 0 to 1 or vice versa. The process is illustrated by example in Figure 3-7.

Figure 3-7: Example for a bit switching mutation operation

The mutation probability $\vartheta_{\mathrm{m}}$ is applied for each parameter individually. Therefore, using basic relations from probability theory, the overall probability distribution for mutation $\mathbb{P}$ can be calculated by

$$\mathbb{P}(\eta\_{\rm m}, \eta\_{\rm nm}) = \left(\vartheta\_{\rm m}^{\eta\_{\rm m}} \* (1 - \vartheta\_{\rm m})^{\eta\_{\rm nm}}\right) \* k \tag{21}$$

with

$$k = \frac{(\eta\_{\rm m} + \eta\_{\rm nm})!}{\eta\_{\rm nm}! \* \eta\_{\rm m}!} \tag{22}$$

where $\eta_{\mathrm{m}}$ is the number of mutated and $\eta_{\mathrm{nm}}$ the number of non-mutated design variables.
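Equations (21) and (22) describe a binomial distribution, which can be evaluated in a few lines. A minimal Python sketch (illustrative, not the thesis code):

```python
from math import comb

def mutation_probability(theta_m: float, n_mut: int, n_nomut: int) -> float:
    """Probability of exactly n_mut mutated and n_nomut unchanged design
    variables for a per-variable mutation probability theta_m,
    cf. equations (21) and (22)."""
    k = comb(n_mut + n_nomut, n_mut)          # binomial coefficient, eq. (22)
    return k * theta_m**n_mut * (1 - theta_m)**n_nomut

# For two design variables and theta_m = 0.5, exactly one mutation
# occurs with probability 0.5:
p = mutation_probability(0.5, 1, 1)   # -> 0.5
```

Summing over all possible numbers of mutated variables yields 1, confirming that equation (21) is a proper probability distribution.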

#### **3.3.3. Selection**

The selection phase is used to choose the individuals that should survive into the next generation. Several different strategies are presented in literature, like tournament selection, fitness-proportional selection, generational selection, roulette-wheel selection, elitist selection, and below-limit selection [107, 115, 116]. As for the crossover operator, the most common approaches are used for the selection. Therefore, the elitist and below-limit selection methods are used in this thesis, and this chapter focuses only on these two approaches.

When using the elitist method, only the individuals on the Pareto front survive into the next generation. This means that in the example given in Figure 3-8, only individuals from front 1 are taken from one generation to the next. Considering the fronts from Figure 3-8 (a), the elitist approach would not consider individual $I_2$ when setting up the next generation, since fitness $F_1$ of individual $I_1$ is slightly better than that of $I_2$. In doing so, promising individuals might be rejected when setting up the next generation. To overcome this problem, the below-limit approach has been proposed, where an adjustable number of fronts is taken from generation to generation.

While the below-limit approach would work better for the configuration presented in Figure 3-8 (a), the contrary applies to Figure 3-8 (b). Here, the individuals on front 2 are also preserved into the next generation. Since all individuals in a generation serve as possible parents during the crossover, cf. Section 3.3.1, the convergence rate would be expected to be significantly lower, assuming that parents with considerably worse fitness result in offspring with worse fitness values.

Figure 3-8: Subdivision of the results into different fronts, with fronts good (a) and bad (b) for a below-limit selection method

Both methods can be used in combination with a niching pressure. Thereby, only one individual within a defined area is kept. The aim of this method is to increase the diversity along the final front and to prevent local accumulations of individuals, cf. Figure 2-7.

### **3.4. Fitness calculation**

Some of the content in this section was already published in [103].

In terms of optimization, the fitness of a solution is defined as a measure of the solution's capability to fulfill the optimization's objectives [110]. The most common approach to combine multiple objectives is to use a weighting function, cf. equation (5). However, the weighting approach has the drawback that the optimization's result is directly influenced by the selected weighting factors. Therefore, an approach without weighting is used here.

First, the fitness values of the objective functions have to be evaluated for each individual in the current generation, cf. Figure 3-3. In this work, the fitness evaluation is done by an Abaqus simulation, cf. Section 6.2. Once the objective functions are evaluated, they must be made comparable. Therefore, a domination value $d_j$ is calculated by

$$d_j = \sum_{i=1}^{n} d_{ji} \tag{23}$$

with

$$d_{ji} = \begin{cases} 1, & F_1(\mathbf{u}_j) > F_1(\mathbf{u}_i) \text{ and } F_2(\mathbf{u}_j) > F_2(\mathbf{u}_i) \\ 0, & \text{else} \end{cases} \tag{24}$$

where the fitness values $F(\mathbf{u})$ represent minimization objectives, $n$ is the total number of individuals and

$$j = 1, 2, \dots, n. \tag{25}$$

The selection process is then made, based on the domination value. Figure 3-9 shows the evaluated domination values for an example set of individuals. In addition, the dominated area for each point is highlighted, which corresponds to the visualization of equation (23).

Figure 3-9: Visualization of the domination values and dominated areas for an example set of individuals

Furthermore, the domination value is used to arrange the individuals into different fronts, cf. Figure 3-8. It is obvious that all individuals with a domination value of zero are those which lie on the Pareto front [46, 47].
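The domination count of equations (23) and (24) can be sketched directly for two minimization objectives. This is an illustrative Python snippet, not the NSGA-II implementation used in the thesis:

```python
def domination_values(fitness):
    """Domination value d_j for each individual: the number of individuals
    that are strictly better in both minimization objectives,
    cf. equations (23) and (24). Individuals with d_j == 0 lie on the
    Pareto front."""
    d = []
    for j, (f1_j, f2_j) in enumerate(fitness):
        d_j = 0
        for i, (f1_i, f2_i) in enumerate(fitness):
            if i != j and f1_j > f1_i and f2_j > f2_i:
                d_j += 1
        d.append(d_j)
    return d

# (3, 3) is dominated by (2, 2) only, all other points are non-dominated:
d = domination_values([(1, 4), (2, 2), (3, 3), (4, 1)])   # -> [0, 0, 1, 0]
```

Sorting the individuals by their domination values directly yields the subdivision into fronts shown in Figure 3-8.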

# **4 Draping simulation method**

### **4.1. Kinematic draping method in the context of an optimization strategy**

Most of the content in this chapter has been published in [103].

Due to the high number of simulations performed during an evolutionary optimization, a very efficient draping algorithm is needed. Since kinematic draping simulation requires smaller computation times than constitutive-based draping simulation, kinematic methods are, from a computational point of view, more suitable for optimization workflows. Furthermore, the optimization shall work as a suggestion for the designer and not as a final verification. Hence, a mechanical draping simulation must be performed afterwards to ensure manufacturability of the optimized patch setup without forming defects like wrinkling, fiber fracture or gapping. This is similar to topology optimization, where a verification simulation is needed to ensure the load-bearing capacity of the optimized topology [7]. The focus of the draping simulation within the presented optimization workflow is the calculation of the final patch geometry, while the prediction of forming defects is not yet relevant. Therefore, kinematic draping simulation is chosen to be sufficient for the aspired purpose. The drawbacks regarding the dependence on the initial starting point are not highly important, since all possible starting points are taken into account by the proposed optimization workflow. Furthermore, a comparison of FE-based and kinematic draping simulation will be given in Section 4.3 using the example of a curved reference structure. This comparison is used to discuss the advantages and disadvantages of both methods.

### **4.2. Workflow of the draping simulation**

The method presented in this chapter was already published in [103].

The general assumption of kinematic draping simulation is the inextensibility of the material in fiber direction. Depending on the type of fiber reinforcement, the deformation of the material is assumed as pure shear or simple shear along the fiber direction. While woven fabrics deform in pure shear, unidirectional reinforcements are assumed to deform in simple shear, as shown in Figure 2-8.

The starting point, patch orientation $\varphi$, patch length $l_0$ and width $w_0$ are the necessary input parameters for the kinematic draping simulation. Another required input is the geometry of the initial component, onto which the patch is going to be draped. The first step is to create the initial paths A and B on the part surface, starting from the given starting point in the center of the patch, cf. Figure 4-1 (a). Thereby, path A is the initial path in fiber direction, while B is perpendicular to the fiber direction.

For the calculation of the next node $\mathbf{a}^{\langle i+1 \rangle}$ from the current node $\mathbf{a}^{\langle i \rangle}$ on path A, a number of constraints need to be fulfilled. First, the distance between both nodes is set equal to the defined mesh size $m$

$$\left| \mathbf{a}^{\langle i+1 \rangle} - \mathbf{a}^{\langle i \rangle} \right| = m. \tag{26}$$

Furthermore, the node $\mathbf{a}^{\langle i+1 \rangle}$ has to lie on the initial component surface

$$a_3^{\langle i+1 \rangle} = F\!\left(a_1^{\langle i+1 \rangle}, a_2^{\langle i+1 \rangle}\right),\tag{27}$$

where $a_1^{\langle i+1 \rangle}$, $a_2^{\langle i+1 \rangle}$ and $a_3^{\langle i+1 \rangle}$ are the $x$, $y$, and $z$ components of the new node and $F(x, y)$ represents the surface of the initial component. Additionally, it has to be ensured that the given global patch direction is maintained. With these assumptions, all nodes along the initial paths A and B can be calculated, cf. Figure 4-1 (a).

Figure 4-1: Node calculation, starting from the initial paths A and B (a), and for all further nodes (b).

Based on these initial paths, all other nodes can be calculated. From the assumption of simple shear (Figure 2-8), the following equations can be derived for the parallel paths

$$\left| \mathbf{a}^{\langle A(i+1), B(j+1) \rangle} - \mathbf{a}^{\langle A(i+1), B(j) \rangle} \right| = m \,, \tag{28}$$

$$\left| \mathbf{a}^{\langle A(i+1), B(j+1) \rangle} - \mathbf{a}^{\langle A(i), B(j+1) \rangle} \right| = m,\tag{29}$$

$$\text{with } i = 1, 2, \dots, \frac{l_0}{m}, \qquad j = 1, 2, \dots, \frac{w_0}{m}.$$

Equations (28) and (29) express that the distance between the fibers as well as the fiber length remains constant during the draping process. Here, $\mathbf{a}^{\langle A(i+1), B(j) \rangle}$ is the next node on path A, while $\mathbf{a}^{\langle A(i), B(j+1) \rangle}$ is the closest node on path B, cf. Figure 4-1 (b); $l_0$ is the initial patch length and $w_0$ the initial patch width. Again, the node has to lie on the part surface, which is represented by equation (27). Rearrangement of equations (27), (28) and (29) leads to the following system of equations

$$\begin{aligned} 0 &= \left(a_1^{\langle A(i+1), B(j+1) \rangle} - a_1^{\langle A(i+1), B(j) \rangle}\right)^2 + \left(a_2^{\langle A(i+1), B(j+1) \rangle} - a_2^{\langle A(i+1), B(j) \rangle}\right)^2 \\ &\quad + \left(a_3^{\langle A(i+1), B(j+1) \rangle} - a_3^{\langle A(i+1), B(j) \rangle}\right)^2 - m^2 \end{aligned}$$

$$\begin{aligned} 0 &= \left(a_1^{\langle A(i+1), B(j+1) \rangle} - a_1^{\langle A(i), B(j+1) \rangle}\right)^2 + \left(a_2^{\langle A(i+1), B(j+1) \rangle} - a_2^{\langle A(i), B(j+1) \rangle}\right)^2 \\ &\quad + \left(a_3^{\langle A(i+1), B(j+1) \rangle} - a_3^{\langle A(i), B(j+1) \rangle}\right)^2 - m^2 \end{aligned} \tag{30}$$

$$0 = a_3^{\langle A(i+1), B(j+1) \rangle} - F\!\left(a_1^{\langle A(i+1), B(j+1) \rangle}, a_2^{\langle A(i+1), B(j+1) \rangle}\right)$$

which has to be solved for all nodes. The presented kinematic draping approach is implemented in MatLab and embedded in the optimization workflow.
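For the special case of a flat surface $F(x, y) = 0$, the system (30) reduces to intersecting two circles of radius $m$ in the surface plane, which admits a closed-form solution. The following Python sketch illustrates this simplified case only (the thesis implementation in MatLab solves the general system numerically; all names here are illustrative):

```python
import math

def next_patch_node(p, q, m, prev):
    """Next patch node on a flat surface z = 0: the new node must lie on
    the surface and at mesh distance m from both known neighbor nodes
    p = a^<A(i+1),B(j)> and q = a^<A(i),B(j+1)>, cf. equations (28)-(30).
    Of the two circle intersection points, the one farther from the
    diagonal predecessor `prev` = a^<A(i),B(j)> is taken, so that the
    mesh propagates forward. Assumes |p - q| <= 2 m."""
    (px, py, _), (qx, qy, _) = p, q
    dx, dy = qx - px, qy - py
    d = math.hypot(dx, dy)                  # distance between the neighbors
    h = math.sqrt(m * m - (d / 2) ** 2)     # offset from the p-q midpoint
    mx, my = (px + qx) / 2, (py + qy) / 2
    # two candidate points, perpendicular to the p-q axis
    c1 = (mx + h * (-dy) / d, my + h * dx / d, 0.0)
    c2 = (mx - h * (-dy) / d, my - h * dx / d, 0.0)
    dist = lambda a, b: math.dist(a[:2], b[:2])
    return c1 if dist(c1, prev) > dist(c2, prev) else c2

# Unit mesh cell: neighbors at (1,0) and (0,1), predecessor at the origin
new = next_patch_node((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 1.0, (0.0, 0.0, 0.0))
# -> the far corner of the cell, approximately (1.0, 1.0, 0.0)
```

For curved surfaces, the surface constraint (27) couples all three coordinates and the intersection must be found iteratively, which is what makes the general node computation a nonlinear system.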

### **4.3. Numerical verification**

To demonstrate the capability of the kinematic draping method used within this thesis, it is compared with results from an established mechanical method developed by Dörr et al. [117]. For this purpose, three reference patch configurations are defined, cf. Figure 4-2. The tool geometry used here has different angles at the two edges D1 and D2.

Configuration A is a simple, single-curved forming of the patch at the edges. In configuration B the patch is placed over the corner bendings, which results in more complex deformation than configuration A. In the last configuration, the patch is additionally rotated to further increase the complexity of the forming process.

A [0/90/0] layup with a total thickness of 1.5 mm is used for the mechanical draping simulation. For each reference configuration, the first 0° layer is compared with the results from the kinematic draping simulation.

Figure 4-2: Tool geometry with two different angles at the sides D1 and D2, and three reference patch configurations A, B, C

The results of configuration A are shown in Figure 4-3. For the simple deformation applied here, the kinematic and the mechanical draping simulation produce the same results. This shows that the kinematic draping method is able to create reliable results for simple geometries.

Figure 4-3: Comparison of the results for configuration A: kinematic method (green) is congruent with mechanical method (blue) [118])

Comparing the results of configuration B, cf. Figure 4-4, it can be seen that the kinematic draping method is not able to give a correct representation of the deformation process within the corner bendings. The corner bendings result in material accumulations, which cannot be modelled by kinematic draping methods, cf. Figure 4-4 (b). Therefore, the folding of the patch in the corner bending cannot be predicted correctly. The disadvantages of the kinematic draping simulation for doubly curved structures have also been demonstrated by Sharma and Sutcliffe [70]. Comparing the results of the simulations, cf. Figure 4-4 (c), with the experimental result, cf. Figure 4-4 (d), it can be seen that the mechanical method predicts the final geometry better than the kinematic draping simulation. However, the main part of the patch is draped correctly, while the differences are limited to small areas of the patch. Therefore, the result of the kinematic draping simulation can be seen as a good approximation of the patch's final position.

Figure 4-4: Comparison of the results for configuration B: the kinematic model (green) differs from the mechanical model (blue) only in the corners of the structure, perspective view (a), top (b), bottom view (c) [118], experimental result (d) [119]

The results of the last configuration are presented in Figure 4-5. As for configuration B, the kinematic draping simulation shows disadvantages in predicting the correct geometry in the corner bendings. Here, the kinematic draping simulation is not able to correctly predict the material movement resulting from the corner bending. However, as for configuration B, the differences affect only small areas of the patch, while the main parts of the geometry are predicted correctly. Furthermore, it should be mentioned that this effect depends strongly on the considered geometry, since the deviation created at the doubly curved geometry would increase with increasing patch length beyond the curvature.

Figure 4-5: Comparison of the results for configuration C: top view with details D1 and D2 (green: kinematic method blue: mechanical method [118])

The advantage of the kinematic draping simulation becomes clear when comparing the necessary simulation times, cf. Table 4-1. While the kinematic draping simulation is performed within only a few seconds, the mechanical method needs a couple of hours. Since a large number of draping simulations is necessary during one optimization run, it becomes obvious that a mechanical method cannot be used here.


Table 4-1: Comparison of the computational times for the reference configurations

### **4.4. Conclusion on draping simulation**

A kinematic draping method has been implemented, which fulfills the requirements of an optimization strategy based on an evolutionary algorithm. A comparison of kinematic and mechanical draping methods has been presented to show the advantages and disadvantages of a kinematic draping approach. Results and computational time have been compared for three different reference patch configurations with different degrees of geometrical complexity. Taking the results of the comparison with the mechanical draping simulation into account, the kinematic draping simulation can be seen as a good approximation of the patch's forming behavior. In general, no material parameters are necessary for the kinematic draping simulation. In addition, the computational time of a kinematic draping simulation is much shorter compared with the mechanical method.

Furthermore, the optimization shall work as a suggestion for the designer and not as a final verification. Therefore, a mechanical draping simulation must be performed afterwards to ensure manufacturability of the optimized patch setup without forming defects like wrinkling, fiber fracture or gapping.

# **5 Curing and warpage simulation**

### **5.1. Kamal-Malkin curing model**

The calculation of the warpage occurring during the manufacturing process is directly related to the curing of the resin. Therefore, the approach used for the prediction of the curing is presented first. For an exothermal crosslinking reaction, the degree of cure $q$ is defined by

$$q(t) = \frac{H(t)}{H\_{\text{max}}} \tag{31}$$

where $H(t)$ and $H_{\max}$ are the accumulated enthalpy until time $t$ and the total reaction enthalpy [84]. In addition, $q(t)$ is per definition

$$q \in [0, 1] \,\tag{32}$$

where 0 represents the uncured and 1 the fully cured state. Based on this, a cure rate

$$\frac{dq}{dt} = f(q, T),\tag{33}$$

can be defined depending on the state of cure $q$ and the temperature $T$, where the function $f(q, T)$ depends on the resin type. Several different approaches to describe the cure rate, depending on the used material type, can be found in literature. An overview is given by Halley and Mackay [76].

The simplest form to model the development of cure is the nth order model [76]

$$\frac{dq}{dt} = k(1-q)^n\tag{34}$$

with the Arrhenius equation

$$k = k\_0 \exp\left(-\frac{A}{RT}\right) \tag{35}$$

with $R$ as the universal gas constant, the material-dependent parameters $k_0$ and $A$, and the order of reaction $n$. In addition, an autocatalytic model approach can be given with the kinetic equation [74, 120]

$$\frac{dq}{dt} = kq^m(1-q)^n \,. \tag{36}$$

where $k$ is defined according to equation (35), while $m$ and $n$ represent the orders of reaction. Based on this, Kamal [73] combined both approaches and derived the following phenomenological model

$$\frac{dq}{dt} = (k\_1 + k\_2 q^m)(1 - q)^n \,. \tag{37}$$

Again, $k_i$ is calculated with the Arrhenius equation

$$k_i = k_{i0} \exp\left(-\frac{A_i}{RT}\right), \quad i \in \{1, 2\}. \tag{38}$$

Since this model is used in this work to describe the curing behavior, the six material-dependent parameters $m$, $n$, $k_{10}$, $k_{20}$, $A_1$, and $A_2$ must be determined. The data have been determined by DSC measurements performed by Schwab [121].

It should be mentioned that all of these approaches have the drawback that a small initial state of cure has to be defined to start the reaction. Therefore, the initial state of cure is set to

$$q\_0 = 0.01.\tag{39}$$
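The cure development according to equations (37)-(39) can be integrated numerically, for instance with a simple forward-Euler scheme at constant temperature. The following Python sketch is illustrative only; the parameter values are hypothetical placeholders, not the DSC-fitted values of [121]:

```python
import math

# Hypothetical Kamal model parameters (illustrative only; the values used
# in this work were determined from DSC measurements [121]):
K10, K20 = 1.0e5, 5.0e5      # 1/s, pre-exponential factors k_10, k_20
A1, A2 = 6.0e4, 5.5e4        # J/mol, activation energies A_1, A_2
M, N = 0.5, 1.5              # reaction orders m, n
R = 8.314                    # J/(mol K), universal gas constant

def cure_rate(q, T):
    """Kamal cure rate dq/dt, cf. equations (37) and (38)."""
    k1 = K10 * math.exp(-A1 / (R * T))
    k2 = K20 * math.exp(-A2 / (R * T))
    return (k1 + k2 * q**M) * (1 - q)**N

def integrate_cure(T, t_end, dt=0.1, q0=0.01):
    """Forward-Euler integration of the degree of cure at constant
    temperature T, starting from the small initial value q0 = 0.01
    needed to trigger the autocatalytic term, cf. equation (39)."""
    q, t = q0, 0.0
    while t < t_end:
        q = min(1.0, q + cure_rate(q, T) * dt)   # q is bounded by 1
        t += dt
    return q
```

At higher curing temperatures the rate constants grow according to the Arrhenius terms, so the degree of cure reached after a fixed time increases monotonically with temperature.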

### **5.2. Svanberg method for expansional strain**

The expansional strain is calculated based on the approach proposed by Svanberg and Holmberg [80, 82]. For the calculation of cure-induced residual stresses and deformations in fiber-reinforced composites, either the material properties have to be homogenized or fiber and matrix have to be considered separately. Homogenized material parameters are used here; the mapping and homogenization step is explained in Section 5.4.

The stress tensor is defined by

$$
\sigma_{ij} = C_{ijkl}\, \varepsilon_{kl} - C_{ijkl}\, \varepsilon_{kl}^{\mathrm{E}} \tag{40}
$$

where $C_{ijkl}$ is the stiffness tensor, $\varepsilon_{kl}$ the mechanical and $\varepsilon_{kl}^{\mathrm{E}}$ the expansional strain tensor. Thereby, the stiffness tensor depends on the degree of cure and the temperature,

$$C_{ijkl} = \begin{cases} \text{no stress}, & q < q_{\text{gel}} \text{ and } T \ge T_{\mathrm{g}}(q) \\ C_{ijkl}^{\mathrm{r}}, & q \ge q_{\text{gel}} \text{ and } T \ge T_{\mathrm{g}}(q) \\ C_{ijkl}^{\mathrm{g}}, & T < T_{\mathrm{g}}(q) \end{cases} \tag{41}$$

where $C_{ijkl}^{\mathrm{r}}$ and $C_{ijkl}^{\mathrm{g}}$ are the stiffness tensors for the rubbery and the glassy state, while in the liquid state no stress development is expected to occur. Considering the approach of Svanberg and Holmberg [80, 82], the expansional strain can be divided into a thermal part $\varepsilon_{kl}^{\mathrm{T}}$ and a chemical part $\varepsilon_{kl}^{\mathrm{C}}$

$$
\varepsilon\_{kl}^{\rm E} = \varepsilon\_{kl}^{\rm T} + \varepsilon\_{kl}^{\rm C}. \tag{42}
$$

The thermal strains can be expressed by

$$
\varepsilon_{kl}^{\mathrm{T}} = \int_0^t \alpha_{kl}(T, q) \frac{\partial T}{\partial t'} dt' \,, \tag{43}
$$

where the coefficient of thermal expansion $\alpha_{kl}$ depends on temperature and degree of cure

$$\alpha\_{kl} = \begin{cases} \alpha\_{kl}^1, q < q\_{\text{gel}} \text{ and } T \ge T\_{\text{g}}(q) \\ \alpha\_{kl}^r, q \ge q\_{\text{gel}} \text{ and } T \ge T\_{\text{g}}(q) \\ \alpha\_{kl}^{\text{g}}, T < T\_{\text{g}}(q) \end{cases} \tag{44}$$

where $\alpha_{kl}^{\mathrm{l}}$, $\alpha_{kl}^{\mathrm{r}}$, and $\alpha_{kl}^{\mathrm{g}}$ are the coefficients of thermal expansion in the liquid, rubbery, and glassy state. $T_{\mathrm{g}}$ is the glass transition temperature and $q_{\text{gel}}$ the gel point of the resin. The gel point characterizes the change from liquid to solid behavior.

Similar to the coefficient of thermal expansion, a coefficient of chemical shrinkage $\beta_{kl}$ is introduced, which correlates the chemical expansion with the curing rate. Thus, the strain resulting from the chemical shrinkage can be written as

$$
\varepsilon\_{kl}^{\mathbb{C}} = \int\_0^t \beta\_{kl}(T, q) \frac{\partial q}{\partial t'} dt' \,, \tag{45}
$$

while the coefficient of chemical shrinkage is also dependent on the temperature and on the degree of cure, and can be expressed by

$$\beta\_{kl} = \begin{cases} \beta\_{kl}^1, q < q\_{\text{gel}} \text{ and } T \ge T\_{\text{g}}(q) \\ \beta\_{kl}^r, q \ge q\_{\text{gel}} \text{ and } T \ge T\_{\text{g}}(q) \\ \beta\_{kl}^g, T < T\_{\text{g}}(q) \end{cases} \tag{46}$$

where $\beta_{kl}^{\mathrm{l}}$, $\beta_{kl}^{\mathrm{r}}$, and $\beta_{kl}^{\mathrm{g}}$ are the coefficients of chemical shrinkage in the liquid, rubbery, and glassy state. Since the chemical shrinkage is only present in the matrix, the coefficient can be calculated from the matrix properties. Therefore, Svanberg and Holmberg [80, 82] suggested the following approach

$$
\beta_{kl}^{\text{matrix}} = -\frac{\Delta V^{\text{matrix}}}{3},
\tag{47}
$$

where $\Delta V^{\text{matrix}}$ represents the relative matrix cure shrinkage, i.e. the relative volume change of the resin from the liquid to the fully cured state. This corresponds to the assumption that the cure shrinkage is evenly distributed in all spatial directions. Furthermore, the following simplification shall apply

$$
\begin{aligned}
\beta_{xx}^{\text{matrix}} &= \beta_{yy}^{\text{matrix}} = \beta_{zz}^{\text{matrix}} \\
\beta_{kl}^{\text{matrix}} &= 0, \quad \text{with } k, l \in \{x, y, z\}, \; k \neq l.
\end{aligned}
\tag{48}
$$

During the homogenization step, the coefficients $\alpha\_{kl}$ and $\beta\_{kl}$ are adjusted according to the corresponding fiber volume fraction and fiber orientation, cf. Section 5.4.

The relative cure shrinkage of the matrix can be estimated from an experiment by [80, 81]

$$
\Delta V^{\text{matrix}} = \frac{\rho\_{\text{matrix}}^{\text{liquid}}(T\_{\text{cur}}) - \rho\_{\text{matrix}}^{\text{cured}}(T\_{\text{cur}})}{\rho\_{\text{matrix}}^{\text{cured}}(T\_{\text{cur}})} \ast \frac{1}{q(T\_{\text{cur}})} \tag{49}
$$

where $\rho\_{\text{matrix}}^{\text{liquid}}$ and $\rho\_{\text{matrix}}^{\text{cured}}$ are the densities of the liquid and the cured resin, respectively. Furthermore, $T\_{\text{cur}}$ is the temperature at which the resin was cured during the experiment and $q(T\_{\text{cur}})$ is the degree of cure achieved at the end of the experiment.
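Equations (47) and (49) can be combined into a short numerical sketch. The function names are illustrative assumptions, not part of the thesis:

```python
def relative_cure_shrinkage(rho_liquid, rho_cured, q_end):
    """Relative matrix cure shrinkage per eq. (49): the relative volume
    change of the resin from liquid to cured state, scaled by the degree
    of cure q_end reached at the end of the experiment."""
    return (rho_liquid - rho_cured) / rho_cured * (1.0 / q_end)

def chemical_shrinkage_coefficient(delta_V):
    """Isotropic coefficient of chemical shrinkage per eq. (47), assuming
    the cure shrinkage is evenly distributed over the spatial directions."""
    return -delta_V / 3.0
```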

The glass transition temperature is the temperature at which the transformation from the rubbery to the glassy, brittle state occurs upon temperature decrease. Hence, knowledge of this value is essential for the definition of the material properties stiffness, thermal expansion and chemical shrinkage. The glass transition usually takes place within a temperature range rather than at one specific temperature. The glass transition temperature depends on the degree of cure, which can be expressed with the DiBenedetto equation [80, 82]

$$\frac{T\_{\text{g}} - T\_{\text{g}0}}{T\_{\text{g}\infty} - T\_{\text{g}0}} = \frac{\lambda \, \ast \, q}{1 - (1 - \lambda) \, \ast \, q} \tag{50}$$

where $T\_{\text{g}0}$ and $T\_{\text{g}\infty}$ are the glass transition temperatures of the uncured and the fully cured matrix, respectively, and $\lambda$ represents a material-system-dependent parameter.
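Solving equation (50) for $T\_{\text{g}}$ gives a direct evaluation of the glass transition temperature as a function of the degree of cure. A minimal sketch (function name and example values are assumptions):

```python
def glass_transition_temperature(q, T_g0, T_ginf, lam):
    """DiBenedetto equation (50), solved for T_g: glass transition
    temperature as a function of the degree of cure q, with T_g0 and
    T_ginf the glass transition temperatures of the uncured and fully
    cured matrix and lam the material-dependent parameter."""
    return T_g0 + (T_ginf - T_g0) * (lam * q) / (1.0 - (1.0 - lam) * q)
```

For q = 0 the equation returns T_g0, and for q = 1 it returns T_ginf, as expected from the limiting cases.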

### **5.3. Implementation of the curing and warpage model**

Svanberg's approach is implemented within an Abaqus user subroutine of the type *UEXPAN*. To be implemented, equations (42) - (45) have to be formulated in an incremental form, using the forward integration scheme

$$
\varepsilon\_{kl}^{\mathbb{E}}(t+\Delta t) = \varepsilon\_{kl}^{\mathbb{E}}(t) + \Delta \varepsilon\_{kl}^{\mathbb{E}} \, , \tag{51}
$$

where the superscript E denotes the expansional strain. According to the approach, the expansional strain can be split into a thermal ($\Delta \varepsilon\_{kl}^{\text{T}}$) and a chemical ($\Delta \varepsilon\_{kl}^{\text{C}}$) component

$$
\Delta \varepsilon\_{kl}^{\rm E} = \Delta \varepsilon\_{kl}^{\rm T} + \Delta \varepsilon\_{kl}^{\rm C}. \tag{52}
$$

with

$$
\Delta \varepsilon\_{kl}^{\mathrm{T}} = \alpha\_{kl} \ast \Delta T \tag{53}
$$

and

$$
\Delta \varepsilon\_{kl}^{\mathbb{C}} = \beta\_{kl} \ast \Delta q. \tag{54}
$$

where $\Delta T$ and $\Delta q$ are the temperature and cure increments. The coefficients $\alpha\_{kl}$ and $\beta\_{kl}$ are selected according to equations (44) and (46). For the material used, the coefficients $\alpha\_{kl}^{\text{l}}$ and $\beta\_{kl}^{\text{l}}$ of the liquid state are assumed to be zero, since no strains develop in the liquid state. In addition, the curing coefficients in the rubbery state $\beta\_{kl}^{\text{r}}$ and the glassy state $\beta\_{kl}^{\text{g}}$ are considered to be identical. Besides this, the stiffness tensor also depends on temperature and degree of cure, cf. equation (41). However, since no such material data is available for the material used, the stiffness tensor of the glassy state is used for all states. This simplification should be sufficient here, since the main focus of this thesis is on the optimization procedure.
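The state selection of equations (44) and (46) and the incremental update of equations (52)-(54) can be sketched as follows. This is an illustrative Python sketch only; the actual implementation is an Abaqus *UEXPAN* Fortran subroutine, and the function names are assumptions:

```python
def select_coefficient(c_l, c_r, c_g, q, T, q_gel, T_g):
    """State-dependent coefficient selection per eqs. (44)/(46):
    glassy below T_g(q), liquid before the gel point, rubbery after.
    The caller passes T_g already evaluated at the current q."""
    if T < T_g:
        return c_g          # glassy state
    if q < q_gel:
        return c_l          # liquid state
    return c_r              # rubbery state

def expansional_strain_increment(alpha, beta, dT, dq):
    """Incremental expansional strain per eqs. (52)-(54):
    thermal part alpha * dT plus chemical part beta * dq."""
    return alpha * dT + beta * dq
```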

Since an initial degree of cure $q\_0$ is required for the implemented curing model, cf. equations (37) and (38), it is set to 0.1 %. Furthermore, the gel point $q\_{\text{gel}}$ is set to 60 % for all calculations performed. Both values represent assumptions based on known material systems. Since no measurement of the development of the glass transition temperature $T\_{\text{g}}$ as a function of the degree of cure is available for the material system used, $T\_{\text{g}}$ is assumed to be constant during the curing process.

## **5.4. Calculation of the local material parameter based on fiber orientation and fiber volume content**

The coefficient of thermal expansion for the patch is calculated based on the method proposed by Schapery [122]. According to Schapery's approach, the coefficient of thermal expansion in fiber direction can be calculated by

$$\alpha\_{\parallel} = \frac{E\_{\text{matrix}} \ast \alpha\_{\text{matrix}} \ast (1 - \theta) + E\_{\text{fiber}} \ast \alpha\_{\text{fiber}} \ast \theta}{E\_{\text{matrix}} \ast (1 - \theta) + E\_{\text{fiber}} \ast \theta} \tag{55}$$

where $E\_{\text{fiber}}$ and $E\_{\text{matrix}}$ are the Young's moduli of fiber and matrix, while $\theta$ is the fiber volume content. Here, the carbon fiber is assumed to be isotropic.

This simplification should be feasible, since the focus of this work is on the development of an optimization procedure. Furthermore, the coefficient of thermal expansion perpendicular to the fiber direction is

$$\begin{array}{c} \alpha\_{\perp} = (1 + \eta\_{\text{matrix}}) \ast \alpha\_{\text{matrix}} \ast (1 - \theta) + (1 + \eta\_{\text{fiber}}) \ast \\ \alpha\_{\text{fiber}} \ast \theta - \alpha\_{\parallel} \ast (\eta\_{\text{matrix}} \ast (1 - \theta) + \eta\_{\text{fiber}} \ast \theta) \end{array} \tag{56}$$

where $\eta\_{\text{matrix}}$ and $\eta\_{\text{fiber}}$ are the Poisson's ratios of matrix and fiber.
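Equations (55) and (56) can be evaluated directly. A minimal sketch (function name and example inputs are assumptions):

```python
def schapery_cte(E_m, E_f, a_m, a_f, nu_m, nu_f, theta):
    """Schapery's homogenized coefficients of thermal expansion,
    eqs. (55) and (56): alpha parallel and perpendicular to the fiber
    direction, for fiber volume content theta (fiber assumed isotropic)."""
    a_par = (E_m * a_m * (1 - theta) + E_f * a_f * theta) / \
            (E_m * (1 - theta) + E_f * theta)
    a_perp = ((1 + nu_m) * a_m * (1 - theta) + (1 + nu_f) * a_f * theta
              - a_par * (nu_m * (1 - theta) + nu_f * theta))
    return a_par, a_perp
```

A simple sanity check: for theta = 0 both coefficients reduce to the matrix CTE, since the (1 + nu_m) term and the -a_par * nu_m term cancel.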

The fiber orientation distribution within the discontinuous fiber reinforced component is calculated by a preceding process simulation. The resulting anisotropic coefficients of thermal expansion are calculated with the method proposed by Rosen and Hashin [123]:

$$\begin{aligned} \alpha\_{lj} &= \alpha\_{\text{matrix}} \ast (1 - \theta) + \alpha\_{\text{fiber}} \ast \theta + \frac{\alpha\_{\text{matrix}} - \alpha\_{\text{fiber}}}{K\_{\text{matrix}}^{-1} - K\_{\text{fiber}}^{-1}} \\ &\ast \left[ 3 \ast \mathcal{S}\_{kklj} - \left( \frac{1 - \theta}{K\_{\text{matrix}}} + \frac{\theta}{K\_{\text{fiber}}} \right) \right] \end{aligned} \tag{57}$$

where $\mathcal{S}$ is the compliance matrix, and $K\_{\text{fiber}}$ and $K\_{\text{matrix}}$ are the bulk moduli of fiber and matrix.

The coefficients for chemical shrinkage, also dependent on fiber orientation and fiber volume content, are calculated with the same approach as the coefficient of thermal expansion. The material parameters used for the thermal and chemical coefficients are given below in Table 5-2.

### **5.5. Numerical verification**

Since the curing model is the basis for the calculation of inner stresses and warpage, the results obtained with the curing model are investigated first. Experimental results are taken from Schwab [121]. For the characterisation of the matrix system used, DSC measurements have been performed by Schwab to measure the heat flux created during curing. The total enthalpy and the accumulated enthalpy of the exothermal crosslinking process can be calculated by numerical integration. The temperature profiles used for the experiments and the simulations are given in Figure 5-1. Thereby, two different heating rates are used to characterise the curing kinetics of the resin. Based on this, the degree of cure over time can be described by equation (31), and the fitting parameters of the Kamal-Malkin model can be calculated (equations (37) and (38)), which are given in Table 5-1.
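The numerical integration of the measured heat flux to obtain the degree of cure can be sketched as follows. This is an illustrative sketch, assuming a baseline-corrected heat flux signal; the function name is an assumption:

```python
def degree_of_cure(time, heat_flux):
    """Degree of cure q(t) = H(t) / H_total from a DSC heat flux signal,
    using trapezoidal integration of the accumulated enthalpy H(t) and
    normalizing by the total enthalpy H_total."""
    H = [0.0]
    for i in range(1, len(time)):
        H.append(H[-1] + 0.5 * (heat_flux[i] + heat_flux[i - 1])
                 * (time[i] - time[i - 1]))
    H_total = H[-1]
    return [h / H_total for h in H]
```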

Table 5-1: Fitting parameters of the Kamal-Malkin curing model, based on DSC measurements performed by Schwab [121]


Figure 5-1: Temperature curve of the heat rates used for DSC experiments and simulations

The comparison of experimental and simulation results is shown in Figure 5-2. Here, cure rate and degree of cure are compared in both the time and temperature domains. Comparing the results in the time domain, a good agreement of experiment and simulation can be seen for a heat rate of 20 K/min, while for a lower heat rate of 15 K/min the cure rate is slightly overestimated by the simulation, cf. Figure 5-2 (a). When looking at the degree of cure, cf. Figure 5-2 (b), the first increase is slightly too early for the higher heat rate, and a little delayed for the lower heat rate, showing an acceptable overall compromise. Comparing the results in the temperature domain, a small shift in the cure rates to higher temperatures can be observed for both heat rates, cf. Figure 5-2 (c). The degree of cure fits well for the higher cure rate, whereas there is a small deviation for the lower cure rate, where the reaction starts at a higher temperature in the simulation than in the experiment, cf. Figure 5-2 (d). In general, only small deviations between experiment and simulation occur, which shows that the Kamal-Malkin curing model has a good prediction quality for the curing behavior of the material system.

Figure 5-2: Comparison of the results from the DSC experiments (dashed lines) and simulations (solid lines) for two different heat rates (15 K/min, 20 K/min): development of the cure rate (a) and degree of cure (b) over time; development of the cure rate (c) and degree of cure (d) over process temperature

Following the curing model, the strain calculation approach is discussed. To this end, two different material model setups are used. The first model considers only the different coefficients of thermal expansion for fiber and matrix, while the second model uses the complete Svanberg approach, including chemical shrinkage, cf. Section 5.2. The comparison of the two models shall demonstrate the influence of the chemical curing on the strain and warpage development. The simulation model and the temperature profile used for the evaluation are shown in Figure 5-3, while the material parameters used for the calculations are given in Table 5-2.

For simplicity, a unidirectional fiber direction with a symmetric layup is used, with the fiber direction equivalent to the global x-direction, cf. Figure 5-3. The temperature profile consists of a period of constant temperature, which simulates the time the material spends in the tool, and a cool down phase, in which the temperature is reduced to room temperature. The temperature is thereby applied uniformly on the whole part. To exclude an influence of geometrical effects on the results, a flat plate structure is used. To enable a free deformation in membrane direction, the nodes B1, B2, and B3 are constrained in x, y, and z direction, respectively.


Table 5-2: Material parameters used for the calculation of the expansional strain

Comparing both simulation results, it can be seen that the developed strain is significantly underestimated when curing effects are not considered, cf. Figure 5-4 (b). The chemical shrinkage occurs after the gel point is reached ($q = q\_{\text{gel}}$), until the temperature falls below $T\_{\text{g}}$. Thereafter, no further curing takes place and the strain development depends only on the direction-dependent coefficient of thermal expansion. Consequently, from there on the strain development for both examples is alike.

Figure 5-3: Plate model with the constraints B1, B2, B3, and the reference evaluation point A (a), and temperature profile (b)

Comparing the strain components, the strain transverse to the fiber direction (cf. Figure 5-4 (b), E22) is strongly affected by the chemical shrinkage, since the matrix behavior is dominant. In contrast, the influence is less significant in the fiber direction (cf. Figure 5-4 (b), E11).

Figure 5-4: Resulting displacement (a) and resulting strain (b) at reference point A, cf. Figure 5-3 (a) with consideration of curing and without consideration of curing.

### **5.6. Conclusion**

The curing model presented by Kamal [73] was used to model the curing behavior of the UPPH resin system used within this thesis. A comparison of the simulation results with experimental results showed good accordance for different heat rates.

Besides the curing calculation, a strain calculation was performed based on the method proposed by Svanberg [82]. With this approach, the influence of incorporating the chemical strain during the curing process could be demonstrated. For this purpose, the strain development with consideration of curing was compared with a simulation where only different coefficients of thermal expansion were considered.

With dependence on fiber orientation and fiber volume content, the effective coefficients of thermal expansion for the UD reinforcement patch were calculated with the method of Schapery [122]. The anisotropic coefficients for the discontinuous fiber reinforced component were calculated with the approach proposed by Rosen and Hashin [123]. Furthermore, the assumption was made, that the effective coefficients of chemical shrinkage can be calculated with the same approach as the coefficients of thermal expansion.

# **6 Multi-objective optimization along the CAE chain**

## **6.1. Manufacturing constraints**

The incorporation of manufacturing constraints within the optimization provides the essential benefit that the optimized part can be manufactured as predicted, or at least close to the predicted optimum. Therefore, the manufacturing constraints which apply to the SMC process and should be integrated within the optimization loop are discussed in this section. First, the basic constraints of the compression molding process with local reinforcement patches (co-molding) and their influence on the optimization are presented. Thereafter, variations resulting from the production of the semi-finished product and the co-molding process are presented, which could influence the optimization result.

#### **Semi-finished product**

Since the cutting of the patches is done automatically, usually no scattering in the initial length and width of the patch is expected [128]. However, the positioning of the prepreg material on the cutting table is done manually. As a result, the initial fiber orientation may deviate from the patch length direction, cf. Figure 6-1 (a). The variation of the orientation is typically within a range of ±3° from the optimum orientation [128]. Additionally, the fiber orientation can be influenced by the prepreg production process, such that a misalignment runs through the whole patch, cf. Figure 6-1 (b). This defect can act as a predetermined breaking point of the patch. The variations caused by fiber misorientation are considered by varying the orientation of the total patch within the robustness evaluation, cf. Section 8.3.

Furthermore, to minimize waste and simplify the manufacturing process, the reinforcement patch should have an initially rectangular geometry [128]. Thereby, a minimum and a maximum patch size, depending on the initial component's dimensions, should be considered. The minimum and maximum constraint is directly implemented in the definition of the optimization problem, cf. equation (11), and therefore implemented in the optimization algorithm.

Figure 6-1: Inaccuracies during the prepreg production: misalignment of patch and fiber direction during cutting (a), and shift in the fiber orientation of the semi-finished product (b)

#### **Draping process**

An obvious restriction is that the reinforcement patch has to be completely within the area of the initial component. A further restriction, resulting from the process, is that a patch reinforcement is only possible on either one or the other side of the initial component. These restrictions are incorporated in the draping simulation, where the patches are cut at the border of the initial component, cf. Section 4.2. Furthermore, the final geometry of the reinforcement patch is predicted by the kinematic draping simulation. A further process constraint, incorporated within the draping simulation is the prevention of undercuts of the patch, which would be impossible to manufacture. In addition, the consideration of user-defined no-go-areas for the reinforcement patch is possible within the draping simulation. No-go-areas in the context of patch reinforcements, to be considered here, are preserved spaces for inserts or areas with too complex geometry or a too high slope, making the draping process impossible [128].

#### **Co-molding process**

The third group of influencing variables are the process-induced local variations of the material constitution. Therefore, the manufacturing influence related to the DiCoFRP component is the fiber orientation, which is provided by a process simulation and used as input for the optimization. Thereby, the fiber orientation influences the mechanical performance and the warpage, which is predicted with respect to the fiber orientation, cf. Section 5.4.

During the co-molding process, no influences regarding the thickness of the patch are expected [128]. The deformation and displacement of reinforcement patches have been studied by Bücheler and Henning [129]. Their results for the displacement and deformation of the patch during the molding process are shown in Figure 6-2. Since the material system used within this thesis is a UPPH resin, the deviations for the UPPH resin are used as input for the robustness evaluation, cf. Section 8.4. Once the co-molding process is done, the adhesion between the patch and the SMC material is unproblematic, since the same resin system is used for both materials.

Figure 6-2: Resulting total deformation (left) and displacement (right) for a patch reinforcement during the manufacturing process [129]

Summarizing the influences of the manufacturing inaccuracies, two main sources of deviations can be identified: orientation, in terms of the whole patch or the fibers, and size changes, such as patch length and width. Furthermore, the most important question to be answered by the robustness evaluation is how accurate the process has to be to ensure that a predicted optimum remains an optimum for the real part. Therefore, a robustness evaluation is performed after the optimization.

### **6.2. Workflow of the multi-objective patch optimization**

Most of the content in this chapter was already published in [103].

The general approach of the patch optimization chain is shown in Figure 6-3. Thereby, the optimization procedure can be divided into the three main parts: pre-processing, optimization loop and post-processing. The pre-processing phase starts with a design phase for the initial component. After the initial design is specified, a process simulation is performed to predict the fiber orientations for the SMC component. Based on the fiber orientations, the material parameters are calculated within a mapping step. After the pre-processing part is done, the optimization starts. Since the optimization loop is the main scope of this work, it will be discussed in the following sections. The last part of the optimization chain is the post-processing step. Here the Pareto front is evaluated (Section 6.3) and heat maps are created, cf. Section 6.4.

Figure 6-3: CAE chain used for the patch optimization

The more detailed workflow of the optimization loop is shown in Figure 6-4 (a). Firstly, the initial population of design variable vectors is created with a uniform random distribution for each parameter within the given design space. Secondly, the initial fitness calculation is performed (Figure 6-4 (c)), where the kinematic draping simulation (cf. Chapter 1) is integrated into the fitness calculation step (cf. Section 3.4) of the multi-objective genetic algorithm. The draping simulation is used to create the necessary input for the structural simulation in terms of position and fiber orientation of the formed CoFRP patch. Furthermore, the draping simulation is used to return the fitness value $F\_2$, which represents the amount of used patch length. Since the random selection of design variables may create patch lengths that overlap the boundaries of the geometry, two different scenarios have to be distinguished:


Subsequent to the draping simulation, a structural simulation computes the fitness value $F\_1$, which represents the compliance of the component, quantified by the global strain energy. For linear-elastic material behavior, the strain energy can be defined by

$$\mathcal{W}\_{\varepsilon}^{\text{global}} = \int\_{\Omega} \sigma^{\text{T}} \ast \varepsilon \, dV \, = \frac{1}{2} \sum\_{l=1}^{n} \mathbf{P}\_{l}^{\text{T}} \ast \mathbf{d}\_{l} \tag{58}$$

where $\mathbf{P}\_l$ and $\mathbf{d}\_l$ are the nodal forces and nodal displacements of a finite element model with $n$ element nodes.
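The discrete form of equation (58) is a simple sum over the nodes. A minimal sketch (the function name and the three-dof-per-node layout are assumptions):

```python
def global_strain_energy(forces, displacements):
    """Global strain energy per eq. (58): 0.5 * sum over all element
    nodes of the dot product of nodal force vector P_l and nodal
    displacement vector d_l (here each node carries 3 dof)."""
    return 0.5 * sum(
        sum(p * d for p, d in zip(P, D))
        for P, D in zip(forces, displacements))
```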

Figure 6-4: Integration of the fitness calculation in the genetic optimization algorithm of the Dakota tool box (a), used to solve the patch optimization problem (b) by applying the two-step fitness calculation (c).

The fitness calculation step is slightly adapted when the warpage simulation is included in the optimization run, since the fitness value $F\_2$ then results from the warpage simulation. In this case, the fitness calculation is performed as presented in Figure 6-5, and the draping simulation is only used to calculate the final patch geometry and the resulting fiber orientation as input for the structural and warpage simulation. The resulting patch length is not investigated as an additional objective function, in order to limit the number of fitness values to be evaluated.

Figure 6-5: Fitness calculation when incorporating a warpage simulation in the optimization

For the optimization algorithm, the open access software toolkit Dakota is used, an implementation of the Sandia National Laboratories [130]. The Dakota toolkit provides diverse operators for genetic algorithms (as the ones described in Section 3.3) as well as an interface between analysis code and optimization methods. Within the optimization loop, two termination criteria can be used: a maximum number of iterations and a convergence metric.

### **6.3. Convergence criteria**

Since multi-objective optimizations have the goal to predict a wide range of the Pareto front, more than one criterion is needed to properly characterize the performance. The criteria presented first in this section are developed to evaluate the final results of the optimization. In addition, a convergence metric is necessary during the optimization runs to measure the improvement between two generations. For this, Dakota offers a metric consisting of three different criteria, which cannot be changed manually [130]. Finally, the criteria developed are compared with the Dakota criteria.

The three criteria, introduced within this thesis to evaluate the convergence rate, the quantity of solutions, and the diversity, are:

1. *Dominated area for the evaluation of the convergence rate.* The dominated area is defined as the area enclosed by the current front and the extreme points $\mathbf{k}\_1$ and $\mathbf{k}\_2$ and therefore represents the progress of the optimization. The extreme points are defined by

$$\mathbf{k}\_{l} = [F\_{1}(\mathbf{u}\_{l}), F\_{2}(\mathbf{u}\_{l})]^{\text{T}}, \; l \in \{1, 2\} \tag{59}$$

$$\mathbf{k}\_1 = \{\mathbf{k} \,|\, F\_1(\mathbf{u}\_1) \ge F\_1(\mathbf{u}) \;\forall\, \mathbf{u} \in \mathcal{U}\} \tag{60}$$

$$\mathbf{k}\_2 = \{\mathbf{k} \,|\, F\_2(\mathbf{u}\_2) \ge F\_2(\mathbf{u}) \;\forall\, \mathbf{u} \in \mathcal{U}\} \tag{61}$$

The dominated areas of two successive fronts are exemplarily shown in Figure 6-6.


Figure 6-6: Dominated area calculation for generation $i$ (black solid line) and generation $i+1$ (grey dotted line), and the maximum points $\mathbf{k}\_1$ and $\mathbf{k}\_2$ (red dots).

The following criteria are implemented in the Dakota workflow, and will therefore be used during the optimization run:

1. *Spread of the solutions within the design space*, which is calculated by

$$
\Delta F = |\mathbf{k}\_1 - \mathbf{k}\_2| \tag{62}
$$

where $\mathbf{k}\_1$ and $\mathbf{k}\_2$ are calculated according to equations (60) and (61).

2. *The density along the current Pareto front*, which is used to measure the diversity within a current solution.

3. *The relative "goodness" of the current front*. To measure the relative goodness, the number of individuals on front $(i-1)$ dominated by front $i$ is counted.

All three criteria are evaluated for each generation and implemented as the percentage change between two generations. From these three criteria, the highest percentage change is taken to evaluate each generation. The optimization is terminated when this percentage change remains below a user-defined value for a defined number of subsequent generations. [130]
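The extreme-point and spread criteria of equations (59)-(62) can be sketched as follows for a two-objective front. The function names are assumptions; ties at the extremes are resolved by the order of the input:

```python
def extreme_points(front):
    """Extreme points k1 and k2 of a two-objective front per
    eqs. (59)-(61): k1 maximizes F1, k2 maximizes F2. Each front
    entry is a tuple (F1, F2)."""
    k1 = max(front, key=lambda f: f[0])
    k2 = max(front, key=lambda f: f[1])
    return k1, k2

def spread(front):
    """Spread criterion per eq. (62): the Euclidean distance |k1 - k2|
    between the two extreme points of the current front."""
    k1, k2 = extreme_points(front)
    return ((k1[0] - k2[0]) ** 2 + (k1[1] - k2[1]) ** 2) ** 0.5
```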

Comparing the proposed convergence criteria with the criteria implemented in Dakota, a number of advantages can be seen. When evaluating the convergence rate, the dominated area is used instead of the spread of the solutions, since it is a more precise method to measure the optimization's progress, taking all points on the front into account instead of only the extreme points. The diversity measurement is done with the patch length distribution, since it subdivides the solutions into groups connected to the input parameter. The overall progress of the optimization can be described better when taking the dominated area and the total number of solutions into account, instead of the Dakota-defined "goodness" of a generation.

### **6.4. Heat-maps**

Since the multi-objective optimization does not result in one best solution but in a set of solutions, a tool needs to be introduced to help a designer choose the right position for the patch reinforcement. The final Pareto front can be used to select one specific solution, but thereby a large amount of the information gained during the optimization is no longer used. Therefore, the heat map is proposed here as an additional tool to visualize the optimization results.

The heat map focuses on the evaluation of the global strain objective in conjunction with the patch usage or the warpage objective. Therefore, the patch position (including patch length) is visualized by means of a heat-map, which weights the resulting fitness values. The results along the Pareto front are summarized in a weighted form, thereby the weighting factor is calculated by

$$\omega\_l = \frac{1}{\dfrac{F\_1(u\_l)}{F\_{1,\max}(u\_{\max})} \ast \dfrac{F\_2(u\_l)}{F\_{2,\max}(u\_{\max})}}, \qquad l = 1 \ldots n \tag{63}$$

with $n$ as the number of individuals on the Pareto front, $F\_{1,\max}$ as the maximum value observed for fitness 1, and $F\_{2,\max}$ as the maximum value observed for fitness 2. The weighting is normalized and plotted on a uniform mesh, which is only used for visualization purposes, cf. Figure 6-7. Thereby, areas with a high patch concentration, cf. red regions in Figure 6-7, should be preferred over those with a lower patch concentration, cf. blue regions in Figure 6-7.
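The weighting of equation (63) with its normalization can be sketched as follows (a minimal sketch; the function name is an assumption, and smaller fitness values, i.e. better solutions, receive larger weights):

```python
def heatmap_weights(F1, F2):
    """Weighting factors omega_l per eq. (63): the inverse of the
    product of the fitness values normalized by their maxima, then
    normalized so that the weights sum to one."""
    F1_max, F2_max = max(F1), max(F2)
    w = [1.0 / ((f1 / F1_max) * (f2 / F2_max)) for f1, f2 in zip(F1, F2)]
    total = sum(w)
    return [wi / total for wi in w]
```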

The resulting heat map taken from the application example, cf. Section 7.2, is used here to demonstrate the function of the heat map. Three different approaches to create the heat map are compared. The first and most evident way is to visualize only the individuals from the final Pareto front, cf. Figure 6-7. With this approach, those areas are visualized where the patches have the largest impact on the reinforcement criteria.

Figure 6-7: Heat map with the individuals from the final Pareto front only, where the colour represents the patch concentration along the final front (red = high patch concentration, blue = low patch concentration), while the abscissa and ordinate axes represent the initial components x and y coordinates

The second way is to visualize all individuals incorporated in the optimization process, cf. Figure 6-8. This results in a less clear representation, since a significantly larger number of solutions, spread over the whole search space, is incorporated in the visualization process. Nevertheless, the advantage of this method is the visualization of the areas searched by the optimization.

The last visualization method is shown in Figure 6-9, where a user-defined number of individuals from the Pareto front is visualized. This approach is useful, if the Pareto front can be constrained to a specific range, to be visualized. With this approach, the visualization results in a clearer picture, since only a smaller number of individuals is taken into account.

Besides the performance evaluation, the heat map is a useful tool for the designer to find the areas in which the reinforcement is most effective. While the visualization proposed in Figure 6-8 is more useful for the performance evaluation, the visualization of a selected number of individuals from a constrained area on the Pareto front is a helpful tool for the designer.

Figure 6-8: Heat map with all results obtained during the optimization (colour and axes as for Figure 6-7)

Figure 6-9: (b) Heat map with a selected number of individuals (colour and axes as for Figure 6-7); here the 15 stiffest individuals are visualized ((a) red dots in the Pareto front)

# **7 Application examples and numerical results**

## **7.1. Plate structure**

In this section, three plate examples are used to demonstrate the influence of the crossover type (plate-1) and of different patch numbers (plate-2) on the performance of the algorithm, as well as the behavior of the algorithm with a large number of generations (plate-3).

#### **Application example plate-1**

The first application example is used to demonstrate the ability of the method to solve the patch optimization problem. Based on equation (9), the optimization problem is formulated as

$$\text{Minimize } F(u) = \text{Minimize } [\text{global strain energy}, \text{patch usage}]^{\text{T}}, \tag{64}$$

with the definition of the global strain energy given in equation (58). Furthermore, the influence of the crossover type is studied with this example. The setup of the test calculations is shown in Table 7-1.

The load case is a bending load, cf. Figure 7-1 (b), where the design area is the top side of the plate. For the crossover comparison, the basic setup A-Opt 1 is used as benchmark. To characterize the distribution within the search space, all solutions are arranged in groups with patch-length steps of 25 mm, from 0 mm to a maximum patch length of 425 mm for example A. The division into the seventeen groups will also be used for A-Opt 2. The distribution of the results within the search space is shown in Figure 7-1 (a). The figure shows that solutions with a smaller total patch length are more likely to appear. This is also demonstrated by the percentage share of all A-Opt 1 results in Figure 7-1 (d). In the given example, the shortest possible patch length is 25 mm, therefore patch-usage group 1 remains empty. Furthermore, four reference points are given in Figure 7-1 (a), which correspond to the patch configurations shown in Figure 7-1 (c). These points shall help to evaluate the optimization results.
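The 25 mm patch-length grouping described above can be sketched as follows (a minimal sketch; the function name and the handling of the bin edges are assumptions):

```python
def patch_length_groups(lengths, step=25.0, max_len=425.0):
    """Group solutions into patch-length bins of width `step`,
    giving 17 groups for the 0-425 mm range used in example A.
    Lengths at or above max_len fall into the last group."""
    n_groups = int(max_len // step)          # 17 groups for the defaults
    counts = [0] * n_groups
    for length in lengths:
        idx = min(int(length // step), n_groups - 1)
        counts[idx] += 1
    return counts
```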


Table 7-1: Setup of the evolutionary algorithm used for the test calculations with the plate

Comparing the Pareto fronts, both optimizations converge to a similar front, cf. Figure 7-2 (a), which supports the assumption that the proposed approach runs stably. The convergence rate, represented by the increase of the dominated area (Figure 7-2 (b)), is significantly higher for the multi-point binary crossover (A-Opt 2) than for the parameterized binary crossover (A-Opt 1). This is due to the fact that the multi-point binary crossover can change a design variable on several bits, while other design variables remain unchanged. Consequently, the multi-point binary crossover has a more pronounced global search than the parameterized binary crossover, which only applies one change per variable. Comparing the number of solutions found on the final Pareto front, the multi-point binary crossover also performs better. Both heat maps show the similar result that a reinforcement on the left side of the plate will create the best result in terms of the total strain energy, cf. Figure 7-2 (e) and (f). This can also be seen from the proposed reference points, cf. Figure 7-1 (c), where configuration 4 is the best solution in terms of strain energy minimization. Comparing the distribution of the solutions on the Pareto front of A-Opt 1 (cf. Figure 7-2 (d)) with the distribution of all solutions of A-Opt 1 (Figure 7-1 (d)), it can be seen that the qualitative distribution is very similar. As for A-Opt 1, the solutions along the Pareto front of A-Opt 2 also show a tendency to smaller total patch lengths.

Figure 7-1: Distribution of all calculated individuals in the search space of A-Opt 1 (parameterized binary crossover) in comparison with four reference solutions (a). The blue points represent the final Pareto front. Load case used for application example plate-1, where A represents the constraint and B the load (b). Patch configurations of the four reference solutions (c). The red bars illustrate the resulting percentage share of all individuals (blue and grey dots in (a)) for each patch-usage group with a step length of 25 mm (d). (from [103], modified)

Summarizing the crossover effects, the multi-point binary crossover offers a better performance in terms of convergence rate and number of found solutions. Since both final Pareto fronts are quite similar, the multi-point binary crossover is used to perform the optimization runs with the curved structure, cf. Section 7.2.

Figure 7-2: Comparison of the results of the two crossover processes A-Opt 1 (parameterized binary crossover) and A-Opt 2 (multi-point binary crossover): (a) Comparison of the final Pareto fronts, (b) the dominated area for the evaluation of the convergence behavior, (c) development of the number of elements on the current Pareto front for each generation, (d) percentage share of the solutions along the Pareto front (blue dots in (a)) for each length-group with a step-length of 25mm, (e) heat map for A-Opt 1, (f) heat map for A-Opt 2. [103]

#### **Application example plate-2**

As a second application example, a plate structure with a different load case is used to demonstrate the influence of the patch number on the final optimization result. Furthermore, the influence of the mutation probability is investigated. This study was published by Fengler et al. [131]. Based on equation (9), the optimization problem is formulated as

$$\text{Minimize } F(u) = \text{Minimize } [displacement,\ patch\ usage]^{T} . \tag{65}$$

With the first fitness value being the displacement, a local criterion is used instead of a global one. The amount of patch material used as reinforcement serves as the second criterion. The load case is presented in Figure 7-3 (b), where the constrained side is represented by A, while the load is applied as a line load at B. In addition, the line at B is used as the evaluation set for the displacement criterion.
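The Pareto comparisons used throughout this chapter rest on the usual dominance relation between fitness vectors. As a minimal, illustrative sketch (not the thesis code), a nondominated filter for minimization can be written as:

```python
def dominates(a, b):
    """True if fitness vector a Pareto-dominates b under minimization:
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the nondominated subset of a list of fitness tuples."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

The same relation applies unchanged to the three-objective case treated in Section 7.3, since `dominates` works for fitness tuples of any length.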

To demonstrate the influence of the number of patches on the optimization result, the patch number is varied over 1, 2 and 3, while all other parameters remain constant during the optimization run, cf. Table 7-2. The maximum number of generations for this example is 50.


Table 7-2: Setup of the evolutionary algorithm used for the patch number influence study

The mutation parameter is set to 0.2 for every calculation, cf. Section 3.3.2. Based on this and with equation (21), the actual probability of mutation can be calculated, depending on the number of patches and thus the number of design variables. The calculated actual probabilities for the three possible events (no mutation, mutation of 50 % of the design variables, and mutation of all design variables) are given in Table 7-3. Comparing the probabilities, it can be seen that for the one-patch optimization the majority of individuals remains un-mutated, while for the optimizations with larger patch numbers, the share of unchanged solutions is considerably lower. A mutation of all design variables would correspond to a random search and is therefore undesirable. Looking at the probability of a mutation in every design variable, it can be seen that it is negligibly small for the one-patch optimization and almost zero for A-Opt 4 and A-Opt 5.


Table 7-3: Calculated actual probability of mutation, based on the number of design variables and a mutation parameter of 0.2
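Assuming each design variable mutates independently with the same probability (a simplifying assumption; the thesis derives the actual probabilities via its equation (21), which is not reproduced here), the probabilities of the three events in Table 7-3 can be estimated with a binomial model:

```python
from math import comb

def mutation_event_probs(n_vars, p):
    """Binomial estimate of the probability that none, half (rounded
    down), or all of n_vars design variables mutate, given an
    independent per-variable mutation probability p.
    Illustrative model, not the thesis' equation (21)."""
    p_none = (1 - p) ** n_vars
    k = n_vars // 2
    p_half = comb(n_vars, k) * p ** k * (1 - p) ** (n_vars - k)
    p_all = p ** n_vars
    return p_none, p_half, p_all

# e.g. assuming 4 design variables per patch (illustrative) and p = 0.2:
for n_patches in (1, 2, 3):
    print(n_patches, mutation_event_probs(4 * n_patches, 0.2))
```

The model reproduces the qualitative trend of Table 7-3: with more design variables, the probability of an entirely un-mutated individual drops, while a mutation of all variables becomes vanishingly unlikely.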

The final Pareto sets for the different patch-number configurations are shown in Figure 7-3 (a). The resulting curves show similarities in the fitness ranges that could be covered by all patch configurations. These similar results are caused by multi-patch solutions with one long patch and one or two patches of the defined minimum patch length. The longest patch is located in a similar position as for the calculation with fewer patches. This can also be demonstrated with the heat maps, cf. Figure 7-4 (b) to (d), created for the section of the Pareto front covered by all patch configurations.

Figure 7-3: Load case and evaluation of the example structure plate-2 for different patch numbers: Comparison of the different Pareto fronts (a), load case used for all calculations (b), the dominated area for the evaluation of the convergence behavior (c), development of the number of elements on the current Pareto front for each generation (d)

The progress of the optimization run is demonstrated with the dominated area, Figure 7-3 (c). It can be seen that the dominated area for the solutions with one patch (A-Opt 3) is, as expected, significantly smaller than for the two- and three-patch runs. Furthermore, the difference between the two- and three-patch solutions is rather small, which is also visualized in Figure 7-3 (a). In addition, A-Opt 3 converges significantly faster to a final front, cf. Figure 7-3 (c), because the number of design variables is smaller. When looking at the number of solutions found, the quantity is largest for A-Opt 3. The explanation is that once the optimization has converged, the focus shifts to increasing the number of solutions along the front.

Comparing the section of patch usage covered by all three optimization runs, cf. Figure 7-4 (a), it can be seen that all optimization runs create a Pareto front with distinct steps. This is caused by the discrete modeling of the design parameter length. When using more than one patch, the gaps between the steps are closed, since the total length is a combination of the individual design parameters. The values in between two steps are thereby created by patches cut at the border of the plate, which is more likely to occur for larger patch numbers. The effect of a significantly larger number of solutions becomes clear when comparing the heat maps: A-Opt 3 results in a more uniform visualization than A-Opt 4 and A-Opt 5, cf. Figure 7-4. Nevertheless, all optimization runs yield a similar result.

Summarizing the findings from the patch-number study, the presented example showed that different patch numbers create similar results in the areas of the Pareto front accessible with one, two or three patches, cf. Figure 7-4 (a). Furthermore, it could be seen that the influence of the discrete modeling of design parameters is less pronounced for larger patch numbers, since the gaps between two steps in the Pareto front are closed by solutions cut at the border of the plate, as the Pareto front shows, cf. Figure 7-4 (a). Additionally, the influence of the number of design parameters on the actual mutation probability has been discussed. It has been shown that for larger patch numbers the probability of a mutation of all design parameters is very low, which ensures that the crossover remains the main search operator.

Figure 7-4: Considered section of the Pareto front (a), and the corresponding heat maps: A-Opt 3 (b), A-Opt 4 (c), A-Opt 5 (d)

#### **Application example plate-3**

For the last plate example, the same load case and the same optimization objective as for plate-2 are used, cf. Figure 7-3 (b) and equation (65). Compared to plate-2, the maximum number of generations has been raised from 50 to 200. However, the optimization converged after 136 generations. The further settings of the algorithm are the same as for test case A-Opt 5, see Table 7-4.

Table 7-4: Setup of the evolutionary algorithm used for the study of the influence of the maximum number of generations


The final Pareto front found during the optimization is compared to A-Opt 5 to evaluate its quality, Figure 7-5 (a). It can be seen that the optimization with a higher number of generations improves the Pareto front, especially in the area of higher patch usage. This is also visualized by a significantly larger dominated area, cf. Figure 7-5 (b). Since the largest part of the performance increase occurs within an early generation, it is evidently not caused by the increased number of generations. The explanation for the increase is the mutation, which is a random-based procedure and can therefore randomly vary between a local search, as for A-Opt 5, and a global search, as for A-Opt 6, where the global search is characterized by larger improvements of the dominated area between two successive generations.

As expected, the number of solutions found by the optimization with the higher number of generations, A-Opt 6, is significantly higher than for A-Opt 5. However, the number of solutions found up to the 50th generation is approximately similar. Comparing the heat maps, cf. Figure 7-4 (d) and Figure 7-5 (d), it can be seen that the difference is negligible.

Concluding the study on the influence of the maximum number of generations, the highest impact of a higher number of generations is on the total quantity of solutions found on the final Pareto front. The dominated area could therefore be a useful convergence criterion, since upon convergence of the optimization, the dominated area approximates a horizontal line. Comparing the qualitative results in terms of the heat map, the influence is negligible; it should therefore be sufficient to continue with a smaller maximum number of generations.
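The "horizontal line" criterion for the dominated-area curve could be automated as a simple stopping rule. A sketch with illustrative parameter names and thresholds (not taken from the thesis):

```python
def has_converged(dominated_areas, window=10, tol=1e-3):
    """Stop when the relative growth of the dominated area over the
    last `window` generations falls below `tol`, i.e. when the curve
    has become approximately horizontal. Thresholds are illustrative."""
    if len(dominated_areas) < window + 1:
        return False
    old = dominated_areas[-window - 1]
    new = dominated_areas[-1]
    return (new - old) / max(abs(old), 1e-12) < tol
```

Such a rule would allow terminating a run early instead of always exhausting the maximum number of generations.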

Figure 7-5: Comparison of the results for the variation of the maximum number of generations (A-Opt 5: n = 50, A-Opt 6: n = 200), (a) comparison of the final Pareto fronts, (b) the dominated area for the evaluation of the convergence behavior, (c) development of the elements on the current Pareto front for each generation, (d) heat map for the final generation of A-Opt 6.

### **7.2. Slightly curved structure**

The second example is a slightly curved structure, for which the kinematic draping simulation has a larger effect than in the previous example, where the kinematic algorithm only checked a possible overlapping of patches beyond the plate boundaries. As for the first example, the optimization problem can be formulated based on equation (9) as

$$\begin{array}{c} \text{Minimize } F(u) = \\ \text{Minimize } [global\ strain\ energy, patch\ usage]^T. \end{array} \tag{66}$$

Based on the findings from the plate structure, the crossover type is kept constant for this example (multi-point binary crossover), while the optimization parameters for mutation, replacement and niching are systematically varied to show their influence on the performance of the optimization. The settings for the calculation are summarized in Table 7-5. The following results have been published by Fengler et al. [103].


Table 7-5: Setup of the evolutionary algorithm for the second application example

The load case is a bending load with two constrained areas, cf. Figure 7-6 (c). As for the plate structure, the Pareto fronts obtained for the different setups (cf. Table 7-5) are compared with predefined reference points to evaluate the results, cf. Figure 7-6 (a). Thereby, the patch reinforcement of reference point 4 ends at the constraint area, while the reinforcement from point 3 is limited by the part geometry, Figure 7-6 (b).

Comparing the different mutation settings, the higher mutation rate (B-Opt 2) creates a slightly better result regarding stiffness maximization for large patch lengths, cf. Figure 7-7 (a). This is because the changes applied by the more flexible mutation operator work similarly to the more flexible crossover process and therefore increase the search power of the optimization. The dominated area does not converge faster compared to the low mutation rate (Figure 7-7 (b)), but starts at a higher value. Taking the number of found solutions as a second quality criterion, the number of solutions on the final front is higher for the higher mutation rate, cf. Figure 7-7 (c). Comparing the heat maps, it can be seen that both optimization runs end up with similar results, cf. Figure 7-10 (a) and (b). The better performance in finding the extreme point regarding stiffness maximization is not visible in the heat map (Figure 7-10 (b)), since only a small proportion of solutions lies in the extreme area and is therefore underrepresented within the heat map. This also shows that the heat map is an additional tool, which should not be considered the only basis for decisions, but always be used in combination with the Pareto front.

Comparing the results obtained by calculations B-Opt 1, B-Opt 3 (replacement type below-limit, where dominated individuals are also passed to the next generation), and B-Opt 4 (with niching), no significant change regarding the Pareto front and the convergence rate can be seen, cf. Figure 7-8 and Figure 7-9, each (a) and (b). The difference between the setups becomes more obvious when comparing the heat maps, cf. Figure 7-10. Here it can be seen that the distribution of the resulting patch positions is more balanced (reflected in a more pronounced symmetrical picture in the heat map) when more fronts are taken into account (B-Opt 3) or a niching pressure is applied (B-Opt 4). This effect results from a higher diversity of the found solutions along the Pareto front.
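The heat maps compared here can be understood as an accumulation of patch footprints over all Pareto-optimal solutions. A minimal sketch on a regular grid, where the data layout and names are assumptions for illustration rather than the thesis implementation:

```python
def patch_heat_map(n_rows, n_cols, solutions):
    """Normalized patch-concentration map: each Pareto solution adds 1
    to every grid cell covered by one of its patches; the result is the
    fraction of solutions covering each cell (high values correspond to
    the red areas in the thesis figures). Illustrative sketch."""
    heat = [[0.0] * n_cols for _ in range(n_rows)]
    for patches in solutions:              # one list of patches per solution
        for r0, r1, c0, c1 in patches:     # patch footprint as cell ranges
            for r in range(r0, r1):
                for c in range(c0, c1):
                    heat[r][c] += 1.0
    n = max(len(solutions), 1)
    return [[v / n for v in row] for row in heat]
```

Because every Pareto solution contributes equally, a low diversity of solutions directly produces the asymmetric concentration patterns discussed above.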

Figure 7-6: (a) Comparison of the Pareto fronts obtained for the curved structure, (b) predefined reference points to classify the results, and (c) load conditions. (from [103], modified)

For the given example, a reinforcement covering the position of the structural kink has a large effect on the component's total stiffness. This becomes evident when comparing reference points 2 and 3, cf. Figure 7-6 (b), as well as in the heat maps, represented by the red areas, cf. Figure 7-10 (c) and (d). The red areas in the heat maps for B-Opt 1 and B-Opt 2 are smaller compared to those of B-Opt 3 and B-Opt 4, resulting from a lower diversity and therefore a concentration of solutions with smaller patch lengths.

Summarizing the results obtained with the curved structure, all optimizations converge to a similar final Pareto front. This shows that the proposed approach is capable of solving the patch optimization problem. Furthermore, it could be seen that a higher mutation rate is likely to improve the ability to find extreme points regarding stiffness maximization. However, the diversity of the solutions then remains low, which results in a more asymmetrical heat map. Judging by the heat maps, the best results were achieved by incorporating a niching pressure and using the below-limit replacement method, demonstrated by a more symmetrical picture in the heat map. The effect on the convergence rate was found to be small for the proposed configurations, except for the higher mutation rate, which creates a slightly higher convergence rate, represented by the dominated area. Based on this, a higher mutation rate is suggested if finding extreme solutions (maximum stiffness) has a higher priority. Additionally, a niching pressure and a below-limit replacement are helpful to achieve a more balanced Pareto front and therefore more suitable heat maps.

Figure 7-7: Comparison of the results for the variation of the mutation rate (B-Opt 1: low mutation rate, B-Opt 2: high mutation rate), (a) comparison of the final Pareto fronts, (b) the dominated area for the evaluation of the convergence behavior, (c) development of the elements on the current Pareto front for each generation, (d) percentage share of the solutions along the Pareto front for each length-group. [103]

Figure 7-8: Comparison of the results for the variation of the replacement type (B-Opt 1: elitist replacement, B-Opt 3: below-limit replacement), (a) to (d) as in Figure 7-7. [103]

Figure 7-9: Comparison of the results for no-niching and niching (B-Opt 1: no-niching, B-Opt 4: niching), (a) to (d) as in Figure 7-7. [103]

Figure 7-10: Top view of the generated heat-maps for the results obtained along the Pareto front (red: high patch concentration, blue: low patch concentration): (a) basic setup (B-Opt 1), (b) higher mutation rate (B-Opt 2), (c) change of replacement type (B-Opt 3), (d) niching pressure (B-Opt 4). [103]

### **7.3. IRTG reference structure**

In this section, the reference structure developed within the DFG-GRK2078 is used to demonstrate the performance of the algorithm. First, a stiffness optimization is performed with both crossover operators to demonstrate the influence of the crossover for a rather complex structure (example 1). The second optimization run consists of a combined warpage and stiffness optimization along the proposed optimization chain (example 2), cf. Section 6.2, for which only one crossover operator is considered. In both examples, the fiber orientation from a mold filling simulation is used.

#### **Application example GRK2078-reference structure 1**

Based on equation (9) the optimization problem is formulated as

$$\begin{array}{c} \text{Minimize } F(u) = \\ \text{Minimize } [\text{strain } energy, patch\, usage]^T. \end{array} \tag{67}$$

The parameters used for the evolutionary algorithm are given in Table 7-6. The crossover operator is varied to validate the findings from the plate structure, cf. Section 7.1, on a 3D structure. Furthermore, a niching pressure is applied, since this is expected to result in a more uniform distribution. In both cases, the optimization is run for 30 generations.


Table 7-6: Setup of the evolutionary algorithm for the study with the 3D structure

The load case used for the structural simulation is a four-point bending load, cf. Figure 7-11 (a), where the structure is loaded with two line loads at B and constrained at the four support points A1 to A4. Additionally, it should be mentioned that the proposed structure is not symmetric: the slopes of the two sides C and D, cf. Figure 7-11 (b), are 140° and 120°, whereas the two sides E have an equal slope of 135°.

Figure 7-11: Reference structure used as a 3D validation structure: (a) load case with the four support points A1 to A4 and the line loads B, (b) top view of the structure with the sides with different slope C and D, and the sides with an equal slope E

Comparing the Pareto fronts, the multi-point crossover creates a slightly better result than the parameterized binary crossover, as also observed in the first application example, cf. Section 7.1. Furthermore, the asymmetric behavior of the example structure can be seen by looking at the reference points. When comparing reference points 3 and 4, cf. Figure 7-12, the difference in the performance of mirrored solutions becomes visible. This effect is amplified by the process-induced fiber orientations, which, due to the asymmetric geometry, are asymmetric as well.

Figure 7-12: Comparison of the Pareto fronts achieved with the two crossover types (C-Opt 1: parameterized binary, C-Opt 2: multi-point binary) with four reference points to classify the results. In reference points 3 and 4, the smaller patch is equal to the minimum patch length of 25 mm.

The performance of the two crossover types is compared via the development of the dominated area, cf. Figure 7-13 (a). It is noticeable that, unlike in the previous examples, a large increase of the dominated area occurs between generations 1 and 2. Additionally, for both crossover types, there are phases in which no or almost no progress is achieved, such as between generations 2 and 10 for C-Opt 1 or generations 6 and 14 for C-Opt 2. This can be explained by the probability-based characteristic of the evolutionary algorithm, which can result in phases with large or no progress, as in the presented example. Comparing the number of individuals on the front per generation, cf. Figure 7-13 (b), C-Opt 2 also finds a larger number of solutions. However, C-Opt 1 creates a significantly larger number of solutions in the group of larger patch lengths. Compared to the previous examples, cf. Section 7.2, the distribution within the search space is more balanced here. The influence of the fiber orientation and of the asymmetrical geometry is visible in the heat maps, cf. Figure 7-14. In both cases, the area with a more pronounced reinforcement is on the side with the smaller angle.

Figure 7-13: Comparison of the results for the variation of the crossover (C-Opt 1: parameterized binary, C-Opt 2: multi-point binary), (a) comparison of the dominated area for the evaluation of the convergence behavior, (b) development of the elements on the current Pareto front for each generation, (c) percentage share of the solutions along the Pareto front for each length-group

Summarizing the results for the optimization of the GRK2078 reference structure, both crossover types achieve a similar result in terms of the final Pareto front, which shows that the proposed algorithm is able to solve the patch optimization problem for a complex 3D structure independent of the evolutionary configuration. Again, the performance of the multi-point crossover is better than that of the parameterized binary crossover, as was also observed in the first example.

Figure 7-14: Heat maps of the two optimization runs, each with perspective and top view: (a) and (b) with parameterized binary crossover (C-Opt 1); (c) and (d) with multi-point binary crossover (C-Opt 2);

#### **Application example GRK2078-reference structure 2**

The last application example is used to demonstrate the performance of the proposed algorithm, by incorporating the warpage simulation in the optimization loop. For this purpose, the same geometry as for the previous example is used. Utilizing equation (9), the optimization problem can be formulated as

$$\begin{array}{c} \text{Minimize } F(u) = \\ \text{Minimize } [\text{strain energy (structural load case)},\ \text{strain energy (curing)},\ \text{distortion}]^{T} . \end{array}$$

Hereafter, the identifiers F1, F2, and F3 are used for the strain energy resulting from the structural load case, the strain energy resulting from curing, and the distortion objective, respectively. For the structural load case, the same conditions are applied as for reference structure 1, cf. Figure 7-11 (a). The optimization of the occurring warpage is performed with two objectives: minimization of process-induced displacements and of strain energy. The reason for considering both is that a component can be distortion-free but nevertheless contain a significant amount of residual stresses. For the distortion minimization objective, the mean value of the four evaluation points C1 to C4 is used, Figure 7-15 (a), while the strain energy is taken from the whole model. The temperature profile used for the curing and warpage simulation is presented in Figure 7-15 (b).

The setup of the evolutionary algorithm is the same as for C-Opt 2, and is shown in Table 7-7.


Table 7-7: Setup of the evolutionary algorithm used for the warpage optimization

Figure 7-15: Load case for the warpage simulation: (a) model with the constraint points (A1: x and y direction, A2: z direction) and the evaluation points for the distortion criteria (C1 to C4), (b) temperature profile used for the curing and warpage simulation

The three-dimensional Pareto front resulting from the combined structural and warpage optimization is shown in Figure 7-16 (a). In addition to the Pareto-optimal results, an approximation of the Pareto front is shown. Furthermore, the Pareto front is analyzed in each search space individually, cf. Figure 7-16 (b) to (d). To this end, all results from the 3D Pareto front are shown in the individual search spaces, and the subset which corresponds to the search-space Pareto front is highlighted.

The most distinct front is formed in the search space covering the two strain-energy objective functions, cf. Figure 7-16 (b). This is because these two objective functions are the most contradictory: longer patches are required for higher stiffness, but they also lead to higher curing stresses. Looking at the distribution in the F2/F3 search space, it can be seen that the majority of solutions is located in one area of the search space. This is caused by the close link between the two objective functions. Nevertheless, this does not hold for all solutions, since there are solutions for which the two objective functions differ considerably. These results correspond to solutions with either high distortion and therefore low residual stresses, or vice versa. It is therefore reasonable to consider both objective functions during the optimization, to ensure a solution with a minimum of distortion and residual stress.

Furthermore, the result of the optimization is visualized with a heat map, cf. Figure 7-17. Comparing this heat map with those from the first application example, cf. Figure 7-14, it can be seen that the highest patch concentration is on the same side, though the area with a high patch concentration is smaller when considering the warpage objective during the optimization. Besides this, the patch length is in general smaller than for the previous example. This is caused by the two manufacturing-process-based objectives, F2 and F3, since a smaller patch length results in lower distortions and residual stresses. In addition, the patch concentration on the side with the higher geometrical stiffness, cf. Figure 7-17 (b), lower part of the component, is higher. This leads to the deduction that the ratio of stiffness improvement to resulting warpage in this area of the component is better than the ratio of stiffness improvement to patch usage; thus the use of patch material at this point is more advantageous with regard to the objectives of the combined warpage and stiffness optimization.

Summarizing the findings of the warpage optimization, a three-dimensional Pareto front could be determined, considering the strain energy from a structural load case as well as the distortion and strain energy from the curing process as objective functions. Based on the given example, the combination of the three objectives is a useful approach for the optimization of local reinforcements, since manufacturing influences can be considered during the optimization process. In addition, the heat maps again proved to be a very good way to visualize the optimization results.

Figure 7-16: Results for the combined structural and warpage optimization: (a) 3D representation of the final Pareto front (results and interpolation); Analysis of the Pareto front in the three solution spaces (b) strain energy from curing and load case, (c) strain energy from load case and distortion, (d) distortion and strain energy from curing, (e) dominated area for the evaluation of the convergence behavior, (f) development of the elements on the current Pareto front for each generation

Figure 7-17: Heat maps of the optimization run with warpage consideration, (a) perspective and (b) top view

### **7.4. Conclusion on application results**

Three application examples have been studied to show the ability of the proposed algorithm to solve the patch optimization problem. Starting from a rather simple plate structure, the geometrical complexity was increased to a slightly curved and finally to a 3D reference structure.

The plate structure was used to perform basic feasibility studies and to compare the performance of the different crossover operators. The multi-point binary crossover offers a better performance in terms of convergence rate and number of found solutions, while the found Pareto fronts are similar for both crossover types. Furthermore, the influence of the patch number on the performance and quality of the found Pareto fronts has been analyzed. It could be seen that gaps in the Pareto front, caused by the discrete modeling of the design parameters, are closed when using a larger number of patches. The reason for this are patches cut at the border of the plate, which are more likely to appear for larger patch numbers. In addition, the different patch numbers create similar results in the areas accessible with one, two, and three patches. The last study with the plate structure analyzed the influence of the maximum number of generations on the optimization's performance. It could be demonstrated that once the Pareto front has converged to a final Pareto front, the highest impact of a higher number of generations is on the total quantity of solutions found. It should therefore be sufficient to continue with a smaller maximum number of generations, though it has to be ensured that the optimization converges.

The curved structure was used to analyze the influence of different settings of the evolutionary algorithm on the optimization's performance. All optimizations converge to a similar final Pareto front, which shows that the proposed approach is capable of solving the patch optimization problem. Additionally, it could be demonstrated that a higher mutation rate is likely to improve the algorithm's ability to find solutions in extreme areas, such as high stiffness or low patch usage. A more balanced distribution of the solutions along the final Pareto front can be ensured when a niching pressure is applied. Furthermore, the influence of the two different replacement types, elitist and below-limit, on the optimization result has been studied. The below-limit method results in a more balanced Pareto front than the elitist method. The effect on the convergence rate was found to be small for the proposed configurations, except for the higher mutation rate, which creates a slightly higher convergence rate.

The last application example studied here was the reference structure developed within the DFG-GRK2078. Again, the performance of the multi-point binary crossover was better than that of the parameterized binary crossover. After the structural optimization was demonstrated successfully with the algorithm, a combined warpage and stiffness optimization was performed. Contrary to the first example, three objective functions were used here: strain energy from a structural load case, distortion, and strain energy from curing. It could be demonstrated that the concurrent use of distortion and strain energy from curing is a good approach to consider effects resulting from warpage in the optimization.

# **8 Robustness study**

### **8.1. Robustness measurements**

According to Jin and Branke [94], a robust design should still perform satisfactorily when inaccuracies, e.g. from the manufacturing process, influence the design parameters. Therefore, perturbations and changes are applied to each design variable of the optimal solutions once the optimization procedure is done. This definition of a robust solution is also used in the context of this thesis. The robustness evaluation is thus used to rate the solutions on the Pareto front in terms of sensitivity to inaccuracies of the design parameters, while the determination of the robust front is not the focus here. Methods to determine the final robust front have been presented by Deb and Gupta [132], and Gaspar-Cunha et al. [93].

The variation of the design parameters is applied in the design space, while the evaluation of the robustness is done in the solution space, cf. Figure 3-2. The same variation of the design parameters can thereby result in different variations in the solution space, cf. Figure 8-1. Furthermore, small changes in the design space can result in larger changes in the solution space. The inaccuracies of the design parameters are taken from the manufacturing process, cf. Section 6.1, while the allowed deviation in the solution space comes from tolerance management and is a user-defined input parameter. Based on this input, two different measures are applied to rate the robustness of a solution: the degree of robustness and the robustness index.

Figure 8-1: Effect of the variation of the design parameters on the solutions for two exemplary designs, where equal variation of the design parameters may result in different variations in the solution space

### **8.1.1. Degree of robustness**

The concept of a degree of robustness for multi-objective optimization was first introduced by Barrico and Antunes [133]. They defined the degree of robustness of a solution as the largest value $k$ for which a predefined percentage of test solutions in each successively enlarged neighborhood of the solution still lies within the allowed deviation in the solution space.

Thereby, the initial test neighborhood is enlarged stepwise, and for the $k$-th neighborhood the ratio of robust solutions,

$$p\_k = \frac{A\_{k, \text{robust}}}{A\_{k, \text{sample}}},\tag{69}$$

has to be within the neighborhood. Here, ,sample is a randomly created set of ℎ solutions in the neighborhood, and ,robust the set of robust solutions within the same neighborhood defined by

$$A\_{k,\text{robust}} = \left\{\mathbf{u}\_i \in A\_{k,\text{sample}} \mid F\_l(\mathbf{u}\_i) \le F\_{l,0}(\Delta\mathbf{u}\_0) \;\; \forall\, l = 1,2,\dots,n \right\} \tag{70}$$

with the number of fitness functions $n$, the test sample points $\mathbf{u}\_i$, and the allowed deviation of the fitness functions $F\_{l,0}(\Delta\mathbf{u}\_0)$. In addition, the size of the test sample is calculated by

$$h = h\_0 + (k - 1) \cdot d \cdot h\_0 \, , \tag{71}$$

where $h\_0$ is the user-defined initial sample size, and $d$ is the growth rate of the sample size for an increasing degree of robustness $k$. The definition of the degree of robustness in the design and solution space, cf. Section 3.2, is visualized in Figure 8-2.

Figure 8-2: Definition of the degree of robustness in the design (left) and solution space (right), where $\Delta F\_{1,0}$ and $\Delta F\_{2,0}$ represent the maximum allowed deviations in the solution space ([133] modified)
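The degree-of-robustness evaluation of Eqs. (69)–(71) can be illustrated with a simple Monte-Carlo sketch. The following Python code is illustrative only (the thesis workflow itself is implemented in MATLAB, cf. Section 8.3); the function name, the uniform sampling of hyper-cubic neighborhoods, and the interpretation of the allowed deviation as a symmetric bound $|F - F_0| \le \Delta F_0$ are assumptions made for this example.

```python
import numpy as np

def degree_of_robustness(fitness, u, dF0, delta0, p_req=0.95,
                         h0=50, d=0.5, k_max=10, rng=None):
    """Monte-Carlo estimate of the degree of robustness k (sketch).

    fitness : callable returning the vector of fitness values
    dF0     : allowed deviation per fitness value (assumed symmetric bound)
    delta0  : size of the initial test neighborhood per design variable
    """
    rng = np.random.default_rng(rng)
    u = np.asarray(u, dtype=float)
    F0 = np.asarray(fitness(u), dtype=float)
    k = 0
    for j in range(1, k_max + 1):
        h = int(h0 + (j - 1) * d * h0)   # Eq. (71): growing sample size
        # randomly created sample A_{k,sample} in the j-th neighborhood
        du = rng.uniform(-1.0, 1.0, size=(h, u.size)) * j * delta0
        F = np.array([fitness(u + row) for row in du])
        # Eq. (70), interpreted as |F - F0| <= dF0 for every objective
        robust = np.all(np.abs(F - F0) <= dF0, axis=1)
        if robust.mean() < p_req:        # Eq. (69): ratio p_k too small
            break
        k = j                            # the j-th neighborhood is robust
    return k
```

A solution whose fitness barely changes inside large neighborhoods receives a high $k$, while a sensitive solution terminates at $k = 0$.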

#### **8.1.2. Robustness index**

The second robustness measure used here is the robustness index, which was introduced by Gunawan and Azarm [134] for single-objective optimization. The method was later extended to multi-objective optimization by Li et al. [135], and Gunawan and Azarm [136]. The method applied in this work is a variation proposed by Li et al. [137], who use the variation of the design parameters for the calculation of a sensitivity region.

The basis for the robustness index is the calculation of the sensitivity region of a test solution $\mathbf{u}\_0$. The sensitivity set $\Omega$ is given by

$$\Omega = \left\{\Delta\mathbf{u} \in \mathbb{R}^{n\_{\mathrm{dv}}} \mid \left[F(\mathbf{u}\_0 + \Delta\mathbf{u}) - F(\mathbf{u}\_0)\right]^2 \le \left[\Delta F\_0(\Delta\mathbf{u}\_0)\right]^2\right\},\tag{72}$$

where $F(\mathbf{u}\_0)$ is the fitness of the initial point, $F(\mathbf{u}\_0 + \Delta\mathbf{u})$ is the fitness of the current test point, $\Delta\mathbf{u}$ is the deviation of the design variables, and $\Delta F\_0(\Delta\mathbf{u}\_0)$ is the allowed deviation of the fitness function, which is called the acceptable objective variation range (AOVR).

To calculate the worst-case objective sensitivity region (WCOSR), a worst-case scenario regarding the sensitivity region is assumed, cf. Figure 8-3. Here, a worst-case scenario means that the most sensitive direction is always considered for the calculation of the robustness index. Since the WCOSR encloses the actual sensitivity region, the method always yields a conservative estimate, i.e. the robustness of a solution is, if anything, underestimated.

The tolerance region is a hyper-rectangle in the design space, formed by all acceptable values of $\Delta\mathbf{u}$ around the current point $\mathbf{u}\_0$, which is defined by $\Delta\mathbf{u}^{\text{lower}} \le \Delta\mathbf{u} \le \Delta\mathbf{u}^{\text{upper}}$. In Figure 8-3, the tolerance region is visualized exemplarily for a design space with two design parameters.

Figure 8-3: Definition of a tolerance region in the design space (left) and the corresponding objective sensitivity region (OSR) with the resulting worst-case objective sensitivity region (WCOSR) and the acceptable objective variation range (AOVR) (right) ([137] modified)

The most sensitive direction, used for the WCOSR, can be found by solving the following optimization problem

$$\max\_{\Delta\mathbf{u}} R\_{\mathrm{f}}(\Delta\mathbf{u}) = \left[\sum\_{l=1}^{n} \left|\Delta F\_{l}(\Delta\mathbf{u})\right|^{2}\right]^{1/2} \tag{73}$$

with the number of fitness functions $n$, the radius of the sensitivity region $R\_{\mathrm{f}}$, and

$$
\Delta F\_l(\Delta\mathbf{u}) = \frac{F\_l(\mathbf{u}\_0 + \Delta\mathbf{u}) - F\_l(\mathbf{u}\_0)}{\Delta F\_{l,0}} \tag{74}
$$

$$
\Delta\mathbf{u}^{\text{lower}} \le \Delta\mathbf{u} \le \Delta\mathbf{u}^{\text{upper}}.\tag{75}
$$

Based on the radius of the sensitivity region, the robustness index can be defined as [137]

$$
\eta = \frac{R\_{\text{f}}}{R\_{\text{I}}},
\tag{76}
$$

where $R\_{\mathrm{I}}$ is the external radius of the required size of the AOVR. A solution is thereby defined as robust, by means of the robustness index, if its OSR is completely enclosed by the AOVR, cf. Figure 8-3.
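A sampling-based approximation of Eqs. (73)–(76) can be sketched as follows. This is a minimal illustration, not the implementation used in the thesis; in particular, the random search over the tolerance box only approximates the maximization in Eq. (73) from below, and the default $R_\mathrm{I} = 1$ assumes that the deviations are already normalized by Eq. (74).

```python
import numpy as np

def robustness_index(fitness, u0, dF0, du_lower, du_upper,
                     n_samples=2000, R_I=1.0, rng=None):
    """Approximate the robustness index eta = R_f / R_I (Eq. (76), sketch).

    R_f is estimated as the largest normalized fitness deviation
    (Eq. (73)) over random perturbations inside the tolerance box
    (Eq. (75)). eta < 1 indicates a robust solution.
    """
    rng = np.random.default_rng(rng)
    u0 = np.asarray(u0, dtype=float)
    F0 = np.asarray(fitness(u0), dtype=float)
    lo = np.asarray(du_lower, dtype=float)
    hi = np.asarray(du_upper, dtype=float)
    R_f = 0.0
    for _ in range(n_samples):
        du = rng.uniform(lo, hi)                         # Eq. (75)
        dF = (np.asarray(fitness(u0 + du)) - F0) / dF0   # Eq. (74)
        R_f = max(R_f, float(np.sqrt(np.sum(dF ** 2))))  # Eq. (73)
    return R_f / R_I                                     # Eq. (76)
```

Tightening the allowed fitness deviation `dF0` increases the normalized deviations and thus $\eta$, turning a formerly robust solution ($\eta < 1$) into a non-robust one.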

### **8.2. Kriging meta-model**

The basic definition of a meta-model is given in Section 2.8. Based on this, the Kriging meta-model approach consists of a known set of approximation functions $\boldsymbol{g}(\mathbf{u})$ and a realization of a stochastic process $Z(\mathbf{u})$. With this, the unknown function of interest can be defined by

$$f(\mathbf{u}) = \boldsymbol{g}^{\mathrm{T}}(\mathbf{u})\,\hat{\boldsymbol{\beta}} + Z(\mathbf{u}),\tag{77}$$

where $\hat{\boldsymbol{\beta}}$ represents the vector of the estimated trend function coefficients. Typical approximation functions used for the Kriging method are linear, polynomial or constant. A universal Kriging approach can consist of a set of approximation functions. Within this thesis, linear, quadratic, and reduced quadratic approximation functions are used. [138]

The covariance matrix of $Z(\mathbf{u})$ is defined by

$$\mathrm{Cov}\left[Z(\mathbf{u}^i), Z(\mathbf{u}^j)\right] = \sigma^2\, \boldsymbol{R}\left(\left[\mathbb{R}(\mathbf{u}^i, \mathbf{u}^j)\right]\right)\tag{78}$$

with the variance $\sigma^2$, the correlation function $\mathbb{R}$ for any two sample points $\mathbf{u}^i$ and $\mathbf{u}^j$, and the symmetric correlation matrix $\boldsymbol{R}$. Thereby, $\boldsymbol{R}$ is an $(n\_{\mathrm{s}} \times n\_{\mathrm{s}})$ symmetric matrix with ones on the diagonal. The correlation function $\mathbb{R}$ is a user-defined function. In the presented approach, a Gaussian correlation function of the form

$$\mathbb{R}(\mathbf{u}^{i}, \mathbf{u}^{j}) = \exp\left[-\sum\_{k=1}^{n\_{\mathrm{dv}}} \theta\_{k} \left(u\_{k}^{i} - u\_{k}^{j}\right)^{2}\right] \tag{79}$$

is used. Here, $\theta\_k$ are the unknown correlation parameters used to fit the model, $n\_{\mathrm{dv}}$ is the number of design variables, while $u\_k^i$ and $u\_k^j$ are the $k$-th components of the sample points $\mathbf{u}^i$ and $\mathbf{u}^j$.

The predicted estimate $\hat{f}(\hat{\mathbf{u}})$ for a point $\hat{\mathbf{u}}$ other than the sample points is given by

$$\hat{f}(\hat{\mathbf{u}}) = \boldsymbol{g}^{\mathrm{T}}(\hat{\mathbf{u}})\,\hat{\boldsymbol{\beta}} + \boldsymbol{r}^{\mathrm{T}}(\hat{\mathbf{u}})\,\boldsymbol{R}^{-1}\left(\boldsymbol{f} - \boldsymbol{G}\hat{\boldsymbol{\beta}}\right), \tag{80}$$

where $\boldsymbol{f}$ is a vector of size $n\_{\mathrm{s}}$, consisting of the responses at the sample points, and $\boldsymbol{G}$ is the matrix of regression functions. Furthermore, $\boldsymbol{r}$ is the correlation vector between the untried point $\hat{\mathbf{u}}$ and the sample points $\mathbf{u}^i$, and is given by

$$\boldsymbol{r}^{\mathrm{T}}(\hat{\mathbf{u}}) = \left[\mathbb{R}(\hat{\mathbf{u}}, \mathbf{u}^{1}), \mathbb{R}(\hat{\mathbf{u}}, \mathbf{u}^{2}), \dots, \mathbb{R}(\hat{\mathbf{u}}, \mathbf{u}^{n\_{\mathrm{s}}})\right].\tag{81}$$

In addition, $\hat{\boldsymbol{\beta}}$ can be estimated by

$$\hat{\boldsymbol{\beta}} = \left(\boldsymbol{G}^{\mathrm{T}}\boldsymbol{R}^{-1}\boldsymbol{G}\right)^{-1}\boldsymbol{G}^{\mathrm{T}}\boldsymbol{R}^{-1}\boldsymbol{f}. \tag{82}$$

The variance $\hat{\sigma}^2(\hat{\mathbf{u}})$ of the underlying model at an untried point $\hat{\mathbf{u}}$ can be estimated by

$$\hat{\sigma}^{2}(\hat{\mathbf{u}}) = \sigma^{2} \left[ 1 - \begin{bmatrix} \boldsymbol{g}^{\mathrm{T}}(\hat{\mathbf{u}}) & \boldsymbol{r}^{\mathrm{T}}(\hat{\mathbf{u}}) \end{bmatrix} \begin{bmatrix} \boldsymbol{0} & \boldsymbol{G}^{\mathrm{T}} \\ \boldsymbol{G} & \boldsymbol{R} \end{bmatrix}^{-1} \begin{bmatrix} \boldsymbol{g}(\hat{\mathbf{u}}) \\ \boldsymbol{r}(\hat{\mathbf{u}}) \end{bmatrix} \right], \tag{83}$$

where $\sigma^2$ represents the maximum variance, and is defined by

$$
\sigma^2 = \frac{1}{n\_{\mathrm{s}}}\left(\boldsymbol{f} - \boldsymbol{G}\hat{\boldsymbol{\beta}}\right)^{\mathrm{T}} \boldsymbol{R}^{-1} \left(\boldsymbol{f} - \boldsymbol{G}\hat{\boldsymbol{\beta}}\right). \tag{84}
$$

The Kriging model used here is implemented in the Surfpack software library [139], which is part of the Dakota software package [130].
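Equations (77)–(84) can be illustrated with a minimal ordinary-Kriging sketch, i.e. a constant trend $g(\mathbf{u}) = 1$, a Gaussian correlation function, and fixed correlation parameters $\theta_k$. This is not the Surfpack implementation used here; the function names and the small regularization term on the diagonal are assumptions made for this illustration.

```python
import numpy as np

def fit_kriging(U, f, theta):
    """Fit an ordinary-Kriging model with constant trend (sketch).

    U: (n_s, n_dv) sample points, f: (n_s,) responses,
    theta: (n_dv,) fixed correlation parameters of Eq. (79).
    """
    U, f, theta = map(np.asarray, (U, f, theta))
    n_s = U.shape[0]
    # Eq. (79): Gaussian correlation matrix R (ones on the diagonal)
    diff = U[:, None, :] - U[None, :, :]
    R = np.exp(-np.sum(theta * diff**2, axis=2))
    R += 1e-10 * np.eye(n_s)              # numerical regularization
    G = np.ones((n_s, 1))                 # constant trend g(u) = 1
    Rinv = np.linalg.inv(R)
    # Eq. (82): generalized least-squares trend coefficient
    beta = np.linalg.solve(G.T @ Rinv @ G, G.T @ Rinv @ f)
    res = f - G @ beta
    sigma2 = (res @ Rinv @ res) / n_s     # Eq. (84): process variance
    return dict(U=U, theta=theta, beta=beta,
                res_w=Rinv @ res, sigma2=sigma2)

def predict_kriging(model, u):
    """Eq. (80): Kriging predictor at an untried point u."""
    d = model["U"] - np.asarray(u, dtype=float)
    r = np.exp(-np.sum(model["theta"] * d**2, axis=1))  # Eq. (81)
    return float(model["beta"][0] + r @ model["res_w"])
```

Since the Gaussian correlation equals one at zero distance, the predictor interpolates the sample responses exactly (up to the regularization term), which is the characteristic property of Kriging models.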

### **8.3. Implemented workflow**

The method presented here was developed within the master's thesis of Bastian Schäfer [140].

The general workflow of the developed robustness evaluation method is shown in Figure 8-4. The results of the optimization process, cf. Chapter 1, can be used as input for a subsequent robustness study. Thereby, not only the final Pareto front is used; further solutions created by the evolutionary algorithm should also be part of the training data set. Hence, the first step is a data treatment to select suitable data points. The results are sorted according to their domination value, cf. Section 3.4. Furthermore, duplicates are removed from the data set. Duplicates can arise, since the patch is cut at the border of the initial component, so that different values of the design parameter length can result in identical fitness values. It is defined here that, of all individuals with the same fitness value for patch usage, only the individual with the shortest initial length is kept.

The next step is the selection of the training data used to create the meta-model, cf. Section 8.2. Here, only data points close to the Pareto front are taken into account, since the robustness is only evaluated for the points on the final Pareto front, and the meta-model creation therefore focuses on this part of the design space. For this purpose, the acceptable objective variation range (AOVR), which corresponds to the range in which the solutions are considered to be robust, is multiplied by a scaling factor to define the range used as training set. Based on the training data, the meta-model is created. To this end, the data is divided into ten equally sized sets, where nine sets are used to train the model, and one set is used for verification to predict the resulting model error. Every combination of the ten data sets is used to train a model, and the mean error value of all ten models is used as quality criterion for the model. This procedure is repeated for different trend functions of the Kriging model, cf. Section 8.2. Finally, a meta-model for each objective function is selected and trained with all points within the defined training data set.
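The ten-fold cross-validation used to rate a trend function can be sketched as follows. In this illustration, `build_model` is a hypothetical placeholder for fitting a Kriging model with a given trend function, and the mean squared error is assumed as the error measure.

```python
import numpy as np

def cross_validated_error(build_model, U, f, n_folds=10, rng=None):
    """Mean validation error of a meta-model via k-fold cross-validation.

    The training data is split into n_folds equal parts; each part is
    held out once while the remaining folds train the model, and the
    mean error over all folds rates the model quality.
    build_model(U_train, f_train) must return a callable predictor.
    """
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(U))
    folds = np.array_split(idx, n_folds)
    errors = []
    for i in range(n_folds):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        predictor = build_model(U[train], f[train])
        pred = np.array([predictor(u) for u in U[test]])
        errors.append(np.mean((pred - f[test]) ** 2))  # fold-wise MSE
    return float(np.mean(errors))

# The trend function (constant, linear, quadratic, ...) with the lowest
# cross-validated error would then be selected and retrained on the
# full training data set.
```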

The initial meta-model, created before the training data set was increased, is used first to calculate the robustness index, cf. Section 8.1.2. Afterwards, the model construction and the calculation of the robustness index are repeated for the meta-model of the increased data set. This step is necessary to check whether the number of training points for the meta-model was sufficient. Hence, the quality of the model is rated by comparing the change of the calculated robustness index, resulting from the increased training data set, with a predefined threshold. Since the calculation of the robustness index is much faster than that of the degree of robustness, the robustness index is used here. After the meta-model has been set up successfully, the degree of robustness can be calculated as well, cf. Section 8.1.1.

After the calculation of the degree of robustness and the robustness index, the results are visualized. Besides a diagram for each robustness measure, a heat map is created, similar to those presented in Section 6.4. Here, the degree of robustness is mapped onto the structure, whereby only one solution is considered at a time. Besides the original solution, the areas for the different values of $k$ are mapped onto the initial component.

The presented workflow of the robustness evaluation is implemented in MATLAB. For the meta-model, an implementation in Dakota [130] is used.

Figure 8-4: Workflow of the robustness evaluation method, based on the results created by the optimization [140]

### **8.4. Application example**

As application example for the robustness evaluation workflow, the results from the optimization run B-Opt 4, presented in Section 7.2, are taken.

The parameters used for the calculation of the robustness index and the degree of robustness are given in Table 8-1. The allowed deviations of the fitness values $\Delta F\_{1,0}$ and $\Delta F\_{2,0}$ are given as absolute values instead of a percentage change, since this definition is more applicable here. For example, for the patch-usage fitness criterion, a definition in percent would result in larger acceptable length changes for longer patches, which is not the case with an absolute deviation definition.


Table 8-1: Parameter setup for the robustness evaluation of the application example

The robustness index $\eta$, calculated for each individual on the final front, is shown in Figure 8-5 (a). In general, solutions with $\eta < 1$ are considered robust, since here the maximum deviation of the design parameters, defined in Table 8-1, results in a lower deviation of the fitness values than the applied constraint. The boundary between the robust and non-robust region is furthermore marked with the black solid line ($\eta = 1$). In addition, the degree of robustness $k$ was calculated for each individual on the Pareto front, cf. Figure 8-5 (b). Here, a higher value corresponds to a higher robustness of a solution.

A degree of robustness of $k = 3$ is comparable with a robustness index of $\eta = 1$, considering the presented example parameter set. The slight difference is caused by the different approaches to calculate the robustness measures. When calculating the degree of robustness, only a defined percentage of samples (in this example 95%) has to be within the allowed deviation, while the robustness index evaluates the worst case for a given deviation, which would be equal to a threshold of $p = 100\%$.

Comparing both robustness measures, it can be seen that the predicted robust regions show a similar trend. In both cases the solutions at the extreme ends of the Pareto front, i.e. high stiffness or low patch usage, have a lower maximum robustness (green dots in Figure 8-5). This effect could be caused by the meta-model: the quality of the meta-model used to calculate the robustness measures is lower in the extreme areas, since a lower number of training points is present there. In addition to the robustness of each solution, the Pareto front is plotted with black dots, which helps to classify the results. Furthermore, the scattering of the degree of robustness in the middle region of the Pareto front results from solutions for which small deviations in the design parameters already cause large deviations of the fitness values.

Figure 8-5: Results of the calculation of robustness index (a) and degree of robustness (b), both in comparison with the final Pareto front (black dots)

Based on the degree of robustness, a heat map is created. The visualization via a heat map is only meaningful for the degree of robustness, since here a distinction between the different robustness neighborhoods can be made, while for the robustness index only the area which was defined a priori to be robust would be visualized.

In Figure 8-6, the point with the best performance regarding stiffness, i.e. fitness 1 (b), and a point with a high robustness (c) are used for demonstration purposes. The area marked in the heat maps has a similar size for both cases. However, the number of neighborhood enlargements, and thus the degree of robustness, is significantly higher for point 2 than for point 1, cf. Figure 8-6 (b) and (c). Comparing the robust areas shows that changes of the patch position affect the fitness values more than changes in the orientation of the patch, as demonstrated by the larger robust area for orientation changes. Furthermore, the influence of the chosen draping method can be seen in the heat maps, since all changes of the orientation are performed with the center of the patch as center of rotation.

Figure 8-6: (a) Pareto front with the selected points and the created robustness heat maps, (b) for the stiffest point (point 1, = ), and (c) for a point with a high robustness (point 2, = )

## **8.5. Conclusion on robustness studies**

A robustness evaluation strategy was developed, which uses the results of the optimization as input data. For this purpose, a data selection procedure has been presented, which focuses on selecting the training data such that the meta-model is more precise around the Pareto front, since the robustness is evaluated for those points.

The robustness was evaluated using two different approaches. First, the degree of robustness determines the maximum possible changes of the design parameters for which the solutions are still robust. The second approach is the robustness index, where the worst-case deviation of the design parameters is evaluated to check whether the solution is still robust.

The approach is demonstrated with a heat map created for this purpose. The heat map is a useful tool for process and quality engineers, since it shows the areas in which the patch has to remain for the solution to stay optimal. This means the heat map can be used as an additional decision-making tool.

# **9 Conclusion and outlook**

### **9.1. Conclusion**

In this work, an optimization strategy for the positioning of local fiber reinforcement patches under consideration of manufacturing constraints has been developed. An evolutionary algorithm has been used as optimization algorithm. The principal concept, consisting of the steps fitness evaluation, crossover, mutation, and selection, has been discussed, and the influence of these steps on the optimization performance has been studied. To consider manufacturing effects in conjunction with the structural performance, a kinematic draping simulation, a warpage simulation, and a structural simulation are included in the fitness evaluation of the workflow.

The first step is to perform a kinematic draping simulation to predict the final, deformed geometry of the patch. A kinematic draping simulation has been chosen since a large number of function evaluations is necessary during the optimization process, and therefore a very efficient method is required. A comparison of kinematic and mechanical draping methods has been presented to show the advantages and disadvantages of a kinematic draping approach. Within this study, three patch configurations with increasing degree of geometrical complexity have been compared on the basis of a three-dimensionally curved reference structure. Taking the results of the comparison with the mechanical draping simulation into account, the kinematic draping simulation can be seen as a good approximation of the patch's forming behavior. Compared with the mechanical method, the kinematic draping simulation is much faster and requires no material parameters. Therefore, kinematic draping is best suited for the proposed optimization workflow. Additionally, the optimization shall work as a suggestion for the designer and not as a final verification. Therefore, a mechanical draping simulation must be performed afterwards to ensure manufacturability of the optimized patch setup without forming defects like wrinkling, fiber fracture or gapping.

For the warpage simulation, the curing behavior of the resin has to be predicted. For this purpose, an approach presented by Kamal [73] was used. A comparison of the simulation results with experimental results for the cure rate and the degree of cure showed good agreement for different heating rates. Furthermore, a process-induced strain calculation was performed based on the method proposed by Svanberg and Holmberg [82]. With their approach, the influence of the incorporation of chemical strain during the curing process could be demonstrated. Furthermore, the effective coefficients of thermal expansion for the UD reinforcement patch were calculated in dependence on fiber orientation and fiber volume content with the method of Schapery [122]. The anisotropic thermal coefficients for the discontinuous fiber reinforced component were calculated with the approach proposed by Rosen and Hashin [123]. Furthermore, the assumption was made that the effective coefficients of chemical shrinkage can be calculated with the same approach as the coefficients of thermal expansion.

Besides the effects resulting from warpage and draping, further manufacturing inaccuracies influence the performance of the final part. Two main sources of deviations, orientation and size changes of the patch, could be identified, and their consideration during the optimization process has been discussed. Thereby, orientation changes refer to misalignments of the whole patch or of the fibers, whereas size changes occur in terms of length and width changes. Furthermore, the most important question to be answered is how accurate the process has to be to ensure that a predicted optimum remains an optimum for the real part. Therefore, a robustness evaluation is performed after the optimization.

To rate the structural performance of a configuration, a structural simulation is performed, in which the results from the draping simulation are included. A linear elastic simulation of the component is performed, and the strain energy is used to rate the fitness of the current configuration. For the evaluation of the optimization's performance and results, a number of criteria have been developed. The presented visualization approach by means of a heat map is a helpful tool for the designer to decide in which part of the component the reinforcement has the highest impact.

The presented workflow and the evaluation criteria have been applied to a number of demonstration applications. Thereby, the degree of complexity has been increased from a simple plate structure to a complex 3D component. First, the plate structure was used to perform basic feasibility studies and to compare the performance of the different crossover operators. The multi-point binary crossover offers a better performance in terms of convergence rate and number of found solutions, while the found Pareto fronts are similar for both crossover types. Utilizing an optimization run with a large number of generations, the ability of the dominated area to work as a convergence criterion has been demonstrated. While increasing the complexity of the application example, the influence of different settings of the evolutionary algorithm on the optimization's performance has been studied. All settings converge to a similar Pareto front, which shows that the approach is capable of reliably solving the patch optimization problem. The use of a niching pressure ensures a more balanced result, while a higher mutation rate results in a better coverage of the extreme areas of the search space. The last application example studied here was the reference structure developed within the DFG-GRK2078. This example showed the ability of the proposed approach to also solve complex optimization problems. Furthermore, the reference structure has been used to perform an optimization with three objectives: strain energy from a structural load case, distortion, and strain energy from curing. It could thereby be shown that it is possible to find a three-dimensional Pareto front with the proposed approach. Furthermore, it was shown that the combination of distortion and strain energy from curing is a suitable approach to consider manufacturing influences in the optimization.

Based on the final Pareto front, a decision for one patch configuration has to be made by the designer. Since the presented optimization workflow results in a number of Pareto-optimal solutions, usually a trade-off between the objective functions has to be made when choosing one solution. Therefore, the robustness of a solution is introduced as an additional decision criterion. The developed robustness study uses the results of the optimization as input data to create a meta-model, which is then used during the robustness evaluation. A data selection procedure has been presented, which focuses on selecting the training data such that the meta-model is more precise around the Pareto front. The robustness was evaluated using two different approaches. First, the degree of robustness determines the maximum possible changes of the design parameters for which the solutions are still robust. The second approach is the robustness index, where the worst-case deviation of the design parameters is evaluated to check whether the solution is still robust. Finally, the proposed robustness workflow was demonstrated using the slightly curved structure. It could be shown that the robustness is a helpful additional criterion for the decision maker when selecting a solution on the final Pareto front. Additionally, the robustness measures are a useful tool for the consideration of manufacturing inaccuracies during the part design process.

### **9.2. Outlook**

The proposed optimization workflow showed its ability to optimize local reinforcement patches. Furthermore, the integration of manufacturing constraints as well as draping and warpage simulations within the optimization workflow improved the quality of the results. An integration of the mold filling process in the optimization loop could be useful to further improve the quality of the results. Thereby, the influence of the mold filling process on the final patch position and geometry could be considered during the optimization.

The robustness study showed that the use of meta-models instead of computationally expensive simulations is a promising approach. Hence, the optimization performance could be enhanced by integrating meta-models in the optimization workflow. Two different approaches seem promising: the a-priori setup of meta-models, or an alternating use of meta-models and simulations. For a-priori created meta-models, a number of simulations is performed according to a design of experiments (DoE) to create the required training data. For the second approach, meta-models and simulations are used alternately, where the data created by the simulations can be used to train the meta-models. Thereby, the meta-models could especially be used to replace the warpage simulation, since it is computationally more expensive than the structural simulation.

In addition, the robustness evaluation could be directly integrated into the optimization loop. Thereby, the robustness of a solution could serve either as a constraint or as an additional objective of the optimization.

# **10 References**


blades using the inverse finite element method," *Composite Structures*, vol. 184, pp. 894–903, 2018.


hyperelastic, discrete and semi-discrete approaches for textile composite reinforcement forming," *International Journal of Material Forming*, vol. 3, no. 2, pp. 1229–1240, 2010.


Mechanik, Fachgebiet Kontinuumsmechanik im Maschinenbau, Karlsruhe, 2017.


*the IEEE Congress on Evolutionary Computation*, Vancouver (British Columbia), Canada, 2006, pp. 1887–1892.


# **11 List of figures**









# **12 List of tables**


# **13 List of symbols**

#### **Optimization**




#### **Draping**



#### **Curing**


#### **Robustness study**


# **14 Vocabulary for evolutionary algorithm**


### **Karlsruher Schriftenreihe Fahrzeugsystemtechnik (ISSN 1869-6058)**

Herausgeber: FAST Institut für Fahrzeugsystemtechnik


The volumes are freely available as PDFs at www.ksp.kit.edu or can be ordered in print.










### Karlsruher Schriftenreihe Fahrzeugsystemtechnik

This work contributes to the optimization of local continuous-fiber-reinforced patches under consideration of manufacturing conditions. The multi-objective optimization approach developed in this work uses an evolutionary algorithm as optimization algorithm. Since this class of algorithms requires a large number of function evaluations, efficient methods for evaluating the fitness values are necessary. For this reason, a kinematic draping simulation is used to predict the patch geometry. Both structural and process-related objectives are used as objective functions for the multi-objective optimization. The component stiffness, determined by means of a linear-elastic structural simulation, serves as the structural optimization objective, while a warpage simulation is performed for the process-related objective. In addition to the warpage objective, further relevant manufacturing influences and constraints from the semi-finished product, draping, and co-molding processes are discussed and considered during the optimization.
