Karlsruher Beiträge zur Regelungs- und Steuerungstechnik
Karlsruher Institut für Technologie
Band 12

# **Guaranteed Verification of Dynamic Systems**

by Stefan Schwab

Karlsruher Institut für Technologie
Institut für Regelungs- und Steuerungssysteme

Zur Erlangung des akademischen Grades eines Doktor-Ingenieurs von der KIT-Fakultät für Elektrotechnik und Informationstechnik des Karlsruher Instituts für Technologie (KIT) genehmigte Dissertation

von Stefan Schwab geb. Maier, M.Sc.

Tag der mündlichen Prüfung: 22. Juli 2019
Hauptreferent: Prof. Dr.-Ing. Sören Hohmann
Korreferent: Prof. Dr. Vicenç Puig

### **Impressum**

Karlsruher Institut für Technologie (KIT)
KIT Scientific Publishing
Straße am Forum 2
D-76131 Karlsruhe

KIT Scientific Publishing is a registered trademark of Karlsruhe Institute of Technology. Reprint using the book cover is not allowed.

www.ksp.kit.edu

*This document – excluding parts marked otherwise, the cover, pictures and graphs – is licensed under a Creative Commons Attribution-Share Alike 4.0 International License (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/deed.en*

*The cover page is licensed under a Creative Commons Attribution-No Derivatives 4.0 International License (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/deed.en*

Print on Demand 2022 – Gedruckt auf FSC-zertifiziertem Papier

ISSN 2511-6312
ISBN 978-3-7315-0965-3
DOI 10.5445/KSP/1000097527

# Preface

This thesis was written during my time at the Institute of Control Systems (IRS) at Karlsruhe Institute of Technology (KIT) and the department of Control in Information Technology (CIT) at FZI Research Center for Information Technology.

First of all, I want to thank Prof. Dr.-Ing. Sören Hohmann for providing the environment necessary for successful scientific work like this. I really appreciate your never-ending support and guidance throughout the years that enabled this thesis.

Also I'd like to thank Prof. Dr. Vicenç Puig for reviewing this thesis. Besides that, I really enjoyed our fruitful discussions that led to the collaborative development of the zonotopic method that forms a part of this thesis.

Further credits go to the IRS and FZI staff. I'll never forget this time and the vivid - not always work related - discussions. Also I'd like to thank all students and graduates that supported this work in one way or another.

Very special thanks go to Dr.-Ing. Gunter Diehm and Prof. Dr.-Ing. Mathias Kluwe for their very careful review of preliminary stages of this thesis. Your comments were very constructive, precise and rarely contradicting. You helped me very much to get the right focus and to add the correct final touch.

Last but not least I'd like to thank my wonderful wife Elisa. Without your support and understanding it would not have been possible to finish this work. Thank you for your constant optimism and for allowing me to spend that much time on science.

The very last sentences need to be understandable for two very special German boys:

Lieber Jonathan, lieber Samuel, das ist Papas Buch. Es ist jetzt fertig.

Karlsruhe, in June 2019

There is no substitute for persistence! It cannot be supplanted by any other quality. With persistence will come success.

∼ Napoleon Hill

# Abstract

This thesis introduces a new specification and verification approach for dynamic systems. The introduced approach is able to provide type II error free results by definition, i.e. there are no hidden faults in the verification result. The approach is thus suitable to provide a reliable verification of safety critical systems.

A new notion of set based consistency for dynamic systems with a given specification is presented. To this end, Kaucher interval arithmetic is used to enclose the measurement data in a bounded error sense. The resulting method is able to verify the specified behavior of a dynamic system against its measurement data even in the presence of noise and sensor uncertainty. Consistency is defined using the Kaucher arithmetic united solution set, which leads to mathematically guaranteed results.

It is proven mathematically that the desired property holds for a wide class of systems, including time invariant, interval type and hybrid systems, which can be used to describe even nonlinearities. Several extensions are introduced, leading to a new iterative identification and segmentation algorithm for hybrid systems which is able to handle even unknown switching times. In case the calculations can be done fast enough, the developed approach can also be used for the diagnosis of dynamic systems.

The presented methods are successfully applied to several example systems, including theoretic settings and a variation of different tank settings.

The new theories, methods and algorithms developed in this thesis form the foundation for reliable safety analysis of highly automated safety critical systems.

# Zusammenfassung

Diese Arbeit beschreibt einen neuen Spezifikations- und Verifikationsansatz für dynamische Systeme. Der neue Ansatz ermöglicht dabei Ergebnisse, die per Definition frei von Fehlern 2. Art sind. Dies bedeutet, dass das Ergebnis der Verifikation keine versteckten Fehler enthalten kann. Somit können zuverlässige Ergebnisse für die Analyse von sicherheitskritischen Systemen generiert werden.

Dazu wird ein neues Verständnis von mengenbasierter Konsistenz dynamischer Systeme mit einer gegebenen Spezifikation eingeführt. Dieses basiert auf der Verwendung von Kaucher Intervall Arithmetik zur Einschließung von Messdaten. Konsistenz wird anhand der vereinigten Lösungsmenge der Kaucher Arithmetik definiert. Dies führt zu mathematisch garantierten Ergebnissen. Die resultierende Methode kann das spezifizierte Verhalten eines dynamischen Systems auch im Falle von Rauschen und Sensorungenauigkeiten anhand von Messdaten verifizieren.

Die mathematische Beweisbarkeit der Konsistenz wird für eine große Klasse von Systemen gezeigt. Diese beinhalten zeitinvariante, intervallartige und hybride Systeme, wobei letztere auch zur Beschreibung von Nichtlinearitäten verwendet werden können. Darüber hinaus werden zahlreiche Erweiterungen dargestellt. Diese führen bis hin zu einem neuartigen iterativen Identifikations- und Segmentierungsverfahren für hybride Systeme. Dieses ermöglicht die Verifikation hybrider Systeme auch ohne Wissen über Schaltzeitpunkte. Die entwickelten Verfahren können darüber hinaus zur Diagnose von dynamischen Systemen verwendet werden, falls eine ausreichend schnelle Berechnung der Ergebnisse möglich ist.

Die Verfahren werden erfolgreich auf eine beispielhafte Variation verschiedener Tanksysteme angewendet.

Die neuen Theorien, Methoden und Algorithmen dieser Arbeit bilden die Grundlage für eine zuverlässige Analyse von hochautomatisierten sicherheitskritischen Systemen.

# Contents




# List of Figures



# List of Tables


# Abbreviations and Symbols

## Abbreviations


## Symbols






# 1 Introduction

The fast technological development in computer engineering in recent years led to very powerful computing capacities that are now available at very low costs [Wil17]. As a result those chips are used in an increasing number of products to make them "smart" and to enhance user experience and functionality. These smart devices are interleaving the daily life of millions of people and are used for an increasing number of tasks [Gho17]. State of the art techniques are powerful and mature enough to take over even very complex and sensitive tasks - for example in autonomous cars, in flight assistance systems or in the control of critical infrastructure. Tasks that can potentially harm human beings or destroy valuable infrastructure are called "safety critical" and special measures need to be taken during the development cycle to ensure correct operation of such safety critical systems [IEC10][ISO11].

These special measures are given by safety analysis methods. A very relevant property of any safety analysis method is the amount of type I and type II errors it produces. Type I errors (false alarms) denote the situation in which a safety analysis method evaluates a correct system to be faulty. The complementary condition is given by type II errors (hidden faults), in which a safety analysis method evaluates a faulty system to be correct. Type II errors are of major importance in the context of safety critical systems: a faulty system that is evaluated to work correctly poses uncontrollable risk to the user and the environment. Thus there is a need for safety analysis methods that do not suffer from type II errors.

In the context of this thesis, verification of dynamic systems means applying safety analysis methods in an offline setting to ensure consistency of the verification object (VO) with the specification. Guaranteed verification means that type II errors are impossible by design. In case the safety analysis is fast enough, it can be applied in an online setting, which is then called "diagnosis". Diagnosis can also be used to detect runtime errors.

It is common opinion that there is currently no sufficient type II error free method available in the state of the art and the state of science [Kap16].

Currently, safety analysis methods use mostly falsification approaches, e.g. methods from the field of testing. To achieve confidence about the absence of failures based on testing methods, it is necessary to use a sufficiently large amount of test cases. As a result, safety analysis is more expensive than the development itself [Fos15], and costs are expected to rise further with the increasing complexity of the tasks assigned to the technical system.

Besides costs, current safety measures are often based on the experience of the responsible engineer or brute-force simulation approaches are applied [ZN09]. It is widely recognized that those methods will not keep up with the complexity given by currently developed or future systems [Ram17][Ott18].

The example which is predominant in public perception is located within the automotive industry. Current systems like automatic cruise control or autonomous parking pilots are analyzed by applying the previously mentioned experience and simulation based safety methods [Zan12]. Nevertheless it is known that those are not suitable for the arising challenges, e.g. in the context of autonomous driving [Wac17][Koe18].

A possibility to avoid this dilemma is to use formal methods that can prove specific properties of a verification object. A promising approach that gained great attention in recent years is given by interval arithmetic safety analysis methods, see among others [Uga03][Wol10] [San17]. Results obtained using those methods are guaranteed to include all possible nominal system behavior as well as additional non-nominal behavior (so called spurious solutions). Due to this overapproximating property, there are no type I errors. However, for the same reason type II errors are possible by design.

The goal of this thesis is to close the gap by developing a formal method for the safety analysis of dynamic systems that is guaranteed to be free of type II errors.

## 2 State of Science

There are numerous methods and approaches concerned with (safety) analysis of development results in different communities. Also, there is a broad terminology with respect to the verification question. The primary goal of this chapter is to build a basic conceptualization and the resulting terminology used in this thesis. This is necessary to follow the ideas and approaches introduced in later chapters.

Furthermore, the most important and widespread notions and methods used in engineering and engineering science are introduced and discussed.

## 2.1 Conceptualization and Terminology

System behavior analysis can be conducted with respect to different perspectives. This thesis addresses the verification of dynamic systems. A classification of other perspectives is given in Appendix A. To conduct the verification of dynamic systems it is necessary to define three components. First a description of the intended system behavior is set up. The next step is to define the concept of deviating behavior. Finally the developed real system behavior has to be assessed with respect to the intended behavior.

## 2.1.1 Behavior Description

The desire of the customer needs to be documented in some kind of specification to allow any analysis in terms of verification. There are as many specification methods to define the nominal behavior as there are methods to check their fulfillment. The variety includes very formless approaches in human language [Mac95] as well as very formal definitions using (runnable) models [AI15] or special specification languages ([Par72][Spi89][Abr96]). The choice of a suitable specification formalism to be used in a project is a trade-off. The less formal a specification, the less effort is necessary to set it up, leaving the effort to the developer who needs to interpret the specification. During the verification procedure it is necessary to interpret the specification, which leads to a need for experienced experts [ZN09, p. 120][Raj13][Bal16]. The more formal a specification, the more effort is necessary to set it up. An advantage of formal specifications is that they force the specification engineer to capture the requirements in a precise and structured way. Standardized specification routines help to avoid careless mistakes during the setup [Par86][Hal90][Sch15a].

On the other hand, all properties that should be covered need to be representable in the specification, which can lead to requirements that are impossible to capture in a specific formalism. However, due to the precision of very formal specifications it is possible to analyze them in a rather automated or "proof-like" way.

Throughout this thesis it is assumed that the specification itself is known, correct and represents the whole functionality, behavior and all properties that are necessary to fulfill the customer's desire.

This work assumes a set based specification. It is assumed that it is possible to represent the desired behavior of a dynamic system in terms of a specific abstract set. This formal specification (FS) can be interpreted to include all dynamic system parametrizations that are able to create the intended behavior. The set of intended or desired behavior can be given by differently shaped sets, e.g. a circle or a square. In case there is no variation in the desired behavior, the set consists of only one parametrization, which is given by a distinguished point. Different possible specification sets are depicted in Fig. 2.1.

Figure 2.1: Exemplary set based specifications (Point, Circle, Square)

## 2.1.2 Behavior Deviation

Implemented systems can show behavior deviating from the desired behavior due to several causes. From a very basic point of view it is possible to differentiate between mistake by misfortune, mistake by accident without intention and mistake by deliberate wrong-doing by an individual.<sup>1</sup>

The setting of this work tackles the second kind, mistake by accident without intention, which can happen at every point during the development process. A wide range of expressions is used to differentiate the field of unintended behavior or unintended properties. However, different fields of research and profession are using different naming conventions.

The naming convention used in this thesis is given in Fig. 2.2.

Figure 2.2: Failure terminology

<sup>1</sup> These categories are inspired by Aristotle (384 - 322 BC), who thought about ethics and mistakes of human behavior [Res07].

The foundation of all unintended behavior is given by the basic mistakes. The next instance is called fault and denotes the deviation of at least one system value from its intended value. This deviation can happen due to all three of the basic mistakes. If a fault leads to unintended system behavior, it is called error.

A system perturbed by a fault and a resulting error can still be able to operate correctly. Only if there is a persistent interruption of correct behavior is the system said to show a failure. This failure introduces a hazard into the environment the system operates in. The hazard can lead to consequences in the environment that potentially harm objects or even human beings.

Complementary to this, there is the concept of disturbance in a control engineering sense. The process of capturing real world data and transferring it into any control system is always superimposed by a process that creates a deviation between the real values and the measurement values [Fra16, p. 65]. This deviation is called disturbance or noise, and every system needs to be adapted to the specific noise present in itself as well as in the particular environment.

## 2.1.3 Behavior Assessment

The assessment of the verification object is done with respect to its behavior. Therefore it is necessary to set up the formal specification (FS) and additionally describe the behavior of the verification object (VO) using the same formalism. Both descriptions are assumed to be represented by a convex set. The notion of set based basic consistency that is used throughout this thesis is given in Definition 2.1.

### Definition 2.1 (Set Based Basic Consistency)

A set based verification object VO is called basic consistent with its set based formal specification FS if and only if there is an intersection between the formal specification and the verification object behavior:

$$(FS \cap VO \neq \emptyset) \Leftrightarrow Consistency.$$

A special case is given by full consistency, which means that all behavior given by the formal specification is available in the verification object.

### Definition 2.2 (Set Based Full Consistency)

A set based verification object VO is called full consistent with its set based formal specification FS if and only if the formal specification behavior is a subset of the verification object behavior:

$$(FS \subseteq VO) \Leftrightarrow Full\ Consistency.$$

The inverse is given by inconsistency according to Definition 2.3.

### Definition 2.3 (Set Based Inconsistency)

A set based verification object VO is called inconsistent with its set based formal specification FS if and only if there is no intersection between the verification object behavior and the formal specification behavior:

$$(FS \cap VO = \emptyset) \Leftrightarrow Inconsistency.$$

This means that none of the available VO behavior is given in the specification.

The resulting situations are depicted in Fig. 2.3. It is assumed that the formal specification FS is given by the blue circle. The set of real VO behavior is given by the green and red circles. Definition 2.1 and Definition 2.2 are fulfilled in the left and middle subfigures, leading to the verdicts Consistency and Full Consistency for the depicted VO and the FS. Definition 2.3 is fulfilled in the right subfigure, leading to the verdict Inconsistency for the depicted VO and the FS.

Figure 2.3: Basic consistent, full consistent and inconsistent result of set based verication

In the context of this thesis the verification object is considered to be correct if there is specified behavior within the VO behavior. This is called "consistent behavior".

For ease of notation, the term Consistency is used as soon as there is "consistent behavior", either due to basic consistency or due to the even stricter full consistency. The following considerations apply equivalently to both definitions.
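Definitions 2.1-2.3 can be sketched numerically for the simple case of axis-aligned interval boxes. The following fragment is a hypothetical illustration only (the thesis works with general convex sets); all function names and example sets are assumptions for this sketch.

```python
# Sketch of Definitions 2.1-2.3 for axis-aligned interval boxes.
# Boxes are lists of per-dimension (lo, hi) intervals; hypothetical example.

def box_intersection(a, b):
    """Intersect two boxes; return None if the intersection is empty."""
    result = []
    for (alo, ahi), (blo, bhi) in zip(a, b):
        lo, hi = max(alo, blo), min(ahi, bhi)
        if lo > hi:
            return None
        result.append((lo, hi))
    return result

def box_subset(a, b):
    """True if box a is contained in box b (checked per dimension)."""
    return all(blo <= alo and ahi <= bhi
               for (alo, ahi), (blo, bhi) in zip(a, b))

def assess(FS, VO):
    """Return the verdict of Definitions 2.1-2.3 for the two boxes."""
    if box_subset(FS, VO):                  # Definition 2.2: FS ⊆ VO
        return "full consistency"
    if box_intersection(FS, VO) is not None:  # Definition 2.1: FS ∩ VO ≠ ∅
        return "basic consistency"
    return "inconsistency"                  # Definition 2.3: FS ∩ VO = ∅

FS = [(0.0, 1.0), (0.0, 1.0)]
print(assess(FS, [(-1.0, 2.0), (-1.0, 2.0)]))  # FS ⊆ VO  → full consistency
print(assess(FS, [(0.5, 2.0), (0.5, 2.0)]))    # overlap  → basic consistency
print(assess(FS, [(2.0, 3.0), (2.0, 3.0)]))    # disjoint → inconsistency
```

The three calls reproduce the three situations of Fig. 2.3 for rectangular sets.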

In general, the VO behavior is not directly available and thus needs to be captured by an approximation. If the behavior is given in terms of dynamic system parameters, the approximation can be calculated using identification methods [Lju99]. Therefore it is necessary to interact with the real VO to determine the underlying behavior. Assumption 2.1 has to hold to allow a successful identification.

### Assumption 2.1 (Persistent Excitation of the VO)

The verification object VO is sufficiently excited to show all relevant behavior.

Only behavior of the VO that is triggered or excited is included in the approximation and can thus be analyzed [Ast95, p. 41][Ise10, p. 250]. Throughout this thesis it is assumed that Assumption 2.1 holds.

There are two main set based calculation paradigms that can be used to determine the approximation of the VO: underapproximation (VO<sup>−</sup>) and overapproximation (VO<sup>+</sup>).

In case of overapproximation, there is spurious behavior in the resulting outer enclosure (see rectangle in the left of Fig. 2.4). If underapproximation is used, some VO behavior is missing in the inner enclosure (see rectangle in the right of Fig. 2.4).

Figure 2.4: Set based overapproximation and set based underapproximation

As the true VO behavior is not available, the approximated behavior is used to reason about Consistency. In case an overapproximation of the verification object behavior (VO<sup>+</sup>) is used, this leads to

$$\{FS \cap VO^{+} \neq \emptyset\} \Leftrightarrow Consistency^{+}.\tag{2.1}$$

It is very important to note that (2.1) yields the verdict Consistency<sup>+</sup> that holds only for the overapproximated behavior VO<sup>+</sup>. It does not have to be valid for the true VO behavior. To relate the verdict to the true VO behavior, the overapproximating property

$$\left(VO \subseteq VO^{+}\right) \Rightarrow Consistency \subseteq Consistency^{+} \tag{2.2}$$

has to be taken into account, leading to

$$\{FS \cap VO^{+} \neq \emptyset\} \Leftrightarrow Consistency^{+} \Leftarrow Consistency \Leftrightarrow \{FS \cap VO \neq \emptyset\}.\tag{2.3}$$

It can be seen from (2.3) that the Consistency<sup>+</sup> verdict cannot be extended to the true system behavior VO. This situation is depicted in the lower left field of Tab. 2.1. This property holds for both Consistency and Full Consistency.

If an overapproximating method yields the result Consistency<sup>+</sup> and this result is generalized to Consistency, it is possible that there is a "hidden fault" present in the system. This situation is called type II error and it is a severe problem in the field of safety critical systems. Type II errors are likely to harm people or the environment as the VO is showing wrong behavior but the supervising system assumes correct functionality. Corrective actions that are designed to prevent fault induced damage are not activated in case of a hidden fault. This type of fault can thus proceed and potentially harm people. In safety critical systems, type II errors need to be avoided under all circumstances.

Therefore it is beneficial to use the underapproximation of the verification object behavior (VO<sup>−</sup>), leading to

$$\left(VO^{-} \subseteq VO\right) \Rightarrow Consistency^{-} \subseteq Consistency. \tag{2.4}$$

The resulting verdict Consistency<sup>−</sup> can be extended to the true system property:

$$\{FS \cap VO^{-} \neq \emptyset\} \Leftrightarrow Consistency^{-} \Rightarrow Consistency \Leftrightarrow \{FS \cap VO \neq \emptyset\}.\tag{2.5}$$

If the underapproximation of the verification object VO<sup>−</sup> is consistent with the specification, it is guaranteed that the true verification object is also consistent with the specification (see Tab. 2.1, lower right field). Thus the resulting Consistency verdict is guaranteed to be free of type II errors. Again this property holds for Consistency and Full Consistency.

The new verication approach developed in this thesis is based on the concept of underapproximation to avoid type II errors by design. This essential property is necessary to solve the currently unsolved reliable verication problem of safety critical systems.

However, the property comes at the cost of possible false alarms (see Tab. 2.1, upper right field). Instead of additional spurious solutions there are missing solutions generated by the underapproximation VO<sup>−</sup>. Even though there is consistent behavior in the VO, this behavior is not included in the underapproximation, leading to a false alarm (type I error).

The situation in the remaining upper left field of Tab. 2.1 depicts the correct verdict that can be obtained using both under- and overapproximation. This is due to the fact that there is no consistent behavior for the real VO as well as for both approximations.
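The verdict logic of (2.1)-(2.5) can be illustrated with one-dimensional intervals. All values below are hypothetical; the sketch shows how an overapproximation can raise a Consistency<sup>+</sup> verdict for a truly inconsistent system (the type II risk), while the underapproximation stays silent and thus remains sound.

```python
# Numerical illustration of (2.1)-(2.5) with 1-D intervals (hypothetical values).

def intersects(a, b):
    """FS ∩ VO ≠ ∅ for closed intervals a = (lo, hi), b = (lo, hi)."""
    return max(a[0], b[0]) <= min(a[1], b[1])

FS = (0.0, 1.0)          # formal specification
VO_true = (2.0, 3.0)     # true (unknown) VO behavior: inconsistent with FS
VO_plus = (0.5, 3.5)     # overapproximation, VO_true ⊆ VO_plus
VO_minus = (2.2, 2.8)    # underapproximation, VO_minus ⊆ VO_true

print(intersects(FS, VO_true))   # False: the system is truly inconsistent
print(intersects(FS, VO_plus))   # True:  Consistency⁺ — a type II error if generalized
print(intersects(FS, VO_minus))  # False: no unsound Consistency⁻ verdict is raised
```

In the converse direction, whenever VO<sup>−</sup> intersects FS, the true VO must intersect FS as well, which is exactly the soundness expressed in (2.5).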

## 2.2 Interval Arithmetic Methods

The overapproximating property introduced in the previous section can be achieved using the notion of interval arithmetic. Interval arithmetic is thereby used to enclose the effects of noise and the epistemic lack of knowledge that is always present in real systems. A fixed lower and upper bound is used to describe the set of possible true values that are associated with a given measurement value x<sub>meas</sub>:

$$x_{true} \in [x_{meas} - \delta,\ x_{meas} + \delta]. \tag{2.6}$$

The maximum tolerance δ is the only parameter needed to set up the interval. This interval arithmetic notation of maximum deviation is widely used by sensor manufacturers [Kre95] and in the fault detection community (e.g. in [Arm09][Zai14][AI17]). The true measurement value is guaranteed to be included in the interval and never lies outside of it.

When interval enclosure is used on the measurement data, all succeeding calculations have to apply the notions of interval arithmetic to preserve the guaranteed properties. The basic property of interval arithmetic calculations is that all possible solutions are included in the result. Therefore, interval arithmetic results are able to create the introduced overapproximation properties. The interval arithmetic solution set consists of real solutions and spurious solutions. Spurious solutions denote solutions that do not exist in the real system but are inevitable artifacts introduced by interval arithmetic calculations and the final interval arithmetic (and thus axis parallel) enclosure of the real solution [Bau87].
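A minimal sketch of such guaranteed calculations, with hypothetical operator implementations, shows how spurious solutions arise: subtracting an interval from itself encloses the true result 0 but also contains values that no single true measurement can produce.

```python
# Minimal interval arithmetic sketch (hypothetical helper functions, not the
# thesis' toolbox). Intervals are (lo, hi) tuples.

def i_add(a, b):
    # Sum of any pair of members lies between the sums of the bounds.
    return (a[0] + b[0], a[1] + b[1])

def i_sub(a, b):
    # Worst-case bounds: subtract the opposite endpoints.
    return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    # Product bounds are among the four endpoint products.
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

x = (1.0, 2.0)                 # enclosure of a measurement, e.g. x_meas ± δ
print(i_add(x, x))             # (2.0, 4.0): every true sum is enclosed
print(i_sub(x, x))             # (-1.0, 1.0): the true result 0 is enclosed,
                               # everything else is spurious
```

The `i_sub(x, x)` result illustrates the dependency effect: interval arithmetic treats the two occurrences of x as independent, which guarantees enclosure at the price of spurious solutions.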

Interval arithmetic is widely used for verification [Bal16] and diagnosis methods, as type I errors (false alarms) are prevented by definition. Therefore, models of the nominal system are used to calculate a set of predicted outputs for the measured inputs of the system [Ven15]. This can be done by using intervals on the system parameters to calculate an interval range of outputs [Pui06][Mes10][Wol10]. The system is assumed to be correct as long as the measured output values are within the predicted output interval, i.e. within the so called direct image. Due to the outer enclosure used for the prediction, type II errors are possible using this class of methods.

An alternative approach uses the so called inverse image [Pui06] or feasible set [Cas14], also known as the set-membership approach [Ing09]. In this case the input-output measurement data is used to calculate the set of parameters that is able to generate the observed mapping. This is possible if the system is linear, or nonlinear but linear with respect to the parameters. If there is a member of the set of nominal models within the feasible set of the measurement, the measured data can be explained by the nominal model. This class of methods utilizes interval arithmetic identification based on outer enclosures. Therefore, type II errors are possible by definition.
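For a system that is linear in a single parameter, the feasible set can be sketched as the intersection of per-sample parameter intervals. The data, tolerance and function below are hypothetical and only illustrate the set-membership idea, not the cited implementations.

```python
# Sketch of a feasible parameter set (inverse image) for a system that is
# linear in its single parameter, y_k = theta * u_k, with interval-bounded
# outputs [y_k - δ, y_k + δ]. Hypothetical data; positive inputs assumed.

def feasible_theta(samples, delta):
    """Intersect the per-sample parameter intervals [(y-δ)/u, (y+δ)/u]."""
    lo, hi = float("-inf"), float("inf")
    for u, y in samples:
        lo = max(lo, (y - delta) / u)
        hi = min(hi, (y + delta) / u)
    return (lo, hi) if lo <= hi else None  # None: no parameter explains the data

data = [(1.0, 2.1), (2.0, 3.9), (4.0, 8.2)]  # (u_k, y_k) measurements
print(feasible_theta(data, delta=0.3))       # → (1.975, 2.1)
```

An empty result (None) means that no nominal parameter can explain the observed mapping, i.e. the measurement data are inconsistent with the model class.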

The feasible set corresponds to the solution set of the identification problem given by the measurement data, which can be computationally hard to calculate [Hor13].

A widespread possibility to approximate the solution set is given by subpavings using the SIVIA algorithm (see [Jau01, p. 45][Pui06][Mes10]). The bisection approach of SIVIA leads to a large set of intervals of various sizes. Even though this result is very precise, the great amount of intervals leads to complex handling and long calculation times.
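The bisection idea behind SIVIA can be sketched in one dimension. The fragment below is a strongly simplified, hypothetical version of the cited algorithm: boxes whose interval image lies fully inside the admissible output range are accepted, boxes fully outside are discarded, and undecided boxes are bisected until a minimum width is reached.

```python
# Toy SIVIA-style bisection (1-D) approximating the set {x : f(x) ∈ Y}
# by inner and boundary intervals; a simplified sketch, not the cited code.

def f_range(box):
    """Exact interval image of f(x) = x² on box = (lo, hi)."""
    lo, hi = box
    if lo <= 0.0 <= hi:
        return (0.0, max(lo*lo, hi*hi))
    return (min(lo*lo, hi*hi), max(lo*lo, hi*hi))

def sivia(box, Y, eps):
    inner, boundary = [], []
    stack = [box]
    while stack:
        b = stack.pop()
        flo, fhi = f_range(b)
        if Y[0] <= flo and fhi <= Y[1]:   # image fully inside Y: accept
            inner.append(b)
        elif fhi < Y[0] or flo > Y[1]:    # image fully outside Y: discard
            continue
        elif b[1] - b[0] < eps:           # undecided but small enough: boundary
            boundary.append(b)
        else:                             # undecided: bisect and recurse
            mid = 0.5 * (b[0] + b[1])
            stack += [(b[0], mid), (mid, b[1])]
    return inner, boundary

inner, boundary = sivia((-3.0, 3.0), Y=(1.0, 4.0), eps=0.1)
# The union of `inner` underapproximates the true set [-2, -1] ∪ [1, 2];
# shrinking eps refines the subpaving at the cost of more intervals.
```

The growing number of boundary boxes for small eps illustrates the handling and runtime issue mentioned above.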

A more efficient approach is given by a zonotope<sup>2</sup> representation as shown in [Ing09]. The question of how this approximation can be calculated in an efficient way is still an active research topic. Latest results [Koc19] use sparse polynomial zonotopes. Additionally, there are methods to reduce the solution set by pruning spurious parts if possible. For example, the approach presented in [Wol10] uses measurement data to reduce the overapproximated state set. Nevertheless, all approximation methods use outer enclosures and are thus prone to type II errors.

Besides the field of diagnosis, different interval arithmetic approaches are used for state estimation ([Jau01][Ram09][Mes10][E13][Kre16][Kre18][Wan18]) and control [Rau06]. None of these methods addresses type II errors.

Therefore it is necessary to develop a new verification method that is able to guarantee the absence of type II errors. This can be achieved by calculating an inner enclosure that consists of a subset of the real solution, as shown in the previous section. This thesis utilizes a special extension of interval arithmetic called Kaucher interval arithmetic, which provides powerful theories to calculate the necessary inner enclosures.

## 2.3 Governing Complexity: Time Variant and Hybrid Verification Approaches

Linear time invariant (LTI) systems form the basis for most system theoretic methods and approaches. Nevertheless, real world systems are normally neither linear nor time invariant. The question is thus how to handle the complexity inherently given in real world systems. Some nonlinear systems can be handled using nonlinear theory, which provides methods for analysis and control [Kha15]. However, those approaches are often subject to very strict preliminaries and only applicable to a narrow class of systems. Therefore, linearization is often used in practice to handle nonlinearities. Another possibility is to model nonlinearities by using time variant parameters in a linear system [Ble11]. In this case, the set of feasible parameters can be bounded using interval arithmetic or any other set definition. It is also possible to split the nonlinear dynamics into a sequence of linear dynamics [Oza14] that are activated by a superimposed switching mechanism. The resulting model belongs to the class of hybrid systems. Hybrid systems can also be used to model time variant linear systems with piecewise constant parameters. In this case, each set of piecewise constant parameters is represented by an individual dynamic subsystem.
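A hybrid model of this kind can be sketched as a scalar switched linear system. The mode parameters and switching rule below are hypothetical and only illustrate how a superimposed switching mechanism activates one linear subsystem at a time.

```python
# Sketch of a hybrid model: scalar linear dynamics x⁺ = a_σ·x + b_σ·u whose
# active mode σ is selected by a state-dependent switching rule.
# All numerical values are hypothetical.

MODES = {0: (0.9, 1.0),   # (a, b) of subsystem 0, active for x < 1
         1: (0.5, 0.2)}   # (a, b) of subsystem 1, active for x ≥ 1

def switching_rule(x):
    """Superimposed switching mechanism: pick the active mode from the state."""
    return 0 if x < 1.0 else 1

def simulate(x0, inputs):
    x, trace = x0, []
    for u in inputs:
        sigma = switching_rule(x)   # select the active subsystem
        a, b = MODES[sigma]
        x = a * x + b * u           # one step of the active linear dynamics
        trace.append((sigma, x))
    return trace

trace = simulate(0.0, [1.0] * 6)
print(trace)  # alternating modes: the switching encodes the nonlinearity
```

Identifying such a model from data requires estimating both the subsystem parameters and the switching times, which is exactly the hybrid identification problem discussed next.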

There is a wide theoretical framework available for hybrid systems [Eng02][Mah10]. The topic of hybrid verication is subject to current research in the cyber physical systems community (see e.g. [Sch15b][Kap16][Sch17a][Ara17] [Bar18][Har18][Lau18]). There are also large research clusters in this area [AVA19][ENA19].

To apply the set based approach introduced in this thesis in a hybrid setting, it is necessary to use hybrid identification methods to determine the unknown system parameters from given measurement data. The most relevant hybrid identification approaches given in the literature are introduced in the following.

<sup>2</sup> A zonotope is a convex polytope that is point symmetric with respect to its center.

The algebraic approach provided in [Vid08] interprets the identification setting as a geometric problem. The measurement data as well as the parameters are interpreted as vectors that have to be perpendicular in case the identified parameters match the true parameters. The goal is to find the parameter vector with minimal projection on the family of all measurement vectors. A Bayesian approach based on stochastic properties is given by [Jul05]. The number of models and the model order need to be known a priori for this procedure. A cluster based approach was developed by [FT03], where machine learning methods are used to form groups of similar behavior. Identification methods based on optimization were developed and presented in [Mün05][Bor09][Lau18]. The bounded error approach introduced in [Bem05] and used for time variant systems in [Bra16] assumes errors that are characterized by their maximum value. Even though this is close to the basic interpretation used in this thesis, those approaches do not use interval arithmetic notations. Therefore they lack the guarantees that are necessary in the safety critical context of this thesis.

A greedy approach based on [Oza12] was developed in [Die13a] and [Die13b]. This approach is different from the others as it is the only one that uses a multi-step prediction error instead of the common one step prediction error. It was extended to cyber-physical systems in [Sch17a] and to Kaucher interval arithmetic in [Sch19].

## 2.4 Other Common Verification and Falsification Approaches

There is a wide range of verification methods with different degrees of abstraction and formalization.

A strong diagnosis community is active in the control engineering field (see e.g. [Ise93][Sch03][Ven03a][Ven03b][Ven03c][Bla06][Arm09][Sch09][Pui10][Ise11][Cac13][Zol14]). Also, a large testing and verification community has formed in the information technology community. This leads to a great range of verification and validation methods, unfortunately using similar terminology (e.g. [Bar78][Boe84][Hay86][TF91][Bar05]). Three of the most relevant approaches are introduced in this section.

## 2.4.1 Testing

A widespread - if not the most widespread - approach to assess the properties of a technical system is given by testing. Testing is a classic falsification method, aiming at the detection of counterexamples that do not show the intended behavior. It is therefore necessary to define a well-chosen set of test cases, each consisting of the system state at the beginning of the test case, inputs that are applied during the run of the test case, and outputs that are expected to appear during or at the end of the test case. If the system under test (SUT) shows outputs deviating from the expected outputs, the test case is called a "failed test" and further inspections of the test case are necessary to identify the reason. In contrast to verification methods, each result is only valid for the specific applied test case. It has long been known that the confidence in the results can only be increased by increasing the number of test cases [Fut89, p. 3].

Testing can be applied at different stages of the development cycle and at different abstraction levels. The state of the art testing scheme is given by the V-model (see e.g. [Web09][ZN09][Raj13][Ott18]) as depicted in Fig. 2.5.

Figure 2.5: V-model diagram (based on [Ott18])

The left part of the V-model is the specification branch. It is applied in a top-down fashion. The specification is initialized on the highest level, representing the customer's desire. Then it is propagated to lower levels and refined to match the degree of formalization of each level. This structured procedure includes the decomposition of the overall task into several subtasks, accompanied by the definition of interfaces between the subtasks.

After every refinement step the result is checked against the superimposed level to verify that both requirements are consistent. The final (software) specification is implemented at the bottom level. The resulting SUT is checked against its specification at the same level. If the test succeeds, the function is used in higher levels and combined with other parts to meet superimposed specifications. If a test fails, the system under test is rejected to the previous level. The complexity of the SUT increases with rising level on the right verification branch of the V-model. It is also possible that a failed test at high levels (e.g. at system or acceptance level) leads to changes in the respective high level specification. This results in the need to repeat the whole development process starting with the changed high level specification [Raj13][ZN09].

There are several technologies that are used in different phases of the V-model, sometimes leading to additional branches. Fig. 2.5 includes cutting-edge technologies like rapid prototyping, component HiL and cluster HiL that are used to reduce the time needed to perform a complete cycle of the V-model.

An important part of the testing process is the test case generation. There are several possibilities to set up the test cases. If there is experience with this kind of product, it is likely that an existing test case database can be reused and adapted [Sax08][Bal16]. Another straightforward approach is to determine the feasible range of all input variables and divide this range into so called "equivalence classes" [Utt06]. The test cases are then formed by combining one representative (or the minimum and maximum values) from each equivalence class with all possible representatives of the other input variables [Utt06]. If it is not possible to form equivalence classes, it is also possible to sample the valid range of a variable randomly or in equidistant steps. This approach is straightforward and easy to understand, but it leads to very large sets of test cases with an increasing number of variables in the system or if a fine resolution of the variable ranges is necessary (e.g. [Hei05][AI15]).
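The equivalence class combination described above can be sketched in a few lines; the input variables, class boundaries and representative choice below are purely illustrative assumptions, not values from the thesis.

```python
# Sketch of equivalence-class test case generation: take the boundary
# values of each class as representatives and combine them over all
# input variables. All names and ranges here are illustrative only.
from itertools import product

# Equivalence classes per input variable, given as (min, max) ranges.
classes = {
    "speed":    [(0, 30), (31, 100)],
    "throttle": [(0.0, 0.5), (0.51, 1.0)],
}

# Use the minimum and maximum value of each class as representatives.
representatives = {
    name: sorted({bound for rng in ranges for bound in rng})
    for name, ranges in classes.items()
}

# Combine one representative per variable into complete test cases.
test_cases = [dict(zip(representatives, combo))
              for combo in product(*representatives.values())]

print(len(test_cases))  # 4 x 4 representatives -> 16 test cases
```

With a finer resolution or more input variables, the Cartesian product grows quickly, which is exactly the test case explosion discussed above.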

The runtime for testing or simulation rises with the number of test cases, leading to large computation times. If equidistant sampling is used, it is possible that much calculation time is spent assessing "uninteresting" regions of the input space or that "interesting" regions are only covered with few test cases. One major drawback of the testing approach is that all test cases need to be redone if the SUT changes. As there are frequent iterative changes during a development cycle (i.e. more than one iteration of the V-model is necessary), the test cases need to be applied several times which leads to even longer computation times.

## 2.4.2 Reachability Analysis

By including more system theoretic knowledge, testing can be developed into reachability analysis. The basic requirement for this purpose is a specification that includes some kind of state space the system is operating in. Further, the specification has to define forbidden areas in this state space.

The main idea of reachability analysis is to determine whether there is a sequence of inputs that leads the SUT to enter the forbidden area. If an input sequence leading to a forbidden area is found, the SUT is falsified and needs to be improved to match the requirements (see among others [Bha04][Alu06][Mit07][Alt08][Don10]). Reachability analysis also aims at finding a counterexample, which entails the same basic problem as the testing scenario: the absence of a detected counterexample does not mean there is no fault present in the system, as faults can be hidden in uncovered parts of the state space.

Runtime limitations restrict all methods to a finite number of samplings and thus to partial coverage of the state space. There are smart coverage criteria available that allow an effective calculation of the most important regions of the state space, e.g. using rapidly exploring random trees (RRT) (see e.g. [Bha04][Kap16][Pan17]). These approaches can be extended by the so called method of star discrepancy [Dan11] or by applying the underminer method [Bal16]. Nevertheless, reachability analysis is still a falsification method, based on the specification of the faulty case (forbidden areas). Therefore reachability analysis is not suitable to solve the verification problem addressed in this thesis.

## 2.4.3 Formal Verification

One possibility to avoid the counterexample problem is given by more abstract approaches from the verification field. Those formal methods are used to reason about system properties in a mathematically rigorous way. To apply formal methods, the VO needs to be transferred into a strict mathematical notation, e.g. by using the Z specification language (see e.g. [Spi89][Bro05, p. 325]) or Prolog [Bro05, p. 334]. The formalized VO can then be used to carry out mathematical proofs showing that specific system properties hold in all operating conditions. The proof is thereby conducted by a so called theorem prover.

This approach is very powerful as the results are valuable and mathematically sound. Nevertheless, formal proofs can only be conducted for very distinct properties. Furthermore, the methods require very long runtimes even for "small" academic problems. This leads to still unsolved runtime issues for real world problems ([Bro05, p. 325][Bar18]).

A basic problem that cannot be avoided is that formal proofs cannot be conducted on the VO directly. Therefore the results hold only for the image of the VO in the used formalism. Mistakes that are introduced when the system is transferred from the real world into the modeling formalism cannot be detected.

## 2.5 Scientific Gap and Related Research Question

Testing is the state of the art for current systems and is successfully applied in various communities. Nevertheless, there are current systems, e.g. autonomous driving functions, that show a number of relevant scenarios that cannot be covered by testing or simulations. Even if this were possible, falsification methods cannot prove the absence of all faults. They need to stop at some point and have to assume that no undiscovered fault is present in the system. There are established methods available that use interval enclosures to mathematically bound the system behavior. Those methods use classic interval arithmetic, leading to overapproximating properties. Overapproximating methods are able to provide type I error free results, meaning that no false alarms are generated by the method. Nevertheless, the overapproximation can cover missing behavior, leading to an undetected hidden fault. In the case of safety critical systems, hidden faults (type II errors) can lead to severe consequences threatening human life. It can be concluded that the verification of safety critical dynamic systems is currently not solved.

A new verification method has to be developed to close this gap. This method has to be free of hidden failures, meaning that there are no type II errors. Therefore the specification is assumed to be formally given in terms of a set of dynamic system parameters. The behavior of the VO has to be given in the same formalism, leading to an identification problem. Safety critical systems are often implemented as embedded systems that consist of a closely connected discrete event system (the controller) and a dynamic system (the plant). Thus it is necessary to develop a hybrid identification method that is able to provide the desired guarantees.

The comprehensive research question tackled by this thesis is:

"How can the consistency of highly automated safety critical dynamic systems be evaluated by a guaranteed verication method?"

# 3 Methodical Approach and Mathematical Preliminaries

Considering the state of science as well as the current and future challenges of system theory, there is a need for a new verification method. The rising importance of safety critical systems emphasizes the need for formal methods that target type II errors. This thesis introduces such a formal method based on the notions of interval arithmetic, extended to Kaucher interval arithmetic. First, the necessary notations and definitions are given to provide a sound theoretical base for further considerations.

## 3.1 Mathematical Preliminaries

All methods introduced in this thesis are based on interval arithmetic, extended by the properties of Kaucher interval arithmetic. In the following, the basic properties and notations of interval arithmetic are introduced.

The goal of this chapter is to provide a brief overview of the interval arithmetic that is necessary for this thesis. The interested reader is referred to [Bau87][Rze08][Roh12][Sai14] for an extensive coverage of the topic. Throughout this thesis the well-known notation of interval arithmetic, extended by the Kaucher interval arithmetic introduced in [Kup95][Sha96], is used.

## 3.1.1 Basic Interval Arithmetic

Interval arithmetic was initially developed to handle numerical calculation errors due to the floating point calculation used in computer algebra systems [Apo67]. It gained popularity outside the numerical community with the rise of electronic computing in various fields. When measurement data is used in computing - as is normally the case in engineering and natural science - errors are already introduced by the measurement process itself [Kre95]. Furthermore, the used values are given as samples at discrete time steps $k$. Every measurement $y\_{meas,k}$ is corrupted by some noise $\epsilon\_k$ that leads to a deviation between the real value $y\_{true,k}$ and its measurement

$$y\_{meas,k} = y\_{true,k} + \epsilon\_k.\tag{3.1}$$

It is possible to define intervals around the measurement that are guaranteed to include the real system value

$$y\_{true,k} \in [y\_{meas,k} - \delta, \ y\_{meas,k} + \delta] \tag{3.2}$$

if the maximum $\delta$ of the absolute noise is known, i.e. $\forall k : |\epsilon\_k| \le \delta$.

Suitable values of $\delta$ can be determined from the data sheets provided by the sensor manufacturers.
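As a minimal illustration of (3.2), the following sketch wraps a guaranteed enclosure around each sample; the noise bound and the sample values are assumed for the example, not taken from the thesis.

```python
# Enclose noisy samples as in (3.2): y_true,k is guaranteed to lie in
# [y_meas,k - delta, y_meas,k + delta] if |eps_k| <= delta holds.
delta = 0.05                  # assumed sensor accuracy bound
y_meas = [1.02, 0.98, 1.10]   # illustrative measurement samples

enclosures = [(y - delta, y + delta) for y in y_meas]
print(enclosures)
```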

The definition of a classical interval type variable as given in [Sai14] is

$$\mathbf{x} \coloneqq \left[ \underline{x}, \,\, \overline{x} \right] = \left\{ x \in \mathbb{R} \, \middle| \, \underline{x} \le x \le \overline{x} \right\}.\tag{3.3}$$

This definition includes all real numbers that lie between or on the infimum $\underline{x}$ and the supremum $\overline{x}$. In case $\underline{x} > 0$ and $\overline{x} > 0$ the interval is called a positive interval. If the infimum is negative ($\underline{x} < 0$) and the supremum is positive ($\overline{x} > 0$), i.e. if $0 \in \mathbf{x}$, the interval is called a zero interval. A negative interval is given if $\underline{x} < 0$ and $\overline{x} < 0$. One last definition covers the case of supremum and infimum being the same, i.e. $\underline{x} = \overline{x}$, which is called a degenerated interval [Dja17] or point real interval [Sai14].

The set of so called proper intervals is given by

$$\mathbb{IR} := \{ \mathbf{x} = [\underline{x}, \,\, \overline{x}] \mid \underline{x} \le \overline{x} \text{ and } \underline{x}, \overline{x} \in \mathbb{R} \}\,. \tag{3.4}$$

Besides this infimum-supremum notation, each proper interval can be given using the center

$$x\_c := \frac{1}{2} \left( \overline{x} + \underline{x} \right) \tag{3.5}$$

and the radius

$$x\_{\Delta} := \frac{1}{2} \left( \overline{x} - \underline{x} \right). \tag{3.6}$$

of an interval. The interval can now also be stated in the center-radius notation

$$\mathbf{x} = \left< x\_c, \, x\_\Delta \right>. \tag{3.7}$$

Furthermore, it is important to introduce the interval type vector matrix notation based on [Jau01]. Vectors and matrices are written as capital letters and interval values are given in bold font $\mathbf{x}$, leading to interval matrices denoted as $\mathbf{A}$. An interval vector $\mathbf{X}$ is defined as the Cartesian product of closed intervals and includes a subset of the real numbers $\mathbb{R}^n$:

$$X := \mathbf{x}^{(1)} \times \mathbf{x}^{(2)} \times \dots \times \mathbf{x}^{(n)}, \text{with } \mathbf{x}^{(i)} = \left[ \underline{x}^{(i)}, \, \overline{x}^{(i)} \right] \text{ for } i \in \{1, 2, \dots, n\}. \tag{3.8}$$

This notation can be interpreted as the projection of the $i$-th interval component $\mathbf{x}^{(i)}$ onto the $i$-th axis of the vector space. An illustration for $n = 2$ and $n = 3$ is given in Fig. 3.1.

Figure 3.1: Examples of the graphical representation of $\mathbf{X} \in \mathbb{IR}^2$ (left) and $\mathbf{X} \in \mathbb{IR}^3$ (right)

An $(m \times n)$, $m, n \in \mathbb{N}$, interval matrix $\mathbf{A}$ can be interpreted as a subset of $\mathbb{R}^{m \times n}$. Again it is defined using the Cartesian product of $m \cdot n$ closed intervals:

$$\begin{aligned} \mathbf{A} &= \begin{pmatrix} \mathbf{a}^{(1,1)} & \dots & \mathbf{a}^{(1,n)} \\ \vdots & & \vdots \\ \mathbf{a}^{(m,1)} & \dots & \mathbf{a}^{(m,n)} \end{pmatrix} \\ &= \mathbf{a}^{(1,1)} \times \mathbf{a}^{(1,2)} \times \dots \times \mathbf{a}^{(m,n)} \\ &= \left( \mathbf{a}^{(i,j)} \right) \end{aligned} \tag{3.10}$$

with $1 \le i \le m$, $1 \le j \le n$. The center matrix is defined element-wise as in [Hla14] as

$$A\_c \in \mathbb{R}^{m \times n} : \left( a\_c^{(i,j)} \right) = \frac{1}{2} \left( \overline{a}^{(i,j)} + \underline{a}^{(i,j)} \right) \tag{3.11}$$

as well as the radius matrix

$$A\_{\Delta} \in \mathbb{R}^{m \times n} : \left( a\_{\Delta}^{(i,j)} \right) = \frac{1}{2} \left( \overline{a}^{(i,j)} - \underline{a}^{(i,j)} \right) \tag{3.12}$$

with $1 \le i \le m$, $1 \le j \le n$.

The four basic arithmetic operations addition, subtraction, multiplication and division, i.e. $\star \in \{+, -, \cdot, /\}$, are well defined for intervals. The general application of each operator to two interval values $\mathbf{x} = [\underline{x}, \overline{x}]$ and $\mathbf{y} = [\underline{y}, \overline{y}]$ is given by

$$\left[\underline{x},\,\,\overline{x}\right] \star \left[\underline{y},\,\,\overline{y}\right] = \left\{ z = x \star y \,\, \middle|\,\, \underline{x} \le x \le \overline{x}, \,\underline{y} \le y \le \overline{y} \right\} \tag{3.13}$$

according to [Apo67], which leads to the interval type calculation rule

$$\begin{aligned} \left[ \underline{x}, \,\,\overline{x} \right] \star \left[ \underline{y}, \,\,\overline{y} \right] &= \left[ \min \left( \underline{x} \star \underline{y}, \,\,\underline{x} \star \overline{y}, \,\,\overline{x} \star \underline{y}, \,\,\overline{x} \star \overline{y} \right), \\ &\quad \max \left( \underline{x} \star \underline{y}, \,\,\underline{x} \star \overline{y}, \,\,\overline{x} \star \underline{y}, \,\,\overline{x} \star \overline{y} \right) \right]. \end{aligned} \tag{3.14}$$

The different elements within the $\min(\cdot)$ and $\max(\cdot)$ operations are due to the fact that all combinations of the extreme values need to be taken into account.

Unfortunately, this property also causes two major drawbacks of interval arithmetic, the dependency effect and the wrapping effect. Those effects are explained in Example 3.1 and Example 3.2.

With the assumption $0 \notin [\underline{y}, \overline{y}]$ it is possible to explicitly state the four basic operations as in [Apo67]:

$$\begin{aligned} \left[\underline{x},\overline{x}\right] + \left[\underline{y},\overline{y}\right] &= \left[\underline{x} + \underline{y},\ \overline{x} + \overline{y}\right] \\ \left[\underline{x},\overline{x}\right] - \left[\underline{y},\ \overline{y}\right] &= \left[\underline{x} - \overline{y},\ \overline{x} - \underline{y}\right] \\ \left[\underline{x},\ \overline{x}\right] \cdot \left[\underline{y},\ \overline{y}\right] &= \left[\min\left(\underline{x}\underline{y},\ \underline{x}\overline{y},\ \overline{x}\underline{y},\ \overline{x}\overline{y}\right),\ \max\left(\underline{x}\underline{y},\ \underline{x}\overline{y},\ \overline{x}\underline{y},\ \overline{x}\overline{y}\right)\right] \\ \left[\underline{x},\ \overline{x}\right] / \left[\underline{y},\ \overline{y}\right] &= \left[\underline{x},\ \overline{x}\right] \cdot \left[\frac{1}{\overline{y}},\ \frac{1}{\underline{y}}\right]. \end{aligned} \tag{3.15}$$
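The four operations of (3.15) can be implemented directly. The following minimal sketch (the class name and structure are my own, not from the thesis) assumes, as stated above, that the divisor interval does not contain zero.

```python
# Minimal interval type implementing the four basic operations of (3.15).
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other: "Interval") -> "Interval":
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other: "Interval") -> "Interval":
        # All endpoint combinations have to be considered, cf. (3.14).
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __truediv__(self, other: "Interval") -> "Interval":
        # Division requires 0 not to be contained in the divisor.
        assert other.lo * other.hi > 0, "0 must not be in the divisor"
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

print(Interval(1, 2) * Interval(-3, 4))  # Interval(lo=-6, hi=8)
```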

It is shown in [Apo67] that the associative and commutative properties hold for interval values as well. However, the distributive law is not applicable anymore and needs to be changed to

$$x \cdot (y+z) \subseteq x \cdot y + x \cdot z \tag{3.16}$$

which is known as the subdistributive property for the interval values $\mathbf{x}$, $\mathbf{y}$ and $\mathbf{z}$.

While evaluating an expression, every appearance of an interval variable is treated individually as if it was independent from its other occurrences. Multiple occurrences of the same variable thus lead to a widening of the enclosure. This property is called dependency effect and is illustrated in Example 3.1. One approach to mitigate the dependency effect is to reformulate the expression such that each variable occurs only once, if possible.

#### Example 3.1:

Assume the function

$$f\left(x\right) = 1 + \frac{1}{x}.\tag{3.17}$$

The resulting enclosure for the interval $\mathbf{x} = [1, 3]$ can be calculated straightforwardly using the interval arithmetic definitions of (3.15) to $f(\mathbf{x}) = \left[\frac{4}{3}, 2\right]$, which matches the true range of the function within the interval. It is depicted by the blue dotted frame in Fig. 3.2. If (3.17) is reformulated such that there are multiple occurrences of $x$, e.g.

$$
\tilde{f}\left(x\right) = \frac{x+1}{x} \tag{3.18}
$$

the interval arithmetic evaluation yields $\tilde{f}(\mathbf{x}) = \left[\frac{2}{3}, 4\right]$. It can be seen that this is a large overestimation of the true values of the function within $\mathbf{x}$, depicted by the dashed frame in Fig. 3.2.

Figure 3.2: Dependency effect based on $f(\mathbf{x})$ and $\tilde{f}(\mathbf{x})$
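Example 3.1 can be reproduced numerically; the helper functions below are an illustrative sketch of the operations in (3.15), not code from the thesis.

```python
# Dependency effect of Example 3.1: evaluating 1 + 1/x is tight, while
# (x + 1)/x treats both occurrences of x independently and widens.
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def div(x, y):  # assumes 0 is not contained in y
    c = [x[0] / y[0], x[0] / y[1], x[1] / y[0], x[1] / y[1]]
    return (min(c), max(c))

x = (1.0, 3.0)
one = (1.0, 1.0)

f = add(one, div(one, x))        # f(x) = 1 + 1/x   -> [4/3, 2]
f_tilde = div(add(x, one), x)    # f~(x) = (x+1)/x  -> [2/3, 4]
print(f, f_tilde)
```

The second enclosure contains the first, in line with the overestimation shown by the dashed frame in Fig. 3.2.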

Another effect occurring with interval calculations is the wrapping effect. This effect is caused by iterative calculations based on previous overestimations. Such iterative calculations are e.g. necessary to solve an initial value problem or to evaluate a state space equation. An example of the wrapping effect is given by the initial value problem of Moore [Bau87] and is depicted in Example 3.2.

#### Example 3.2:

Assume the initial value problem for the differential equation

$$
\dot{Y}\left(t\right) = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} Y(t) \tag{3.19}
$$

with $Y(0) = \left[ [-\epsilon, \epsilon], [1 - \epsilon, 1 + \epsilon] \right]^T$, $\epsilon > 0$. When the equation is evaluated, the resulting solution set needs to be framed by axis parallel enclosures after each step. The solution sets for $t\_k = k \cdot \Delta t$, $\Delta t = \frac{\pi}{6}$, $k \in \{0, 1, 2, 3\}$ are depicted in Fig. 3.3. It can be seen that the overestimation is continually increasing, as the inherited overestimation is passed on and used as a base for further calculations.

Figure 3.3: Example of the wrapping effect using the initial value problem of Moore [Bau87]
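The growth of the axis-parallel enclosure in Example 3.2 can be reproduced with a short sketch; the step angle of $\pi/6$ follows the example, while the value of $\epsilon$ is an assumed illustrative choice.

```python
# Wrapping effect of Example 3.2: the exact flow of (3.19) rotates the
# initial box, but after each step the rotated set is wrapped into an
# axis-parallel box, so the enclosure keeps growing.
import math

def rotate_and_wrap(box, angle):
    """Rotate the corners of an axis-parallel box, then wrap the result."""
    (x_lo, x_hi), (y_lo, y_hi) = box
    corners = [(x, y) for x in (x_lo, x_hi) for y in (y_lo, y_hi)]
    rotated = [(x * math.cos(angle) + y * math.sin(angle),
                -x * math.sin(angle) + y * math.cos(angle))
               for x, y in corners]
    xs, ys = zip(*rotated)
    return ((min(xs), max(xs)), (min(ys), max(ys)))

eps = 0.1  # assumed size of the initial value set Y(0)
box = ((-eps, eps), (1.0 - eps, 1.0 + eps))

widths = []
for _ in range(3):  # three steps of Delta t = pi/6 as in Fig. 3.3
    box = rotate_and_wrap(box, math.pi / 6)
    widths.append(box[0][1] - box[0][0])

print(widths)  # strictly increasing enclosure widths
```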

The wrapping effect can reach a serious extent even after only one iteration. A minimal example illustrating the extent of the problem after two steps is given in Example 3.3.

#### Example 3.3:

This example clarifies the effect of interval calculations in the setting of a proportional gain system with unknown gain $p$:

$$
u \cdot p = y.\tag{3.20}
$$

The system setup is depicted in Fig. 3.4. The input and output ranges of the system are given and can be included in the intervals $\mathbf{u} = [2, 3]$ and $\mathbf{y} = [4, 9]$. The goal is to calculate the gain $\mathbf{p}$ that maps all possible input values $u \in \mathbf{u}$ to the specified output range $\mathbf{y}$.

Figure 3.4: Proportional gain system with proper solution

Using the introduced interval arithmetic calculations leads to

$$\begin{aligned} \boldsymbol{p} &= \boldsymbol{y}/\boldsymbol{u} \\ &= \left[ \underline{y}, \ \overline{y} \right] \cdot \left[ \frac{1}{\overline{u}}, \ \frac{1}{\underline{u}} \right] \\ &= \left[ \frac{4}{3}, \ \frac{9}{2} \right] \\ &\approx \left[ 1.3, \ 4.5 \right]. \end{aligned} \tag{3.21}$$

Re-substituting $\boldsymbol{p}$ into the system equation yields

$$\begin{aligned} \boldsymbol{y} &= \boldsymbol{p} \cdot \boldsymbol{u} \\ &= [1.3, \ 4.5] \cdot [2, \ 3] \\ &= [2.6, \ 13.5] \neq [4, \ 9] \end{aligned} \tag{3.22}$$

which is a strong overestimation of the genuine output range.
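The calculation of Example 3.3 can be replayed numerically; `mul` and `div` below are illustrative sketches of the operations in (3.15).

```python
# Re-substitution of Example 3.3: dividing y by u and multiplying the
# result with u again blows the range up from [4, 9] to [8/3, 13.5].
def mul(x, y):
    c = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(c), max(c))

def div(x, y):  # assumes 0 is not contained in y
    return mul(x, (1.0 / y[1], 1.0 / y[0]))

u = (2.0, 3.0)
y = (4.0, 9.0)

p = div(y, u)        # outer enclosure [4/3, 9/2] of (3.21)
y_back = mul(p, u)   # overestimated output range of (3.22)
print(p, y_back)
```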

The example shows that the system parameter calculated from input and output ranges cannot be used to reason about the parameter set that is suitable to map the given input onto the given output. The wrapping effect is caused by considering the combination of the extreme values of both intervals. Nevertheless, when regarding the task at hand in Example 3.3, the goal is not to find all possible gains connecting the two intervals but to find those gains reasonably connecting "the most" elements of the intervals. This slight but very important change is illustrated in Example 3.4.

#### Example 3.4:

Assume the setting of Example 3.3. The parameter interval is calculated as before using $\boldsymbol{p} = \boldsymbol{y}/\boldsymbol{u}$. The question is now how many pairs $(u, y)$ with $u \in \boldsymbol{u}$ and $y \in \boldsymbol{y}$ exist for each parameter $p \in \boldsymbol{p}$. Therefore the intervals $\boldsymbol{u}$ and $\boldsymbol{y}$ are divided into equidistant parts of $\Delta u = \Delta y = 0.0001$. The resulting 10′001 discrete samples of $\boldsymbol{u}$ are combined with the resulting 50′001 samples of $\boldsymbol{y}$ to calculate the connecting parameter $p = y/u$. The histogram formed by the 500′060′001 values of $p$ is depicted in Fig. 3.5. It can be seen that the extreme values of the outer enclosure of the solution $\boldsymbol{p} = [1.3, 4.5]$ are only connected by a single input-output pair each. On the other hand, there is a plateau over $\boldsymbol{p}\_i = [2, 3]$ that connects a nearly constant number of input-output pairs. Substituting this interval value into the system equation leads to

$$\begin{aligned} y\_i &= p\_i u \\ &= \begin{bmatrix} 2, \ 3 \end{bmatrix} \begin{bmatrix} 2, \ 3 \end{bmatrix} \\ &= \begin{bmatrix} 4, \ 9 \end{bmatrix} = y \end{aligned} \tag{3.23}$$

which is exactly the given output range. The interval $\boldsymbol{p}\_i$ is an inner enclosure of the solution set of $\boldsymbol{p} = \boldsymbol{y}/\boldsymbol{u}$.

Figure 3.5: Distribution of parameters in the proper case

The plateau in Fig. 3.5 contains those parameters that are able to map any $u \in \boldsymbol{u}$ to a value $y \in \boldsymbol{y}$. Note that not necessarily all values $y \in \boldsymbol{y}$ have to be met by $\boldsymbol{p}\_i$. The contour of the histogram given in Fig. 3.5 can also be calculated analytically. The derivation of the exact distribution is given in Appendix B.

The property leading to the wrapping effect displayed in Example 3.3 and Example 3.4 is the non-existence of an inverse element in classical interval arithmetic [Apo67]. The inverse element in the real numbers is defined with respect to an operation and denotes an element that maps itself onto the neutral element of this operation (see [Bro08, p. 340]).

Thereby the neutral element is also defined with respect to the same operation and denotes an element that maps each other element onto itself (see [Bro08, p. 339]).

There are neutral elements in classical interval arithmetic. For example the neutral element for addition is given by $\mathbf{e}\_a = [0, 0]$ and for multiplication by $\mathbf{e}\_m = [1, 1]$. Applying the neutral elements to an arbitrary interval value $\boldsymbol{r} = [\underline{r}, \overline{r}]$ leads to

$$\mathbf{r} + \mathbf{e}\_a = [\underline{r} + 0, \,\, \overline{r} + 0] = [\underline{r}, \,\, \overline{r}] \tag{3.24}$$

$$
\boldsymbol{r} \cdot \mathbf{e}\_m = \begin{bmatrix} \underline{r} \cdot 1, \ \overline{r} \cdot 1 \end{bmatrix} = \begin{bmatrix} \underline{r}, \ \overline{r} \end{bmatrix} . \tag{3.25}
$$

However, in general there is no inverse element as can be seen in the following:

$$
\boldsymbol{r} + (-\boldsymbol{r}) = [\underline{r} + (-\overline{r}), \; \overline{r} + (-\underline{r})] \neq \mathbf{e}\_a \text{ if } \underline{r} \neq \overline{r} \tag{3.26}
$$

$$
\boldsymbol{r} \cdot \left(\frac{1}{\boldsymbol{r}}\right) = \left[\frac{\underline{r}}{\overline{r}}, \frac{\overline{r}}{\underline{r}}\right] \neq \mathbf{e}\_m \text{ if } \underline{r} \neq \overline{r}. \tag{3.27}
$$

The inequalities in (3.26) and (3.27) become equalities if and only if $\underline{r} = \overline{r}$, which means that $\boldsymbol{r}$ is a degenerated point real interval [Apo67].
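The non-existence of inverse elements in (3.26) and (3.27) is easy to check numerically; the tuple-based helpers are an illustrative sketch, and the interval is assumed positive so the simple reciprocal applies.

```python
# r + (-r) and r * (1/r) do not yield the neutral elements [0, 0] and
# [1, 1] for a non-degenerate interval r.
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    c = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(c), max(c))

r = (2.0, 5.0)
neg_r = (-r[1], -r[0])            # the interval -r = [-5, -2]
inv_r = (1.0 / r[1], 1.0 / r[0])  # the interval 1/r = [1/5, 1/2]

print(add(r, neg_r))  # (-3.0, 3.0), not the neutral element [0, 0]
print(mul(r, inv_r))  # (0.4, 2.5), not the neutral element [1, 1]
```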

## 3.1.2 Kaucher Interval Arithmetic

It is beneficial to define an extension to interval arithmetic that provides the existence of an inverse element for all arithmetic operations. By using Kaucher interval arithmetic [Kau80], the set of proper intervals can be extended by the introduction of a new set of so called improper intervals. These improper intervals are defined complementary to classical intervals:

$$\mathbb{K}\mathbb{R} = \left\{ \mathbf{x} = [\underline{x}, \overline{x}] \mid \overline{x} < \underline{x} \text{ and } \underline{x}, \overline{x} \in \mathbb{R} \right\}. \tag{3.28}$$

The set of all proper and improper intervals is given by IR\* = IR ∪ KR and is depicted in Fig. 3.6. The set of point real intervals is depicted as the diagonal line in the figure. The set of proper intervals IR is formed by the half plane above the point real line. It can be seen that the improper intervals KR complete IR\* by covering the half plane below the point real line.

The definitions of the basic arithmetic operations $\star \in \{+, -, \cdot, /\}$ need to be adapted to hold for classical as well as for Kaucher interval arithmetic [Sha02].

Therefore, the definition of the negative part $x^{\ominus}$ and the positive part $x^{\oplus}$ of a real number $x$ is given by

$$x^{\ominus} = \max\left(-x, 0\right) \quad \text{and} \quad x^{\oplus} = \max\left(x, 0\right). \tag{3.29}$$

The four classic operations can thus be written as follows:

$$\mathbf{x} + \mathbf{y} = \begin{bmatrix} \underline{x} + \underline{y}, \ \overline{x} + \overline{y} \end{bmatrix} \tag{3.30}$$

$$\mathbf{x} - \mathbf{y} = \begin{bmatrix} \underline{x} - \overline{y}, \ \overline{x} - \underline{y} \end{bmatrix} \tag{3.31}$$

$$\begin{aligned} \boldsymbol{x} \cdot \boldsymbol{y} = \left[ \max \left( \underline{x}^{\oplus} \underline{y}^{\oplus}, \, \overline{x}^{\ominus} \overline{y}^{\ominus} \right) - \max \left( \overline{x}^{\oplus} \underline{y}^{\ominus}, \, \underline{x}^{\ominus} \overline{y}^{\oplus} \right), \right. \\ \left. \max \left( \overline{x}^{\oplus} \overline{y}^{\oplus}, \, \underline{x}^{\ominus} \underline{y}^{\ominus} \right) - \max \left( \underline{x}^{\oplus} \overline{y}^{\ominus}, \, \overline{x}^{\ominus} \underline{y}^{\oplus} \right) \right] \end{aligned} \tag{3.32}$$

$$\boldsymbol{x}/\boldsymbol{y} = \boldsymbol{x} \cdot \left[1/\overline{y}, \ 1/\underline{y}\right], \text{ for } \underline{y} \cdot \overline{y} > 0. \tag{3.33}$$

Figure 3.6: Geometric interpretation of IR\*. The diagonal line represents point real values (based on [Sai14, p. 18])

In addition the two unary operators

$$\text{opp}\left( [\underline{x}, \,\,\overline{x}] \right) = [-\underline{x}, \,\,-\overline{x}] \tag{3.34}$$

$$\text{dual}\left(\left[\underline{x},\;\overline{x}\right]\right) = \left[\overline{x},\;\underline{x}\right].\tag{3.35}$$

are introduced to toggle between proper and improper intervals. Using these operators leads to the definition of inverse elements in Kaucher interval arithmetic:

$$\boldsymbol{x} + \text{opp}\left(\boldsymbol{x}\right) = \left[\underline{x}, \; \overline{x}\right] + \left[-\underline{x}, \; -\overline{x}\right] = \left[0, \; 0\right] =: \boldsymbol{0}\tag{3.36}$$

$$x / \text{dual} \left( x \right) = \left[ \underline{x}, \; \overline{x} \right] \cdot \left[ 1 / \underline{x}, \; 1 / \overline{x} \right] = \left[ 1, \; 1 \right] =: 1. \tag{3.37}$$

Those comply with the classic interval analysis definitions if all used intervals are proper [Sha02].
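For intervals with positive endpoints, the inverse elements (3.36) and (3.37) can be checked with a short sketch; the endpoint-wise product used below is a simplification of the full Kaucher multiplication (3.32) that is valid only in this positive case, and the helper names are my own.

```python
# Kaucher inverse elements: x + opp(x) = [0, 0] and x / dual(x) = [1, 1].
def opp(x):
    return (-x[0], -x[1])

def dual(x):
    return (x[1], x[0])

def kaucher_add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def kaucher_mul_pos(x, y):
    # Endpoint-wise product, valid when all endpoints are positive.
    return (x[0] * y[0], x[1] * y[1])

def kaucher_div_pos(x, y):
    # x / y = x * [1/y_hi, 1/y_lo], cf. (3.33), for positive endpoints.
    return kaucher_mul_pos(x, (1.0 / y[1], 1.0 / y[0]))

x = (2.0, 5.0)
print(kaucher_add(x, opp(x)))       # (0.0, 0.0) -> additive inverse
print(kaucher_div_pos(x, dual(x)))  # (1.0, 1.0) -> multiplicative inverse
```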

It is hard to imagine the nature of an improper interval as it is neither empty nor does it include the same values as a proper interval with inverse borders. A possibility to grasp an idea of the nature of an improper interval is given in Example 3.5.

#### Example 3.5:

Assume the proportional gain setting of Fig. 3.7, which is similar to Example 3.3 but with a different output range $\boldsymbol{y}$.

Figure 3.7: Proportional gain system with improper solution

The question is again which values can be used as gain $p \in \boldsymbol{p}$ to map all input values $\boldsymbol{u} = [2, 3]$ to the output range $\boldsymbol{y} = [4, 5]$. The intervals are again divided into equidistant parts of $\Delta u = \Delta y = 0.0001$. The resulting 10′001 discrete samples of $\boldsymbol{u}$ are combined with the 10′001 samples of $\boldsymbol{y}$ to calculate the connecting parameter $p = y/u$. The resulting 100′020′002 parameter values are used to set up the histogram given in Fig. 3.8.

Figure 3.8: Distribution of parameters in the improper case

In this case it can be seen that there is no plateau in the histogram. However, there are two edges at $p\_1 = 5/3$ and $p\_2 = 2$. Substituting $p\_1$ into the system equation leads to

$$\begin{aligned} y\_{p\_1} &= p\_1 u \\ &= 5/3 \, [2, \, 3] \\ &\approx [3.33, \, 5] \neq [4, \, 5] \,. \end{aligned} \tag{3.38}$$

The evaluation for $p\_2$ yields

$$\begin{aligned} y\_{p\_2} &= p\_2 u \\ &= 2 \cdot [2, \ 3] \\ &= [4, \ 6] \neq [4, \ 5] . \end{aligned} \tag{3.39}$$

It can be seen that both parameters are able to map some values of the input range into the output range. Nevertheless there is not a single parameter that can map all values of $\boldsymbol{u}$ to $\boldsymbol{y}$. This observation can be combined with the fact that the inner enclosure is an improper interval:

$$\begin{aligned} \boldsymbol{p} &= \boldsymbol{y}/\text{dual}(\boldsymbol{u}) \\ &= \left[ \underline{y}, \, \overline{y} \right] \cdot \left[ \frac{1}{\underline{u}}, \,\, \frac{1}{\overline{u}} \right] \\ &= \left[ \frac{4}{2}, \,\, \frac{5}{3} \right] \\ &\approx [2, \,\, 1.7] \end{aligned} \tag{3.40}$$

Therefore improper intervals can be interpreted as solutions of a setting with "eroded plateau".
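The inner solution (3.40) of Example 3.5 can be reproduced with the positive-endpoint Kaucher division sketched below (an illustrative simplification, valid only for intervals with positive endpoints).

```python
# Inner solution of Example 3.5: p = y / dual(u) yields the improper
# interval [2, 5/3], whose lower bound exceeds its upper bound.
def dual(x):
    return (x[1], x[0])

def div_pos(x, y):
    # x * [1/y_hi, 1/y_lo] evaluated endpoint-wise (positive endpoints).
    return (x[0] / y[1], x[1] / y[0])

u = (2.0, 3.0)
y = (4.0, 5.0)

p = div_pos(y, dual(u))
print(p)  # lower bound 2.0 exceeds upper bound 5/3 -> improper interval
```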

## 3.1.3 Interval Type Linear Equation Systems

The introduced interval arithmetic considerations can now be extended to a vector matrix notation. Assume there are measurement values $\langle \boldsymbol{u}\_k \rangle\_{k=1}^{T} = [\boldsymbol{u}\_1, \boldsymbol{u}\_2, \ldots, \boldsymbol{u}\_T]$ and $\langle \boldsymbol{y}\_k \rangle\_{k=1}^{T} = [\boldsymbol{y}\_1, \boldsymbol{y}\_2, \ldots, \boldsymbol{y}\_T]$, containing a valid range for each sample $k \in \{1, 2, \ldots, T\}$. Each suitable parameter $p \in \boldsymbol{p}$ has to comply with all input and all output ranges. This problem can be stated as an interval type linear equation system

$$\begin{cases} \begin{array}{rcl} u\_1 p &=& y\_1 \\ u\_2 p &=& y\_2 \\ \vdots &=& \vdots \\ u\_T p &=& y\_T \end{array} \end{cases} \tag{3.41}$$

or, more generally, for vectorial input $\left[ u^{(1)}, \dots, u^{(n)} \right]$, scalar output $y$ and parameters $\left[ p^{(1)}, \dots, p^{(n)} \right]$:

$$\begin{cases} \begin{array}{ccccccccc} u\_1^{(1)} p^{(1)} & + & u\_1^{(2)} p^{(2)} & + & \dots & + & u\_1^{(n)} p^{(n)} & = & y\_1 \\ u\_2^{(1)} p^{(1)} & + & u\_2^{(2)} p^{(2)} & + & \dots & + & u\_2^{(n)} p^{(n)} & = & y\_2 \\ \vdots & \vdots & & \vdots & & \vdots & & \vdots \\ u\_T^{(1)} p^{(1)} & + & u\_T^{(2)} p^{(2)} & + & \dots & + & u\_T^{(n)} p^{(n)} & = & y\_T \end{array} \end{cases} \tag{3.42}$$

The variables are used to set up the regressor matrix $\mathcal{A} \in \mathbb{IR}^{(T \times n)}$, the measurement vector $\mathcal{B} \in \mathbb{IR}^{(T \times 1)}$ and the respective parameter vector $X \in \mathbb{IR}^{(n \times 1)}$ with

$$\mathbf{A} = \left(\mathbf{a}^{(i,j)}\right)\_{1 \le i \le T, \ 1 \le j \le n} = \left(\mathbf{u}\_k^{(j)}\right)\_{1 \le k \le T, \ 1 \le j \le n} \tag{3.43}$$

$$\mathbf{B} = \left(\mathbf{b}^{(i)}\right)\_{1 \le i \le T} \qquad \qquad = \left(\mathbf{y}\_k\right)\_{1 \le k \le T} \tag{3.44}$$

$$X = \left(x^{(j)}\right)\_{1 \le j \le n} \qquad = \left(\mathbf{p}^{(j)}\right)\_{1 \le j \le n} \tag{3.45}$$

The interval type linear equation system can thus be stated as

$$A \cdot X = B.\tag{3.46}$$

This system can be interpreted as the collection of all point real linear equation systems that can be formed from the enclosed interval values [Sha96].

Equation systems with a regressor matrix of dimension $T = n$ are called square. Dimension $T < n$ stands for an underdetermined and dimension $T > n$ results in an overdetermined equation system. Underdetermined systems do not carry enough information to solve the problem unambiguously. This thesis focuses on overdetermined systems, which is the most relevant case when regarding reasonable measurement times $T$ and system orders $n$.

The inverse of a square point real matrix $A$ is defined if the matrix is non-singular, i.e. $A^{-1}$ exists if $\det(A) \neq 0$. Analogously, a square interval type matrix $\mathcal{A}$ is non-singular if all point real matrices contained in the interval matrix are non-singular, i.e. $\det(A) \neq 0 \ \forall A \in \mathcal{A}$ [Sha14].

For overdetermined point real systems the criterion changes to a rank condition. The point real matrix $A \in \mathbb{R}^{(T \times n)}$ with $T > n$ is said to have full rank if $\operatorname{rank}(A) = n$. For interval type overdetermined systems, this condition again changes to $\operatorname{rank}(A) = n \ \forall A \in \mathcal{A}$. This means that all point real matrices included in the interval matrix need to show full rank. Determining the rank of an interval type matrix is in general an NP-hard problem [Sha14]. Nevertheless, several criteria to check if an interval matrix has full rank were collected in [Sta16] based on [Sha14]. An introduction of the most relevant ones is given in Appendix C.
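For the square case one widely used sufficient condition, known as strong regularity, can be sketched directly: the interval matrix $\mathcal{A} = \langle A\_c, A\_\Delta \rangle$ is regular if the spectral radius of $|A\_c^{-1}| A\_\Delta$ is smaller than one. The following sketch (not necessarily one of the criteria of Appendix C) checks this for the interval matrix of Example 3.6:

```python
import numpy as np

def strongly_regular(A_c, A_delta):
    """Sufficient regularity test for the square interval matrix
    <A_c, A_delta>: spectral radius of |A_c^-1| @ A_delta < 1."""
    M = np.abs(np.linalg.inv(A_c)) @ A_delta
    return float(np.max(np.abs(np.linalg.eigvals(M)))) < 1.0

# Interval matrix of Example 3.6: entries [2, 4], [-2, 1]; [-1, 2], [2, 4]
A_c     = np.array([[3.0, -0.5], [0.5, 3.0]])   # midpoint matrix
A_delta = np.array([[1.0,  1.5], [1.5, 1.0]])   # radius matrix

print(strongly_regular(A_c, A_delta))   # True: every contained matrix is non-singular
```

With substantially larger radii the test fails, either because the interval matrix then contains singular point matrices or because the sufficient criterion is no longer conclusive.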

The full rank condition is connected with persistent excitation according to Assumption 2.1. If the used input signal provides persistent excitation, the regressor matrix has full rank [Sha14][Lak14]. This leads to Assumption 3.1.

#### Assumption 3.1 (Rank of the Regressor Matrix)

The interval type regressor matrix $\mathcal{A}$ has full rank according to [Sha96].

Throughout this thesis it is assumed that all specifications and measurements lead to an interval regressor matrix that has full rank.

In general there is no unique, component wise point real solution vector for such an interval linear equation system. Instead, (3.46) is solved by a set of point real solutions $\sum$. The elements $X \in \sum$ of the solution set depend on the specific interpretation of (3.46). This interpretation is done by the interval quantifiers $\forall$ and $\exists$ as explained in [Sha02]. The notation of

$$\left[\underline{a}, \; \overline{a}\right]^{\forall} x = \left[\underline{b}, \; \overline{b}\right]^{\exists} \tag{3.47}$$

means that $x$ has to solve (3.47) for all elements of $\left\{ a \in \mathbb{R} \,\middle|\, a \in \left[\underline{a}, \overline{a}\right] \right\}$ but only for at least one specific element of $\left\{ b \in \mathbb{R} \,\middle|\, b \in \left[\underline{b}, \overline{b}\right] \right\}$ [Sha02].

Each element of a vector or matrix can be assigned an individual quantifier, i.e. it is possible to precisely define a specific solution set for the interval type matrix equations. Vectors and matrices with assigned quantifiers are denoted as $\boldsymbol{B}^{\mathfrak{C}}$ or $\boldsymbol{A}^{\mathfrak{C}}$, respectively. A vector or matrix containing only the elements with assigned $\forall$ quantifier is denoted by $\boldsymbol{B}^{\forall}$ and $\boldsymbol{A}^{\forall}$; the elements assigned with an $\exists$ quantifier are given by $\boldsymbol{B}^{\exists}$ and $\boldsymbol{A}^{\exists}$. To split an assigned vector $\boldsymbol{B}^{\mathfrak{C}}$ or a matrix $\boldsymbol{A}^{\mathfrak{C}}$ depending on the quantifiers, the dualization of the intervals as given in (3.37) has to be used.

According to [Sha02] the splitting is different for matrices and vectors and it is given by the following relations:

$$\text{Vector: } \boldsymbol{B}^{\mathfrak{C}} \coloneqq \text{dual}\left(\boldsymbol{B}^{\forall}\right) + \boldsymbol{B}^{\exists} \tag{3.48}$$

$$\text{Matrix: } \boldsymbol{A}^{\mathfrak{C}} \coloneqq \boldsymbol{A}^{\forall} + \text{dual}\left(\boldsymbol{A}^{\exists}\right). \tag{3.49}$$

The specific elements of the matrices $\boldsymbol{A}^{\forall}$ and $\boldsymbol{A}^{\exists}$ are given by

$$a^{\forall(i,j)} = \begin{cases} a^{\mathfrak{C}(i,j)} & \text{, if } \mathfrak{C} = \forall \\ 0 & \text{, else} \end{cases} \tag{3.50a}$$

$$a^{\exists(i,j)} = \begin{cases} a^{\mathfrak{C}(i,j)} & \text{, if } \mathfrak{C} = \exists \\ 0 & \text{, else.} \end{cases} \tag{3.50b}$$

The elements of the vectors $\boldsymbol{B}^{\forall}$ and $\boldsymbol{B}^{\exists}$ are given by

$$b^{\forall(i)} = \begin{cases} b^{\mathfrak{C}(i)} & \text{, if } \mathfrak{C} = \forall \\ 0 & \text{, else} \end{cases} \tag{3.51a}$$

$$b^{\exists(i)} = \begin{cases} b^{\mathfrak{C}(i)} & \text{, if } \mathfrak{C} = \exists \\ 0 & \text{, else.} \end{cases} \tag{3.51b}$$
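The splitting rules (3.48)–(3.51) can be illustrated with a small sketch. Quantifiers are attached to interval bounds as plain tags ('A' for ∀, 'E' for ∃ — an ad hoc encoding chosen for this illustration); the ∀ part of a vector is dualized before the parts are recombined:

```python
def dual(iv):
    """dual([lo, hi]) = [hi, lo]: swaps the bounds of an interval."""
    return (iv[1], iv[0])

# Interval vector with quantifier assignment: b1 is 'forall', b2 is 'exists'
B = [((4.0, 5.0), 'A'), ((1.0, 2.0), 'E')]

# Split into the forall and exists parts, cf. (3.51a)/(3.51b)
B_forall = [iv if q == 'A' else (0.0, 0.0) for iv, q in B]
B_exists = [iv if q == 'E' else (0.0, 0.0) for iv, q in B]

# Recombine according to the vector rule (3.48): dualize the forall part
B_assigned = [tuple(f + e for f, e in zip(dual(fa), ex))
              for fa, ex in zip(B_forall, B_exists)]

print(B_assigned)   # the forall entry becomes an improper interval
```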

The most general solution set is given by a mixed assignment of both quantifiers to the interval matrix $\mathcal{A}$ as well as to the interval vector $\mathcal{B}$. The resulting AE-solution<sup>3</sup> set $\sum\_{AE}$ according to [Hla14] is given by

$$\begin{aligned} \sum\_{AE} \left( \mathcal{A}^{\mathfrak{C}}, \mathcal{B}^{\mathfrak{C}} \right) &= \Big\{ X \in \mathbb{R}^{n} \, \Big| \, \\ & \left( \forall A^{\forall} \in \mathcal{A}^{\forall} \right) \wedge \left( \forall B^{\forall} \in \mathcal{B}^{\forall} \right) \wedge \left( \exists A^{\exists} \in \mathcal{A}^{\exists} \right) \wedge \left( \exists B^{\exists} \in \mathcal{B}^{\exists} \right) : \left( \left( A^{\forall} + A^{\exists} \right) X = B^{\forall} + B^{\exists} \right) \Big\}. \end{aligned} \tag{3.52}$$

<sup>3</sup> The genuine notation of [Hla14, p. 2] is:

$X \in \mathbb{R}^n$ is an AE-solution if $\forall A^{\forall} \in \mathcal{A}^{\forall}, \forall B^{\forall} \in \mathcal{B}^{\forall}, \exists A^{\exists} \in \mathcal{A}^{\exists}, \exists B^{\exists} \in \mathcal{B}^{\exists} : \left(A^{\forall} + A^{\exists}\right) X = B^{\forall} + B^{\exists}$. This notation is slightly adapted for the sake of readability.

Based on the general AE-solution it is possible to define four distinct solution sets as given in [Sha96][Fie06][Hla14].

#### Definition 3.1 (United Solution Set $\sum\_{\exists\exists}$)

The united solution set is formed by all solutions of any of the point real systems $A \cdot X = B$ with $A \in \mathcal{A}$ and $B \in \mathcal{B}$ that are included in the interval system:

$$\sum\_{\exists\exists} (\mathcal{A}, \mathcal{B}) := \left\{ X \in \mathbb{R}^n \, | \, (\exists A \in \mathcal{A}) \wedge (\exists B \in \mathcal{B}) : (A \cdot X = B) \right\}. \tag{3.53}$$

Note that not all $A \in \mathcal{A}$ can match any element $B \in \mathcal{B}$ by multiplication with any $X \in \sum\_{\exists\exists}$, and that not all $B \in \mathcal{B}$ can be calculated using any $X \in \sum\_{\exists\exists}$ and all available $A \in \mathcal{A}$. Furthermore, the set $\sum\_{\exists\exists}$ is not necessarily connected and not necessarily constrained by borders parallel to the coordinate axes. Using an enclosing interval $X \supseteq \sum\_{\exists\exists}$ will likely create spurious solutions.

#### Definition 3.2 (Tolerable Solution Set $\sum\_{\forall\exists}$)

The tolerable solution set includes all values of $X$ that solve the interval type linear equation system regardless of the chosen point real matrix $A \in \mathcal{A}$. This means the solution holds for all included point real matrices:

$$\sum\_{\forall\exists} (\mathcal{A}, \mathcal{B}) := \left\{ X \in \mathbb{R}^n \, | \, (\forall A \in \mathcal{A}) \wedge (\exists B \in \mathcal{B}) : (A \cdot X = B) \right\}. \tag{3.54}$$

Note that not all $B \in \mathcal{B}$ can be calculated using any $X \in \sum\_{\forall\exists}$ and all available $A \in \mathcal{A}$. The set $\sum\_{\forall\exists}$ is not necessarily connected and not necessarily constrained by borders parallel to the coordinate axes. Using an enclosing interval $X \supseteq \sum\_{\forall\exists}$ will likely create spurious solutions.

The controllable solution set applies the same principle to the measurement vector $\mathcal{B}$.

#### Definition 3.3 (Controllable Solution Set $\sum\_{\exists\forall}$)

The elements of the controllable solution set are feasible regardless of the chosen point real measurement vector $B \in \mathcal{B}$. This means there is a suitable regressor matrix $A \in \mathcal{A}$ for all possible point real measurement vectors:

$$\sum\_{\exists\forall} (\mathcal{A}, \mathcal{B}) := \left\{ X \in \mathbb{R}^n \, | \, (\exists A \in \mathcal{A}) \wedge (\forall B \in \mathcal{B}) : (A \cdot X = B) \right\}. \tag{3.55}$$

Note that not all $A \in \mathcal{A}$ can match an element $B \in \mathcal{B}$ by multiplication with any $X \in \sum\_{\exists\forall}$. The set $\sum\_{\exists\forall}$ is not necessarily connected and not necessarily constrained by borders parallel to the coordinate axes. Using an enclosing interval $X \supseteq \sum\_{\exists\forall}$ will likely create spurious solutions.

A very strict criterion is given by the strong solution set.

#### Definition 3.4 (Strong Solution Set $\sum\_{\forall\forall}$)

The strong solution set includes only parameters that solve the interval type linear equation system for any regressor matrix and any measurement vector:

$$\sum\_{\forall\forall} (\mathcal{A}, \mathcal{B}) := \left\{ X \in \mathbb{R}^n \, | \, (\forall A \in \mathcal{A}) \wedge (\forall B \in \mathcal{B}) : (A \cdot X = B) \right\}. \tag{3.56}$$

None of the limitations of the previous solution sets is necessary for the strong solution. Nevertheless, the set $\sum\_{\forall\forall}$ is not necessarily connected and not necessarily constrained by borders parallel to the coordinate axes. Using an enclosing interval $X \supseteq \sum\_{\forall\forall}$ will likely create spurious solutions.

The algebraic solution differs in its definition as it is not quantifier based.

#### Definition 3.5 (Algebraic Solution Set $\sum\_{a}$)

The algebraic solution is defined by the interval type vectors $X\_a$ that solve the interval type linear equation system straightforwardly:

$$\sum\_{a} (A, B) := \left\{ X\_a \in \mathbb{I} \mathbb{R}^n \, | \, (A \cdot X\_a = B) \right\}. \tag{3.57}$$

Even though the elements of the algebraic solution are constrained parallel to the coordinate axes, the solution is ambiguous, i.e. there might be several or no solution vectors that fulfill the equation [Kup95].

The different solution sets are related as they are subsets of each other. The united solution set is a superset of the algebraic solution set [Kup95], as well as of the tolerable and the controllable solution set [Sha96]

$$
\sum\_{a} (\mathcal{A}, \mathcal{B}) \subseteq \sum\_{\exists\exists} (\mathcal{A}, \mathcal{B}) \tag{3.58}
$$

$$
\sum\_{\forall\exists} (\mathcal{A}, \mathcal{B}) \subseteq \sum\_{\exists\exists} (\mathcal{A}, \mathcal{B}) \tag{3.59}
$$

$$\sum\_{\exists\forall} (\mathcal{A}, \mathcal{B}) \subseteq \sum\_{\exists\exists} (\mathcal{A}, \mathcal{B})\,. \tag{3.60}$$

The strong solution set on the other hand is a subset of the tolerable as well as of the controllable solution set [Fie06]

$$
\sum\_{\forall \forall} (\mathcal{A}, \mathcal{B}) \subseteq \sum\_{\forall \exists} (\mathcal{A}, \mathcal{B}) \tag{3.61}
$$

$$
\sum\_{\forall \forall} (\mathcal{A}, \mathcal{B}) \subseteq \sum\_{\exists \forall} (\mathcal{A}, \mathcal{B})\,.\tag{3.62}
$$

A visualization of the solution sets and their relations is given in Example 3.6.

#### Example 3.6:

Consider the following 2×2 interval type linear equation system (ILES) taken from [Sha96]:

$$
\begin{pmatrix} [2, 4] & [-2, 1] \\ [-1, 2] & [2, 4] \end{pmatrix} \cdot \mathbf{X} = \begin{pmatrix} [-2, 2] \\ [-2, 2] \end{pmatrix} \cdot \tag{3.63}
$$

The controllable solution set $\sum\_{\exists\forall}$ and the strong solution set $\sum\_{\forall\forall}$ are empty for the ILES (3.63). It can be seen in Fig. 3.9 that the algebraic solution $\sum\_{a}$ is a subset of the tolerable solution set $\sum\_{\forall\exists}$ which is a subset of the united solution $\sum\_{\exists\exists}$. It is also clearly visible that neither the tolerable nor the united solution can be included in classical intervals parallel to the axes without creating spurious solutions.

Figure 3.9: Different solution sets for the interval type linear equation system

The calculation of all given solution sets is computationally expensive, as the calculation of the hulls is NP-hard according to [Hor13]. Even checking whether a solution set is empty is still an NP-complete problem, as shown by [Sha96].

The problem becomes more tractable if it is regarded from a different point of view. Assume a given point real solution candidate $X\_s$. The question is now to determine whether the solution candidate belongs to any of the defined solution sets without calculating the sets explicitly. The approach used in this thesis was introduced by [Bee72] and uses the so-called theorem of Prager-Oettli [Oet64]. The resulting criterion for interval arithmetic problems in Def. 3.6 is used to determine whether $X\_s$ is a member of the united solution set $\sum\_{\exists\exists}$.

#### Definition 3.6 (Theorem of Prager-Oettli)

A given solution candidate $X\_s$ is part of the united solution set $\sum\_{\exists\exists}$, i.e.

$$X\_s \in \sum\_{\exists\exists} \tag{3.64}$$

if and only if $X\_s$ fulfills the inequality

$$|A\_c X\_s - B\_c| \le A\_\Delta |X\_s| + B\_\Delta \tag{3.65}$$

based on center and radius of the regressor matrix $\mathcal{A} = \langle A\_c, A\_\Delta \rangle$ and the measurement vector $\mathcal{B} = \langle B\_c, B\_\Delta \rangle$, given by the interval type measurement values $\langle u\_k \rangle\_{k=1}^{T}$ and $\langle y\_k \rangle\_{k=1}^{T}$ [Bee72, p. 235].

This theorem was extended by [Hla14] to the general AE-solution:

$$X\_s \in \sum\_{AE} \Leftrightarrow \left| A\_c X\_s - B\_c \right| \le \left( A\_\Delta^{\exists} - A\_\Delta^{\forall} \right) \left| X\_s \right| + B\_\Delta^{\exists} - B\_\Delta^{\forall}. \tag{3.66}$$

A solution candidate vector $X\_s$ is part of the AE-solution if and only if (3.66) holds. This criterion can be specialized to fit the four other solution sets as given in Tab. 3.1.

Table 3.1: Conditions for the membership of $X\_s$ to a specific solution set.

| Solution set | Membership condition following from (3.66) |
|---|---|
| United $\sum\_{\exists\exists}$ | $\lvert A\_c X\_s - B\_c \rvert \le A\_\Delta \lvert X\_s \rvert + B\_\Delta$ |
| Tolerable $\sum\_{\forall\exists}$ | $\lvert A\_c X\_s - B\_c \rvert \le -A\_\Delta \lvert X\_s \rvert + B\_\Delta$ |
| Controllable $\sum\_{\exists\forall}$ | $\lvert A\_c X\_s - B\_c \rvert \le A\_\Delta \lvert X\_s \rvert - B\_\Delta$ |
| Strong $\sum\_{\forall\forall}$ | $\lvert A\_c X\_s - B\_c \rvert \le -A\_\Delta \lvert X\_s \rvert - B\_\Delta$ |

The specializations follow from (3.66) by assigning the respective quantifier uniformly to all of $\mathcal{A}$ and all of $\mathcal{B}$.
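These membership conditions can be evaluated in a few lines. The sketch below implements the specialization of (3.66) for uniformly quantified $\mathcal{A}$ and $\mathcal{B}$ and applies it to the scalar example of Section 3.1.2 ($u = [2, 3]$, $y = [4, 5]$); the tags 'A'/'E' for ∀/∃ are an ad hoc encoding chosen for this illustration:

```python
import numpy as np

def member(X, A_c, A_d, B_c, B_d, pattern):
    """Membership test via (3.66), with one quantifier for all of A
    and one for all of B; pattern = (quantifier of A, quantifier of B),
    each 'A' (forall) or 'E' (exists). ('E', 'E') is the united set."""
    sA = 1.0 if pattern[0] == 'E' else -1.0
    sB = 1.0 if pattern[1] == 'E' else -1.0
    lhs = np.abs(A_c @ X - B_c)
    rhs = sA * (A_d @ np.abs(X)) + sB * B_d
    return bool(np.all(lhs <= rhs))

# Scalar example: u = [2, 3] -> <2.5, 0.5>, y = [4, 5] -> <4.5, 0.5>
A_c, A_d = np.array([[2.5]]), np.array([[0.5]])
B_c, B_d = np.array([4.5]), np.array([0.5])

x = np.array([5.0 / 3.0])
print(member(x, A_c, A_d, B_c, B_d, ('E', 'E')))   # True: in the united set
print(member(x, A_c, A_d, B_c, B_d, ('A', 'E')))   # False: not tolerable
```

The outcome matches the discussion of the improper example: the gain $5/3$ maps some input values into the output range (united set), but no gain maps all of them (empty tolerable set).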
Further considerations regarding existence and uniqueness of the solution are only available for the algebraic solution set $\sum\_{a}$. Two approaches for this purpose are sketched in Appendix D.

# 4 Guaranteed Verification of Point Real Systems

The theoretical foundation of the thesis is developed and illustrated in this chapter. To focus on the method itself, a very simple and comprehensible linear time invariant model structure is used. The general principles introduced in this chapter can be extended to other types of system models.

## 4.1 System Setup

#### Definition 4.1 (Linear Time Invariant System)

A discrete time, linear time invariant (LTI) system can be modeled as

$$y\_k = \sum\_{i=1}^{n\_a} a\_i y\_{k-i} + \sum\_{i=1}^{n\_c} c\_i u\_{k-i} \tag{4.1}$$

with the discrete time input $u\_k$ and output $y\_k$, the input and output orders $n\_c$ and $n\_a$, as well as the input parameters $[c\_1, c\_2, \dots, c\_{n\_c}]$ and the output parameters $[a\_1, a\_2, \dots, a\_{n\_a}]$. This modeling approach is also known as AutoRegressive system with eXogenous input (ARX).
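A minimal simulation routine for such an ARX model can be sketched as follows (the function name and the zero initialization are choices made for this illustration; `y_init` must provide the required initial values):

```python
def simulate_arx(a, c, u, y_init=None):
    """Simulate y_k = sum_i a_i*y_{k-i} + sum_i c_i*u_{k-i} (Def. 4.1)
    for 0-indexed sequences; a, c are the output/input parameters."""
    n = max(len(a), len(c))
    y = list(y_init) if y_init is not None else [0.0] * n
    for k in range(n, len(u)):
        yk = sum(a[i] * y[k - 1 - i] for i in range(len(a)))
        yk += sum(c[i] * u[k - 1 - i] for i in range(len(c)))
        y.append(yk)
    return y

# First-order example with the parameters used later in Example 4.1
y = simulate_arx([0.9825], [0.0675], [10.0] * 5, y_init=[0.0])
print(y)   # output rises monotonically towards its steady state
```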

Based on the model assumption of Def. 4.1 it is possible to set up the specification of the nominal system, as given in Def. 4.2. The set of nominal parameters $\Theta^\*$ as introduced in Sec. 2.1.1 is assumed to be given in the specification.<sup>4</sup> Two possibilities to determine these parameters in practice are introduced in Appendix E.

Throughout this thesis the superscript \* will be used to denote values that are part of a specification or the nominal value. Note that the set of parameters is given by a distinguished point real vector $\Theta^\*$ for the current LTI setting.<sup>5</sup>

<sup>4</sup> For other applications, e.g. fault-tolerant control, the method works similarly but the specification is given in a different manner.

<sup>5</sup> The used model assumption does not allow a direct feedthrough as this property is not regarded in the given setting. To allow a direct feedthrough, the second sum needs to be changed to start from zero, leading to $i \in \{0, 1, \dots, n\_c\}$.

#### Definition 4.2 (Specification of a Linear Time Invariant System)

The (direct) specification $S\_d^\*$ of an LTI system according to Def. 4.1 is given by the mandatory values


$$\bullet \ \Theta^\* = \left[ a\_1^\*, a\_2^\*, \dots, a\_{n\_a^\*}^\*, c\_1^\*, c\_2^\*, \dots, c\_{n\_c^\*}^\* \right]^T, \text{the nominal parameters}$$

and the optional values

$$\bullet \ Y\_{init}^\* = \langle y\_k \rangle\_{k=1}^{\max\left(n\_a^\*, n\_c^\*\right)}, \text{the initial output values}$$

$$\bullet \ U\_{init}^\* = \langle u\_k \rangle\_{k=1}^{\max\left(n\_a^\*, n\_c^\*\right)}, \text{the initial input values}$$

leading to the overall specication

$$S\_d^\* = \{ \Theta^\*, n\_a^\*, n\_c^\*, U\_{init}^\*, Y\_{init}^\* \}. \tag{4.2}$$

If the initial values are known, the future evolution of the system output trajectory only depends on the input signal. It is thus possible to compare the behavior of the trajectory for different inputs. If no initial values are given, the behavior of the trajectory will differ for the same inputs when different initial values are used. If they are provided, there need to be at least $\max\left(n\_a^\*, n\_c^\*\right)$ initial values to enable the first evaluation of the autoregressive system description according to Def. 4.1 at time $k = \max\left(n\_a^\*, n\_c^\*\right) + 1$.

It is assumed that the nominal system is developed and built and ready to be verified. Thereby the verification object (VO) is assumed to be available as a (physical) black box with one or more input and output ports. It is possible to excite the system via the input and to measure the resulting output. Further insights, like internal structure, components and wiring, software, plans or internal states, are not accessible. This approach can be applied in various stages of system development. Therefore there is a wide range of exact physical representations of the VO black box, such as models, program code, components or units.

The verification method is running on a digital device that does not necessarily generate the input signal itself. Therefore input and output values need to be measured to be available for the verification method. Measurement data is always subject to noise, which is assumed to be modeled throughout the thesis based on the following definition.

#### Definition 4.3 (Sensor Noise Properties)

All available information about the VO is given in terms of measurement data of the input $\langle u\_{meas,k} \rangle\_{k=1}^{T}$ and output $\langle y\_{meas,k} \rangle\_{k=1}^{T}$. The measurement data is obtained using sensors providing guaranteed notions of sensor precision that allow an interval type enclosure of the measurement values.

All measurement values $u\_{meas,k}$ and $y\_{meas,k}$ are extended to intervals such that the true system values $u\_{true,k}$ and $y\_{true,k}$ are guaranteed to be included in the intervals $\mathfrak{u}\_k$ and $\mathfrak{y}\_k$, respectively. There are three different ways these guarantees can be obtained:

#### a) Absolute deviation

The sensor precision is denoted by a maximum deviation of $\pm\delta^a$. This leads to the interval enclosures

$$u\_{true,k} \in \mathfrak{u}\_k = [u\_{meas,k} - \delta^a\_u, \ u\_{meas,k} + \delta^a\_u] \tag{4.3}$$

$$y\_{true,k} \in \mathcal{Y}\_k = \left[ y\_{meas,k} - \delta^a\_y, \ y\_{meas,k} + \delta^a\_y \right]. \tag{4.4}$$

#### b) Relative deviation

In this case the sensor precision is given in terms of a relative deviation of $\delta^r \in [0, 1]$, which leads to

$$u\_{true,k} \in \mathfrak{u}\_k = [u\_{meas,k} \cdot (1 - \delta\_u^r), \ u\_{meas,k} \cdot (1 + \delta\_u^r)] \tag{4.5}$$

$$y\_{true,k} \in \mathfrak{y}\_k = \left[ y\_{meas,k} \cdot (1 - \delta\_y^r), \ y\_{meas,k} \cdot (1 + \delta\_y^r) \right]. \tag{4.6}$$

#### c) Combined deviation

A common case is the combination of both aforementioned deviation types. The deviation is defined to be $\delta^r \in [0, 1]$ times the current measurement value but at least $\pm\delta^a$, resulting in

$$u\_{true,k} \in \mathfrak{u}\_k = u\_{meas,k} \cdot \left[ \min\left( (1 - \delta\_u^r), \left( 1 - \frac{\delta\_u^a}{u\_{meas,k}} \right) \right),$$

$$\max\left( (1 + \delta\_u^r), \left( 1 + \frac{\delta\_u^a}{u\_{meas,k}} \right) \right) \right] \tag{4.7}$$

$$y\_{true,k} \in \mathfrak{y}\_k = y\_{meas,k} \cdot \left[ \min\left( (1 - \delta\_y^r), \left( 1 - \frac{\delta\_y^a}{y\_{meas,k}} \right) \right),$$

$$\max\left( (1 + \delta\_y^r), \left( 1 + \frac{\delta\_y^a}{y\_{meas,k}} \right) \right) \right].\tag{4.8}$$
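For strictly positive measurement values the three enclosure types can be coded directly; the combined case (4.7)/(4.8) reduces to "whichever deviation is larger". A sketch (function names chosen for this illustration):

```python
def enclose_abs(m, d_a):
    """Absolute deviation, cf. (4.3)/(4.4): [m - d_a, m + d_a]."""
    return (m - d_a, m + d_a)

def enclose_rel(m, d_r):
    """Relative deviation, cf. (4.5)/(4.6): [m*(1-d_r), m*(1+d_r)]."""
    return (m * (1.0 - d_r), m * (1.0 + d_r))

def enclose_combined(m, d_r, d_a):
    """Combined deviation for m > 0, cf. (4.7)/(4.8): relative
    deviation d_r, but at least +/- d_a."""
    lo = m * min(1.0 - d_r, 1.0 - d_a / m)
    hi = m * max(1.0 + d_r, 1.0 + d_a / m)
    return (lo, hi)

print(enclose_abs(5.0, 0.1))             # (4.9, 5.1)
print(enclose_rel(5.0, 0.05))            # (4.75, 5.25)
print(enclose_combined(5.0, 0.05, 0.3))  # about (4.7, 5.3): absolute term dominates
```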

The properties of Def. 4.3 are used to set up interval type enclosures of the measurement data that are guaranteed to include the true system values. The resulting structural setup of the measurement process is depicted in Fig. 4.1.

Figure 4.1: Structure of measurement setup

A basic assumption in the field of parameter identification is the property of persistent excitation according to Assumption 2.1. The information provided in any data set is highly dependent on the input signal that was used to generate the output. According to [Ast95, p. 63] there are several methods to ensure persistent excitation of a system. Examples of persistently exciting inputs are white noise, pseudorandom binary sequences or a moving average process [Ise10, p. 251]. One possible excitation procedure fitted to the specific settings regarded in this thesis was developed in [Rie17]. The main idea is sketched in Appendix F.

## 4.2 Time Invariant Full Consistency

The general notion of consistency introduced in Chapter 2 is now transferred to the specific setting of LTI systems. The direct specification $S\_d^\*$ includes one distinctive point real nominal parameter vector. Thus all results show full consistency$^-$ according to Def. 2.2. The observed behavior of the regarded VO is given in terms of input output measurement data. The nominal behavior is specified according to Def. 4.2. The VO is called full consistent with its specification if the measurement data can be explained by all specified parameters. The verification question is formally stated in Problem 4.1.

Problem 4.1 (Time Invariant Full Consistency) Is the nominal system, specified by a direct specification

$$S\_d^\* = \{ \Theta^\*, n\_a^\*, n\_c^\*, U\_{init}^\*, Y\_{init}^\* \}, \tag{4.9}$$

full consistent with the input-output behavior given by the interval type enclosures of measurement values

$$\begin{bmatrix} \mathbf{U}\_{meas}, \ \mathbf{Y}\_{meas} \end{bmatrix} = \begin{bmatrix} \left< \mathbf{u}\_{meas,k} \right>\_{k=1}^{T}, \ \left< \mathbf{y}\_{meas,k} \right>\_{k=1}^{T} \end{bmatrix} \tag{4.10}$$

i.e. can the measurement data be explained by the nominal system?

Problem 4.1 can be solved using the united solution set according to Def. 3.1.

### Proposition 4.1 (Time Invariant Full Consistency)

The interval enclosure of the measurement data $[\mathcal{U}\_{meas}, \mathcal{Y}\_{meas}]$, given for the discrete sampling points $k = \{1, 2, \dots, T\}$ and leading to the interval type regressor matrix $\mathcal{A}\_{meas}$ and the interval type measurement vector $\mathcal{B}\_{meas}$, of a VO is called full consistent with a direct specification $S\_d^\*$ if the specified system parameters $\Theta^\* = \left[ a\_1^\*, a\_2^\*, \dots, a\_{n\_a^\*}^\*, c\_1^\*, c\_2^\*, \dots, c\_{n\_c^\*}^\* \right]^T$ are part of the united solution set $\sum\_{\exists\exists}$ given by the measurement data, i.e.

$$\left(\Theta^\* \in \sum\_{\exists\exists} \left(\mathcal{A}\_{meas}, \mathcal{B}\_{meas}\right)\right) \Leftrightarrow \text{full consistency}^- \Rightarrow \text{full consistency}. \tag{4.11}$$

#### Proof:

The nominal parameter vector $\Theta^\* = \left[ a\_1^\*, a\_2^\*, \dots, a\_{n\_a^\*}^\*, c\_1^\*, c\_2^\*, \dots, c\_{n\_c^\*}^\* \right]^T$, given by a direct specification $S\_d^\*$ according to Def. 4.2, can be interpreted as a solution candidate for the ILES (3.46), which is set up by the interval type enclosure of the measurement data. If $\Theta^\*$ is part of the solution set of the ILES (3.46), the specification $S\_d^\*$ is able to explain the measurement data.

The problem is formulated in Kaucher interval arithmetic, therefore it is necessary to define which solution set is used. Considering the interval enclosure of the measurement data given in Def. 4.3, it is obvious that it is not possible to determine the true values $u\_{true,k}$ and $y\_{true,k}$, as there are two distortion steps between the true values and the interval enclosure. First, the true value is changed by random measurement noise. Second, the sensor is only as precise as given by its properties.

However, it is guaranteed that the true values $u\_{true,k}$ and $y\_{true,k}$ are included in the measurement intervals

$$u\_{true,k} \in \mathfrak{u}\_{meas,k} \tag{4.12}$$

$$y\_{true,k} \in \mathfrak{y}\_{meas,k} \,. \tag{4.13}$$

Starting from time $k\_{min} = \max\left(n\_a^\*, n\_c^\*\right) + 1$ there are enough measurement values to set up the regressor equations. Each additional measurement value leads to an additional row in the regressor matrix $\mathcal{A}\_{meas}$.

The unknown true values can be assumed to form an unknown true point real regressor matrix

$$A\_{true} = \begin{bmatrix} y\_{true,k\_{min}-1} & \cdots & y\_{true,k\_{min}-n\_a^\*} & u\_{true,k\_{min}-1} & \cdots & u\_{true,k\_{min}-n\_c^\*} \\ y\_{true,k\_{min}} & \cdots & y\_{true,k\_{min}+1-n\_a^\*} & u\_{true,k\_{min}} & \cdots & u\_{true,k\_{min}+1-n\_c^\*} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ y\_{true,T-1} & \cdots & y\_{true,T-n\_a^\*} & u\_{true,T-1} & \cdots & u\_{true,T-n\_c^\*} \end{bmatrix} \tag{4.14}$$

and an unknown true point real measurement vector

$$B\_{true} = \begin{bmatrix} y\_{true,k\_{min}}, \ y\_{true,k\_{min}+1}, \dots, y\_{true,T} \end{bmatrix}^T. \tag{4.15}$$

These elements are linked via an unknown true parameter vector $\Theta\_{true}$ that fulfills

$$A\_{true} \Theta\_{true} = B\_{true}.\tag{4.16}$$

Based on the enclosure of the true values in Def. 4.3, it holds that

$$A\_{true} \in \mathcal{A}\_{meas} \tag{4.17}$$

$$B\_{true} \in \mathcal{B}\_{meas}. \tag{4.18}$$

With the set definition (3.53) it follows that $\Theta\_{true}$ is an element of the united solution set $\sum\_{\exists\exists}\left(\mathcal{A}\_{meas}, \mathcal{B}\_{meas}\right)$.

It is impossible to determine the true values $A\_{true}$ and $B\_{true}$ from the given measurement data. Therefore each point real element of the interval type regressor matrix and the interval type measurement vector is a possible true value. Thus the whole united solution set can be considered as correct solution of the ILES.

The given direct specification $S\_d^\*$ shows full consistency$^-$ with the given measurement data $[\mathcal{U}\_{meas}, \mathcal{Y}\_{meas}]$ if and only if the nominal parameter vector $\Theta^\*$ is part of the united solution set $\sum\_{\exists\exists}\left(\mathcal{A}\_{meas}, \mathcal{B}\_{meas}\right)$. Due to the underapproximating property it holds that

$$\text{full consistency}^- \Rightarrow \text{full consistency} \tag{4.19}$$

and thus the VO is full consistent in the sense of this thesis.

The calculation of the united solution set is computationally expensive, as introduced in Section 3.1.2. However, it is not necessary to calculate the whole solution set in this setting, as a candidate solution is given in form of the specification. Thus it is sufficient to check whether the specified parameter vector $\Theta^\*$ is part of the united solution set without calculating the solution set explicitly. Prop. 4.1 can be checked very efficiently using the theorem of Prager-Oettli according to Definition 3.6 by evaluating the single equation (3.65). Therefore (3.65) is reformulated for the given measurement values to

$$|A\_{meas,c}\Theta^\* - B\_{meas,c}| \le A\_{meas,\Delta} |\Theta^\*| + B\_{meas,\Delta} \Leftrightarrow \Theta^\* \in \sum\_{\exists\exists} (\mathcal{A}\_{meas}, \mathcal{B}\_{meas}). \tag{4.20}$$

As stated in Section 3.1.3, the existence and uniqueness of the solution sets is still an open question. A necessary condition for the existence of any solution set is that the ILES (3.46) is solvable. For the general overdetermined setting, this can be checked using the approaches given in Appendix C. However, their application is limited as the problem is in general NP-hard. The only further considerations regard the algebraic solution set and are sketched in Appendix D.

Note that the introduced method preserves time-invariance when checking for consistency. This is due to the used specification and represents a main difference to the direct image based methods used in fault detection as introduced in Section 2.2. This property will become even clearer in Chapter 5.

A further necessary condition is persistent excitation of the VO, which is given in Assumption 2.1 and developed to ensure full rank according to Assumption 3.1.

Therefore it is in general not guaranteed that there is a nonempty united solution set available and thus there are situations in which the proposed method is not applicable.

However, it is possible to facilitate a favorable situation by proper experiment design. One possibility to determine a beneficial excitation signal is given in Appendix F.

The application of time invariant full consistency for LTI systems is demonstrated in Example 4.1 and was presented to the scientific community in [Sch17b], [Sch17c] and [Sch19].

#### Example 4.1:

This example shows the verification of a linear, time invariant system as introduced in Prop. 4.1. Assume the following direct specification

$$S\_{d,1}^\* = \left\{ \Theta^\* = [0.9825, \ 0.0675] \,, n\_a^\* = 1, n\_c^\* = 1, U\_{init}^\* = [0] \,, Y\_{init}^\* = [0] \right\}. \tag{4.21}$$

The simulations are done with a virtual VO, correctly implemented as a discrete time linear ARX system

$$y\_k = 0.9825y\_{k-1} + 0.0675u\_{k-1} \tag{4.22}$$

with sampling time $\Delta t = 1\,$s. The system is excited using a noise signal with uniformly distributed amplitude $u\_k \in [0, 10]$ with mean $\bar{u} = 5$. It is assumed that the input is measured using a sensor with a maximum relative fault of $\delta^r = 0.05$. The resulting enclosure of the input measurement signal $\mathcal{U}\_{meas}$ is depicted in the first subplot of Fig. 4.2. Nevertheless, the system (4.22) is fed with the undisturbed input signal $u\_k$. The resulting output signal is measured using a sensor with the same properties as the input sensor. The enclosed measurement output signal $\mathcal{Y}\_{meas}$ is depicted in the second subplot of Fig. 4.2.

Figure 4.2: Measured input signal with $\delta^r = 0.05$

It can be shown that Prop. 4.1 holds for $S\_{d,1}^\*$ and the measurement data $\mathcal{U}\_{meas}$ and $\mathcal{Y}\_{meas}$, which formally proves full consistency of measurement and specification.

A graphical representation is given in Fig. 4.3. The lines depict borders of the united solution set, generated by the different rows of the regressor matrix. Feasible parameters need to be located in between the borders of all rows of the measurement matrix. The parameters given in specification $S\_{d,1}^\*$ are marked with a green cross and form a feasible solution of the given problem as they are located within the united solution set of all measurement data. Hence it is possible to explain the measured data with the parameters given in specification $S\_{d,1}^\*$.

All given measurement values are used in this example to set up the regressor matrix and the measurement vector. This leads to $\mathcal{A}\_{meas} \in \mathbb{IR}^{(29 \times 2)}$. In this case, the full rank Assumption 3.1 for $n = 2$ parameters leads to $\operatorname{rank}(\mathcal{A}\_{meas}) \overset{!}{=} 2$, which is fulfilled for the given dynamics and excitation signal.

An example with failed verification can be given for the case that the verification method is applied using a different specification on the same measurement data. For this purpose,

$$S\_{d,2}^\* = \left\{ \Theta^\* = [1.15, \, 0.08] \,, n\_a^\* = 1, n\_c^\* = 1, U\_{init}^\* = [0] \,, Y\_{init}^\* = [0] \right\}. \tag{4.23}$$

is used.

A graphical representation of the parameters is given by the red mark in Fig. 4.3. In this case, the parameters do not lie within the borders of the united solution set of the given measurement data. Thus the system is not verified.

Figure 4.3: Visualization of the united solution given by the measurement data
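The verification pipeline of this example can be reproduced in a few lines. The sketch below replaces the random excitation by a deterministic signal in $[1, 9]$ and, for simplicity, centers the intervals on the true signal values; the Prager-Oettli test (4.20) then accepts the parameters of $S\_{d,1}^\*$ and rejects those of $S\_{d,2}^\*$:

```python
import numpy as np

a_true, c_true = 0.9825, 0.0675   # parameters of the virtual VO (4.22)
d_r = 0.05                        # relative sensor deviation

# Deterministic excitation and simulation of the true first-order system
T = 30
u = np.array([5.0 + 4.0 * np.sin(0.7 * k) for k in range(T)])
y = np.zeros(T)
for k in range(1, T):
    y[k] = a_true * y[k - 1] + c_true * u[k - 1]

# Interval enclosures: centers = signal values, radii = d_r * |value|
A_c = np.column_stack([y[:-1], u[:-1]])   # rows [y_{k-1}, u_{k-1}], shape (29, 2)
B_c = y[1:]
A_d = d_r * np.abs(A_c)
B_d = d_r * np.abs(B_c)

def full_consistent(theta):
    """Prager-Oettli membership test (4.20) for the united solution set."""
    lhs = np.abs(A_c @ theta - B_c)
    rhs = A_d @ np.abs(theta) + B_d
    return bool(np.all(lhs <= rhs))

print(full_consistent(np.array([0.9825, 0.0675])))   # True: verified
print(full_consistent(np.array([1.15, 0.08])))       # False: not verified
```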

## 4.3 Conclusion

The main idea of Kaucher arithmetic based verification was introduced in this chapter. To this end, the precise system and problem setting was defined and explained. The problem setup leads to a very distinct situation with only poor knowledge about the true measurement values, which are enclosed in intervals. If this setting is regarded from a different point of view, it can be interpreted as a specific quantifier based solution set definition that matches exactly the united solution set introduced in the mathematical preliminaries. This property can be used to check full consistency of the measurement data and the specification in a set membership procedure that is computationally very efficient. The assumption of a full rank interval regressor matrix leads to preliminaries on the sensors and the noise assumptions. It is possible to check whether a specific solution candidate is part of the united solution given by measurement data. This property was demonstrated using an illustrative example.

The main advantage of the introduced method is that it focuses on the united solution set. Thereby it is possible to avoid wrapping and dependency effects and to calculate a solution set free of spurious solutions. This property is very beneficial in the case of safety critical systems as it avoids type II errors (hidden alarms).

# 5 Guaranteed Verification of Interval Type Systems

The basic idea introduced in the previous chapter is now extended to an interval type specification. Thus the parametrization is given by an interval type vector Θ\* instead of a point real vector Θ\*.

### Definition 5.1 (Interval Type Specification of a Linear System)

An interval type specification S_i\* of a linear system is given by the mandatory values

$$\bullet \ \Theta^\*, \text{ the interval type parameter vector}$$

$$\bullet \ n\_a^\*, \text{ the nominal output order}$$

$$\bullet \ n\_c^\*, \text{ the nominal input order}$$
and the optional values

$$\bullet \ Y\_{init}^{\*} = \langle y\_k \rangle\_{k=1}^{\max(n\_a^\*, n\_c^\*)}, \text{the initial output values}$$

$$\bullet \ U\_{init}^\* = \langle u\_k \rangle\_{k=1}^{\max\left(n\_a^\*, n\_c^\*\right)}, \text{ the initial input values.}$$

This leads to the overall specication

$$S\_i^\* = \{\Theta^\*, n\_a^\*, n\_c^\*, U\_{init}^\*, Y\_{init}^\*\}. \tag{5.1}$$

Based on this specification the system is implemented. It is assumed that the resulting VO is given in a form that provides the input and output signals as described in the specification. Again, this can be the case for a variety of test objects, depending on the specific point of the development cycle for which the specification S_i\* was defined. Methods to determine the nominal parameters in practice are given in Appendix G.

Even though the specification is now given by interval type values, the implemented system has to provide real output data at any given time. Thus the real implementation of the VO has to use a specific real parametrization. This leads to the definition of interval type linear systems as given in Def. 5.2.

#### Definition 5.2 (Interval Type Linear System)

A discrete time, linear, interval type system can be modeled as

$$y\_k = \sum\_{i=1}^{n\_a} a\_{i,k} y\_{k-i} + \sum\_{i=1}^{n\_c} c\_{i,k} u\_{k-i} \tag{5.2}$$

with the discrete time input u_k and output y_k, the output and input orders n_a and n_c, as well as the time variant parameters

$$\Theta\_k = \begin{bmatrix} a\_{1,k}, a\_{2,k}, \dots, a\_{n\_a,k}, c\_{1,k}, c\_{2,k}, \dots, c\_{n\_c,k} \end{bmatrix}^T \in \Theta^\*. \tag{5.3}$$

The necessary data is available as disturbed, discrete time measurement data enclosed in intervals according to Def. 4.3. Also the persistent excitation Assumption 2.1 and the full rank Assumption 3.1 are still required to hold.

The given system definition leads to the time variant regressor vector

$$A\_k = \left[ y\_{k-1}, y\_{k-2}, \dots, y\_{k-n\_a}, u\_{k-1}, u\_{k-2}, \dots, u\_{k-n\_c} \right] \tag{5.4}$$

and thus the system equation can be rewritten as

$$y\_k = A\_k \Theta\_k \tag{5.5}$$

for a specific time step k ≥ k_0 with k_0 = max(n_a, n_c) + 1.

This can be interpreted as the realization of a time variant system whose interval type specication is given according to Def. 5.3.

### Definition 5.3 (Interval Enclosure of Time Variant Parameter)

The parameter values Θ_k evolve during a specific time k ∈ {1, 2, . . . , T} and can be enclosed in the interval

$$\Theta\_k \in \Theta = \left[ \theta^{(1)}, \theta^{(2)}, \dots, \theta^{(n)} \right]^T \tag{5.6}$$

with $n = n\_a + n\_c$ and $\theta^{(i)} = \left[ \min \left( \langle \Theta\_k^{(i)} \rangle\_{k=1}^{T} \right), \ \max \left( \langle \Theta\_k^{(i)} \rangle\_{k=1}^{T} \right) \right]$ denoting the minimum and maximum value of the i-th component within the regarded time.

The time variance is given only in the parameters; the model structure, especially the model orders n_a and n_c, is time constant. Furthermore this interpretation is not necessarily beneficial for all time variant systems as the resulting interval enclosures can be very large depending on the time variant dynamics of the system parameters.
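The component-wise enclosure of Def. 5.3 can be sketched as follows; the parameter trajectory values are invented for illustration, and `enclose_parameters` is a hypothetical helper name.

```python
# Minimal sketch of Def. 5.3: enclose a time variant parameter trajectory
# component-wise by its minimum and maximum over the regarded time horizon.
# The trajectory values are made up for illustration.

def enclose_parameters(trajectory):
    """trajectory: list of parameter vectors Theta_k over time; returns one
    interval (min, max) per parameter component."""
    n = len(trajectory[0])
    return [(min(theta[i] for theta in trajectory),
             max(theta[i] for theta in trajectory)) for i in range(n)]

# Hypothetical trajectory of Theta_k = [a_1,k, c_1,k] over four time steps:
traj = [[0.97, 0.067], [0.98, 0.068], [0.975, 0.066], [0.99, 0.067]]
print(enclose_parameters(traj))  # [(0.97, 0.99), (0.066, 0.068)]
```

As the text notes, a strongly varying trajectory yields a wide enclosure, which is exactly why this interpretation is not always beneficial.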

## 5.1 Interval Type Full Consistency

In the case of an interval type specification both consistency definitions (full consistency and basic consistency) according to Def. 2.2 and Def. 2.1 are possible. In this section, the idea of full consistency is extended to interval type systems. Then the situation is relaxed to basic consistency in the next section.

Problem 5.1 (Interval Type Full Consistency)

Is the nominal system, specified by an interval type specification

$$S\_i^\* = \left\{ \Theta^\*, n\_a^\*, n\_c^\*, U\_{init}^\*, Y\_{init}^\* \right\}, \tag{5.7}$$

full consistent with the input-output behavior given by the interval type enclosures of measurement values

$$\begin{bmatrix} \mathbf{U}\_{meas}, \ \mathbf{Y}\_{meas} \end{bmatrix} = \begin{bmatrix} \left< \mathbf{u}\_{meas,k} \right>\_{k=1}^{T}, \ \left< \mathbf{y}\_{meas,k} \right>\_{k=1}^{T} \end{bmatrix} \tag{5.8}$$

i.e. do all elements of the parameter vector Θ\* fulfill Prop. 4.1 for all measurement data?

The full consistency problem is solved by Prop. 5.1.

## Proposition 5.1 (Interval Type Full Consistency)

The interval enclosure of the measurement data [U_meas, Y_meas] given for the discrete sampling points k ∈ {1, 2, . . . , T}, forming the regressor matrix A_meas and the measurement vector B_meas of a VO, is called full consistent with an interval type specification S_i\*, if the complete set of specified parameters Θ\* is part of the united solution set Σ_∃∃ given by the measurement data, i.e.

$$\left( \Theta^\* \subseteq \sum\_{\exists \exists} \left( A\_{meas}, B\_{meas} \right) \right) \Leftrightarrow Full\ Consistency^- \Rightarrow Full\ Consistency \quad \text{(5.9)}$$

### Proof:

Full consistency follows directly from applying Prop. 4.1 to all possible point real parameter vectors Θ\* ∈ Θ\* given in the interval type specification S_i\*.

The inverse relation of Prop. 5.1 leads to the implications given in Prop. 5.2.

#### Proposition 5.2 (Inverse of Full Consistency)

If there is at least one Θ\* ∈ Θ\* that does not show full consistency according to Prop. 4.1, the interval type specification S_i\* is not full consistent with the measurement data.

### Proof:

Full consistency according to Prop. 5.1 is defined for all parameters Θ\* ∈ Θ\*. A parameter Θ̃\* ∈ Θ\* that does not show full consistency according to Prop. 4.1 leads to

$$
\Theta^\* \not\subseteq \sum\_{\exists \exists} \left( A\_{meas}, B\_{meas} \right) \tag{5.10}
$$

and thus the interval type specification S_i\* is not full consistent with the measurement data.

Theoretically, Prop. 5.1 can be checked by applying the verification equation (4.20) based on the theorem of Prager-Oettli to each parameter vector Θ\* ∈ Θ\*. However, to realize this approach in an algorithmic implementation it is necessary to draw n_check discrete samples from the continuous intervals. The number of discrete parameter vectors to check, n_check, thus becomes a relevant design parameter. According to Prop. 5.2, a single inconsistent vector falsifies the full consistency property. This leads to the requirement that n_check has to be very large to cover the given parameter range sufficiently. Even though a single evaluation of the theorem of Prager-Oettli is computationally very effective as stated in Chapter 4, the computation time rises proportionally with n_check. Additionally, as this thesis aims at calculating guaranteed results, the step size used to sample the interval type parameter vector needs to be very fine, even tending to zero. This high resolution needs to be applied to each component of the parameter vector. Afterwards it is used to build all possible combinations of the samples of the different components. Assuming a resolution of n_s samples on each component of Θ\* leads to

$$n\_{check} = n\_s^{n\_a^\* + n\_c^\*} \tag{5.11}$$

applications of the theorem of Prager-Oettli. This number increases polynomially with n_s and leads to large computation times for sufficiently high resolutions. Thus the sampling based approach is computationally infeasible.

Restructuring the problem avoids the necessity to cover the whole parameter area. The verdicts can be calculated based on the vertexes of the nominal parameter set only and then generalized to the whole nominal set if convexity properties are fulfilled.

The maximum number of points to check is thus reduced to

$$n\_{check} = 2^{n\_a^\* + n\_c^\*} \tag{5.12}$$

which resembles a computationally feasible number, especially for low system orders n_a\* and n_c\*. The vertexes of the hyperrectangle given by the interval type parameter vector Θ\* can be determined according to Def. 5.4.

## Definition 5.4 (Vertexes of a Hyperrectangle)

The nominal interval vector Θ\* ∈ IR^((n_a\*+n_c\*)×1) defines

$$n = 2^{n\_a^\* + n\_c^\*} \tag{5.13}$$

vertexes V_{v_dec} ∈ IR^((n_a\*+n_c\*)×1) of a hyperrectangle that can be indexed using a decimal index v_dec ∈ {0, 1, . . . , n − 1}. The index is subsequently transformed to its binary representation V_bin that can be interpreted as a (1 × (n_a\* + n_c\*)) dimensional vector where the i-th vector component is denoted as V_bin^(i).

The specific values of the vertexes can be generated by interpreting the binary index component wise for i ∈ {1, 2, . . . , n_a\* + n_c\*} and extracting the limits from the respective nominal parameter vector element:

$$V\_{v\_{dec}}^{(i)} = \begin{cases} \underline{\Theta}^{\*(i)} & \text{, if } V\_{bin}^{(i)} = 0\\ \overline{\Theta}^{\*(i)} & \text{, if } V\_{bin}^{(i)} = 1. \end{cases} \tag{5.14}$$

An illustration of Def. 5.4 is given in Example 5.1.

Example 5.1:

Consider the following (2 × 1) interval vector with n_a\* = n_c\* = 1

$$\boldsymbol{\Theta}^\* = \begin{bmatrix} \begin{bmatrix} 2, \ 3 \end{bmatrix}, \ \begin{bmatrix} 4, \ \mathbf{6} \end{bmatrix} \end{bmatrix}^T \tag{5.15}$$

The resulting rectangle has n = 2<sup>2</sup> = 4 vertexes with v_dec ∈ {0, 1, 2, 3} and i ∈ {1, 2}, leading to the indexes given in Tab. 5.1.


Table 5.1: Vertexes of a hyperrectangle
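The vertex construction of Def. 5.4 can be sketched as follows. The bit ordering of the binary index is one possible convention (the thesis does not fix it here), and the interval bounds are illustrative.

```python
# Sketch of Def. 5.4: generate the 2^n vertexes of the hyperrectangle given
# by an interval parameter vector. The i-th bit of the decimal index v_dec
# selects the lower (0) or upper (1) bound of the i-th component; this bit
# ordering is one possible convention.

def vertexes(theta_interval):
    """theta_interval: list of (lower, upper) pairs, one per component."""
    n = len(theta_interval)
    verts = []
    for v_dec in range(2 ** n):
        verts.append([theta_interval[i][(v_dec >> i) & 1] for i in range(n)])
    return verts

# Illustrative (2 x 1) interval vector [[2, 3], [4, 6]]^T:
print(vertexes([(2, 3), (4, 6)]))
# [[2, 4], [3, 4], [2, 6], [3, 6]]
```

The enumeration touches exactly n = 2^(n_a\*+n_c\*) points, which is the count given in (5.13).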

It is now possible to set up an alternative formulation of Prop. 5.1, that solves Problem 5.1 with vertex based full consistency.

#### Proposition 5.3 (Vertex Based Full Consistency)

The interval enclosure of the measurement data [U_meas, Y_meas] given for the discrete sampling points k ∈ {1, 2, . . . , T}, forming the regressor matrix A_meas and the measurement vector B_meas of a VO, is called full consistent with an interval type specification S_i\*, if all vertexes V_{v_dec} defined by the set of specified parameters Θ\* are located in the same orthant and are part of the united solution set Σ_∃∃ given by the measurement data, i.e.

$$\left( V\_{v\_{dec}} \in \sum\_{\exists \exists} \left( A\_{meas}, B\_{meas} \right) \ \forall\, v\_{dec} \right) \Leftrightarrow Full\ Consistency^- \Rightarrow Full\ Consistency \tag{5.16}$$

### Proof:

The united solution set can form various shapes, but it was shown in [Sha10] that the general AE-solution set is convex within each orthant. As the united solution set is a special case of the general AE-solution set, this property also holds for Σ_∃∃(A_meas, B_meas).

The specied parameters Θ\* and thus the resulting vertexes are all located within the same orthant.

The theorem of Prager-Oettli can be checked for the n = 2^(n_a\*+n_c\*) vertexes in finite time. Based on the direct application of the definition of a convex set given in [Bro08, p. 662] follows:

If (3.65) holds for any two of the vertexes V_i and V_j, with i, j ∈ {0, 1, . . . , n − 1}, i ≠ j, all vectors Θ = λ V_i + (1 − λ) V_j, with 0 ≤ λ ≤ 1, are also part of the united solution set.
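Combining the vertex enumeration of Def. 5.4 with the Oettli-Prager criterion gives a minimal sketch of the vertex based check of Prop. 5.3. All numerical values below are illustrative, not taken from the examples of this thesis.

```python
# Sketch of Prop. 5.3: check the Oettli-Prager criterion only at the 2^n
# vertexes of the nominal interval vector; by the orthant-wise convexity of
# the united solution set the verdict extends to the whole nominal set
# (assuming the set lies in a single orthant). Values are illustrative.

def oettli_prager(A_mid, A_rad, b_mid, b_rad, theta):
    """Point membership test: |mid residual| <= radius bound, row by row."""
    n = len(theta)
    return all(
        abs(sum(A_mid[r][j] * theta[j] for j in range(n)) - b_mid[r])
        <= sum(A_rad[r][j] * abs(theta[j]) for j in range(n)) + b_rad[r]
        for r in range(len(b_mid)))

def vertex_full_consistency(A_mid, A_rad, b_mid, b_rad, theta_interval):
    """True iff every vertex of the nominal box passes the point test."""
    n = len(theta_interval)
    return all(
        oettli_prager(A_mid, A_rad, b_mid, b_rad,
                      [theta_interval[i][(v >> i) & 1] for i in range(n)])
        for v in range(2 ** n))

# Illustrative interval system A = [2±0.5, 0±0.5; 0±0.5, 2±0.5],
# b = [2±0.5, 2±0.5], and a small nominal box around [1, 1]:
A_mid, A_rad = [[2.0, 0.0], [0.0, 2.0]], [[0.5, 0.5], [0.5, 0.5]]
b_mid, b_rad = [2.0, 2.0], [0.5, 0.5]
print(vertex_full_consistency(A_mid, A_rad, b_mid, b_rad,
                              [(0.9, 1.1), (0.9, 1.1)]))  # True
```

Only 2^(n_a\*+n_c\*) point tests are needed instead of a dense sampling of the whole box, which is the computational gain argued above.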

#### Example 5.2:

Assume the same setting as in Example 4.1. However the specification is now given as an interval type specification

$$S\_i^\* = \left\{ \Theta^\* = \left[ \begin{bmatrix} 0.9725, \ 0.9925 \end{bmatrix}, \ \begin{bmatrix} 0.0665, \ 0.0685 \end{bmatrix} \right]^T, n\_a^\* = 1, n\_c^\* = 1 \right\}. \tag{5.17}$$

The simulations are done using the linear discrete time ARX system (4.22), leading to the same measurement data as given in Fig. 4.2 in Example 4.1. The resulting verification setting is depicted in Fig. 5.1. The nominal parameters given in S_i\* are depicted as a green square. It can be seen that all four vertexes v_dec ∈ {0, 1, 2, 3} are located within the united solution set given by the measurement data. The specification and the measurement are thus guaranteed to be full consistent according to Prop. 5.3.

Figure 5.1: Example setting for an interval type specification S_i\*

## 5.2 Interval Type Basic Consistency

Until now the set of VO behavior was assumed to be considerably larger than the specification. When using interval type specifications this is not necessarily the case. It is possible that the specification set is of the same size as the set of VO behavior or even larger. Therefore it is no longer possible to enclose the whole specification in the VO behavior. The resulting verification question can be formulated as follows:

Problem 5.2 (Interval Type Basic Consistency) Is the nominal system, specified by an interval type specification

$$S\_i^\* = \left\{ \Theta^\*, n\_a^\*, n\_c^\*, U\_{init}^\*, Y\_{init}^\* \right\},\tag{5.18}$$

basic consistent with the input-output behavior given by the interval type enclosures of measurement values

$$\begin{bmatrix} \mathbf{U}\_{meas}, \ \mathbf{Y}\_{meas} \end{bmatrix} = \begin{bmatrix} \left< \mathbf{u}\_{meas,k} \right>\_{k=1}^{T}, \ \left< \mathbf{y}\_{meas,k} \right>\_{k=1}^{T} \end{bmatrix} \tag{5.19}$$

i.e. is there at least one parameter vector Θ\* ∈ Θ\* that fulfills Prop. 4.1?

The Venn diagram of a basic consistent setting is depicted in Fig. 5.2. Due to the inner enclosure of the VO behavior the achieved verdict is still type II error free. The consistent set is given by the green shaded square.

This set is formally stated in Prop. 5.4 and solves Problem 5.2.

Figure 5.2: Basic consistent result for a large specification

#### Proposition 5.4 (Interval Type Basic Consistency)

The interval enclosure of the measurement data [U_meas, Y_meas] given for the discrete sampling points k ∈ {1, 2, . . . , T}, forming the regressor matrix A_meas and the measurement vector B_meas of a VO, is called basic consistent with an interval type specification S_i\*, if there is a nonempty consistent set, i.e. a nonempty intersection between the nominal set Θ\* and the united solution set Σ_∃∃(A_meas, B_meas)

$$\left( \Theta^\* \cap \sum\_{\exists \exists} \left( A\_{meas}, B\_{meas} \right) \neq \emptyset \right) \Leftrightarrow Basic\ Consistency^- \Rightarrow Basic\ Consistency \tag{5.20}$$

### Proof:

All parameter vectors included in the interval specification Θ\* are suitable representations of the correct system behavior. Thus S_i\* can be interpreted as a set of direct specifications S_d\*. Each of these direct specifications S_d\* ∈ S_i\* can be used to check consistency according to Prop. 4.1. If there is at least one full consistent direct specification included in the interval specification, the VO behavior can be explained by this parameter and the VO is denoted as basic consistent.

Basic Consistency<sup>−</sup> is sufficient for the genuine system, as there is at least one parameter within the nominal set that is able to explain the measurement data.

## 5.2.1 Algorithmic Solutions

A straightforward approach to check basic consistency uses the vertexes only. However, the shape of the resulting consistent set will change if one or more vertexes are inconsistent. This change is depicted exemplarily for the 2D case in Fig. 5.3.

Figure 5.3: Degradation of the consistent set for different consistent vertexes (green)

The shape changes from a rectangle, in case all four vertexes are part of the consistent set, to a single point if only one vertex shows consistency. The main drawback of this procedure is that there might be a consistent set even if no initial vertex is consistent. This situation is depicted exemplarily in Fig. 5.5.

Also, the basic shape of the resulting consistent set changes which can be a disadvantage for the further algorithmic processing of the result.

A solution for the first problem is given by checking additional points that are not vertexes. However, this leads to the sampling based approach introduced in the previous section, with the respective runtime limitations explained there.

The problem of finding the right resolution in the sampling based approach can be solved using optimization methods. The idea is to use the logic of optimization algorithms to guide the sampling process.<sup>6</sup> Optimization procedures can be used to determine which discrete points to check if one or more vertexes of the specification are not elements of the consistent set.

To use optimization methods for the choice of sampling points, the problem can be reformulated as a feasibility problem. The interval type measurement data is used to set up the constraints that frame the united solution set Σ_∃∃. As derived in the previous chapter, every parameter within the united solution set Σ_∃∃ is able to explain the measurement data. The constraints limit the search area of the feasibility problem. Feasible solutions need to fulfill all constraints. Parameter values that are located outside the united solution set Σ_∃∃ are not feasible with respect to the constraints.

The interval type specification S_i\* includes the nominal parameter vector Θ\*, which can also be denoted as nominal set. The nominal set represents the maximum area of potentially consistent parameters. These initial restrictions can be stated in terms of linear inequality constraints as follows:

#### Definition 5.5 (Constraints Given by the Nominal Set)

The interval type parameter vector Θ\* given in the specification S_i\* can be used to set up the set c_𝒩 of 2(n_a\* + n_c\*) linear inequality constraints that restrict the feasibility problem to the nominal set:

$$c\_{\mathcal{N}}^{(i)}\left(\Theta\right) := \underline{\theta}^{\*(i)} - \theta^{(i)} \le 0 \tag{5.21}$$

$$c\_{\mathcal{N}}^{(n\_a^\* + n\_c^\* + i)}\left(\Theta\right) := -\overline{\theta}^{\*(i)} + \theta^{(i)} \le 0 \tag{5.22}$$

with the parameter index i ∈ {1, 2, . . . , n_a\* + n_c\*}.

<sup>6</sup> Suitable optimization methods are e.g. grid search, golden section search or dichotomous search [Wil64] if there is only minimal information available. If there is further knowledge about the shape of the problem, there are more sophisticated algorithms that can direct the search effort very efficiently into the relevant regions, e.g. the Newton method, the simplex method or interior point methods [Noc06].

The united solution set Σ_∃∃ can be reformulated in terms of the interval type measurement data as linear matrix inequality constraints (LMI) as given in Def. 5.6.

#### Definition 5.6 (Constraints Given by the Measurement Data)

The measurement data [U_meas, Y_meas] = [⟨u_meas,k⟩_{k=1}^T, ⟨y_meas,k⟩_{k=1}^T], forming the regressor matrix A_meas and the measurement vector B_meas of a VO, can be used to set up the set c_ℳ of 2(T − max(n_a\*, n_c\*)) linear inequalities

$$c\_{\mathcal{M}}^{(i)}(\Theta) := -\underline{\left(A\_{meas} \cdot \Theta\right)}^{(i)} + \underline{B}\_{meas}^{(i)} \le 0 \tag{5.23}$$

$$c\_{\mathcal{M}}^{(T-\max\left(n\_a^\*, n\_c^\*\right)+i)}\left(\Theta\right) := \overline{\left(A\_{meas} \cdot \Theta\right)}^{(i)} - \overline{B}\_{meas}^{(i)} \le 0 \tag{5.24}$$

with the row index of the regressor matrix i ∈ {1, 2, . . . , T − max(n_a\*, n_c\*)}, i.e. the number of system equations instantiated by different measurement points.

Each pair of constraints represents the upper and lower bound of the solution set derived from one line of the ILES (3.46). They can be interpreted as hyperstripes in the parameter space, leading to the setting shown in Fig. 5.4 for the 2D case. Using constraints according to Def. 5.6 limits the search area of the optimization algorithm to the inner approximation and thus guarantees that no type II errors are possible in the resulting consistent set. The consistent set is given by the shaded region.

Figure 5.4: Exemplary constraints of the feasibility problem in 2D

Note that the number of constraints in c_ℳ is directly related to the number of used sampling points T and that the number of parameters n = n_a\* + n_c\* is of minor importance. However, there have to be at least k_0 = max(n_a\*, n_c\*) + 1 samples in the measurement to set up the first hyperstripe. It is now possible to determine an alternative solution for Problem 5.2, based on the constraints of Def. 5.5 and Def. 5.6.

#### Proposition 5.5 (Feasibility Based Basic Consistency)

The interval enclosure of the measurement data [U_meas, Y_meas] given for the discrete sampling points k ∈ {1, 2, . . . , T}, forming the regressor matrix A_meas and the measurement vector B_meas of a VO, is called basic consistent with an interval type specification S_i\*, if the consistent set is nonempty, i.e. if there is at least one solution Θ̃ that fulfills all constraints

$$\left( \left( c\_{\mathcal{N}}(\tilde{\Theta}) \le 0 \right) \land \left( c\_{\mathcal{M}}(\tilde{\Theta}) \le 0 \right) \right) \Leftrightarrow Basic\ Consistency^- \Rightarrow Basic\ Consistency \tag{5.25}$$

#### Proof:

According to Prop. 5.4 there needs to be at least one parameter that is part of the united solution set Σ_∃∃ as well as part of the parameter set given in the interval type specification S_i\* to ensure basic consistency. All parameters that fulfill the constraint set c_ℳ of Def. 5.6 are part of the united solution set Σ_∃∃. All parameters that fulfill the constraint set c_𝒩 of Def. 5.5 are part of the parameter set given in the interval type specification S_i\*. If there is at least one parameter vector Θ̃ that fulfills c_𝒩 and c_ℳ, all conditions for basic consistency are fulfilled.

The resulting situation is depicted exemplarily in Fig. 5.5. It can be seen that no initial vertex shows consistency, as none of them is part of the underapproximation. Therefore there is no vertex based basic consistency. Feasibility based basic consistency can be achieved as there is a consistent set within the inner approximation and the nominal set. The consistent set is depicted as the shaded green area.

Figure 5.5: Example setting showing feasibility based consistency
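The constraint evaluation of Def. 5.5 and Def. 5.6 can be sketched as follows. Here the constraints are simply evaluated at candidate points; a full implementation would hand them to an optimization routine as discussed above. All numbers and function names are illustrative.

```python
# Sketch of the feasibility check behind Prop. 5.5: a candidate parameter
# vector witnesses basic consistency if it satisfies the nominal-set
# constraints c_N (Def. 5.5) and the measurement constraints c_M (Def. 5.6).
# All numerical values are made up for illustration.

def c_N_ok(theta, theta_star):
    """Box constraints of Def. 5.5: lower <= theta <= upper component-wise."""
    return all(lo <= t <= hi for t, (lo, hi) in zip(theta, theta_star))

def c_M_ok(theta, A_low, A_up, B_low, B_up):
    """Hyperstripe constraints of Def. 5.6 for a point candidate theta:
    the interval product (A_meas * theta) must lie inside [B_low, B_up]."""
    for r in range(len(B_low)):
        lo = sum(min(A_low[r][j] * t, A_up[r][j] * t)
                 for j, t in enumerate(theta))
        up = sum(max(A_low[r][j] * t, A_up[r][j] * t)
                 for j, t in enumerate(theta))
        if lo < B_low[r] or up > B_up[r]:
            return False
    return True

def feasible(theta, theta_star, A_low, A_up, B_low, B_up):
    return c_N_ok(theta, theta_star) and c_M_ok(theta, A_low, A_up, B_low, B_up)

# One-row system: [0.9, 1.1]*theta must stay inside [0.9, 1.3],
# nominal box theta in [0.9, 1.2]:
A_low, A_up, B_low, B_up = [[0.9]], [[1.1]], [0.9], [1.3]
print(feasible([1.0], [(0.9, 1.2)], A_low, A_up, B_low, B_up))  # True
print(feasible([1.5], [(0.9, 1.2)], A_low, A_up, B_low, B_up))  # False
```

Because the measurement constraints enclose (A_meas · Θ) from inside the bounds of B_meas, any feasible point stays within the inner approximation, which preserves the type II error freedom argued in the text.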

## 5.3 Conclusion

This chapter introduced the extension of the Kaucher based method to interval type specifications. The resulting nominal set is required to be located within one orthant. The definition of full consistency is still applicable if all parameters given in the interval type specification are part of the united solution set. A straightforward approach was introduced that uses the orthant-wise convexity of the united solution set to define full consistency based only on the vertexes of the specified parameter set.

Nevertheless, full consistency is a rather strict criterion and it is likely that there are some specified parameters that are part of the united solution set and some that are not. Following the concept of basic consistency, it is sufficient to show that there is at least one parameter vector that is part of the united solution set as well as of the specified parameter range. This property can be verified by checking individual arbitrary points for consistency. The choice of these points can be structured by using the concept of a feasibility problem. Therefore the notions defining the united solution set as well as the specified parameter set are transferred to linear matrix inequality constraints. If there is a solution of the feasibility problem, the VO and the specification are guaranteed to be basic consistent.

The method is still free of type II errors as the feasibility based consistent set is constrained by the genuine united solution set.

# 6 Guaranteed Verification of Hybrid Systems

Switched hybrid systems consist of two distinct parts with different properties and modeling goals. The dynamic part is used to model the plant dynamics as introduced and used in the previous chapters. The additional discrete event part models the superimposed switching logic. Based on logical rules, the discrete event part can activate different discrete states that will show different dynamic behavior. Different operation modes can thus be modeled as several subsystems showing individual behavior. All subsystems are interconnected by the discrete event switching mechanism.

The switched hybrid system structure is depicted in Fig. 6.1 and is formally introduced in this section<sup>7</sup>.

Figure 6.1: Structure of the hybrid system model

The hybrid system ℋ consists of a set of dynamic subsystems 𝒮 and a superimposed switching mechanism represented by the state machine Z.

The subsystems s^(i) ∈ 𝒮 show different behavior based on an individual parametrization. The state machine produces a switch signal which resembles its current discrete state. This signal is used to control an input and an output switch that determines which subsystem is activated.

<sup>7</sup> To improve readability the term "hybrid system" is used instead of "switched hybrid system" throughout this thesis.

The activated subsystem is fed with the general input signal and the resulting output signal is connected to the output of the hybrid system. It is possible to use the optional input and output values that are specified in the nominal system to start the active subsystem after a switch. Otherwise the current input and output values are kept across the switch. The input and output signals are also fed to the state machine where they are used to update the discrete state of the discrete event system. Due to this extended structure there are additional subjects included in the verification question that will be covered in this chapter. First the formalized model structure needs to be extended to a hybrid formulation. Therefore the discrete event part is modeled in the following, based on [Cas99, p. 66].

Then the verification problem is split into two subproblems: verification of the dynamic part and verification of the discrete event part. If both parts are verified individually, the next step is to examine their connection and interaction. This is done in three steps of increasing complexity.

Initially, the setting is simplified by assuming the discrete state to be measurable. This setting is used to introduce the basic hybrid method. Second, this assumption is dropped such that the current active discrete state needs to be determined for a given set of switches. Finally, an algorithm is developed that is able to determine the switching times and the active discrete states from measured input and output data only.

The necessary knowledge for the verification procedure is thus the same as in the previous chapters except that there are now several nominal systems and an additional specification of the discrete event system part.

### Definition 6.1 (Discrete State)

A discrete state

$$q^{(i)} \in \mathcal{Q} := \left\{ q^{(1)}, q^{(2)}, \dots, q^{(n\_q)} \right\} \tag{6.1}$$

is a vertex of a graph, e.g. of a state machine. It is part of a given set of discrete states 𝒬.

Note that for the ease of notation the "discrete state" is called "state" in the remainder of this thesis.

To implement logical conditions and constraints in the state machine, a set of events is defined:

#### Definition 6.2 (Event)

There is a set of events

$$\mathcal{E} := \left\{ e^{(1)}, e^{(2)}, \dots, e^{(n\_e)} \right\}. \tag{6.2}$$

Each event e^(i) is dependent on specific activation limits

$$\mathfrak{l}^{(i)} = \left[ \underline{l}^{(i)}, \,\, \overline{l}^{(i)} \right],\tag{6.3}$$

with i ∈ {1, 2, . . . , n_e}, defined for a given enabler signal W := ⟨w_k⟩_{k=1}^T. The event e^(i) is defined to be active as long as the value of the enabler signal lies within the given thresholds

$$w\_k \in \mathfrak{l}^{(i)}.\tag{6.4}$$

Note that Def. 6.2 does not pose any conditions on the activation limits. Therefore it is possible that several events are active at the same time. In the context of this thesis, events are allowed to be active for several time steps, i.e. as long as the enabler signal stays within the specified limits.

The transitions of the state machine are defined by a transition function.

#### Definition 6.3 (Transition Function)

A transition function

$$f: \mathcal{Q} \times \mathcal{E} \to \mathcal{Q} \tag{6.5}$$

represents a directed connection between two states, labeled by an event. In general, f is a partial function on its domain.

The notation f^(1): f(q^(1), e^(1)) = q^(2) means that transition f^(1) forms a directed connection from state q^(1) to state q^(2), dependent on event e^(1).

A transition can change the state of a state machine if the assigned event is active, but it does not necessarily have to. This is due to the fact that several events can be active at any given time, but no more than one transition is allowed to conduct a switch. However, it is also possible that the state does not change even though there are several activated transitions.

Based on the above definitions it is possible to set up the state machine representing the discrete event part.

#### Definition 6.4 (State Machine)

A state machine is defined to be given by the 4-tuple

$$Z := \left\{ \mathcal{Q}, \mathcal{E}, f, q^{(1)} \right\}, \tag{6.6}$$

with a finite set of states 𝒬, a finite set of events ℰ, a transition function f and an initial state q^(1).
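Def. 6.4 can be sketched with the partial transition function f stored as a dictionary keyed by (state, event) pairs; the states, events and transitions below are made up for illustration, and the class name is hypothetical.

```python
# Minimal sketch of a state machine Z = {Q, E, f, q^(1)} per Def. 6.4, with
# the partial transition function f stored as a dict. Missing (state, event)
# pairs simply leave the state unchanged, mirroring that f is partial.
# States, events and transitions are illustrative.

class StateMachine:
    def __init__(self, transitions, initial_state):
        self.f = transitions          # dict: (state, event) -> next state
        self.state = initial_state    # initial state q^(1)

    def step(self, active_events):
        """Fire at most one enabled transition: several events may be active
        simultaneously, but no more than one transition conducts a switch."""
        for e in active_events:
            if (self.state, e) in self.f:
                self.state = self.f[(self.state, e)]
                break
        return self.state

# Two states q1, q2 toggled by events e1/e2:
z = StateMachine({("q1", "e1"): "q2", ("q2", "e2"): "q1"}, "q1")
print(z.step(["e1"]))  # q2
print(z.step(["e1"]))  # q2 (no transition defined, state kept)
print(z.step(["e2"]))  # q1
```

The `break` after the first firing transition encodes the rule stated above that at most one transition may conduct a switch per time step.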

The state q_k of the discrete event part can be used to determine the system orders and parameters of the dynamic part necessary to set up a system according to Def. 5.2. This leads to the definition of a state dependent, discrete time, linear, interval type system:

Definition 6.5 (State Dependent Discrete Time Linear Interval Type System) The state dependent, discrete time, linear, interval type system can be modeled as

$$s^{(q\_k)} := \ y\_k = -\sum\_{i=1}^{n\_a(q\_k)} \mathbf{a}\_i(q\_k) y\_{k-i} + \sum\_{i=1}^{n\_c(q\_k)} \mathbf{c}\_i(q\_k) u\_{k-i} \tag{6.7}$$

with the discrete time input u_k, output y_k and state q_k. The input and output orders n_c(q_k) and n_a(q_k) as well as the interval type system parameters Θ(q_k) = [a_1(q_k), a_2(q_k), . . . , a_{n_a(q_k)}(q_k), c_1(q_k), c_2(q_k), . . . , c_{n_c(q_k)}(q_k)] are dependent on the current state q_k. All subsystems s^(q_k) form the set of subsystems

$$\mathcal{S} = \left\{ s^{(1)}, s^{(2)}, \dots, s^{(n\_q)} \right\}.\tag{6.8}$$

Each dynamic subsystem is directly linked with a discrete state. Therefore the state dependent dynamic subsystem s^(q_k = q^(i)) is denoted by s^(i) for the ease of notation. It is now possible to define the overall hybrid system.

### Definition 6.6 (Hybrid System)

A hybrid system ℋ consists of two system parts:

Discrete Event Part The superimposed switching mechanism given by a state machine

$$Z = \left\{ \mathcal{Q}, \mathcal{E}, f, q^{(1)} \right\},\tag{6.9}$$

according to Def. 6.4.

Dynamic Part The discrete time linear interval type systems are given by a finite set of subsystems

$$\mathcal{S} = \left\{ s^{(1)}, s^{(2)}, \dots, s^{(n\_q)} \right\} \tag{6.10}$$

where each subsystem s^(i) is active if and only if q^(i) is the current state. The subsystems s^(i) are defined according to Def. 6.5.

In general it is not possible to measure the current state of the state machine. However, for didactic reasons Assumption 6.1 is introduced and later dropped.

#### Assumption 6.1 (Measured State Signal)

The current state of the state machine can be measured and it is correctly given in the state signal

$$Q\_{meas} := \left\langle q\_{meas,k} \right\rangle\_{k=1}^T. \tag{6.11}$$

The measured state signal consists of several segments with different active states. A change in the active state is called switch.

#### Definition 6.7 (Switch)

The first time index k at which a new state is active, i.e.

$$q\_{meas,k} \neq q\_{meas,k-1},\tag{6.12}$$

is called switch k_s.

The switches within a state signal are additionally indexed in chronological order k_{s,v}, v ∈ {1, 2, . . . , n_switch}, with the total number of switches denoted by n_switch. The time index given by the first sampling point of the measurement data represents the first occurrence of the initial state and is thus defined to be k_{s,1} = 1.

The time index right before a switch, i.e. the last time index the current state is active, is called end of the current segment k'_{s,v} = k_{s,v+1} − 1. The end of the last segment is given by the last time index of the measurement data, which leads to k'_{s,n_switch} = T. A schematic sketch for n_switch = 3 is given in Fig. 6.2.

Figure 6.2: Schematic view of switches
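The switch extraction of Def. 6.7 can be sketched as follows, using 1-based time indexes as in the text; the state signal below is invented, and `find_switches` is a hypothetical helper name.

```python
# Sketch of Def. 6.7: extract the switch indexes k_s,v and the segment ends
# k'_s,v from a measured state signal (1-based time indexes as in the text).
# The example signal is made up for illustration.

def find_switches(q_meas):
    switches = [1]  # k_s,1 = 1: first occurrence of the initial state
    for k in range(2, len(q_meas) + 1):
        if q_meas[k - 1] != q_meas[k - 2]:   # q_meas,k != q_meas,k-1
            switches.append(k)
    # segment ends: one before each following switch, last end is T
    ends = [k - 1 for k in switches[1:]] + [len(q_meas)]
    return switches, ends

# Signal with three segments (n_switch = 3), T = 6:
q = ["q1", "q1", "q2", "q2", "q2", "q3"]
print(find_switches(q))  # ([1, 3, 6], [2, 5, 6])
```

The returned pair matches the conventions above: k_{s,1} = 1 and k'_{s,n_switch} = T.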

## 6.1 Verification of Hybrid Systems with Mapped State Signal

The introduced definitions can be used to set up the specification of a hybrid system.

Definition 6.8 (Specification of an Interval Type Hybrid System) An interval type specification of a hybrid system according to Def. 6.6 is given by

$$S\_{H,i}^\* := \{ Z^\*, \mathcal{S}\_i^\* \} \tag{6.13}$$

with the nominal state machine Z\* according to Def. 6.4 and the set of nominal dynamic systems 𝒮_i\* according to Def. 5.1.

A special case of Def. 6.8 is given if the parameters of the dynamic subsystems are point real values.

Definition 6.9 (Specification of a Point Real Hybrid System) A point real specification of a hybrid system according to Def. 6.6 is given by

$$S\_{H,d}^\* := \{ Z^\*, \mathcal{S}\_d^\* \} \tag{6.14}$$

with the nominal state machine $Z^\*$ according to Def. 6.4 and the set of nominal dynamic systems $\mathcal{S}\_d^\*$ according to Def. 4.2.

The dynamic part of a system according to Def. 6.9 is also called a linear time-variant system with piecewise constant parameters.

The verification method developed in this chapter is again based on interval enclosures of the input and output measurement data

$$\left[\mathbf{U}\_{meas}, \mathbf{Y}\_{meas}\right] = \left[\left<\mathbf{u}\_{meas,k}\right>\_{k=1}^{T}, \left<\mathbf{y}\_{meas,k}\right>\_{k=1}^{T}\right] \tag{6.15}$$

extended by a measured state signal according to Assumption 6.1. The state signal leads to a set of measured states $\mathcal{Q}\_{meas}$. It is assumed that the elements of the set of measured states can be mapped to the set of specified states. In this case, the set is called a mapped set of states according to Def. 6.10.

### Definition 6.10 (Mapped Set of States)

The set of measured states $\mathcal{Q}\_{meas}$, consisting of the unique values of the state signal $Q\_{meas} = \langle q\_{meas,k}\rangle\_{k=1}^{T}$, is called mapped with respect to the specification $S\_{H,i}^\*$, if all measured states $q\_{meas}^{(j)} \in \mathcal{Q}\_{meas}$ with $j \in \{1, 2, \dots, |\mathcal{Q}\_{meas}|\}$ can be mapped to an equivalent nominal state $q^{(i(j))\*} \in \mathcal{Q}^\*$ given in the specification, i.e.

$$q\_{meas}^{(j)} = q^{(i(j))\*}, \; j \in \{1, 2, \dots, |\mathcal{Q}\_{meas}|\}. \tag{6.16}$$

As $S\_{H,d}^\*$ is a special case of $S\_{H,i}^\*$, Def. 6.10 is also valid for $S\_{H,d}^\*$. In case $\mathcal{Q}\_{meas}$ is a mapped set of states, the state signal $Q\_{meas}$ is called a mapped state signal. If the mapped state $q^{(i(j))\*}$ is known, the respective mapped nominal subsystem $s^{(i(j))\*}$ is also known. The hybrid verification problem can now be formulated as follows:

Problem 6.1 (Mapped Set of States Based Point Real Hybrid Consistency) Is the nominal hybrid system, specified by a point real hybrid specification

$$S\_{H,d}^\* = \{Z^\*, \mathcal{S}\_d^\*\} \tag{6.17}$$

consistent with the input-output behavior given by the interval type enclosures of measurement values

$$\left[\mathbf{U}\_{meas}, \mathbf{Y}\_{meas}\right] = \left[\left<\mathbf{u}\_{meas,k}\right>\_{k=1}^{T}, \left<\mathbf{y}\_{meas,k}\right>\_{k=1}^{T}\right] \tag{6.18}$$

and the measured mapped state signal $Q\_{meas}$, i.e. can the measurement data be explained by the nominal system?

The problem can be solved by tackling the two system parts individually. This is possible due to an implicit connection given by the matching of the discrete states, which is done based on the dynamic parameters. First, the dynamic subsystems are considered and verified in the next section. Afterwards, the verification of the discrete event part is introduced. Finally, the results are combined to verify the overall hybrid system.

## 6.1.1 Verification of the Dynamic Subsystems

To verify the individual dynamic subsystems it is necessary to split the interval type measurement data $[\mathbf{U}\_{meas}, \mathbf{Y}\_{meas}]$ based on the information given in the mapped state signal. The resulting segments

$$\left\langle \left[ \mathbf{U}\_{meas}^{(j)}, \mathbf{Y}\_{meas}^{(j)} \right] \right\rangle\_{j=1}^{n\_{switch}} = \left\langle \left[ \left< \mathbf{u}\_{meas,k} \right>\_{k=k\_{\tau,j}}^{k\_{\tau',j}}, \left< \mathbf{y}\_{meas,k} \right>\_{k=k\_{\tau,j}}^{k\_{\tau',j}} \right] \right\rangle\_{j=1}^{n\_{switch}} \tag{6.19}$$

can be verified individually against the respective specification in the set of subsystems $\mathcal{S}\_d^\*$. It is possible to verify the individual dynamic behavior<sup>8</sup> of each state present in the measurement data $q\_{meas}^{(j)} \in \mathcal{Q}\_{meas}$ as defined in Prop. 6.1.
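The splitting in (6.19) amounts to slicing the enclosed input and output sequences at the switch indices. A minimal sketch, assuming 1-based switch indices and plain Python lists for the enclosed signals (names are illustrative):

```python
def split_segments(u, y, k_tau, T):
    """Split measurement data into segments per (6.19).
    u, y: samples indexed 1..T (stored 0-based in Python lists);
    k_tau: 1-based switch indices with k_tau[0] == 1."""
    ends = [k - 1 for k in k_tau[1:]] + [T]   # segment ends k_tau',j
    return [(u[s - 1:e], y[s - 1:e]) for s, e in zip(k_tau, ends)]

u = list(range(1, 16))    # placeholder input samples u_1..u_15
y = [10 * v for v in u]   # placeholder output samples
segs = split_segments(u, y, [1, 6, 10], 15)
print([len(su) for su, sy in segs])  # [5, 4, 6]
```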

### Proposition 6.1 (Dynamic Consistency of a Segment)

The interval type enclosure of the measurement data is split into $j \in \{1, 2, \dots, n\_{switch}\}$ parts, given by the segments $[\mathbf{U}\_{meas}^{(j)}, \mathbf{Y}\_{meas}^{(j)}]$. Each segment represents a specific state $q\_{meas}^{(j)}$, a regressor matrix $\mathbf{A}^{(j)}$ and a measurement vector $\mathbf{b}^{(j)}$. The segment is dynamic consistent with the respective mapped subsystem $s^{(i(j))\*} \in \mathcal{S}\_d^\*$, if the specified system parameters $\Theta^{(i(j))\*} = \left[a\_1^\*(i(j)), a\_2^\*(i(j)), \dots, a\_{n^\*(i(j))}^\*(i(j)), c\_1^\*(i(j)), c\_2^\*(i(j)), \dots, c\_{m^\*(i(j))}^\*(i(j))\right]$ are part of the united solution set $\sum\_{\exists\exists}^{(j)} = \sum\_{\exists\exists}(\mathbf{A}^{(j)}, \mathbf{b}^{(j)})$, i.e. if

$$
\Theta^{(i(j))\*} \in \sum^{(j)}\_{\exists \exists} \ . \tag{6.20}
$$

### Proof:

The measured state signal includes the true active states. It is thus guaranteed that only measurement data generated by subsystem $s^{(j)}$ based on the active state $q\_{meas}^{(j)}$ is included in $[\mathbf{U}\_{meas}^{(j)}, \mathbf{Y}\_{meas}^{(j)}]$ and that this data is not corrupted by measurements generated by other subsystems. The current state $q\_{meas}^{(j)}$ is a mapped state according to Def. 6.10, and thus the connection between measurement and specification is also correct and the respective nominal subsystem $s^{(i(j))\*}$ is known.

Using this information, the setting can be reduced to the time invariant consistency problem given in Problem 4.1 for segment $j$ and subsystem $s^{(i(j))\*}$, and time invariant consistency can be checked according to Prop. 4.1, which proves Prop. 6.1.

The considerations are now extended to the complete set of measurement data, consisting of several segments.

<sup>8</sup> As a direct specification $\mathcal{S}\_d^\*$ is used in the definition, "dynamic consistency" means "time invariant full consistency" throughout this chapter.

Proposition 6.2 (Dynamic Consistency of the Measurement Data) The mapped state signal $Q\_{meas}$ and the segmented measurement data $\langle[\mathbf{U}\_{meas}^{(j)}, \mathbf{Y}\_{meas}^{(j)}]\rangle\_{j=1}^{n\_{switch}}$ of a VO, leading to the states $q\_{meas}^{(j)}$, the regressor matrices $\mathbf{A}^{(j)}$ and the measurement vectors $\mathbf{b}^{(j)}$ with $j \in \{1, 2, \dots, n\_{switch}\}$, are dynamic consistent with a set of direct specifications $\mathcal{S}\_d^\*$, if there is dynamic consistency of each segment given in the measurement data with its respective mapped subsystem $s^{(i(j))\*}$, i.e.

$$\Theta^{(i(j))\*} \in \sum\_{\exists \exists}^{(j)}, \; \forall j \in \{1, 2, \dots, n\_{switch}\}. \tag{6.21}$$

#### Proof:

Prop. 6.2 follows straightforwardly by applying Prop. 6.1 to all $j \in \{1, 2, \dots, n\_{switch}\}$ segments given in the segmented measurement data $\langle[\mathbf{U}\_{meas}^{(j)}, \mathbf{Y}\_{meas}^{(j)}]\rangle\_{j=1}^{n\_{switch}}$.

It is possible to reformulate this proposition to get the inverse relation similar to Prop. 5.2.

Proposition 6.3 (Inverse of Dynamic Consistency of the Measurement Data) The mapped state signal $Q\_{meas}$ and the segmented measurement data $\langle[\mathbf{U}\_{meas}^{(j)}, \mathbf{Y}\_{meas}^{(j)}]\rangle\_{j=1}^{n\_{switch}}$ of a VO, leading to the states $q\_{meas}^{(j)}$, the regressor matrices $\mathbf{A}^{(j)}$ and the measurement vectors $\mathbf{b}^{(j)}$ with $j \in \{1, 2, \dots, n\_{switch}\}$, are called dynamic inconsistent with a set of direct specifications $\mathcal{S}\_d^\*$, if there is at least one segment $j \in \{1, 2, \dots, n\_{switch}\}$ given in the measurement data that does not show dynamic consistency with its respective mapped subsystem $s^{(i(j))\*}$ according to Prop. 6.1, i.e.

$$\exists j \in \{1, 2, \ldots, n\_{switch}\} : \; \Theta^{(i(j))\*} \notin \sum\_{\exists \exists}^{(j)}.\tag{6.22}$$

#### Proof:

For dynamic consistency of the measurement data according to Prop. 6.2 it is necessary that all segments of the measurement data are dynamic consistent according to Prop. 6.1. If there is a segment that does not show dynamic consistency, this segment cannot be explained by the specification. Thus there is unspecified behavior and it is not possible to explain the whole measurement data by the specification.

Note that these propositions are based on the segments given in the measurement data. However, it is possible that there is nominal behavior that is not present in the measurement data. Though this does not change the dynamic consistency result for the measurement, it does influence the discrete event verification result introduced in the next section.

Also note that a mapped state signal is used in Prop. 6.2 and Prop. 6.3. This means that all segments given in the measurement data can be mapped to the specification, as stated in Def. 6.10.

## 6.1.2 Verification of the Discrete Event System

The next step in the verification of the hybrid system is given by regarding the discrete event part. Therefore it is necessary to determine the state machine that generated the measurement data. The active states given in the mapped state signal represent a trace of the unknown generating state machine $Z$. The system dynamics represent the state of the system and not an emission of a state or an event. Therefore it is in this case possible to reconstruct $Z$ based on this trace. The reconstructed generating state machine $Z$ can then be compared with the specified state machine $Z^\*$ by comparing the corresponding states and transitions.

## Proposition 6.4 (Full State Consistency)

The mapped state signal $Q\_{meas}$, given for the discrete sampling points $k \in \{1, 2, \dots, T\}$ and leading to the mapped set of states $\mathcal{Q}\_{meas}$ of the discrete event part of a VO, is called full state consistent with the nominal state machine $Z^\*$, if

$$\mathcal{Q}^\* = \mathcal{Q}\_{meas}.\tag{6.23}$$

## Proof:

All nominal states $\mathcal{Q}^\*$ are given in the specification of the state machine $Z^\*$. The states implemented in the VO are given in the mapped set of states $\mathcal{Q}\_{meas}$. According to Def. 6.10, this means that both sets are defined on the same elements. Full state consistency means that exactly the specified states are given in the measurement data, i.e. that both sets contain the same elements. This comparison is given in (6.23) and proves full state consistency.

Due to a measurement scenario that does not cover all states it is possible that not all dynamics are present in the measurement data. This case leads to partial state consistency.

### Proposition 6.5 (Partial State Consistency)

The mapped state signal $Q\_{meas}$, given for the discrete sampling points $k \in \{1, 2, \dots, T\}$ and leading to the mapped set of states $\mathcal{Q}\_{meas}$ of the discrete event part of a VO, is called partial state consistent with the nominal state machine $Z^\*$, if

$$
\mathcal{Q}^\* \supset \mathcal{Q}\_{meas}.\tag{6.24}
$$

#### Proof:

Based on Prop. 6.4 and Def. 6.10, $\mathcal{Q}\_{meas}$ and $\mathcal{Q}^\*$ are defined on the same elements, i.e. there is a mapping $q\_{meas}^{(j)} = q^{(i(j))\*}$ for $j \in \{1, 2, \dots, |\mathcal{Q}\_{meas}|\}$. If additionally (6.24) holds,

$$q\_{meas}^{(j)} \in \mathcal{Q}\_{meas} \Rightarrow q\_{meas}^{(j)} = q^{(i(j))\*} \in \mathcal{Q}^\* \tag{6.25}$$

holds as well. This means that all implemented states $q\_{meas}^{(j)} \in \mathcal{Q}\_{meas}$ are also part of the specification $\mathcal{Q}^\*$. Thus there are only specified states in the measurement data. However, there are states in the specification that are not part of the implementation and prevent full state consistency. The discrete event part of a VO is hence called partial state consistent.

Partial consistency of the discrete event part of a VO is a hint to improve the measurement scenario or to collect more measurement data.

If there is neither full nor partial consistency, the system is state inconsistent according to Prop. 6.6.

#### Proposition 6.6 (State Inconsistency)

The state signal $Q\_{meas}$, given for the discrete sampling points $k \in \{1, 2, \dots, T\}$ and leading to the set of states $\mathcal{Q}\_{meas}$ of the discrete event part of a VO, is called state inconsistent with the nominal state machine $Z^\*$, if

$$
\mathcal{Q}^\* \not\supseteq \mathcal{Q}\_{meas}.\tag{6.26}
$$

#### Proof:

If (6.26) holds, there are implemented states $q\_{meas}^{(j)} \in \mathcal{Q}\_{meas}$ that are not part of the specification $\mathcal{Q}^\*$. Thus there are unspecified states in the measurement data. The discrete event part of a VO is hence called state inconsistent.
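Props. 6.4 to 6.6 reduce to a comparison of two finite sets, which can be sketched compactly (the function name and string results are illustrative):

```python
def state_consistency(Q_star, Q_meas):
    """Classify state consistency (Props. 6.4-6.6) by comparing the
    nominal set of states Q* with the mapped set of measured states."""
    Q_star, Q_meas = set(Q_star), set(Q_meas)
    if Q_star == Q_meas:
        return "full"          # (6.23): identical sets
    if Q_meas < Q_star:
        return "partial"       # (6.24): strict subset, states not excited
    return "inconsistent"      # (6.26): unspecified states were measured

print(state_consistency({1, 2}, {1, 2}))     # full
print(state_consistency({1, 2, 3}, {1, 2}))  # partial
print(state_consistency({1, 2}, {1, 2, 4}))  # inconsistent
```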

Prop. 6.6 can be connected with the inverse of dynamic consistency of the measurement data as given in Prop. 6.3. If there is an additional state $q\_{meas}^{(j)} \notin \mathcal{Q}^\*$, the respective measurement data $[\mathbf{U}\_{meas}^{(j)}, \mathbf{Y}\_{meas}^{(j)}]$ cannot be explained by any $s^{(i)\*}$ within the specification. In this case there is both dynamic inconsistency according to Prop. 6.3 and state inconsistency according to Prop. 6.6.

The second part of the discrete event system to be verified is the transition function. In addition to the mapped state signal<sup>9</sup> $\langle q\_{meas,k}\rangle\_{k=1}^{T-1}$, the set of measured events $\mathcal{E}\_{meas}$ needs to be obtained. According to Def. 6.2 there is an active event $e\_{meas,k}^{(i)} \in \mathcal{E}\_{meas}$ if $w\_k \in l^{(i)\*}$.

<sup>9</sup> The last measurement value at $k = T$ cannot be evaluated as there is no following state at $k = T + 1$.
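The extraction of active events from an interval-enclosed enabler signal (Def. 6.2) is an interval-intersection test. A sketch, representing intervals as `(lo, hi)` pairs (an implementation choice, not thesis notation) and using the nominal limits that appear later in Example 6.1:

```python
import math

def active_events(w_k, limits):
    """Indices i of the events e^(i)* active at time k: the enclosure
    [w_k] must intersect the nominal limit interval l^(i)*."""
    lo_w, hi_w = w_k
    return [i for i, (lo, hi) in sorted(limits.items())
            if hi_w >= lo and lo_w <= hi]   # nonempty intersection

# Nominal limits (6.30)-(6.33) of Example 6.1:
l = {1: (1, 2), 2: (31, 33),
     3: (-math.inf, math.inf), 4: (-math.inf, math.inf)}
print(active_events((0.6, 1.6), l))    # [1, 3, 4]
print(active_events((31.9, 32.9), l))  # [2, 3, 4]
```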

Full transition consistency is then defined as follows:

### Proposition 6.7 (Full Transition Consistency)

The mapped state signal $\langle q\_{meas,k}\rangle\_{k=1}^{T-1}$ and the set of events $\mathcal{E}\_{meas}$ of the discrete event part of a VO are called full transition consistent with the nominal state machine $Z^\*$, if

$$\left(\forall q\_{meas,k}\in\left\langle q\_{meas,k}\right\rangle\_{k=1}^{T-1}\right)\land\left(\exists e\_{meas,k}^{(i)}\in\mathcal{E}\_{meas}\right):\left(f^\*(q\_{meas,k},e\_{meas,k}^{(i)})=q\_{meas,k+1}\right).\tag{6.27}$$

#### Proof:

All nominal transitions are given in the transition function $f^\*$ according to Def. 6.3. The transition function $f^\*(q\_{meas,k}, e\_{meas,k}^{(i)})$ is evaluated for the current measurement state $q\_{meas,k}$ and the current events $e\_{meas,k}^{(i)}$. If the transition function yields the following measurement state $q\_{meas,k+1}$ for at least one event $e\_{meas,k}^{(i)}$, the right hand side of (6.27) holds. Thus the observed transition at time $k$ is part of the nominal transition function.

If the right hand side of (6.27) holds for the whole measurement sequence $k \in \{1, 2, \dots, T-1\}$, i.e. $\forall q\_{meas,k} \in \langle q\_{meas,k}\rangle\_{k=1}^{T-1}$, all observed state transitions are defined in the nominal transition function $f^\*$. Thus the measurement is full transition consistent.

Prop. 6.7 also implies that the current values of the enabler signal $w\_k$ are within the nominal limits $l^{(i)\*}$ at each switch, i.e. at $k = k\_{\tau,j} - 1$ with $j \in \{2, 3, \dots, n\_{switch}\}$.<sup>10</sup> In contrast to state consistency, full transition consistency can be achieved even though a nominal transition is never triggered by the measurement data.

The notion of partial transition consistency includes specified transitions that are triggered at unexpected times.

#### Proposition 6.8 (Partial Transition Consistency)

The mapped state signal $\langle q\_{meas,k}\rangle\_{k=1}^{T-1}$ and the set of measured events $\mathcal{E}\_{meas}$ of the discrete event part of a VO are called partial transition consistent with the nominal state machine $Z^\*$, if

$$\left(\exists q\_{meas,k}\in\left\langle q\_{meas,k}\right\rangle\_{k=1}^{T-1}\right)\land\left(\exists\tilde{e}\in\mathcal{E}^\*\right)\colon\left(f^\*\left(q\_{meas,k},\tilde{e}\right)=q\_{meas,k+1}\right)\land\left(\tilde{e}\neq e\_{meas,k}^{(i)}\right).\tag{6.28}$$

<sup>10</sup> It is not necessary to check the first switch, as it represents the beginning of the experiment.

### Proof:

The transition function $f^\*$ is evaluated as explained in Prop. 6.7, but using the set of nominal events $\mathcal{E}^\*$ instead of the set of measured events $\mathcal{E}\_{meas}$. Thus $f^\*(q\_{meas,k}, \tilde{e})$ can yield the correct following measurement state $q\_{meas,k+1}$ for any specified event $\tilde{e} \in \mathcal{E}^\*$, regardless of the currently measured events $e\_{meas,k}^{(i)}$.

Condition (6.28) is fulfilled if there is at least one transition in the measurement sequence $k \in \{1, 2, \dots, T-1\}$ that was triggered by an unexpected event $\tilde{e} \neq e\_{meas,k}^{(i)}$.

Unspecified transitions and transitions connecting unspecified states lead to transition inconsistency according to Prop. 6.9. This represents the situation where it is not possible to explain the observed transition by the transition function.

### Proposition 6.9 (Transition Inconsistency)

The mapped state signal $\langle q\_{meas,k}\rangle\_{k=1}^{T-1}$ and the set of events $\mathcal{E}\_{meas}$ of the discrete event part of a VO are called transition inconsistent with the nominal state machine $Z^\*$, if

$$\left(\exists q\_{meas,k}\in\langle q\_{meas,k}\rangle\_{k=1}^{T-1}\right)\land\left(\forall\tilde{e}\in\mathcal{E}^\*\right)\;:\;\left(f^\*(q\_{meas,k},\tilde{e})\neq q\_{meas,k+1}\right).\tag{6.29}$$

### Proof:

Condition (6.29) is fulfilled if there is at least one unspecified transition at a state $q\_{meas,k} \in \langle q\_{meas,k}\rangle\_{k=1}^{T-1}$ that is not defined for any nominal event $\tilde{e} \in \mathcal{E}^\*$. Therefore the measured transition is unspecified, which is inconsistent in the sense of this thesis.
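Props. 6.7 to 6.9 can be summarized in one loop over the state signal. In this sketch, `f_star` is a dictionary encoding the transition function and `events_meas[k]` holds the events measured at step k; all names and the dictionary representation are illustrative assumptions:

```python
def transition_consistency(q_meas, events_meas, f_star, E_star):
    """Classify transition consistency (Props. 6.7-6.9)."""
    partial = False
    for k in range(len(q_meas) - 1):
        q, q_next = q_meas[k], q_meas[k + 1]
        # (6.27): a measured active event explains the observed transition
        if any(f_star.get((q, e)) == q_next for e in events_meas[k]):
            continue
        # (6.28): only an unmeasured nominal event explains it
        if any(f_star.get((q, e)) == q_next for e in E_star):
            partial = True
        else:
            return "inconsistent"   # (6.29): no nominal event explains it
    return "partial" if partial else "full"

# Transition function (6.34)-(6.37) of Example 6.1:
f_star = {(1, 1): 2, (2, 2): 1, (1, 3): 1, (2, 4): 2}
q = [1, 1, 2, 2, 1]
ev = [[3], [1, 3], [4], [2, 4]]   # measured active events per step
print(transition_consistency(q, ev, f_star, {1, 2, 3, 4}))  # full
```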

The results of state and transition consistency are combined such that the weakest result of the individual conditions determines the result of the whole discrete event system. The respective propositions are given in the following.

### Proposition 6.10 (Full Discrete Consistency)

The discrete event part of a VO is called full discrete consistent with the nominal state machine $Z^\*$, if there is full state consistency according to Prop. 6.4 and full transition consistency according to Prop. 6.7.

### Proof:

The main components of the discrete event part according to Def. 6.4 are the set of states $\mathcal{Q}^\*$, the finite set of events $\mathcal{E}^\*$ and the transition function $f^\*$. The consistency of these components is verified using the propositions on state and transition consistency. Only if all components show full consistency according to the respective propositions does the overall system show full consistency. Thus full discrete consistency is only given if there is full state consistency according to Prop. 6.4 and full transition consistency according to Prop. 6.7.

## Proposition 6.11 (Discrete Inconsistency)

The discrete event part of a VO is called discrete inconsistent with the nominal state machine $Z^\*$, if there is state inconsistency according to Prop. 6.6 or transition inconsistency according to Prop. 6.9.

### Proof:

Discrete inconsistency is likewise an integral property of the main components of the discrete event part, as shown in Prop. 6.10. Therefore the overall discrete system is inconsistent as soon as there is at least one inconsistent component, i.e. if there is state inconsistency according to Prop. 6.6 or transition inconsistency according to Prop. 6.9.

### Proposition 6.12 (Partial Discrete Consistency)

The discrete event part of a VO is called partial discrete consistent with the nominal state machine $Z^\*$, if there is neither full discrete consistency according to Prop. 6.10 nor discrete inconsistency according to Prop. 6.11.

## Proof:

If there is neither full discrete consistency according to Prop. 6.10 nor discrete inconsistency according to Prop. 6.11, there is at least one main component that shows partial consistency according to Prop. 6.5 or Prop. 6.8. Even though the other main component might show full consistency according to Prop. 6.4 or Prop. 6.7, the integral property cannot be better than the properties of the included main components.

The implementation of these propositions is straightforward, as all necessary sets and values are available in the given setting. More realistic scenarios assuming less a priori knowledge are introduced in Section 6.2 and Section 6.3.

## 6.1.3 Combination of the Dynamic and the Discrete Verification Results

The results of the discrete event system part can be joined with the results of the dynamic system part to achieve the overall assessment of the hybrid system. The solution of Problem 6.1 is thus given by Prop. 6.13 as follows:

Proposition 6.13 (Mapped Set of States Based Point Real Hybrid Consistency) The mapped state signal $Q\_{meas}$, the interval type enclosures of the measurement data $[\mathbf{U}\_{meas}, \mathbf{Y}\_{meas}]$ and the set of events $\mathcal{E}\_{meas}$ of the verification object are called consistent with a direct hybrid specification $S\_{H,d}^\*$, if there is dynamic consistency of the measurement data according to Prop. 6.2 and full discrete consistency according to Prop. 6.10.
## Proof:

Based on the mapped state signal according to Def. 6.10, the currently active subsystem is known at each time step. The resulting trace of the state machine is checked for consistency with the discrete event specification $Z^\*$ by regarding the states and transitions. Consistency of the states can be checked by Prop. 6.4 and consistency of the transitions by Prop. 6.7. If both parts are consistent, the measurement is consistent with the specified state machine. The dynamics of the included subsystems can be checked according to Prop. 6.2, again assuming a mapped set of measured states $\mathcal{Q}\_{meas}$. If both system parts are consistent, the overall hybrid system is consistent.

Consistency of the hybrid system results from the combination of the results for the dynamic and discrete subsystems according to Prop. 6.13. Inconsistency of the hybrid system is given as soon as one subsystem is inconsistent. Otherwise the hybrid system is partial consistent, meaning that there is no inconsistent subsystem but at least one subsystem is partial consistent.
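This combination rule (the weakest subsystem result determines the hybrid result) can be expressed as a minimum over an ordered scale; the string labels and ranking encoding are illustrative:

```python
RANK = {"inconsistent": 0, "partial": 1, "full": 2}

def hybrid_consistency(dynamic_result, discrete_result):
    """Overall hybrid result: the weakest of the two subsystem results,
    as described for Prop. 6.13 and its surrounding discussion."""
    return min(dynamic_result, discrete_result, key=RANK.__getitem__)

print(hybrid_consistency("full", "full"))             # full
print(hybrid_consistency("full", "partial"))          # partial
print(hybrid_consistency("partial", "inconsistent"))  # inconsistent
```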


An overview of the different propositions and possible results is given in Tab. 6.1.

By applying Prop. 6.2, consistency of the dynamic part of the hybrid system can be shown in a straightforward way. The results are calculated based on the propositions of the previous chapters and thus show the same guaranteed properties as defined there. A direct specification $\mathcal{S}\_d^\*$ was used throughout the chapter for notational simplicity. It is straightforward to extend all propositions to hold also for an interval type specification $\mathcal{S}\_i^\*$. This is due to the fact that the dynamic verification is based on the united solution set given by the measurement data. Therefore all properties stay the same, except that the verification of dynamic consistency is done based on Prop. 5.1 instead of Prop. 4.1.

Subsystems that are verified using the Kaucher based approach are guaranteed to be correct, and it is not possible that there are any hidden faults present in the system.

The state machine has to use point real numbers in both cases. Therefore the definitions and propositions for the discrete event part are unchanged.

An example of the hybrid verification procedure for a mapped set of measured states is given in Example 6.1.

#### Example 6.1:

Assume a direct hybrid specification $S\_{H,d}^\* = \{Z^\*, \mathcal{S}\_d^\*\}$. The nominal state machine $Z^\*$ is depicted in Fig. 6.3.

Figure 6.3: Specified state machine $Z^\*$

The state machine consists of two states $\mathcal{Q}^\* = \{q^{(1)\*}, q^{(2)\*}\} = \{1, 2\}$; the events are based on the enabler signal, which is defined to be the output signal, $w\_k = y\_k$:

$$e^{(1)\*}: w\_k \in l^{(1)} = [1, \, 2] \tag{6.30}$$

$$e^{(2)\*}: w\_k \in l^{(2)} = [31, 33] \tag{6.31}$$

$$e^{(3)\*} : w\_k \in l^{(3)} = [-\infty, \, \infty] \tag{6.32}$$

$$e^{(4)\*} : w\_k \in l^{(4)} = [-\infty, \, \infty] \tag{6.33}$$

The events (3)\* and (4)\* are enabled for all values of the enabler signal, leading to the permanent possibility to stay in the current state, based on the transition function:

$$f^\*\left(q^{(1)\*}, e^{(1)\*}\right) = q^{(2)\*}\tag{6.34}$$

$$f^\*\left(q^{(2)\*},e^{(2)\*}\right) = q^{(1)\*}\tag{6.35}$$

$$f^\*\left(q^{(1)\*},e^{(3)\*}\right) = q^{(1)\*}\tag{6.36}$$

$$f^\*\left(q^{(2)\*},e^{(4)\*}\right) = q^{(2)\*}.\tag{6.37}$$

The set of dynamic systems $\mathcal{S}\_d^\*$ is given by first order systems, i.e. $n^{(i)} = m^{(i)} = 1$, $\forall i \in \{1, 2\}$, leading to

$$s^{(i)\*}:\ y\_k = a\_1^\*(i)y\_{k-1} + c\_1^\*(i)u\_{k-1}.\tag{6.38}$$

The nominal parameters of the dynamic subsystems are given in Tab. 6.2.


Table 6.2: Nominal parameters of the subsystems (1)\* and (2)\*

There are no optional initial input and output values given. Thus the input and output values are kept across the switches.
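For reference, the first order recursion (6.38) can be simulated directly. The parameter values below are placeholders for illustration only, since the actual nominal values are those of Tab. 6.2:

```python
def simulate_subsystem(a1, c1, u, y0):
    """Simulate y_k = a1 * y_{k-1} + c1 * u_{k-1} per (6.38)."""
    y = [y0]
    for u_prev in u[:-1]:
        y.append(a1 * y[-1] + c1 * u_prev)
    return y

# Hypothetical parameters and a constant input, for illustration only:
y = simulate_subsystem(0.5, 1.0, [1.0] * 5, 0.0)
print(y)  # [0.0, 1.0, 1.5, 1.75, 1.875]
```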

The implementation is assumed to be done by one or more human developers. Therefore there might be inconsistencies in the resulting VO. Note that the implemented system is assumed to consist of real hardware and software and to include a given plant that cannot be changed. Therefore the corresponding state machine and its dynamical subsystems are not directly known. Nevertheless, it is possible to excite the system and measure its output and state signals. The random excitation signal used in this example is given by

$$
u\_k = 1 + 0.2\eta\_k,\tag{6.39}$$

where $\eta\_k$ is drawn from a standard normal distribution. The resulting measurement data is given in Fig. 6.4. Thereby the output was enclosed by intervals using an additive fault of $\epsilon = 0.5$. The switching times are based on the information given in the mapped state signal

$$Q\_{meas} = [1, 1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1] \tag{6.40}$$

leading to the switches $k\_{\tau} = [1, 6, 10]$. The relevant values of the enabler signal are given at the time steps right before a switch, $k\_{\tau,j} - 1$ with $j \in \{2, 3\}$, i.e. $\boldsymbol{w}\_5 = [0.6, 1.6]$ and $\boldsymbol{w}\_9 = [31.9, 32.9]$.

Figure 6.4: Measured trajectory and subsystem switches

#### Dynamic Consistency

The subsystems are verified using the introduced Kaucher based method. The resulting feasibility signals are depicted in Fig. 6.5. The first verification result can be calculated for $k = \max(n, m) + 1$. Due to the autoregressive system of order $n = m = 1$ in this example, it is thus not possible to calculate a verification result for the very first element of each segment. It can be seen that all three segments $j \in \{1, 2, 3\}$ can be explained by the respective mapped nominal states $i \in \{1, 2\}$ using the Kaucher based method according to Prop. 6.2. Therefore the dynamic subsystems are full consistent with the specification $\mathcal{S}\_d^\*$.

Figure 6.5: Verification result for each segment (using a mapped state signal)

#### Discrete Consistency

There is a mapped set of states given in this example. Therefore the measured set of states is given by the unique values in the mapped state signal

$$\mathcal{Q}\_{meas} = \{1, 2\} = \left\{ q^{(1)\*}, q^{(2)\*} \right\} = \mathcal{Q}^\* \tag{6.41}$$

which leads to full state consistency according to Prop. 6.4. The set of measured events can be extracted from the measurement data. First, the events leading to the switches at $k\_{\tau} \in \{6, 10\}$ are determined:

$$e\_{meas,5}^{(1)} : \boldsymbol{w}\_5 = [0.6, \ 1.6] \cap [1, \ 2] \neq \emptyset : e^{(1)\*} \tag{6.42}$$

$$e\_{meas,5}^{(2)} : \boldsymbol{w}\_5 = [0.6, \ 1.6] \cap [-\infty, \ \infty] \neq \emptyset : e^{(3)\*} \tag{6.43}$$

$$e\_{meas,5}^{(3)} : \boldsymbol{w}\_5 = [0.6, \ 1.6] \cap [-\infty, \ \infty] \neq \emptyset : e^{(4)\*} \tag{6.44}$$

$$e\_{meas,9}^{(1)} : \boldsymbol{w}\_9 = [31.9, \ 32.9] \cap [31, \ 33] \neq \emptyset : e^{(2)\*} \tag{6.45}$$

$$e\_{meas,9}^{(2)} : \boldsymbol{w}\_9 = [31.9, \ 32.9] \cap [-\infty, \ \infty] \neq \emptyset : e^{(3)\*} \tag{6.46}$$

$$e\_{meas,9}^{(3)} : \boldsymbol{w}\_9 = [31.9, \ 32.9] \cap [-\infty, \ \infty] \neq \emptyset : e^{(4)\*} . \tag{6.47}$$

This means that $\{e^{(1)\*}, e^{(3)\*}, e^{(4)\*}\}$ are activated at $k = 5$ and $\{e^{(2)\*}, e^{(3)\*}, e^{(4)\*}\}$ are activated at $k = 9$. The events can now be applied to the nominal transition function:

$$f^\*\left(q\_{meas,5}, e\_{meas,5}^{(1)}\right) = f^\*\left(q^{(1)\*}, e^{(1)\*}\right) \stackrel{!}{=} q^{(2)\*} = q\_{meas,6} \tag{6.48}$$

$$f^\*\left(q\_{meas,5}, e\_{meas,5}^{(2)}\right) = f^\*\left(q^{(1)\*}, e^{(3)\*}\right) \stackrel{!}{=} q^{(1)\*} \neq q\_{meas,6} \tag{6.49}$$

$$f^\*\left(q\_{meas,5}, e\_{meas,5}^{(3)}\right) = f^\*\left(q^{(1)\*}, e^{(4)\*}\right) = \emptyset \tag{6.50}$$

$$f^\*\left(q\_{meas,9}, e\_{meas,9}^{(1)}\right) = f^\*\left(q^{(2)\*}, e^{(2)\*}\right) \stackrel{!}{=} q^{(1)\*} = q\_{meas,10} \tag{6.51}$$

$$f^\*\left(q\_{meas,9}, e^{(2)}\_{meas,9}\right) = f^\*\left(q^{(2)\*}, e^{(3)\*}\right) = \emptyset \tag{6.52}$$

$$f^\*\left(q\_{meas,9}, e\_{meas,9}^{(3)}\right) = f^\*\left(q^{(2)\*}, e^{(4)\*}\right) \stackrel{!}{=} q^{(2)\*} \neq q\_{meas,10}.\tag{6.53}$$

It can be seen that (6.48) and (6.51) hold. This leads to the verification of the transition function at $k \in \{5, 9\}$, which generates the switches at $k \in \{6, 10\}$. The events $e^{(3)\*}$ and $e^{(4)\*}$ are enabled for all $k \in \{1, 2, \dots, 15\}$ and provide the possibility to stay in the same state for several time steps. To improve readability, only the verification of one exemplary step is shown. The activated events are

$$e^{(1)}\_{meas,8} : \boldsymbol{w}\_8 = [15.3, \ 16.3] \cap [-\infty, \ \infty] \neq \emptyset : e^{(3)\*} \tag{6.54}$$

$$e^{(2)}\_{meas,8} : \boldsymbol{w}\_8 = [15.3, \ 16.3] \cap [-\infty, \ \infty] \neq \emptyset : e^{(4)\*}. \tag{6.55}$$

These are applied to the nominal transition function

$$f^\*\left(q\_{meas,8}, e\_{meas,8}^{(1)}\right) = f^\*\left(q^{(2)\*}, e^{(3)\*}\right) = \emptyset \tag{6.56}$$

$$f^\*\left(q\_{meas,8}, e\_{meas,8}^{(2)}\right) = f^\*\left(q^{(2)\*}, e^{(4)\*}\right) \stackrel{!}{=} q^{(2)\*} = q\_{meas,9}.\tag{6.57}$$

Thus the behavior is valid at time $k = 8$. Similar results are obtained for all other time steps. The nominal transition function holds $\forall q\_{meas,k} \in Q\_{meas}$, which leads to full transition consistency according to Prop. 6.7. Hence full state consistency (Prop. 6.4) and full transition consistency (Prop. 6.7) hold in the given example. This leads to full discrete consistency between $Z$ and $Z^\*$ according to Prop. 6.10.

### Hybrid Consistency

The previous partial results are now combined with respect to hybrid consistency as given in Prop. 6.13. It was shown that $Z$ is full consistent with $Z^\*$ and that $\mathcal{S}$ is full consistent with $\mathcal{S}\_d^\*$. Therefore the verification object and the specification $S\_{H,d}^\*$ are consistent. This means that the superimposed state machine as well as the linear dynamic subsystems of the VO that produced the measurement in Fig. 6.4 are full consistent with the specification $S\_{H,d}^\*$ given in this example. Therefore the VO is verified with respect to the nominal system.

## 6.2 Verification of Hybrid Systems With Given Switching Times

In general it is not possible to measure the internal signals and states of a VO. Therefore the setting is changed and the assumption of an available mapped state signal is dropped. Nevertheless, it is still assumed that the correct times of the switches are available, even though the respective active states are unknown. The resulting consistency problem is given in the following:

Problem 6.2 (Point Real Hybrid Consistency with Given Switches) Is the nominal hybrid system, specified by a direct hybrid specification

$$S\_{H,d}^\* = \{Z^\*, \mathcal{S}\_d^\*\} \tag{6.58}$$

consistent with the input-output behavior given by the interval type enclosures of measurement values

$$\left[\mathbf{U}\_{meas}, \mathbf{Y}\_{meas}\right] = \left[\left<\mathbf{u}\_{meas,k}\right>\_{k=1}^{T}, \left<\mathbf{y}\_{meas,k}\right>\_{k=1}^{T}\right] \tag{6.59}$$

and the set of switches

$$\{k\_{\tau,j}\}\_{j=1}^{n\_{switch}},\tag{6.60}$$

i.e. can the measurement data be explained by the nominal system?

To solve the problem, it is necessary to determine a mapped state signal $Q\_{meas}$. The measurement data can be segmented based on the given switches. The result can be interpreted as an unmapped state signal, as it is not known which state is active after each switch. Therefore it is necessary to determine the correct nominal subsystem for each segment. There are two important assumptions that need to hold to ensure an unambiguous mapping from nominal subsystems to measurement segments.

Assumption 6.2 (Prager-Oettli-Distinguishability of the Set of Subsystems) Any two nominal subsystems $s^{(i)\*}, s^{(l)\*} \in \mathcal{S}\_d^\*$ with $i \neq l$ are called Prager-Oettli-Distinguishable (PO-Distinguishable) with respect to specific segmented measurement data $\langle[\mathbf{U}\_{meas}^{(j)}, \mathbf{Y}\_{meas}^{(j)}]\rangle\_{j=1}^{n\_{switch}}$, if there is no segment $j \in \{1, 2, \dots, n\_{switch}\}$ that fulfills Prop. 4.1 for both subsystems $s^{(i)\*}$ and $s^{(l)\*}$.

This means that two distinct nominal subsystems given in the specification are sufficiently different with respect to the measurement data, noise assumptions and the interval enclosure. The second assumption transfers this property to the segments of the measurement data.

### Assumption 6.3 (Mappability of the Measurement)

The segmented measurement data $\langle[\mathbf{U}\_{meas}^{(j)}, \mathbf{Y}\_{meas}^{(j)}]\rangle\_{j=1}^{n\_{switch}}$ is called mappable to the set of dynamic subsystems $\mathcal{S}\_d^\*$, if there is no specified subsystem $s^{(i)\*} \in \mathcal{S}\_d^\*$ that fulfills Prop. 4.1 for any segment $[\mathbf{U}\_{meas}^{(j)}, \mathbf{Y}\_{meas}^{(j)}]$ that was generated by another dynamic system $s^{(j)}$ with $s^{(j)} \neq s^{(i)\*}$.

If Assumptions 6.2 and 6.3 hold, it is possible to determine a mapped state signal. Therefore Problem 6.2 can be solved by the following proposition:

### Proposition 6.14 (Point Real Hybrid Consistency with Given Switches)

The direct hybrid specification $S\_{H,d}^\*$, the interval type enclosures of the measurement data $[\mathbf{U}\_{meas}, \mathbf{Y}\_{meas}]$ and the set of switches $\{k\_{\tau,j}\}\_{j=1}^{n\_{switch}}$ of a verification object can be used to set up a mapped state signal $Q\_{meas}$ if the specified set of subsystems $\mathcal{S}\_d^\*$ is PO-distinguishable with respect to the measurement data according to Assumption 6.2 and the measurement data is mappable to $\mathcal{S}\_d^\*$ according to Assumption 6.3.

The availability of a mapped state signal transforms the problem to a mapped set of states based point real hybrid consistency problem. This problem can be solved using Prop. 6.13.

### Proof:

An unmapped state signal can directly be constructed from the given correct switches $\{k\_{\tau,j}\}\_{j=1}^{n\_{switch}}$ according to (6.19). As Assumptions 6.2 and 6.3 hold, individual nominal subsystems within the specification $\mathcal{S}\_d^\*$ can be distinguished from each other.

Also, the united solution set $\sum\_{\exists\exists}^{(j)}$ defined by the measurement data of each segment $[\mathbf{U}\_{meas}^{(j)}, \mathbf{Y}\_{meas}^{(j)}]$, generated by a dynamic system $s^{(j)}$, cannot be explained by any other nominal system $s^{(i)\*} \in \mathcal{S}\_d^\*$ with $s^{(j)} \neq s^{(i)\*}$. The generating subsystem can thus be determined unambiguously for each segment. The mapped nominal subsystems can be used to determine the active states $q\_{meas,k}$ with $k \in \{k\_{\tau,j}, k\_{\tau,j}+1, \dots, k\_{\tau',j}\}$ for all segments $j \in \{1, 2, \dots, n\_{switch}\}$. The resulting signal is a mapped state signal according to Def. 6.10. The given measurement data $[\mathbf{U}\_{meas}, \mathbf{Y}\_{meas}]$, together with the given set of nominal subsystems $\mathcal{S}\_d^\*$ and the extracted mapped state signal, represents the setting of Problem 6.1, which can be solved by Prop. 6.13.

Segments generated by an unspecified subsystem $\tilde{S} \notin \mathcal{S}_d^*$ cannot be mapped to a nominal subsystem $S^{(i)*}$ if $\tilde{S}$ is PO-distinguishable from all nominal subsystems $S^{(i)*} \in \mathcal{S}_d^*$ according to Assumption 6.2. If this is not the case, the measurement data generated by $\tilde{S}$ can be explained by at least one nominal subsystem $S^{(i)*} \in \mathcal{S}_d^*$ and thus it is impossible to recognize the subsystem $\tilde{S}$. Note that the definition is based on all measurement data of a segment, i.e. it is possible that partial measurement data of a segment is consistent with more than one nominal subsystem.

The method leads to a mapping algorithm that compares the dynamics of each nominal subsystem, given by the states, with every segment $j \in \{1, 2, \ldots, h\}$ of the measurement data. This comparison is done based on the united solution set, as given in Prop. 4.1. The flowchart of the algorithm is given in Fig. 6.6.
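The mapping loop can be sketched in a few lines. This is a hedged, minimal Python illustration only: `consistent` is a simplified per-sample bounded-error test standing in for the Kaucher based check of Prop. 4.1, and the names (`nominal`, `segments`, `delta`) are illustrative rather than thesis notation.

```python
import numpy as np

def consistent(theta_star, Phi, y, delta):
    """Simplified stand-in for the Prop. 4.1 consistency check: the
    nominal parameters must explain every sample of the segment up to
    the bounded error delta."""
    return bool(np.all(np.abs(y - Phi @ theta_star) <= delta))

def map_segments(segments, nominal, delta):
    """Mapping algorithm (sketch of Fig. 6.6): assign to each segment
    the unique nominal subsystem consistent with it. Returns None for
    a segment if no (or more than one) subsystem matches, i.e. if
    mappability (Assumption 6.3) is violated."""
    Q = []
    for Phi, y in segments:
        matches = [i for i, th in nominal.items()
                   if consistent(th, Phi, y, delta)]
        Q.append(matches[0] if len(matches) == 1 else None)
    return Q
```

Under the mappability assumption the list `Q` contains exactly one subsystem index per segment, which is the mapped state signal of Def. 6.10 on segment level.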

An exemplary application of point real hybrid consistency with known switches is given in Example 6.2.

Figure 6.6: Flowchart of the mapping algorithm

#### Example 6.2:

Assume the same setting as introduced in Example 6.1, without the assumption of a given measured mapped state signal. Instead, a set of switches is given by

$$k\_{\tau} \in \{1, 6, 10\}. \tag{6.61}$$

Based on the switches and the length $T = 15$ of the measurement data, the endpoints of the segments can be calculated:

$$k\_{\tau'} \in \{5, 9, 15\}. \tag{6.62}$$
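The endpoint computation generalizes directly: each segment ends one step before the next switch, and the last segment ends with the final measurement at time $T$. A small illustrative snippet (the function name is ours):

```python
def segment_endpoints(switches, T):
    """Each segment ends one step before the next switch; the final
    segment ends with the last measurement at time T."""
    return [k - 1 for k in switches[1:]] + [T]
```

Applied to the switches of (6.61) with $T = 15$, this reproduces the endpoints of (6.62).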

The nominal parameters given in Tab. 6.2 and the algorithm of Fig. 6.6 are used to determine the mapping between segments and states.

To show the correct classification of all subsystems, the algorithm is altered such that it compares all possible nominal subsystems with each segment instead of proceeding to the next segment as soon as a consistent subsystem is found.

The results are depicted in Fig. 6.7. Each subfigure shows the evaluation of both nominal subsystems $S^{(1)*}$ and $S^{(2)*}$ for one of the segments $j \in \{1, 2, 3\}$. The shaded areas mark measurement data that does not belong to the segment and thus is not taken into account for the respective verification. It can be seen that only one generating nominal subsystem can be verified for each of the three segments. This leads to an unambiguous mapping

$$Q = [1, 1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1] \tag{6.63}$$

which is the same as given in (6.40).

The matching algorithm is based on Prop. 6.1. Therefore dynamic consistency is given if it is possible to generate a mapped state signal $Q$. The discrete verification can be done as described in the previous section. The combination of both yields the hybrid verification result. Both results are equivalent to the calculations and results demonstrated in Example 6.1 and are thus not repeated here.

## 6.3 Verification of Hybrid Systems With Unknown Switching Times

In the last step even the knowledge about the switching times is dropped. Only the specification and the input and output measurement data are available. Thus the switching times as well as the respective modes need to be reconstructed from the measurement data only. Therefore an additional segmentation step is necessary in the procedure. This step aims at finding the unknown switches $k_{\tau,j}$ at which the active generating subsystem changes. The corresponding verification problem is given in Problem 6.3.

Problem 6.3 (Point Real Hybrid Consistency) Is the nominal hybrid system, specified by a direct hybrid specification

$$S\_{H,d}^\* = \{Z^\*, \mathcal{S}\_d^\*\} \tag{6.64}$$

consistent with the input-output behavior given by the interval type enclosures of measurement values

$$\left[ U\_{meas}, Y\_{meas} \right] = \left[ \left\langle u\_{meas,k} \right\rangle\_{k=1}^{T}, \left\langle y\_{meas,k} \right\rangle\_{k=1}^{T} \right] \tag{6.65}$$

without any knowledge about the state or the switching times?

The problem can be solved if it is possible to determine a mapped state signal from the input-output data only. If this signal is constructed, Problem 6.3 can again be reduced to Problem 6.1, which is solved by Prop. 6.13. However, in this setting there is no information about the active states or the switches. Therefore an additional identification and segmentation procedure is necessary. The procedure introduced in this thesis is based on previous work of the author, published in [Die13a], [Die13b] and used in [Die17]. However, while the original work aims at identifying an a priori unknown library of subsystems from the measurement data, the concept is adapted in this thesis to consider a given set of nominal subsystems. Additionally, it is extended to the interval arithmetic context of guaranteed verification.

First, switches that are detected by the segmentation and identification method are defined.

#### Definition 6.11 (Detected Switch)

The time instant $k$ at which no nominal subsystem $S^{(i)*} \in \mathcal{S}_d^*$ shows dynamic consistency according to Prop. 6.1 with the current regressor matrix $A_{meas,k}$ and the current measurement vector $B_{meas,k}$ is called detected switch $\hat{k}_{\tau,j+1} = k$.

To ensure correct segmentation, the available measurement data needs to be Prager-Oettli-Segmentable:

#### Assumption 6.4 (Prager-Oettli-Segmentability)

The measurement data $[U^{(j)}, Y^{(j)}]$ and $[U^{(j+1)}, Y^{(j+1)}]$ of two consecutive segments $j$ and $j+1$ that are generated by distinct subsystems $S^{(j)*} \neq S^{(j+1)*}$ are called Prager-Oettli-Segmentable with respect to a given set of subsystems $\mathcal{S}_d^*$, if there is no subsystem $S^{(i)*} \in \mathcal{S}_d^*$ that fulfills Prop. 6.1 for all measurement data points in the combined segment $[k_{\tau,j}, k_{\tau',j+1}]$.

Assumption 6.4 implies Prager-Oettli-Distinguishability of the set of subsystems $\mathcal{S}_d^*$ according to Assumption 6.2. Based on the segmentability assumption it is possible to extract a mapped state signal to solve Problem 6.3.

### Proposition 6.15 (Point Real Hybrid Consistency)

The direct hybrid specification $S_{H,d}^*$ and the interval type enclosures of the measurement data $[U_{meas}, Y_{meas}]$ of a verification object can be used to set up a mapped state signal $Q$ if the measurement data is PO-segmentable with respect to $S_{H,d}^*$.

The availability of a mapped state signal transforms the problem to a mapped set of states based point real hybrid consistency problem. This problem can be solved using Prop. 6.13.

### Proof:

If Prager-Oettli-Distinguishability (Assumption 6.2) and mappability of the measurement (Assumption 6.3) are fulfilled, there is only one possible consistent subsystem $S^{(i)*} \in \mathcal{S}_d^*$ at the end of a segment $\hat{k}_{\tau',j} = \hat{k}_{\tau,j+1} - 1$. This subsystem needs to represent the active state of the whole segment $[\hat{k}_{\tau,j}, \hat{k}_{\tau',j}]$.

On the other hand, it is impossible that this subsystem is consistent with the measurement data of the entire next segment generated by a different subsystem $S^{(j+1)}$. A nominal subsystem $S^{(i)*} \in \mathcal{S}_d^*$ that fulfills Prop. 6.1 for all measurement points in the entire combined segment $[k_{\tau,j}, k_{\tau',j+1}]$ will also fulfill Prop. 6.1 for any part of the combined segment. This holds especially for the parts $[k_{\tau,j}, k_{\tau',j}]$ and $[k_{\tau,j+1}, k_{\tau',j+1}]$, i.e. the same nominal subsystem is consistent with two distinct segments $[U^{(j)}, Y^{(j)}]$ and $[U^{(j+1)}, Y^{(j+1)}]$. This is only possible if $S^{(i)*} = S^{(j)} = S^{(j+1)}$, which is not allowed in the case of mappability of the measurement according to Assumption 6.3.

Thus it is possible to determine a switch $\hat{k}_{\tau,j} \geq k_{\tau,j}$ for each $j \in \{2, 3, \ldots, h\}$, i.e. the true number of segments is determined even if the detected switches do not exactly match the real ones. Based on this result it is possible to determine the respective active subsystem for all segments, whereas each estimated segment $[\hat{k}_{\tau,j}, \hat{k}_{\tau',j}]$ at least partly overlaps with the true segment $[k_{\tau,j}, k_{\tau',j}]$. This leads to a mapped state signal with correct mapping for all segments, even if the detected segment boundaries might slightly differ from the true ones.<sup>11</sup> Thus Problem 6.3 is transformed to Problem 6.1, which completes the proof.

If Prager-Oettli-Segmentability according to Assumption 6.4 does not hold, it is not possible to determine the switch. The flowchart of the algorithm implementing the introduced identification and segmentation method is given in Fig. 6.8.

<sup>11</sup> A detailed proof of this property is given in Section 6.3.1.

Figure 6.8: Flowchart of the identication and segmentation algorithm

The inner loop ('i-loop') depicted in the flow chart compares all $n = |\mathcal{S}_d^*|$ nominal subsystems $S^{(i)*} \in \mathcal{S}_d^*$ with the current segment. If a nominal subsystem is consistent, it is added to the set of consistent subsystems $\mathcal{C}_k$. The next loop ('k-loop') runs as long as the set of consistent subsystems $\mathcal{C}_k$ is non-empty. If this is the case, the current segment can be explained by at least one nominal subsystem. Therefore the segment is extended by one time step.

This new segment is again verified by the i-loop. If the set of consistent subsystems is empty, i.e. $\mathcal{C}_k = \emptyset$, none of the nominal subsystems is able to explain the current segment. However, the measurement was verified in the previous step. Therefore a detected switch<sup>12</sup> is recognized at $\hat{k}_{\tau,j+1} = k$ and the active subsystems for the segment are given by $\mathcal{C}_{k-1}$. Note that due to Assumption 6.3 only one subsystem is allowed to explain an entire segment of the measurement data. This leads to $|\mathcal{C}_{\hat{k}_{\tau',j}}| \overset{!}{=} 1$.

However, multiple consistent subsystems $|\mathcal{C}_k| \geq 1$ are possible for partial segments $k \in \{k_{\tau,j}+1, k_{\tau,j}+2, \ldots, k_{\tau',j}-1\}$.

Detecting a switch and determining the state of the finished segment ends the k-loop of the algorithm in Fig. 6.8. The measurement values belonging to the just finished segment are removed from the considered measurement data.

<sup>12</sup> The first switch of a system is defined to be $\hat{k}_{\tau,1} = 1$, according to Def. 6.7.

All counters and intermediate values are reinitialized and the next iteration of the outer loop ('j-loop') starts for the following segment.

In order to achieve the hybrid verification result, the discrete and dynamic results are combined according to Section 6.1.3. An application of the procedure is given in Example 6.3.

## Example 6.3:

Assume the same setting as introduced in Example 6.1, except that there is neither a mapped state signal nor any information about the switches.

The first iteration of the j-loop of the identification and segmentation algorithm (Fig. 6.8) is depicted in the first subplot of Fig. 6.9.

Both subsystems are considered for verification in each step of the k-loop. It is possible to verify subsystem $S^{(1)*}$ for $k \in \{2, 3, 4, 5\}$. The first result can again be calculated for $k = 2$ and the first switch is recognized at $\hat{k}_{\tau,2} = 6$. This is the first time both dynamic subsystems show inconsistency, i.e. $\mathcal{C}_6 = \emptyset$.

Note that the algorithm will break the k-loop at $k = 6$. However, this was not done in Fig. 6.9 in order to show that the verification results stay infeasible for all regarded segments in the remaining measurement time.

In the reinitialization step, the detected first segment is deleted from the measurement data. This is depicted as the shaded area in subfigures 2 and 3 of Fig. 6.9. The second iteration of the j-loop verifies subsystem $S^{(2)*}$ for $k \in \{7, 8, 9\}$ and detects the next switch at $\hat{k}_{\tau,3} = 10$. Subsystem $S^{(1)*}$ is evaluated to be infeasible for the whole available measurement. Again the infeasible results for both subsystems are given until the end of the measurement, in contrast to the genuine break of the k-loop.

The third segment shows consistency with subsystem $S^{(1)*}$ for $k \in \{11, 12, 13, 14, 15\}$, which leads to $k_{\tau',3} = T$. This is feasible with respect to Def. 6.7. The subsystem $S^{(2)*}$ is evaluated to be infeasible for segment 3.

The results can be used to set up the mapped state signal

$$Q = [1, 1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1] \tag{6.66}$$

which corresponds again with the ground truth given in (6.40). The successful mapping for all time steps implies continuous consistency as there is an unambiguous nominal subsystem mapped to each time step.

The nominal state machine $Z^*$ of Example 6.1 is used here as well. Since the mapped state signal is identical to that of Example 6.1, all values needed for the verification of the discrete part are the same as in Example 6.1. This leads to full discrete consistency between the unsegmented measurement data and the specification.

Finally, full hybrid consistency can also be concluded for this setting. This shows a successful verification of a hybrid system based only on interval type measurements of the input-output data and the nominal system $S_{H,d}^*$.

## 6.3.1 Convergence of the Identification and Segmentation Algorithm

To show convergence of the identification and segmentation algorithm it is possible to extract more information about the switches from the measurement data by adding a backward iteration. As a basic property of the identification and segmentation algorithm, the true switches are always overestimated and never underestimated. This can be shown by considering extreme values of the additive noise as follows:


Note that $\delta \to \infty$ will also lead to the violation of Prager-Oettli-Segmentability given in Assumption 6.4, as well as the violation of the full rank Assumption 3.1. In practice it is thus important to ensure a suitable $\delta$ such that all assumptions of distinguishability, segmentability and rank are met.

It is not possible to determine the precise amount of time $\hat{k}_{\tau,j} - k_{\tau,j}$ by which the switch will be overestimated. Nevertheless, there will be a gradient from instantaneous detection ($\hat{k}_{\tau,j} - k_{\tau,j} = 0$) in the case of $\delta \to 0$ to the maximum overestimation in the case of $\delta \to \infty$.

To determine the switch more precisely, the detection algorithm can be extended with an antichronological iteration. The end time of the last segment is thereby given as the time of the last measurement $k_{\tau',h} = T$ by definition. The regressor matrix is set up such that it includes the values from a variable time $k$ up to the final time $T$. If the dynamic consistency of Prop. 4.1 is fulfilled for all measurement values $[\langle u_{meas,\kappa} \rangle_{\kappa=k}^{T}, \langle y_{meas,\kappa} \rangle_{\kappa=k}^{T}]$, the segment is extended backwards in time by reducing $k \leftarrow k - 1$. If the segment extends over a switch, i.e. $k < k_{\tau,h}$, data from two distinct subsystems is included in the regressor matrix. The detection of this switch depends on the same assumptions as in the forward direction, namely Prager-Oettli-Distinguishability and Prager-Oettli-Segmentability. The switches detected backwards in time show the same overestimation behavior as the switches detected in the forward iteration.

The initial switch is guaranteed to be correct, as it is given by the first measurement data point by definition. The results of the backward and forward iterations occur chronologically alternating, see Fig. 6.10. The time between the switch detection in backward direction and the switch detection in forward direction is guaranteed to frame the true switching time. The true switching time is thus overapproximated in forward direction by $\hat{k}_{\tau,j,f}$ as well as in backward direction by $\hat{k}_{\tau,j,b}$, leading to a switch segment

$$k\_{\tau,j} \in [\hat{k}\_{\tau,j,b}, \hat{k}\_{\tau,j,f}].\tag{6.67}$$

The measurement data between the switch segments, i.e. for the time steps

$$k \in [\hat{k}\_{\tau, j, f}, \hat{k}\_{\tau, j+1, b}] \tag{6.68}$$

belong to a so-called trust segment. The trust segment is guaranteed to contain only measurement data generated by a single subsystem. Verification results calculated based on trust segments are not disturbed by measurement data generated by other dynamic systems. The results of the algorithm converge to the true values for a suitable setting of the enclosed noise and the enclosing interval width $\delta$.
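The bookkeeping that turns forward and backward detections into switch segments (6.67) and trust segments (6.68) is mechanical and can be sketched as follows. This is an illustrative helper under our own naming; `fwd` and `bwd` hold the forward/backward detections of the interior switches.

```python
def switch_and_trust_segments(fwd, bwd, T):
    """For each interior switch j, bwd[j] <= true switch <= fwd[j],
    giving a switch segment (6.67). The data between a forward detection
    and the next backward detection is a trust segment (6.68), generated
    by a single subsystem; the first trust segment starts at k = 1 and
    the last one ends at T."""
    switch_segs = list(zip(bwd, fwd))
    starts = [1] + [f for (_, f) in switch_segs]
    ends = [b for (b, _) in switch_segs] + [T]
    return switch_segs, list(zip(starts, ends))
```

For two interior switches detected forwards at 7 and 11 and backwards at 5 and 9 with $T = 15$, this yields the switch segments $[5,7]$, $[9,11]$ and the trust segments $[1,5]$, $[7,9]$, $[11,15]$.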

Figure 6.10: Example of the alternating occurrence of trust and switch segments

## 6.4 Conclusion

The concept of verification of dynamic systems based on Kaucher interval arithmetic introduced in Chapter 4 was extended to hybrid systems in this chapter. Therefore a hybrid model consisting of a discrete time, value continuous dynamic system and a discrete event state machine was formally introduced. The behavior of the dynamic part is given by parameters that are determined by the discrete event states. As long as only one discrete state is active, the dynamic verification can be done as in the nonhybrid case. If the active state changes, the procedure has to be adapted. Each segment of measurement data generated by a single active subsystem can be verified individually.

The problem becomes more complex if it is necessary to determine the segments and the active subsystems within the segments. Three different approaches to solve this problem were introduced.

In a first step, the state signal was measured and the given states were assumed to be mapped with the specification. In a second step, only the switching times were known and it was necessary to determine the respective active states. Conditions of Prager-Oettli-Distinguishability on the specification and mappability of the measurement data were introduced that ensure the theoretical possibility of determining this information.

In a third step, there was no information at all about the switches and the active states. Therefore an algorithm was developed that is able to determine both switches and states in an iterative identification and segmentation procedure. If the introduced property of Prager-Oettli-Segmentability holds, this procedure can be applied successfully.

The verification of the discrete event part can be done in the same way for all three settings. The trace given by the mapped state signal is used to set up the measured state machine, which can easily be compared with the defined state machine. Therefore the set of states and the transition function are compared with the nominal values given in the specification.

This approach was partially developed in [Hen15] and published in [Sch17a]. It was also applied to the practical examples of a battery management system [Lem15] and a hybrid braking system [Glü17], and it was presented in [Sch16], [Sch18a], [Sch19].

# 7 Extended Kaucher Based Guaranteed Verification

This chapter introduces several extensions to the Kaucher based verification framework developed in the previous chapters. Therefore the point of view is changed from finding "at least one" consistent parameter to calculating the "largest possible set" of parameters within the united solution set. This is beneficial as the calculation of the exact solution set is an NP-hard problem, as stated in Section 3.1.3. This chapter introduces a method that combines the Kaucher based method with optimization techniques to calculate the maximum inner approximation of the feasible set.

This approximation is done using different shapes given by the combination of the objective function and the constraints. Three different shapes are presented and analyzed. The basic approach uses orthogonal enclosure, leading to classic interval type borders.

In a second step, this approach is extended to zonotopic sets to exploit specific properties of the measurement data. It is widely known in the interval community that using zonotopic sets avoids large underapproximations that arise from axis-parallel orthogonal enclosure [Jau01], [Asa06], [Pui06], [Alt08], [Ing09], [Mai16], [Roe16], [Wan17]. Finally, all restrictions on the shape of the resulting set are dropped and a general polytope is accepted as result. Although this is the most general shape and yields the largest inner enclosures, it is impossible to describe the result in terms of interval values, thus limiting the usability of this approach.

The last section of the chapter introduces an alternative application of the Kaucher based verification method in an online setting. Therefore the verification procedure is extended to allow an iteratively increasing measurement set. It is thus necessary to provide repeated approximations of the solution set as well as repeated verdicts to evaluate the consistency. This resembles the situation of an ongoing diagnosis process of a running application.

## 7.1 Solution Set Approximations

Instead of solving the problem whether a particular (nominal) parameter vector is part of the united solution set, the question is now to find the (whole) feasible set given by the measurement data. Therefore the constraints $c_{\mathcal{M}}$ of the feasibility problem given by the measurement data - as defined in Def. 5.6 - are regarded.<sup>13</sup> The feasible set is based only on the measurement data and thus no information about the consistency of the VO is included. The feasible set is defined as:

#### Definition 7.1 (Feasible Set)

The feasible set $\mathcal{F}$ is given by

$$\mathcal{F} = \left\{ \Theta \in \mathbb{R}^{n\_a^\* + n\_c^\*} : c\_{\mathcal{M}}^{(i)}(\Theta) \le 0, \,\forall i \in \{1, 2, \dots, 2\left(T - \max\left(n\_a^\*, n\_c^\*\right)\right)\} \right\}, \tag{7.1}$$

i.e. the set of all parameters Θ that fulfill the constraints given by the measurement data as defined in Def. 5.6.

Note that the genuine feasible set here is given according to Def. 5.6, which represents the united solution set $\Sigma_{\exists\exists}$. The feasible set is thus not necessarily connected and not necessarily constrained by borders parallel to the axes (see Section 3.1.2).

The feasible set can also be dened by a set of vertexes:

### Definition 7.2 (Vertex Based Feasible Set)

The feasible set $\mathcal{F}$ is constrained by the convex hull (see [Bro08, p. 663]) given by the set of vertexes $\mathcal{V} = \{V_0, V_1, \ldots, V_{|\mathcal{V}|-1}\}$:

$$\mathcal{F}\left(\mathcal{V}\right) = \left\{ \Theta \in \mathbb{R}^{n\_a^\* + n\_c^\*} : \Theta = \sum\_{i=0}^{|\mathcal{V}|-1} \alpha\_i V\_i \, \middle| \, \left( \forall i: \alpha\_i \ge 0 \right) \wedge \sum\_{i=0}^{|\mathcal{V}|-1} \alpha\_i = 1 \right\}.\tag{7.2}$$

With this definition it is possible to transfer the setting into an optimization problem based on the vertexes of the convex hull. The shape of the resulting approximation of the feasible set $\mathcal{F}$ becomes a design parameter that is reflected by the objective function and the constraints. The solution of the optimization is thus not necessarily of the same shape as the genuine feasible set.
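The convex-combination construction in (7.2) is easy to make concrete. The snippet below is a minimal illustration under our own naming: it builds a point of $\mathcal{F}(\mathcal{V})$ from given vertexes and weights, enforcing the two conditions of (7.2).

```python
import numpy as np

def convex_combination(vertexes, alphas):
    """Forms a point of F(V) as in (7.2): a convex combination of the
    vertexes with nonnegative weights alpha_i that sum to one."""
    a = np.asarray(alphas, dtype=float)
    V = np.asarray(vertexes, dtype=float)
    if np.any(a < 0) or not np.isclose(a.sum(), 1.0):
        raise ValueError("weights must be nonnegative and sum to 1")
    return a @ V  # sum_i alpha_i * V_i
```

For the four corners of the unit square with equal weights, the result is the center point $(0.5, 0.5)$.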

The approximation of the feasible set can be done in different ways. This thesis uses three different approaches: hyperrectangular approximation, zonotopic approximation and polytopic approximation. All three approaches are introduced in the next sections.

<sup>13</sup> The constraints of the nominal set defined in Def. 5.5 are not necessary for the calculation of the feasible set.

## 7.1.1 Hyperrectangular Solution Set Approximation

In the hyperrectangular case, the objective function is set up such that the optimization yields the largest hyperrectangular area within the united solution set. This shape represents an interval type result, constrained parallel to the coordinate axes.

The solution set is determined using an optimization setting as given in Prop. 7.1.

### Proposition 7.1 (Optimization Based Hyperrectangular Solution Set)

The hyperrectangular approximation of the feasible set $\mathcal{F}^{\Box}$ is given by the set of vertexes $\mathcal{V}^{\Box}$. The set of vertexes $\mathcal{V}^{\Box}$ is calculated based on the measurement data $[U_{meas}, Y_{meas}]$ given for the discrete sampling points $k \in \{1, 2, \ldots, T\}$ of a VO. It is defined as the solution of the optimization problem

$$\mathcal{V}^{\Box} = \operatorname\*{argmax}\_{\Theta} \left( J^{\Box} \left( \Theta \right) \right) \tag{7.3}$$

with the objective function

$$J^{\square} \left( \Theta \right) = \prod\_{i=1}^{n\_a^\* + n\_c^\*} \left( \overline{\theta}^{(i)} - \underline{\theta}^{(i)} \right) \tag{7.4}$$

that is subject to the constraints

$$c^{(i)}\_{\mathcal{P}}(\Theta) := \underline{\theta}^{(i)} - \overline{\theta}^{(i)} \le 0 \tag{7.5}$$

$$c\_{\mathcal{M}}(\Theta) \qquad \qquad \qquad \le 0 \tag{7.6}$$

for $i \in \{1, 2, \ldots, n_a^* + n_c^*\}$. Thereby $c_{\mathcal{P}}$ constrains the solution to be proper and $c_{\mathcal{M}}$ is given by the measurement data according to Def. 5.6.

#### Proof:

The result of the optimization problem (7.3) is given by the set of vertexes $\mathcal{V}^{\Box}$ that defines the hyperrectangular approximation $\mathcal{F}^{\Box}$. The hyperrectangular approximation is proper in all dimensions due to (7.5). The united solution set is given by the measurement data and consists of all parameters that fulfill (7.6) according to Def. 5.6. As the optimization problem (7.3) is constrained by (7.6), all elements of the resulting set of vertexes are part of the united solution set $\Sigma_{\exists\exists}$ and thus

$$\mathcal{F}^{\Box} = \mathcal{F}\left(\mathcal{V}^{\Box}\right) \subseteq \mathcal{F} = \sum\_{\exists \exists \exists} \, . \tag{7.7}$$

In general, an $n$-dimensional parameter space leads to $|\mathcal{V}^{\Box}| = 2^n$ different vertexes that determine the solution set. In the hyperrectangular case, the set of vertexes $\mathcal{V}^{\Box}$ is directly given by the interval type parameter vector, as illustrated in Example 7.1.

### Example 7.1:

For an $n = 2$ dimensional interval type parameter vector

$$\boldsymbol{\Theta} = \begin{bmatrix} \boldsymbol{\theta}^{(1)}, \ \boldsymbol{\theta}^{(2)} \end{bmatrix}^T \tag{7.8}$$

the resulting rectangle is given by the set of $|\mathcal{V}^{\Box}| = 2^2 = 4$ vertexes:

$$\mathcal{V}^{\Box} = \left\{ \left[ \underline{\theta}^{(1)}, \underline{\theta}^{(2)} \right], \left[ \overline{\theta}^{(1)}, \underline{\theta}^{(2)} \right], \left[ \overline{\theta}^{(1)}, \overline{\theta}^{(2)} \right], \left[ \underline{\theta}^{(1)}, \overline{\theta}^{(2)} \right] \right\}. \tag{7.9}$$
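Enumerating the $2^n$ vertexes of a hyperrectangle from its interval bounds is a one-liner. The helper below is an illustrative sketch (names are ours), reproducing the pattern of (7.9):

```python
from itertools import product

def box_vertices(lower, upper):
    """All 2^n vertexes of the hyperrectangle spanned by an interval
    type parameter vector with per-dimension bounds lower/upper,
    as in (7.9)."""
    return [list(v) for v in product(*zip(lower, upper))]
```

For the box $[0,1] \times [0,2]$ this yields the four corners $(0,0)$, $(0,2)$, $(1,0)$ and $(1,2)$.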

Without the constraint set given in (7.5), the setting can lead to improper solutions. These solutions cannot be interpreted and are therefore useless with respect to a real system, because no width is defined for improper intervals. Nevertheless, the objective function is defined on infimum and supremum of the intervals and can thus be evaluated even for improper intervals. If an even number of parameters is improper, the area multiplication in (7.4) yields a positive value. This value can be increased to infinity in the "improper direction", leading to impossible results. Such improper solutions are not suitable for the verification setting, as the measurement was obtained from a real system with real generating parameters. An illustration of the rectangular inner enclosure is given in Example 7.2.

#### Example 7.2:

Assume the united solution set of the measurement data to be given as the blue shape in Fig. 7.1. The solution is in general not unique because there might be other possible solutions of the same size. Two possible area-maximal inner enclosures are given by the green rectangles. Note that this example shows the basic concept of the enclosure, and the denoted values are thus chosen arbitrarily.

## 7.1.2 Zonotopic Solution Set Approximation

In this approach, the united solution set is constrained by a set of hyperstripes in the parameter space, generated by the measurement data. The orientation of these hyperstripes is similar for consistent measurement data. This leads to a united solution set showing a zonotopic shape which is depicted as the shaded area in Fig. 7.2.

The formal description of the hyperstripes is done based on [Ble11] such that each line of the regressor matrix defines a set of feasible parameters $\mathcal{F}^{(k)}$. Thereby $\mathcal{F}^{(k)}$ denotes the feasible set at the specific time step $k$. The feasible hyperstripes are given by

$$\mathcal{F}^{(k)} = \{ \Theta \in \mathbb{R}^{n\_a^\* + n\_c^\*} : -\delta^a \le B\_{meas,k} - A\_{meas,k}\Theta \le \delta^a \}\tag{7.10}$$

with $\delta^a$ being the width of the interval enclosure, i.e. the maximum absolute sensor fault. The hyperstripe can be written in normalized form as

$$\mathcal{F}^{(k)} = \left\{ \Theta \in \mathbb{R}^{n\_a^\* + n\_c^\*} : \left| \frac{B\_{meas,k}}{\delta^a} - \frac{A\_{meas,k}}{\delta^a} \Theta \right| \le 1 \right\}.\tag{7.11}$$

Figure 7.2: Constraints showing the general zonotopic shape (shaded area) of the united solution set in the case of two parameters

The feasible set $\mathcal{F}_k$ that includes all measurement information up to time step $k$ can be determined recursively by intersecting the hyperstripes $\mathcal{F}^{(k)}$:

$$\mathcal{F}\_k = \mathcal{F}\_{k-1} \cap \mathcal{F}^{(k)}.\tag{7.12}$$

Therefore the resulting feasible set for all available measurement data is given by $\mathcal{F}_T$. This set can be approximated by a zonotopic set with a very good fit. Definition 7.3 describes such a general zonotopic set as given in [Ble11].
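The recursion (7.12) is simply a running intersection of the stripes (7.11). A minimal sketch, applied to a finite cloud of candidate parameter points rather than to an exact set representation (all names are illustrative):

```python
import numpy as np

def stripe_mask(points, a_k, b_k, delta):
    """Normalized hyperstripe test (7.11): |(b_k - a_k @ theta)/delta| <= 1."""
    return np.abs((b_k - points @ a_k) / delta) <= 1.0

def feasible_points(points, A, b, delta):
    """Recursive intersection (7.12) over all measurement rows:
    F_k = F_{k-1} ∩ F^(k), evaluated on the candidate points."""
    mask = np.ones(len(points), dtype=bool)
    for a_k, b_k in zip(A, b):
        mask &= stripe_mask(points, a_k, b_k, delta)
    return points[mask]
```

With two axis-aligned stripes the surviving points are exactly those in the box the stripes cut out, illustrating why the intersection of many stripes takes a zonotopic shape.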

#### Definition 7.3 (Zonotopic Set)

A zonotopic set is constrained by the convex hull of the set of vertexes $\mathcal{V}^{\diamond} = \{V_0, V_1, \ldots, V_{|\mathcal{V}^{\diamond}|-1}\}$. The calculation of the vertexes is done by

$$\mathcal{V}^{\diamond} = P^0 \oplus H^0 \mathbf{K}^V = \left\{ P^0 + H^0 \mathbf{z} : \mathbf{z} \in \mathbf{K}^V \right\} \tag{7.13}$$

with the center of the zonotope $P^0 \in \mathbb{R}^{(n_a^* + n_c^*) \times 1}$, the radius matrix $H^0 \in \mathbb{R}^{(n_a^* + n_c^*) \times m}$ and a unitary box $\mathbf{K}^V$ composed of an arbitrary number of unitary interval vectors $[-1, 1]$.

The ⊕-operator denotes the Minkowski sum [Ber08, p. 291] that is used to calculate the vertexes of the zonotope. This means that the center $P^0$ is added to the given unitary vectors (i.e. corners), scaled by the respective radius values. The adaption to the zonotopic approximation of the feasible set follows Def. 7.2, leading to the notation $\mathcal{F}^{\diamond}$. The respective optimization problem is set up such that a zonotopic approximation $\mathcal{F}^{\diamond} \subseteq \Sigma_{\exists\exists}$ of the united solution set is achieved.

#### Proposition 7.2 (Optimization Based Zonotopic Solution Set)

The zonotopic approximation of the feasible set $\mathcal{F}^{\diamond}$ is given by the set of vertexes $\mathcal{V}^{\diamond}$. The set of vertexes $\mathcal{V}^{\diamond}$ is calculated based on the measurement data $[U_{meas}, Y_{meas}]$ given for the discrete sampling points $k \in \{1, 2, \ldots, T\}$ of a VO. It can be computed based on the optimal scaling parameter

$$\alpha^{\diamond} = \underset{\alpha}{\text{argmax}} \left( J^{\diamond} \left( \alpha \right) \right), \tag{7.14}$$

the initial center $P^0$ and the radius matrix $H^0$ given by the outer enclosure of the measurement data. The vertexes of the zonotope $\mathcal{V}^{\diamond}$ are calculated by

$$\mathcal{V}^{\diamond} = P^0 \oplus \alpha^{\diamond} H^0 \mathbf{K}^V = \left\{ P^0 + \alpha^{\diamond} H^0 \mathbf{z} : \mathbf{z} \in \mathbf{K}^V \right\}.\tag{7.15}$$

The optimization problem consists of the objective function

$$J^{\diamond}(\alpha) = \alpha \tag{7.16}$$

that is subject to the constraints

$$c\_{\mathcal{M}}(\mathcal{V}^{\diamond}) \le 0 \tag{7.17}$$

given by the measurement data according to Def. 5.6.

#### Proof:

The result of the optimization problem (7.14) is given by the set of vertexes $\mathcal{V}^{\diamond}$ that defines the zonotopic approximation $\mathcal{F}^{\diamond}$. The zonotopic approximation is proper in all dimensions as it is based on an initial outer approximation calculated using classic interval arithmetic and subsequently scaled using a point real parameter. The united solution set is given by the measurement data and consists of all parameters that fulfill (7.17) according to Def. 5.6. As the optimization problem (7.14) is constrained by (7.17), all elements of the resulting set of vertexes are part of the united solution set $\Sigma_{\exists\exists}$ and thus

$$\mathcal{F}^{\diamond} = \mathcal{F}\left(\mathcal{V}^{\diamond}\right) \subseteq \mathcal{F} = \sum\_{\exists \exists \exists} \ . \tag{7.18}$$

This method has its roots in [Ble11] and [Sch17c] and was developed in [Sch18b]. The initial parameters of the optimization are given by the center $P^0$ and the radius matrix $H^0$, which are determined by calculating the outer enclosure as in [Ble11], given in (7.10)-(7.12), extended to Kaucher arithmetic notation.

The initial zonotope to calculate this outer enclosure needs to be chosen suitably. One possibility is to use the nominal region $\Theta^* = [\Theta^* - \Theta^*_{\Delta}, \Theta^* + \Theta^*_{\Delta}]$, expressed as $P^0 = \Theta^*$ and $H^0 = \Theta^*_{\Delta}$. It is also possible to calculate the point real central solution $\Theta_c$ using the center matrices $A_{meas,c}$ and $B_{meas,c}$. The initial zonotope is then given by $P^0 = \Theta_c$ and $H^0 = \epsilon$ with an arbitrarily small value $\epsilon > 0$.

Each measurement interval is iteratively interpreted as a hyperstripe containing the possible parameters. The intersection between the hyperstripe and the zonotope is calculated and the common region is used to calculate the new radius matrix. This procedure leads to a zonotopic outer enclosure of the feasible parameter set. Due to the repeated calculation of outer enclosures, the area of the zonotope might grow with the considered measurement data.

Starting from the final outer enclosure that frames the intersection of all available measurement data, the scaling factor is decreased until all vertexes of the zonotope are part of the united solution. This can be checked using Prop. 4.1.

The center point of the zonotope is not moved by the shrinking procedure. The idea of maximum possible parameter variability within the zonotope is realized by the objective function (7.16), which maximizes the scaling factor.
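The shrinking procedure can be sketched in a few lines. The following fragment is a simplified 2-D illustration only, not the optimizer of this thesis: the united solution is assumed to be given by scalar interval constraints φΘ ∈ [y], the zonotope by an assumed center and radius matrix, and the largest admissible scaling factor is found by bisection over the vertex feasibility check in the spirit of Prop. 4.1.

```python
import itertools
import numpy as np

def zonotope_vertexes(c, G, lam):
    """Vertexes of the scaled zonotope c + lam * G * [-1, 1]^m (all sign combinations)."""
    signs = itertools.product([-1.0, 1.0], repeat=G.shape[1])
    return [c + lam * G @ np.array(s) for s in signs]

def in_united_solution(theta, data):
    """Check phi . theta in [y_lo, y_hi] for every measurement (cf. Prop. 4.1)."""
    return all(y_lo - 1e-12 <= phi @ theta <= y_hi + 1e-12 for phi, y_lo, y_hi in data)

def max_scaling(c, G, data, iters=60):
    """Largest lam in [0, 1] such that all vertexes of the scaled zonotope are feasible."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if all(in_united_solution(v, data) for v in zonotope_vertexes(c, G, mid)):
            lo = mid          # feasible: try a larger zonotope
        else:
            hi = mid          # infeasible: shrink further
    return lo

# assumed toy united solution: 1 <= theta1 + theta2 <= 3 and 0 <= theta1 - theta2 <= 1
data = [(np.array([1.0, 1.0]), 1.0, 3.0), (np.array([1.0, -1.0]), 0.0, 1.0)]
c = np.array([1.25, 0.75])               # assumed center from an outer enclosure
G = np.array([[1.0, 0.0], [0.0, 1.0]])   # assumed initial radius matrix
lam = max_scaling(c, G, data)
```

All names and numbers here are illustrative assumptions; the thesis formulates the same idea as the constrained optimization problem (7.14)-(7.17).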

Additional assumptions are placed on the optimization problem. The first assumption is due to the fact that intervals containing zero can be interpreted erroneously as inverse elements and thus cancel the influence of some parameters. The second assumption is necessary to prevent increasing intervals that may arise even when Kaucher interval arithmetic is used.

A sketch of a zonotopic inner approximation is given in Example 7.3.

### Example 7.3:

Assume the united solution of the measurement data to be given as the blue shape in Fig. 7.3. The center of the zonotope is depicted by the green circle. It is given by the center of the outer enclosure that is used as initial zonotope. The algorithm does not change the center, leading to the area maximal inner enclosure given by the green zonotope.

Note that there might be solutions of equal or larger size for a different zonotope center. However, the choice of an optimal center is not included in the current version of the algorithm. The denoted values are chosen arbitrarily in this example.

Figure 7.3: Exemplary area maximal zonotopic approximation of the united solution set

## 7.1.3 Polytopic Solution Set Approximation

The most general enclosure is given by the shape of a convex polytope. However, in this case the solution is represented by a list of points instead of a mathematical description such as intervals (in the hyperrectangular case) or center and radius matrix (in the zonotopic case). The only conditions on a valid polytopic enclosure are that the points of the list are part of the united solution set Σ<sub>∃∃</sub> according to Prop. 4.1 and that the resulting polytope is convex. The computational effort of the polytopic approximation is in the same order of magnitude as that of the hyperrectangular approximation. The respective solution of the optimization problem is given in Prop. 7.3.

#### Proposition 7.3 (Optimization Based Polytopic Solution Set)

The polytopic approximation of the feasible set ℱ<sup>⊙</sup> is given by the set of vertexes 𝒱<sup>⊙</sup>. The set of vertexes 𝒱<sup>⊙</sup> is calculated based on the measurement data [U, Y] given for the discrete sampling points k ∈ {1, 2, . . . , T} of a VO. It is defined as the solution of the optimization problem

$$\mathcal{V}^{\odot} = \operatorname\*{argmax}\_{\Theta^{\odot}} \left( J^{\odot} \left( \Theta^{\odot} \right) \right) \tag{7.19}$$

with the objective function

$$J^{\odot} \left( \Theta^{\odot} \right) = \operatorname{area} \left( \Theta^{\odot} \right). \tag{7.20}$$

The function area (·) calculates the hypervolume of a polytope given by a list of points Θ<sup>⊙</sup>. All points of the list, i.e. vertexes of the set ℱ<sup>⊙</sup>, are subject to the constraints

$$c\_{\mathcal{M}}(\Theta^{\odot}) \le 0 \tag{7.21}$$

with c<sub>ℳ</sub> given by the measurement data according to Def. 5.6.

#### Proof:

The result of the optimization problem (7.19) is given by the set of vertexes 𝒱<sup>⊙</sup> that defines the polytopic approximation ℱ<sup>⊙</sup>. The polytopic approximation is proper in all dimensions as it is the convex hull of 𝒱<sup>⊙</sup>. The united solution set is given by the measurement data and consists of all parameters that fulfill (7.21) according to Def. 5.6. As the optimization problem (7.19) is constrained by (7.21), all elements of the resulting set of vertexes are part of the united solution set Σ<sub>∃∃</sub> and thus

$$\mathcal{F}^{\odot} = \mathcal{F}\left(\mathcal{V}^{\odot}\right) \subseteq \mathcal{F} = \Sigma\_{\exists \exists} \,. \tag{7.22}$$

The choice of the vertexes of the initial list is of minor importance. Possible choices are the vertexes of the nominal set, the vertexes of a zonotopic outer enclosure, or the vertexes given by the central solution disturbed by an arbitrarily small parameter ε > 0.
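In two dimensions, the two ingredients of (7.19)-(7.21) — the hypervolume objective and the per-vertex feasibility constraint — can be sketched as follows. This is an illustrative fragment with an assumed toy united solution set and an assumed candidate vertex list, not the full optimization.

```python
import numpy as np

def shoelace_area(vertexes):
    """Area of a 2-D polygon whose vertexes are given in counter-clockwise order
    (2-D stand-in for the hypervolume objective area(.) in (7.20))."""
    x, y = np.asarray(vertexes).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(np.roll(x, -1), y))

def all_feasible(vertexes, data):
    """Every vertex must satisfy phi . theta in [y_lo, y_hi] (cf. Prop. 4.1),
    mirroring the constraint (7.21)."""
    return all(y_lo <= phi @ v <= y_hi for v in vertexes for phi, y_lo, y_hi in data)

# assumed toy united solution: theta1 in [0, 2] and theta2 in [0, 2]
data = [(np.array([1.0, 0.0]), 0.0, 2.0), (np.array([0.0, 1.0]), 0.0, 2.0)]
candidate = [np.array([0.0, 0.0]), np.array([2.0, 0.0]),
             np.array([2.0, 2.0]), np.array([0.0, 2.0])]
ok = all_feasible(candidate, data)
area = shoelace_area(candidate)
```

An optimizer would move the candidate points to maximize `area` while keeping `all_feasible` true; the convexity of the resulting point list must be ensured separately.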

### Example 7.4:

Assume the united solution of the measurement data to be given as the blue shape in Fig. 7.4. A possible inner enclosure is given by the green polytope.

Note that this solution is not unique; there might be other possible solutions of the same size. Again, this example shows the basic concept of the enclosure and thus the denoted values are chosen arbitrarily.

Figure 7.4: Exemplary polytopic inner approximation of the united solution set

## 7.2 Kaucher Based Diagnosis

The consistency for interval type systems was introduced in Chapter 5. Prop. 5.4 focuses on basic consistency and includes the main result that forms the foundation of the considerations in this chapter. All other assumptions and definitions are assumed to be fulfilled and applicable as well.

In case the calculation of the verification method can be performed fast enough with respect to the regarded dynamic system, the method can be applied in an online setting to tackle the diagnosis problem. In the resulting diagnosis setting, the interpretation of the problem includes the temporary feasible set ℱ that is approximated using one of the three approximation shapes introduced in Section 7.1. The intersection between the temporary feasible set ℱ and the nominal set forms an inner enclosure of the consistent set for the whole measurement time according to Sec. 5.2.

### Proposition 7.4 (Approximation Based Basic Consistency)

The input-output behavior given by all available interval type enclosures of measurement values

$$\begin{bmatrix} \mathbf{U}\_{meas}, \ \mathbf{Y}\_{meas} \end{bmatrix} = \begin{bmatrix} \left< \mathbf{u}\_{meas,k} \right>\_{k=1}^{T}, \ \left< \mathbf{y}\_{meas,k} \right>\_{k=1}^{T} \end{bmatrix} \tag{7.23}$$

is basic consistent with the nominal system specified by an interval type specification

$$S\_i^\* = \left\{ \Theta^\*, n\_a^\*, n\_c^\*, U\_{init}^\*, Y\_{init}^\* \right\}, \tag{7.24}$$

if the intersection between nominal set and temporary feasible set ℱ is nonempty, i.e.

$$\Theta^\* \cap \mathcal{F} \neq \emptyset \,. \tag{7.25}$$

### Proof:

The nominal set consists of all parameters within the specification Θ\*. Basic consistency according to Prop. 5.4 is given if there is at least one parameter Θ ∈ Θ\* within the specified parameter set that is able to explain the measurement data.

According to Def. 7.1, all parameters of the feasible set ℱ are able to explain the measurement data for k ∈ {1, 2, . . . , T}.

The intersection between the nominal set and the feasible set ℱ contains parameters that are both part of the nominal set and part of the feasible set. If this intersection is nonempty, there is at least one parameter that is consistent according to Prop. 4.1.
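For hyperrectangular approximations, the nonemptiness test (7.25) reduces to a per-dimension interval overlap check. A minimal sketch — the nominal interval [0.93, 0.98] is taken from the single-tank example in Chapter 8, while the temporary feasible interval is assumed for illustration:

```python
def boxes_intersect(box_a, box_b):
    """Nonempty intersection of two axis-aligned parameter boxes:
    the intervals must overlap in every dimension."""
    return all(max(lo_a, lo_b) <= min(hi_a, hi_b)
               for (lo_a, hi_a), (lo_b, hi_b) in zip(box_a, box_b))

nominal_set = [(0.93, 0.98)]    # nominal parameter interval (cf. Section 8.1)
feasible_set = [(0.95, 1.01)]   # assumed temporary feasible interval
basic_consistent = boxes_intersect(nominal_set, feasible_set)
```

For zonotopic or polytopic approximations the intersection test is geometrically more involved, but the interpretation is the same.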

It is in general not possible for the method to yield full consistency in the diagnosis setting. However, the approximation methods described in the previous section can be used to verify dynamic systems given by an interval type specification S<sub>i</sub>\*.<sup>14</sup>

In the diagnosis setting, an approximation of the feasible set is used to calculate an inner enclosure of the consistent set. The verification result is guaranteed in the sense of this thesis as it is based on Kaucher arithmetic. The definition of consistency uses the united solution set as given in Def. 3.1. The choice of vertex points used during the verification is based on the introduced optimization procedure, hence ensuring an effective coverage of the feasible area given by the measurement data. The shape of the calculated solution is determined directly by the setup of the objective function and the additional constraints. Furthermore, the calculated inner approximation aims at finding an area maximal representation of the united solution set in terms of the defined approximation shape. Thus the feasible set represents the maximum possible parameter variability in the given setting that is guaranteed to be able to explain the measurement data.

<sup>14</sup> This method is also applicable to a point real specification. Nevertheless, a point real specification can be verified directly by Prop. 4.1 and does not need the optimization based extensions in order to approximate the feasible set.

It is assumed that the diagnosis algorithm is running in parallel with the VO. The diagnosis system is supplied with a new measurement value in each sampling cycle which is used to calculate a temporary result. This leads to the diagnosis algorithm as given in Fig. 7.5.

Figure 7.5: Diagnosis algorithm
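For a scalar parameter, the per-cycle structure of the diagnosis algorithm can be sketched as follows. This simplified fragment (assumed data, plain interval division instead of the Kaucher machinery and the shaped approximations) intersects the parameter interval implied by each new measurement with the current temporary feasible interval and reports basic consistency against the nominal interval in every cycle:

```python
def parameter_interval(y_lo, y_hi, phi):
    """Parameter interval implied by one scalar measurement phi * theta in [y_lo, y_hi],
    assuming phi > 0."""
    return (y_lo / phi, y_hi / phi)

def diagnose(stream, nominal, init=(-1e9, 1e9)):
    """Per-sample diagnosis: shrink the temporary feasible interval with every new
    measurement and report the temporary basic-consistency verdict (cf. Fig. 7.5)."""
    lo, hi = init
    verdicts = []
    for phi, y_lo, y_hi in stream:
        p_lo, p_hi = parameter_interval(y_lo, y_hi, phi)
        lo, hi = max(lo, p_lo), min(hi, p_hi)        # temporary feasible interval
        n_lo, n_hi = nominal
        verdicts.append(max(lo, n_lo) <= min(hi, n_hi))  # nonempty intersection?
    return verdicts

# assumed measurement stream (phi, y_lo, y_hi); nominal interval from Section 8.1
stream = [(1.0, 0.90, 1.00), (1.0, 0.94, 1.05)]
verdicts = diagnose(stream, nominal=(0.93, 0.98))
```

In the thesis the temporary feasible set is a hyperrectangle, zonotope, or polytope rather than a single interval, but the cycle structure — update, intersect, verdict — is the same.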

A VO that is basic consistent at the final time T, considering all measurement data with k < T according to Prop. 7.4, is assumed to be also basic consistent for any intermediate time T̃ < T.

If the algorithm yields basic consistency between the measurement data and the specification, the verdict is guaranteed to be correct. Therefore no hidden faults, i.e. no type II errors, are possible.

If the algorithm yields inconsistency, the verdict might be erroneous, i.e. a false alarm. This is due to the used inner approximation of the feasible set, which does not necessarily cover all feasible parameters.

Three exemplary settings and the resulting solution sets are depicted in Fig. 7.6. The black lines depict the constraints given by the measurement data that frame the united solution. The nominal parameters are given by the black dashed rectangle.

Figure 7.6: Exemplary results for the three different approximation shapes

The first plot shows an inner approximation using a rectangular shape. It can be seen that the location and shape of the rectangle are not unique and that other rectangles of the same size might be possible within the united solution. The second picture shows the approximation by a zonotope. The general fit of the zonotope is better as it can follow the contour of the constraints to some extent. The third figure shows the approximation by a polytope. This setting shows the best fit, even though the resulting shape is given by a list of four points instead of an algebraic description. All three examples show an intersection between the nominal set and the approximation of the united solution set, given by the shaded areas. The regarded example setting thus shows a situation with guaranteed basic consistency between measurement and specification.

## 7.2.1 The Center Misplacement Effect

The zonotopic approximation leads to the best tradeoff between accuracy and ease of description. This shape is chosen for the further explanations of the diagnosis approach. As already mentioned, false alarms are possible for all regarded shapes due to the usage of an inner approximation of the feasible set.

In the special case of a zonotopic approximation, false alarms can also result from the so called Center MisPlacement effect (CMP). CMP denotes the effect of disregarding feasible results due to a bad center and shape of the zonotope.

An exemplary situation showing the CMP effect is given in Fig. 7.7.

Figure 7.7: Center misplacement eect

The blue lines depict the constraints given by the measurement data. The center of the zonotope is given by the green circle within the green zonotope. Center and shape of the zonotope were calculated from an outer approximation of the solution set as in [Ble11]. Afterwards the scaling parameter was adapted such that all vertexes of the zonotope are within the united solution of the problem, given by the shaded area. Temporary feasible vertexes of the zonotope that were checked during this optimization procedure are given by green crosses.<sup>15</sup> Due to the location of the center and the shape of the zonotope, the scaling factor needs to be very small to ensure that the vertexes v<sub>2</sub> and v<sub>3</sub> stay within the united solution set.

The resulting zonotope does not intersect with the nominal set, whose lower right corner is depicted by the dashed black square in the left of the figure.

Even though the final verdict is a rejection, there are consistent intermediate vertexes (green crosses within the nominal set). They are rejected because the respective temporary zonotope was rejected, as not all of its vertexes were part of the united solution for this value of the scaling parameter.

Certain measures can be taken to avoid CMP. The most straightforward one is to move the center of the zonotope or to change its shape. However, in general it is not trivial to choose a suitable change in center and shape algorithmically.

<sup>15</sup> Temporary vertexes of the zonotope that were checked during the optimization procedure and found to be infeasible are marked as red circles.

As the method is also designed for higher dimensions and should work as automatically and autonomously as possible, it is neither possible nor suitable to visualize the setting and request a human operator to adapt the center or the shape.

An algorithmic workaround is given by the use of intermediate vertexes that are checked during the optimization. The optimization can be stopped as soon as at least one feasible intermediate vertex is detected, i.e. if a green cross is evaluated. Nevertheless, this will lead to point-wise results instead of a feasible set of a given shape. An effective way to avoid CMP is provided by the inclusion of the nominal set in the constraints of the optimization problem. However, this will also pose higher restrictions on the feasible set.

## 7.3 Conclusion

This chapter introduced extensions to the Kaucher based verification method developed in the previous chapters.

The first extension is given by calculating the largest inner approximation of the united solution set. This was done using an optimization setting. The precise setup of the objective function and the constraints determines the resulting shape of the so called feasible set. The feasible set was approximated using three different geometric shapes (hyperrectangle, zonotope, polytope). The resulting set is an inner approximation of the united solution set, meaning that there can be several equivalent approximations within the united solution set.

The second extension is given by the application of the verification method in a diagnosis setting. The diagnosis algorithm was introduced in an iterative setting, calculating temporary results for each sampling cycle. Due to the inner approximation used, the results can show a false alarm. A vivid cause of a false alarm is given by the center misplacement effect, which can occur in case of a zonotopic approximation. The applicability of the Kaucher based diagnosis method depends on the specific settings of the regarded system. Therefore it is necessary to evaluate each system individually before applying Kaucher based diagnosis.

# 8 Application and Results

This chapter presents the application of the developed methods to tank systems with a varying number of tanks and an adjustable set of connections. The settings are analyzed in simulation and practice.

Tank systems form a class of widespread theoretic control applications. The basic setting is usually given by one or more tanks with a nominal outflow and a controllable inflow. The goal is to adjust or maintain a nominal height of the fluid in a tank. The process can be disturbed by additional leakages or inflows or by congestions in the inflow or outflow pipes. Depending on the specific setup, cross flows between tanks are possible. Those cross flows are in general dependent on the current filling level of the concerned tanks. All measurement data is obtained using a real three-tank process available at the Institute of Control Systems (IRS) at the Karlsruhe Institute of Technology (KIT). A picture of the lab setting is shown in Fig. 8.1. The algorithmic calculations are done on a Lenovo ThinkPad T460s powered by an Intel® Core™ i7-6600 CPU using 12GB main memory. The implementation was done in Matlab© 2012b.

Figure 8.1: Three-tank process lab setting at the Institute of Control Systems (IRS)

The following subsections set up the dynamic models and introduce the properties and possibilities of the different scenarios. First, the methods of Chapter 5 "Guaranteed Verification of Interval Type Systems" are applied to real measurement data obtained from a single-tank process.

Second, the application is extended to a two-tank setting showing hybrid behavior. A mapped state signal is used to demonstrate the basic functionality of the method developed in Chapter 6 "Guaranteed Verification of Hybrid Systems". The results are discussed and interpreted. Third, the diagnosis method developed in Chapter 7 "Extended Kaucher Based Guaranteed Verification" is applied to simulation data of a four-tank process. Three different fault types are used to demonstrate the fault detection properties of the Kaucher based method. Several fault intensities demonstrate the possibility of the method to detect even very small faults. Finally, the diagnosis method is applied to real measurement data provided by a single-tank process. It is shown that all of the regarded faults can be detected successfully using the introduced methods.

## 8.1 Application: Guaranteed Verification for Interval Type Systems (Single-Tank)

The basic setting is given by a single-tank process which is sketched in Fig. 8.2.

Figure 8.2: Sketch of the single-tank

It is possible to set up a single-tank system consisting only of tank 2 by closing the respective valves in the three-tank lab system.<sup>16</sup> All valves in the system are binary valves which are only open or closed. The height h<sub>2</sub> of tank 2 is measured, as well as the flow v<sub>2</sub> of pump 2.

<sup>16</sup> Note that the number of the tanks in the lab setting is (from left to right) 1 - 3 - 2.

The outflow of each tank is governed by the formula of Torricelli [Tip00, p. 360], which leads to the nonlinear continuous-time dynamic of a single-tank

$$\frac{\mathrm{d}h\_2(t)}{\mathrm{d}t} = -\frac{1}{A\_2} \underbrace{a\_2 \sqrt{2gh\_2(t)}}\_{\text{outflow}} + \frac{1}{A\_2} \underbrace{\gamma\_2 v\_2(t)}\_{\text{inflow pump 2}}.\tag{8.1}$$

with the outflow pipe cross section a<sub>2</sub>, the tank cross section A<sub>2</sub>, the gravitational acceleration g and the constant γ<sub>2</sub> according to Appendix G, Tab. G.1. The following simplifications and the resulting model equations are based on the considerations in [Ble11].

The model is discretized using the Euler method with sampling time Δt, leading to

$$h\_{2,k} = h\_{2,k-1} - \frac{a\_2}{A\_2} \sqrt{2gh\_{2,k-1}} \Delta t + \frac{\gamma\_2}{A\_2} v\_{2,k-1} \Delta t + e\_{2,k} \tag{8.2}$$

where e<sub>2,k</sub> is the additive disturbance including sensor and discretization faults. Equation (8.2) is reformulated to the pseudo linear regressor form

$$
\varphi\_k \theta\_k = y\_k \tag{8.3}
$$

with

$$
\varphi\_k = h\_{2,k-1} \tag{8.4}
$$

$$\theta\_k = 1 - \frac{a\_2}{A\_2} \sqrt{\frac{2g}{h\_{2,k-1}}} \Delta t + \frac{e\_{2,k}}{h\_{2,k-1}} \tag{8.5}$$

$$
y\_k = h\_{2,k} - \frac{\gamma\_2}{A\_2} v\_{2,k-1} \Delta t. \tag{8.6}
$$

It can be seen that the parameter θ<sub>k</sub> given in (8.5) depends on the height h<sub>2,k−1</sub>, which renders it time variant. The range of the time variant parameter can be interpreted as an interval set that includes all possible parameter values as well as some spurious solutions. The interval enclosure of the parameter is given by the bounding box of the time variant parameter.

It is possible to calculate the interval enclosure of the time variant parameter for a specific nominal setting (e<sub>2,k</sub> = 0). This setting consists of a given operation range h<sub>2</sub> ∈ [h<sub>2,min</sub>, h<sub>2,max</sub>] and a fixed sampling time Δt. The calculated parameter interval can then be used as the nominal set in further considerations.
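This enclosure can be illustrated numerically. The constants below are assumed for illustration only (the real tank properties are listed in Appendix G, Tab. G.1). The fragment evaluates the time variant parameter (8.5) in the fault free case, checks that the regressor form (8.3)-(8.6) reproduces the Euler model (8.2), and takes the bounding box of the parameter over an assumed operation range:

```python
import math

# assumed illustrative constants (SI units); real values are in Appendix G, Tab. G.1
a2, A2, g, gamma2, dt = 0.5e-4, 154e-4, 9.81, 1.0e-4, 1.0

def theta(h_prev):
    """Time variant parameter (8.5) in the fault free case (e_{2,k} = 0)."""
    return 1.0 - (a2 / A2) * math.sqrt(2.0 * g / h_prev) * dt

def tank_step(h_prev, v_prev):
    """One Euler step of the single-tank model (8.2) with e_{2,k} = 0."""
    return (h_prev - (a2 / A2) * math.sqrt(2.0 * g * h_prev) * dt
            + (gamma2 / A2) * v_prev * dt)

# regressor check: phi_k * theta_k must equal y_k = h_{2,k} - (gamma2/A2) v_{2,k-1} dt
h_prev, v_prev = 0.20, 1.0e-4
y = tank_step(h_prev, v_prev) - (gamma2 / A2) * v_prev * dt
residual = abs(theta(h_prev) * h_prev - y)

# theta is monotonically increasing in h, so the bounding box over an assumed
# operation range h2 in [0.05, 0.30] m is spanned by the endpoint values
theta_lo, theta_hi = theta(0.05), theta(0.30)
```

With these assumed constants the resulting interval lies roughly in the range of the nominal parameter reported below; the exact bounds of course depend on the real tank properties.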

Based on the tank properties given in Appendix G, Tab. G.1 it is possible to set up the parameter range for a nominal and a faulty tank configuration. To realize the faulty behavior, all available valves are opened. This means the leakage outflow valve, the lower connection valves and the nominal outflow valves. The resulting height dependent parameter θ is depicted in Fig. 8.3 and can be used a priori to reason about detectability.

Figure 8.3: Values of θ depending on h<sub>2</sub> for different outflow configurations

It can be seen that there is a gap between the nominal behavior (green line) and the faulty behavior using all possible outflows (red line). This leads to the hypothesis that it is possible to separate the nominal behavior from the faulty tank configuration using the method from Chapter 5. Due to the upper valve position h<sub>32u</sub> = 30cm, the depicted values are valid in the case of a closed upper connection valve only. The nominal parameter θ\* = [0.93, 0.98] is used for tank levels in the range h<sub>2</sub> ∈ [5, 30] cm.

The hypothesis is verified using the following steps: First, a nominal description, i.e. a specification, of the system in the necessary ARX form is set up. Second, the system is implemented, i.e. the real tank is manufactured and used to collect measurement data. Third, a faulty version of the system is implemented, i.e. unspecified outflows are added to the tank by opening the respective valves.

The collected measurement data for h<sub>2</sub> is then enclosed by interval values

$$h\_{2,k} = \left[ h\_{2,k} (1 - \delta\_{h\_2}^r), \ h\_{2,k} (1 + \delta\_{h\_2}^r) \right] \tag{8.7}$$

and used to verify the correct system with respect to the nominal parameters. Finally, measurement data from the faulty system is used to show that it is not possible to verify the faulty behavior using the nominal parameters.
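The interval enclosure (8.7) is a plain symmetric relative inflation of each measured level. A minimal sketch, taking a 29.1 cm reading and the relative fault δ<sup>r</sup><sub>h2</sub> = 0.02 used in this experiment:

```python
def enclose(h_meas, delta_r):
    """Interval enclosure (8.7) of a measured level with relative fault delta_r."""
    return (h_meas * (1.0 - delta_r), h_meas * (1.0 + delta_r))

# a measured level of 29.1 cm enclosed with the relative fault 0.02
lo, hi = enclose(29.1, 0.02)
```

Every measured sample is replaced by such an interval before the consistency check is carried out.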

The initial height is set to h<sub>2,1</sub> = 5cm, the nominal outflow valve of tank 2 is open and pump 2 is constantly running, providing the maximal inflow of v<sub>2</sub> = 6.5 l/min. The resulting nominal setting is depicted in Fig. 8.4. It can be seen that it is possible to verify the measurement of the fault free system supposing a relative fault of δ<sup>r</sup><sub>h2</sub> = 0.02 for the measurement signal of the height h<sub>2</sub>. The basic consistency as introduced in Chapter 5 is used to calculate the results. The calculation time for the algorithm is 16.8s, which is less than the duration of the experiment of 60s.

Figure 8.4: Implementation of the single-tank setting that is basic consistent with the specification

Now a faulty implementation of the tank system is considered. The faulty setting is realized by opening the leakage outflow, the lower connection and the nominal outflow valves, which leads to a major change in the system dynamics.

The changed dynamics are not able to reach the nominal final height of h<sub>2,60</sub> = 29.1cm as the maximum pump flow cannot compensate the additional outflow. The resulting measurement data is depicted in Fig. 8.5. It is again analyzed using a relative fault of δ<sup>r</sup><sub>h2</sub> = 0.02. Fig. 8.5 shows that it is not possible to verify the faulty implementation for k ≥ 4. The necessary calculation time is 47.5s.

Figure 8.5: Implementation of the single-tank setting that is inconsistent with the specification due to additional outflows

The change in the system dynamics has to be strong to prevent the verification of the faulty system. If there is only a slight change in the system dynamics, the faulty system is verified because the new behavior can be explained within the range of the nominal parameter set. This is the case if a faulty system is implemented consisting of combinations of outflows and connection valves other than the introduced setting.

When having a closer look at the tank parameters, this is rather intuitive. The connection valve cross sections a<sub>32l</sub> and a<sub>32u</sub> are the same as the nominal outflow cross section a<sub>2</sub>. The leakage is only slightly bigger than the nominal outflow. Thus each individual valve leads to no, or only very slight, variations with respect to the nominal behavior. Any combination of two outflows is also verified using the original setting.

This concludes the first application example concerning the guaranteed verification for interval type systems as introduced in Chapter 5. It is clear that the results depend on the value of the used relative fault δ<sup>r</sup><sub>h2</sub>. Increasing the fault increases the variety of enclosed trajectories and can thus be interpreted as increasing the available system behavior. This leads to a higher chance to achieve nominal behavior enclosed in the measurement data and thus to verify the system.

## 8.2 Application: Guaranteed Verification for Hybrid Systems (Two-Tank)

The single-tank system can be extended by adding another tank, connected via two horizontal valves. There is no additional pump and thus no external inflow to the new tank. However, the new tank has a nominal outflow and it is possible to open and close the connection valves. A schematic description is depicted in Fig. 8.6.

The dynamic of the main tank needs to be extended with the flows induced by the new tank. Those flows depend on the height differences between the two levels, related to the static heights of the lower and upper valves h<sub>32l</sub> and h<sub>32u</sub>:

$$
\Delta\_{2,3,l,k} = \max(h\_{2,k}, h\_{32l}) - \max(h\_{3,k}, h\_{32l}) \tag{8.8}
$$

$$
\Delta\_{2,3,u,k} = \max(h\_{2,k}, h\_{32u}) - \max(h\_{3,k}, h\_{32u}).\tag{8.9}
$$
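The max(·) terms in (8.8) and (8.9) encode that a valve only carries flow while at least one level is above it: the effective difference vanishes as soon as both levels are below the respective valve height. A small sketch with assumed levels and valve heights:

```python
def level_differences(h2, h3, h_32l, h_32u):
    """Effective level differences (8.8) and (8.9) across the connection valves."""
    delta_l = max(h2, h_32l) - max(h3, h_32l)   # lower valve (8.8)
    delta_u = max(h2, h_32u) - max(h3, h_32u)   # upper valve (8.9)
    return delta_l, delta_u

# assumed levels in m: both tanks above the lower valve, only tank 2 above the upper
d_l, d_u = level_differences(0.35, 0.10, 0.0, 0.30)
```

With both levels below the upper valve the upper difference is exactly zero, so the corresponding cross flow term drops out of the dynamics.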

The resulting model is again discretized using the Euler method with sampling time Δt

$$\begin{split} h\_{2,k} &= h\_{2,k-1} - \frac{1}{A\_2} \underbrace{a\_2 \sqrt{2gh\_{2,k-1}} \Delta t}\_{\text{outflow}} - \frac{1}{A\_2} \underbrace{\text{sign}(\Delta\_{2,3,l,k-1}) a\_{32l} \sqrt{2g|\Delta\_{2,3,l,k-1}|} \Delta t}\_{\text{lower cross flow tank 3}} \\ &- \frac{1}{A\_2} \underbrace{\text{sign}(\Delta\_{2,3,u,k-1}) a\_{32u} \sqrt{2g|\Delta\_{2,3,u,k-1}|} \Delta t}\_{\text{upper cross flow tank 3}} + \frac{1}{A\_2} \underbrace{\gamma\_2 v\_{2,k-1} \Delta t}\_{\text{inflow pump 2}} + e\_{2,k} \end{split} \tag{8.10}$$

with all values according to Appendix G, Tab. G.1.

Figure 8.6: Schematic sketch of the two-tank experiment

The cross sections of both tanks are the same, i.e. A<sub>2</sub> = A<sub>3</sub>, leading to symmetric height changes induced by the cross flow. The term e<sub>2,k</sub> is the additive disturbance including sensor and discretization faults. The system description can again be transformed to the pseudo linear regressor form (8.3) with

$$\varphi\_k = \begin{bmatrix} h\_{2,k-1}, \ |\Delta\_{2,3,l,k-1}|, \ |\Delta\_{2,3,u,k-1}| \end{bmatrix} \tag{8.11}$$

$$
\Theta\_k = \begin{bmatrix} \theta\_k^{(1)}, \,\theta\_k^{(2)}, \,\theta\_k^{(3)} \end{bmatrix}^T \tag{8.12}
$$

$$y\_k = h\_{2,k} - \frac{\gamma\_2}{A\_2} v\_{2,k-1} \Delta t. \tag{8.13}$$

The elements of the parameter vector Θ<sub>k</sub> are given by

$$
\theta\_k^{(1)} = 1 - \frac{a\_2}{A\_2} \sqrt{\frac{2g}{h\_{2,k-1}}} \Delta t + e\_{2,k}' \tag{8.14}
$$

$$\theta\_k^{(2)} = \text{sign}(\Delta\_{2,3,l,k-1}) \frac{a\_{32l}}{A\_2} \sqrt{\frac{2g}{|\Delta\_{2,3,l,k-1}|}} \Delta t + e\_{2,k}^{\prime\prime} \tag{8.15}$$

$$\theta\_k^{(3)} = \text{sign}(\Delta\_{2,3,u,k-1}) \frac{a\_{32u}}{A\_2} \sqrt{\frac{2g}{|\Delta\_{2,3,u,k-1}|}} \Delta t + e\_{2,k}^{\prime\prime\prime} \tag{8.16}$$

with unknown composition of the fault e<sub>2,k</sub> = e′<sub>2,k</sub>/h<sub>2,k−1</sub> + e″<sub>2,k</sub>/|Δ<sub>2,3,l,k−1</sub>| + e‴<sub>2,k</sub>/|Δ<sub>2,3,u,k−1</sub>|. The parameters θ<sup>(2)</sup><sub>k</sub> and θ<sup>(3)</sup><sub>k</sub> given in (8.15) and (8.16) show singularities in case |Δ<sub>2,3,l,k−1</sub>| or |Δ<sub>2,3,u,k−1</sub>| approaches zero.

Therefore the enclosing intervals θ<sup>(2)</sup> and θ<sup>(3)</sup> become very large when the operation range [h<sub>2,min</sub>, h<sub>2,max</sub>] includes or is close to the height of the respective valve. It is thus necessary to choose the operation range with a sufficient distance to the valve heights to achieve meaningful parameter intervals.

## 8.2.1 Measurement With Mapped State Signal

The first approach for the verification of the resulting hybrid system was introduced in Section 6.1 and used a so called "mapped state signal" according to Definition 6.10.

This means that the exact switching times and the respective active subsystems are known correctly. Therefore the hybrid verification task is reduced to the sequential verification of the subsystems present in the measurement data.

The general system behavior given in equation (8.10) is therefore transferred to a more specific setting. Tank 3 is assumed to be empty (h<sub>3,1</sub> = 0cm) with its nominal outflow valve open and the lower connection valve closed. The upper connection valve is open, as well as the nominal outflow valve of tank 2.

The resulting hybrid scenario consists of two states: State 1 is active if h<sub>2,k</sub> ≤ h<sub>32u</sub>, i.e. the upper connection valve does not influence the system dynamics. State 2 is active for h<sub>2,k</sub> > h<sub>32u</sub>, i.e. an additional outflow is given through the upper connection valve.

The hybrid scenario is described as follows:

In state 1, starting at h<sub>2,1</sub> = 5cm, pump 2 is used to fill tank 2. The nominal behavior of tank 2 is identical to the single-tank behavior described in Section 8.1.

State 2 is reached when h<sub>2</sub> rises above the height of the upper connection valve, i.e. h<sub>2,k</sub> > h<sub>32u</sub> = 30cm. Thus the system dynamic changes due to the additional cross flow from tank 2 to tank 3 through the connection valve.

The resulting dynamic of state 2 is given by

$$h\_{2,k} = h\_{2,k-1} - \frac{1}{A\_2} \left( \underbrace{a\_2 \sqrt{2gh\_{2,k-1}} \Delta t}\_{\text{nominal outflow}} + \underbrace{a\_{32u} \sqrt{2g|h\_{2,k-1} - h\_{32u}|} \Delta t}\_{\text{upper cross flow to tank 3}} - \underbrace{\gamma\_2 v\_{2,k-1} \Delta t}\_{\text{inflow by pump 2}} \right) + e\_{2,k}. \tag{8.17}$$

In state 2 the level of tank 2 is always higher than the upper connection valve, i.e. h<sub>2,k−1</sub> ≥ h<sub>32u</sub>. Therefore the absolute value operator | · | on (h<sub>2,k−1</sub> − h<sub>32u</sub>) can be dropped and the singularity present in (8.16) is avoided.

The upper cross flow to tank 3 given in (8.17) can thus be reformulated:

$$a\_{32u}\sqrt{2g(h\_{2,k-1}-h\_{32u})}\Delta t\tag{8.18}$$

$$=a\_{32u}(h\_{2,k-1} - h\_{32u})\sqrt{\frac{2g}{(h\_{2,k-1} - h\_{32u})}}\Delta t\tag{8.19}$$

$$= a\_{32u} h\_{2,k-1} \sqrt{\frac{2g}{(h\_{2,k-1} - h\_{32u})}} \Delta t - a\_{32u} h\_{32u} \sqrt{\frac{2g}{(h\_{2,k-1} - h\_{32u})}} \Delta t \tag{8.20}$$

$$= h\_{2,k-1} \left( a\_{32u} \sqrt{\frac{2g}{(h\_{2,k-1} - h\_{32u})}} \Delta t - a\_{32u} h\_{32u} \sqrt{\frac{2g}{h\_{2,k-1}^2 (h\_{2,k-1} - h\_{32u})}} \Delta t \right) \tag{8.21}$$
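The algebraic steps (8.18)-(8.21) can be checked numerically: both forms of the upper cross flow term agree for any level above the valve. The constants in the check below are assumed for illustration only:

```python
import math

def crossflow_original(h, a_32u, h_32u, g=9.81, dt=1.0):
    """Upper cross flow term in the form (8.18)."""
    return a_32u * math.sqrt(2.0 * g * (h - h_32u)) * dt

def crossflow_factored(h, a_32u, h_32u, g=9.81, dt=1.0):
    """Equivalent factored form (8.21), linear in the regressor h_{2,k-1}."""
    return h * (a_32u * math.sqrt(2.0 * g / (h - h_32u)) * dt
                - a_32u * h_32u * math.sqrt(2.0 * g / (h * h * (h - h_32u))) * dt)

# assumed level 35 cm, valve height 30 cm, cross section 0.5 cm^2 (SI units)
gap = abs(crossflow_original(0.35, 0.5e-4, 0.30)
          - crossflow_factored(0.35, 0.5e-4, 0.30))
```

The factored form is what makes the level h<sub>2,k−1</sub> usable as the scalar regressor of state 2.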

Finally, the model for state 2 is given in pseudo linear regressor form (8.3) with


$$\begin{split} \varphi\_{k} &= h\_{2,k-1} \tag{8.22} \\ \theta\_{k} &= 1 - \frac{a\_{2}}{A\_{2}} \sqrt{\frac{2g}{h\_{2,k-1}}} \Delta t - \frac{a\_{32u}}{A\_{2}} \sqrt{\frac{2g}{(h\_{2,k-1} - h\_{32u})}} \Delta t \\ &\quad + \frac{a\_{32u}}{A\_{2}} h\_{32u} \sqrt{\frac{2g}{h\_{2,k-1}^{2} (h\_{2,k-1} - h\_{32u})}} \Delta t + \frac{e\_{2,k}}{h\_{2,k-1}} \tag{8.23} \\ y\_{k} &= h\_{2,k} - \frac{\gamma\_{2}}{A\_{2}} v\_{2,k-1} \Delta t. \end{split} \tag{8.24}$$

The nominal parameter can again be determined depending on the level h<sub>2,k−1</sub> for different outflow configurations. The resulting parameter ranges are depicted in Fig. 8.7 and are used to determine the nominal values of the system parameters.

Figure 8.7: Values of θ depending on h<sub>2</sub> for different outflow configurations in state 1 (left) and state 2 (right)

State 1 is assigned the same values as in Section 8.1, i.e.

$$
\theta(1)^\* = \begin{bmatrix} 0.93, \ 0.98 \end{bmatrix}. \tag{8.25}
$$

In state 2, i.e. starting from h<sub>2,k−1</sub> = 30cm, the time variant parameter for the nominal outflow only can be enclosed by the interval

$$
\theta(2)^\* = \begin{bmatrix} 0.965, \ 0.974 \end{bmatrix}. \tag{8.26}
$$

The nominal parameters are now used to verify the hybrid setting. Again, the collected measurement data of the height h<sub>2</sub> is enclosed by interval values

$$h\_{2,k} = \left[ h\_{2,k} (1 - \delta\_{h\_2}^r), \ h\_{2,k} (1 + \delta\_{h\_2}^r) \right] \tag{8.27}$$

with δ<sup>r</sup><sub>h2</sub> = 0.15.

The upper part of Fig. 8.8 shows a hybrid test run, starting with tank 3 being empty and the level of tank 2 being at ℎ2,1 = 5 cm. The level of tank 2 is increased using pump 2 and thus crosses the valve height ℎ32u at t = 64 seconds. The crossing changes the tank dynamics, as the upper connection valve now acts as an additional outflow of tank 2.

The verification results are depicted in the lower part of Fig. 8.8. State 1 can be verified as long as the level in tank 2 is below the connection valve, i.e. ℎ2,k ≤ ℎ32u. Afterwards, state 2 is verified until the end of the measurement sequence.

Note that in this case it is not necessary to perform cross validation, i.e. applying the parameters of state 2 to measurements from state 1 and vice versa. This is due to the fact that the real switching time and the respective active state are given by the mapped state signal.

Figure 8.8: Verification result in the hybrid case for state 1 and state 2 with a mapped state signal

## 8.2.2 Measurement Without Mapped State Signal

The introduced setting is now generalized by omitting the mapped state signal, still assuming known switching times. This means that it is still known that there is a state change at t = 64 in the scenario, but now it is unknown whether the system switches from state 1 to state 2 or vice versa. Therefore, all segments have to be analyzed twice, using the nominal parameters of both states. This cross verification is used to determine the active state of the respective subsystem.

To ensure successful cross validation, Prager-Oettli-Distinguishability as given in Assumption 6.2 has to be fulfilled. Therefore, the nominal parameters of state 1 and state 2 are investigated. It is obvious that the nominal parameters of state 2 are a subset of the nominal parameters of state 1, as

$$\theta^\*(1) = [0.93, \ 0.98] \supset [0.965, \ 0.974] = \theta^\*(2)\tag{8.28}$$

which leads to the existence of a common parameter

$$
\theta\_{com}^\* = \theta^\*(1) \cap \theta^\*(2) = [0.965, \, 0.974] \,. \tag{8.29}
$$

Therefore, all parameters within the common parameter set of (8.29) are consistent with both states by definition.
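The subset and intersection checks of (8.28) and (8.29) reduce to elementary interval operations. The following sketch (illustrative only, using the nominal intervals from (8.25) and (8.26)) shows why the two states cannot be told apart:

```python
def intersect(a, b):
    """Intersection of two closed intervals given as (lo, hi); None if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

theta_1 = (0.93, 0.98)    # nominal parameters of state 1, (8.25)
theta_2 = (0.965, 0.974)  # nominal parameters of state 2, (8.26)

# A nonempty common set means Prager-Oettli-Distinguishability fails:
# any parameter in the intersection is consistent with both states.
common = intersect(theta_1, theta_2)   # (0.965, 0.974)
distinguishable = common is None       # False in this setting
```

Since θ*(2) lies entirely inside θ*(1), the intersection equals θ*(2) itself, mirroring (8.29).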

It is hence not possible to distinguish the states, as Prager-Oettli-Distinguishability is not fulfilled in the current setting. Since this property is the main prerequisite of hybrid verification without a mapped state signal, it is formally impossible to demonstrate the viability of the method using this setting.

The performance of the algorithm in case of fulfilled Prager-Oettli-Distinguishability was demonstrated in Example 6.2. Furthermore, it was shown that it is possible to segment and verify the measurement data even without information about the switches if Prager-Oettli-Segmentability holds. The respective setting and the results are given in Example 6.3.

## 8.3 Simulation: Diagnosis By Kaucher Based Guaranteed Verification (Four-Tank)

Figure 8.9: Schematic view of the used four-tank system

The regarded setting is now changed to a slightly different four-tank setup. The four-tank process is an established benchmark in the literature and was proposed by [Joh00]. Here it is used to apply the diagnosis method introduced in Chapter 7. This application was published and presented in [Sch18b].

The four-tank setting is depicted in Fig. 8.9. For symmetry reasons the setting can be reduced to tank 1 and tank 3.

The dynamics of tank 1 are chosen as the objective of the verification. The heights ℎ1 and ℎ3 of both tanks are measured, as well as the on/off signal v_1 of pump 1. The flow of pump 1 is split by the input valve, leading to a fraction v_in1 = 0.7 of the flow going to tank 1 and (1 − v_in1) of the flow going to tank 4. The flows from pump 1 as well as from tank 3 are considered as inputs. The respective equations are similar to (8.2), now taking into account an additional inflow depending on ℎ3.

All simplifications and the resulting model equations are based on the considerations of [Ble11].

Discretization using the Euler method and sampling time Δt leads to

$$h\_{1,k} = h\_{1,k-1} - \frac{1}{A\_1} \underbrace{a\_1 \sqrt{2gh\_{1,k-1}} \Delta t}\_{\text{outflow}} + \frac{1}{A\_3} \underbrace{a\_3 \sqrt{2gh\_{3,k-1}} \Delta t}\_{\text{inflow from tank 3}} + \frac{1}{A\_1} \underbrace{v\_{in1} \gamma\_1 v\_{1,k-1} \Delta t}\_{\text{inflow by pump 1}} + e\_{1,k} \tag{8.30}$$

with parameters according to Appendix G, Tab. G.2. The additive disturbance e_1,k includes sensor and discretization faults.

The pseudo linear regressor form (8.3) is now given by

$$y\_k = h\_{1,k} - \frac{v\_{in1}\gamma\_1}{A\_1}v\_{1,k-1}\Delta t\tag{8.31}$$

$$
\varphi\_k = \begin{bmatrix} h\_{1,k-1} \ h\_{3,k-1} \end{bmatrix} \tag{8.32}
$$

$$
\Theta\_k = \begin{bmatrix} \theta\_k^{(1)} \ \theta\_k^{(2)} \end{bmatrix}^T \tag{8.33}
$$

with the time variant parameters

$$
\theta\_k^{(1)} = 1 - \frac{a\_1}{A\_1} \sqrt{\frac{2g}{h\_{1,k-1}}} \Delta t \tag{8.34}
$$

$$
\theta\_k^{(2)} = \frac{a\_3}{A\_3} \sqrt{\frac{2g}{h\_{3,k-1}}} \Delta t. \tag{8.35}
$$

It is assumed that the operation range of the tank system is ℎ1,k ∈ [2, 10.5] cm and ℎ3,k ∈ [1, 15] cm, which leads to θ(1)\* = [0.921, 0.965] and θ(2)\* = [0.029, 0.112].
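Because (8.34) is monotonically increasing in ℎ1,k−1 and (8.35) is monotonically decreasing in ℎ3,k−1, the parameter intervals can be obtained by evaluating both expressions at the edges of the operation range. The sketch below illustrates this; the geometry ratios and sampling time are hypothetical placeholders (the true values are in Appendix G, Tab. G.2), so the resulting numbers are not the thesis values.

```python
import math

# Hypothetical geometry and timing (placeholders for Appendix G, Tab. G.2):
a1_over_A1 = 0.004    # outflow area / tank area of tank 1
a3_over_A3 = 0.004    # outflow area / tank area of tank 3
g, dt = 981.0, 1.0    # gravity in cm/s^2, sampling time in s

def theta1(h1):       # (8.34): self-dynamics of tank 1
    return 1.0 - a1_over_A1 * math.sqrt(2.0 * g / h1) * dt

def theta2(h3):       # (8.35): inflow coupling from tank 3
    return a3_over_A3 * math.sqrt(2.0 * g / h3) * dt

# theta1 grows with h1 and theta2 shrinks with h3, so the interval
# bounds over h1 in [2, 10.5] cm and h3 in [1, 15] cm sit at the edges.
theta1_range = (theta1(2.0), theta1(10.5))
theta2_range = (theta2(15.0), theta2(1.0))
```

With the true geometry these evaluations reproduce the ranges θ(1)\* = [0.921, 0.965] and θ(2)\* = [0.029, 0.112] stated above.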

The resulting midpoint-radius expressions of the parameters are θc(1)\* = 0.943, θΔ(1)\* = 0.022 and θc(2)\* = 0.0705, θΔ(2)\* = 0.0415.

The optimization based diagnosis approach using a zonotopic approximation is chosen in this example. Therefore the measurement data and the nominal parameter set are used to set up the constraints of the optimization problem.

The nominal feasible parameter box Θ\* is used to build the initial zonotope with:

$$P^0 = \begin{bmatrix} \theta\_c^{(1)\*} & \theta\_c^{(2)\*} \end{bmatrix}^T \tag{8.36}$$

$$H^0 = \begin{bmatrix} \theta\_{\Delta}^{(1)\*} & 0\\ 0 & \theta\_{\Delta}^{(2)\*} \end{bmatrix}. \tag{8.37}$$

Afterwards, the outer enclosure of the intersection between the initial zonotope and the measurement data is calculated using (7.10)-(7.12). The resulting zonotope is used as the starting point of the optimization problem. The solution of the optimization problem is thus a zonotopic approximation of the united solution set. All parameter vectors included in the optimal solution set are solutions of the ILES (3.46). If the intersection of the specification and the measurement is nonempty, the algorithm calculates a feasible set in this area. The results and limitations of the approach are demonstrated in the following, using several different settings.
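The initialization (8.36)/(8.37) can be made concrete with a small sketch: with a diagonal generator matrix, the initial zonotope is exactly the nominal parameter box. The helper below is illustrative only and not the thesis implementation.

```python
import itertools

# Midpoint-radius data of the nominal parameter box (Section 8.3):
theta_c = [0.943, 0.0705]   # centers  theta_c(1)*, theta_c(2)*
theta_d = [0.022, 0.0415]   # radii    theta_D(1)*, theta_D(2)*

# Initial zonotope per (8.36)/(8.37): center P0, diagonal generators H0.
P0 = theta_c
H0 = [[theta_d[0], 0.0], [0.0, theta_d[1]]]

def zonotope_vertices(P, H):
    """Vertices of the zonotope {P + H b : b in [-1, 1]^m}. With a
    diagonal H this reduces to the corners of an axis-aligned box."""
    m = len(H[0])
    return [[P[i] + sum(H[i][j] * b[j] for j in range(m))
             for i in range(len(P))]
            for b in itertools.product((-1.0, 1.0), repeat=m)]

verts = zonotope_vertices(P0, H0)
# The four vertices span the nominal box [0.921, 0.965] x [0.029, 0.112].
```

During the iteration, H is no longer diagonal, so the zonotope becomes a genuine (tilted) polytope rather than a box.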

## 8.3.1 Fault Free Setting

First, a fault free scenario is considered as depicted in Fig. 8.10. The scenario includes parts with the pump on and off and thus shows a variety of different water level dynamics in both tank 1 and tank 3. The measurement data of ℎ1 and ℎ3 are enclosed using intervals with radius δ = 0.05 cm, leading to the interval values

$$h\_{1,k} = \left[ h\_{1,k} - \delta\_{h\_1}^a \, , \ h\_{1,k} + \delta\_{h\_1}^a \right] \tag{8.38}$$

$$h\_{3,k} = \begin{bmatrix} h\_{3,k} - \delta\_{h\_3}^a \ , \ h\_{3,k} + \delta\_{h\_3}^a \end{bmatrix}. \tag{8.39}$$

The results of the optimization based zonotopic method are given in subplot 4 of Fig. 8.10. The algorithm calculated a feasible set of parameters for the time segments [1, k] with k ∈ [1, 1297] ∪ [1572, 2000]. However, time segments starting in the beginning and ending at k ∈ [1298, 1571] are not verified. Therefore, there is temporarily no consistency according to Prop. 7.4 in this segment. This is due to the CMP effect as introduced in Section 7.2.1.

A detailed view of the relevant time instants is given in Fig. 8.11, which displays the change of the result from consistent to inconsistent and back.

The constraints given by the measurement data are depicted by the blue lines in Fig. 8.11, the nominal set by the shaded area and the zonotopic approximation of the united solution set is shown as the green zonotope.

At k = 1297, the measurement data is proven to be basic consistent with the specification, as there is a nonempty intersection between the nominal set and the approximation of the united solution set (orange part of the zonotope in Fig. 8.11, subfigure 1).

At k = 1298, the verification result is inconsistent for the first time. However, there is still a feasible region within the nominal parameter set, shown by the green crosses that depict vertices fulfilling Prop. 4.1 (Fig. 8.11, subfigure 2). Those points were used by the algorithm while calculating a suitable factor. However, there is no intersection between the final green zonotope and the shaded nominal region. This behavior reflects exactly the definition of the CMP effect introduced in Section 7.2.1. The CMP effect is observable until k = 1571 (Fig. 8.11, subfigure 3).

Starting from k = 1572, additional constraints given by new measurement data are taken into account. Therefore, the center and shape of the outer enclosure are changed. This results in a zonotopic approximation of the united solution set that provides a nonempty intersection with the nominal set again (Fig. 8.11, subfigure 4), leading to a successful verification.

As those later results are calculated based on all measurement data, including the possibly inconsistent times k ∈ [1298, 1571], the results showing the CMP effect are corrected and the verification result for k > 1571 can be generalized for all k ∈ [1, 2000].

Figure 8.10: Verification result for the fault free setting

Figure 8.11: Zoom on the time instants showing the CMP effect

The method is applied iteratively to calculate an individual verification result for each time step k. The calculations are done offline, and the necessary calculation time for the complete data set is 703.8 s < 2000 s = T. The calculation time of each individual measurement time step is less than 1 s, which means that the method might in general be real-time capable. However, the algorithm is based on an optimization procedure with non-deterministic runtime. Therefore, special measures need to be taken to ensure a deterministic runtime of each step. For an initial estimate, the average runtime of the optimization algorithm is used by regarding the total runtime of the method throughout this chapter.

## 8.3.2 Additive Faults

There are several phenomena that can lead to an additive fault of a sensor. A possible sensor fault is called "freeze", where the sensor returns a fixed constant value. Another common sensor fault is "offset", which means that the sensor adds a constant bias to the true measurement value. A third additive sensor property is the specific sensor noise. Noise is not regarded as an effect to be detected here. However, it is crucial to know the sensor noise precisely to choose the bound of the interval enclosure correctly.

Given the correct, faultless but noisy sensor data s_k, a freeze fault of value f_f, occurring at time k_err, can be expressed as follows:

$$s\_{f,k} = \begin{cases} s\_k & \forall k \in [1, k\_{err} - 1] \\ f\_f & \forall k \in [k\_{err}, T] \end{cases} \tag{8.40}$$
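The freeze fault model (8.40) is straightforward to implement as a fault injector for simulation studies. This sketch is illustrative (signal values are hypothetical); indices are 1-based as in the text.

```python
def freeze_fault(s, k_err, f_f):
    """Freeze fault model (8.40): the sensor returns the correct value s_k
    up to k_err - 1 and the fixed value f_f from k_err on (1-based k)."""
    return [s[k - 1] if k < k_err else f_f for k in range(1, len(s) + 1)]

signal = [5.8, 5.9, 6.0, 6.1, 6.2]          # hypothetical h_1 samples in cm
faulty = freeze_fault(signal, k_err=3, f_f=6.0)
# faulty == [5.8, 5.9, 6.0, 6.0, 6.0]
```

The offset fault (8.41) and the scaling fault (8.43) differ only in the second branch (`s[k-1] + f_o` and `s[k-1] * f_s`, respectively).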

The measurement values of ℎ1 and ℎ3 are enclosed using δ = 0.05 cm. The case of a freeze fault of f_f = 6.0 cm on measurement ℎ1 at k_err = 650 is depicted in Fig. 8.12. It can be seen that the detection time is equal to the fault time k_err = 650, which means that the freeze fault is detected instantaneously.

The results for several different freeze fault amplitudes on ℎ1 and the respective fault detection times are given in Tab. 8.1. All detected faults were checked in detail to identify settings showing the CMP effect.

In case of f_f = 6.2 cm, a CMP condition occurred. This is due to the fact that the used freeze fault intensity is enclosed in the δ = 0.05 cm interval around the real value of ℎ1,650 = 6.22 cm. Thus, at k = 651 the freeze fault value is really close to the correct value of ℎ1,651, meaning that there is a feasible parameter mapping ℎ1,k = 6.2 to ℎ1,k+1 = 6.2. Subsequent points do not provide additional information, as the sensor value is fixed by the freeze fault. The only additional information is provided by the pump signal v_1. The varying pump signal leads to a movement of the center of the outer enclosing zonotope. At k = 684, the center of the zonotope is moved to a position that generates a CMP effect that is erroneously interpreted as an inconsistency.

Figure 8.12: Verification results for a freeze fault of f_f = 6.0 cm at k_err = 650

All calculation times for freeze faults given in Table 8.1 are less than the duration of the measurement signal, i.e. less than 2000 s = T.


Table 8.1: Different fault amplitudes and resulting detection times for the freeze fault

The second regarded malfunction is an offset fault. In this case the sensor value is not fixed, but a specific value is added to each measurement:

$$s\_{o,k} = \begin{cases} s\_k & \forall k \in [1, k\_{err} - 1] \\ s\_k + f\_o & \forall k \in [k\_{err}, T] . \end{cases} \tag{8.41}$$

Again, the measurement values of ℎ1 and ℎ3 are enclosed using δ = 0.05 cm. The results for an offset fault of f_o = 0.7 cm on sensor ℎ1 are depicted in Fig. 8.13. Again, the verification result changes to infeasible right at the moment the fault becomes effective, i.e. at k_err = 650.

Further results for different fault amplitudes are given in Table 8.2. Instantaneous detection is possible down to a fault amplitude of f_o = 0.15 cm. Note that the measurement noise is enclosed using δ = 0.05 cm, which leads to an interval width of 2δ = 0.1 cm. This is very close to the fault amplitude f_o = 0.15 cm. When using f_o = 0.1 cm - which is exactly the interval width - the CMP effect occurs. The necessary calculation time is less than the signal duration for all regarded offset fault intensities.


Table 8.2: Different fault amplitudes and resulting detection times for the offset fault

Figure 8.13: Verification results for an offset fault of f_o = 0.7 cm at k_err = 650

## 8.3.3 Multiplicative Faults

Multiplicative faults can be related to faults in system components, e.g. a congested or leaking pipe or decreasing pump performance. Such a multiplicative fault directly influences the system parameter:

$$\theta\_{err,k} = \begin{cases} \theta\_k & \forall k \in [1, k\_{err} - 1] \\ \theta\_k + f\_\theta & \forall k \in [k\_{err}, T] \end{cases} \tag{8.42}$$

A maximum absolute deviation of δ = 0.05 cm is used to enclose the measurement values of ℎ1 and ℎ3. An exemplary setting for f_θ = 0.035 on parameter θ(1) at k_err = 1200 is depicted in Fig. 8.14; further results are given in Tab. 8.3.

It can be seen that the faulty parameter influences the value of ℎ1. This change in the system dynamics is recognized by the verification method at k_err = 1200.

All detected inconsistencies were checked in detail. Reliable results are possible down to f_θ = 0.010. This is a very small value with respect to the nominal parameter variability θΔ(1)\* = 0.025, which shows that the new method is very sensitive.

The calculation time is less than 2000 s = T for all entries in Tab. 8.3.


Table 8.3: Different fault amplitudes and resulting detection times for the parameter fault

Figure 8.14: Verification results for a multiplicative fault of f_θ = 0.035 at k_err = 1200

## 8.4 Application: Diagnosis By Kaucher Based Guaranteed Verification (Single-Tank)

The diagnosis method is now applied to real measurement data instead of simulation data as in the previous section. Therefore, the IRS three-tank setting (introduced in Section 8.1) is used, again reduced to the single-tank setup. The respective geometric parameters can be taken from Appendix G, Tab. G.1.

The following scenario is regarded: the water level in tank 2 has an initial height of ℎ2,1 = 24.48 cm and is rising due to the input flow from pump 2. Pump 2 is running at a high load with varying intensity.

First, the fault free setting is evaluated to show that the method is able to verify the nominal setting. Then the additive sensor faults "freeze" and "offset" are applied to the measurement data. Finally, a scaling fault on the height measurement data is considered.

The results for different fault intensities as well as the detection and calculation times are given in several tables. Each result was evaluated carefully to determine CMP conditions. The results were initially published and presented in [Sch18c].

## 8.4.1 Fault Free Setting

The regarded operation range is defined to be ℎ2 ∈ [24, 46] cm and used with (8.5) to obtain the nominal range θ\* = [0.971, 0.979].

The result is calculated by using an absolute deviation of δ = 0.4 cm to enclose the measurement data of ℎ2. The pump measurement data is assumed to be noiseless and is thus used as a real point value.

The fault free behavior is depicted in Fig. 8.15. It is verified for the entire measurement time. The necessary calculation time is 34.4 s < 120 s = T.

Figure 8.15: Fault free measurement data of the single-tank diagnosis scenario

## 8.4.2 Additive Faults

The first considered additive fault is given by a sensor freeze. The mathematical model used to distort the fault free measurement data of ℎ2 is given by (8.40). The measurement is enclosed using an absolute deviation of δ = 0.4 cm. The resulting system run for f_f = 37.9 cm on the measurement of ℎ2 at k_err = 60 is depicted in Fig. 8.16.

It can be seen that it is not possible to verify the measurement data as soon as the freeze fault is active. The failure is detected at k_err = 60, i.e. at the very first time the measurement is distorted.

An evaluation of the performance of the method for several different freeze fault intensities is listed in Tab. 8.4. It can be seen that it is possible to detect faults in a large range from f_f = 42.0 cm down to f_f = 37.9 cm. The lower value is very close to the correct value ℎ2,60 = 37.54 cm.

All faults are detected directly at their first appearance, i.e. at k = 60 = k_err. All results were checked carefully to ensure that there is no CMP effect present in the results. All calculation times are less than the measurement time, i.e. less than 120 s = T.


Table 8.4: Different freeze fault amplitudes for ℎ2,60 = 37.54 cm

Figure 8.16: Measurement data with freeze fault f_f = 37.9 cm

Second, an offset fault setting is applied. Therefore, a constant offset is added to the faultless measurement data of ℎ2 according to (8.41). Again, an absolute deviation of δ = 0.4 cm is used to enclose the measurement. The performance of the method is shown exemplarily in Fig. 8.17 and Fig. 8.18.

The large offset of f_o = 5 cm in Fig. 8.17 is rather obvious and could also be detected by an expert. On the other hand, the very small offset of f_o = 0.35 cm in Fig. 8.18 is very hard to distinguish from the fault free measurement depicted in green.

Nevertheless, the zonotopic method is able to detect it at the moment of its first appearance. This is a very powerful property, as the detected offset of f_o = 0.35 cm is smaller than the used interval radius of δ = 0.4 cm.

This performance is due to the dynamics between time instant k − 1 and time instant k that are created by the appearance of the offset fault. These dynamics are detected instantaneously, as they are outside of the nominal parameter range.
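The mechanism can be caricatured with a point-valued one-step check (the actual method works on interval data with Kaucher arithmetic and the zonotopic solution set, so this is only a simplified illustration with hypothetical level values): a sudden offset makes the implied one-step parameter of the scalar regressor form y_k = θ_k φ_k jump out of the nominal range.

```python
def consistent(y_k, phi_k, theta_box):
    """Point-valued caricature of the one-step consistency check:
    is the implied parameter y_k / phi_k inside the nominal box?"""
    lo, hi = theta_box
    return lo <= y_k / phi_k <= hi

theta_star = (0.971, 0.979)   # nominal range of Section 8.4.1

# Hypothetical step with phi_k = h_{2,k-1} = 30 cm:
assert consistent(29.25, 30.0, theta_star)          # 0.975: nominal dynamics
assert not consistent(29.25 + 0.35, 30.0, theta_star)  # 0.35 cm offset visible
```

Even though 0.35 cm is below the interval radius of 0.4 cm, the jump it induces in the implied parameter exceeds the nominal parameter width, which is why the interval method also flags it.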

Several results for different offset intensities are given in Tab. 8.5. The table shows that instantaneous detection is possible within the range from f_o = 0.35 cm to f_o = 5 cm. No result shows the CMP effect, which means that they are of good quality and provide reliable results using a zonotopic approximation of the united solution set.

The calculation time of all results is also given in Tab. 8.5. It can be seen that the measurement data can be processed in less than the genuine signal time, i.e. in less than 120 s = T, for all fault intensities.


Table 8.5: Different offset fault amplitudes for ℎ2,60 = 37.54 cm

Figure 8.17: Measurement data with offset fault f_o = 5 cm

Figure 8.18: Measurement data with offset fault f_o = 0.35 cm

## 8.4.3 Scaling Faults

In the diagnosis scenario discussed in this section so far, real measurement data from a single-tank process is used. To realize multiplicative faults with the same measurement data, a scaling fault in the corresponding sensor of ℎ2 is assumed. This leads to the following scaling fault model

$$s\_{s,k} = \begin{cases} s\_k & \forall k \in [1, k\_{err} - 1] \\ s\_k \cdot f\_s & \forall k \in [k\_{err}, T] \end{cases} \tag{8.43}$$

which replaces the former multiplicative fault (8.42). The absolute deviation used to enclose the measurement of ℎ2 remains δ = 0.4 cm. The results are depicted in Fig. 8.19 and Fig. 8.20 for f_s = 0.95 and f_s = 1.01, respectively.

It can be seen that even factors very close to one (e.g. f_s = 1.01, i.e. a deviation of 1%) can be detected.

Results for an extensive range of factors are given in Tab. 8.6.

It is not possible to detect the fault intensity of f_s = 0.97, as this factor is very close to the nominal parameter range θ\* = [0.971, 0.979] representing the desired system dynamics. This means there is a deviation of only 0.1% between f_s = 0.97 and the lower bound 0.971, which is one order of magnitude less than for f_s = 1.01.

All successfully detected faults are detected at k = 60 = k_err, i.e. right at their appearance. There was no CMP condition present in the regarded settings.

Again, it is possible to calculate the results for all fault amplitudes in less than the genuine signal time of 120 s.


Table 8.6: Different scaling fault amplitudes for ℎ2,60 = 37.54 cm

Figure 8.19: Measurement data with scaling fault f_s = 0.95

Figure 8.20: Measurement data with scaling fault f_s = 1.01

## 8.5 Conclusion

The methods and theories developed throughout this thesis were applied and demonstrated in this chapter.

First, simulation data of a single-tank process was used to show the performance of the verification method based on Kaucher arithmetic and zonotopic inner enclosures of the united solution set.

This approach was extended to the hybrid setting given by a two-tank system. It was shown that the introduced method is able to verify the correct system in case of known switching times and known active subsystems.

The application to a four-tank process showed that it is not possible to verify the system in various faulty settings, even for very small fault amplitudes. This is a relevant indicator that a fault is present in the system and can thus be used for fault detection. The performance of the developed method was shown for three different and common fault types, namely the freeze fault, the offset fault and the multiplicative fault.

The same diagnosis algorithm was finally applied to real measurement data, provided again by the single-tank process. It could be shown that the algorithm obtains valuable results by detecting even very small faults in real world data.

The calculation time of all introduced examples was less than the genuine time of the experiment on a standard laptop. Therefore, the approach might in general be suitable for online application in a diagnosis setting.

# 9 Conclusion

Modern engineering is able to develop and build complex and powerful systems to an unprecedented extent. The functionality of these systems is rapidly increasing and masters tasks that used to be reserved for highly trained humans. The challenge of how to build such systems is nearly solved. Still remaining is the question of how to ensure correct functionality of such powerful safety critical systems. Current safety analysis relies on sophisticated methods from the field of testing. Even though these methods are very mature, they are essentially falsification approaches, meaning that type II errors are possible by definition. However, in the case of safety critical systems, it is necessary to ensure the absence of type II errors. This thesis provides the foundations for a new specification and verification approach able to provide the necessary type II error free results.

Therefore, a new notion of set based consistency of dynamic systems with a given specification is presented. Kaucher interval arithmetic is used to enclose the measurement data in a bounded error sense. Thus, the specified behavior of a dynamic system can be verified by measurement data even in the presence of noise and sensor uncertainty. Consistency is defined using the Kaucher arithmetic united solution set, which leads to mathematically guaranteed results. The verdicts calculated by the new Kaucher based method cannot show type II errors (hidden faults) by definition and are thus suitable to provide a reliable verification of safety critical systems.

It was proven mathematically that this holds for a wide class of systems, including time invariant, interval type and hybrid systems, which can be used to describe even nonlinearities. The notion of consistency was extended to include the discrete event part of a hybrid system, and requirements on the connection of the two system classes were derived. Several extensions were introduced, leading to a new iterative identification and segmentation algorithm for hybrid systems which is able to handle even unknown switching times. In case the calculations can be done fast enough, the developed approach can also be used for the diagnosis of dynamic systems. Requirements on sampling time and hardware performance have to be determined individually for each specific setting.

The presented methods were successfully applied to several example systems, consisting of a variety of different tank settings. The results were shown, interpreted and discussed.

The results provide the basis for answering the research question that governed this thesis. The new theories, methods and algorithms developed in this thesis form the foundation for a reliable safety analysis of highly automated safety critical systems. The results of this thesis can be used to solve the arising problems of current powerful and interconnected systems that are increasingly interleaving with our daily lives.

# A Analysis Perspectives

A popular definition to distinguish different analysis perspectives was coined by [Boe84]: validation means "building the right product" whereas verification means "building the product right".

This is used to set up a high level differentiation between validation and verification/falsification as given in Fig. A.1. Validation is always concerned with the desire of the customer and evaluates the question whether the developed functionality fulfills this desire. The field of validation is very important, and large research effort including psychology, behavioral science and linguistics has been put into it in the last decades [Mac95][Que98][Fau03][Fol08][Bar13]. Nevertheless, validation is not part of this thesis.

Security focuses on the detection of intentional misuse by (un)authorized subjects, e.g. due to hacking attacks or user errors. The whole field of security, including conscious misuse, hacking and manipulation, is also not in the scope of this thesis.

Figure A.1: Evaluation terminology

The realization of the specification is done by human engineers; therefore it is likely that mistakes occur during the process of implementation. A widespread approach to find these mistakes is given by the concept of falsification, which tries to determine a so-called counterexample that shows unspecified or wrong behavior. If it is not possible to determine a counterexample, it is assumed that there are no counterexamples at all and thus the system is considered to be correct. However, due to the limited runtime of the falsification process it is possible that there are undetected (hidden) faults in the implementation. Therefore, type II errors are possible, which is a disadvantage in case of safety critical systems.

If the specification is very formal, the implemented system can be analyzed in a formal way.

Verification methods aim at proving the correctness of the implemented system in (all) operating conditions. The goal is to prove that the system always shows nominal behavior.

Verification and falsification methods are in general conducted during the development process, while the system operates in some kind of artificial environment. The evaluation can be done offline and might thus need more calculation time, or can be run several times during the development process. Mistakes occurring during system operation are tackled by methods from the diagnosis and monitoring field. They need to run online, in parallel to the real system operation, and are thus required to be very fast.

In case of model based diagnosis, a model of the nominal system is generated that is used to calculate the nominal system output in parallel to the real verification object (VO). Therefore, the inputs of the VO are measured and also applied to the nominal model. The resulting outputs are compared with the measured outputs of the VO, which leads to a so-called residual vector (see among others [Ise93][Ven03a][Ven03c][Ble10]). In case of an undisturbed system, the residual vector is zero if there is no fault present in the VO. A fault is detected if there is a non-zero residual vector. If it is necessary to gain further knowledge of the fault, more sophisticated methods can be applied to localize the exact point of fault occurrence within the VO ([Ven03a][Ven03c][Che14]). A drawback of the diagnosis approach is that - due to measurement noise and model imprecision - the residual vector is not always exactly zero, even in the fault free case [Ise06, p. 198].

## B Derivation of the Interval Distribution

This appendix provides the derivation of a probability distribution on the parameter p connecting two intervals u and y.

Two examples are presented in Chapter 3. Example 3.4 shows a setting with a proper result of p and Example 3.5 demonstrates a setup leading to an improper solution. The interval ranges of u and y are sampled with a step size of 0.0001 and used to calculate the resulting parameter p for all possible combinations.

It is also possible to derive the shown results theoretically. Therefore, two random variables u and y are defined with uniform distributions between the infimum and the supremum of the interval values u and y. The probability density functions of the two random variables are given by:

$$f\_u\left(u\right) = \begin{cases} \frac{1}{\overline{u} - \underline{u}} & , \ \forall u \mid \underline{u} \le u \le \overline{u} \\ 0 & , \ \text{else} \end{cases} \tag{B.1}$$

and

$$f\_y\left(y\right) = \begin{cases} \frac{1}{\overline{y} - \underline{y}} & , \ \forall y \mid \underline{y} \le y \le \overline{y} \\ 0 & , \ \text{else.} \end{cases} \tag{B.2}$$

A general probability density function according to [Bro08, p. 816] has to fulfill the assumptions

$$f\left(x\right) \ge 0, \,\forall x\,\tag{B.3}$$

$$\int\_{-\infty}^{\infty} f\left(x\right) dx = 1.\tag{B.4}$$

Assumption (B.3) is valid for (B.1) and (B.2) by definition. Assumption (B.4) can be shown as follows:

$$\int\_{-\infty}^{\infty} f\_u\left(u\right) du = \underbrace{\int\_{-\infty}^{\underline{u}} f\_u\left(u\right) du}\_{0} + \underbrace{\int\_{\underline{u}}^{\overline{u}} f\_u\left(u\right) du}\_{\left[\frac{1}{\overline{u}-\underline{u}} u\right]\_{\underline{u}}^{\overline{u}}} + \underbrace{\int\_{\overline{u}}^{\infty} f\_u\left(u\right) du}\_{0} = \frac{\overline{u}}{\overline{u}-\underline{u}} - \frac{\underline{u}}{\overline{u}-\underline{u}} = 1.$$

The proportional parameter p with

$$
u \cdot p = y \tag{B.5}
$$

can thus be interpreted as the random variable

$$p = g\left(u, y\right) = \frac{y}{u}.\tag{\text{B.6}}$$

There are different possible realizations of u and y depending on the specific value of p. Therefore, the probability density function of p is, according to [Jon02, p. 118], given as

$$f\_p\left(p\right) = \int\_{-\infty}^{\infty} \left|u\right| f\_u\left(u\right) f\_y\left(u \cdot p\right) du. \tag{B.7}$$

With the constant densities of the uniform distributions $f\_u(u)$ and $f\_y(y)$, and assuming only positive input values u > 0, (B.7) can be relaxed to

$$f\_p(p) = \int\_0^\infty u \underbrace{f\_u(u)}\_{\substack{\text{const. for } \underline{u} \le u \le \overline{u}, \\ \text{else } 0}} \underbrace{f\_y(u \cdot p)}\_{\substack{\text{const. for } \underline{y} \le u \cdot p \le \overline{y}, \\ \text{else } 0}} du. \tag{B.8}$$

The integrand is zero for all $u \notin [\underline{u}, \overline{u}]$ and for all $u \notin [\underline{y}/p,\ \overline{y}/p]$. Else the densities consist of constant values, leading to the antiderivative

$$f\_p\left(p\right) = \left[\frac{1}{2}c\_u c\_y u^2\right]\_{\max\left(\underline{u},\ \underline{y}/p\right)}^{\min\left(\overline{u},\ \overline{y}/p\right)} \tag{B.9}$$

with $c\_u = \frac{1}{\overline{u}-\underline{u}}$ and $c\_y = \frac{1}{\overline{y}-\underline{y}}$. The evaluation of $f\_p(p)$ depends on the infimum and supremum of u and y and can be generalized as

$$f\_p(p) = \frac{1}{2} c\_u c\_y \max\left(0, \left(\min\left(\overline{u}, \max\left(\underline{y}/p, \overline{y}/p\right)\right)\right)^2 - \left(\max\left(\underline{u}, \min\left(\underline{y}/p, \overline{y}/p\right)\right)\right)^2\right). \tag{B.10}$$

It is now possible to draw the derived density function for specific values of u and y. Exemplary plots for a proper and an improper setting are given in Example B.1.

#### Example B.1:

The plot for the values of u = [2, 3] and y = [4, 9] according to Example 3.4 is depicted in Fig. B.1. The density function has the same shape as the sampling based result given in Fig. 3.5. The plateau in the figure shows that the solution is proper.
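A short numerical sketch (Python, variable names illustrative) evaluates the closed-form density (B.10) for this proper setting and cross-checks it by sampling; only p > 0 is considered:

```python
import numpy as np

# Sketch of the closed-form density (B.10) for the proper setting
# u = [2, 3], y = [4, 9] of Example B.1; names are illustrative.
u_lo, u_hi = 2.0, 3.0
y_lo, y_hi = 4.0, 9.0
c_u = 1.0 / (u_hi - u_lo)
c_y = 1.0 / (y_hi - y_lo)

def f_p(p):
    """Density of p = y/u according to (B.10), assuming p > 0."""
    upper = min(u_hi, max(y_lo / p, y_hi / p))
    lower = max(u_lo, min(y_lo / p, y_hi / p))
    return 0.5 * c_u * c_y * max(0.0, upper**2 - lower**2)

# Monte Carlo cross-check: empirical density around a few values of p.
rng = np.random.default_rng(0)
p_samples = rng.uniform(y_lo, y_hi, 200_000) / rng.uniform(u_lo, u_hi, 200_000)
for p in (1.5, 2.0, 2.5):
    empirical = np.mean(np.abs(p_samples - p) < 0.05) / 0.1
    print(f"p={p}: analytic {f_p(p):.3f}, sampled {empirical:.3f}")
```

On p ∈ [2, 3] the density is constant at 0.5, which mirrors the plateau visible in Fig. B.1.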

Figure B.1: Probability density function $f\_p(p)$ for the proper case

The plot of an improper setting according to Example 3.5 is given in Fig. B.2. The input and output intervals are u = [2, 3] and y = [4, 5], leading to an improper solution and an eroded plateau.

Figure B.2: Probability density function $f\_p(p)$ for the improper case

These results are in accordance with the theory introduced in Chapter 3.

## C Full Rank Criteria

It is in general NP-hard to determine whether a given interval matrix has full rank, respectively to check the matrix for singularity [Sha14]. However, there are some criteria to determine the property of full rank [Sha14]. Four sufficient conditions and one necessary and sufficient condition are given in the following.

It is necessary to introduce the absolute value of an interval

$$\left|\mathbf{x}\right| = \max\left(\left|\underline{x}\right|, \left|\overline{x}\right|\right) \tag{C.1}$$

and the magnitude

$$x^{+} = \begin{cases} \min\left( \left| \underline{x} \right|, \left| \overline{x} \right| \right) & \text{, if } 0 \notin x \\ 0 & \text{, else.} \end{cases} \tag{C.2}$$

The first sufficient condition applies to square problems and is based on diagonal dominance. The interval matrix $A \in \mathbb{IR}^{(n \times n)}$ is nonsingular if it is diagonally dominant. This means the inequality

$$a^{(ii)+} > \sum\_{\substack{j=1\\j\neq i}}^n \left| \mathbf{a}^{(ij)} \right|\tag{C.3}$$

holds for all i ∈ {1, 2, . . . , n}.

There are two approaches to extend this condition to overdetermined equation systems, i.e. $A \in \mathbb{IR}^{(m \times n)}$ with m > n. The first approach searches for diagonally dominant subsquares within the overdetermined interval matrix. If there is such a diagonally dominant subsquare, the whole interval matrix has full rank. However, there might be no diagonally dominant subsquare even though the matrix has full rank. This can be due to permutation of rows of the matrix: even though permuted rows do not change the rank of a matrix, they do change the appearance of diagonally dominant subsquares. This very property can also be used to solve the problem. The rows can be permuted algorithmically such that diagonally dominant subsquares are created. This condition is still sufficient for overdetermined systems.
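The subsquare search can be sketched as follows; the (lo, hi) entry encoding, the helper names and the brute-force permutation search are illustrative and only feasible for small matrices:

```python
from itertools import permutations

# Hedged sketch of the diagonal-dominance test (C.3) with a brute-force
# row selection for overdetermined interval matrices.

def mag(iv):
    """Absolute value of an interval, see (C.1)."""
    lo, hi = iv
    return max(abs(lo), abs(hi))

def mig(iv):
    """x^+ of an interval, see (C.2): zero if the interval contains 0."""
    lo, hi = iv
    return min(abs(lo), abs(hi)) if not (lo <= 0.0 <= hi) else 0.0

def diag_dominant(rows):
    """Check (C.3) for a square selection of interval rows."""
    return all(
        mig(row[i]) > sum(mag(row[j]) for j in range(len(row)) if j != i)
        for i, row in enumerate(rows)
    )

def full_rank_by_dominance(A, n):
    """Full rank is proven if any ordered n-row selection is dominant."""
    return any(diag_dominant(list(sel)) for sel in permutations(A, n))

# 3x2 overdetermined example: the first and third rows form a dominant
# subsquare, so the sufficient condition certifies full rank.
A = [[(4, 5), (0, 1)],
     [(1, 2), (1, 2)],
     [(-1, 1), (3, 4)]]
print(full_rank_by_dominance(A, 2))
```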

A second sufficient condition for full rank of an interval matrix $A \in \mathbb{IR}^{(m \times n)}$ is based on the spectral radius. Thereby the spectral radius ρ(·) is defined to be the largest absolute eigenvalue of the matrix [Lax02, p. 195]. If the spectral radius fulfills

$$\rho\left(\left|\left(A\_c\right)^\dagger\right|A\_\Delta\right) < 1\tag{C.4}$$

and the center matrix $A\_c$ has full rank, the interval matrix also has full rank. Thereby $A^\dagger = \left(A^T A\right)^{-1} A^T$ denotes the pseudo inverse of the matrix $A \in \mathbb{R}^{(m \times n)}$ with m ≥ n.

A third sufficient condition is based on the singular values of the matrix $A \in \mathbb{IR}^{(m \times n)}$. If the condition

$$
\sigma\_{\text{max}}\left(A\_{\Delta}\right) < \sigma\_{\text{min}}\left(A\_{c}\right) \tag{C.5}
$$

is fulfilled, the interval matrix has full rank. Thereby $\sigma\_{\max}(A\_\Delta)$ and $\sigma\_{\min}(A\_c)$ denote the greatest, respectively smallest, singular value of the matrix. The singular values are defined as the nonnegative solutions σ to the system

$$
\begin{pmatrix} 0 & A^T \\ A & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \sigma \begin{pmatrix} x \\ y \end{pmatrix} . \tag{C.6}
$$
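A minimal numerical sketch of the singular-value test (C.5); the center matrix and radius matrix below are illustrative:

```python
import numpy as np

# Illustrative check of the sufficient condition (C.5): the interval
# matrix [A_c - A_delta, A_c + A_delta] has full rank if the largest
# singular value of the radius stays below the smallest of the center.
A_c = np.array([[4.0, 0.0],
                [0.0, 3.0],
                [1.0, 1.0]])
A_delta = np.array([[0.1, 0.1],
                    [0.1, 0.1],
                    [0.1, 0.1]])

sigma_max_delta = np.linalg.svd(A_delta, compute_uv=False).max()
sigma_min_center = np.linalg.svd(A_c, compute_uv=False).min()
print(sigma_max_delta < sigma_min_center)
```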

Condition four uses an absolute subordinate matrix norm || · || of $A \in \mathbb{IR}^{(m \times n)}$. Assuming full rank of the center matrix $A\_c$, the sufficient condition is given by

$$||A\_{\Delta}|| < ||A\_{c}^{+}||^{-1}.\tag{C.7}$$

If (C.7) holds, A has full rank. The proofs of all four sufficient conditions are given in [Sha14].

According to [Roh12] there is a fifth, necessary and sufficient condition: An interval matrix $A \in \mathbb{IR}^{(m \times n)}$ with m ≥ n has full rank iff

$$\left|A\_c x\right| \le A\_\Delta \left|x\right|\tag{C.8}$$

with $x \in \mathbb{R}^n$ can only be solved by the zero solution $x = [0, 0, \dots, 0]^T$.

Sufficiency is based on the idea that there is a non-trivial solution $x \ne [0, 0, \dots, 0]^T$ as soon as the matrix does not have full rank. Necessity follows from the existence of the non-trivial solution: if such a solution exists, either A cannot have full rank or the non-trivial solution does not solve (C.8). An extensive proof of this condition is given in [Sha14]. However, the approach directly aims at an NP-hard problem, which means that it can only be checked approximately.

# D Existence and Uniqueness of the Algebraic Solution Set

The sufficient conditions on the existence of an algebraic solution given in [Sha96], [Mar99] and [Lak99] are sketched in this appendix. To follow those ideas, two more interval arithmetic notations are necessary.

Using the dual(·) operator given in (3.37), the proper projection pro(x) is defined as

$$\text{pro}\,(x) = \begin{cases} x & \text{, if } x \text{ is proper} \\ \text{dual}\,(x) & \text{, else.} \end{cases} \tag{D.1}$$

The second property is i-nonsingularity. A square real point matrix $Q \in \mathbb{R}^{(n \times n)}$ is called i-nonsingular if

$$Qx = 0 \Leftrightarrow x = 0 \in \mathbb{I}\mathbb{R}^n\tag{D.2}$$

holds. Otherwise Q is called i-singular.

According to [Sha96] there is an algebraic solution $\Sigma$ to the interval linear equation $A \cdot x = b$ with $A \in \mathbb{IR}^{(n \times n)}$ for any $b \in \mathbb{IR}^n$, if A is sufficiently narrow and pro(A) contains an i-nonsingular point matrix.

Thereby "sufficiently narrow" means that the radius $|A\_\Delta|$ is sufficiently small.

The proof of existence given in [Mar99] is based on the iterative approach to determine the algebraic solution set given in [Kup95]. For this proof, the notation of the diagonal matrix D(A) is introduced for an interval matrix $A \in \mathbb{IR}^{(n \times n)}$:

$$D(\mathbf{A}) = \left(\mathbf{d}^{(i,j)}\right)\_{1 \le i \le n, 1 \le j \le n} = \begin{cases} \mathbf{a}^{(i,j)} & \text{, if } i = j \\ 0 & \text{, if } i \ne j. \end{cases} \tag{\text{D.3}}$$

The inverse of the diagonal matrix is given by

$$D^{-1}(A) = \left(\mathring{d}^{(i,j)}\right)\_{1 \le i \le n, 1 \le j \le n} = \begin{cases} \frac{1}{\text{dual}\left(a^{(i,j)}\right)} & \text{, if } i = j\\ 0 & \text{, if } i \ne j. \end{cases} \tag{D.4}$$

Thereby the dual(·) operator from (3.37) is used. The iterative solution algorithm given in [Kup95] converges to the algebraic solution $\Sigma$ if

$$||D^{-1}(\mathcal{A})|| \le 1\tag{D.5}$$

$$||\mathcal{A} + \text{opp}\left(D(\mathcal{A})\right)|| \le 1\tag{D.6}$$

holds, with opp (·) according to (3.34). The matrix norm || · || used is the maximum of the row-wise sums of the absolute interval values (C.1):

$$||\mathcal{A}|| = \max\_{1 \le i \le n} \left( \sum\_{k=1}^{n} \left| a^{(i,k)} \right| \right). \tag{D.7}$$

The interested reader is referred to [Mar99] for further considerations.

A generalized approach for overdetermined systems $A \in \mathbb{IR}^{(m \times n)}$ was introduced in [Lak99]. The regressor matrix is split into three parts with $A = A\_0 + A\_1 + A\_2$ and

$$A\_0 = \left(\mathbf{a}\_0^{(i,j)}\right)\_{1 \le i \le m, 1 \le j \le n} = \begin{cases} \mathbf{a}^{(i,j)} & \text{, if } \underline{a}^{(i,j)} \overline{a}^{(i,j)} \ge 0 \\ 0 & \text{, else} \end{cases} \tag{\text{D.8}}$$

$$\mathbf{A}\_1 = \left( \mathbf{a}\_1^{(i,j)} \right)\_{1 \le i \le m, 1 \le j \le n} = \begin{cases} \mathbf{a}^{(i,j)} & \text{, if } \underline{\mathbf{a}}^{(i,j)} < 0 < \overline{\mathbf{a}}^{(i,j)} \\ 0 & \text{, else} \end{cases} \tag{\text{D.9}}$$

$$\mathbf{A}\_2 = \left( \mathbf{a}\_2^{(i,j)} \right)\_{1 \le i \le m, 1 \le j \le n} = \begin{cases} \mathbf{a}^{(i,j)} & \text{, if } \underline{\mathbf{a}}^{(i,j)} > 0 > \overline{a}^{(i,j)} \\ 0 & \text{, else.} \end{cases} \tag{\text{D.10}}$$

The problem can be reformulated as an extended system that considers the upper and lower bounds of the interval values explicitly, as given in [Lak99]. This problem can then be transferred to a set of inequality conditions. It is possible to show that there is not more than one solution for any right-hand side b if the derived set of inequality conditions has zero as unique solution. The extensive proof is given in [Lak99].
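The elementwise splitting (D.8)-(D.10) can be sketched as follows, encoding Kaucher intervals as (lo, hi) pairs where lo > hi marks an improper interval; the example matrix is illustrative:

```python
# Sketch of the regressor splitting (D.8)-(D.10) with Kaucher intervals
# encoded as (lo, hi) pairs; lo > hi marks an improper interval.

def split_regressor(A):
    """Split A = A0 + A1 + A2 elementwise; zero entries are (0.0, 0.0)."""
    zero = (0.0, 0.0)
    A0, A1, A2 = [], [], []
    for row in A:
        r0, r1, r2 = [], [], []
        for lo, hi in row:
            if lo * hi >= 0.0:          # sign-stable entry (D.8)
                r0.append((lo, hi)); r1.append(zero); r2.append(zero)
            elif lo < 0.0 < hi:         # proper interval containing 0 (D.9)
                r1.append((lo, hi)); r0.append(zero); r2.append(zero)
            else:                       # improper interval, lo > 0 > hi (D.10)
                r2.append((lo, hi)); r0.append(zero); r1.append(zero)
        A0.append(r0); A1.append(r1); A2.append(r2)
    return A0, A1, A2

A = [[(1.0, 2.0), (-1.0, 1.0)],
     [(2.0, -1.0), (-3.0, -2.0)]]
A0, A1, A2 = split_regressor(A)
```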

## E System Behavior Specication

The verification methods developed in this thesis are based on the assumption of a system specification available in ARX form. In a practical setting, it is necessary to determine these nominal parameters. Control engineering specifications are in general based on tolerance bands, steady-state errors, rise and settling times or acceptable overshoots. Such specifications inherently show interval properties, even though in general no interval arithmetic is used.

This appendix provides two approaches to determine the ARX parameters from such intuitive graphical specifications. In the time domain, a method is introduced to determine the parameters from a desired step response. This step response can be set up using the drag-and-drop function provided by a toolbox. A second method determines the parameters from the frequency domain. Therefore only the tolerance band widths and pass/cut-off frequencies of a filter function need to be specified. In case the Kaucher based method is applied in a diagnosis setting, it is beneficial to use a nominal physical model of the regarded process. This physical model can also be used to determine the desired ARX parameters.

## E.1 Time Domain Specication

This specification approach allows an intuitive specification of the desired behavior based on time domain input-output behavior. The first step is to specify an input signal and the desired resulting output signal. Also the class of the desired system behavior has to be given. The time domain input-output behavior is then used to determine the parameters of a transfer function in the complex s-plane. In this appendix a step input is used to determine the properties of a proportional gain first order time delay system (PT1). The method given in [Föl13, p. 77] can be used to determine the system parameters based on a given step response. A time continuous step response denotes the output values y(t) generated by a step input

$$u(t) = \sigma(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0. \end{cases} \tag{E.1}$$

The complex frequency domain transfer function of a basic PT1 system is given by

$$G(s) = \frac{k\_p}{1 + \tilde{T}s} \tag{E.2}$$

with gain $k\_p$ and time delay $\tilde{T}$. If both parameters are determined, G(s) can be transformed to its discrete time representation. The desired nominal parameters Θ\* are then given by the parameters of the discrete time transfer function.

For given input-output data [u(t), y(t)] with $t \in [0, t\_e]$ it is possible to determine the transfer function parameters. The gain is given by the stationary value $k\_p = y\_\infty$, which is defined to be the last point of the measurement $y(t\_e)$, assuming $t\_e$ is large enough to allow y(t) to settle. If the output signal is sampled with sampling time ∆t, the information of the resulting points can be used to calculate the time delay $\tilde{T}$. Therefore each available sampling point $y\_k = y(k \Delta t)$ is used to calculate an auxiliary value

$$
\eta\_k = 1 - \frac{y\_k}{y\_{\infty}} \tag{E.3}
$$

which is then used to determine the time delay

$$
\tilde{T}\_k = -\frac{k\Delta t}{\ln\left(\eta\_k\right)}.\tag{\text{E.4}}
$$

The time delay of the transfer function $\tilde{T}$ is given by the arithmetic mean of all $n\_k = t\_e / \Delta t$ values of $\tilde{T}\_k$, i.e.

$$
\tilde{T} = \frac{1}{n\_k} \sum\_{k=1}^{n\_k} \tilde{T}\_k. \tag{E.5}
$$

Afterwards the transformation to discrete time is done. The resulting parameters Θ\* are used to determine the ARX step response.
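The identification steps (E.3)-(E.5) can be sketched in a few lines; the PT1 step-response data below is synthetic and all variable names are illustrative:

```python
import math

# Sketch of (E.3)-(E.5): PT1 parameters from a sampled unit-step response.
k_p_true, T_true, dt = 10.0, 2.0, 1.0      # synthetic ground truth
n_k = 20
y = [k_p_true * (1.0 - math.exp(-k * dt / T_true)) for k in range(1, n_k + 1)]

k_p = y[-1]                                  # gain from the stationary value
T_vals = []
for k, y_k in enumerate(y[:-1], start=1):    # last point excluded (eta -> 0)
    eta = 1.0 - y_k / k_p                    # auxiliary value (E.3)
    T_vals.append(-k * dt / math.log(eta))   # time delay estimate (E.4)
T_tilde = sum(T_vals) / len(T_vals)          # arithmetic mean (E.5)
print(round(k_p, 2), round(T_tilde, 2))
```

The estimates recover the synthetic gain and time delay up to the small bias introduced by taking the last sample as the stationary value.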

The introduced functionality is implemented in a toolbox. The application of this toolbox is demonstrated in Example E.1.

#### Example E.1:

Based on an initial arbitrary input-output signal, the toolbox provides the possibility to move the sampling points via drag-and-drop. Fig. E.1 shows an exemplary output signal that was created as the step response of a PT1 system with gain $k\_p = 10$ and $\tilde{T} = 2$. The resulting continuous trajectory was sampled with ∆t = 1 s. The blue points depict the sampling points that can be moved using drag-and-drop. The user can move the sampling points such that the resulting trajectory shows the desired behavior. In this case, the toolbox calculates the system parameters according to (E.3)-(E.5). The resulting system is able to generate the desired values for the given step input. The depicted setting leads to the time discrete ARX parameters $a\_1 = 0.64227$ and $c\_1 = 3.6077$. The respective time discrete step response is given by the red trajectory in Fig. E.1.

Figure E.1: Time domain specification toolbox

It can be seen in Fig. E.1 that the time discrete trajectory (red) is close to the specified points (blue). This demonstrates that it is possible to determine the ARX parameters of a system by using a graphical user interface with drag-and-drop to set up the desired step response.

## E.2 Frequency Domain Specication

This specification approach allows an intuitive specification of the desired behavior based on designing the amplitude response of the system. First order systems can be interpreted as low-pass filters. Each filter has a specific frequency domain characteristic consisting of the location and the width of the passband and the stopband. Based on this information, the method of [Lüc80, p. 147] is used to determine the filter coefficients. The resulting filter can be transformed to discrete time, which leads to the desired ARX coefficients.

The method is applicable for low-pass, high-pass, bandpass and band-rejection filters. In this appendix the design of a low-pass is presented. Therefore the cut-off frequency $\Omega\_p$, the stopband frequency $\Omega\_s$ and the respective passband width $\Delta\_d$ and stopband width $\Delta\_s$ (see Fig. E.2) need to be defined by the user. These frequencies are defined with respect to the periodic interval of the frequency response 0 ≤ Ω ≤ π. The frequencies are now transformed into the 0 ≤ f ≤ ∞ domain by

$$f\_p = \tan\left(\Omega\_p / 2\right) \tag{E.6}$$

$$f\_s = \tan\left(\Omega\_s / 2\right). \tag{E.7}$$

These values lead to the normalized low-pass representation

$$f\_{p,norm} = 1\tag{E.8}$$

$$f\_{s,norm} = f\_s / f\_p. \tag{E.9}$$

The normalized low-pass representation can be achieved for all four kinds of filters by using different transformations. The following design routine is thus applicable in every setting. To ensure that the given passband and stopband limits are met, the auxiliary variables

$$\tilde{\Delta}\_d = \frac{\sqrt{2\Delta\_d - \Delta\_d^2}}{1 - \Delta\_d}, \text{ for } 0 \le f \le 1 \text{ (passband)}\tag{E.10}$$

$$\tilde{\Delta}\_s = \frac{\sqrt{1 - \Delta\_s^2}}{\Delta\_s}, \text{ for } f\_{s,norm} \le f \text{ (stopband)}\tag{E.11}$$

are calculated. The transfer function of an exponential filter is then given by

$$G(f) = G\_0 \frac{1}{\prod\_{i=1}^{q} \left(f - f\_{\infty, i}\right)} \tag{E.12}$$

with poles $f\_{\infty,i}$ and normalization constant $G\_0$. The denominator order $q \in \mathbb{N}$ can be calculated with

$$q \ge \frac{\log\_{10}\left(\tilde{\Delta}\_s / \tilde{\Delta}\_d\right)}{\log\_{10}\left(f\_{s,norm}\right)}.\tag{E.13}$$

The real part and the imaginary part of a complex pole $f\_{\infty,i}$ are given by

$$\Re\left(f\_{\infty,i}\right) = -\epsilon^{-1/q}\sin\left(\frac{2i-1}{q}\frac{\pi}{2}\right) \tag{E.14}$$

$$\Im\left(f\_{\infty,i}\right) = \epsilon^{-1/q}\cos\left(\frac{2i-1}{q}\frac{\pi}{2}\right). \tag{E.15}$$

The factor $\epsilon$ and the respective normalization $G\_0$ can be used to adjust the level of the frequency response. The used value of $\epsilon$ can be chosen from the interval

$$\boldsymbol{\epsilon} = [\underline{\boldsymbol{\epsilon}}, \,\, \overline{\boldsymbol{\epsilon}}] = \left[ \frac{\tilde{\Delta}\_s}{(f\_{s,norm})^q}, \,\, \tilde{\Delta}\_d \right]. \tag{\text{E.16}}$$

Thereby choosing $\epsilon = \overline{\epsilon}$ means that the frequency response touches the constrained region at the end of the passband $f\_{p,norm}$, whereas $\epsilon = \underline{\epsilon}$ means that it touches the constrained region at the beginning of the stopband $f\_{s,norm}$.

The normalized low-pass is now transformed back to its genuine frequency form and afterwards into the time discrete representation in order to extract the desired nominal parameters Θ\* of the transfer function. An application of this method is given in Example E.2.
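The order selection (E.13) and the ε interval (E.16) can be sketched numerically; the values below are those of Example E.2, and the assumption that Ω enters (E.6)/(E.7) directly in radians is ours:

```python
import math

# Numerical sketch of (E.6)-(E.16) for the values of Example E.2;
# the radian interpretation of Omega is an assumption.
delta_d, delta_s = 0.1, 0.2
omega_p, omega_s = 0.4, 0.7

f_p = math.tan(omega_p / 2)                  # (E.6)
f_s = math.tan(omega_s / 2)                  # (E.7)
f_s_norm = f_s / f_p                         # (E.9)

d_tilde = math.sqrt(2 * delta_d - delta_d**2) / (1 - delta_d)   # (E.10)
s_tilde = math.sqrt(1 - delta_s**2) / delta_s                   # (E.11)

# smallest integer denominator order satisfying (E.13)
q = math.ceil(math.log10(s_tilde / d_tilde) / math.log10(f_s_norm))

# admissible interval for epsilon according to (E.16)
eps_lo = s_tilde / f_s_norm**q
eps_hi = d_tilde
print(q, round(eps_lo, 4), round(eps_hi, 4))
```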

### Example E.2:

The frequency domain specification of a low-pass filter is given by $\Delta\_d = 0.1$, $\Delta\_s = 0.2$, $\Omega\_p = 0.4$ and $\Omega\_s = 0.7$. This means that the desired frequency response is located within the green area in Fig. E.2. The introduced filter design procedure of [Lüc80, p. 147] is applied to the setting.

Based on the specified values, choosing $\epsilon$ at one bound of the interval (E.16) leads to the ARX coefficients

$$[a\_1, a\_2, a\_3] = [0.1425, -0.3387, 0.0130] \tag{\text{E.17}}$$

$$\left[c\_1, c\_2, c\_3, c\_4\right] = \left[0.1479, 0.4437, 0.4437, 0.1479\right].\tag{\text{E.18}}$$

The respective frequency response is depicted as solid blue line in Fig. E.2. Using the same values but choosing the other bound of $\epsilon$ leads to

$$[a\_1, a\_2, a\_3] = [-0.2643, -0.3518, -0.0244] \tag{\text{E.19}}$$

$$\left[c\_1, c\_2, c\_3, c\_4\right] = \left[0.2051, 0.6152, 0.6152, 0.2051\right],\tag{\text{E.20}}$$

displayed as dashed line in Fig. E.2.

Figure E.2: Frequency domain specification toolbox

# F Excitation Signal Design

Hybrid system verification poses specific requirements on the excitation signal. It is assumed that these requirements are fulfilled throughout this thesis. However, determining a suitable excitation signal is in general not trivial. The excitation signal of a measurement is often chosen depending on the intended purpose of the experiment. Arbitrary noise signals (white Gaussian noise) can be used to ensure persistent excitation of all frequencies. Simple deterministic signals (impulse or step signals) are used to perform control theoretic modeling such as impulse or step response analysis. Also, there are specifically designed input signals fitted to the logic implemented in the current verification object.

In the context of hybrid systems as regarded in this thesis, there are two properties that need to be fulfilled. Each subsystem with its respective individual dynamic needs to be persistently excited. Furthermore, all states of the superimposed state machine need to be activated at least once. Therefore the respective switching thresholds have to be met to enable the switch event. The situation that a switch is triggered during the excitation and identification phase of a subsystem has to be avoided. A first possible solution idea was developed in the master thesis [Rie17]. The basic outline is sketched in this appendix. The method consists of the three steps described in the following sections.


## F.1 Path Calculation

The superimposed state machine is transformed to its graph representation G. A coverage algorithm is used to determine paths that include all states and all transitions of the graph. Additionally, the length of the path needs to be minimal to enable short measurement times. If such an optimal excitation signal is used, missing states or transitions can be used to prove inconsistency.

The problem of state and transition coverage can be reduced to transition coverage only. This is due to the structure of the specification, where each state needs to be connected to a transition.

To use a modified depth-first state coverage algorithm, the graph is transformed such that all transitions of G are represented by states in the transformed graph G′ and vice versa (see Fig. F.1).

Figure F.1: Genuine graph G (left) and transformed graph G′ (right)

A modified recursive depth-first algorithm is started in a specified initial state and checks the number of possible successors of each successor of the current state. Also, the distances to the successors are taken into account to enable short paths. The result of the algorithm is a path that covers all states of G′ and thus all transitions of G. This result is used to identify each subsystem in the path.
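The path-calculation step can be sketched as follows: transitions of G become nodes of the transformed graph G′, and a greedy depth-first walk covers all of them. The example state machine and the greedy tie-breaking rule are illustrative, not the exact algorithm of [Rie17]:

```python
# Illustrative sketch: line-graph transformation of G and a greedy
# depth-first walk that covers all transitions.

def line_graph(edges):
    """Map each transition to its successor transitions in G'."""
    return {e: [f for f in edges if e[1] == f[0]] for e in edges}

def cover_all_transitions(edges, start):
    """Greedy DFS-style walk through G' until every transition was seen."""
    gp = line_graph(edges)
    path, seen = [start], {start}
    current = start
    while len(seen) < len(edges):
        # prefer an unvisited successor; otherwise take any to move on
        nxt = next((f for f in gp[current] if f not in seen), None)
        if nxt is None:
            nxt = gp[current][0]
        path.append(nxt)
        seen.add(nxt)
        current = nxt
    return path

# Small state machine: states 1..3, transitions as (tail, head) pairs.
edges = [(1, 2), (2, 3), (3, 1), (2, 1)]
walk = cover_all_transitions(edges, (1, 2))
print(walk)
```

The returned walk visits every transition while consecutive transitions always share a state, so it can be replayed on the original state machine.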

## F.2 Persistent Excitation Based on Fisher Information Matrix

Persistent excitation of each subsystem is ensured by a specific input signal. This input signal is calculated based on the Fisher information matrix [Eba14][Man10], which is only applicable for stable systems. Using the Fisher information matrix M, the parameter covariance of an estimator is limited by the Cramér-Rao bound [Goo77] to

$$\text{cov}(\Theta) \ge M^{-1}.\tag{F.1}$$

Thereby M is defined using the expectation E{·} as

$$M(\Theta, U) = E\left\{ \left[ \frac{\partial \log \left( p\left( y \mid \Theta, U \right) \right)}{\partial \Theta} \right]^T \left[ \frac{\partial \log \left( p\left( y \mid \Theta, U \right) \right)}{\partial \Theta} \right] \right\}. \tag{F.2}$$

The probability p(y | Θ, U) describes the situation that y is observed if the true parameters are given by Θ while using the input $U = \langle u\_k \rangle\_{k=1}^{N}$. To achieve a parameter covariance as close as possible to the Cramér-Rao bound, the Fisher matrix has to be maximal with respect to the input signal used and the parameters. This can be achieved by using D-optimality for the given nominal parameters as defined in [Man10]:

$$U^\* = \arg\min\_U(-\log \det(M(\Theta^\*, U))).\tag{F.3}$$

To solve the optimization problem (F.3), an initial feasible input signal is chosen. This signal is then optimized iteratively for each time step k, until the optimization converges. Each input value $u\_k^\*$ is thereby bounded to the range of feasible input values given by the user.
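For a model that is linear in its parameters, the Fisher matrix (F.2) with Gaussian noise reduces to the scaled sum of regressor outer products, so the effect of the excitation on log det M can be sketched directly; the signals and parameter values below are made up:

```python
import numpy as np

# Illustrative Fisher information for a first-order ARX model with
# Gaussian noise: M is the scaled sum of regressor outer products.

def simulate(u, a=0.8, c=1.0):
    """Noise-free ARX response y_k = a*y_{k-1} + c*u_{k-1}."""
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + c * u[k - 1]
    return y

def fisher_logdet(u, y, sigma=1.0):
    """log det of M for the regressor rows (y_{k-1}, u_{k-1})."""
    phi = np.column_stack([y[:-1], u[:-1]])
    M = phi.T @ phi / sigma**2
    return np.linalg.slogdet(M)[1]

rng = np.random.default_rng(1)
u_rich = rng.standard_normal(200)        # persistently exciting input
u_const = np.ones(200)                   # poorly exciting input

rich = fisher_logdet(u_rich, simulate(u_rich))
poor = fisher_logdet(u_const, simulate(u_const))
print(rich > poor)
```

The richer input yields the larger log-determinant, which is exactly what the D-optimal design (F.3) maximizes.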

## F.3 Transfer to the Switch Threshold

After the identification of the subsystem, it is necessary to activate the successive switch. This is done by transferring the relevant system value within its activation limits $[\underline{y}^{(e)}, \overline{y}^{(e)}]$.

The specific event and thus the activated transition are already determined in the result of the path calculation. The transfer is done using the well known Hamilton formalism. Therefore the objective function is set up in terms of the difference between the desired value $y^{(e)} = \frac{1}{2}\left(\underline{y}^{(e)} + \overline{y}^{(e)}\right)$ and the current value $y\_k$, i.e. $\Delta y\_k = y\_k - y^{(e)}$. The resulting objective function is given by

$$J = \frac{1}{2} \Delta y\_T S \Delta y\_T + \frac{1}{2} \sum\_{k=1}^{T-1} \Delta y\_k Q \Delta y\_k \tag{F.4}$$

with the penalty matrices S = 1 and Q = 1. It is now possible to set up and solve the Hamilton equations [Sag68]. The first step is to transfer the ARX system description to a vector matrix notation:

$$
\begin{bmatrix} y\_{k+1} \\ y\_k \\ \vdots \\ y\_{k-n\_a+2} \end{bmatrix} = \underbrace{\begin{bmatrix} a\_1 & a\_2 & \dots & a\_{n\_a} \\ 1 & 0 & \dots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \dots & 1 & 0 \end{bmatrix}}\_A \underbrace{\begin{bmatrix} y\_k \\ y\_{k-1} \\ \vdots \\ y\_{k-n\_a+1} \end{bmatrix}}\_{\tilde{Y}\_k} + \underbrace{\begin{bmatrix} c\_1 & \dots & c\_{n\_c} \\ 0 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & 0 \end{bmatrix}}\_C \underbrace{\begin{bmatrix} u\_k \\ u\_{k-1} \\ \vdots \\ u\_{k-n\_c+1} \end{bmatrix}}\_{\tilde{U}\_k}.\tag{F.5}
$$

Then the Hamilton equation is set up:

$$H(y\_k, u\_k, k) = \frac{1}{2} \Delta y\_k Q \Delta y\_k + \lambda\_{k+1}^T (A\tilde{Y}\_k + C\tilde{U}\_k). \tag{F.6}$$

The derivative of H with respect to the input is given by

$$\frac{\partial H}{\partial u\_k} = \mathbf{C}^T \lambda\_{k+1} \tag{F.7}$$

with

$$
\lambda\_k = \Delta \mathbf{y}\_k + \mathbf{A}^T \,\lambda\_{k+1} \tag{\text{F.8}}
$$

leading to

$$
\Delta u\_k = -\alpha \frac{\partial H}{\partial u\_k},
\tag{F.9}
$$

$$u\_k^{(i+1)} = u\_k^{(i)} + \Delta u\_k. \tag{\text{F.10}}$$

The parameter α is used to scale the result in case the calculated solution violates the feasible input range.

This procedure is applied to all subsystems within the calculated path to construct the overall excitation signal. This signal is then applied to the VO and the resulting output values are measured. The resulting input and output measurement data can then be used in any of the methods introduced in this thesis.
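The transfer step (F.6)-(F.10) can be sketched as a projected gradient iteration; the system values, horizon, threshold midpoint and step size below are all illustrative:

```python
import numpy as np

# Projected-gradient sketch of (F.6)-(F.10) for a first-order ARX system
# y_{k+1} = a*y_k + c*u_k; all numbers are made up.
a, c = 0.8, 1.0
T, alpha = 30, 0.02          # horizon and gradient step size
y_e = 4.0                    # threshold midpoint y^(e)
u = np.zeros(T)              # initial feasible input signal

for _ in range(1000):
    y = np.zeros(T + 1)                  # forward simulation of the model
    for k in range(T):
        y[k + 1] = a * y[k] + c * u[k]
    lam = np.zeros(T + 1)                # costate recursion (F.8), S = Q = 1
    lam[T] = y[T] - y_e
    for k in range(T - 1, 0, -1):
        lam[k] = (y[k] - y_e) + a * lam[k + 1]
    u = u - alpha * c * lam[1:]          # update steps (F.7), (F.9), (F.10)
    u = np.clip(u, -2.0, 2.0)            # keep the feasible input range

# final forward pass: the output settles near the threshold midpoint
y = np.zeros(T + 1)
for k in range(T):
    y[k + 1] = a * y[k] + c * u[k]
print(round(float(y[T]), 3))
```

Clipping the input after every gradient step corresponds to the scaling role of α described above: the iterate always stays inside the feasible input range.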

## G Tables of Geometric Parameters

The parameters of the three-tank lab setting at the Institute of Control Systems (IRS) are given in Tab. G.1.


Table G.1: System properties of the IRS three-tank lab setting


The parameters of the four-tank simulation setting are given in Tab. G.2.

Table G.2: System properties of the simulated four-tank lab setting

## References

## Public References


## Own Publications and Conference Contributions


[Var19] Varga, B., Meier, S., Schwab, S. and Hohmann, S. Model Predictive Control and Trajectory Optimization of Large Vehicle-manipulators. IEEE International Conference on Mechatronics, 2019.

## Supervised Theses


## **Karlsruher Beiträge zur Regelungs- und Steuerungstechnik (ISSN 2511-6312) Institut für Regelungs- und Steuerungssysteme**



This book introduces a new specification and verification approach for dynamic systems. The introduced approach is able to provide type II error free results by definition, i.e. there are no hidden faults in the verification result. The approach is thus suitable to provide a reliable verification of safety critical systems.

A new notion of set based consistency for dynamic systems with a given specification is presented. Therefore Kaucher interval arithmetic is used to enclose the measurement data in a bounded error sense. The resulting method is able to verify the specified behavior of a dynamic system against its measurement data even in the presence of noise and sensor uncertainty. Consistency is defined using the Kaucher arithmetic united solution set which leads to mathematically guaranteed results.

It is proven mathematically that the desired property holds for a wide class of systems, including time invariant, interval type and hybrid systems, which can even be used to describe nonlinearities. Several extensions are introduced, leading to a new iterative identification and segmentation algorithm for hybrid systems which is able to handle even unknown switching times. In case the calculations can be done fast enough, the developed approach can also be used for the diagnosis of dynamic systems.

The presented methods are successfully applied to several example systems, including theoretic settings and a variety of different tank settings.

The new theories, methods and algorithms developed in this work form the foundation for reliable safety analysis of highly automated safety critical systems.
