Pontificia Universidad Católica del Perú
Escuela de Posgrado

Master Thesis
Observer Studies of Turbocharger System

Para optar el grado de: Magister en Ingeniería de Control y Automatización
Presentado por: Maria Cristina Tejada Zuñiga
Asesor (TU Ilmenau): Prof. Dr.-Ing. Johann Reger
Asesor (PUCP): Prof. Javier Sotomayor Moriano
29/03/2016

Statement of Authorship
I, Maria Cristina Tejada Zuñiga, hereby declare that the master thesis presented here is, to the best of my knowledge and belief, original and the result of my own investigations, unless otherwise acknowledged. Formulations and ideas taken from other sources are cited as such. This work has not been published or submitted, either in part or in whole, for a degree at this or any other university.

Acknowledgements
To my family, for being my essential pillar through their unconditional support, perfectly maintained over time. All that I am, in my education, both academic and in life, is thanks to them. I would like to thank my supervisors, who helped me despite their busy schedules, for their advice and timely tips. To CONCYTEC and the DAAD for the opportunity and the scholarship. I also thank my friends, the people who are always closest to me. This work has been possible thanks to them.

Abstract
The use of turbochargers in diesel engines is increasing, since they offer high efficiency and low fuel consumption. Designing a control system that reduces the level of exhaust emissions requires information about states that are not measurable. To this end, observers or virtual sensors are applied more and more frequently, providing estimates of the system states from the inputs and the measured output. The observer design is based on a precise mathematical model of the air path of the diesel engine. This is a third-order nonlinear model, which is analyzed in terms of observability. From the point of view of systems theory, certain conditions and the existence of a transformation of the system state, called a diffeomorphism, need to be evaluated. Observers have been designed based on different approaches: Extended Luenberger Observers, High Gain Observers, Sliding-Mode Observers and Extended Kalman-Bucy Filters. They have been validated by simulation for the system under consideration in this work.

Zusammenfassung
Die Verwendung von Dieselmotor-Turboladern ist heutzutage weit verbreitet, da sie eine Option darstellen, die einen hohen Wirkungsgrad und geringen Kraftstoffverbrauch bietet. Zur Gestaltung des Regelungssystems, um die Höhe der Abgasemissionen zu kontrollieren, gibt es einen Bedarf an allen Zuständen, die nicht messbar sind. Zu diesem Zweck werden Beobachter oder virtuelle Sensoren häufig angewendet, um Schätzungen mittels gemessener Aus- und Eingänge anzugeben. Der Entwurf für den Beobachter gründet auf einem genauen mathematischen Modell des Luftpfads im Dieselmotorsystem. Das verwendete Modell ist ein nichtlineares Modell dritter Ordnung. Es wird in Bezug auf die Beobachtbarkeit analysiert. Aus der Sicht der Systemtheorie müssen dazu bestimmte Bedingungen und die Existenz einer Transformation des Systemzustands, genannt Diffeomorphismus, ausgewertet werden. Die Beobachter wurden auf Grundlage unterschiedlicher Ansätze entwickelt: Erweiterte Luenberger-Beobachter, High-Gain-Beobachter, Sliding-Mode-Beobachter und Erweiterte Kalman-Bucy-Filter. Sie wurden durch Simulation am betrachteten System validiert.

Contents
1. Introduction
   1.1. Motivation
   1.2. Problem Description
   1.3. Observers Relevance
   1.4. Overview
2. Model of a Turbocharged Diesel Engine
   2.1. Model Description
   2.2. Singularities of the Model
   2.3. Block Diagram of the System
   2.4. Complete Model in Affine Form
3. Observability for Nonlinear Systems
   3.1. Definitions
      3.1.1. Observability for Autonomous Nonlinear Systems
      3.1.2. Observability for General Nonlinear Systems
      3.1.3. Normal Observable Form for Nonlinear Systems
   3.2. Symbolic Analysis of the System Observability
      3.2.1. Analysis without Inputs
      3.2.2. Analysis of the Input xegr
      3.2.3. Analysis of the Input xvgt
      3.2.4. Conclusion about the Observability of the System
4. Design of Observers for Nonlinear Systems
   4.1. Extended Luenberger Observer
      4.1.1. Derivation
      4.1.2. Calculation of the Observer Gain in Original Coordinates
      4.1.3. Realization and Results
   4.2. High Gain Observer
      4.2.1. Derivation
      4.2.2. Considerations
      4.2.3. Numerical Limits and Implementation
      4.2.4. Realization and Results
   4.3. Extended Kalman-Bucy Filter
      4.3.1. Kalman Filter for Linear System
      4.3.2. Extended Kalman Filter
      4.3.3. Convergence Analysis
      4.3.4. Realization and Results
   4.4. Sliding-Modes Observer
      4.4.1. Introduction
      4.4.2. Homogeneity
      4.4.3. Derivation
      4.4.4. Realization
      4.4.5. Error Dynamics
      4.4.6. Results
   4.5. Other Approaches
      4.5.1. Lyapunov Based Methods
      4.5.2. Extended Linearization
      4.5.3. Lie-algebraic Techniques
      4.5.4. Moving Horizon Estimator
5. Comparative Simulations
   5.1. Implementation
   5.2. Comparative Results
   5.3. Disturbances and Noise
      5.3.1. Output Noise
      5.3.2. Uncertainties in Parameters
6. Conclusions and Perspectives
   6.1. Conclusions
   6.2. Perspective
Bibliography
A. Appendix A
   A.1. Parameter
   A.2. Abbreviations
List of Figures
List of Tables

1. Introduction

1.1. Motivation

The automotive industry is facing many problems with the tougher legislation on emissions required by governments and the increasing concern about climate change [1]. To cope with these requirements, many different solutions are being researched: variable valves, variable compression, variable turbine geometry and exhaust gas recirculation. These solutions, introduced by making the base engine more flexible, also introduce interesting problems for control engineering [2].

In this work, the use of turbochargers in diesel engines is considered. Such engines are used more and more frequently nowadays, since they achieve high efficiency while reducing resource consumption and meeting environmental specifications. They use the remaining thermal energy of the exhaust to drive a turbine, which is mechanically coupled to a compressor, increasing the pressure of the outside air that enters the engine. With a higher amount of oxygen in the cylinder, more fuel can be combusted within one cycle. Hence, more chemical energy is released, increasing the power of the engine.

For the control of the emissions of diesel engines, the work described in [3] has suggested that a reduction of Oxides of Nitrogen (NOx) emissions can be achieved by increasing the intake manifold Exhaust Gas Re-circulation (EGR) fraction, and that Particulate Matter (PM), also known as smoke, can be reduced by increasing the Air/Fuel Ratio (AFR) [4]. The EGR and AFR rates are controlled by the Variable Geometry Turbine (VGT) and the EGR valve, whose actuator positions determine the amount of EGR flow into the intake manifold and thus control the AFR and EGR ratio variables.

To get information about system variables, the acquisition of measurement data by a suitable sensor system is the most direct way. Often, however, the measurements are faulty or cannot be performed satisfactorily. Temperature sensors, for instance, show a significant inertia in the measurement, which makes them unsuitable for fast dynamic processes. Some states are not measurable at all for physical reasons, e.g., the magnetic flux density used in electromagnetic systems. From a practical point of view, and for economic reasons, it sometimes makes no sense to use a measuring device. In such cases, the use of an observer is the best option.
These virtual sensors reconstruct vital system variables based on other measurements and on mathematical information about the system in the form of a model. It is then the task of the observer to process these data and turn the information about the unknown states into a form usable in further tasks. The purpose of an observer is the estimation of the values of the states of a system. Given a correct model, the observed states can be used for diagnosis or control. Observers can also be used in the case of nonlinear systems; however, this is not as easy as in the linear case.

1.2. Problem Description

Over the last years, diesel engines have gained interest because they provide higher fuel efficiency than gasoline engines. However, diesel engines have the disadvantage of complicated exhaust gas treatment. The continuous tightening of the requirements in international standards on exhaust gases makes it necessary to improve the system permanently. In the past decades, considerable research has been dedicated to the control of modern diesel engines. Many controllers were proposed in the literature, e.g., Lyapunov control design [5], a controller based on a Linear Parameter-Varying (LPV) model of the turbocharged diesel engine [6], indirect passivation [7], predictive control [8], feedback linearization [9] and sliding mode control [10]. Other approaches for air-to-fuel ratio control of turbocharged diesel engines were also proposed in the literature [11, 12, 13]. The different solutions are mostly nonlinear and model-based, and they have been found to be a suitable way to control the air-to-fuel ratio effectively.

In the controller design it is usually assumed that the state variables are measurable or computable. However, this is not the case for our system, as was explained in [14]. The air flow in the exhaust manifold oscillates with a frequency proportional to the engine speed. As a result of this oscillation, the pressure in the exhaust manifold also oscillates. For this reason, measuring the exhaust pressure is difficult and expensive. The intake manifold pressure, or boost pressure, is measurable. The compressor power can be computed, but the sensors needed to compute it are quite expensive and, therefore, it is assumed to be unmeasurable. To relax this assumption, the use of observers to estimate the unmeasurable states px and Pc is therefore proposed in this work as a solution to the problem. From a practical point of view, neither px nor Pc is measurable in series-production engines [3]. Another solution that has been researched consists of using the MAP and MAF sensors together with open-loop maps to estimate px and Pc; for more information see [14, 15]. These estimation methods have the advantage that they are based only on the manipulation of algebraic equations and do not need model-based observers.

The aim of this master thesis is to explore some techniques to observe the states of this system. Here, the study of the observability of the turbocharged diesel engine is presented and validated through four different approaches with different levels of complexity:
• Extended Luenberger Observer
• High Gain Observer
• Extended Kalman-Bucy Filter
• Sliding-Mode Observer
To perform observability investigations and to pursue, where appropriate, the implementation of an observer, a mathematical system model must first be established, which then serves as the subject of research.
The modeling of the air path of a diesel engine, which is composed of a turbocharger with variable turbine geometry, exhaust gas recirculation, an intercooler and a throttle, is not an easy task and consists largely of identification from measurement data and map creation, as [16] has presented in detail. Moreover, complicated functional relationships arise, for example through the flow functions. Through the many couplings described by the measurements, a comprehensive information flow exists in the system, which can be used for observation. However, this also creates a very complex mathematical model that needs to be mastered. Fortunately, simplifications and approximations have been made.

Figure 1.1 shows an example of the considered turbocharged diesel engine including VGT and EGR. In diesel engines that are equipped with a variable geometry turbocharger (VGT) and exhaust gas recirculation (EGR), as here, both elements introduce feedback loops from the exhaust to the intake manifold.

Figure 1.1.: Illustration of the Scania six cylinder engine with EGR and VGT [2]

One possibility to reduce emissions in diesel engines, in particular nitrogen oxides (NOx), is to recirculate exhaust gas through the exhaust gas recirculation (EGR) valve into the intake manifold [5]. The recirculated exhaust gas decreases the temperature of combustion in the cylinders, which results in an NOx reduction. However, the amount of exhaust gas that can be recirculated depends on the operating conditions, because it reduces the amount of fresh air in the intake manifold that is needed to provide the required engine torque as well as to avoid smoke generation.

The function of the turbocharger is to increase the power density of the engine by forcing additional fresh air into the cylinders, which allows the injection of more fuel without reaching the smoke bound. The turbine has a variable geometry (VGT) that allows the adaptation of the turbine efficiency to the operating point of the engine, and it is driven by the energy in the exhaust gas. The second feedback path from the exhaust manifold to the intake manifold is due to the exhaust gas recirculation, which is controlled by the EGR valve. The oxygen in the intake is replaced by recirculated exhaust gases, thereby reducing the temperature profile of the combustion and hence the emissions of nitrogen oxides. The interactions are relatively complex; a detailed description can be found in [17] and the references therein. The VGT actuator is usually used to control the intake manifold absolute pressure (MAP), while the EGR valve controls the mass air flow (MAF) into the cylinders. Both the EGR and VGT paths are driven by the exhaust gases. Both the EGR and VGT actuators are equipped with position sensors; they have been identified as first-order systems and included in the overall model.

1.3. Observers Relevance

This subsection presents a brief description of observers, their benefits and a review of the main applications. In many engineering applications it is desirable to have estimates of states that cannot be measured because the sensors are too expensive or even physically impossible to install. An interesting alternative is to use an observer in order to produce an estimate of the states, taking advantage of the knowledge of the system, in the form of a mathematical model, and the information obtained from output measurements.
There are many reasons why a measurement or estimation is desired, but in the field of control engineering at least three can be mentioned: control, fault detection and supervision. All these purposes are shown in Figure 1.2.

Supervision: In some processes the user or operator needs to know the values of certain process variables in order to take an appropriate action, such as stopping the process if some variable reaches a given value. The aim is that the estimated state is as close as possible to the actual value, even when disturbances act on the process, so that a proper decision can be made.

Fault Detection: In any system, failures are inevitable. Leaks, breaks, sensor errors, etc., can have terrible consequences, from lost efficiency to accidents. In order to detect these failures before they have unintended consequences, observers are used. Fault detection by analytic redundancy often uses an observer to determine the status of the system and possible failures. The idea is to generate a fault alarm signal, which takes the value zero if there is no fault and another value otherwise.

Figure 1.2.: The observer as the center of the control system

Control: Many control strategies need the complete state vector to produce the control signal. In general, not all states are measurable and an observer-based estimate is required. The objective here is to control the behavior of the system from the estimated values. In this case it is important to verify the stability of the closed-loop system, including both the observer and the controller. For linear systems, the separation principle ensures that one can use the vector of estimated states instead of the vector of process states in the controller without affecting the closed-loop stability, but this does not hold in general for nonlinear systems.

1.4. Overview

In Chapter 2, the air path of a diesel engine with a turbocharger and its corresponding mathematical model, which will be used throughout the thesis, are described. This chapter is completed with the definition of the singularities of the system and the area of valid operation. Finally, the complete model is presented in affine form.

Chapter 3 presents the concepts necessary for the analysis of observability for nonlinear systems. Additionally, it presents systems in the observable normal form (ONF) and the conditions for finding a diffeomorphism capable of converting the original nonlinear system into this form. The observability analysis is developed for the proposed model, first analyzing the model autonomously and then each input separately. Finally, a conclusion about the observability of the system is presented.

Chapter 4 presents four approaches to the development of observers for the system under consideration; for each one the corresponding mathematical derivation and the analysis of the error dynamics are given. Finally, their performance in estimating the states of the nominal system is simulated with different parameters.

Chapter 5 presents comparative simulations of the four approaches, including tests with measurement noise, disturbances and model uncertainties.

Finally, in Chapter 6 the conclusions are presented, together with proposals for future work and developments. In addition, the thesis has an appendix describing the nomenclature, the values of the model parameters and the abbreviations used throughout the document.
2. Model of a Turbocharged Diesel Engine

2.1. Model Description

This thesis uses the model proposed in [5]. This is a simplified third-order nonlinear model derived from an eighth-order nonlinear mean-value model of the air path of a turbocharged diesel engine with EGR and VGT. The full-order air path model contains as states: the intake and exhaust manifold pressures (pi and px), the oxygen mass fractions in the intake and exhaust manifolds, the turbocharger speed and two states describing the actuator dynamics for the two control signals. In order to obtain a simple control law, the model was reduced to third order under the following assumptions:
• The oxygen mass fraction variables are ignored because they are not coupled with the other engine dynamics and, in addition, are difficult to measure.
• The temperature variables are omitted because the temperature dynamics are slow compared to the pressure and flow dynamics.
The detailed derivation of the eighth-order nonlinear mean-value model of the engine under investigation, from which the model presented here was derived, can be found in [18].

The manifold dynamics are described by differentiation of the ideal gas law pV = nRT, resulting in one differential equation for each of the intake and exhaust manifold pressures. Mass flows are denoted W_{ij}, where i is the origin and j the destination of the flow. Temperatures are denoted T_k and pressures p_k; a list of the indices used is presented in Table 2.1. For a complete list of constants and quantities see A.1.

Table 2.1.: Indices used in the model
  Index   Description
  a       ambient conditions
  c       compressor
  i       intake manifold
  x       exhaust manifold
  e       engine (cylinders)
  r       recirculation
  t       turbine

Figure 2.1.: Schematic diagram of a turbocharged diesel engine with EGR [19]

The turbocharger dynamics are approximated by the power transfer with the time constant τ. The complete dynamics are represented by:

\dot{p}_i = \frac{R T_i}{V_i}\,(W_{ci} + W_{xi} - W_{ie}) + \frac{\dot{T}_i}{T_i}\,p_i
\dot{p}_x = \frac{R T_x}{V_x}\,(W_{ie} + W_f - W_{xi} - W_{xt}) + \frac{\dot{T}_x}{T_x}\,p_x    (2.1)
\dot{P}_c = \frac{1}{\tau}\,(-P_c + P_t)

where the states pi, px and Pc represent the intake manifold pressure, the exhaust manifold pressure and the power transferred by the turbocharger, respectively. A further approximation of the diesel engine model (2.1) considers the intake and exhaust temperatures constant, such that the effect of Ṫi and Ṫx on pi and px, respectively, is neglected. This approximation is shown in [20], where it is justified by comparing linearized models for different temperatures and their effect on the frequency response, which turns out to be negligible. Additional details on turbocharger modeling and parametrization can be found in [18].
Wci describes the relation between the flow through the compressor and the compressor power and can be derived under the assumption of isentropic compression. Due to the various irreversibilities across the compressor, for example friction losses, the compression process is in fact not isentropic; therefore, the compressor isentropic efficiency ηc is introduced. The relation is modeled as:

W_{ci} = \frac{\eta_c}{c_p T_a}\,\frac{P_c}{\left(\frac{p_i}{p_a}\right)^{\mu} - 1}    (2.2)

Wxi describes the flow through the EGR valve. It is modeled by the standard orifice flow equation [21], where the effective area of the valve is identified as a function of the EGR position. The relation is given by:

W_{xi} = \frac{A_{egr}(x_{egr})\,p_x}{\sqrt{R T_x}}\,\sqrt{\frac{2 p_i}{p_x}\left(1 - \frac{p_i}{p_x}\right)}    (2.3)

where A_{egr}(x_{egr}) is the effective area of the EGR valve, Tx is the exhaust manifold temperature and R is the gas constant. Moreover, the flow Wie from the intake manifold into the cylinders is modeled by the speed-density equation; the approximation was developed in [22]:

W_{ie} = \eta_v\,\frac{p_i N V_d}{120\,R\,T_i}    (2.4)

with the volumetric efficiency ηv, the engine speed N, the intake manifold temperature Ti and the displacement volume Vd. The turbine flow Wxt, parametrized in [20], is given by:

W_{xt} = (a\,x_{vgt} + b)\left(c\left(\frac{p_x}{p_a} - 1\right) + d\right)\frac{p_x}{p_{ref}}\sqrt{\frac{T_{ref}}{T_x}}\sqrt{\frac{2 p_a}{p_x}\left(1 - \frac{p_a}{p_x}\right)}    (2.5)

where the effective area of the turbine is expressed as a linear function of the VGT position with the parameters a, b, c, d; p_ref is the reference pressure and T_ref the reference temperature. The next equation represents the turbine power Pt, where the turbine efficiency ηt is necessary for the same reason as the compressor efficiency above; Pt is given by:

P_t = W_{xt}\,c_p\,T_x\,\eta_t\left(1 - \left(\frac{p_a}{p_x}\right)^{\mu}\right)    (2.6)

Furthermore, the engine speed N and the fueling rate Wf are considered as known external parameters. As shown in Figure 2.1, the exhaust gas recirculation and the turbocharger introduce feedback paths in the turbocharged diesel engine. Since both are driven by the exhaust gas, the turbocharged diesel engine with EGR is a coupled nonlinear system.

The control inputs of the diesel engine model (2.1) are the EGR position xegr and the VGT position xvgt; both range between 0 and 1. However, in order to simplify the considerations, the effective areas of the valves, Avgt = a·xvgt + b and Aegr, are used as control inputs instead of the VGT position xvgt and the EGR valve position xegr. Since the effective areas Aegr and Avgt are monotonically increasing functions of their variables [22], the positions xegr and xvgt can be uniquely determined from the effective areas. Furthermore, the control inputs Aegr and Avgt are constrained due to the minimal and maximal EGR and VGT positions. In particular, the input constraints of the nonlinear diesel engine model are given by:

A_{egr} \in [0,\ 1.8\cdot 10^{-4}]\ \mathrm{m}^2, \qquad A_{vgt} \in [0.04,\ 0.176]

This simplified version of the system, in contrast to the full nonlinear model, contains several constant parameters: the compressor and turbine efficiencies ηc and ηt, the volumetric efficiency ηv, the intake and exhaust manifold temperatures Ti and Tx, and the time constant τ of the turbocharger power transfer. Obviously, these parameters vary with the operating conditions of the engine, and keeping them constant is only an approximation. However, even with all these assumptions the model captures the dynamics of the system, at least in the low-speed and medium-load region investigated here.
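The flow and power relations (2.2)–(2.6) translate directly into code. The following sketch implements them in Python with the parameter values quoted in the model description and Appendix A.1; the function and variable names (e.g. W_ci, p_ref) are chosen here for illustration and are not part of the thesis nomenclature.

```python
import numpy as np

# Model constants as quoted in the model description (see Appendix A.1 / [19]).
R    = 287.0            # gas constant [J/(kg K)]
c_p  = 1014.4           # specific heat at constant pressure [J/(kg K)]
mu   = 0.286            # (c_p - c_v) / c_p
T_a, T_i, T_x = 298.0, 313.0, 509.0     # ambient, intake, exhaust temperatures [K]
p_a, p_ref    = 101.3e3, 101.3e3        # ambient and reference pressure [Pa]
T_ref         = 298.0                   # reference temperature [K]
eta_c, eta_t, eta_v = 0.61, 0.76, 0.87  # compressor, turbine, volumetric efficiency
V_d  = 0.002            # displacement volume [m^3]
a, b, c, d = -0.136, 0.176, 0.4, 0.6    # turbine flow parametrization (2.5)

# The relations below are only valid in the operating set p_i > p_a, p_x > p_a.
def W_ci(P_c, p_i):
    """Compressor flow (2.2)."""
    return eta_c / (c_p * T_a) * P_c / ((p_i / p_a)**mu - 1.0)

def W_xi(p_i, p_x, A_egr):
    """EGR flow through the valve, standard orifice equation (2.3)."""
    return A_egr * p_x / np.sqrt(R * T_x) * np.sqrt(2.0 * p_i / p_x * (1.0 - p_i / p_x))

def W_ie(p_i, N):
    """Flow from intake manifold into the cylinders, speed-density equation (2.4)."""
    return eta_v * p_i * N * V_d / (120.0 * R * T_i)

def W_xt(p_x, x_vgt):
    """Turbine flow (2.5)."""
    return ((a * x_vgt + b) * (c * (p_x / p_a - 1.0) + d) * p_x / p_ref
            * np.sqrt(T_ref / T_x) * np.sqrt(2.0 * p_a / p_x * (1.0 - p_a / p_x)))

def P_t(p_x, x_vgt):
    """Turbine power (2.6)."""
    return W_xt(p_x, x_vgt) * c_p * T_x * eta_t * (1.0 - (p_a / p_x)**mu)
```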
2.2. Singularities of the Model

It should be noted that the model has a singularity when the intake pressure equals the ambient pressure, since the denominator of the compressor flow Wci in equation (2.2) contains the term

\left(\frac{p_i}{p_a}\right)^{\mu} - 1    (2.7)

Thus, if pi = pa the compressor flow becomes infinite. There is also another singularity when the turbine stalls. This simplified model cannot handle these situations and, as explained in [14], the system is therefore only valid on the set Ω = {(pi, px, Pc) : pi > pa, px > pa, Pc > 0}. Fortunately, it was demonstrated in [13] that this set is invariant, i.e., every trajectory starting in Ω stays in Ω.

Figure 2.2.: Valid operating area for the model

Figure 2.2 shows in color the regions where singularities exist; the set Ω is represented by the uncolored area.

2.3. Block Diagram of the System

For a better understanding of the relations among the equations governing this model, Figure 2.3 presents a block diagram of the complete system.

Figure 2.3.: Block diagram of a turbocharged diesel engine

In order to simulate the dynamics of the process, different values of the inputs were used, see Table 2.2, where all times are in seconds and the inputs are dimensionless positions between 0 and 1. For this simulation scenario the engine speed considered is N = 2000 RPM and the fuel rate is Wf = 4 kg/h. The initial conditions of the states are pi(0) = 1.086·10⁵ Pa, px(0) = 1.105·10⁵ Pa, Pc(0) = 350.2 W. The simulation values and the parameters of the system are all taken from [19].

Table 2.2.: Values of the inputs for the simulation
           t ≤ 2    2 < t ≤ 4    4 < t ≤ 6    6 < t ≤ 8
  xegr     0.0      0.1          0.7          0.5
  xvgt     0.0      0.7          0.1          0.5

From the simulation with the described values, the model shows the dynamics presented in Figure 2.4 for each of its states.

Figure 2.4.: Dynamic behavior of a turbocharged diesel engine (intake manifold pressure pi [Pa], exhaust manifold pressure px [Pa] and compressor power Pc [W] over time)
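A minimal simulation sketch of this scenario is given below. It reuses the flow and power functions defined in the previous sketch and integrates the third-order model (2.1) with constant temperatures. Two details are assumptions made here for illustration: the fuel rate of 4 kg/h is converted to kg/s, and the effective EGR area is taken to be proportional to the EGR position (the thesis only states that it is a monotonically increasing function).

```python
from scipy.integrate import solve_ivp

# Simulation scenario of Table 2.2, building on the sub-model functions above.
V_i, V_x, tau = 0.006, 0.001, 0.11      # manifold volumes [m^3], turbo time constant [s]
N   = 2000.0                            # engine speed [rpm]
W_f = 4.0 / 3600.0                      # fuel rate, 4 kg/h expressed in kg/s (assumed unit)
A_egr_max = 1.8e-4                      # maximal effective EGR area [m^2]

def inputs(t):
    """Input profile of Table 2.2 (positions x_egr, x_vgt between 0 and 1)."""
    if t <= 2.0:   return 0.0, 0.0
    elif t <= 4.0: return 0.1, 0.7
    elif t <= 6.0: return 0.7, 0.1
    else:          return 0.5, 0.5

def rhs(t, x):
    """Third-order air-path model (2.1) with constant temperatures."""
    p_i, p_x, P_c = x
    x_egr, x_vgt = inputs(t)
    A_egr = A_egr_max * x_egr           # effective area assumed proportional to position
    dp_i = R * T_i / V_i * (W_ci(P_c, p_i) + W_xi(p_i, p_x, A_egr) - W_ie(p_i, N))
    dp_x = R * T_x / V_x * (W_ie(p_i, N) + W_f - W_xi(p_i, p_x, A_egr) - W_xt(p_x, x_vgt))
    dP_c = (-P_c + P_t(p_x, x_vgt)) / tau
    return [dp_i, dp_x, dP_c]

x0  = [1.086e5, 1.105e5, 350.2]         # initial conditions of the scenario
sol = solve_ivp(rhs, (0.0, 8.0), x0, max_step=1e-3)   # trajectories as in Figure 2.4
```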
2.4. Complete Model in Affine Form

The complete model, summarizing all the relations described previously, can be written in the input-affine form Σ : ẋ = f(x) + g_a(x)u₁ + g_b(x)u₂, where the state is x = [p_i  p_x  P_c]ᵀ = [x₁  x₂  x₃]ᵀ and the inputs are u₁ = A_egr(x_egr) and u₂ = x_vgt. The components are

f_1(x) = \frac{R T_i}{V_i}\frac{\eta_c}{c_p T_a}\frac{1}{\left(\frac{x_1}{p_a}\right)^{\mu}-1}\,x_3 - \eta_v\frac{N}{60}\frac{V_d}{2 V_i}\,x_1

f_2(x) = \frac{T_x \eta_v}{T_i}\frac{N}{60}\frac{V_d}{2 V_x}\,x_1 - \frac{R T_x}{V_x}\,b\left(c\left(\frac{x_2}{p_a}-1\right)+d\right)\frac{x_2}{p_{ref}}\sqrt{\frac{T_{ref}}{T_x}}\sqrt{\frac{2 p_a}{x_2}\left(1-\frac{p_a}{x_2}\right)} + \frac{R T_x}{V_x}\,W_f

f_3(x) = -\frac{x_3}{\tau} + \frac{c_p \eta_t T_x}{\tau}\left(1-\left(\frac{p_a}{x_2}\right)^{\mu}\right) b\left(c\left(\frac{x_2}{p_a}-1\right)+d\right)\frac{x_2}{p_{ref}}\sqrt{\frac{T_{ref}}{T_x}}\sqrt{\frac{2 p_a}{x_2}\left(1-\frac{p_a}{x_2}\right)}

g_a(x) = \begin{pmatrix} \frac{R T_i}{V_i}\frac{x_2}{\sqrt{R T_x}}\sqrt{\frac{2 x_1}{x_2}\left(1-\frac{x_1}{x_2}\right)} \\ -\frac{R T_x}{V_x}\frac{x_2}{\sqrt{R T_x}}\sqrt{\frac{2 x_1}{x_2}\left(1-\frac{x_1}{x_2}\right)} \\ 0 \end{pmatrix}

g_b(x) = \begin{pmatrix} 0 \\ -\frac{R T_x}{V_x}\,a\left(c\left(\frac{x_2}{p_a}-1\right)+d\right)\frac{x_2}{p_{ref}}\sqrt{\frac{T_{ref}}{T_x}}\sqrt{\frac{2 p_a}{x_2}\left(1-\frac{p_a}{x_2}\right)} \\ \frac{\eta_t T_x c_p}{\tau}\left(1-\left(\frac{p_a}{x_2}\right)^{\mu}\right) a\left(c\left(\frac{x_2}{p_a}-1\right)+d\right)\frac{x_2}{p_{ref}}\sqrt{\frac{T_{ref}}{T_x}}\sqrt{\frac{2 p_a}{x_2}\left(1-\frac{p_a}{x_2}\right)} \end{pmatrix}    (2.8)

3. Observability for Nonlinear Systems

Before an observer can be designed, it must be ensured that the nonlinear system is actually observable, although, especially for nonlinear systems, this step is often neglected. Observability answers the question of when it is possible to determine the values of the states x from the input u and the output y of a system over a time interval. For this purpose, the concepts of observability must first be defined, where it is important to distinguish between global and local observability. Given the nature of the nonlinear observer in this specific case, some concepts are clarified in this chapter; additional literature can be found in [23, 24].

The study of observability is required for every system before the design of an observer. An observer is useful if the state information is not available: often not all required states of the system can be measured, for reasons of cost or because of physical limitations. The necessary information about the system states must then be generated by an observer. In the case studied in this thesis, the temperature is a variable whose dynamics cannot be captured well by sensors, and sensors for measuring the turbocharger speed, for example, are simply too expensive. After proper identification of the system model, diagnosis or control of the system can be performed using the states estimated by an observer. For this purpose, observers are designed for dynamic systems based on the mathematical model of the system, the measurement data of at least one system variable, and prior knowledge, in order to reconstruct the real system state.

For the design and implementation of an observer, we must first examine whether an observation is possible at all. Therefore, criteria are established to verify observability, together with representations of the original system that facilitate the reconstruction of its states and meet the objectives of the observation. In this chapter, the theoretical foundations on observability and the important related definitions are described, and the system treated here, the turbocharged diesel engine, is evaluated with respect to them. Some observers for nonlinear systems are mentioned in this chapter and discussed and examined more closely in the following chapter.

3.1. Definitions

Now the observer problem and observability problems are explained in detail. A good overview of this topic can be found in the initial chapter of [25], which is referenced below.
The following general time-invariant nonlinear system is considered:

\Sigma : \quad \dot{x}(t) = f(x(t), u(t)), \qquad y(t) = h(x(t))    (3.1)

Here x(t) ∈ X ⊆ ℝⁿ, u(t) ∈ U ⊆ ℝᵐ and y(t) ∈ Y ⊆ ℝ¹, so the system is a single-output system. The functions f and h are known, and y(t) and u(t) are measurable. Furthermore, f is locally Lipschitz continuous or sufficiently smooth, and φ(t; x₀, u(·)) denotes the solution of the non-homogeneous differential equation.

The goal of an observer is to obtain an estimate x̂ of the state x of the system Σ which converges to the real value of the state. Therefore, the observer error is defined as e(t) := x̂(t) − x(t). The task is that the observer error dynamics are stabilized asymptotically at the equilibrium e = 0. This observer problem is addressed in the following definitions.

Definition 3.1 (Observer [25]). Consider the dynamical system Σ. An observer of Σ is given as an auxiliary system

\dot{X} = F(X(t), u(t), y(t), t), \qquad \hat{x} = H(X(t), u(t), y(t), t)    (3.2)

such that, with e(t) := x̂(t) − x(t):
(i) e(0) = 0 ⇒ e(t) = 0 for all t ≥ 0;
(ii) lim_{t→∞} ‖e(t)‖ = 0.

If (ii) holds for any x(0), x̂(0), it is called a global observer. Additionally, the observer is exponential if (ii) holds with exponential convergence, and tunable if the convergence rate can be tuned.

In general, for an observer as mentioned above, the inputs and outputs are considered over a time interval I = [t₀, t_f]. If it is possible, for any time interval I, to uniquely determine x(t₀) from the knowledge of y(t) and u(t) for t ∈ I, the system Σ is called observable. Here, reference is made to an approach and formulation that was established in particular by [26].

Definition 3.2 (Indistinguishability). A pair of initial states (x₀¹, x₀²) ∈ X × X of the system Σ is called indistinguishable if

\forall u \in U,\ \forall t \geq 0 : \quad h(\varphi(t; x_0^1, u(\cdot))) = h(\varphi(t; x_0^2, u(\cdot)))    (3.3)

A state x is indistinguishable from x₀ if the pair (x, x₀) is indistinguishable. Indistinguishability is thus an equivalence relation on X. Observability can then be defined as follows:
In order to a more precisely explain this situation the following definition is given: Definition 3.5 Local Weak Observability (LWO) The system Σ is called local weakly observable (with respect to x0), if for all x exist a neighborhood U, such that in each neighborhood V of x where is V ⊆ U, there is no indistinguishable state from x in V if in the time intervals considered the trajectories remain in V. This means in simple words that one can distinguish every state from it neighbors if it is no too far.This definition permits the rank conditions characterization. A geometric characterization of observability criteria is introduced here. It is a subset of the state space. Definition 3.6 Observability space The observability space O(h) of the system Σ is the smallest vector space of real-valued of ∞ C functions that contains the components of h and is closed under the Lie derivative along fu := f (·,u) for any constant u m ∈ R (namely such that for any ϕ ∈ O(h) : L fuϕ(x) = ∂ϕ(x) f (x,u) ∈ O(h)). ∂x Here is also revealed for the first time, the role played by the input u. This issue will be deepened in the following section specifically in regard to the nonlinear systems. This means we may distinguish each state of its neighbor without being too far from him. This notation is of great practical relevance because it excludes many unfavorable events and leads later to the rank criterion. Furthermore, it refers directly to the definition of observability map. 3. Observability for Nonlinear Systems 17 In addition, even a stronger formulation of observability, the local observability, based on the so called U be formulated indistinguishable, as introduced in [26]. As already mentioned, the system input u plays an important role in the nonlinear case. Therefore, certain assumptions about the inputs are taken. Definition 3.7 Universal Input The input u is called universal in [0,t] for the system Σ, when ∀x0 , x1 ∃τ ∈ [0,t] : h(ϕ(τ; x0,u(·))) , h(ϕ(τ; x1,u(·))). If u is not universal, then it is called singular input. One can imagine that systems devoid of singular inputs facilitate the observation. For such systems, the following observability terms can be defined. Definition 3.8 Uniformly Observability (UO) The system Σ is called uniformly observable, if every input u is universal. There are systems which have a particular relevance. As with the LTI systems, this even means a kind of independence of the observability of the inputs. The systems that can be converted to the normal form, belong to this class. So this is rather a structural property. Examples of uniform and non-uniform system classes are listed [28]. Of course, there this form is also demonstrated for the simplified turbocharger model. Based on this, even more concepts are introduced in [29], but these are not considered here due to the application. Finally, it should be mentioned that for the observability of nonlinear systems diverse theories exist. Here is only given a brief introduction. In the mathematical sense [30] introduces the concept of the infinitesimal observability. As known in the linear case, the concept of detectability can also be used in non-linear systems. This is, for example, useful if no observability in the considered area is present. A practical application and interesting background material can be found in [31]. To proof the existence of an observer practicaly and, where appropriate, determine numerically, an easily analyzable method is needed. 
To prove the existence of an observer in practice and, where appropriate, to determine it numerically, an easily analyzable method is needed. The most important criterion for studying observability is the rank criterion of [26], presented in this section. A good summary of the concept can be found in [32].

3.1.1. Observability for Autonomous Nonlinear Systems

As a first step of the nonlinear observability analysis, the autonomous system

\dot{x} = f(x), \qquad y = h(x)    (3.4)

is considered; it is independent of an input u(t). First, a characterization of the observability map is derived from the definition of the observability space O(h); for the analysis, the Lie derivative is needed. Intuitively, the procedure consists of differentiating the output y with respect to time, step by step. This yields the following system of equations:

y = h(x) = L_f^0 h(x)
\dot{y} = \frac{\partial h(x)}{\partial x} f(x) = L_f^1 h(x)
\vdots
y^{(n-1)} = L_f L_f^{n-2} h(x) = L_f^{n-1} h(x)

Collecting the Lie derivatives in a vector

q(x) = \begin{pmatrix} h(x) \\ L_f h(x) \\ L_f^2 h(x) \\ \vdots \\ L_f^{n-1} h(x) \end{pmatrix}    (3.5)

and considering

z = \begin{pmatrix} y \\ \dot{y} \\ \vdots \\ y^{(n-1)} \end{pmatrix}    (3.6)

gives

z = q(x).    (3.7)

If the inverse function q^{-1}(z) = x exists, then x can be determined from the knowledge of y, ẏ, …, y^{(n−1)}. Thus, a nonlinear system is globally observable when the mapping z = q(x) has a unique solution for all x. For most nonlinear systems this inverse function is impossible to compute, so another criterion has to be applied in order to verify at least local observability. For this purpose a Taylor series is developed around an operating point x_o:

z = q(x_o) + \left.\frac{\partial q(x)}{\partial x}\right|_{x = x_o} (x - x_o) + \cdots    (3.8)

From this equation it is seen that the state x can be reconstructed from the knowledge of z − q(x_o), since

z - q(x_o) = \left.\frac{\partial q(x)}{\partial x}\right|_{x = x_o} (x - x_o).    (3.9)

The system is then locally observable when the Jacobian matrix Q(x_o) has rank n:

Q(x_o) = \left.\frac{\partial q(x)}{\partial x}\right|_{x = x_o} = \begin{pmatrix} \left.\frac{\partial L_f^0 h(x)}{\partial x}\right|_{x=x_o} \\ \vdots \\ \left.\frac{\partial L_f^{n-1} h(x)}{\partial x}\right|_{x=x_o} \end{pmatrix}    (3.10)

In this case the system of linear equations z − q(x_o) = Q(x_o)(x − x_o) has a unique solution for x, and it can be ensured that the system under consideration is observable in a neighborhood U = {x : ‖x(0) − x_o‖ < ρ}. This analysis for autonomous systems may also be applied to non-autonomous systems by considering the input u as a constant.

3.1.2. Observability for General Nonlinear Systems

Following the above approach, an analysis for general nonlinear systems

\dot{x} = f(x, u), \qquad y = h(x, u)    (3.11)

is considered. Now the system depends on an input u(t); the procedure is the same, differentiating the output y with respect to time step by step:

y = h(x, u)
\dot{y} = \frac{\partial h}{\partial x} f(x, u) + \frac{\partial h}{\partial u}\dot{u} = g_1(x, u, \dot{u})
\ddot{y} = \frac{\partial g_1}{\partial x} f(x, u) + \frac{\partial g_1}{\partial u}\dot{u} + \frac{\partial g_1}{\partial \dot{u}}\ddot{u} = g_2(x, u, \dot{u}, \ddot{u})
\vdots
y^{(n-1)} = \frac{\partial g_{n-2}}{\partial x} f(x, u) + \sum_{i=1}^{n-1} \frac{\partial g_{n-2}}{\partial u^{(i-1)}}\, u^{(i)} = g_{n-1}(x, u, \dot{u}, \cdots, u^{(n-1)})

In this case the derivatives are collected as

z = \begin{pmatrix} y \\ \dot{y} \\ \vdots \\ y^{(n-1)} \end{pmatrix} = \begin{pmatrix} h(x,u) \\ g_1(x,u,\dot{u}) \\ \vdots \\ g_{n-1}(x,u,\dot{u},\cdots,u^{(n-1)}) \end{pmatrix} = q(x, u, \dot{u}, \cdots, u^{(n-1)}).    (3.12)

Also in this case, if the inverse function q^{-1}(z, u, \dot{u}, \cdots, u^{(n-1)}) = x exists, the global or local observability of the system can be determined analogously to the autonomous case. For a system in affine form the analysis is the same, and the transformation equations can be described in terms of Lie derivatives; this is shown in Section 3.2, where the observability of the turbocharged diesel engine is evaluated.
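The construction of the map q(x) and of the Jacobian Q(x) in (3.5)–(3.10) can be automated with a computer algebra system. The sketch below uses a simple hypothetical third-order system (not the thesis model, which is treated symbolically in Section 3.2) to illustrate the procedure.

```python
import sympy as sp

# Hypothetical third-order example illustrating Section 3.1.1:
# build q(x) = [h, L_f h, L_f^2 h] and its Jacobian Q(x), then check the rank.
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([x2, x3, -x1 - sp.sin(x2)])   # example dynamics (illustrative only)
h = x1                                       # measured output

def lie(phi, f, x):
    """Lie derivative L_f phi = (d phi / dx) * f."""
    return (sp.Matrix([phi]).jacobian(x) * f)[0, 0]

q = sp.Matrix([h, lie(h, f, x), lie(lie(h, f, x), f, x)])   # observability map (3.5)
Q = sp.simplify(q.jacobian(x))                              # Jacobian Q(x), eq. (3.10)
print(Q, Q.rank())   # full rank 3 -> locally observable by the rank criterion
```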
3.1.3. Normal Observable Form for Nonlinear Systems

In this section the normal observable form for nonlinear systems is derived from the previous analysis. Considering

x = q^{-1}(z, u, \dot{u}, \cdots, u^{(n-1)})    (3.13)

and

z = q(x, u, \dot{u}, \cdots, u^{(n-1)}),    (3.14)

the n-th output derivative of the original system may be represented as

y^{(n)} = g_n(x, u, \dot{u}, \cdots, u^{(n)}) = \frac{\partial g_{n-1}}{\partial x} f(x, u) + \sum_{i=1}^{n} \frac{\partial g_{n-1}}{\partial u^{(i-1)}}\, u^{(i)}.    (3.15)

With

\dot{z} = \begin{pmatrix} \dot{z}_1 \\ \dot{z}_2 \\ \vdots \\ \dot{z}_{n-1} \\ \dot{z}_n \end{pmatrix} = \begin{pmatrix} \dot{y} \\ \ddot{y} \\ \vdots \\ y^{(n-1)} \\ y^{(n)} \end{pmatrix} = \begin{pmatrix} z_2 \\ z_3 \\ \vdots \\ z_n \\ g_n(q^{-1}(z, u, \dot{u}, \cdots, u^{(n-1)}), u, \dot{u}, \cdots, u^{(n)}) \end{pmatrix}    (3.16)

and

y = z_1,    (3.17)

and introducing the transformation

\varphi(z, u, \dot{u}, \cdots, u^{(n)}) = g_n(q^{-1}(z, u, \dot{u}, \cdots, u^{(n-1)}), u, \dot{u}, \cdots, u^{(n)}),

this results in

\dot{z} = \begin{pmatrix} z_2 \\ z_3 \\ \vdots \\ z_n \\ \varphi(z, u, \dot{u}, \cdots, u^{(n)}) \end{pmatrix}    (3.18)

y = z_1.    (3.19)

This is an alternative system representation called the Nonlinear Observable Form. Figure 3.1 shows the structure corresponding to the representation of a third-order nonlinear system in ONF.

Figure 3.1.: Structure of a third-order system in observable normal form

If a system is given in this structure, it is easy to analyze its observability from the system equations, because from the initial value y = z₁ and from the output y and its derivatives y^{(i)} all state values z_i are determined. A system that is in nonlinear observable normal form is hence always globally observable. Furthermore, any globally observable system can be transformed into this observable normal form. The transformation, called a diffeomorphism, exists only for some systems; for the corresponding analysis, with the notation used in [33], [34], the following terms are defined.

Determining the observability of a nonlinear system is a complicated task. In the case of smooth systems, one idea for probing observability is the differentiation of the output, which leads to the definition of the observability map.

Definition 3.9 (Observability map). The function o_k : X → ℝ^k, with order k ∈ ℕ, is called the observability map of Σ (cf. Definition 3.6), with

o_k(x) := \begin{pmatrix} h(x) \\ L_{f_u} h(x) \\ L_{f_u}^2 h(x) \\ \vdots \\ L_{f_u}^{k-1} h(x) \end{pmatrix}    (3.20)

The corresponding observability matrix O_k(x) ∈ ℝ^{k×n} follows as

O_k(x) := \frac{\partial o_k(x)}{\partial x} = \begin{pmatrix} dh(x) \\ dL_{f_u} h(x) \\ \vdots \\ dL_{f_u}^{k-1} h(x) \end{pmatrix}    (3.21)

For the solution of the observer problem, it would be advantageous if the observability map were at least injective. This is a criterion similar to that of the well-known Kalman theorem on the observability of linear systems. If only weak observability needs to be proved, the following concept is easy to test.

Theorem 3.10 (Observability rank criterion). If there exist k ∈ ℕ and x ∈ X such that

rank(O_k(x)) = n,    (3.22)

then o_k is a diffeomorphism (locally around x) and Σ is weakly locally observable around x.

In order to assure the existence of the diffeomorphism to the normal form for a particular nonlinear system, the following concepts from [35] are presented.
For a positive integer r < n the matrix,  dh(x)     dL fuh(x)  2  Or(x) :=  dL f h(x)  u  . ..   (3.23)  dLr−1 f h(x) u , is called reduced observability matrix, when r = n it results in the well known observability matrix. Theorem 3.11 There exists a local diffeomorphism in a neighborhood of x n 0 ∈ R with T(x0), transforming the system into a normal observable form if C1 rankOr(x) = r, C2 [adi f v, ad j f v] = 0, 0 ≤ i, j ≤ r − 1, C3 [g, adi f v] = 0, 0 ≤ i ≤ r − 2, in some neighborhood of x0 Where v is an arbitrary smooth solution, called starting vector, of: Ok(x) · v(x) = e r r ∈ R , (3.24) Where er denotes the rth column of the identity matrix. The case if r = n implies the existence of the Normal Observable Form whereas for r < n could be achieved a Partial Normal Observable Form [31]. The upper bound for r is the rank of the observability matrix. The choice of the integer r offers some degrees of freedom. In particular, smaller values of r lead to weaker existence conditions C2 and C3. An extra condition assures that the diffeomorphism is global: it is if adi jr, 0 ≤ i ≤ r − 1 are complete vector fields. The evaluation of this criterion is generally not a simple task. To evaluate the rank condition, the calculation of the determinant of the observability matrix is needed. This is determined by the gradient of the Lie derivatives of the output. With increasing system order so it will be more difficult to calculate the determinant and also the individual terms and determining analytically singular values. 3. Observability for Nonlinear Systems 23 In the next two subsections, this evaluation is made for our system 2.8. The next statement is important and is taken as a basis for the design of various methods of observation as will be seen later. From this analysis an interesting alternative representation of the original system may be achieved, which requires no explicit inversion of the observability map on and preserves still all previous statements valid. Consider this two locally reciprocally convertible systems: on(x) ẋ = f (x) + g(x,u) ż = A0z + ϕ(z,u) (3.25) o−1 n (ξ) Furthermore, the following relationships apply: d ∂o (x) z = on(x) n ⇒ ż = on(x) = ẋ (3.26) dt ∂x 1 d ∂o−1 1 (z) x = o−n (z) ⇒ ẋ = o−n (z) = n ż (3.27) dt ∂z Rearranging and inserting the obtained(result: ) ∂on(x) −1 ( ) f (x) + g(x,u) = A z + ϕ(z,u) ∂x 0 ( ) ∂on(x̂) −1 ẋ = ż (3.28) ∂x This result will be very useful in the design of the observers of the Chapter 4. Now only the observability matrix On(x̂) must be inverted, which is much simpler than the determination of the complete transformation inverse of an arbitrary non-linear mapping. This knowledge simplifies significantly the work with the system under consideration an makes it possible an easier implementation of the observers. 3.2. Symbolic Analysis of the System Observability In this section a symbolic evaluation of the observability based on the rank criterion is demonstrated. Furthermore, the results from this section will be use in the next chapter. There system Σ is given by   β1(x1)x3 − α1x1  f (x) =  α2x1 − β2(x ) + Ψ2 x3  − + β (x  τ 3 2) 3. 
3.2. Symbolic Analysis of the System Observability

In this section a symbolic evaluation of the observability based on the rank criterion is demonstrated. The results from this section will also be used in the next chapter. The drift vector field of the system Σ is given by

f(x) = \begin{pmatrix} \beta_1(x_1)\,x_3 - \alpha_1 x_1 \\ \alpha_2 x_1 - \beta_2(x_2) + \Psi \\ -\frac{x_3}{\tau} + \beta_3(x_2) \end{pmatrix}

with

\alpha_1 = \eta_v \frac{N}{60}\frac{V_d}{2 V_i}, \qquad \alpha_2 = \frac{T_x \eta_v}{T_i}\frac{N}{60}\frac{V_d}{2 V_x}, \qquad \Psi = \frac{R T_x}{V_x} W_f,

\beta_1(x_1) = \frac{R T_i}{V_i}\frac{\eta_c}{c_p T_a}\frac{1}{\left(\frac{x_1}{p_a}\right)^{\mu} - 1},

\beta_2(x_2) = \frac{R T_x}{V_x}\, b\left(c\left(\frac{x_2}{p_a} - 1\right) + d\right)\frac{x_2}{p_{ref}}\sqrt{\frac{T_{ref}}{T_x}}\sqrt{\frac{2 p_a}{x_2}\left(1 - \frac{p_a}{x_2}\right)},

\beta_3(x_2) = \frac{c_p \eta_t T_x}{\tau}\left(1 - \left(\frac{p_a}{x_2}\right)^{\mu}\right) b\left(c\left(\frac{x_2}{p_a} - 1\right) + d\right)\frac{x_2}{p_{ref}}\sqrt{\frac{T_{ref}}{T_x}}\sqrt{\frac{2 p_a}{x_2}\left(1 - \frac{p_a}{x_2}\right)}.

The corresponding observability map o_n is calculated by successively differentiating the output, with h(x) = x₁:

y = h(x)

The first derivative is calculated as follows:

\dot{y} = \frac{\partial h(x)}{\partial x}\dot{x} = \frac{\partial h(x)}{\partial x}\bigl(f(x) + g_a(x)u_1 + g_b(x)u_2\bigr) = L_f h(x) + L_{g_a} h(x)\,u_1 + L_{g_b} h(x)\,u_2

The second derivative, which is the last one in this calculation since n = 3, is obtained in the same way by differentiating ẏ along the system dynamics:

\ddot{y} = L_f^2 h(x) + L_{g_a}L_f h(x)\,u_1 + L_{g_b}L_f h(x)\,u_2 + L_f L_{g_a} h(x)\,u_1 + L_{g_a}^2 h(x)\,u_1^2 + L_{g_b}L_{g_a} h(x)\,u_1 u_2 + L_f L_{g_b} h(x)\,u_2 + L_{g_a}L_{g_b} h(x)\,u_2 u_1 + L_{g_b}^2 h(x)\,u_2^2 + L_{g_a} h(x)\,\dot{u}_1 + L_{g_b} h(x)\,\dot{u}_2

With these expressions the map o_n(x, u, \dot{u}) is defined as

o_n(x, u, \dot{u}) = \begin{pmatrix} h(x) \\ L_f h(x) + L_{g_a} h(x)\,u_1 + L_{g_b} h(x)\,u_2 \\ \psi(x, u, \dot{u}) \end{pmatrix}    (3.29)

where ψ(x, u, u̇) denotes the expression for ÿ given above. From this result the observability analysis of the system (2.8) is made.

3.2.1. Analysis without Inputs

Following the above procedure, a full symbolic analysis of the system under consideration would be difficult. In order to obtain a good first approach to the observability, the analysis is made first for the system without input dynamics, i.e. u₁ = u₂ = 0. The map obtained with the help of symbolic software is then

o_3(x) = \begin{pmatrix} h(x) \\ L_f h(x) \\ L_f^2 h(x) \end{pmatrix} = \begin{pmatrix} x_1 \\ \beta_1(x_1)x_3 - \alpha_1 x_1 \\ \left(\frac{\partial \beta_1(x_1)}{\partial x_1} x_3 - \alpha_1\right)\bigl(\beta_1(x_1)x_3 - \alpha_1 x_1\bigr) + \beta_1(x_1)\left(\beta_3(x_2) - \frac{x_3}{\tau}\right) \end{pmatrix}    (3.30)

In order to study the observability, the observability matrix O₃(x) is calculated as the derivative of the map o₃(x) with respect to the states x; ordering the columns according to the derivatives with respect to x₁, x₃ and x₂, it takes a lower triangular structure:

O_3(x) = \frac{\partial o_3(x)}{\partial x} = \begin{pmatrix} 1 & 0 & 0 \\ \ast & \beta_1(x_1) & 0 \\ \ast & \ast & \beta_1(x_1)\,\frac{\partial \beta_3(x_2)}{\partial x_2} \end{pmatrix}    (3.31)

Since the resulting matrix is lower triangular, it is only necessary to verify that the elements on the diagonal are different from zero to ensure that the determinant of the matrix is not zero. For the element at position (2,2) the analysis is:

\beta_1(x_1) = \frac{R T_i}{V_i}\frac{\eta_c}{c_p T_a}\frac{1}{\left(\frac{p_i}{p_a}\right)^{\mu} - 1} \neq 0 \quad\Longleftrightarrow\quad \left(\frac{p_i}{p_a}\right)^{\mu} - 1 \neq 0,

so for this case a singularity exists when pi = pa. The analysis for the element (3,3) of the matrix requires

\frac{\partial \beta_3(x_2)}{\partial x_2} \neq 0.

The result of this derivative is a complex mathematical expression whose behavior is represented in Figure 3.2. There it can be seen that this expression could only become zero if px < pa. However, as discussed in Section 2.2, the operating region of the system does not include that area.

Figure 3.2.: Value of O_{3,3} versus x_2

From this analysis it can be concluded that the system is at least locally weakly observable in the uncolored area of Figure 2.2. The remaining analysis is the evaluation of the inputs to determine whether they are universal and, if so, whether the system is globally observable.
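The two diagonal conditions can also be checked numerically over the valid operating region of Table 3.1. The sketch below mirrors Figure 3.2: it evaluates β₁(x₁) and a finite-difference approximation of ∂β₃(x₂)/∂x₂ on a grid of pressures; the grid resolution and the central-difference step are illustrative choices.

```python
import numpy as np

# Numerical check of the diagonal entries of O3(x) over the valid region (Table 3.1).
R, c_p, mu = 287.0, 1014.4, 0.286
T_a, T_i, T_x, T_ref = 298.0, 313.0, 509.0, 298.0
p_a, p_ref = 101.3e3, 101.3e3
eta_c, eta_t = 0.61, 0.76
V_i, tau = 0.006, 0.11
a, b, c, d = -0.136, 0.176, 0.4, 0.6

def beta1(p_i):
    return R * T_i / V_i * eta_c / (c_p * T_a) / ((p_i / p_a)**mu - 1.0)

def beta3(p_x):
    turb = b * (c * (p_x / p_a - 1.0) + d) * p_x / p_ref * np.sqrt(T_ref / T_x) \
           * np.sqrt(2.0 * p_a / p_x * (1.0 - p_a / p_x))
    return c_p * eta_t * T_x / tau * (1.0 - (p_a / p_x)**mu) * turb

p_i = np.linspace(1.001 * p_a, 1.55e5, 500)
p_x = np.linspace(1.001 * p_a, 1.75e5, 500)
dbeta3 = (beta3(p_x + 1.0) - beta3(p_x - 1.0)) / 2.0     # central difference in Pa
print(np.all(beta1(p_i) > 0), np.all(dbeta3 > 0))        # both diagonal factors stay nonzero
```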
From this analysis it can be concluded that the system is at least locally weakly observable in the non-colored area of Figure 2.2. The remaining analysis is the evaluation of the inputs to determine whether these are universal and if so the system is globally observable. 3. Observability for Nonlinear Systems 27 Figure 3.2.: Value of O3,3 vs x2 3.2.2. Analysis of the Input xegr Given the system Σ : ẋ = f (x)+ ga(x)u1 + gb(x)u2, is considered in this section u2 = xvgt = 0 and therefore Lgbh(x) = 0  √  (  RTi x2 √ x1 x1  √ 2 1 − )  V  i RTx x2 ( x2  ga(x)  =  RTx x   2 x 2 1 x1  − √ 1 − ) Vx RTx x  2 x2  0  In this case the observability map on calculated with y = h(x) = x1 results:   o3(x)  =  h(x)  L h(x) + L h(x)u f ga 1  L2 f h(x) + LgaL f h(x)u1 + L f Lgah(x)u 2 1 + Lg h(x)u2 + L u̇  a 1 ga 1    x1   √ RT (  = β (x )x α x i x2 x1 x   1 1 3 − 1 1 + √ 2 1 1 − )u  1 (3.32) Vi RT  x x2 x2  B(x) 3. Observability for Nonlinear Systems 28 with: 2∂β1(x1) x3 ∂β (x ) B(x) = β1(x1)(−α1x3√+ x3 − + β3(x2)) − α1x1 + u1[( x 1 1 −α1 + 3 ) ∂x1 τ ∂x1 RTi x2 x1 x1 ∂Lgh(x1,x2) ( √ 2 (1 − )) + (β (x )x − α V x x ∂x 1 1 3 1x1) + i RTx 2 2 1 √ ∂Lgh(x1,x2) x3 2 ∂Lgh(x1,x2) RT x x x (− + β3(x2))] + u1 ∗ [ ( i 2 √ 2 1 (1 1 − ))] ∂x2 τ ∂x1 Vi RTx x2 x2 the above expression is fully of complex mathematical forms and the observability matrix is   1 0 0  ∂o3(x)  ∂β1(x1) RTi x1  O3(x) = = ∂x −α1 + Pc √ √ u β (x )  (3.33) ∂x V 1 1 1  1 i RT  x 2x1(x2 − x1)  ∂B(x)∂x1 ∂B(x)∂x2 ∂B(x)∂x  3 where due to the structure, it will be useful to apply cofactors in order to verify that the determinant of the matrix is not zero. Then for cofactors, the analysis is: RTi √ √ x1 u1 β1(x  1) det(O3(x)) 1   = ·  Vi RTx 2x1(x2 − x1)  ∂B(x) ∂B(x)  ∂x2 ∂x3 RTi √ x1 ∂B(x) ∂B(x) = √ u V 1 · − β1(x ∂x 1) · i RTx 2x1(x2 − x1) 3 ∂x2 Then for this case we can assure that the input xegr is a universal input if: RTi √ x1 ∂B(x) ∂B(x) √ u1 · − β1(x1) · , 0 Vi RTx 2x1(x2 − x1) ∂x3 ∂x2 The analytic calculation of this expression, that means finding zeros, results in a complex task even for the computational software. Then we can not demonstrate symbolically that no values exist with the states within the validity area of the system where the determinant is equal to zero. However, a valid option is to graph the behavior of the determinant, which was expressed analytically as a function of states, considering the dynamics of the system and the entire range of input values. This approach is made in order to find graphically whether this determinant at some point becomes zero value, and if so, the values of the states where this singularity exists. With this aim, the Figure 3.3 is analyzed Thus, as noted, there is no value of the states in which the determinant becomes zero, then we can affirm that xegr is a universal input. 3. Observability for Nonlinear Systems 29 5 5 x 10 x 10 1.22 1.28 1.26 1.2 1.24 1.18 1.22 1.16 1.2 1.14 1.18 1.16 1.12 1.14 1.1 1.12 1.08 1.1 −3 −2.5 −2 −1.5 −1 −0.5 0 −3 −2.5 −2 −1.5 −1 −0.5 0 det(O ) 9 x 10 det(O ) 9 3 3 x 10 800 750 700 650 600 550 500 450 400 350 300 −3 −2.5 −2 −1.5 −1 −0.5 0 det(O ) 9 3 x 10 Figure 3.3.: Behaviour of the determinat of O3(x) regard the states 3.2.3. 
Analysis of the Input xvgt The system Σ : ẋ = f (x) + ga(x)u1 + gb(x)u2, is considered in this section u1 = xegr = 0 and therefore Lgah(x) = 0   0   ( ( ) ) √ √  gb(x) =  RT  a x x c 2 x Tre f 2p p  1 d 2 a (1 a )   − + −   Vx pa ( ( pre)f )T  x √x2 √x2  ηtTxcp 1 RTx x2 x2 T 2p p re f a a   a( ) c − 1 + d (1  − ) τ x1 µ Vx pa pre f Tx x  2 x2  − 1  pa The corresponding observabilitymap on is given by:   h(x)   x1  o (x) =  L h(x)  =  ( β1(x1)x3 − α1x1 3 f ∂β (x ) x ( ) a )  2  L f h(x) + LgbL f h(x)u2 β1(x1) −α1x3 + x2 1 1 3 3 − + β3(x2) 1 + u  ∂x τ b 2 − α1x1 1 (3.34) With the respective observability matrix: ∂o (x)   1 0 0  (x) 3 = =    ∗ β1(x1) O3 ∂x ( a ) 0  (3.35) ∂β (x ) ∗ ∗ 1 + u2 β1(x 3 2  b 1)  ∂x2 p [Pa] i P [W] c p [Pa] x 3. Observability for Nonlinear Systems 30 where due to that the resulting matrix is a lower triangular matrix, it will be only necessary to verify that the elements on the diagonal are different from zero to ensure that the determinant of the matrix is not zero. Then for the element at the position 2,2 the analysis is: RT η (x ) i c ( 1 β1 1 = ) , 0 Vi cpTa pi µ − 1 pa RTi ηc ( 1 ) , 0 Vi cpTa pi µ − 1 p ( a 1 ) , 0 pi µ − 1 pa Thus for this case exists a singularity when pi = pa. The analysis for the element 3,3 of the matrix is: ∂β3(x2) , 0 ∂x2 This equation was treated in the abo(ve sectio)n but we have another condition a 1 + u , 0 b 2 b − , u a 2 b Since − = 1.2941, then this value is not within the range of possible input values [0,1] a for u2. Then the input xavg is a universal input because any possible value of the states could turn the determinant in zero. 3.2.4. Conclusion about the Observability of the System After the analysis made in the previous sections, we can conclude that the system under consideration is uniformly observable in the operation area of validity since that the autonomous system is observable and additionally the both inputs are universal inputs. The table 3.1 shows the ranges of operation of the variables. Table 3.1.: Valid values of the states Variable Low Limit Upper Limit Unit pi pa 1.55 5 · 10 [Pa] px pi 1.75 · 105 [Pa] Pc 0 2600 [W] 4. Design of Observers for Nonlinear Systems After the observability of the system is verified, an observer can be designed. The design of observers for nonlinear systems has attracted the interest of many researchers and is still a very important and challenging task in control design. A lot of approaches had been development, the most popular is the Extended Luenberger Observer that uses the theory existent for the linear case, this observer and other three kinds of observers will be explained and development for the system under consideration in this work. Additionally, in the final part of this chapter others designs with interesting approaches and are mentioned. As mentioned earlier, the goal is to reconstruct the states of the system from control signals input and a single measured state. Figure 4.1 shows the block diagram for this design. Figure 4.1.: Block diagram of the observer for a Turbocharger Diesel Engine 4. Design of Observers for Nonlinear Systems 32 4.1. Extended Luenberger Observer The Luenberger observer is the most common approach for the development of estimators for linear systems, the version applicable to nonlinear systems is called Extended Luenberger observer. 
In this section, we consider the observer design based on a generalization of the Extended Luenberger Observer, this approach is based on the work of Röbenak [31], and considers single output system: ẋ = f (x,u) y = h(x) with smooth maps f : Rn n → R , and h : Rn → R. The problem of observer design for nonlinear systems was treated in many researches, normally based on normal forms that required that the nonlinear matrix should be regular where it is needed to decompose the original system in an observable part and another unobservable part. This approach has the advantage that is capable of designing an observer for a system in a partial observer form. 4.1.1. Derivation The observer get the same structure as that of a Luenberger observer: x˙̂ = f (x̂,u) + k(x̂,u)(y − h(x̂)), where we have to determine the observer gain k. The block diagram of this approach is Figure 4.2.: Diagram of a Extended Luenberger Observer. shown in 4.2 The error of the observer is given by e = x − x̂ then: ė = f (e + x̂,u) − f (x̂,u) + k(x̂,u)(y − h(x̂)) 4. Design of Observers for Nonlinear Systems 33 Thus is obtained a non-linear differential equation for the observer error. Replacing in the output: y = h(x) = h(e + x̂) (4.1) in the non-linear differential equation yields the observer error: ė = f (e + x̂,u) − f (x̂,u) + k(x̂,u)(h(e + x̂) − h(x̂)) (4.2) Then the design aim is with the above non-linear system equation, to construct k such that in e = 0 an asymptotically stable rest position is present. The simplest procedure, for the design of the observer matrix k is via a Taylor expansion in a fixed point x̂o, with u interpreted as a system parameter. That is we have ∂ f ∣∣ f (x̂o + ∆x̂o + e,u) = f (x̂o,u) + ︸ ︷∣∣︷ ·(∆x̂o + e) + · · · ∂x̂ x̂= x̂︸o A When the error tends to zero, results in: f (x̂o + ∆x̂o,u) = f (x̂o,u) + A · ∆x̂o + · · · The same procedure for the output function: ∂h ∣∣ h(x̂o + ∆x̂o + e,u) = h(x̂o,u) + ︸ ∣ ·(∆x̂ + e) + · · · ∂ x̂ ︷∣ o x̂︷= x̂︸o C whereby there is obtained h(x̂o + ∆x̂o,u) = h(x̂o,u) + C · ∆x̂o + · · · Substituting the above Taylor series, in the observer error Equation 4.2, then gives: ė ≈ (A − kC)e (4.3) That is the estimation equation of the linear observer. It should be noted that it represents only an approximation due to the neglect of higher derivatives in the Taylor series. Thus, taking the matrix (A−kC) Hurwitz, the observer error e→ 0 when t→∞. The situation is valid only around the point of operation. In order to get better results, so the linearization is described as a function of x̂ and u, that means: ∂ f (x̂,u) A(x̂,u) = ∂x̂ ∂h(x̂) C(x̂) = ∂x̂ 4. Design of Observers for Nonlinear Systems 34 Then the matrices are no longer constant, and do not depend on the working point. Furthermore, the observer matrix k is selected now as a function of x̂ and u. Thus arises instead of linear error equation, the nonlinear: ė = (A(x̂,u) − k(x̂,u)C(x̂))e (4.4) The aim is now the appropriate choice of k(x̂,u) interpreted consistently. This is done so that all the eigenvalues of F = (A(x̂,u) − k(x̂,u)C(x̂)) has negative real parts, however, it is intuitive and difficult to justify rigorously since it is based on a linearization. Thus: P(s) = det(sI − F) = d∏et(sI − (A(x̂,u) − k(x̂,u)C(x̂))) n = (s − λi) i=1 For the calculation of the observer matrix k(x̂,u) it is considered the characteristic polynomial if the form: P(s) = sn + an−1(k(x̂,u))sn−1 + · · · + a0(k(x̂,u)) (4.5) 4.1.2. 
Calculation of the Observer Gain in Original Coordinates We assume that the conditions of 3.11 are fulfilled, then the diffeomorphism exits. Applying the transformation to system 4.1. In order to express the gain for the observer shown in the equation 4.5 in original coordinates first the diffeomorphism T and S are considered as inverse maps to each other: S(T(x)) = x T(S(z)) = z. In this part the next notation will be used: • The Lie brackets are defined by [ f g] ∂g f ∂ f , = ∂x − ∂x g • Iterated Lie brackets are given by adk f v = [ f ,adk−1 f v] with ad0 f v = v Where v was defined in Theorem (3.11). More information about this issues is discussed in [36],[37]. Then based on [35] if there is a local diffeomorphism, then it could be shown that: Tadi ∂ − f v = ∀ i = 0,...,n − 1 (4.6) ∂zi+1 4. Design of Observers for Nonlinear Systems 35 Considering TS = I, whereby adi ∂ − f v = S ∀ i = 0,...,n − 1 (4.7) ∂zi+1 With (4.5) is obtained S′(ẑ)l = p0v(x̂) + p1ad r−1 − f v(x̂) + + pr−1ad f v(x̂) (4.8) − then T(adr v + ad adr−1 f −g f v) = Tadr f v + Tad adr−1 − − − −g − f v = T[− f ,adr−1v] + T[ g,adr−1 f − v] − − f = [−T( f + g),Tadr1 v] − f ∂ = ∑[−( f̄ + ḡ), ] ∂zr n ∂ ∂ = ( ( f̄ j + ḡ j) ) ∑ ∂zr ∂z j=1 j n ∂ ∂ = ( ( f̄ j + ḡ j) ) ∑ ∂z ∂z j r =1 j n ∂ ∂ = (( α ) ) ∂z j ∂z j=1 r j ∂ = α ∂z j r Hence, from the above relations, the gain in original coordinates is given by: k(x̂,u) = p0v(x̂) + p1ad r−1 r r−1 − f v(x̂) + · · · + pr−1ad f v(x̂) + ad f v(x̂) + ad ad − − −g − f v(x̂) (4.9) It is visible that the gain k depend of the observed states ẋ and of the input u, this is a nonlinear version of the Ackermann formula applies also to for systems that are not uniformly observable. It was assumed that all the existence conditions are fulfilled, but for the computation of k only the first condition is required. 4.1.3. Realization and Results In order to simulate the dynamic of the process different values of the inputs were used, see Table 4.1 where all times are in seconds and the inputs in percentage. For this simulation scenario the engine speed considered is N = 2250 RPM, and the fuel rate W f = 6 Kg/h. The initial conditions of the states are pi(0) = 1.086e5 Pa, px(0) = 1.105e5 Pa, Pc(0) = 350.2 W. Three different designs were probed, with the three observer eigenvalues placed at the 4. Design of Observers for Nonlinear Systems 36 same value. These cases are show Table 4.2 Table 4.1.: Values of the inputs for the simulation t ≤ 0.5 0.5 < t ≤ 1 xegr 0.2 0.9 xvgt 0.7 0.6 Table 4.2.: Eingenvalues Case λ2 1 −8 2 −14 3 −20 The results for each state of the system are shown in Figures 4.3, 4.4, 4.5 respectively. The trajectories of the system are drawn with solid lines. We used dashed lines for the observer. In addition, we plotted the trajectories of the error between the real states and the estimated states using dash-dotted lines. It can be seen that the extended Luenberger observer designed with eigenvalue furthest from the origin converges faster. In fact, the rate of convergence depends of the place on the eigenvalues as expected. On the other hand, the observer gain was difficult to compute. 
5 4 x 10 x 10 1.35 0.5 λ λ 1 1 λ λ 2 2 1.3 λ 0 3 λ 3 Model 1.25 −0.5 1.2 −1 1.15 −1.5 1.1 −2 1.05 −2.5 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 Time[s] Time[s] Figure 4.3.: Estimate pi 5 4 x 10 x 10 1.35 0.5 λ λ 1 1 λ λ 2 2 0 1.3 λ 3 λ 3 Model −0.5 1.25 −1 1.2 −1.5 1.15 −2 1.1 −2.5 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 Time[s] Time[s] Figure 4.4.: Estimate px Furthermore, for a better comparative of the performances of the three design the Figure 4.6 show the root mean square error for the three states. p [Pa] p [Pa] x i e e p p x i 4. Design of Observers for Nonlinear Systems 37 1200 500 λ λ 1 1 λ 1000 λ 2 2 λ 400 3 λ 3 Model 800 300 600 200 400 100 200 0 0 −200 −100 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 Time[s] Time[s] Figure 4.5.: Estimate Pc 8 x 10 5 λ 1 λ 2 0 λ 3 0 0.02 0.04 0.06 0.08 0.1 Time[s] 8 x 10 5 0 0 0.02 0.04 0.06 0.08 0.1 Time[s] 5 x 10 4 2 0 0 0.02 0.04 0.06 0.08 0.1 Time[s] Figure 4.6.: Root mean square error P [W] c √ √ √ (P 2c − P̂c) (px − p̂x)2 (pi − p̂i)2 e P c 4. Design of Observers for Nonlinear Systems 38 The complete performance is easier to compare and such as shown in the Table 4.6 that contain the Mean Square Error for each case, this is only a comparative value without other meaning, which unit is the same of the state unit squared, and is given by: 1 ∑n MSE = (x̂ 2 n i − xi) i=1 Table 4.3.: Mean Square Error MSE(pi) MSE(px) MSE(Pc) λ = −8 4.0395 · 106 2.9295 6 · 10 2.8062 103 · λ = −14 3.1091 · 106 2.6178 6 · 10 4.7976 103 · λ = −20 2.6725 · 106 3.2307 6 · 10 7.0937 103 · These results shows that although the convergence time decreases using poles placed more distant from origin, there is a trade off because a shorter convergence means in this case, see Pc, more overshoot, which will make the estimation error higher. We show a method for observer nonlinear system with a good performance based on a Extended Luenberger observer with additionally the advantage of the capability of estimate states even if the observer matrix is singular, this is not the case of the system worked here, but however, it is an important topic that could be take in account at the moment to design observer for other systems. Moreover, this approach gives an expression to obtain the gain without necessity of transforming the system to the normal form, that is the observer gain can be obtained without an explicit computation of this form. 4. Design of Observers for Nonlinear Systems 39 4.2. High Gain Observer The basic concept of the High-Gain Observer was developed by different research groups [38, 39, 40]. High gain observer design can be carried out for nonlinear systems, whose dynamics can be decomposed into a linear and a Lipschitz continuous nonlinear part. The observer uses a linear output injection, which is specified by a constant gain matrix. Due to this simple structure, high gain observers are frequently used in practical applications. The observer gain is often chosen via eigenvalue placement. However, the formal existence conditions are more complicated. In many cases, there is a finite bound on the maximum feasible Lipschitz constant of the nonlinear part for which the error dynamics can be stabilized.The existing results can be improved significantly if the structure of the linear part is taken into account. The estimation of the dynamics of the states of the system is made in this chapter trough a High Gain Observer. As described in [41], this is a relatively simple approach to nonlinear observer design and presents robust features. 
The basic idea is easy to understand: It separates the system under consideration in a linear and a non-linear part. Then a conventional linear observer is first nominally designed for the linear part. The second step is then to try to find the greatest possible gain that domine the nonlinear part and achieved an asymptotically stable estimation error dynamics. After checking the observability of the system as seen in Section 3.2 and observer test approaches were considered, then in Section 4.2.1 it is shown how the concept of the high gain observer can be derived . Finally, the design method and the difficulty of the implementation is presented. 4.2.1. Derivation This section will present, how an observer that stabilizes asymptotically the estimation error dynamics of the system under consideration can be found with the high-gain approach. The exposition is based on [42] and the summary of [33]. It will be presented a fairly detailed evidence, to make it clear how these characteristics and assumptions of the observer are made. To illustrate the operation of a high-gain observer, assume for now the non-linear system in the form: ẋ(t) = f (x,u) y(t) = g(x) could be transformed into the non-linear observability normal form, if the observability is assured using a diffeomorphisms. In this equation the function ϕ describes the 4. Design of Observers for Nonlinear Systems 40 nonlinearities of the system. The system description is then in the following ONF n−1 ż(t) = Az + bϕ(z,u,u̇, · · · , u ) y(t) = cTz with     0 1 0 · · · 0 0 A =  0 0 1 · · · 0  .. .. .. . . . . .  0    .  [ ] . . . .  , b =  ..  , cT = 1 0 · · · 0 0 0 0 · · · 1 00 0 0 0 1· · · The matrix and vectors are associated with linear system description called Brunovsky normal form. Similar to the Luenberger observers for linear systems, the high-gain observer is given for the above system with the following estimation dynamics: n−1 z˙̂ = Aẑ + bϕ(ẑ,u,u̇, · · · , u ) + l(ε)(y − ŷ) y = cTz with     ε−1  0 · · · 0  k1         0 ε−2     · · · 0    l(ε)   =  . . . .  k    2      .   = Λ(ε)K .. .. . . .. .. 0 0 −n · · · ε kn Here, for the gain observer vector, the parameter is set ε > 0, and Λ(ε) is a diagonal matrix having the values ε−i as elements dii of its diagonal. The vector l(ε) will be denoted with K = [k1 k2 · · · k ]T n a constant vector. The high-gain observer for the system is shown in Figure 4.7. The observer error e = z − ẑ has the following dynamics n−1 n−1 ė = (A − l(ε)cT)e + b(ϕ(z,u,u̇, · · · , u ) − ϕ(ẑ,u,u̇, · · · , u )) n−1 n−1 ė = (A −Λ(ε)KcT)e + b(ϕ(z,u,u̇, · · · , u ) − ϕ(ẑ,u,u̇, · · · , u )) If we assume initially, n−1 n−1 ϕ(z,u,u̇, · · · , u ) = ϕ(ẑ,u,u̇, · · · , u ) so the error dynamics is linear with ė = (A−Λ(ε)lcT)e and can be predetermined with the 4. Design of Observers for Nonlinear Systems 41 Figure 4.7.: Structure of a High Gain Observer using ONF eigenvalues of the system matrix of the observer error   −1 −l1ε 1 · · · 0 −l2ε−2 0 · · · 0 A −Λ(ε)lcT =  . . .    .. .  .. . . ..  l ε−n 0 0− n · · · whose characteristic polynomial is P(s) = sn l1 l l + sn−1 2 + n−2 n 2 s + · · · + ε ε εn λ1 λ2 λ (s )(s ) (s n = + + · · · + ) ε ε ε With the respectively eigenvalues in the form: λ λ̃ i i = ε The eigenvalues λ̃i move with the value ε from the origin of the complex plane. 
The eigenvalues are getting larger with decreasing ε, that is, they move farther and farther to the left in the complex plane. It follows that: Re(λ lim Re(λ̃ ) lim i) i = = −∞ ε→0 ε→0 ε then since all eigenvalues are selected with real part negative the error dynamic is stable. The Figure 4.8 illustrates the dependency of eigenvalues λ̃i with ε. n−1 n−1 For the considered case ϕ(z,u,u̇, · · · , u ) = ϕ(ẑ,u,u̇, · · · , u ) the observer error converges to zero due to the linear error dynamics. The convergence is faster if smaller parameter ε 4. Design of Observers for Nonlinear Systems 42 Figure 4.8.: Behavior of eigenvalues in a High Gain Observer [34] are chosen. n−1 n−1 Now is considered the case when ϕ(z,u,u̇, · · · , u ) , ϕ(ẑ,u,u̇, · · · , u ) ,. Transforming the estimation error e by e1      ê   1      e2  1  ε− ê  1 0 · · · 0  0 ε−1  · · · 0  e 2   =  .   ..  =   =  . . .  ê = εΛ(ε)ê · · · .. .. . .. . . en ε−(n−1)ên 0 0 ε−(n−1) · · · The error dynamics is given by: n−1 n−1 εΛ(ε)e˙̂ = (A −Λ(ε)lcT)εΛ(ε)ê + b(ϕ(z,u,u̇, · · · , u ) − ϕ(ẑ,u,u̇, · · · , u )) multiplying Λ−1 by the left ˙̂ n−1 n−1 εe = (εΛ−1(ε)AΛ(ε) − lcTεΛ(ε))ê + Λ−1(ε)b(ϕ(z,u,u̇, · · · , u ) − ϕ(ẑ,u,u̇, · · · , u )) then with this consideration      ε 0 · · · 0  0 1 0 · · · 0   ε−1 0 · · · 0  εΛ−1  (ε)AΛ(ε)  =   2  0 ε · · · 0       0 0 1   · · · 0  0 ε−2  · · · 0  .. .. . . ..   .. .. .  . . . . . . .. . . . ...       = A .. . . . . .  . . . ..  0 0 εn 0 0 0 −n · · · · · · 0 0 0 · · · ε 4. Design of Observers for Nonlinear Systems 43 and      −1 l1 0 · · · 0 ε 0 · · · 0    l2 0 · · · 0 lcT  εΛ(ε) = ε   . . .    0 ε−2  · · · 0   .. .. . . . ..   .. .. . . . . . . ..  = lcT nl 0 0 0 0 ε−  n · · · · · · and taking Λ−1(ε) = b , we obtain the next transformed estimation error equation ˙̂ n−1 n−1 e = (A T − lc )ê + εnb(ϕ(z,u,u̇, · · · , u ) − ϕ(ẑ,u,u̇, · · · , u )) (4.10) n−1 From this equation it can be seen directly that the nonlinear term εnb(ϕ(z,u,u̇, · · · , u n−1 ) − ϕ(ẑ,u,u̇, · · · , u )) becomes negligible for smaller ε. The linear error dynamics of the high-gain observer results e˙̂ = (A − lcT)ê (4.11) and has a constant matrix   −l1 1 0 · · · 0   −l2 0 1 · · · 0 A − lcT =  .. .. .. . . .. . . . . .  −ln 0 0 · · · 0 whose eigenvalues are prespecified arbitrarily. Then choosing ε only sufficiently small, the observers will have a stable behavior and the observer error e will converge to zero or in others words the estimated value x̂ will converge to the system state x. From the fact that for small values ε the elements of the observer vector l(ε) = Λ(ε)l result in large values, resulting the name high-gain observers. It should be noted that the eigenvalues of the high-gain observer seen in the first case are greater than the eigenvalues of temporally transformed observer with the system matrix by a factor of 1/ε . 4.2.2. Considerations For simplifying the expressions in this section from now on the factor 1/ε will be represented as L. Now a few considerations to the corresponding Lyapunov function will be performed. The direct method of Lyapunov stability for stability is described in [43]. First, a simple coordinate change of the estimation error is performed. Let ζ := Λ−1e. 
The matrix Λ is positive definite for L > 0 and thus always invertible. This follows the dynamics of ζ: 4. Design of Observers for Nonlinear Systems 44 ζ̇ = Λ( −1ė = Λ−1(A −)ΛKC)Λζ = Λ−1AΛ − KCΛ ζ = (LA − K · LC)ζ = L(A − KC)ζ Furthermore apply that for (A − KC) Hurwitz, there is ∀Q = QT > 0 ∃!P = PT > 0: (A KC)T − P + P(A − KC) = −Q (4.12) Thus the existence of a quadratic Lyapunov function V with V(ζ) = ζTPζ ensured. For the time derivative results: ∂V(ζ) [ ] V̇(ζ) = ζ̇ = LζT · (A − KC)TP + P(A − KC) ζ ∂ζ = −LζTQζ < 0 ∀ζ , 0 Consider the following statement valid for symmetric matrices: M = MT ∀ > 0 : λmin(M) x 2 T || || ≤ x Mx ≤ λmax(M)||x 2 || (4.13) where λmin(·) is the smallest and λmax(·) the biggest eigenvalue for the particular matrix referred. Then it results: V̇(ζ) ≤ −Lλmin(Q) 2 ||ζ|| (4.14) V V ≤ λmax(P)||ζ 2 2 || ⇔ ||ζ|| ≥ (4.15) P>0 λmax(P) Thus there is obtained: λ (Q) V̇ LγV , γ : min ≤ − = (4.16) λmax(P) Then using the comparison principle [43] the following assessment was obtained : V (ζ(t)) = ζ(t)TPζ(t) ≤ exp[−Lγt] · V(ξ(0)) = exp[−Lγt] · ξT 0 Pξ0 (4.17) Thus, as well as the relationship 4.13, results in the following relationship for the transformed estimation error ζ: 2 λ ζ(t) max(P) exp[ Lγt] ζ 2 || || ≤ − · || 0|| (4.18) λmin(P) It is easy to see that the error dynamics is asymptotically stable for L > 0. By inverse transformation we may easily calculate the estimation errors e. Splitting the observer gain in Λ and K creates here structural degrees of freedom, which can be used for the addition of non-linearities and disturbances now. Furthermore, remains to choose only the parameters L to the linear observer design of the nominal system. This makes the approach relatively simple. With these considerations and the notation introduced the 4. Design of Observers for Nonlinear Systems 45 concept of the high gain observer, and its properties can be deduced. System with disturbances For the system Σ̃ with K as the Lipschitz constant of ϑ the following observer is proposed: ( ) z˙̂ (t) = Aẑ(t) + ϕ0(ẑ) + ΛK y − Cẑ , ẑ(0) = ẑ0 x̂(t) = o−1 n (ẑ(t)) Now analogous to the previews of estimation errors e := ẑ − z the estimation error dynamics: ė(t) = (A −ΛKC)e(t) + ΛKw(t) − env(t) + en (ϑ(ẑ(t)) − ϑ(z(t))) (4.19) where ∆ϑ = ϑ(ẑ) − ϑ(z) = ϑ(z + e) − ϑ(z). Now, the estimation error in the coordinates ζ = Λ−1e can be calculated: ζ̇ = Λ−1ė = L(A − KC)ζ + Kw Λ−1 − env + Λ−1en∆ϑ Now let V be a quadratic Lyapunov candidate of the form V = ζTPζ with P = PT > 0, as a solution of the Lyapunov function for the linear part of the system matrix A − KC and given Q = QT > 0, as above. Then the time (derivative is as follows: ) V̇(ζ) = −LζTQζ + 2ζTP Kw Λ−1 − env + Λ−1en∆ϑ (4.20) As an auxiliary calculation now the Lipschitz-continuous function ϑ is considered. The following applies: |∆ϑ| = |ϑ(ξ + e) − ϑ(ξ)| ≤ k · ||e|| (4.21) ! So that L ≥ 1: Λ−1en∆ϑ = L−nen (ϑ(ξ + Λζ) − ϑ(ξ)) L−n ≤ en√· k||Λζ|| k = en √n (Lζ1)2 + (L2ζ2)2 + . . . + (Lnζ )2 L ( ) ( ) n 1 2 1 2 = enk ζ + ( )2 Ln−1 1 Ln−2 ζ2 + . . . + ζn L≥1 ≤ k · ||ζ|| Thus, with the help of the triangle inequality, the Cauchy-Schwarz inequality and the 4. 
Design of Observers for Nonlinear Systems 46 relationship (4.13) further assessments are carried out: V̇ ≤ −LζTQζ + ︸2ζ ︷TP︷k w︸ − ︸2ζ T P ︷Λ︷− 1 e n ︸v + ︸2ζ T P Λ ︷−︷1 e n ∆ ︸ϑ ≤2λmax(P)||k||·||ζ||w∞ ≤2λmax(P)L−nv∞ ≤2λmax(P)k||ζ||2 (4.14) ≤ −(Lλmin(Q) − 2kλmax(P)) 2 ||ζ|| + 2λmax(P)||ζ||( −n ||k||w∞ + L v∞) (4.15),(4.16) λ (P) √ ≤ −(Lγ − 2k)V + 2 √max ( −n ||k||w∞ + L v∞) V λmin(P) √ The Lyapunov function is now w(ritten w)ith W := V and therefore V̇ 1 2k λ Ẇ 2 L γW √max(P) = −n √ ≤ − − + (||k||w∞ + L v∞) (4.22) 2 V 2 γ λmin(P) Now again with the comparison principle L 2k 0 := γ and integration: [ 1 ] W(t) ≤ exp − γ(L − L0)t W 2 0 √ ∫ λ t [ ] max(P) 1 + [ (||k||w∞]+ L−nv∞) · exp − γ(L − L0)τ dτ λmin(P) 0 2 1 = exp − γ(L − L0)t W 2 0 √λmax(P) n 2 ( [ 1 ]) + ( − ||k||w∞ + L v∞) · 1 − exp − γ(L − L0)t λmin(P) γ(L − L0) 2 [ 1 ] ︸ ︷︷ ︸ ≤1 ∀t≥0,L≥L0 λ (P) 2 ≤ exp max − γ(L − L0)t W0 + √ (||k||w + L−n ∞ v∞) 2 λmin(P) γ(L − L0) ! Important here is that L ≥ L0 applies. From this it can be deduced directly on the transformed√error: (4.15) V [ 1 ] 2c2 ||ζ|| ≤ ≤ cP · exp − γ(L − L0)t ζ + P · || || (||k||w + L−nv ) λmin(P) 2 γ(L ∞ ∞ − L0) √ λmax(P) where cP := √ . Finally, the result for the estimation error e = Λζ with ||e n || ≤ L ||ζ|| λmin(P) und ||ζ 1 || ≤ L ||e0|| is given by: [ 1 ] 2c2 k Ln 2 || || 2c ||e(t)|| ≤ cP · exp − γ(L − L0)t Ln−1 ||e0|| + P w + P v ∀t ≥ 0 2 γ(L ∞ − L0) γ(L − L0) ∞ With this assessment all the important features of the approach of high gain observer are shown. If no errors in the design, with increased profit of L, the error decay is exponential. With L > L0 asymptotic stability of the dynamics of the estimation error is assured. The large values of L also amplify the measurement noise but reduce the impact of process failure. This effect is called peak phenomenon. and should always be considered in the 4. Design of Observers for Nonlinear Systems 47 design. Systems with inputs This section describes the previous considerations to systems to be expanded with input u. Based on Σ̃ an analogous observer approach is used. The proof for this is different only in a point of the previous derivation. It is about the auxiliary calculation based on the Lipschitz estimate from equation (4.21). One regards the component-wise Lipschitz estimate o∣f ϕ and considers its particular structur∣e. Th∣e∣ following applie ∣∣ ∣∣s: ϕi (ξ1 + e ∣ ∣∣ ∣∣ 1, . . . ,ξi + ei,u) − ϕ (ξ1, . . . ,ξi,u) ∣ ≤ ki∣∣[e1, . . . ,ei,︸0, ︷. .︷. ,︸0]∣∣ , (4.23) n−i Zeros where ki > 0, i = 1, . . . ,n are the components of the Lipschitz constant and ei represents the respective component of the estimation error. Now considering the off multiplied component in the estimation of Lyapunov. And then met Λ−1 · (ϕ(ξ + Λ−1ζ,u) − ϕ(ξ,u)) following statement: 1 ∣∣∣ ( ) ∣∣ ki ∣∣∣∣ ∣∣∣ϕ ξ i T ∣ Li i 1 + Lζ1, . . . ,ξi + L ζi,u − ϕi (ξ1, . . . ,ξi,u) ∣ ≤ ∣∣ Li∣∣ Λ · [ζ , . . . ,ζ ∣∣ 1 i,0, . . . ,0] ∣∣ L≥1 k ∣∣[ζ , . . . ,ζ ,0, . . . ,0]T∣∣∣∣ ≤ i 1 i ∣∣ With k := max {ki} as total Lipschitz constant is obtained: i=1,...,n −1 ||Λ ∆ϕ|| ≤ k · ||ξ|| (4.24) The rest of the proof can be continued analogously to give the same estimate for the estimation error that the above section. This step of the Lipschitz constant determination has particularly importance for the design method. 4.2.3. Numerical Limits and Implementation Theoretically, an infinite gain for continuous systems could be envisaged. Since the implementation of the observer, however, will be discrete, the sampling Ts here is subject to certain limits. 
In practice, one of the biggest obstacles is to ensure that the observer, and in particular the peaks remain in the valid range. A relatively simple approach to do this is the direct estimate of an upper limit for the gain L under a given k and a fixed maximum anticipated initial deviation emax. Considering the discrete Euler realization of the o(bserver) dynamics −1 ˙̂ ∂o (x̂) ( ) x= f (x̂) + g(x̂)u n (+ ) Λk y − C ∂x 1 ẑ − Update  x̂+ x̂ T  ∂o f (x̂) g(x̂)u n(x̂) ( ) ⇒ = + s · + + Λk y − Cẑ  ∂x  , 4. Design of Observers for Nonlinear Systems 48 a limit to be determined. It applies to the discrete system x̂+ := x̂((k+1)Ts) and x̂ := x̂(kTs). Taking x̂ (h)hom ∈ O , then should also x̂+ ∈ O(h)hom. This relationship can then via L be changed to determine a bound. This procedure will now be demonstrated using the example of the criterion x̂ > 0, this notation is interpreted componentwise. Let f = T−1 s s the sampling and ey := y − h(x̂) the error of measurement. Applying the fo(llowing: ( ) ) −1 x̂+ = x +( Ts · )f (x̂) + g(x̂)u ∂o (x̂) + n ∂x Λkey > 0 ∂o −1 n(x̂) ⇔ − ∂x ( Λkey) (< fsx̂ + f (x̂) + g(x̂)u On pos.def. ke < ∂on(x̂) ) ⇒ −Λ y ∂x fsx̂ + f (x̂) + g(x̂)u komp.weise [( ) ] Lih (( e ) <[( ∂on(x̂ ( ) ⇒ i − y ∂x )) fsx̂ + f (x̂) + g(x̂)u i ey<0 ] ) 1 L < 1 ∂on(x̂) ( ) ⇒ h e ∂x fsx̂ + f (x̂) + g(x̂)u i i max,1 i The evaluation of this criterion can then be carried out numerically. Symbolic analysis with appropriate software can optionally be carried out for the remaining restrictions. Alternatively, a heuristic tuning for the upper limit can also be done via simulations. 4.2.4. Realization and Results For the observer design individual steps of the method are presented. The starting point is first define the observability of the nonlinear system of the form Σ. As described in Section 3.2 initially shown the system is LWO for the validity area. In these areas, based on the portion of the homogeneous system of a transformation of the state under section 3.28 transformation take place. If all criteria are met, the system can be expressed in an ONF. Then the non-linear function ϕ has to be analyzed in more detail. The aim is to obtain the smallest possible Lipschitz constant. For this, the function is considered component-wise with i = 1, . . . ,n considered and attempts to assign each component to a separate Lips∣chitz constant. ∣∣ ∣ ∣∣ ∣∣ ϕi (ξ1 + e1, . . . ,ξ e ∣ ∣∣ ∣∣ i + i,u) − ϕ (ξ1, . . . ,ξi,u) ∣ ≤ ki∣∣[e1, . . . ,ei,0, . . . ,0]∣∣ In certain cases Lipschitz constant can be directly determined by considerations and skillful transformations analytically. If this is not so easy, numerical methods can be used. As listed in [44] , using the mean value theorem following ∣∣∣ ( ) ( ) ∣∣∣ ∣∣∣∣∣∣∣∣∂ϕ (ξ,u) ∣∣∣∣∣ estimate can be made: u i ∣ ∣∣ ∣∣ ϕi ξ̃, − ϕi ξ̂,u ∣∣ ≤ sup ∣∣ · ∣∣ξ̃ − ξ̂∣∣∣∣ (4.25) ξ ∂ξ For this method, therefore, the derivatives of each component according to the state are used. This results in all in a Lipschitz constant k := max {Ki} . There is a critical area i=1,...,n where the partial derivatives tend to infinity, and will therefore not be Lipschitz. This is 4. Design of Observers for Nonlinear Systems 49 also affected by the frequency measurement data. Therefore, the robustness and stability of the estimation error can not be guaranteed in this case. From simulations however, a reasonable estimate can be checKed . Then the gain for the nominal portion is designed so that the system matrix is Hurwitz. 
As already was discussed, the design method of the observer here corresponds to a linear design. For simplicity, a direct pole placement is made. To this are added desired ! eigenvalues or a desired range σ∗ = λ∗ , . . . ,λ∗{ 1 n} ⊂ C − . n det(λI − (A − kC)) = Π (λ − λ∗) i=1 i = λn + h n−1 n−1λ + . . . + h1λ + k Now Q := I is the identity matrix with dimension n. Then the Lyapunov equation is: (A T − kC) P + P(A − kC) = −I a unique solution P = PT > 0. This gives the important rise for estimating the gain constant γ with λ γ min(Q) = , λmax(P) where λmin(Q) = 1 und λmax(P) the maximal eigenvalue of P is designated. Finally, the lower bound of the gain L is determined as follows: ! 2k L > L0 = . (4.26) γ With this simple formula the matrix Λ is completed. Here, the phenomenon of peak should always be considered. In practice, numerical problems also impose maximum profit. Here is an alternative realization for the observer now to be presented, the background without explicit inversion of the observability map on remaining valid still all previous statements about the high gain observer. Now considering the observer approach again: ( ) z˙̂ = (Aẑ + ϕ()ẑ,u) + Λk y − Cẑ (x̂) −1 ∂o ⇒ x˙̂ n = ( ) z˙̂ ∂x −1 ( ) ( ) (x̂) (x̂) −1 ∂on ∂on ( ) = Aẑ +( ϕ(ẑ,u)) + Λk y − Cẑ ∂x −1 ∂o (x̂) (∂x ) = f (x̂) + g(x̂,u) n + Λk y − h(x̂) ∂x 4. Design of Observers for Nonlinear Systems 50 This results in the following equivalent observer: ( )−1 x˙̂ ∂on(x̂) ( ) = f (x̂) + g(x̂,u) + Λk y − h(x̂) (4.27) ∂x Now only the observability matrix On(x̂) are inverted, which is much simpler than the determination of the inverse of an non-linear mapping. This knowledge now allows for accommodating significantly more system classes than before, and now the implementation is possible. Figure 4.9 is the corresponding block diagram. Figure 4.9.: Structure of a High Gain Observer There is only yet a design problem: the Lipschitz estimate of the transformed nonlinearity ϕ is required. Here we can place either a heuristics tuning or by implicit solutions of a nonlinear system of equations used. Thus, using the implicit function theorem (see [43] a solution in an optional particularly critical environment expected to be carried out and so an estimate can be performed. This then falls naturally still coarser than before. According to the method, the observer must be tested in different situations simulation and adjusted if necessary. In order to simulate the dynamic of the process different values of the inputs were used, the same of the previous case see Table 4.1 where all times are in seconds and the inputs in percentage. For this simulation scenario, the engine speed considered is N = 2250 RPM, and the fuel rate W f = 6 Kg/h. The initial conditions of the states are pi(0) = 1.086e5 Pa;px(0) = 1.105e5 Pa; Pc(0) = 350.2 W. Three different designs were probed, with the three different ε. These cases are shown in Table 4.4 Table 4.4.: Simulation cases, ε Case ε 1 0.3 2 0.5 3 0.9 4. Design of Observers for Nonlinear Systems 51 The results for each state of the system are shown in Figures 4.10, 4.11, 4.12 respectively. The trajectories of the system are drawn with solid lines. We used dashed lines for the observer. In addition, we plotted the trajectories of the error between the real states and the estimated states using dash-dotted lines. It can be seen that the extended Luenberger observer designed with smallest ε , converges faster than the other cases but on the other hand, the peaking phenomenon is higher. 
5 4 x 10 x 10 1.35 0.5 ε =0.3 ε =0.3 ε =0.5 ε =0.5 1.3 ε =0.9 0 ε =0.9 Model 1.25 −0.5 1.2 −1 1.15 −1.5 1.1 −2 1.05 −2.5 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 Time[s] Time[s] Figure 4.10.: Performance of the High Gain observer for the state pi. Error of the estimation 5 4 x 10 x 10 1.4 0.5 ε =0.3 ε =0.3 ε =0.5 ε =0.5 0 1.35 ε =0.9 ε =0.9 Model −0.5 1.3 −1 1.25 −1.5 1.2 −2 1.15 −2.5 1.1 −3 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 Time[s] Time[s] Figure 4.11.: Performance of the High Gain observer for the state px. Error of the estimation Furthermore, for a better comparation of the performances of the three design the Figure 4.13 show the root mean square error for the three states. The complete performance is easier to compare and such as shown in the Table 4.5 that contain the Mean Square Error for each case, this is only a comparative value without other meaning, which unit is the same of the state unit squared, and is given by: These results show that the convergence time decreases using small values for ε, that due to poles are placed more distant from the origin, there is a trade off because a shorter convergence means in this case, greater peak at the beginning. To summarize, this approach considers that a diffeomorphism for the system not necessary to calculate the observer in original coordinates, only the inverse of the Jacobian of the observability mapping is sufficient, this procedure results a little easier to calculate for p [Pa] p [Pa] x i e e p p x i 4. Design of Observers for Nonlinear Systems 52 1200 500 ε =0.3 ε =0.3 ε =0.5 ε =0.5 1000 ε =0.9 400 ε =0.9 Model 800 300 600 200 400 100 200 0 0 −200 −100 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 Time[s] Time[s] Figure 4.12.: Performance of the High Gain observer for the state Pc. Error of the estimation 8 x 10 5 ε =0.3 ε =0.5 ε =0.9 0 0 0.02 0.04 0.06 0.08 0.1 Time[s] 8 x 10 10 5 0 0 0.02 0.04 0.06 0.08 0.1 Time[s] 5 x 10 4 2 0 0 0.02 0.04 0.06 0.08 0.1 Time[s] Figure 4.13.: Root mean square error P [W] c √ √ 2 √ (Pc − P̂c)2 (px − p̂x) (pi − p̂i)2 e P c 4. Design of Observers for Nonlinear Systems 53 highly nonlinear systems. It has been shown that an independent observer for the nominal linear design of the system, also based on the knowledge of the Lipschitz-constant of the nonlinearity part the gain ε for the matrix Λ can be estimated. It is relevant that A − KC is Hurwitz, another important part is the relationship of inequality for estimation error that explains the phenomenon of peak. Table 4.5.: Mean Square Error MSE(pi) MSE(px) MSE(Pc) ε = 0.3 2.0865 · 104 1.1842 · 104 17.9822 ε = 0.5 3.1523 · 104 2.4937 3 · 10 25.1868 ε = 0.9 4.9789 104 · 4.2822 · 103 50.9930 Essentially, if a larger gain is chosen the method results in: • Measurement noise and measurement errors are amplified • Reduction of the influence of process disturbances • The estimation error converges faster to its rest position (for the best case e = 0) • Higher peak effect, that is, the error in the initial moment Here one faces a trade off. Therefore, the observer approach is well suited for systems which are unreliable and data measured is filtered. It is also to be noted that in presence of interference a deviation of the estimation error can occur, thus asymptotic stability is not always perfectly achieved. However, this shift can be kept reasonably bounded, with a large enough gain. 4. Design of Observers for Nonlinear Systems 54 4.3. 
Extended Kalman-Bucy Filter One of the most used tools for design state observers and for many other applications is the Kalman filter. In 1960, Rudolph E. Kalman published his famous paper [45] which establishes the basics of this technique. Since then, there has been a multitude of applications, tools, papers and books describing and expanded the qualities of this filter. The Kalman filter is essentially a set of mathematical equations that implement an estimator of the predictor-corrector type that is optimal in the sense of minimizing the covariance of the estimated error. The great advantage of the Kalman filter is its relative simplicity and its robustness and ability to work remarkably well in many situations. A Kalman filter is an optimal estimator that infers parameters of interest as the states of a system from indirect, inaccurate or uncertain observations. If all noise is Gaussian, the Kalman filter minimizes the mean square error of the estimated parameters. The algorithm is recursive such that new measurements can be processed as they arrive. 4.3.1. Kalman Filter for Linear System The original theory of the Kalman Filter requires discrete time linear dynamic system description by vector difference equation with additive white noise that models unpredictable disturbances. Then, suppose we have a discrete linear system with state space as follows: x(k + 1) = Ax(k) + Bu(k) + v(k) y(k) = Cx(k) + w(k) with the matrices A Rn×n, B Rn×m ∈ ∈ and C p×n ∈ R , Where v(k) is a variable that represents the noise in the process and w(k) represents the noise in the measurement. Both variables are independent of each other, i.e. uncorrelated, white, and with a normal probability distribution: p(wk−1) ∼ N(0,Q) p(vk) ∼ N(0,R) The Kalman filter provides estimates of the state variables of a process using a feedback control: the filter estimates the process state at a point in time and then get a feedback of the measures. Therefore, we can divide the Kalman filter equations into two groups: time update called prediction and update measures, known as the correction phase. In the prediction phase the current state and error covariance, known as P, is calculated from the error covariance in the previous time. During the correction phase updating the 4. Design of Observers for Nonlinear Systems 55 Kalman gain is performed using the calculated error covariance in the prediction phase, measurements are taken and the state estimation calculated in the prediction phase is corrected, and finally it updates the error covariance P, the Kalman gain and the error covariance estimated in the previous instant. The Kalman filter is quite easy to calculate for small system orders because it is practically linear, except for the matrix inversion. It can also prove that the Kalman filter is an optimal estimator of the state of a process, given a metric quadratic error. • Initialization: x̂(0) P(0) k := 0 • Prediction: x̂(k + 1|k) = Ax̂(k) + Bu(k) P−(k + 1) = AP(k)AT + Q • Correction: ( )−1 K(k + 1) = P−(k + 1)CT CP−(k +( 1)CT + R ) x̂(k + 1|k + 1) = x̂(k + 1|k) + K(k + 1) y(k + 1) − Cx̂(k + 1|k) P(k + 1) = (I − − K(k + 1)C) P (k + 1) • Iteration: x̂(k + 1) = x̂(k + 1|k + 1) k = k + 1; Where the notation x̂(k+1|k). represents estimate x̂(k+1) at time k+1 based on information at the time k. This is a predictor-corrector method, which only depends on the preceding estimated state while the global cost functional J is minimized. This is a big advantage for online executability of the algorithm. 
The setting parameters of the observer, in this case, are the covariance matrices Q and R. It is recommended to choose positive definite diagonal matrices. The diagonal entries of Q should, in this case, be greater if greater is the noise is presented in the process. The entries of matrix R should be made greater, when measurement quality of the respective sensor becomes worse. Ultimately, this parameter choice boils down to tests or simulations. The structure of the Kalman Filter and the interdependencies of the different equations are shown in 4.14 4. Design of Observers for Nonlinear Systems 56 Figure 4.14.: Schematic diagram of a Kalman Filter. Kalman-Bucy Filter The Kalman-Bucy filter is the continuous time version of the Kalman filter. In the same way, that the discrete version is be understood it also works. Recall that the linear system with noise is given by: ẋ = Ax + Bu + v y = Cx + w it is disturbed by two zero-mean, normally distributed white noise processes v and w that are uncorrelated. These noises have the covariance matrices Q and S cov {v(t1), v(t2)} = Qδ(t1 − t2) cov {w(t1),w(t2)} = Rδ(t1 − t2) The Kalman filter with the filter matrix L determins with x̂ an estimate of the state variables vector x. For the observed system, the structure of the Kalman filter is completely the same structure of the Luenberger observer, as we know from the linear systems theory : x˙̂ = (A − LC)x̂ + Bu + Ly (4.28) Kalman filter and Luenberger observer are identical in their equations. They differ in determining the matrix gain L., since in the Luenberger observers the eigenvalues of the estimation error dynamics are always chosen, the resulting gain matrix L is fixed. The Kalman matrix L, however, is designed so that the influence of the noise v and w is minimal. This is done by the errors and the expected values.The solution has the form: L = PCTR−1 (4.29) 4. Design of Observers for Nonlinear Systems 57 The matrices R and Q are, in general, unknown and are often assumed symmetric, positive definite and bounded matrices. Only after the actual realization of the iterative optimization process gives some reasonable R and Q matrices that may leads to a satisfactory design result. 4.3.2. Extended Kalman Filter The extended Kalman filter is a variation of the Kalman filter to treat the problem of state estimation when the system is discrete and nonlinear. Suppose we have a nonlinear system as follows: x(k + 1) = f (x(k),u(k)) + v(k) y(k) = h(x(k)) + w(k) For the discrete Kalman filter, v is a variable representing the noise in the process and w represents the noise in the measurement. Both variables are assumed to be uncorrelated, white, and with a normal probability distribution: p(wk−1) ∼ N(0,Q) p(vk) ∼ N(0,R) The form of the extended Kalman filter is similar to the discrete Kalman filter, now only the system is nonlinear therefore needs to be linearized so that we can apply the Kalman filter. EKF solves this problem by calculating the Jacobian linearization of f and h around the estimated state. In the correction phase of the extended Kalman filter, the Kalman gain is also performed using the error covariance calculated in the prediction phase, measurements are taken and state estimation calculated in the prediction phase is corrected,finally the error covariance (P) with Kalman gain and the error covariance calculated in the previous time it is updated. As shown, the extended Kalman filter is a way to obtain a first order approximations of optimal terms. 
When the model is highly nonlinear, these approaches can generate expectations and covariance very different from the real expectations and covariance. These discrepancies can lead to malfunction or even filter divergence. Another drawback of the extended Kalman filter is that it requires the calculation of Jacobian matrices, which is not trivial in most cases and then could occur calculation errors difficult to detect. • Initialization: x̂(0) P(0) k := 0 4. Design of Observers for Nonlinear Systems 58 • Prediction: x̂(k + 1|k) = f (x̂(k),u(k)) ∂ f (x,u) ∣∣ A(k) = ∣ ∂x ∣ x=x̂(k),u=u(k) P−(k + 1) = A(k)P(k)A(k)T + Q • Correction: ∂h(x) ∣∣ C(k + 1) = ∣ ∂x ∣ x=x̂(k+1|k) ( )−1 K(k + 1) = P−(k + 1)C(k + 1)T C((k + 1)P−(k + 1)C(k + 1)T + R) x̂(k + 1|k + 1) = x̂(k + 1|k) + K(k + 1) y(k + 1) − C(k + 1)x̂(k + 1|k) P(k + 1) = (I − − K(k + 1)C(k + 1)) P (k + 1) • Iteration: x̂(k + 1) = x̂(k + 1|k + 1); k = k + 1; Extended Kalman-Bucy Filter In the case of a nonlinear system, the continuous version of this estimator considers the nonlinear system given by: ẋ = f (x,u) + v (4.30) y = g(x,u) + w (4.31) The estimation equation of the Kalman filter can formally be applied to the non-linear situation and gives the observer equation x˙̂ = f (x̂,u) + L(y − ŷ) (4.32) The observer form is the same than analyzed before, but now with the advantage in the choice of the matrix L. Therefore, one chooses the observer matrix L time-dependent, or depending on the course of the trajectories of the system. These noises have the covariance matrices Q and R Then the design of the L matrix is carried out by means of known design equation of classical Kalman filter L = PCTR−1 (4.33) where the matrix M is now calculated by the time-dependent Riccati equation solving the 4. Design of Observers for Nonlinear Systems 59 differential Riccati equation Ṗ(t) = A(t)P(t) + P(t)AT(t) + Q P(t)CT(t)R−1 − C(t)P(t) (4.34) Figure 4.15.: Schematic diagram of the Extended Kalman Bucy Filter Here, the matrices A(t) and C(t) result from a first-order Taylor approximation, i.e., a linearization around the current and the known r∣eference point x̂(t) ∂ f ∣ A(t) = ∣ ∂x ∣∣∣∣x̂(t) ∂g C(t) = ∂x ∣ x̂(t) For the initial value for the desired matrix P(t) there{ is required the co}variance matrix : P(0) = cov {x0 − x̂0,x0 − x̂0} = E (x0 − x̂0)(x T 0 − x̂0) The structure of the extended Kalman filter and the interdependencies of the different equations are shown in 4.15 It should be noted that the differential form of the Riccati Equation 4.34 is not resolved offline, to determine the stationary value of P. This is therefore not possible offline because A(t) continuously changes. Here is needed to determine continuously the Jacobian matrices A(t) and C(t). Stability and quality of the estimate are not saved in the extended Kalman filter, normally must be checked by 4. Design of Observers for Nonlinear Systems 60 simulation. Therefore, it has the disadvantage that we get no general statement, but stability and predictive accuracy are only valid for each conducted specific simulations. In the design of the Kalman filter, that is, the choice of R and Q, experience is required and necessary. The matrices R and Q are in general unknown and are often assumed to be unitary matrices. Only after the actual realization of the iterative optimization process gives the R and Q matrices and then it leads to a satisfactory result design. 
Selection of initial matrices In order to achieve a good response of the system, the following statement can be considered at moment to select the initial values for the covariance matrices • R: Measure of accuracy of the measurement Often take a very small value The greater R, the smaller the Kalman gain, i.e. adapting to changing process more slowly • Q: Measure of accuracy of the states The greater Q, the greater K, i.e. adapting to changing process faster. • P: Diagonal Matrix with greater values. The greater P, more inaccurate the preliminary information. Note that the general system has a differential equation of order 1 2 n2 + 3 2 n because of the Riccati Differential Equation and the Differential Equation of the observer, and should be solved simultaneously, since A(t) and C(t) depends on x̂(t) and u(t). The practical implementation of the extended Kalman-Bucy filter requires, except in very simple cases, a numerical solution of the Riccati differential equation by means of an integration process for example by means of the Runge-Kutta method. The numerical solution of the Riccati differential equation can lead to calculation problems. For highly-nonlinear systems as discussed in this thesis can be considered for the calculus of A and C through symbolic analysis software like Maple or Mathematica.A more practical option is to use high-precision numerical methods, as the method used in this case, Step Complex Differentiation explained below. Step Complex Differentiation The step complex differentiation method employs complex arithmetic to obtain the numerical value of the first derivative of a real-valued analytic function of a real variable, avoiding the loss of precision inherent in the traditional numerical method, for example, the most used for this aim called finite differences. The algorithm of step complex differentiation method is based on the features of analytic 4. Design of Observers for Nonlinear Systems 61 functions, i.e., infinitely differentiable and it can be smoothly extended into the complex plane. Then given F(x) such an analytic function, x0 a point on the real axis, and h a real parameter. Expand F(z) in a Taylor series off the real axis. h2Fx ih3F F(x0 + ih) = F(x0) + ihF(x ) 0 0 − − + ... (4.35) 2! 3! Now taking the imaginary part of both sides and later divide by h. Im(F(x0 + ih)) F(x0) = + O(h2) (4.36) h The method is based then on the easy evaluation of the function F at the imaginary argument x0 + ih, and dividing by h, gives an approximation to the value of the derivative, F(x0), that is accurate to order O(h2). We might as well choose h = 108. Then the error in the approximation is about the same size as the round off error involved in storing a double precision floating point value of F(x0). Then considering h = 108, the whole complex step differentiation algorithm is given by: Im(F(x0 + ih)) F(x0) ≈ h With this easy approach will the matrices A and C are calculated for the realization of the simulation of the Extended Kalman-Bucy Filter for the system under consideration. 4.3.3. Convergence Analysis The analysis is made using Lyapunov theorem of stability, according [33]. The estimation error is given by: e = x̂ − x then ė(t) = A(t) − P(t)CT(t)R−1(t)C(t)]e(t) + H(t)w(t) − v(t) Ṗ(t) = P(t)AT(t) + A(t)P(t) − P(t)CT(t)R−1(t)C(t)P(t) + Q(t) with e(t0) = e0 and P(t0) = P0. Considering a Lyapunov function candidate: V(t,e(t)) = eT(t)P−1(t)e(t) According the theorem this function must satisfies the following conditions: 4. 
Design of Observers for Nonlinear Systems 62 1. V is positive definite This condition, is fulfilled if the pair (A(t),C(t)) is uniformly completely observable that means l1‖e(t) 2 V(t,e(t)) l e(t) 2 ‖ ≤ ≤ 2‖ ‖ 2. V̇ is negative definite V̇ = ėT d (t)P−1(t)e(t) + e(t)TP−1(t)ė(t) + eT(t) P−1(t)e(t) dt d V̇ = eT(t)(A HC)TP−1 − (t)e(t) + e(t)TP−1(t)(A −HC)e(t) + eT(t) P−1(t)e(t) dt V̇ = −e(t)TP(t)−1Q(t)P(t)−1e(t) − eT(t)C(t)TR(t)−1C(t)e(t) + 2e(t)TP(t)−1(t)(H(t)w(t) − v(t)) ≤ c1‖e(t) 2 ‖2 + 2c2‖e(t)‖2‖(H(t)w(t) − v(t))‖2 Then the observation error is input to state stable with respect to H(t)w(t)-v(t), if H(t)w(t)-v(t)=0 then e=0 is a locally, uniformly, asymptotically stable at the equilibrium point. 4.3.4. Realization and Results In order to simulate the dynamic of the process, the values of the control inputs, initial conditions and the other inputs are the same as in the previous cases. Three different designs were probed, with the three values for Q, considering a diagonal matrix with all the elements of the diagonal identical. There are Qii = 0.1, Qii = 1, Qii = 10 . The results for each state of the system are shown in Figures 4.16, 4.17, 4.18 respectively. The trajectories of the system are drawn with solid lines. We used dashed lines for the observer. In addition, we plotted the trajectories of the error between the real states and the estimated states using dash-dotted lines. It can be seen that the observer design has almost the same performance with all Q. 5 4 x 10 x 10 1.35 0.5 Q Q 1 1 Q Q 2 2 1.3 Q 0 Q 3 3 Model 1.25 −0.5 1.2 −1 1.15 −1.5 1.1 −2 1.05 −2.5 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 Time[s] Time[s] Figure 4.16.: Performance of the Extended Kalman Bucy filter for the state pi. Error of the estimation Furthermore, for a better comparison of the performances of the three design the Figure 4.19 show the root mean square error for the three states. p [Pa] i e p i 4. Design of Observers for Nonlinear Systems 63 5 4 x 10 x 10 1.4 0.5 Q Q 1 1 Q Q 2 2 1.35 Q 0 Q 3 3 Model 1.3 −0.5 1.25 −1 1.2 −1.5 1.15 −2 1.1 −2.5 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 Time[s] Time[s] Figure 4.17.: Performance of the Extended Kalman Bucy filter for the state px. Error of the estimation 1200 350 Q Q 1 1 Q Q 2 300 2 1000 Q Q 3 3 Model 250 800 200 600 150 100 400 50 200 0 0 −50 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 Time[s] Time[s] Figure 4.18.: Performance of the Extended Kalman Bucy filter for the state Pc. Error of the estimation 8 x 10 5 Q 1 Q 2 0 Q 3 0 0.05 0.1 0.15 0.2 0.25 Time[s] 8 x 10 5 0 0 0.05 0.1 0.15 0.2 0.25 Time[s] 5 x 10 2 1 0 0 0.05 0.1 0.15 0.2 0.25 Time[s] Figure 4.19.: Root mean square error of the Extended Kalman Bucy filter P [W] c p [Pa]x √ √ √ (Pc − P̂ 2c) (px − p̂ 2x) (pi − p̂ )2i e e P p c x 4. Design of Observers for Nonlinear Systems 64 The complete performance is easier to compare and such as shown in Table 4.6 that contain the Mean Square Error for each case. This approach is based on a stochastic perspective the only thing that can be concluded by comparing the results is the correct convergence of the estimator for the three states in all three cases proposed. The observer based on Extended Kalman Filter generalizes and improves the Extended Luenberger Observer, especially because the gain of the observer is recalculated at each iteration. The main disadvantage is the necessity of information about the noise. The choice of the initial values of the matrices P, Q, and R play a role in the result of the estimation. 
4.4. Sliding-Modes Observer

4.4.1. Introduction

This section is based on the work of J. Moreno presented in [46]; its aim is to design an observer for the system (2.8). The proposed observer takes advantage of two interesting mathematical characteristics: discontinuity and homogeneity. Discontinuity is commonly used in control due to its simplicity and is the basis of sliding mode theory: the required information is binary and therefore only a switch or relay is needed, and it generally gives good performance in terms of robustness, stability and convergence time. There are two approaches to discontinuous systems: sliding mode controllers, in which a switch creates new trajectories, and switching controllers without sliding modes, in which the system behavior moves from one trajectory to another by switching.

4.4.2. Homogeneity

For a complete explanation of homogeneity as a mathematical property, it is first necessary to define:

Definition 4.1 A vector field f : R^n → R^n is called homogeneous of degree δ ∈ R with the dilation d_κ : (x1, x2, ..., xn) → (κ^{r1}x1, κ^{r2}x2, ..., κ^{rn}xn), where r = (r1, r2, ..., rn) are some positive weights, if for any κ > 0 the identity f(x) = κ^{−δ} d_κ^{−1} f(d_κ x) holds.

The corresponding definition of homogeneity for scalar functions is:

Definition 4.2 A scalar function V : R^n → R is called homogeneous of degree δ ∈ R with the dilation d_κ if for any κ > 0 the identity V(x) = κ^{−δ} V(d_κ x) holds.

Homogeneity was defined by Euler, and there are many examples of the usefulness of this feature, e.g. the Weierstrass elliptic function, or the fact that a transformation of the variables of a tensor changes the tensor into another whose components are linear homogeneous functions of the components of the original tensor [47]. Weighted homogeneity was proposed by Zubov for weights ri > 0 and degree δ ∈ R: a system ẋ = f(x) is homogeneous if f(Λ_r x) = λ^δ Λ_r f(x), with Λ_r = diag(λ^{r1}, ..., λ^{rn}). Linearity, for example, is a generalization of classical homogeneity with δ = 0 together with additivity. Working with homogeneous time-invariant systems is a very good option because of their features: they are nonlinear systems, but with algebraic properties that approximate, and are compatible with, the linear ones. A small numeric check of Definition 4.1 for an illustrative vector field is sketched after the following list of properties.

Properties of homogeneous time-invariant systems:

• If x = 0 is locally attractive (LA) ⇔ it is globally asymptotically stable (GAS).
• If x = 0 is GAS and δ < 0 ⇔ x = 0 is finite-time stable.
• If x = 0 is GAS and δ = 0 ⇔ x = 0 is exponentially stable.
• If x = 0 is GAS and δ > 0 ⇔ x = 0 is asymptotically stable.
• If x = 0 is GAS ⇔ there exists a homogeneous Lyapunov function.
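As a small illustration of Definition 4.1, the following sketch checks the identity numerically for an example vector field that is homogeneous of degree δ = −1 with weights r = (3, 2); the vector field and the test points are illustrative and not related to the engine model.

```python
import numpy as np

# f(x) = (x2, -|x1|^(1/3) sign(x1)) satisfies f_i(k^r1 x1, k^r2 x2) = k^(delta + r_i) f_i(x)
# for delta = -1 and weights r = (3, 2), which is Definition 4.1 written componentwise.
def f(x):
    return np.array([x[1], -np.sign(x[0]) * np.abs(x[0]) ** (1.0 / 3.0)])

r, delta = np.array([3.0, 2.0]), -1.0
x = np.array([0.7, -1.3])
for kappa in (0.5, 2.0, 10.0):
    lhs = f(kappa ** r * x)                  # f evaluated at the dilated point d_kappa(x)
    rhs = kappa ** (delta + r) * f(x)        # scaled vector field
    print(kappa, np.allclose(lhs, rhs))      # True for every kappa > 0
```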
4.4.3. Derivation

The use of discontinuous terms for observer design was first investigated within the theory of sliding modes. It has been shown in many contributions that a continuous observer, such as the Luenberger observer, can assure convergence only in the absence of disturbances. The linear observer converges asymptotically, not in finite time, and is not able to converge to the true value of the unmeasured state in the presence of an unknown input. In fact, finite-time convergence is impossible for any observer having locally Lipschitz continuous injection terms, and convergence in the presence of persistent unknown inputs is also impossible for any continuous observer [48]. In order to alleviate this problem, sliding mode observers with discontinuous injection terms, normally using the sign function, were designed. However, such an observer is also unable either to converge in finite time or to estimate the velocity correctly in the presence of an unknown input. Additionally, in both of the above cases the convergence time depends on the initial conditions, which means that it is difficult to estimate a priori the time the observer needs to provide a good estimate. In order to achieve the desired features, a discontinuous observer named Generalized Super-Twisting Observer (GSTO) was proposed.

Generalized Super-Twisting Observer

The generalization was made for second order systems, when the plant is given in the form

ẋ1 = f1(x1, u) + x2
ẋ2 = f2(x1, x2, u).

The proposed GSTO then has the form

x̂̇1 = −l1 γ φ1(e1) + f1(x̂1, u) + x̂2
x̂̇2 = −l2 γ φ2(e1) + f2(x̂1, x̂2, u),

where e1 = x̂1 − x1 is the output estimation error, l1 > 0, l2 > 0, and γ > 0 is a gain that has to be selected sufficiently large in order to assure the convergence of the observer. The terms φ represent the injected nonlinearities and are given by

φ1(e1) = μ1 |e1|^{1/2} sign(e1) + μ2 |e1|^{q} sign(e1),    μ1 ≥ 0, μ2 ≥ 0,
φ2(e1) = (μ1^2/2) sign(e1) + μ1 μ2 (q + 1/2) |e1|^{q−1/2} sign(e1) + q μ2^2 |e1|^{2q−1} sign(e1),

where μ1 ≥ 0 and μ2 ≥ 0 are constants, not both zero, and q is a real number greater than 1/2. Since φ2(e1) = φ1'(e1) φ1(e1), both are monotonically increasing functions of e1; φ1(e1) is continuous, while φ2(e1) is discontinuous at e1 = 0. A short simulation sketch of such an observer for an illustrative second-order plant is given below.
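The following sketch simulates a GSTO of the above form for an illustrative second-order plant with an unknown input; the plant, the gains, the parameters and the sampling time are ad hoc assumptions made for the example and are not taken from the thesis.

```python
import numpy as np

def phi1(e, mu1, mu2, q):
    return mu1 * np.abs(e) ** 0.5 * np.sign(e) + mu2 * np.abs(e) ** q * np.sign(e)

def phi2(e, mu1, mu2, q):
    # phi2(e) = phi1'(e) * phi1(e)
    return (0.5 * mu1 ** 2 * np.sign(e)
            + mu1 * mu2 * (q + 0.5) * np.abs(e) ** (q - 0.5) * np.sign(e)
            + q * mu2 ** 2 * np.abs(e) ** (2 * q - 1) * np.sign(e))

# Illustrative plant: x1_dot = x2, x2_dot = -x1 + w(t), with w an unknown input.
Ts, mu1, mu2, q = 1e-4, 1.0, 1.0, 1.5
l1, l2, gamma = 2.0, 1.5, 5.0
x, xh = np.array([1.0, -1.0]), np.array([0.0, 0.0])
for k in range(int(2.0 / Ts)):
    w = 0.5 * np.sin(5 * k * Ts)                     # unknown input / perturbation
    e1 = xh[0] - x[0]                                # output estimation error
    x = x + Ts * np.array([x[1], -x[0] + w])
    xh = xh + Ts * np.array([-l1 * gamma * phi1(e1, mu1, mu2, q) + xh[1],
                             -l2 * gamma * phi2(e1, mu1, mu2, q) - xh[0]])
print(xh - x)   # the estimation error should be small after 2 s
```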
The second order plant is not the case under consideration here, but it is taken as a stepping stone in order to understand the discontinuous observer. Now the effect of the gains on the convergence time is discussed. When a perturbation is present in the dynamics of the output state, we know from the observability properties that it is impossible to obtain convergence of the estimation error to zero; with a correct choice of the gains, however, this method achieves practical stability. The GSTO converges to zero in finite time, with or without unknown input, and the convergence time is basically the same even for very large initial estimation errors; this is due to the introduction of a nonlinear term with a power q larger than one. For different values of the parameters (μ1, μ2, q) some important particular cases are recovered:

Super-twisting observer. It is obtained for (μ1 = 1, μ2 = 0); in this case φ2(e1) is discontinuous while φ1(e1) remains continuous. This case is widely used in the literature. The effect of the discontinuous term is convergence of the estimation error to zero in finite time, in the absence of perturbations or in the presence of a non-vanishing unknown input or perturbation; however, this convergence time grows very fast (and unboundedly) with the size of the initial estimation error, and additionally it is not able to assure global convergence in the presence of state-dependent (growing) terms, or, for some gains, if the perturbation is large.

Homogeneous observer. This case is obtained for φ1(e1) = |e1|^{q} sign(e1) and φ2(e1) = |e1|^{2q−1} sign(e1). The system without perturbation is homogeneous and converges in finite time if the value of q is chosen between 1/2 and 1; however, it is not able to converge to zero when a perturbation is present. If q is greater than one, it converges only asymptotically, but its convergence time is uniform in the initial conditions.

Given these drawbacks, the best performance is evidently reached by the generalized super-twisting observer; however, this algorithm has been developed for second order systems, whereas the system considered in this work is of third order. The approach pursued here is therefore a homogeneous observer.

4.4.4. Realization

With this overview, the following design, a discontinuous homogeneous observer for a third order system, is explained first for the case when the system is in Observable Normal Form (ONF). An observer for an ONF system can be designed as

ẑ̇1 = −L k1 |e1|^{2/3} sign(e1) + ẑ2
ẑ̇2 = −L^2 k2 |e1|^{1/3} sign(e1) + ẑ3
ẑ̇3 = −L^3 k3 sign(e1) + K(ẑ)
ŷ = ẑ1.

The discontinuous observer proposed for this system is shown in Figure 4.20.

[Figure 4.20.: Diagram of a Discontinuous Observer for a system in Observable Normal Form]

Here e1 = ẑ1 − z1 is the output estimation error, and k1, k2 and k3 are positive gains selected appropriately in order to guarantee the convergence of the observer in the absence of external perturbations. The choice of the values of the gains is not a simple task and is itself the subject of several investigations; this study used a practical approach consisting of testing values subject to the following conditions:

• k1 > 0
• k2 > 0
• k1 k2 > k3

This is the simple way to assure a Hurwitz polynomial for a linear system. When the response of the observer for the system without disturbances is appropriate, only the value of L is adjusted, considering L > 0. Now, analogously to the theory of the High Gain observer, to obtain a design for our system only the observability matrix On(x̂) has to be inverted, which is much simpler than the determination of the inverse of a nonlinear mapping. The observer transformed back to the original coordinates is given by

x̂̇ = f(x̂) + g(x̂)u − (∂o_n(x̂)/∂x)^{-1} [ L k1 |e1|^{2/3} sign(e1) ;  L^2 k2 |e1|^{1/3} sign(e1) ;  L^3 k3 sign(e1) ]
y = x1.

This knowledge now allows for accommodating significantly more system classes than before, and the implementation becomes possible. The corresponding block diagram, analogous to Figure 4.9, is shown in Figure 4.21.

[Figure 4.21.: Diagram of a general Discontinuous Observer]
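A minimal simulation sketch of this ONF observer is given below; the plant nonlinearity K(z), the gains, the sampling time and the initial conditions are illustrative assumptions and not taken from the engine model.

```python
import numpy as np

# Illustrative plant in observer normal form: z1_dot = z2, z2_dot = z3, z3_dot = K(z), y = z1.
def K(z):
    return -2.0 * z[0] - 3.0 * z[1] - 2.0 * z[2] + np.cos(z[0])

k1, k2, k3, L, Ts = 6.0, 11.0, 6.0, 2.0, 1e-4      # k1, k2 > 0 and k1*k2 > k3 (Hurwitz-like choice)
z, zh = np.array([1.0, 0.0, -1.0]), np.zeros(3)
for _ in range(int(3.0 / Ts)):
    e1 = zh[0] - z[0]                              # output estimation error
    z = z + Ts * np.array([z[1], z[2], K(z)])
    zh = zh + Ts * np.array([
        -L      * k1 * np.abs(e1) ** (2 / 3) * np.sign(e1) + zh[1],
        -L ** 2 * k2 * np.abs(e1) ** (1 / 3) * np.sign(e1) + zh[2],
        -L ** 3 * k3 * np.sign(e1) + K(zh),
    ])
print(zh - z)   # the estimation error should be close to zero after a few seconds
```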
4.4.5. Error Dynamics

The dynamics of the estimation error is derived for the design based on the Observable Normal Form. With e1 = ẑ1 − z1, e2 = ẑ2 − z2 and e3 = ẑ3 − z3 it is given by

ė1 = −L k1 |e1|^{2/3} sign(e1) + e2
ė2 = −L^2 k2 |e1|^{1/3} sign(e1) + e3
ė3 = −L^3 k3 sign(e1) + w(t),

where w(t) = K(x̂) − K(x) − U(x, u̇).

In order to demonstrate convergence of the error to zero, a Lyapunov analysis is proposed by J. Moreno in [46], considering as Lyapunov function candidate the following homogeneous and differentiable form:

V(e) = (3/5) a1 |e1|^{5/3} − e1 e2 + (2/5) a2 |e2|^{5/2} − a2 e2 |e3|^3 sign(e3) + (3/5) a3 |e3|^5.

For L = 1, the derivative V̇(e) along the error dynamics can be computed term by term. The resulting function is homogeneous and discontinuous, and the difficulty at this point is that V and V̇ are not quadratic forms, so proving negative definiteness is not an easy task. However, there exists a set of values (k1, k2, k3, L) such that V is positive definite and satisfies the inequality

V̇(e) ≤ −α V^{4/5}(e)

for some α > 0. This differential inequality shows that V is a Lyapunov function for the observation error and that the trajectories converge in finite time. This method is an open area of research; it offers the possibility of extension to systems of arbitrary order, with a complete analysis in the Lyapunov framework. On the other hand, the effect of noise and the design of the gains are still under investigation.
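A standard comparison argument makes the finite-time claim above explicit (this step is not carried out in the thesis itself): integrating V̇ ≤ −α V^{4/5} by separation of variables gives (d/dt) V^{1/5} = (1/5) V^{−4/5} V̇ ≤ −α/5, hence V^{1/5}(e(t)) ≤ V^{1/5}(e(0)) − (α/5) t, so the estimation error reaches zero no later than T = 5 V^{1/5}(e(0)) / α.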
4.4.6. Results

The following figures show the performance of this observer for the estimation of the states of the turbocharged diesel engine model; the design was made for different values of the parameter L. The results for each state of the system are shown in Figures 4.22, 4.23 and 4.24, respectively. In order to simulate the dynamics of the process, the values of the control inputs, the initial conditions and the other inputs are the same as in the previous cases. Three different designs were tested, with three values of the gain, L1 < L2 < L3. The trajectories of the system are drawn with solid lines, dashed lines are used for the observer, and the trajectories of the error between the real and the estimated states are plotted with dash-dotted lines. It can be seen that the observer designed with the smallest L converges faster than in the other cases; this behavior has the same explanation as for the High Gain observer.

[Figure 4.22.: Performance of the Discontinuous Observer for the state pi. Error of the estimation]
[Figure 4.23.: Performance of the Discontinuous Observer for the state px. Error of the estimation]
[Figure 4.24.: Performance of the Discontinuous Observer for the state Pc. Error of the estimation]
[Figure 4.25.: Root mean square error of the Discontinuous Observer]

Furthermore, for a better comparison of the performance of the three designs, Figure 4.25 shows the root mean square error for the three states. The complete performance is easier to compare in Table 4.7, which contains the mean square error MSE = (1/N) Σ_k (x(k) − x̂(k))^2 for each case; this is only a comparative value without further meaning, whose unit is the square of the unit of the corresponding state. Some advantages of this approach, since it is based on homogeneous functions, are the possibility of extension to systems of arbitrary order, the convergence in finite time, and the fact that it can be analyzed within the Lyapunov framework. The design could therefore also be used to estimate an unknown input. The calculation of the gains is still under research. This discontinuous observer is able to estimate the states of the system exactly and in finite time in the absence of measurement noise.

Table 4.7.: Mean Square Error of the Discontinuous Observer

             MSE(pi)        MSE(px)        MSE(Pc)
  L = 0.03   4.4828·10^6    3.1943·10^6    365.6521
  L = 0.05   5.4321·10^6    3.9314·10^6    271.2727
  L = 0.1    6.4693·10^6    4.4065·10^6    276.7387

4.5. Other Approaches

There are other approaches to the design of observers for nonlinear systems that are not included in this work, nor applied here to the observer design for a turbocharged diesel engine. They offer other perspectives on the field that are important to know and study as well.

4.5.1. Lyapunov-Based Methods

This is usually a theoretical approach [49] that rarely offers the possibility of actually building such observers, as it gives no clue about how to construct the Lyapunov functions and how to design the observer. The method consists of solving the stability problem of the linear approximation using Lyapunov stability theory, in order to determine the conditions under which the nonlinear error dynamics behaves stably around a fixed point (e = 0). Among the most important results within this approach we can highlight the work of Kou [50].

4.5.2. Extended Linearization

The extended linearization method was proposed by Baumann and Rugh [51]. It is based on the use of a nonlinear output injection chosen so that the linearized error dynamics is locally stable. In that work it was demonstrated for the inverted pendulum system that the results obtained are better than those achieved by linearizing the system and designing a linear observer; however, the observer is designed so that it can operate only around a set of operating points.

4.5.3. Lie-Algebraic Techniques

The basic idea of these techniques is to transform nonlinear systems into systems for which linear systems theory is valid. In the literature we can find many contributions, among which the most important are the works of Krener and Isidori [52]. The advantage of these techniques is that they exploit the vast knowledge available for the design of linear observers, reducing the nonlinear problem to one that can be handled with linear techniques. The problem in their application is that the system must satisfy a series of restrictive properties, and even finding the system transformation is not an easy task.
4.5.4. Moving Horizon Estimator

In the moving horizon estimation (MHE) method [53], the estimate is computed without using linearization or statistical information about the perturbations of the model. This observer works by adjusting the model estimates according to a horizon of past data and estimating the new data from the fitted model. The basic idea of moving horizon estimation is to use only a fixed number N of the most recent data for the estimate; the horizon moves forward at each sampling instant and uses the existing measurements.

5. Comparative Simulations

5.1. Implementation

The observers have been implemented in simulation software, namely Matlab, for verification. The performance of the observers is evaluated on the diesel engine model they were designed for. Since the implementation of the observers is discrete, a sampling time Ts has to be defined. Considering the discrete Euler realization of the observer dynamics, the fact that, as visible in Figure 2.4, the dynamics of the system has a settling time of approximately 1 s, and the strong nonlinearities present in the model, a sampling time Ts = 0.001 s has been chosen for the software implementation. In software the observers were therefore implemented as

x̂(k + 1) = x̂(k) + Ts · [ f(x̂(k)) + g(x̂(k), u(k)) + K ( y(k) − h(x̂(k)) ) ],

where K represents the gain only in a generic way; a code sketch of this update step is given after the following list. The choice of this gain has been made for each of the designed observers as follows:

• Extended Luenberger Observer: the gain has the form k(x, u) and is calculated by Equation (4.9). The values of the parameters pi have been chosen empirically, always ensuring that the eigenvalues lie in the left half-plane for stability, until a considerably fast response is achieved, such that the observation error dynamics is much faster than the dynamics of the system.
• High Gain Observer: the gain has been calculated by Equation (4.27). The gain values for the linear part of the observer have been computed using the same eigenvalues as in the previous case to have a comparative benchmark. The "high gain" was chosen empirically such that the error dynamics is much faster than the dynamics of the system but with a limited peaking phenomenon. The inverse of the observability matrix is required in this approach.
• Extended Kalman Bucy Filter: the gain was calculated using Equations (4.33) and (4.34), i.e. the Riccati equation, recomputing the gain L at each iteration; the initial values of the covariance matrices were chosen empirically considering the criteria detailed in the respective section.
• Sliding Modes Observer: the gain parameters for this approach have been chosen equal to those of the High Gain Observer in order to use the same framework for comparison. The inverse of the observability matrix is required in this approach.
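A minimal sketch of this generic update step is given below; the functions f, g, h and gain are placeholders for the model functions and the gain law of the respective observer, so this is only an illustration of the Euler realization, not the thesis implementation itself.

```python
import numpy as np

def observer_step(x_hat, u, y, f, g, h, gain, Ts=1e-3):
    """One discrete Euler step of the implemented observers:
    x_hat(k+1) = x_hat(k) + Ts * ( f(x_hat) + g(x_hat, u) + K * (y - h(x_hat)) ),
    where `gain` returns the (possibly state-dependent) gain K."""
    K = gain(x_hat, u)                       # e.g. k(x,u), high-gain, or Riccati-based gain
    innovation = np.atleast_1d(y - h(x_hat)) # output estimation error
    return x_hat + Ts * (f(x_hat) + g(x_hat, u) + K @ innovation)
```

Calling this function once per sampling instant with the measured output reproduces the discrete observer recursion given above for whichever gain law is plugged in.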
These designs are used throughout this chapter; what changes between the different simulation scenarios is described in each case. The calculation of the inverse of the observability matrix was carried out symbolically using the software package Maple, because these terms are represented by very large expressions. The input of the system was a constant fueling rate, a constant engine speed, two different reference signals for the variable geometry turbine and two different reference signals for the EGR valve; the simulation values are presented in Table 5.1.

Table 5.1.: Values of the inputs for the simulation test

  Variable   Description           Value   Unit
  N          Engine speed          2250    [RPM]
  Wf         Fuel rate             6       [kg/h]
  xegr       EGR valve position    0.7     [%]
  xvgt       VGT valve position    0.6     [%]

5.2. Comparative Results

The simulation tests were made for different scenarios. The transient performance of the nonlinear observers is shown in Figures 5.1, 5.2 and 5.3 for all the designs. The observers work properly: for the intake and exhaust manifold pressures and the compressor power, the estimated and the true state variables practically coincide for the four approaches considered in this work. Each designed observer presents its own features, but all reach the aim of estimating the states with dynamics faster than the dynamics of the system. A comparison of convergence time and overshoot makes no sense in this section, since these features can improve or worsen for each approach depending on the design parameters one chooses; that is, for each method presented there may be designs both better and worse than those shown here.

[Figure 5.1.: Comparative performance of the observers for pi]
[Figure 5.2.: Comparative performance of the observers for px]
[Figure 5.3.: Comparative performance of the observers for Pc]

5.3. Disturbances and Noise

In this section the performance of the designed observers is evaluated in the presence of disturbances and noise. Most systems in practice unavoidably encounter disturbances, and the performance of the control scheme depends largely on its effectiveness in dealing with them; actuator faults and parametric variations are considered. As a first case, noise in the measured output is considered.

5.3.1. Output Noise

The first test probes the performance of the designed observers with noise in the output. The output is contaminated by additive noise with normal distribution, zero mean and an intensity of five percent of the true value; as shown in Figure 5.4, this is a substantial amount of measurement noise.

[Figure 5.4.: Dynamics of pi ideal and with noise]
[Figure 5.5.: Performance of the observers for pi with noise in the output]

The figures show the performance of the designed observers with this considerable level of noise in the output. The transient performance of the states for the nonlinear observers is shown in Figures 5.5, 5.7 and 5.9 for all the designs; the transient performance of the error for all cases is shown in Figures 5.6, 5.8 and 5.10.
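For reference, one plausible way to generate such a corrupted measurement, assuming that "intensity of five percent" means a standard deviation of five percent of the current true value, is the following short sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_output(y, intensity=0.05):
    """Additive zero-mean Gaussian noise whose standard deviation is a fraction
    (here 5 %) of the true output value; one reading of the noise level used in the tests."""
    return y + intensity * np.abs(y) * rng.standard_normal()
```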
[Figure 5.6.: Dynamic of the error for the state pi]
[Figure 5.7.: Performance of the observers for px with noise in the output]
[Figure 5.8.: Dynamic of the error for the state px]
[Figure 5.9.: Performance of the observers for Pc with noise in the output]
[Figure 5.10.: Dynamic of the error for the state Pc]

The observers work properly, since the estimated intake and exhaust manifold pressures and compressor power are very close to the true state variables for the four approaches considered in this work. Each designed observer presents its own features. It is easy to see that the High Gain Observer is the approach most influenced by measurement noise; the noise is reflected in the observed signal for all states, as expected from the theory of this design. Likewise, the noise is also present in the estimates made by the Extended Luenberger observer. In the case of the Discontinuous Observer and the Extended Kalman-Bucy filter, the observed signal converges very well to the ideal model signal: in the first case because of the presence of sign functions in its design, and in the second case because, being a stochastic approach, its design method is precisely based on minimizing the effect of noise. Thus the results fulfill the expected behavior.

5.3.2. Uncertainties in Parameters

This test proves the performance of the designed observers under uncertainty in the parameters. The cases consider uncertainties of up to five percent of the design parameter value.

Uncertainty in Ambient Pressure

The model considers an ambient pressure of 101300 Pa, the average atmospheric pressure at sea level, whose standard value is exactly 101325 Pa. However, this value changes with altitude. In order to test the performance of the designed observers, a value of 100 kPa is considered; this corresponds to an altitude of about 112 meters, which is close to the average altitude of the areas where the world population lives, and is the reference pressure recommended by the International Union of Pure and Applied Chemistry (IUPAC).
[Figure 5.11.: Performance of the observers for pi with uncertainty in ambient pressure]

As in the previous case, the figures show the performance of the designed observers under uncertainty in an operating parameter, here the ambient pressure. The transient performance of the states for the nonlinear observers is shown in Figures 5.11, 5.13 and 5.15 for all the designs; the transient performance of the error for all cases is shown in Figures 5.12, 5.14 and 5.16. The observers work properly, since the estimated intake and exhaust manifold pressures and compressor power are very close to the true state variables for the four approaches considered in this work. Each designed observer presents its own features. For the High Gain Observer and the Extended Luenberger observer the estimated and true intake manifold pressure practically coincide, while for the other two states the error is bounded. In the case of the Discontinuous Observer and the Extended Kalman-Bucy filter, the error is bounded for all three states. The results thus fulfill the expected behavior, since the variable pa appears in the dynamics of all three states and, in general, uncertainty in the model will bias the estimates: even when the system is observable, it is in general impossible to obtain a convergent estimate if there is uncertainty in the system parameters.

[Figure 5.12.: Dynamic of the error for the state pi]
[Figure 5.13.: Performance of the observers for px with uncertainty in ambient pressure]
[Figure 5.14.: Dynamic of the error for the state px]
[Figure 5.15.: Performance of the observers for Pc with uncertainty in ambient pressure]
[Figure 5.16.: Dynamic of the error for the state Pc]

Uncertainty in Ambient Temperature

The model considers an ambient temperature of 298 K; in order to show the performance under other ambient conditions, a deviation δT of 5 K is considered. The figures show the performance of the designed observers under uncertainty in this operating parameter, the ambient temperature. The transient performance of the states for the nonlinear observers is shown in Figures 5.17, 5.19 and 5.21 for all the designs.
The transient performance of the error for all cases is shown in Figures 5.18, 5.20 and 5.22. The observers work properly, since the estimated intake and exhaust manifold pressures and compressor power are very close to the true state variables for the four approaches considered in this work. Each designed observer presents its own features. The High Gain Observer, the Extended Luenberger observer and the Extended Kalman-Bucy filter obtain a better estimation in this case, since for the states pi and px the error converges to zero, while for Pc the estimate remains bounded. In the case of the Discontinuous Observer, the error is bounded for all three states. The results thus fulfill the expected behavior, since the variable Ta is present only in the dynamics of pi, which is precisely the single measured output. The results show, in general, that uncertainty in the model biases the estimates: even when the system is observable, it is in general impossible to obtain a convergent estimate if there is uncertainty in the system parameters.

[Figure 5.17.: Performance of the observers for pi with uncertainty in temperature]
[Figure 5.18.: Dynamics of the error for the state pi]
[Figure 5.19.: Performance of the observers for px with uncertainty in temperature]
[Figure 5.20.: Dynamics of the error for the state px]
[Figure 5.21.: Performance of the observers for Pc with uncertainty in temperature]
[Figure 5.22.: Dynamics of the error for the state Pc]

As a summary of the main features of the performance of the four designs for the system under consideration, Table 5.2 is presented.
Table 5.2.: Comparison of the Observers

  Extended Luenberger
    Advantages: simple approach; can be applied to only partially observable systems.
    Disadvantages: sensitive to noise and perturbations; requires the calculation of large symbolic expressions.
  High Gain
    Advantages: fast response; simple approach; reduction of the influence of perturbations.
    Disadvantages: peaking phenomenon; amplification of noise with higher gains.
  Extended Kalman-Bucy
    Advantages: good behavior even in the presence of noise.
    Disadvantages: the choice of the initial matrices influences the performance; computational cost of solving the Riccati equation at each iteration.
  Discontinuous
    Advantages: can be used to estimate disturbances; finite-time convergence; can be generalized to higher order systems.
    Disadvantages: empirical design of the gains; effect of noise.

6. Conclusions and Perspectives

6.1. Conclusions

The present study has made some important contributions concerning the observability analysis and the design of nonlinear observers for a turbocharged diesel engine. The particular object of study has been a nonlinear third order model, and the corresponding estimation problem, namely to obtain the states pi, px and Pc from the inputs and the measured output pi, has been addressed using four different approaches. The idea is to use nonlinear model-based observers to estimate the unmeasurable state variables: the compressor power and the exhaust manifold pressure. The performance of the designed observers has been verified and evaluated by simulations. The simulations show that the performance of the observers is high; these results have been achieved thanks to mathematical, physical and system-theoretical concepts, as well as an analysis of the observability and of the design methods. In the following, the main conclusions about the performance of each of the four observers addressed in this thesis are presented.

The observer based on the Extended Luenberger approach uses a decomposition of the system into two parts; the design is made for the first subsystem, which must be locally weakly observable, ignoring the second subsystem, which may be unobservable but satisfies the steady state property. This means that the system is detectable and the value of r could be less than the dimension of the system. In this special case the value is r = 3 and the design is carried out without major problems. Although there is a good result for the convergence of the error dynamics, the simulations made in the presence of disturbances are not good. The principal disadvantage of this approach is the requirement of the symbolic computation of the observer gain, which can be done with computer algebra packages such as Maple; but for a complicated system with many nonlinearities, as in the case considered in this work, the multiple differentiations required result in very large symbolic expressions.

The observer based on the High Gain design is a relatively simple approach to nonlinear observer design and presents robust features. The basic idea is to separate the system into a linear and a nonlinear part, the latter treated as a "perturbation term". A conventional linear observer is first nominally designed for the linear part. The second step is to find the greatest possible gain for the observer such that it dominates the nonlinear part and achieves asymptotically stable error dynamics. The convergence speed of the observation error increases as the gain acting on the nonlinear part increases.
There exists a trade-off in the selection of this gain: if the gain is very high, the perturbation is better dominated, but the effect of the measurement noise is amplified and the peaking phenomenon is more evident.

The observer based on the Extended Kalman Filter generalizes and improves the Extended Luenberger Observer, especially because the observer gain is recalculated at each iteration. The main disadvantage, since it is a stochastic approach, is the need for information about the noise, which is normally unknown. The choice of the initial values of the matrices P, Q and R plays an important role in the result of the estimation. Another disadvantage is the computational cost of solving the differential Riccati equation at each iteration and of computing the matrices A and C. This last issue is alleviated by using a numerical method for the calculation of the derivatives of f(x) and g(x): the complex-step differentiation method is a good option when the analytical derivative is not required, since the result is just a numerical value.

The observer based on Discontinuous Observers generalizes and improves several other known methods, for example the High Gain Observer. For this specific system it is easy to design if a diffeomorphism of the system is considered. Much work is still necessary to complete this area of research, in particular the complete study of the effect of measurement noise. In the performance of this observer it is clear that increasing the gain L improves the convergence speed and reduces the effect of the perturbations, but it also increases the effect of noise, and vice versa; a trade-off between the estimation error due to noise and that due to perturbations therefore has to be considered. The investigation of the calculation of the gains ki is also an important step.

An important conclusion about the complexity of observer design for nonlinear systems is the following: if a diffeomorphism for the system exists, the design becomes an easier task, since the complete calculation of the transformed system is not needed, only the inversion of the Jacobian of the map. This is an important advantage of the designs based on the High Gain and Discontinuous Observer approaches.

6.2. Perspectives

In fact, each technique is suited only for problems with a special structure, and comparisons are therefore difficult to make. What is clear, though, is that nonlinear observer design is still an open area of research, especially in attempting to broaden and adapt the above techniques so that they may apply to larger classes of nonlinear systems. If the estimated states are to be used for a controller design, then it is important to mention that, while in the linear case the separation theorem holds, i.e. the dynamics of the observer and of the control loop can be designed independently of one another via the observer matrix L and the regulator matrix K, this concept may not be applied to nonlinear systems. This makes it necessary to carry out a complete design that considers both the controller and the observer together, which comes along with complicated stability proofs for the closed loop.
Regarding the system under consideration, the model studied here is a gross simplification and does not consider many other nonlinearities present in the real system. For example, [6] mentions that the EGR and VGT are both driven by electronic vacuum regulator valves (EVRV), which adjust the vacuum pressure delivered to the actuators according to a generated electric duty cycle signal. Due to their operating principle, these EVRVs introduce further nonlinearities to the system which cannot be neglected, including: a static gain change depending on the applied duty cycle, saturation, dead band, a different dynamic response depending on the amplitude of the step change (faster response for larger steps), different time constants depending on the direction of the applied step input, and hysteresis. Modeling the nonlinear behavior of an EVRV is a substantial task, addressed in [54]. The analytical design of an observer that covers all the dynamics of the system is therefore a hard task, which should nevertheless be tackled in future work.

For a real-time implementation of the proposed observers it will be necessary to reduce the computation time, because most of the designed observers solve complex symbolic differential equations and evaluate large symbolic matrices. One option is to replace these computations by numerical methods, as was done with the complex-step differentiation for the Extended Kalman Filter. Other options for observer design for very complex systems could be different approaches such as fuzzy logic or neural networks, which are not model-based observers; these approaches, however, do not assure good performance, because they give no rigorous stability results, since this property cannot be analyzed and verified mathematically.

Bibliography

[1] Philippe Moulin. Air systems modeling and control for turbocharged engines. PhD thesis, École Nationale Supérieure des Mines de Paris, 2010.
[2] J. Wahlström. Control of EGR and VGT for Emission Control and Pumping Work Minimization in Diesel Engines. Linköping studies in science and technology: Thesis. Department of Electrical Engineering, Linköpings universitet, 2006.
[3] Mohamed Guermouche, Sofiane Ahmed Ali, and Nicolas Langlois. Sliding mode control for diesel generator via disturbance observer. In 23th Mediterranean Conference on Control and Automation (MED), pages 487–494. IEEE, 2015.
[4] Johan Wahlström, Lars Eriksson, and Lars Nielsen. EGR-VGT control and tuning for pumping work minimization and emission control. IEEE Transactions on Control Systems Technology, 18(4):993–1003, 2010.
[5] M. Jankovic and I. Kolmanovsky. Constructive Lyapunov control design for turbocharged diesel engines. IEEE Transactions on Control Systems Technology, 8(2):288–299, 2000.
[6] M. Jung and Keith Glover. Control-oriented linear parameter-varying modelling of a turbocharged diesel engine. In Proceedings of 2003 IEEE Conference on Control Applications, volume 1, pages 155–160, June 2003.
[7] Michael Larsen, Mrdjan Janković, and Petar V. Kokotović. Indirect passivation design for a diesel engine model. In Proceedings of the 2000 IEEE International Conference on Control Applications, pages 309–314. IEEE, 2000.
[8] Hans Joachim Ferreau, Peter Ortner, Peter Langthaler, Luigi Del Re, and Moritz Diehl. Predictive control of a real-world diesel engine using an extended online active set strategy. Annual Reviews in Control, 31(2):293–301, 2007.
[9] Marcelin Dabo, Nicolas Langlois, and Houcine Chafouk.
Dynamic feedback linearization applied to asymptotic tracking: generalization about the turbocharged diesel engine outputs choice. In American Control Conference, pages 3458–3463. IEEE, 2009. [10] Vadim Utkin, Hao-Chi Chang, Ilya Kolmanovsky, Jeffrey Cook, et al. Sliding mode control for variable geometry turbocharged diesel engines. In American Bibliography 95 Control Conference, 2000. Proceedings of the 2000, volume 1, pages 584–588. IEEE, 2000. [11] Jonas Fredriksson. Nonlinear control of turbocharged diesel engines. Control Engineering Laboratory, Department of Signals and Systems, Chalmers University of technology, 1999. [12] L. Guzzella and C. Onder. Introduction to Modeling and Control of Internal Combustion Engine Systems. Springer Berlin Heidelberg, 2013. [13] M. Jankovic, M. Jankovic, and I. Kolmanovsky. Robust nonlinear controller for turbocharged diesel engines. In American Control Conference, 1998. Proceedings of the 1998, volume 3, pages 1389–1394 vol.3, Jun 1998. [14] Jonas Fredriksson and Bo Egardt. Estimating exhaust manifold pressure in a turbocharged diesel engine. In Control Applications, 2002. Proceedings of the 2002 International Conference on, volume 2, pages 701–706. IEEE, 2002. [15] Fabio Chiara, Marcello Canova, and Yue-Yun Wang. An exhaust manifold pressure estimator for a two-stage turbocharged diesel engine. In American Control Conference (ACC), 2011, pages 1549–1554. IEEE, 2011. [16] Eriksson L. Wahlström J. Modelling diesel engines with a variable- geometry turbocharger and exhaust gas recirculation by optimization of model parameters for capturing non-linear system dynamics. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, (225):960–986, 2011. [17] Nicos Ladommatos, Razmik Balian, Roy Horrocks, and Laurence Cooper. The effect of exhaust gas recirculation on combustion and nox emissions in a high- speed direct-injection diesel engine. Technical report, SAE Technical Paper, 1996. [18] Merten Jung, Richard G Ford, Keith Glover, Nick Collings, Urs Christen, and Michael J Watts. Parameterization and transient validation of a variable geometry turbocharger for mean-value modeling at low and medium speed- load points. Technical report, SAE Technical Paper, 2002. [19] Martin Herceg, Tobias Raff, Rolf Findeisen, and Frank Allgowe. Nonlinear model predictive control of a turbocharged diesel engine. In Computer Aided Control System Design, 2006 IEEE International Conference on Control Applications, 2006 IEEE International Symposium on Intelligent Control, 2006 IEEE, pages 2766– 2771. IEEE, 2006. [20] Merten Jung and Keith Glover. Calibratable linear parameter-varying control of a turbocharged diesel engine. IEEE Transactions on Control Systems Technology, 14(1):45–62, 2006. Bibliography 96 [21] J. Heywood. Internal Combustion Engine Fundamentals. McGraw-Hill Education, 1988. [22] M. Jung. Mean-value Modelling and Robust Control of the Airpath of a Turbocharged Diesel Engine. University of Cambridge, 2003. [23] EA Misawa and JK Hedrick. Nonlinear observers—a state-of-the-art survey. Journal of dynamic systems, measurement, and control, 111(3):344–352, 1989. [24] J Gauthier and G Bornard. Observability for any u (t) of a class of nonlinear systems. In 1980 19th IEEE Conference on Decision and Control including the Symposium on Adaptive Processes, number 19, pages 910–915, 1980. [25] G. Besançon. Nonlinear Observers and Applications. Lecture Notes in Control and Information Sciences. Springer Berlin Heidelberg, 2007. 
[26] R. Hermann and Arthur J. Krener. Nonlinear controllability and observability. Automatic Control, IEEE Transactions on, 22(5):728–740, 1977. [27] Kurt Johannes Reinschke. Multivariable control: A graph-theoretic approach, volume 108 of Lecture notes in control and information sciences. Springer, Berlin, 1988. [28] H. Nijmeijer. New directions in nonlinear observer design ; [a collection of contributions presented at a international workshop held from June 24 - 26, 1999, in Geiranger Fjord, Norway], volume 244 of Lecture notes in control and information sciences. Springer, London, 1999. [29] Gildas Besançon. An overview on observer tools for nonlinear systems. In Gildas Besançon, editor, Nonlinear observers and applications, volume 363 of Lecture notes in control and information sciences, chapter 1, pages 1–33. Springer, Berlin, 2007. [30] Jean-Paul Gauthier and Ivan Kupka. Deterministic observation theory and applications. Cambridge University Press, Cambridge, 2001. [31] K. Robenack. Extended luenberger observer for nonuniformly observable nonlinear systems. In Thomas Meurer, editor, Control and observer design for nonlinear finite and infinite dimensional systems, volume 322 of Lecture notes in control and information sciences, pages 19–34. Springer, Berlin, 2005. [32] Gildas Besançon. Nonlinear observers and applications, volume 363 of Lecture notes in control and information sciences. Springer, Berlin, 2007. [33] Jaime A. Moreno. Nonlinear observers. Workshop from XVI Congreso Latinoamericano de Control Automático, 2014. [34] J. Adamy. Nichtlineare Regelungen. Springer Berlin Heidelberg, 2009. Bibliography 97 [35] Klaus Röbenack. Extended luenberger observer for nonuniformly observable nonlinear systems. In Control and Observer Design for Nonlinear Finite and Infinite Dimensional Systems, volume 322 of Lecture Notes in Control and Information Science, pages 19–34. Springer Berlin Heidelberg, 2005. [36] Klaus Röbenack. Computation of the observer gain for extended luenberger observers using automatic differentiation. IMA Journal of Mathematical Control and Information, 21(1):33–47, 2004. [37] Klaus Röbenack and Kurt J Reinschke. The computation of lie derivatives and lie brackets based on automatic differentiation. ZAMM-Journal of Applied Mathematics and Mechanics/Zeitschrift für Angewandte Mathematik und Mechanik, 84(2):114–123, 2004. [38] F Deza, E Busvelle, JP Gauthier, and D Rakotopara. High gain estimation for nonlinear systems. Systems & control letters, 18(4):295–299, 1992. [39] Guy Bornard and Hassan Hammouri. A high gain observer for a class of uniformly observable systems. In Proceedings of the 30th IEEE Conference on Decision and Control, pages 1494–1496. IEEE, 1991. [40] Hassan K Khalil. High-gain observers in nonlinear feedback control. In International Conference on Control, Automation and Systems, pages 47–57. IEEE, 2008. [41] Hassan Hammouri. Uniform observability and observer synthesis. In Gildas Besançon, editor, Nonlinear observers and applications, volume 363 of Lecture notes in control and information sciences, chapter 2, pages 35–69. Springer, Berlin, 2007. [42] J. P. Gauthier, H. Hammouri, and S. Othman. A simple observer for nonlinear systems applications to bioreactors. IEEE Transactions on Automatic Control, 37(6):875–880, 1992. [43] Hassan K. Khalil. Nonlinear systems. Prentice Hall, Upper Saddle River, NJ, 2. ed edition, 1996. [44] Hubert Schwetlick, Horst Kretzschmar, and Schwetlick-Kretzschmar. 
Numerische Verfahren für Naturwissenschaftler und Ingenieure: Eine computerorientierte Einführung; mit 34 Tabellen. Mathematik für Ingenieure. Fachbuchverl., Leipzig, 1. Aufl., 1991.
[45] Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering, 82(Series D):35–45, 1960.
[46] Jaime A. Moreno. Nonlinear observers: Continuous and discontinuous. Fakultätskolloquium, 2015.
[47] Todd Rowland and Eric W. Weisstein. "Tensor." From MathWorld, a Wolfram web resource. http://mathworld.wolfram.com/Tensor.html.
[48] Jaime A. Moreno. On discontinuous observers for second order systems: properties, analysis and design. In Advances in Sliding Mode Control, pages 243–265. Springer, 2013.
[49] B. L. Walcott. State observation of nonlinear control systems via the method of Lyapunov. Deterministic Control of Uncertain Systems, (40):333, 1990.
[50] Shauying R. Kou, David L. Elliott, and Tzyh Jong Tarn. Exponential observers for nonlinear dynamic systems. Information and Control, 29(3):204–216, 1975.
[51] William T. Baumann and Wilson J. Rugh. Feedback control of nonlinear systems by extended linearization. IEEE Transactions on Automatic Control, 31(1):40–46, 1986.
[52] Arthur J. Krener and Alberto Isidori. Linearization by output injection and nonlinear observers. Systems & Control Letters, 3(1):47–52, 1983.
[53] Christopher V. Rao, James B. Rawlings, and David Q. Mayne. Constrained state estimation for nonlinear discrete-time systems: Stability and moving horizon approximations. IEEE Transactions on Automatic Control, 48(2):246–258, 2003.
[54] Paul Moraal and Ilya Kolmanovsky. Turbocharger modeling for automotive control applications. Technical report, SAE Technical Paper, 1999.

A. Appendix A

A.1. Parameters

Table A.1.: System parameters

  Variable  Description                          Value    Unit
  N         Engine speed                         2000     [RPM]
  Wf        Fuel rate                            5        [kg/h]
  R         Specific gas constant                287      [J/(kg·K)]
  pi        Intake manifold pressure             -        [hPa]
  px        Exhaust manifold pressure            -        [hPa]
  pref      Reference pressure                   101.3    [hPa]
  pa        Ambient pressure                     101.3    [hPa]
  Pc        Compressor power                     -        [kW]
  Ti        Intake manifold temperature          313      [K]
  Tx        Exhaust manifold temperature         509      [K]
  Tref      Reference temperature                298      [K]
  Ta        Ambient temperature                  298      [K]
  Vi        Volume of the intake manifold        0.006    [m^3]
  Vx        Volume of the exhaust manifold       0.001    [m^3]
  Vd        Displacement volume                  0.002    [m^3]
  ηm        Turbocharger mechanical efficiency   98       [%]
  ηc        Compressor isentropic efficiency     61       [%]
  ηt        Turbine isentropic efficiency        76       [%]
  ηv        Volumetric efficiency                87       [%]
  cp        Specific heat at constant pressure   1014.4   [J/(kg·K)]
  cv        Specific heat at constant volume     727.4    [J/(kg·K)]
  μ         Constant                             0.286    -
  a         Parameter a                          -0.136   -
  b         Parameter b                          0.176    -
  c         Parameter c                          0.4      -
  d         Parameter d                          0.6      -

A.2. Abbreviations

Table A.2.: Abbreviations used in the model description

  VGT    variable geometry turbocharger
  EGR    exhaust gas recirculation
  UO     uniformly observable
  LWO    locally weakly observable
  ONF    observer normal form
  PM     particulate matter
  AFR    air-to-fuel ratio
  EVRV   electronic vacuum regulator valve
  MAP    manifold absolute pressure
  MAF    mass air flow

List of Figures

1.1. Illustration of the Scania six cylinder engine with EGR and VGT [2]
1.2. The observer as the center of the control system
2.1. Schematic diagram of a turbocharged diesel engine with EGR [19]
2.2. Valid operating area for the model
2.3. Block diagram of a turbocharged diesel engine
2.4. Dynamic behavior of a turbocharged diesel engine
3.1. Structure of a third order system in an Observable Normal Form
3.2. Value of O3,3 vs x2
3.3. Behaviour of the determinant of O3(x) with respect to the states
4.1. Block diagram of the observer for a Turbocharger Diesel Engine
4.2. Diagram of an Extended Luenberger Observer
4.3. Estimate pi
4.4. Estimate px
4.5. Estimate Pc
4.6. Root mean square error
4.7. Structure of a High Gain Observer using ONF
4.8. Behavior of eigenvalues in a High Gain Observer [34]
4.9. Structure of a High Gain Observer
4.10. Performance of the High Gain observer for the state pi. Error of the estimation
4.11. Performance of the High Gain observer for the state px. Error of the estimation
4.12. Performance of the High Gain observer for the state Pc. Error of the estimation
4.13. Root mean square error
4.14. Schematic diagram of a Kalman Filter
4.15. Schematic diagram of the Extended Kalman Bucy Filter
4.16. Performance of the Extended Kalman Bucy filter for the state pi. Error of the estimation
4.17. Performance of the Extended Kalman Bucy filter for the state px. Error of the estimation
4.18. Performance of the Extended Kalman Bucy filter for the state Pc. Error of the estimation
4.19. Root mean square error of the Extended Kalman Bucy filter
4.20. Diagram of a Discontinuous Observer for a system in Observable Normal Form
4.21. Diagram of a general Discontinuous Observer
4.22. Performance of the Discontinuous Observer for the state pi. Error of the estimation
4.23. Performance of the Discontinuous Observer for the state px. Error of the estimation
4.24. Performance of the Discontinuous Observer for the state Pc. Error of the estimation
4.25. Root mean square error of the Discontinuous Observer
5.1. Comparative performance of the observers for pi
5.2. Comparative performance of the observers for px
5.3. Comparative performance of the observers for Pc
5.4. Dynamics of pi ideal and with noise
5.5. Performance of the observers for pi with noise in the output
5.6. Dynamic of the error for the state pi
5.7. Performance of the observers for px with noise in the output
5.8. Dynamic of the error for the state px
5.9. Performance of the observers for Pc with noise in the output
5.10. Dynamic of the error for the state Pc
5.11. Performance of the observers for pi with uncertainty in ambient pressure
5.12. Dynamic of the error for the state pi
5.13. Performance of the observers for px with uncertainty in ambient pressure
5.14. Dynamic of the error for the state px
5.15. Performance of the observers for Pc with uncertainty in ambient pressure
5.16. Dynamic of the error for the state Pc
5.17. Performance of the observers for pi with uncertainty in temperature
5.18. Dynamics of the error for the state pi
5.19. Performance of the observers for px with uncertainty in temperature
5.20. Dynamics of the error for the state px
5.21. Performance of the observers for Pc with uncertainty in temperature
5.22. Dynamics of the error for the state Pc

List of Tables

2.1. Indices used in the model
2.2. Values of the inputs for the simulation
3.1. Valid values of the states
4.1. Values of the inputs for the simulation
4.2. Eigenvalues
4.3. Mean Square Error
4.4. Simulation cases, ε
4.5. Mean Square Error
4.6. Mean Square Error for the Extended Kalman Bucy Filter
4.7. Mean Square Error of the Discontinuous Observer
5.1. Values of the inputs for the simulation test
5.2. Comparison of the Observers
A.1. System parameters
A.2. Abbreviations used in the model description