An up-to-date list of publications is available on Google Scholar.
This paper presents an approach to deal with model uncertainty in iterative learning control (ILC). Model uncertainty generally degrades the performance of conventional learning algorithms. To deal with this problem, a robust worst-case norm-optimal ILC design is introduced. The design problem is reformulated as a convex optimization problem, which can be solved efficiently. The paper also shows that the proposed robust ILC is equivalent to conventional norm-optimal ILC with trial-varying parameters; accordingly, the design trade-off between robustness and convergence speed is analyzed.
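The norm-optimal baseline that this robust design extends can be illustrated in the lifted-system setting. The sketch below is illustrative only: the plant matrix `G`, the weights, and the reference are hypothetical stand-ins, not the system studied in the paper.

```python
import numpy as np

# Illustrative lifted-system norm-optimal ILC; the plant G, weights, and
# reference below are hypothetical stand-ins, not the paper's system.
N = 50
g = 0.1 * 0.9 ** np.arange(N)                      # assumed impulse response
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])                  # lifted (lower-triangular) plant

We = np.eye(N)                                     # error weight
Wu = 1e-3 * np.eye(N)                              # change-of-input weight
r = np.sin(np.linspace(0.0, np.pi, N))             # reference trajectory

u, errs = np.zeros(N), []
for trial in range(20):
    errs.append(np.linalg.norm(r - G @ u))         # tracking error this trial
    # u_{k+1} = argmin_u ||r - G u||_We^2 + ||u - u_k||_Wu^2
    u = np.linalg.solve(G.T @ We @ G + Wu, G.T @ We @ r + Wu @ u)

print(errs[0], np.linalg.norm(r - G @ u))          # error shrinks across trials
```

The ratio of `Wu` to `We` sets the trade-off the abstract refers to: a larger input-change weight slows convergence but makes the update more cautious.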
In this paper, we present two iterative learning control (ILC) frameworks for tracking problems in which the output must pass through specified data points at given time instants. Unlike traditional ILC approaches, we first develop an algorithm in which not only the control signal but also the reference trajectory is updated at each trial, and we investigate how the reference trajectory affects the rate of convergence of the ILC tracking control. Second, a new ILC scheme is proposed that produces output curves passing close to the desired points; here, the control signals are generated by solving an optimal ILC problem with respect to the desired sampling points. A key advantage of the proposed approaches is a significant reduction in computational cost.
This paper presents a multi-objective iterative learning control (ILC) design approach that realizes an optimal trade-off between robust convergence, converged tracking performance, convergence speed, and input constraints. Linear time-invariant single-input single-output systems represented by both parametric and nonparametric models are considered. The noncausal filter Q(z) and learning function L(z) are simultaneously optimized by solving a convex optimization problem. The proposed method is applied to a non-minimum phase system and compared with a model-inversion based ILC design. Using the developed ILC design, the underlying trade-off between tracking performance and convergence speed is quantitatively analyzed.
This paper presents a virtual validation and testing framework for ADAS (Advanced Driver Assistance Systems) planning and control development, based on a co-simulation platform combining vehicle dynamics and traffic environment tools. One of the main challenges in ADAS development is validating the planning and control algorithms in a closed-loop fashion, where both vehicle dynamics characteristics and a wide variety of traffic scenarios are taken into account. The designs should also guarantee optimal performance in terms of precise trajectory tracking and time/fuel optimality with respect to various constraints. This work focuses on simulation-based approaches to frontload control design verification during the early phases of ADAS development, involving two software tools: LMS Imagine.Lab Amesim and PreScan. The requirements for an interface that facilitates the co-simulation development are studied. The approach is demonstrated with three use cases: adaptive cruise control, green wave technology, and autonomous parking.
This work presents our research and development on autonomous valet parking planning and control. The designs focus on safety, i.e., collision avoidance with other cars, infrastructure, and pedestrians, and on optimality in terms of precise tracking, driving time, and fuel consumption. Moreover, a co-simulation structure coupling Simcenter Amesim and PreScan is investigated as the testing and validation framework. Amesim provides an integrated simulation platform to predict the multidisciplinary performance of vehicle dynamics, along with capabilities to accelerate the design of control strategies; PreScan is a physics-based simulation platform for traffic scenarios and sensor technologies.
This paper presents ROFALT, a freely available, model-based iterative learning control (ILC) tool for nonlinear systems that strives to close the gap between the theory of nonlinear ILC and successful applications. By providing a simple yet powerful syntax, all phases of the design of a nonlinear ILC, from modeling, tuning, and execution to analysis, are supported. ROFALT implements an optimization-based two-step approach that yields fast convergence rates for nonlinear systems and is easily tunable to trade off convergence speed for robustness. To demonstrate the efficiency of ROFALT, an application to a tracking problem of a race car is considered, where a simple kinematic bicycle model is used to iteratively learn the control of complex vehicle dynamics. The results show fast convergence and illustrate the effect of different tunings.
This paper presents a generalized iterative learning control (ILC) design in the frequency domain with experimental validation. The optimal ILC learning function and robustness filter function are simultaneously optimized by solving a linear programming problem using frequency response functions. Moreover, the design realizes an optimal trade-off between robust convergence, converged tracking performance, convergence speed, and input constraints. The proposed ILC method is experimentally validated on a lab-scale overhead crane system. The results demonstrate the advantages of the approach as an automated design procedure with optimal solutions, efficient computation, robustness, and intuitive tuning for trade-off analysis between multiple ILC specifications.
This paper addresses robust performance analysis and synthesis of lifted system iterative learning control (ILC). By applying the full block S-procedure, sufficient conditions for the robust performance of ILC with both unstructured and structured uncertainty are derived. In the synthesis problem, we consider the design of the learning filter Q for model-inversion based ILC. The problem is reformulated as a convex optimization problem such that the converged tracking error is minimized subject to the monotonic convergence condition. This synthesis approach enables full automation of the ILC design since neither an uncertainty model has to be identified nor a robustness filter has to be chosen. The advantages of this novel robust performance ILC technique are demonstrated on an experimental linear motor test setup.
This paper discusses robust iterative learning control (ILC) analysis and synthesis problems that account for model uncertainty in the lifted system representation. In the robust analysis, we transform the robust monotonic convergence condition with unstructured uncertainty into an equivalent convex problem. In this framework, for a given learning gain Q, the design of the learning gain L that maximizes the convergence speed is reformulated as a convex optimization problem. We discuss various properties of the proposed robust ILC analysis and design, and analyze the performance of the proposed robust ILC design through numerical simulations.
This paper presents an experimental validation of a recently proposed robust norm-optimal iterative learning control (ILC). The robust ILC input is computed by minimizing the worst-case value of a performance index under model uncertainty, yielding a convex optimization problem. The proposed robust ILC design is experimentally validated on a lab-scale overhead crane system, showing the advantages of the approach over classical ILC designs in monotonic convergence and tracking performance.
In this paper, we present an approach to deal with model uncertainty in norm-optimal iterative learning control (ILC). Model uncertainty generally degrades the convergence and performance of conventional learning algorithms. To address it, a robust worst-case norm-optimal ILC is introduced. The problem is then reformulated as a convex minimization problem, which can be solved efficiently to generate the control signal. The paper also investigates the relationship between the proposed approach and conventional norm-optimal ILC: the design method turns out to be equivalent to conventional norm-optimal ILC with trial-varying learning gains. Finally, simulation results of the presented technique are given.
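Worst-case reformulations of this kind typically rest on the fact that, for unstructured uncertainty bounded in the induced 2-norm, the worst-case tracking-error norm has a closed form: max over ||D|| <= delta of ||r - (G + D)u|| equals ||r - Gu|| + delta·||u||. The snippet below is a small numerical check of that identity with hypothetical `G`, `r`, `u`, and bound `delta`; it is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta = 8, 0.3                                       # hypothetical size and bound
G = rng.standard_normal((n, n))                         # nominal lifted plant
r = rng.standard_normal(n)                              # reference
u = rng.standard_normal(n)                              # candidate input

e = r - G @ u
bound = np.linalg.norm(e) + delta * np.linalg.norm(u)   # claimed worst-case value

# random norm-bounded perturbations never exceed the bound
for _ in range(200):
    D = rng.standard_normal((n, n))
    D *= delta / np.linalg.norm(D, 2)                   # spectral norm exactly delta
    assert np.linalg.norm(r - (G + D) @ u) <= bound + 1e-9

# this rank-one perturbation attains the bound exactly
Dstar = -delta * np.outer(e / np.linalg.norm(e), u / np.linalg.norm(u))
print(np.linalg.norm(r - (G + Dstar) @ u), bound)       # the two values coincide
```

Minimizing this closed-form worst-case cost over `u` is what yields the convex problem, with `delta` acting as a regularization weight on the input, which is one way to see the equivalence to norm-optimal ILC with modified (trial-varying) gains.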
In this paper, we present an approach to deal with trial-varying initial conditions in norm-optimal iterative learning control (ILC). Varying initial conditions generally degrade the performance of conventional learning algorithms. We therefore introduce a worst-case optimization problem that accounts for trial-varying initial conditions. The optimization is then reformulated as a convex minimization problem, which can be solved efficiently to generate the control signal. We investigate the relationship between the proposed approach and classical norm-optimal ILC and find that our methodology is equivalent to classical norm-optimal ILC with trial-varying parameters. Finally, simulation results of the presented technique are given.
In this paper, we present two iterative learning control (ILC) frameworks for multiple-point tracking problems. First, we present an ILC scheme that produces output curves passing close to the reference points without considering a reference trajectory; here, the control signals are generated by solving an optimal ILC problem with respect to the points. Second, we propose an optimal ILC multiple-point tracking technique to handle non-repetitive uncertainties at the reference points, which arise naturally in real applications due to noise contamination, disturbances, and other control purposes. As a result, the problem is formulated as a two-objective optimization problem.
This paper presents a new optimization-based iterative learning control (ILC) framework for multiple-point tracking control. Conventionally, one requirement prior to designing ILC algorithms for such problems is to build a reference trajectory that passes through all given points at the given times. In this paper, we instead produce output curves that pass close to the desired points without constructing a reference trajectory; the control signals are generated by solving an optimal ILC problem with respect to the points. As such, the whole process becomes simpler; key advantages include a significant reduction in computational cost and improved performance. Our work is then examined in both continuous- and discrete-time systems.
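The point-wise idea can be sketched in the lifted domain by replacing the full reference with a selection of output samples at the desired time instants. Everything below (plant, time instants, desired points, weight) is a hypothetical example, not the paper's setup.

```python
import numpy as np

# Hypothetical lifted plant for illustration
N = 60
g = 0.05 * 0.95 ** np.arange(N)
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])

idx = [14, 29, 44, 59]                 # assumed desired time instants
p = np.array([0.2, 0.8, 0.5, 0.0])     # assumed desired output values
S = np.zeros((len(idx), N))            # selection matrix: picks samples at idx
S[np.arange(len(idx)), idx] = 1.0

Gs = S @ G                             # point-wise lifted plant
lam = 1e-3                             # input-change weight (tuning choice)
u = np.zeros(N)
for trial in range(30):
    # u_{k+1} = argmin_u ||p - S G u||^2 + lam ||u - u_k||^2
    u = np.linalg.solve(Gs.T @ Gs + lam * np.eye(N), Gs.T @ p + lam * u)

print(np.abs(p - Gs @ u).max())        # worst residual at the desired points
```

Because only a handful of output samples enter the cost, the normal-equation matrices stay small relative to full-trajectory tracking, which reflects the computational savings the abstract claims.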
In this paper, we present iterative learning control (ILC) algorithms for terminal control of multi-input multi-output systems. The optimal ILC framework is investigated for both single terminal point and multiple intermediate pass point tracking formulations. First, we consider an initial learning control technique for a single final output, before sequentially exploring multiple terminal output tracking via initial and changing inputs. The novel contribution of this work is the analysis of the terminal ILC algorithms with regard to stability, monotonic convergence, and performance properties in both cases. Illustrative examples are then provided to verify the proposed approaches.
In this paper, we present an iterative learning control (ILC) algorithm to track specified desired multiple terminal points at given time instants. A framework to update the desired trajectories from the given points is developed based on an interpolation technique. The approach achieves a faster rate of convergence of the errors. A simulation with a satellite antenna control model demonstrates the effectiveness of our approach.