Control
Ali Madady; Naser Taghva Manesh
Abstract
This paper introduces a novel optimal iterative learning control scheme for continuous-time, multiple-input multiple-output systems with linear time-varying dynamics. While iterative learning control has been studied extensively in the discrete-time domain, the development of optimal iterative learning control for continuous-time systems remains limited owing to the absence of lifted formulations and the associated mathematical challenges. The proposed method transforms the original optimal iterative learning control problem into a linear-quadratic-tracking-like problem, enabling the derivation of an explicit closed-loop control law that ensures both tracking performance and minimization of control effort. Unlike many existing approaches whose learning algorithms involve derivative terms, which are often sensitive to measurement noise, the proposed design avoids such terms and remains computationally efficient. Moreover, the monotonic convergence of both the tracking error and the associated cost function is proven through rigorous mathematical analysis. The theoretical results are supported by four comprehensive simulation examples, including comparisons with several existing iterative learning control methods. Quantitative evaluations confirm that the proposed optimal scheme significantly outperforms previous techniques in convergence speed and error-reduction rate. This contribution offers a new framework for the optimal control of continuous-time multiple-input, multiple-output systems performing repetitive tasks and provides a foundation for future extensions to constrained, nonlinear, or partially measurable systems.
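The paper's continuous-time scheme is not reproduced here, but the discrete-time lifted norm-optimal iterative learning control that the abstract contrasts it with can be sketched in a few lines. The update u_{k+1} = u_k + (GᵀG + λI)⁻¹Gᵀe_k minimizes a quadratic cost in tracking error and input change over the lifted system y = Gu, and yields the monotone error decay the abstract discusses. The scalar plant, horizon, and weight λ below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lifted_matrix(a, b, c, N):
    """Lifted map G with y[i] = sum_{j<=i} c*a^(i-j)*b * u[j] (relative degree 1)."""
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = c * a ** (i - j) * b
    return G

N, lam = 50, 0.1
G = lifted_matrix(0.9, 1.0, 1.0, N)          # scalar LTI plant: x+ = 0.9x + u, y = x
r = np.sin(np.linspace(0, 2 * np.pi, N))     # reference for the repetitive task
L = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T)  # norm-optimal learning gain

u = np.zeros(N)
errs = []
for _ in range(10):                          # one pass = one trial of the task
    e = r - G @ u                            # tracking error on this trial
    errs.append(np.linalg.norm(e))
    u = u + L @ e                            # norm-optimal input update

# The error norm contracts every trial: e_{k+1} = (I - G L) e_k,
# whose eigenvalues lam/(sigma_i^2 + lam) all lie strictly inside (0, 1).
```

Because G here is invertible, the only fixed point of the update has zero error, so the trial-to-trial error norm decreases monotonically to zero, mirroring the monotonic-convergence property the abstract proves for the continuous-time setting.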
Optimization
Sadegh Kalantari; Mehdi Ramezani; Ali Madadi
Abstract
This paper formulates image noise reduction as an optimization problem and denoises the target image using low-rank matrix approximation. Because smaller pieces of a natural image are more similar to one another (more strongly dependent), it is more logical to apply low-rank approximation to small pieces of the image. In the proposed method, an image corrupted with additive white Gaussian noise (AWGN) is denoised locally: the low-rank approximation problem is solved on all fixed-size patches (windows of pixels to be processed). Because different image patches can be handled simultaneously, the method can be parallelized for practical purposes, which is one of its advantages. In any noise-reduction method, two factors are crucial: the amount of noise removed from the image and the preservation of edges (vital details). In the proposed method, the new ideas introduced, including the use of a training image (TI), an adaptive SVD basis, an iterable algorithm, and patch labeling, have all proven effective, producing sharper images, good edge preservation, and acceptable speed compared with state-of-the-art denoising methods.
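The core idea, solving a low-rank approximation on each fixed-size patch of an AWGN-corrupted image, can be sketched as follows. This is a minimal stand-in, not the paper's method: the training-image basis, patch labeling, and iterations are omitted, plain per-patch SVD truncation plays the role of the low-rank step, and the synthetic test image, patch size, and target rank are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # rank-1 test image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)          # AWGN, sigma = 0.1

def denoise_patches(img, p=8, rank=1):
    """Replace each non-overlapping p-by-p patch with its best rank-r approximation."""
    out = img.copy()
    for i in range(0, img.shape[0], p):
        for j in range(0, img.shape[1], p):
            U, s, Vt = np.linalg.svd(img[i:i + p, j:j + p], full_matrices=False)
            s[rank:] = 0.0                        # keep only the leading singular values
            out[i:i + p, j:j + p] = (U * s) @ Vt  # Eckart-Young low-rank patch estimate
    return out

den = denoise_patches(noisy)
```

The patches are independent, so the loop body parallelizes trivially, which is the parallelism advantage the abstract points out; per-pixel MSE against the clean image drops after the patch-wise low-rank projection because most of the isotropic noise energy lies outside the leading singular subspace of each patch.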