Control Theory and Optimization Technique

In open-loop control, it is assumed that the dynamical model of the system is well known, that there is little or no environmental noise, and that the control signal can be applied with high precision. This approach is generally used when there is a target value to achieve at a particular final time T. The disadvantage of open-loop control is that the performance of the controller is highly susceptible to any unanticipated disturbance. In feedback control, continuous- or discrete-time measurements of the system output, y(t), are used to adjust the control signal in real time. At each instant, the observed output y(t) is compared to a tracking reference, r(t), to generate an error signal. Feedback therefore provides the backbone of most modern control applications. In learning control, a measurement of the system, y(t), is also used to design the optimal control signal; however, this is not done in real time. Instead, a large number of trial control signals are tested in advance, and the one that performs best is selected as the optimal control u*(t).
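The contrast between open-loop and feedback control can be illustrated with a minimal sketch: a proportional feedback law u(t) = k·(r − y(t)) applied to a first-order plant dx/dt = a·x + u, simulated with a simple Euler step. All parameter values (a, k, r, dt) here are illustrative assumptions, not from the text.

```python
# Minimal sketch of proportional feedback control, assuming a first-order
# plant dx/dt = a*x + u and full-state measurement y(t) = x(t).
# All parameter values are illustrative.

def simulate_p_control(a=-1.0, k=5.0, r=1.0, dt=0.01, steps=1000):
    """Euler simulation of the feedback law u(t) = k * (r - y(t))."""
    x = 0.0
    for _ in range(steps):
        y = x                   # measured output
        e = r - y               # error signal: reference minus output
        u = k * e               # proportional feedback control
        x += dt * (a * x + u)   # Euler step of the plant dynamics
    return x

print(simulate_p_control())     # settles near k*r/(k - a) = 5/6
```

Because the feedback law recomputes u at every step from the measured error, an unanticipated disturbance to x would be corrected automatically, whereas a precomputed open-loop signal would not react at all.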

Continuous-time Markov decision processes (CTMDPs) have wide applications, such as queueing systems, epidemic control, telecommunications, population processes, and inventory control. Markov processes (also called Markov chains) are based on two fundamental concepts: states and state transitions. A state is treated as a random variable that describes some properties of the system. A state transition describes a change in the system state at a given time instant. Markov processes can be classified into discrete-time and continuous-time categories. The property that every point is reachable from any point in a given time interval [0, T] is called controllability (at T). Finally, null controllability refers to the possibility of reaching the origin from an arbitrary initial point.
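For a linear system dx/dt = Ax + Bu, the controllability property described above can be checked with the standard Kalman rank test: (A, B) is controllable exactly when the matrix [B, AB, ..., A^(n-1)B] has full rank. A sketch, using an illustrative double-integrator system not taken from the text:

```python
import numpy as np

# Sketch of the Kalman rank test for controllability of x' = Ax + Bu.
# The example system below (a double integrator) is illustrative.

def controllability_matrix(A, B):
    """Stack the blocks [B, AB, A^2 B, ..., A^(n-1) B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    """(A, B) is controllable iff the controllability matrix has rank n."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Double integrator: states (position, velocity), force input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
print(is_controllable(A, B))  # True: every state is reachable
```

For a stable linear system, controllability in this sense also implies null controllability, since the origin is just one particular reachable target.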
  • Control Theory and Application
  • Control Theory and Methodologies
  • Control System Modeling
  • Process Control and Automatic Control Theory
  • Automotive Control Systems and Autonomous Vehicles
  • Optimization Problems in Control Engineering
  • Dynamic Programming
  • Markov Decision Problems
  • Dynamic Programming over the Infinite Horizon
  • Optimal Stopping Problems
  • Average-Cost Programming
  • Continuous-Time Markov Decision Processes
  • Controllability
  • Observability
  • Kalman Filter and Certainty Equivalence
  • Dynamic Programming in Continuous Time

Related Conference of Control Theory and Optimization Technique

Control Theory and Optimization Technique Conference Speakers