IEEE Control Systems

2024 International Conference on Unmanned Aircraft Systems [Conference Reports]
Kimon P. Valavanis
Keywords: Unmanned Aerial Vehicles, Unmanned Aerial Systems, Aircraft Systems, Science And Technology, End-users, ETH Zurich, Perception Of The Robot, Flight Operations
Abstract: Provides society information that may include news, reviews, or technical notes that should be of interest to practitioners and researchers.
The Roger W. Brockett Control Systems Award [Awards]
Carolyn L. Beck, Sarah Spurgeon, Magnus Egerstedt, John Baillieul
Keywords: Control System, Number Of Friends, Leaders In The Field, Engineering Education, Lasting Impression
Abstract: Presents the recipients of the Roger W. Brockett Control Systems Award for 2024.
2024 IEEE CSS Awards [Awards]
Keywords: Control System, Control Technology, Nomination Process, IEEE Transactions, Paper Award, Technical Community, Co-authors, Optimal Control, Power System, Systems Engineering, Practical Significance, Calendar Year, Automatic Control, Systems Science, Significance For Applications, Flight Control, Award Winners
Abstract: Presents the recipients of the IEEE CSS awards for 2024.
João Manoel Gomes da Silva Jr. [People In Control]
João Manoel Gomes da Silva, Carolyn L. Beck, Sarah Spurgeon, Magnus Egerstedt, John Baillieul
Keywords: Interviews, Control In People, Control System, Stability Of System, Linear System, Nonlinear Systems, First Contact, Electrical Engineering, Control Field, Proportional-integral-derivative, Graduate Programs, Promising Opportunities, Nonlinear Element, Semidefinite Programming, Graduate Courses, Actuator Saturation, Pole Placement
Unsupervised Representation Learning in Deep Reinforcement Learning: A Review
Nicolò Botteghi, Mannes Poel, Christoph Brune
Keywords: Representation learning, Geometry, Uncertainty, Systematics, Deep reinforcement learning, Market research, Robustness, Encoding, Dynamical systems, Unsupervised learning, Closed box, Benchmark testing, Measurement techniques, Performance evaluation, Data analysis, Deep Learning, Unsupervised Learning, Representation Learning, Deep Reinforcement Learning, Unsupervised Representation Learning, Neural Network, System Dynamics, Latent Variables, State Variables, State Space, Control Problem, Physical System, Optimal Policy, State Representation, Sampling Efficiency, Open Challenges, Markov Decision Process, Physical Laws, Meaningful Representation, Low-dimensional Representation, Latent Space, Latent Model, Forward Model, Latent State, Reward Model, Deep Reinforcement Learning Algorithm, Contrastive Loss, Variational Autoencoder, Reconstruction Loss, Markov Decision Process Model
Abstract: This review article addresses the problem of learning abstract representations of measurement data in the context of deep reinforcement learning (DRL). While the data are often ambiguous, high-dimensional, and complex to interpret, many dynamical systems can be effectively described by a low-dimensional set of state variables. Discovering these state variables from the data is crucial for 1) improving the data efficiency, robustness, and generalization of DRL methods; 2) tackling the curse of dimensionality; and 3) bringing interpretability and insights into black-box DRL. The review provides a comprehensive overview of unsupervised representation learning in DRL by describing the main deep learning tools used for learning representations of the world, providing a systematic view of the methods and principles; summarizing applications, benchmarks, and evaluation strategies; and discussing open challenges and future directions.
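The scope of the review can be made concrete with a small example. The following is a minimal sketch, not the authors' method: it learns a low-dimensional latent state from high-dimensional observations using a reconstruction loss together with a latent forward (dynamics) model, two of the building blocks surveyed in the article. All module names, dimensions, and the random data are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the reviewed algorithms): unsupervised
# state-representation learning with an autoencoder plus a latent forward model.
import torch
import torch.nn as nn

obs_dim, act_dim, latent_dim = 64, 2, 4   # hypothetical sizes

encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, obs_dim))
forward_model = nn.Sequential(nn.Linear(latent_dim + act_dim, 32), nn.ReLU(),
                              nn.Linear(32, latent_dim))

params = (list(encoder.parameters()) + list(decoder.parameters())
          + list(forward_model.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

# Dummy batch of transitions (o_t, a_t, o_{t+1}); in practice these would come
# from a replay buffer collected by the RL agent.
o_t = torch.randn(128, obs_dim)
a_t = torch.randn(128, act_dim)
o_next = torch.randn(128, obs_dim)

for step in range(100):
    z_t = encoder(o_t)                                   # latent state z_t
    z_next_pred = forward_model(torch.cat([z_t, a_t], dim=-1))
    recon_loss = ((decoder(z_t) - o_t) ** 2).mean()      # reconstruction loss
    dyn_loss = ((z_next_pred - encoder(o_next).detach()) ** 2).mean()  # latent dynamics loss
    loss = recon_loss + dyn_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The learned encoder can then supply compact state inputs to a DRL policy; the contrastive and variational (VAE) objectives discussed in the review would replace or complement the reconstruction term above.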
Extremum Seeking Through Delays and PDEs [Bookshelf]
Mouhacine Benosman
Keywords: Partial Differential Equations, Extremum Seeking, Ordinary Differential Equations, Control Field, Non-cooperative Game, Infinite-dimensional Systems, Data-driven Learning
Abstract: Presents a review of the book Extremum Seeking Through Delays and PDEs.
Aradhana Nayak [PHDs in Control]
Aradhana Nayak
Keywords: Interviews, Mechanical Systems, Control Technology, Promising Area
Introduction [PHDs in Control]
Anuradha Annaswamy
Keywords: Inverse Reinforcement Learning
The Arrow of Time in Estimation and Control: Duality Theory Beyond the Linear Gaussian Model
Jin Won Kim, Prashant G. Mehta
Keywords: Linear systems, Stochastic systems, Hidden Markov models, Estimation, Observability, Gaussian processes, Control theory, Linear Model, Dual Theory, Linear Gaussian, Linear Gaussian Model, Arrow Of Time, Optimization Problem, Nonlinear Model, Optimal Control, Linear System, Nonlinear Systems, Hidden Markov Model, Minimum Variance, Estimation Problem, Time-reversal, Optimal Control Problem, Dual Problem, Optimal Filter, Riccati Equation, Nonlinear Filter, Graduate Courses, Deterministic Control, Stochastic Differential Equations, Control Input, Dual Control, Space Of Functions, Differential Equations, Conditional Expectation, Dual System, Adjoint Equations
Abstract: The duality between estimation and control is a foundational concept in control theory. Most students learn about the elementary duality between observability and controllability in their first graduate course in linear systems theory. It therefore comes as a surprise that, for the more general class of nonlinear stochastic systems known as hidden Markov models (HMMs), duality is incomplete. Our objective in writing this article is twofold: 1) to describe the difficulty in extending duality to HMMs and 2) to discuss its recent resolution by the authors. A key message is that the main difficulty in extending duality comes from time reversal when going from estimation to control. The reason for time reversal is explained with the aid of the familiar linear deterministic and linear Gaussian models. The explanation is used to motivate the difference between the linear and the nonlinear models. Once the difference is understood, duality for HMMs is described based on our recent work. The article also includes a comparison and discussion of the different types of duality considered in the literature.
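The elementary duality the abstract refers to can be checked numerically. Below is a minimal illustration of the standard linear-systems fact, not of the article's new HMM result: the pair (A, C) is observable exactly when the dual pair (A^T, C^T) is controllable. The matrices are arbitrary illustrative values.

```python
# Minimal numerical check of classical estimation/control duality
# (textbook linear-systems property; example matrices are assumptions).
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Observability matrix of (A, C): rows C, CA, ..., CA^{n-1}
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Controllability matrix of the dual pair (A^T, C^T): columns C^T, A^T C^T, ...
R = np.hstack([np.linalg.matrix_power(A.T, k) @ C.T for k in range(n)])

# R is the transpose of O, so the two ranks necessarily coincide.
assert np.linalg.matrix_rank(O) == np.linalg.matrix_rank(R) == n
print("(A, C) observable <=> (A^T, C^T) controllable; common rank =",
      np.linalg.matrix_rank(O))
```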