-
Retraction Notice: Guest Editorial: Special Section on Edge Intelligence for Industrial Internet of Things
Sahil Garg, Geetanjali Rathee, Neeraj Kumar, Danda B. Rawat
-
Corrections to “The Dynamic Model of the UR10 Robot and its ROS2 Integration”
Vincenzo Petrone, Enrico Ferrentino, Pasquale Chiacchio
Keywords: Service robots; Informatics
Abstract: This notice addresses errors in [1]. Compared with the original article, minor grammatical corrections and symbol refinements were applied to clarify the adopted notation and nomenclature.
-
A Block Storage Optimization Method for Blockchain-Enabled Industrial Internet of Things
Wei Zhang, Jianchang Liu, Honghai Wang, Yuanchao Liu, Shubin Tan
Keywords: Industrial Internet of Things; Blockchains; Cloud computing; Costs; Convergence; Consensus algorithm; Real-time systems; Computer architecture; Systems architecture; Peer-to-peer computing; Real-time Information; Test Suite; Storage Cost; Space Occupancy; Storage Ability; Transmission Cost; Storage Problems; Population Size; High-dimensional; Objective Function; Efficient Algorithm; Storage Capacity; Scientific Information; Diversity Measures; Decision Variables; Real-world Scenarios; Linear Form; Exponential Form; Storage Space; Test Instances; Test Problems; Pareto Optimal; Seamless Integration; Pareto Front; Service Delay; Block storage; blockchain; cloud; evolutionary algorithm; industrial Internet of Things (IIoT)
Abstract: With the rapid development of 5G, massive amounts of data are generated in the blockchain-enabled industrial Internet of Things (IIoT). Although the peers in a blockchain system have storage capacity, it falls far short of the requirements of the generated data. In addition, all data are stored in the blockchain network, which is unfriendly to applications that require real-time information. To address these storage problems, this article proposes a block storage optimization method for blockchain-enabled IIoT, whose core idea is to conditionally select some blocks to store in the cloud. The method first models the selection of blocks as a many-objective optimization problem, where the selection conditions comprise usage probability, storage cost, space occupation, and transmission cost. Then, a cascading selection-based evolutionary algorithm (CSEA) for many-objective optimization, which adopts a diversity-first principle, is developed to solve the model and thereby determine the optimal set of blocks to store in the cloud. Finally, CSEA is first compared with seven state-of-the-art methods on two benchmark test suites to validate the reliability of its results, and is then used to solve the proposed model. The corresponding results demonstrate that CSEA is highly competitive and that our method effectively addresses the storage problems above. In summary, this article provides a novel method for addressing the storage problem in the blockchain-enabled IIoT.
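The abstract frames cloud offloading of blocks as a four-objective selection problem. A minimal sketch of how such an objective vector might be evaluated for a candidate binary selection; the field names and cost rates here are invented for illustration and are not the paper's actual formulation:

```python
def evaluate_selection(blocks, selected):
    """Evaluate the four hypothetical selection objectives for a binary
    selection vector: usage probability of offloaded blocks (to minimize,
    since hot blocks should stay local), cloud storage cost, on-chain
    space occupation, and transmission cost.

    blocks: list of dicts with illustrative fields 'use_prob', 'size',
            'cloud_cost_per_mb', 'tx_cost_per_mb'.
    selected: list of 0/1 flags; 1 means the block is moved to the cloud.
    """
    chosen = [b for b, s in zip(blocks, selected) if s]
    kept = [b for b, s in zip(blocks, selected) if not s]
    use_prob = sum(b["use_prob"] for b in chosen)
    storage_cost = sum(b["size"] * b["cloud_cost_per_mb"] for b in chosen)
    # Space occupation counts what remains on-chain after offloading.
    space = sum(b["size"] for b in kept)
    tx_cost = sum(b["size"] * b["tx_cost_per_mb"] for b in chosen)
    return use_prob, storage_cost, space, tx_cost
```

A many-objective optimizer such as the paper's CSEA would search over `selected` vectors to approximate the Pareto front of these four values.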
-
Two-Stage Heuristic Optimization With Hybrid Evolutionary Multitasking for Automatic Optical Inspection Route Scheduling
Junhu Cao, Jinyong Yu, Zhengkai Li, Xinghu Yu, Hao Sun, Jianbin Qiu, Juan J. Rodríguez-Andina
Keywords: Job shop scheduling; Clustering algorithms; Heuristic algorithms; Path planning; Optimization; Mathematical models; Cameras; Partitioning algorithms; Processor scheduling; Mathematical programming; Evolutionary Algorithms; Clustering Results; Programming Model; Heuristic Algorithm; Printed Circuit Board; Geometric Constraints; Hierarchical Algorithm; Near-optimal Solution; Path Distance; Mixed-integer Programming Model; Surface-mounted; Heuristic Rules; Two-stage Framework; Small-scale Problems; Paired T-test; Confidence Level; Final Results; Path Length; Clustering Algorithm; Field Of View Size; Hybrid Genetic Algorithm; K-means Algorithm; Cluster Model; Class Of Problems; Industrial Camera; Real Industry; Detection Time; Exact Algorithm; Two-stage Algorithm; Evolutionary multitask optimization; hierarchical heuristic; mixed integer model; route scheduling
Abstract: Route scheduling for automatic optical inspection (AOI) of printed circuit boards (PCBs) impacts the productivity of surface mount production lines. Current state-of-the-art mathematical models in the area are not rigorous enough and neglect significant practical constraints, such as component geometric constraints. This article proposes a hierarchical mixed integer programming model to describe the route scheduling problem for AOI of PCBs. The model allows theoretical optimal solutions to be obtained for small-scale problems. In addition, a two-stage heuristic framework, consisting of clustering and path planning stages, is proposed to improve efficiency in solving large-scale problems, achieving near-optimal solutions. Taking into account that component distribution affects clustering results, the clustering stage is developed with a hierarchical heuristic algorithm based on block density with an aggregation strategy. In the path planning stage, the Lin–Kernighan algorithm is first used to quickly generate the scheduling sequence, and image acquisition centers are initially adjusted with a customized heuristic. After that, a hybrid evolutionary multitask algorithm is proposed to further reduce path distance by dividing the image acquisition center adjustment task into several subtasks using heuristic rules. The algorithm obtains better quality results and is faster than traditional evolutionary algorithms. Experiments on an actual industrial AOI platform demonstrate that the proposed two-stage heuristic route scheduling algorithm outperforms state-of-the-art research in the area.
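The path planning stage above uses the Lin–Kernighan algorithm for tour improvement. As a simplified stand-in (not the paper's implementation), the simpler 2-opt local search below illustrates the same idea of shortening a visiting sequence by reversing segments:

```python
import math

def tour_length(points, tour):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Repeatedly reverse tour segments while doing so shortens the tour.
    A much weaker cousin of Lin-Kernighan, shown only for illustration."""
    tour = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(points, candidate) < tour_length(points, tour) - 1e-12:
                    tour = candidate
                    improved = True
    return tour
```

In the AOI setting, the "points" would be image acquisition centers and the tour the camera's inspection route.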
-
Penalty Removal Search Algorithm for Distributed Optimization of Nonconvex Functions in Economic Dispatch
Yiyang Ge, Zhanshan Wang
Keywords: Linear programming; Economics; Distributed algorithms; Cost function; Q-learning; Germanium; Costs; Valves; Fuels; Convex functions; Optimal Distribution; Economic Dispatch; Objective Function; Optimization Algorithm; Sigmoid Function; Feasible Solution; Local Optimum; Penalty Function; Projection Operator; Non-convex Optimization; Distributed Algorithm; Reinforcement Learning Framework; Convex Optimization Algorithms; Optimization Problem; Running Time; Maximum Pressure; Inequality Constraints; Particle Swarm Optimization; Time Slot; Virtual Activities; Optimal Action; Gradient Information; Feasible Domain; Power Units; Thermal Power; Equilibrium Of System; Optimal Solution Of Problem; Term Weight; Collaborative Q-learning (CQL) algorithm; distributed optimization; economic dispatch (ED); nonconvex optimization; penalty removal search algorithm (PRSA)
Abstract: Nonconvexity is an often-overlooked factor in economic dispatch (ED). As the nonconvexity of the objective function increases, traditional convex optimization algorithms easily fall into local optima. To address this problem, a penalty removal search algorithm (PRSA) is proposed for nonconvex ED optimization. It is composed of two distributed optimization algorithms embedded in a reinforcement learning framework. In Phase I of PRSA, a distributed optimization algorithm with projection operators is designed; using fewer variables, it locates the region containing the optimal solution via cooperative Q-learning. In Phase II, the sigmoid function serves as a penalty function to form the second distributed optimization algorithm, which skips already-searched solutions and allows the algorithm to continue searching for further feasible solutions. PRSA thus avoids missing feasible solutions as the nonconvex coefficients increase. Finally, the effectiveness of PRSA is verified by numerical examples.
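Phase II uses a sigmoid penalty to steer the search away from solutions already found. A one-dimensional sketch of that idea, with the penalty height, steepness, and radius chosen arbitrarily for illustration (the paper's actual penalty formulation may differ):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def penalized_cost(cost_fn, x, found, height=5.0, steepness=10.0, radius=0.5):
    """Original cost plus a sigmoid 'bump' around each previously found
    solution, so the search is pushed to skip those regions and keep
    exploring for other feasible solutions."""
    penalty = sum(height * sigmoid(steepness * (radius - abs(x - f)))
                  for f in found)
    return cost_fn(x) + penalty
```

Near a recorded solution the sigmoid saturates toward `height`, raising the cost there; far away it decays toward zero, leaving the landscape untouched.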
-
A Novel ViDAR Device With Visual Inertial Encoder Odometry and Reinforcement Learning-Based Active SLAM Method
Zhanhua Xin, Zhihao Wang, Shenghao Zhang, Wanchao Chi, Yan Meng, Shihan Kong, Yan Xiong, Chong Zhang, Yuzhen Liu, Junzhi Yu
Keywords: Calibration; Simultaneous localization and mapping; Odometry; Cameras; Visualization; Vectors; Manifolds; Sensors; Robots; Optimization; Visual Odometry; Field Of View; Feature Points; Calibration Method; Inertial Measurement Unit; Deep Reinforcement Learning; Motion Platform; Monocular Camera; Detection In Videos; Visual-inertial Odometry; Coordinate System; Chinese Academy Of Sciences; Closed-loop Control; Proportional-integral-derivative; Reward Function; Markov Decision Process; Lie Algebra; Drift Rate; Static Mode; ENCODE Data; Automated Guided Vehicles; Continuous Rotation; Outdoor Experiments; Rotational Modes; Camera Pose; Structure From Motion; Large-scale Experiments; Motion Mode; Least-squares Optimization; Active simultaneous localization and mapping (SLAM); deep reinforcement learning (DRL); least square optimization; multisensor fusion; video detection and ranging (ViDAR); ViDAR calibration; visual-inertial-encoder odometry
Abstract: In the field of multisensor fusion for simultaneous localization and mapping (SLAM), monocular cameras and IMUs are widely used to build simple and effective visual-inertial systems. However, limited research has explored the integration of motor-encoder devices to enhance SLAM performance. By incorporating such devices, it is possible to significantly improve active capability and field of view (FOV) with minimal additional cost and structural complexity. This article proposes a novel visual-inertial-encoder tightly coupled odometry (VIEO) based on a video detection and ranging (ViDAR) device. A ViDAR calibration method is introduced to ensure accurate initialization for VIEO. In addition, a platform-motion-decoupled active SLAM method based on deep reinforcement learning (DRL) is proposed. Experimental data demonstrate that the proposed ViDAR and the VIEO algorithm significantly increase cross-frame co-visibility relationships compared with the corresponding visual-inertial odometry (VIO) algorithm, improving state estimation accuracy. Additionally, the DRL-based active SLAM algorithm, with the ability to decouple from platform motion, can increase the diversity weight of the feature points and further enhance the VIEO algorithm's performance. The proposed methodology offers fresh insights into both the updated platform design and the decoupled approach of active SLAM systems in complex environments.
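The system fuses motor-encoder readings with the camera pose. As a loose illustration of the basic geometry involved (not the paper's VIEO formulation; the tick resolution and the zero extrinsic offset are assumptions), a yaw encoder reading can be turned into a camera-to-platform rotation like this:

```python
import math

TICKS_PER_REV = 4096  # illustrative encoder resolution, not from the paper

def encoder_to_yaw(ticks):
    """Convert encoder ticks to a yaw angle in radians."""
    return 2.0 * math.pi * (ticks % TICKS_PER_REV) / TICKS_PER_REV

def camera_to_platform(point, ticks):
    """Rotate a camera-frame point into the platform frame about the z-axis
    by the encoder-derived yaw (extrinsic offsets omitted for brevity)."""
    yaw = encoder_to_yaw(ticks)
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y, s * x + c * y, z)
```

In a tightly coupled odometry this transform would enter the state estimator alongside IMU and visual constraints rather than being applied in isolation.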
-
An Integrated Distributed Fault Diagnosis Framework for Large-Scale Industrial Processes Based on Spatio–Temporal Causal Analysis
Dongjie Hua, Jie Dong, Kaixiang Peng, Silvio Simani
Keywords: Feature extraction; Fault diagnosis; Correlation; Fault detection; Decoding; Training; Time series analysis; Redundancy; Vegetation; Convolution; Causal Analysis; Large-scale Industrial Processes; Fault Diagnosis Framework; Spatial Features; Autoencoder; Spatiotemporal Characteristics; Industrial Data; Causal Graph; Convolutional Network; Convolutional Neural Network; Weight Matrix; Long Short-term Memory; Process Monitoring; Nodes In The Graph; False Alarm Rate; Original Space; Anomaly Detection; Operation State; Graph Convolutional Network; Fault Diagnosis Method; Graph Attention Network; Anomaly Score; Random Segments; Bidirectional Long Short-term Memory; Gated Recurrent Unit; Parallel Strategy; Spatiotemporal Network; Cause Of Defects; Temporal Features; Distributed fault diagnosis; large-scale industrial process; root cause recognition; spatio–temporal causal graph (STCG); spatio–temporal characteristics
Abstract: Networked sensor structures emerge in large-scale industrial processes, and causal graphs can reveal their underlying mechanisms. However, due to the constraints of material and information flows, industrial process data exhibit complex spatio–temporal characteristics. Traditional causal discovery results contain redundant information, and spatio–temporal features are not sufficiently mined, affecting the accuracy of fault diagnosis. To address these problems, an integrated distributed fault diagnosis framework is proposed. First, a new method combining mechanism knowledge and correlation is proposed to construct a spatio–temporal causal graph (STCG), which highlights spatio–temporal causal information. Second, an embedded temporal convolutional network-based autoencoder is designed to extract spatio–temporal features simultaneously. Then, a local–global fault detection scheme is performed. On this basis, a new anomaly status information matrix is built from the decoder and spatial features to achieve root cause recognition. Finally, the effectiveness of the proposed method is validated using actual data from a hot strip mill process, achieving a fault detection accuracy of 98.3%.
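The framework detects faults from autoencoder reconstructions and then localizes a root-cause variable. A bare-bones sketch of that two-step pattern, assuming the autoencoder's reconstruction `x_hat` is already available (the paper's actual scoring and root-cause propagation over the causal graph are more elaborate):

```python
def detect_and_locate(x, x_hat, threshold):
    """Generic autoencoder-style anomaly detection: score a sample by its
    mean squared reconstruction error, and if it exceeds the threshold,
    nominate the variable with the largest per-variable error as a
    root-cause candidate. Returns (is_anomalous, root_index_or_None)."""
    errs = [(a - b) ** 2 for a, b in zip(x, x_hat)]
    score = sum(errs) / len(errs)
    if score > threshold:
        return True, max(range(len(errs)), key=errs.__getitem__)
    return False, None
```

In the paper, this localization step is refined by tracing the flagged variable back through the spatio–temporal causal graph rather than stopping at the largest error.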
-
Divergence-Based Event Detection in Microbatch Processing for Data Streams
Ibrahim AL-Agha, Pradeep Chowriappa, Zhiqiang Deng
Keywords: Event detection; Streams; Feature extraction; Fault detection; Real-time systems; Kurtosis; Transient analysis; Monitoring; Vibrations; Accuracy; Data Streams; Micro-batch Processing; Data Distribution; Decision Tree; Real-time Detection; Sensor Data; Class Imbalance; Divergence Measure; Higher-order Features; Convolutional Neural Network; Support Vector Machine; High Precision; K-nearest Neighbor; Accuracy And Precision; Real-world Applications; Singular Value Decomposition; Support Vector Machine Classifier; Synthetic Minority Oversampling Technique; Decision Tree Classifier; Bayesian Classifier; Minority Class; Amount Of Data Points; Amount Of Points; Mean Absolute Value; Noisy Environments; Transient Faults; Anomaly Detection; Bearing fault detection; data stream processing; machine learning; microbatch processing (MBP); multimodal data; near real-time event detection
Abstract: Bearing fault detection is critical for ensuring the reliability of rotating machinery and preventing costly breakdowns. This article presents a novel approach combining microbatch processing (MBP) with a retrospective divergence-based event detection algorithm to address key challenges in real-time fault detection, including handling multimodal sensor data, managing class imbalance, and detecting subtle fault signatures. MBP splits continuous data streams into smaller, manageable batches for near-real-time processing, but we hypothesize that this discretization can compromise detection accuracy, especially with multimodal data. To overcome these limitations, our method introduces: first, the extraction of higher-order features, such as skewness and kurtosis, from microbatches to enhance the system's ability to detect early-stage faults; second, the application of Kullback–Leibler and Pearson divergence measures to detect changes in data distributions across microbatches; and third, validation using machine learning models (Naïve Bayes, decision trees, and support vector machines) to assess the algorithm's effectiveness. Experimental results demonstrate that the proposed method improves fault detection accuracy, particularly in detecting early faults and handling imbalanced data. Our findings suggest that combining MBP with retrospective divergence-based techniques is a robust solution for detecting faults in multimodal data streams, making it well-suited for real-time industrial monitoring.
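The two core ingredients named above, higher-order moment features per microbatch and a KL divergence between batch distributions, can be sketched in a few lines of standard-library Python (population moment formulas and a small smoothing epsilon are implementation choices here, not necessarily the paper's):

```python
import math

def moments(batch):
    """Skewness and excess kurtosis of one microbatch, the higher-order
    features the method extracts (population formulas, no bias correction)."""
    n = len(batch)
    mean = sum(batch) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in batch) / n)
    skew = sum(((x - mean) / std) ** 3 for x in batch) / n
    kurt = sum(((x - mean) / std) ** 4 for x in batch) / n - 3.0
    return skew, kurt

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) between two discrete distributions (e.g., normalized
    histograms of consecutive microbatches); eps avoids log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

An event would be flagged when the divergence between the current and a reference microbatch distribution exceeds a learned threshold.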
-
Overall Delay of Task Processing in Resource-Constrained Industrial Edge Computing: Model and Optimization
Qi Zhang, Weiqiang Xu
Keywords: Delays; Processor scheduling; Job shop scheduling; Edge computing; Scheduling; Computational modeling; Servers; Resource management; Industrial Internet of Things; Image edge detection; Processing Tasks; Processing Delay; Task Processing Delay; Numerical Simulations; Sequential Process; Internet Of Things; Global Optimization; Multiple Tasks; Global Solution; Transmission Delay; Global Optimal Solution; Delay Estimation; Edge Server; Delay Task; Task Offloading; New Variables; Time Slot; Directed Graph; Offloading Decision; Integer Nonlinear Programming; Download Time; Topological States; Terminal Devices; Resource-constrained Environments; Linear Inequality Constraints; Computational Sequence; Industrial Environment; Container-based edge computing; industrial Internet of Things (IIoT); overall delay; resource-constraints; task scheduling
Abstract: With the advancement of the industrial Internet of Things, minimizing delay has become a critical performance metric for many industrial applications. Edge computing effectively addresses this requirement by offloading tasks to nearby edge servers, significantly reducing task processing delay, including both transmission and computation delays at local or edge servers. However, current research often focuses on isolated aspects of task processing delay and typically handles multiple simultaneous tasks by dividing computing capacity for concurrent processing, which can increase delays. In this article, we propose a comprehensive delay model that captures the entire process from task generation to completion, termed the overall delay of task processing (ODTP), along with a computing resource allocation strategy that sequentially allocates computing resources based on an optimized scheduling order (SAOS). To minimize the ODTP in resource-constrained, container-based industrial edge computing environments, we introduce an optimization problem termed ODTP-M and a corresponding solution, ODTP-O, which jointly optimizes task offloading, computing resource scheduling order, container caching, and image caching. Because the nonlinear coupling of variables makes solving ODTP-M directly challenging, we transform it into an equivalent linear problem, termed l-ODTP-M, using a series of mathematical techniques. Numerical simulation results demonstrate that this transformation quickly achieves the global optimal solution of ODTP-M. Furthermore, we developed an environment to simulate task processing in resource-constrained, container-based industrial edge computing. Compared with concurrent task processing and random-order sequential processing, SAOS achieves the lowest task processing delay. In addition, ODTP-M consistently minimizes task processing delay across various scenarios, including resource-constrained conditions, outperforming recent studies that focus on specific aspects of task delay.
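The abstract argues that splitting capacity for concurrent processing can increase delays compared with sequential allocation in a good order. A toy model (my own simplification, not the paper's ODTP formulation) makes that intuition concrete:

```python
def sequential_finish_times(tasks, capacity):
    """Finish times when the server runs tasks one at a time, full capacity
    each, in the given scheduling order. Each task is (cycles, tx_delay),
    where tx_delay is when the task's data arrives at the server."""
    t, finish = 0.0, []
    for cycles, tx in tasks:
        t = max(t, tx) + cycles / capacity
        finish.append(t)
    return finish

def concurrent_finish_times(tasks, capacity):
    """Finish times when capacity is split evenly across all tasks running
    simultaneously, the baseline the abstract argues against."""
    share = capacity / len(tasks)
    return [tx + cycles / share for cycles, tx in tasks]
```

With two identical tasks, sequential processing finishes them at times 1 and 2 while even splitting finishes both at time 2, so the average (and hence overall) delay is lower sequentially, which is the effect SAOS exploits by also optimizing the order.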
-
Spatio–Temporal Aware Personalized Federated Learning for Load Forecasting in Power Systems
Siya Xu, Jie Zou, Cheng Zhou, Hai Wu, Zeng Zeng
Keywords: Load forecasting; Load modeling; Electricity; Predictive models; Training; Data models; Computational modeling; Accuracy; Federated learning; Servers; Power System; Model Performance; Convolutional Network; Convergence Rate; Spatial Features; Temporal Features; Electrical Energy; Heterogeneous Data; Model Convergence; Assembly Mechanisms; Forecast Accuracy; Spatiotemporal Characteristics; Training Speed; Electrical Load; Adaptive Adjustment; Hierarchical Mechanism; Prediction Error; Cluster Centers; Temporal Convolutional Network; Communication Latency; System Latency; Global Model; Similar Distance; Federated Learning Algorithm; Layering; Computation Latency; Current Round; Data heterogeneity; electricity load forecasting; personalized federated learning (PFL); resource heterogeneity; spatio–temporal convolutional network (STCN)
Abstract: With the development of smart grids and the increasing demand for electric energy, electricity load forecasting has become increasingly important in energy management. However, differences in electricity load patterns between regions lead to data heterogeneity, which can seriously degrade the model performance of traditional federated learning methods for load forecasting. Meanwhile, resource heterogeneity between clients further decreases forecasting efficiency and accuracy. To this end, we propose a spatio–temporal aware personalized federated learning (PFL) framework for electricity load forecasting that improves forecasting accuracy and training speed, thereby enhancing the real-time responsiveness and stability of the power grid. First, to address data heterogeneity, we design a collaborative training domain (CTD) construction method based on spatio–temporal features. Then, on the basis of the constructed CTDs, we propose a spatio–temporal convolutional network (STCN)-based layered PFL method to address resource heterogeneity, in which the model is divided into personalized layers and generalized layers according to temporal and spatial static features, respectively. In addition, we design a hierarchical aggregation mechanism and an adaptive edge model aggregation adjustment mechanism to optimize the training process. Experimental results show that the proposed method outperforms other methods in terms of model accuracy and convergence speed.
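The layered PFL idea above, generalized layers shared across clients and personalized layers kept local, reduces to a FedAvg-style aggregation applied to only part of the model. A minimal sketch with parameters represented as plain scalars in a dict (the real method aggregates STCN layer weights hierarchically across CTDs):

```python
def aggregate(clients, shared_keys):
    """FedAvg-style round for a layered PFL setup: average only the
    'generalized' parameters named in shared_keys across clients, while
    each client keeps its own 'personalized' parameters untouched.

    clients: list of dicts mapping parameter name -> value.
    Returns the updated per-client parameter dicts.
    """
    n = len(clients)
    avg = {k: sum(c[k] for c in clients) / n for k in shared_keys}
    # Personalized keys survive via **c; shared keys are overwritten by avg.
    return [{**c, **avg} for c in clients]
```

Personalization here comes entirely from which keys are excluded from `shared_keys`; in the paper this split is driven by temporal versus spatial static features rather than chosen by hand.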