-
Mechanical Design and Data-Enabled Predictive Control of a Planar Soft Robot
Huanqing Wang, Kaixiang Zhang, Kyungjoon Lee, Yu Mei, Keyi Zhu, Vaibhav Srivastava, Jun Sheng, Zhaojian Li
Keywords: Soft robotics; Robots; Fabrics; Computational modeling; Aerospace electronics; Pneumatic systems; Predictive control; Mechanical Design; Robot Control; Soft Robots; Robot Design; Dimensionality Reduction; System Identification; Control Performance; Data-driven Control; Actuator; Data Matrix; Nonlinear Systems; Control Input; Singular Value; Nonlinear Dynamics; Singular Value Decomposition; Tracking Performance; Model-based Approach; Configuration Space; Slack Variables; Linear Time-invariant Systems; Low-level Control; Task Space; Constant Curvature; Model-based Control; High Nonlinearity; Linear Time-invariant; Row Block; Benchmark Approaches; End-effector; Kinematic Relations; Modeling, control and learning for soft robots; soft sensors and actuators; data-driven control; predictive control
Abstract: Soft robots offer a unique combination of flexibility, adaptability, and safety, making them well-suited for a diverse range of applications. However, the inherent complexity of soft robots poses great challenges in their modeling and control. In this letter, we present the mechanical design and data-driven control of a pneumatic-driven soft planar robot. Specifically, we employ a data-enabled predictive control (DeePC) strategy that directly utilizes system input/output data to achieve safe and optimal control, eliminating the need for tedious system identification or modeling. In addition, a dimension reduction technique is introduced into the DeePC framework, resulting in significantly enhanced computational efficiency with minimal to no degradation in control performance. Comparative experiments are conducted to validate the efficacy of DeePC in the control of the fabricated soft robot.
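As a rough illustration of the pipeline this abstract describes, the sketch below builds block-Hankel data matrices from collected input/output data, applies an SVD-based dimension reduction to shrink the decision variable, and solves an unconstrained least-squares surrogate of the DeePC problem. The toy LTI plant, horizons, and weights are my own illustrative assumptions, not the letter's soft-robot setup or exact formulation.

```python
import numpy as np

def block_hankel(w, L):
    """Block-Hankel matrix with L block rows from a (T, m) signal."""
    T, m = w.shape
    cols = T - L + 1
    return np.vstack([w[i:i + cols].T for i in range(L)])

rng = np.random.default_rng(0)
# Stand-in 2-state LTI plant (placeholder for the soft robot's I/O behavior).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
T, T_ini, N = 150, 4, 10
u_d = rng.uniform(-1, 1, (T, 1))
y_d = np.zeros((T, 1)); x = np.zeros(2)
for t in range(T):
    y_d[t] = C @ x
    x = A @ x + B @ u_d[t]

Hu = block_hankel(u_d, T_ini + N)
Hy = block_hankel(y_d, T_ini + N)
Up, Uf = Hu[:T_ini], Hu[T_ini:]
Yp, Yf = Hy[:T_ini], Hy[T_ini:]

# SVD-based dimension reduction: replace the wide data matrix with its
# rank-r image, shrinking the decision variable g from Hu.shape[1] to r.
H = np.vstack([Up, Yp, Uf, Yf])
U_svd, S, _ = np.linalg.svd(H, full_matrices=False)
r = int(np.sum(S > 1e-6 * S[0]))          # keep dominant singular values
Hr = U_svd[:, :r] * S[:r]
Up, Yp, Uf, Yf = np.vsplit(Hr, [T_ini, 2 * T_ini, 2 * T_ini + N])

# Unconstrained DeePC surrogate: soft equality constraints on the initial
# trajectory plus tracking and regularization terms, solved by least squares.
u_ini, y_ini = u_d[-T_ini:].ravel(), y_d[-T_ini:].ravel()
ref = np.ones(N)                          # output setpoint
lam_g, lam_u = 0.01, 0.1
Acat = np.vstack([1e3 * Up, 1e3 * Yp, Yf, lam_u * Uf, lam_g * np.eye(r)])
bcat = np.concatenate([1e3 * u_ini, 1e3 * y_ini, ref, np.zeros(N), np.zeros(r)])
g = np.linalg.lstsq(Acat, bcat, rcond=None)[0]
print("first planned input:", round(float((Uf @ g)[0]), 3))
```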
-
Bioinspired Head-to-Shoulder Reference Frame Transformation for Movement-Based Arm Prosthesis Control
Bianca Lento, Vincent Leconte, Lucas Bardisbanian, Emilie Doat, Effie Segas, Aymar de Rugy
Keywords: Prosthetics; Computer vision; Control systems; Transforms; Neurons; Kinematics; Artificial neural networks; Reference Frame; Bioinspired; Prosthesis Control; Reference Frame Transformation; Artificial Neural Network; Computer Vision; Proof Of Concept; Amputation; Arm Movements; Robotic Platform; Distal Joint; Object Pose; Movement Goals; Degrees Of Freedom; Error Of The Mean; Control System; Kinematic; Prediction Error; Upper Body; Inertial Measurement Unit; Participants In Arm; Target Space; Head Orientation; Residual Limb; Movement Time; Shoulder Girdle; Ideal Control; Hand Position; Objective Metrics; Virtual Reality System; Prosthetics and exoskeletons; robust/adaptive control; reference frame transformation; human factors and human-in-the-loop; artificial neural network
Abstract: Movement-based strategies are being explored as alternatives to unsatisfactory myoelectric controls for transhumeral prostheses. We recently showed that adding movement goals to shoulder information enabled Artificial Neural Networks (ANNs), trained on natural arm movements, to predict distal joints so well that transhumeral amputees could reach as well as with their valid arm in Virtual Reality (VR). This control relies on the object's pose in a shoulder-centered reference frame, whereas it might only be available in a head-centered reference frame through gaze-guided computer vision. Here, we designed two methods to perform the required head-to-shoulder transformation from orientation-only data, possibly available in real-life settings. The first involved an ANN trained offline to do this transformation, while the second was based on a bioinspired space map with online adaptation. Experimental results on twelve participants controlling a prosthesis avatar in VR demonstrated persistent errors with the first method, while the second method effectively encoded the transition between the two frames. The effectiveness of this second method was also tested on six transhumeral amputees in VR, and a physical proof of concept was implemented on a teleoperated robotic platform with computer vision. These advances represent necessary steps toward the deployment of movement-based control in real-life scenarios.
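For intuition, here is a minimal sketch of the geometric head-to-shoulder transformation that both methods approximate, assuming orientation-only data for the head and trunk and a fixed neck-to-shoulder lever arm. The Euler convention, offset vector, and function names are my own illustrative choices; the letter learns or adapts this mapping rather than computing it from a calibrated offset.

```python
import numpy as np

def euler_to_R(roll, pitch, yaw):
    """ZYX Euler angles to a rotation matrix."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def head_to_shoulder(p_in_head, R_head, R_trunk, d_shoulder_in_trunk):
    """Re-express a head-frame target in a shoulder-centered frame,
    taking the neck pivot as the common origin."""
    p_world = R_head @ p_in_head                       # head frame -> world
    return R_trunk.T @ p_world - d_shoulder_in_trunk   # world -> shoulder frame

# Example: a target 0.5 m in front of the head, with the head turned 0.6 rad.
R_head = euler_to_R(0.0, 0.3, 0.6)      # head orientation (e.g. from an IMU)
R_trunk = euler_to_R(0.0, 0.0, 0.1)     # trunk / shoulder-girdle orientation
d = np.array([-0.18, 0.0, -0.10])       # shoulder w.r.t. neck, trunk frame (m)
print(head_to_shoulder(np.array([0.5, 0.0, 0.0]), R_head, R_trunk, d))
```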
-
Human-to-Robot Handover Control of an Autonomous Mobile Robot Based on Hand-Masked Object Pose Estimation
Yu-Yun Huang, Kai-Tai Song
Keywords: Manipulators; Handover; Grasping; Robots; Cameras; Task analysis; Pose estimation; Pose Estimation; Human Pose Estimation; Object Pose; Automated Guided Vehicles; Handover Control; Path Planning; Depth Images; Model Predictive Control; Specific Person; Recognition Rate; Hand Region; Unseen Objects; Mobile Manipulator; Coordinate System; Average Error; Object Recognition; Bounding Box; Larger Space; Coordinate Transformation; Inertial Measurement Unit; Target Pose; Human-robot Collaboration; Target Person; Human Hand; Linear Velocity; Single Shot Multibox Detector; Homogeneous Matrix; Robotic Arm; Cm In Width; User's Hand; Collaborative robots in manufacturing; task and motion planning; mobile manipulation; pose estimation
Abstract: This letter presents a human-to-robot handover design for an Autonomous Mobile Robot (AMR). The developed control system enables the AMR to navigate to a specific person and grasp the object that the person wants to hand over. This letter proposes a motion planning algorithm for grasping an unseen object held in the hand. Through hand detection and segmentation, the hand region is masked and removed from the acquired depth image, which is then used to estimate the object pose for grasping. For grasp pose determination, we propose adding the Convolutional Block Attention Module (CBAM) to the Generative Grasping Convolutional Neural Network (GGCNN) model to enhance the recognition rate. For the object-grasp task, the AMR localizes the object in the person's hand and uses a Model Predictive Control (MPC)-based controller to simultaneously control the mobile base and manipulator to grasp the object. A laboratory-developed mobile manipulator, equipped with a 6-DoF TM5M-900 robot arm, is used for experimental verification. The experimental results show an average handover success rate of 81% for five different objects.
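The hand-masking step lends itself to a short sketch: remove segmented hand pixels from the depth image, then back-project the surviving object pixels to estimate a 3-D centroid in the camera frame. The camera intrinsics, the synthetic scene, and the mask source are assumptions (the letter obtains the mask from a hand-detection and segmentation pipeline, and feeds the masked depth to a grasp network rather than a centroid).

```python
import numpy as np

def masked_object_centroid(depth, hand_mask, fx, fy, cx, cy):
    """Remove hand pixels from a depth image, then back-project the
    remaining pixels to estimate the held object's 3-D centroid."""
    d = depth.astype(float).copy()
    d[hand_mask] = 0.0                    # mask out the segmented hand region
    v, u = np.nonzero(d)                  # surviving (object) pixels
    z = d[v, u]
    x = (u - cx) * z / fx                 # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1).mean(axis=0)

# Synthetic example: an 80x80 px object patch at 0.6 m, partly covered by a hand.
depth = np.zeros((480, 640))
depth[200:280, 300:380] = 0.6
hand = np.zeros_like(depth, dtype=bool)
hand[240:280, 300:380] = True
print(masked_object_centroid(depth, hand, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
```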
-
Gravity Compensation Method for Whole Body-Mounted Robot With Contact Force Distribution Sensor
Shinichi Masaoka, Yuki Funabora, Shinji Doki
Keywords: Robots; Robot sensing systems; Gravity; Torque; Force measurement; Assistive robots; Safety; Contact Force; Gravity Compensation; Contact Force Distribution; Tuning Parameter; Compensatory Effect; Robot Model; Robotic Assistance; Wearable Robots; Body Mass Index; Control Method; Center Of Mass; Knee Joint; Effect Of Weight; Hip Joint; Subjective Outcomes; Joint Angles; Geometric Model; Inertial Measurement Unit; Kg Of Weight; Development Of Sensors; Effects Of Robots; Torque Sensor; Conventional Control Methods; Feedforward Control; External Sensors; Position Of The Robot; Sensor Values; Human Body Movements; Robot Joint; Control Parameter Tuning; Wearable robotics; force control; physically assistive devices
Abstract: The emergence of sheet-type force distribution sensors has made it possible to measure contact force directly. We developed a wearable assistive robot that directly measures contact force and investigated the gravity compensation effect of contact-force-based control. Conventional robots that do not directly measure the force acting between the robot and the human body (the contact force) require a precise robot model for gravity compensation, which is difficult to implement in software. In the first experiment, we examined a common conventional method of gravity compensation that uses only joint sensors in torque-based control, and assessed its difficulty. In the next experiment, which involved one healthy subject, we confirmed that contact-force-based control has a significant gravity compensation effect without requiring a rigorous robot model. Experiments with two additional healthy subjects using the same parameters revealed that even rough parameter tuning can produce a gravity compensation effect. This letter not only proposes a simplified gravity compensator for wearable assistive robots but also demonstrates the robustness of parameter tuning in contact-force-based control under static conditions. Building on these findings, we will further study other kinds of disturbance compensation and dynamic conditions in future work.
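The contrast the letter draws can be illustrated with two toy compensation terms: a model-based feedforward torque, which needs mass and center-of-mass estimates, versus a feedback update driven by a measured contact-force distribution, which needs no such model. The gains, taxel layout, and the simple integral-style update below are my own assumptions, not the authors' controller.

```python
import numpy as np

def tau_model_based(m, g, l_com, theta):
    """Conventional feedforward term: needs accurate mass and
    center-of-mass estimates for every link."""
    return m * g * l_com * np.cos(theta)

def tau_contact_based(forces, moment_arms, k_p, tau_prev):
    """Contact-force-based update: adjust the joint torque by the torque
    implied by the sensed force distribution, driving the contact load
    toward zero without a rigorous robot model."""
    tau_contact = float(np.dot(forces, moment_arms))  # net torque on the joint
    return tau_prev + k_p * tau_contact

# Example: three taxels of a force-distribution sheet around a knee joint.
forces = np.array([2.0, 1.5, 0.5])      # N, from the sheet sensor
arms = np.array([0.10, 0.15, 0.20])     # m, taxel moment arms
print("model-based:", tau_model_based(m=3.0, g=9.81, l_com=0.2, theta=0.3))
print("contact-based step:", tau_contact_based(forces, arms, k_p=0.5, tau_prev=0.0))
```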
-
A Frequency-Based Attention Neural Network and Subject-Adaptive Transfer Learning for sEMG Hand Gesture Classification
Phuc Thanh-Thien Nguyen, Shun-Feng Su, Chung-Hsien Kuo
Keywords: Feature extraction; Transfer learning; Transforms; Gesture recognition; Neural networks; Muscles; Fast Fourier transforms; Neural Network; Transfer Learning; Hand Gestures; Fourier Transform; Human-computer Interaction; Recognition Accuracy; Attention Module; Robot Control; Surface Electromyography; Gesture Recognition; sEMG Signals; sEMG Data; Prosthetic Control; Signal Processing; Convolutional Neural Network; Machine Learning Methods; Fast Fourier Transform; Attention Mechanism; Target Domain; Limited Dataset; Hand Gesture Recognition; Discrete Cosine Transform; Short-time Fourier Transform; Discrete Fourier Transform; Channel Attention; Frequency Domain Features; Mode Spectra; Human-robot Interaction; Matthews Correlation Coefficient; Continuous Wavelet Transform; Frequency-based attention; fourier transform; short-time fourier transform; class-imbalanced classification; surface electromyography (sEMG)
Abstract: This study introduces a novel approach for real-time hand gesture classification that integrates a Frequency-based Attention Neural Network (FANN) with subject-adaptive transfer learning, specifically tailored for surface electromyography (sEMG) data. By utilizing the Fourier transform, the proposed methodology leverages the inherent frequency characteristics of sEMG signals to enhance the discriminative features for accurate gesture recognition. Additionally, the subject-adaptive transfer learning strategy is employed to improve model generalization across different individuals. The combination of these techniques results in an effective and versatile system for sEMG-based hand gesture classification, demonstrating promising performance in adapting to individual variability and improving classification accuracy. The proposed method's performance is evaluated and compared with established approaches on the publicly available NinaPro DB5 dataset. Notably, the proposed simple model, coupled with frequency-based attention modules, achieves an accuracy of 89.56% with a quick prediction time of 5 ms, showcasing its potential for dexterous control of robots and bionic hands. The findings of this research contribute to the advancement of gesture recognition systems, particularly in the domains of human-computer interaction and prosthetic control.
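A minimal sketch of the frequency-based attention idea, assuming a channel-gating design: per-channel FFT magnitudes are pooled and squashed into weights that rescale the raw sEMG channels. The pooling, gating, and window sizes here are illustrative guesses, not the FANN architecture from the study.

```python
import numpy as np

def frequency_channel_attention(x):
    """x: (channels, samples) sEMG window. FFT magnitudes are pooled per
    channel and squashed into sigmoid gates that rescale the channels."""
    mag = np.abs(np.fft.rfft(x, axis=1))                    # spectrum per channel
    pooled = mag.mean(axis=1)                               # pool over frequency bins
    gate = 1.0 / (1.0 + np.exp(-(pooled - pooled.mean())))  # sigmoid attention
    return x * gate[:, None]

# Example: 8-channel window of 200 samples (~100 ms at 2 kHz).
rng = np.random.default_rng(0)
window = rng.normal(size=(8, 200))
print(frequency_channel_attention(window).shape)  # (8, 200)
```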
-
GaussianGrasper: 3D Language Gaussian Splatting for Open-Vocabulary Robotic Grasping
Yuhang Zheng, Xiangyu Chen, Yupeng Zheng, Songen Gu, Runyi Yang, Bu Jin, Pengfei Li, Chengliang Zhong, Zengmao Wang, Lina Liu, Chao Yang, Dawei Wang, Zhen Chen, Xiaoxiao Long, Meiqing Wang
Keywords: Three-dimensional displays; Robots; Feature extraction; Grasping; Image reconstruction; Location awareness; Geometry; Robotic Grasping; Language Teaching; Self-supervised Learning; 3D Scene; Gaussian Field; 2D Feature; 3D Gaussian; RGB-D Images; Feature Maps; Point Cloud; Depth Map; Semantic Features; Convex Hull; Memory Usage; Segmentation Map; 3D Representation; Robot Manipulator; 3D Features; Latent Features; Explicit Representation; Surface Normals; 3D Field; Structure From Motion; Multi-view Images; Contrastive Loss; Relevance Score; Normal Map; Open World; End-effector; Physical World; Language-guided robotic manipulation; 3D Gaussian splatting; language feature field
Abstract: Constructing a 3D scene capable of accommodating open-ended language queries is a pivotal pursuit in robotics, as it facilitates robots in executing object manipulations based on human language directives. To achieve this, some research efforts have been dedicated to the development of language-embedded implicit fields. However, implicit fields (e.g., NeRF) encounter limitations due to the need for images from a large number of viewpoints for reconstruction, coupled with their inherent inefficiencies in inference. Furthermore, these methods directly distill patch-level 2D features, leading to ambiguous segmentation boundaries. Thus, we present GaussianGrasper, which uses 3D Gaussian Splatting (3DGS) to explicitly represent the scene as a set of Gaussian primitives and is capable of real-time rendering. Our approach takes RGB-D images from limited viewpoints as input and uses an Efficient Feature Distillation (EFD) module that employs contrastive learning to efficiently distill 2D language embeddings and constrain the consistency of feature embeddings. With the reconstructed geometry of the Gaussian field, our method enables a pre-trained grasping model to generate collision-free grasp pose candidates. Furthermore, we propose a normal-guided grasp module to select the best grasp pose. Through comprehensive real-world experiments, we demonstrate that GaussianGrasper enables robots to accurately locate and grasp objects according to language instructions, providing a new solution for language-guided grasping tasks.
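Two steps in this pipeline reduce to simple scoring operations, sketched below under assumptions: open-vocabulary querying as cosine similarity between per-Gaussian distilled language features and a text embedding, and normal-guided selection as preferring grasps whose approach direction opposes the local surface normal. The embeddings are random stand-ins; the letter distills real 2D language features into the Gaussian field.

```python
import numpy as np

def query_gaussians(lang_feats, text_emb, top_k=100):
    """Cosine-similarity relevance score between per-Gaussian language
    features and a text query; returns the top-k Gaussian indices."""
    f = lang_feats / np.linalg.norm(lang_feats, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    return np.argsort(-(f @ t))[:top_k]

def rank_grasps_by_normal(approach_dirs, surface_normal):
    """Normal-guided selection: prefer grasp candidates whose approach
    direction is most anti-parallel to the local surface normal."""
    n = surface_normal / np.linalg.norm(surface_normal)
    return np.argsort(approach_dirs @ n)   # most anti-parallel first

rng = np.random.default_rng(0)
feats = rng.normal(size=(5000, 512))       # stand-in distilled features
query = rng.normal(size=512)               # stand-in text embedding
print(query_gaussians(feats, query, top_k=5))
```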
-
Learning Multimodal Confidence for Intention Recognition in Human-Robot Interaction
Xiyuan Zhao, Huijun Li, Tianyuan Miao, Xianyi Zhu, Zhikai Wei, Lifen Tan, Aiguo Song
Keywords: Human-robot interaction; Uncertainty; Robots; Reliability; Task analysis; Feature extraction; Vectors; Action Recognition; Human-robot Interaction; Gestures; Fusion Method; Distribution Of Categories; Uncertainty Reduction; Multimodal Learning; Difficulties In Daily Life; Long Short-term Memory; Stochastic Gradient Descent; Sources Of Uncertainty; Complex Scenarios; State Machine; Graph Convolutional Network; Simple Scenario; Fusion Results; Human Intention; Evidence Theory; Decision Level; Human Partner; Batch Learning; Multimodal Interaction; Additional Advice; Decision-level Fusion; Human-robot Collaboration; Lagrange Duality; Reliable Modality; Sum Of Exponentials; Support Vector Machine; Independent Pools; Human factors and human-in-the-loop; multimodal confidence learning for opinion pool; multimodal perception for HRI
Abstract: The rapid development of collaborative robotics has opened a new possibility of helping elderly people who have difficulties in daily life, allowing robots to operate according to specific intentions. However, efficient human-robot cooperation requires natural, accurate, and reliable intention recognition in shared environments. The paramount challenge here is reducing the uncertainty of the multimodal fused intention to be recognized and adaptively reasoning toward a more reliable result under the current interactive conditions. In this letter, we propose a novel learning-based multimodal fusion framework, Batch Multimodal Confidence Learning for Opinion Pool (BMCLOP). Our approach combines a Bayesian multimodal fusion method with a batch confidence learning algorithm to improve accuracy, uncertainty reduction, and success rate under the given interactive conditions. Moreover, this generic and practical multimodal intention recognition framework can easily be extended further. Our target assistive scenarios consider three modalities, gestures, speech, and gaze, all of which produce categorical distributions over the finite set of intentions. The proposed method is validated with a six-DoF robot through extensive experiments and exhibits high performance compared to baselines.
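The fusion primitive behind an opinion pool is compact enough to sketch: a weighted log-linear pool raises each modality's categorical distribution to a confidence exponent and renormalizes. The batch confidence-learning step that sets those weights in BMCLOP is omitted here; the weights below are illustrative assumptions.

```python
import numpy as np

def opinion_pool(dists, weights):
    """dists: (M, K) categorical distributions from M modalities.
    weights: (M,) confidence exponents. Returns fused (K,) distribution."""
    log_p = weights @ np.log(np.clip(dists, 1e-12, 1.0))  # weighted log-linear pool
    p = np.exp(log_p - log_p.max())                       # renormalize stably
    return p / p.sum()

# Example: gesture, speech, and gaze each vote over four intentions.
dists = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.4, 0.4, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.1]])
print(opinion_pool(dists, np.array([1.0, 0.5, 0.8])))
```

A log-linear pool sharpens agreement between modalities, while a low confidence weight flattens an unreliable modality's influence, which is exactly the lever the learned confidences pull on.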
-
Let Me Give You a Hand: Enhancing Human Grasp Force With a Soft Robotic Assistive Glove
Cem Suulker, Alexander Greenway, Sophie Skach, Ildar Farkhatdinov, Stuart Charles Miller, Kaspar Althoefer
Keywords: Actuators; Robots; Force; Thumb; Robot sensing systems; Electromyography; Soft robotics; Enhance Human; Soft Robots; Grasp Force; Robotic Glove; Soft Robotic Glove; Linear Mixed-effects Models; Assistive Technology; User Satisfaction; Daily Tasks; Surface Electromyography; Wearable Devices; Dexterity; Robotic System; Exoskeleton; Force Values; Force Sensor; Electromyography Signals; Test Rig; Robotic Devices; Intensive Tasks; Flexor Digitorum Profundus; Muscle Effort; Hand Size; Pneumatic Pressure; Force Requirements; Soft actuators; soft robot applications; soft robotic glove; wearable robotics
Abstract: Soft robotic gloves are designed to assist individuals with daily tasks that involve grasping. Such devices are, however, often hampered by an inability to generate enough force to perform the tasks for which they were designed. This study evaluates the grasping capabilities of a novel textile soft robotic glove with performance-enhancing integrated elastic band actuators. We conducted a user study with 20 participants to assess the assistive glove's effectiveness. Our novel evaluation method, using surface electromyography sensors to measure muscle activity, enabled us to determine the respective grasping force contributions of the assistive device and the user. Our findings indicate that the device provides consistent grasp assistance across a force range of 20 to 80 Newtons. Average assistance for the fingers was 15.8 Newtons, with a maximum of 33.3 Newtons, while assistance for the thumb averaged 12.4 Newtons, with a maximum of 23.3 Newtons. The results were validated using linear mixed-effects models, demonstrating statistically significant findings with p-values below 0.01. A user satisfaction survey (QUEST 2.0) suggested high perceived value, given its excellent rating of 4.53 out of 5. Overall, these results suggest that the device can make a significant difference, helping users when performing grasping tasks.
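The statistical validation lends itself to a brief sketch: a linear mixed-effects model of muscle effort with a per-participant random intercept, fit here with statsmodels on entirely synthetic placeholder data (the column names, sample sizes, and the built-in -15 N assistance effect are fabricated for illustration and only loosely echo the reported averages).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, participants = 200, 20
df = pd.DataFrame({
    "participant": rng.integers(0, participants, n),
    "target_force": rng.uniform(20, 80, n),   # Newtons, the tested force range
    "glove_on": rng.integers(0, 2, n),        # assistance off / on
})
# Synthetic outcome: muscle effort drops by ~15 N when the glove assists.
df["muscle_force"] = (df.target_force - 15 * df.glove_on
                      + rng.normal(0, 5, n))

# Random intercept per participant, as in a repeated-measures user study.
model = smf.mixedlm("muscle_force ~ target_force + glove_on",
                    df, groups=df["participant"]).fit()
print(model.summary())
```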
-
LECES: A Low-Bandwidth and Efficient Collaborative Exploration System With Distributed Multi-UAV
Tong Zhang, Hao Shen, Yingming Yin, Jianyu Xu, Jiajie Yu, Yongzhou Pan
Keywords: Collaboration; Task analysis; Bandwidth; Autonomous aerial vehicles; Resource management; Robots; Iterative methods; Collaborative System; Efficient Exploration; Unmanned Aerial Vehicles; Global Map; Efficient Allocation; Exploration Strategy; Task Allocation; Communication Bandwidth; Traveling Salesman Problem; Path Length; Shortest Path; Mapping Data; Repulsive Forces; Cluster Centers; Cluster Assignment; Gaussian Mixture Model; Penalty Function; K-means Algorithm; Real-world Experiments; Binary Code; Benchmark Test; Grid Map; Smooth Trajectory; Representation Of The Environment; Topological Map; Central Server; Environment Map; Exploration Time; Centralized Approach; Source Code; Aerial Systems: Applications; Multi-Robot Systems; Search and Rescue Robots
Abstract: Collaborative exploration is a prevailing trend in autonomous exploration by unmanned aerial vehicles (UAVs). However, most collaborative exploration systems rely on excessively high communication bandwidth for precise map maintenance and efficient task allocation. This letter proposes a low-bandwidth and efficient collaborative exploration system for distributed multi-UAV teams. First, a lightweight map fusion method is proposed, based on Binary OctoMap with a sliding cube, to incrementally maintain a consistent global map for all UAVs at low bandwidth cost. Then, an efficient exploration strategy is proposed that decouples the multi-UAV task allocation problem into independent single-UAV Asymmetric Traveling Salesman Problems (ATSPs) based on the consistent global map. Through viewpoint clustering, assignment, and decision, it allows for efficient task allocation without iterative interactions. Experiments are conducted in both simulated and real-world environments. The experimental results demonstrate that our method achieves stable and efficient exploration with low communication bandwidth requirements.
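To make the decoupling idea concrete, here is an illustrative decomposition of the allocation step: candidate viewpoints are assigned to the nearest UAV (a one-shot, k-means-style cluster assignment requiring no iterative negotiation), and each UAV then orders its own viewpoints with a greedy nearest-neighbour tour, a cheap stand-in for the letter's single-UAV ATSP solves. Coordinates and sizes are made-up toy values.

```python
import numpy as np

def greedy_tour(points, start):
    """Greedy nearest-neighbour visiting order, approximating an ATSP tour."""
    order, rest = [], list(range(len(points)))
    cur = start
    while rest:
        nxt = min(rest, key=lambda i: np.linalg.norm(points[i] - cur))
        rest.remove(nxt)
        order.append(nxt)
        cur = points[nxt]
    return order

rng = np.random.default_rng(2)
viewpoints = rng.uniform(0, 50, (30, 2))          # candidate frontier viewpoints
uavs = np.array([[0.0, 0.0], [50.0, 50.0]])       # current UAV positions
# One-shot assignment: each viewpoint goes to the nearest UAV "cluster centre".
owner = np.argmin(np.linalg.norm(viewpoints[:, None] - uavs[None], axis=2), axis=1)
for k, base in enumerate(uavs):
    mine = viewpoints[owner == k]
    print(f"UAV {k} visits {len(mine)} viewpoints in order:", greedy_tour(mine, base))
```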
-
Adaptive Abrupt Disturbance Rejection Tracking Control for Wheeled Mobile Robots
Hao Wu, Shuting Wang, Yuanlong Xie, Hu Li, Shiqi Zheng, Liquan Jiang
Keywords: Switches; Transient analysis; Vectors; Friction; Upper bound; Steady-state; Mobile robots; Disturbance Rejection; Wheeled Robot; Control Of Mobile Robots; Abrupt Disturbances; Transient State; Abrupt Changes; Auxiliary Variables; Sliding Mode Control; Abrupt Transition; Disturbance Observer; Uncertain Disturbances; Switching Law; Root Mean Square Error; Steady State; Maximum And Minimum; Large Errors; Angular Velocity; Control Performance; Positive Constant; Transient Response; Adaptive Sliding Mode Control; Adaptive Gain; Adaptive Law; Tracking Error; Traditional Law; External Disturbances; Error Figure; Nonholonomic Constraints; Normal Error; Changes In Friction; Abrupt changes; disturbance-rejection sliding-mode controller (SMC); observer; wheeled mobile robots (WMRs)
Abstract: Uncertain disturbances increase the difficulty of robust tracking control for wheeled mobile robots (WMRs) in industrial scenarios, especially when the disturbances exhibit abrupt changes. This letter proposes an adaptive abrupt disturbance-rejection sliding mode controller (SMC). To address the increased variability in the disturbance boundaries caused by abrupt transitions, a new adaptive disturbance observer (ADOB) is designed that improves tracking robustness and weakens SMC chattering by generating auxiliary system variables, without depending on any prior boundary information about the disturbance or its rate of change. Then, a novel barrier function-based switching law is constructed to suppress the residual disturbance estimation error of the ADOB in the transient state, achieving a tradeoff between the necessary sufficient gain and chattering by avoiding gain overestimation. The finite-time Lyapunov stability of the sliding variables and the estimation errors is proved theoretically. The practical effectiveness is illustrated in experiments with custom-developed WMRs.
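A one-dimensional sketch of the control idea follows: a sliding-mode term whose gain grows via a barrier function as the sliding variable approaches a prescribed bound (so the gain stays small near the surface and is never overestimated), combined with a crude low-pass disturbance estimate. The toy double-integrator plant, all gains, and the assumption that acceleration is measurable for the observer are simplifications of my own, not the letter's WMR formulation.

```python
import numpy as np

def barrier_gain(s, eps=3.0, k_reach=5.0):
    """Barrier-function gain: small near s = 0 (little chattering), grows
    as |s| approaches the bound eps; constant reaching gain outside."""
    return abs(s) / (eps - abs(s)) if abs(s) < eps else k_reach

e, e_dot, d_hat, dt = 1.0, 0.0, 0.0, 0.01
lam, k_obs = 2.0, 5.0
for t in range(800):
    d = 0.5 if t < 400 else 2.0                   # abrupt disturbance change
    s = e_dot + lam * e                           # sliding variable
    u = -lam * e_dot - d_hat - barrier_gain(s) * np.sign(s)
    e_ddot = u + d                                # toy double-integrator dynamics
    d_hat += k_obs * ((e_ddot - u) - d_hat) * dt  # low-pass disturbance estimate
    e_dot += e_ddot * dt
    e += e_dot * dt
print("tracking error after the abrupt change:", round(e, 4))
```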