-
Anandghan Waghmare: “Beyond Built-In—Expanding What Devices Can Sense”
Lakmal Meegahapola
Keywords: Pervasive computing, Performance evaluation, Energy consumption, Machine learning, Signal processing, Sensors, Glucose, Smart devices, Smart phones, Usability, Cost Reduction, Fasting Plasma Glucose, Mobile App, Wearable, Human-computer Interaction, Original Design, Power Devices, Smart Devices, Blood Glucose Measurements, Inference System, Healthcare Problem, Scalable Solution, Minimum Energy Consumption, Integration Of Learning, Capabilities Of Devices, Hardware Failure, Integration Of Machine Learning, Measured Light Intensity, Lipid Profile Analysis
Abstract: Anandghan Waghmare: My research focuses on extending the sensing capabilities of ubiquitous devices, such as smartphones and smartwatches, beyond their original design. This strategy involves exploring the untapped potential within these increasingly powerful devices to enable functionalities, such as blood glucose measurement, using smartphones or UV light intensity detection through smartwatches. I investigate the integration of advanced signal processing, embedded systems, and machine learning to unlock the hidden sensing capabilities of modern devices. My work primarily pursues two complementary approaches to enhance device functionality while maintaining practicality. The first approach involves developing cost-effective, low-power hardware add-ons that seamlessly integrate with existing devices. These add-ons are designed for affordability, broad compatibility, and minimal energy consumption, ensuring widespread accessibility and ease of use. By leveraging the processing power of the smart device, these add-ons remain simple yet effective, expanding sensing capabilities without necessitating the acquisition of new devices. The second approach focuses on hardware modifications that enhance sensor performance without requiring a complete device redesign. Modifying existing components offers advantages, such as reduced costs and shortened development cycles, and preserves user familiarity. This strategy allows for the unlocking of new functionalities while maintaining the device’s original form, making advanced sensing more commercially viable. Through these strategies, my research aims to develop practical and scalable solutions that augment sensing capabilities, ultimately making smart devices more intelligent and versatile.
-
Toward HabiComp: Ethical Habit Redesign in UbiComp Through Nonverbal Nudging
Yugo Nakamura
Keywords: Ethics, Delays, Gray-scale, Ubiquitous computing, Cultural differences, Real-time systems, Attenuation, Games, Paradigm Shift, Feeding Behavior, Environmental Cues, Compulsive Disorder, Habitual Behavior, Reward Structure, Ubiquitous Computing, Habitual Actions, Invisible, Habituation, Sensory Cues, Scrolling
Abstract: This article envisions HabiComp—a paradigm shift in ubiquitous computing that redefines technology not just as a tool for seamless interaction, but as an ethical architect of habitual behavior. This project has been exploring nonverbal nudging through System 0, an ambient, context-aware cognitive preprocessing layer that supports human cognitive Systems 1 and 2 by recalibrating habitual actions while preserving autonomy. Two case studies illustrate its subtle yet profound impact: one reconfigures eating behavior through environmental cues, to foster mindful consumption, and the other reshapes digital reward structures to ethically disengage users from compulsive behaviors. Building on these insights, this article outlines key challenges and future directions, positioning HabiComp as a guiding framework for the ethical redesign of habits within ubiquitous computing.
-
Pervasive Computing X Software Engineering: Perspectives to ICSE 2024
Ella Peltonen
Keywords: Pervasive computing, Computer science, Software, Cognition, Software engineering, Software Engineering, Pervasive Computing, Internet Of Things, Mobile App, Human-computer Interaction, Bug Reports, Research Prototype, Smartphone
Abstract: Let us call it refreshing to look beyond one’s comfort zone and consider another subfield within computer science. The IEEE/ACM International Conference on Software Engineering is an influential conference. In this column, I report on recent software engineering work related to pervasive computing.
-
Creating Immersive Digital Twins of Terrestrial Planetary Analogs With Multimodal Sensing and Game Engines for Virtual Exploration
Leonie Bensch, Cody Paige, Don D. Haddad, Fangzheng Liu, Nathan Perry, Gerrit Olivier, Jessica Todd, Joseph A. Paradiso
Keywords: Sensors, Digital twins, Three-dimensional displays, Engines, Data visualization, Visualization, Laser radar, Training, Spatial databases, Digital Twin, Game Engine, Multimodal Sensor, Virtually, Geological, Data Visualization, Sensor Data, Environmental Data, 3D Reconstruction, Unmanned Aerial Vehicles, Light Detection And Ranging, Future Performance, Environmental Constraints, Situational Awareness, Photogrammetry, Immersive Experience, Minimal Training, Seismic Data, Head-mounted Display, Immersive Environment, Point Cloud, Field Deployment, Time Series Graphs, Approximate Matching, Manual Alignment, Pipeline Stages, Regulatory Constraints, Valley Glaciers, Immersive Virtual Reality, Spatial Alignment
Abstract: Creating cross-reality applications of planetary analog environments supports scientific exploration and mission planning by offering a safe and cost-effective way to explore remote terrains. We present a pipeline that integrates physical and virtual data through 3D reconstruction, environmental sensing, and interactive real-time rendering in game engines. The approach was validated at two analog sites in Svalbard, Norway, and Lanzarote, Spain, using UAV photogrammetry, smartphone LiDAR, RGB imagery, and environmental and seismic sensors. In Svalbard, we reconstructed water-indicating terrain in Unity3D. In Lanzarote, we visualized a lava tube with integrated seismic and atmospheric data in Unreal Engine. The environments are explorable in both desktop and VR modes. By combining consumer hardware with multimodal sensing, we demonstrate a flexible method for generating immersive digital twins. We discuss low-cost tools for analog fieldwork, outline design considerations for integration and visualization, and provide recommendations for future cross-reality deployments in science and exploration contexts.
-
MetaGadget: An Accessible Framework for IoT Integration Into Commercial Metaverse Platforms
Ryutaro Kurai, Hikari Yanagawa, Yuichi Hiroi, Takefumi Hiraki
Keywords: Metaverse, Internet of Things, Servers, Switches, Aerospace electronics, Control systems, HTTP, Digital twins, Internet Of Things, Commercial Platforms, Virtually, Prototype, Technical Challenges, Physical Space, Development Of Applications, Technical Skills, Virtual World, Communication Protocol, Control Devices, Internet Of Things Devices, Smart Home, Integration Of Devices, Raspberry Pi, Technical Barriers, Educational Applications, Physical Devices, Digital Twin, Virtual Reality Experience, New Forms Of Interaction, Interactive System, Development Environment, Game Experience, Multiple Spaces, Traffic Light, Error Handling, JSON Format, Connectivity Patterns
Abstract: While the integration of Internet of Things (IoT) devices in virtual spaces is becoming increasingly common, technical barriers to controlling custom devices in multiuser virtual reality (VR) environments remain high, particularly limiting new applications in educational and prototyping settings. We propose MetaGadget, a framework for connecting IoT devices to commercial metaverse platforms that implements device control through HTTP-based event triggers without requiring persistent client connections. Through two workshops focused on smart home control and custom device integration, we explored the potential application of IoT connectivity in multiuser metaverse environments. Participants successfully implemented new interactions unique to the metaverse, such as environmental sensing and remote control systems that support simultaneous operation by multiple users, and reported positive feedback on the ease of system development. We verified that our framework offers a new approach to controlling IoT devices in the metaverse while reducing technical requirements, and that it provides a foundation for creative practice connecting multiuser VR environments and physical spaces.
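The MetaGadget abstract describes device control via stateless HTTP-based event triggers rather than persistent client connections: each metaverse event arrives as an independent request and is dispatched to a device handler. As a rough illustration of that pattern only (the function names, event names, and JSON shape below are hypothetical, not MetaGadget's actual API), the dispatch side might be sketched as:

```python
import json

# Hypothetical sketch: a registry mapping metaverse event names to device
# actions. Each trigger arrives as an independent JSON body (e.g., the
# payload of an HTTP POST), so no persistent client connection is needed.
HANDLERS = {}

def on_event(name):
    """Register a handler function for a named metaverse event trigger."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@on_event("light_switch")
def toggle_light(payload):
    # On a real device this would drive a GPIO pin (e.g., on a Raspberry Pi);
    # here we just echo the requested state back.
    return {"device": "light", "state": payload.get("state", "off")}

def handle_trigger(body):
    """Decode one JSON event body and dispatch it to the registered handler."""
    event = json.loads(body)
    handler = HANDLERS.get(event.get("event"))
    if handler is None:
        return {"error": "unknown event"}
    return handler(event.get("payload", {}))
```

Because every trigger is self-contained, a device can be added to a shared virtual world without keeping a VR client session alive on its behalf, which is the connectivity property the abstract emphasizes.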
-
Global-to-Local Decision Intelligence Using a Cross-Reality VR Platform and Satellite Earth Observation
Minoo Rathnasabapathy, Rachel Connolly, Lucas De Bonet, Dava Newman
Keywords: Electromagnetic compatibility, Data visualization, Stakeholders, Satellites, Three-dimensional displays, Media, Decision making, Virtual environments, Computer architecture, Climate change, Satellite Images, Earth Observation, Virtually, Climate Change, Feedback Loop, Emotional Responses, Data Integration, Data Visualization, Global Climate, Decision Support, Local Scale, Geographic Information System, Climate Models, Climate Change Mitigation, Global Data, Climate Data, Global Trends, Wildfire, Human-computer Interaction, Sea Level Rise, User Testing, Virtual Setting, Internet Of Things, Virtual Reality Experience, Global Policy, Global Change, Desktop Application, Stakeholder Groups, Discovery Data, Climate Action
Abstract: The integration of physical, digital, and immersive modalities into decision intelligence is transforming how complex challenges, such as climate change, are addressed. For example, visualizing rising sea levels in coastal communities can help provide actionable insights to urban planners for adaptation and mitigation efforts. Cross-reality platforms that combine multimodalities offer critical progress in overcoming barriers of data accessibility for climate decision-making by providing actionable insights and fostering collaboration among diverse stakeholders. This approach informs the design of Earth Mission Control (EMC), an immersive decision-support platform that integrates satellite Earth observation data into virtual reality and physical environments, allowing users to interact with critical climate insights across its multimodal features, such as hyperwalls, 3D map tables, and interactive dashboards. This article highlights EMC's capabilities through its multiuser and remote access features and provides examples of hyperlocal climate storyboards (vignettes) that connect global climate trends to localized impacts.
-
SurgiKLAR: An Augmented Reality Framework for Improved Surgical Training
Puspamita Banerjee, Pooja P Jain, Jaswanth George, Chetan Shirvankar, Subhamoy Mandal
Keywords: Surgery, Training, Three-dimensional printing, Solid modeling, Laparoscopes, Real-time systems, Videos, Minimally invasive surgery, Instruments, Surgical Training, Medical Students, User Study, Less Experienced, Surgical Outcomes, Minimally Invasive Surgery, Obstetrics And Gynecology, Real-time Tracking, Virtual Simulation, Live Video, Real-time Visualization, Digital Twin, Anatomical Model, Patient-specific Models, Laparoscopic Cholecystectomy, Mixed Reality, Growth In Recent Years, Augmented Reality Technology, Augmented Reality System, Immersive Learning, Interactive System, Augmented Reality Applications, Laparoscopic Surgery, Real-time Instrument, Gynecological Procedures, Anatomical Variations, Magnetic Resonance Imaging, Graphical User Interface, Instrument Detection, Availability Of Equipment
Abstract: Augmented reality (AR) technology is transforming minimally invasive surgeries by merging physical and virtual spaces to improve intraoperative guidance and training. We present SurgiKLAR, a mixed-reality framework for improved surgical training that clearly (“klar”) visualizes segmented anatomies from preoperative scans, coregistered with patient-specific models, to simulate surgical procedures. The system is adapted for gynecology surgeries, featuring realistic uterine models and instruments for adaptive simulations, personalized guidance, and real-time alerts. Preliminary evaluations with a 3D-printed phantom demonstrated its potential application in preplanning complex scenarios. We conducted a two-phase user study to evaluate the usability of the SurgiKLAR system. Phase I assessed AR adoption challenges, and Phase II validated usability for training. The user feedback highlighted the importance of accurate visualizations and the need for extensive training programs. Future work involves an extension to diverse surgical domains, improved precision, and enhanced safety, thereby highlighting the transformative potential of cross-reality in surgery.
-
Mixed-Reality System for Neurodegenerative Disorders: Design, Implementation, and User Evaluation
Daria Hemmerling, Paweł Jemioło, Mateusz Danioł, Marek Wodziński, Miłosz Dudek, Marta Kaczmarska, Kinga Jasiewicz, Magdalena Igras-Cybulska, Jakub Kamiński, Magdalena Wojcik-Pędziwiatr
Keywords: Diseases, Monitoring, Feature extraction, Usability, Tracking, Medical diagnostic imaging, Real-time systems, Motors, Support vector machines, Neurodegenerative Diseases, Usability Evaluation, Mixed Reality System, Cognitive Function, Clinical Settings, Subjectivity, High Scores, Early Stages Of The Disease, User Experience, Motor Skills, Sensor Data, Clinical Context, Huntington’s Disease, Dexterity, Present Challenges, Disease Monitoring, System Usability, Dyskinesia, Hand Movements, Diagnostic Results, Amyotrophic Lateral Sclerosis, Presence Questionnaire, Healthy Group, Montreal Cognitive Assessment, Progressive Supranuclear Palsy, Eye Movements, Iterative Refinement, Machine Learning Models, Inertial Measurement Unit, Movement Speed
Abstract: This study introduces an innovative multimodal system designed to support the assessment of Parkinson’s disease, atypical Parkinsonian syndromes, and Huntington’s disease by integrating mixed-reality (MR) technology, specifically using augmented reality goggles. The system captures symptoms related to neurodegenerative disorders through detailed sensor data, including hand movements, gait, and gaze patterns. Central to our methodology is an interactive, game-like experience lasting 30 min, comprising 17 tasks that evaluate motor skills, speech, memory, cognitive functions, gait, and gaze. These tasks were developed in collaboration with neurologists and draw upon established clinical paradigms. This engaging experience, combined with traditional clinical tests, provides a comprehensive assessment of both physical and cognitive capabilities. Rather than aiming to provide a diagnostic decision, this study focuses on evaluating the feasibility and usability of the system in a clinical setting. By automating diagnostic tasks and utilizing MR technology, our approach enhances patient comfort and offers a more immersive and efficient assessment process. The system was evaluated with 15 neurodegenerative patients and 32 healthy subjects, demonstrating high scores in both immersion and usability across both groups. This work represents a first step toward integrating MR into clinical workflows for neurodegenerative disease assessment.
-
Call for Papers: IEEE Pervasive Computing
-
Ethical Considerations of Extended Reality in the Workplace
Marios Constantinides, Fotis Liarokapis
Keywords: Ethics, Employment, Artificial intelligence, Privacy, Safety, Psychology, Surveillance, Systematic literature review, Medical services, Interactive, Accountability, Conceptual Framework, Psychological Well-being, Emotion Recognition, Webinars, Digital Content, Ethical Challenges, Lack Of Validation, Underrepresented Groups, Immersive Experience, Ethical Framework, Physical Safety, Professional Environment, Biometric Data, Workplace Settings, Deployment Of Technologies, Psychological Safety, IEEE Xplore, Mixed Reality, Digital Twin, Gap In The Literature, Motion Sickness, Emotional Fatigue, Ethical Dimensions, Cognitive Load, Low-resource Environments, Implications Of Technologies, Inclusive Design, Bilingual
Abstract: As extended reality (XR) technologies are increasingly adopted in workplaces, ethical concerns such as privacy, equity, and surveillance must be addressed to ensure their responsible deployment. The current mapping of this research space has narrowly focused on issues such as privacy or safety, failing to provide a comprehensive view of ethical challenges across different XR technologies and workplace contexts. We surveyed 39 papers and contextualized them using the widely used National Institute of Standards and Technology artificial intelligence ethical framework, along the dimensions of privacy and surveillance, accountability and governance, transparency and explainability, fairness and inclusivity, and safety and psychological well-being. Our findings highlight that most papers focus largely on conceptual frameworks over practical implementations, and point to a lack of empirical validation of ethical frameworks and insufficient attention to underrepresented groups.