-
A Brief Survey and Implementation on Network-Level Intent Decomposition for Telecommunication Networks
Yao Wang, Chungang Yang, Yijun Yu, Yexing Li, Dong Li, Xiaoxue Zhao, Zhu Han
Keywords: Optimization, Ontologies, Unified modeling language, Surveys, Analytical models, Couplings, Adaptation models, Translation, Logic, Automation, Telecommunication network management, Service-oriented architecture, Communication Network, Running Time, Adjustable Parameters, Intelligent Systems, Network Operators, Configuration Parameters, Network Management, Service Requirements, Network Elements, Coupling Problem, Optimization Problem, Optimization Algorithm, Long Short-term Memory, Network Size, Base Station, Multi-objective Optimization, Quality Of Experience, Optimal Operation, Network Resources, Temporal Model, Decomposition Layers, Deep Q-network, Domain-specific Languages, Virtual Network Functions, Non-functional Requirements, Ontology Network, Policy Conflict, Multi-objective Optimization Problem, Type Of Conflict, Unified Modeling Language
Abstract: With the rapid development of telecommunication networks, service requirements and on-demand network configurations have become increasingly complicated, and the number of network elements and configurable network parameters has grown enormously. Network operators cannot effectively handle the coupling problems among these adjustable parameters, so intent-based network management is essential for addressing such couplings and dependencies. However, analyzing the relationships between network-level intents and these adjustable parameters remains a significant challenge for network operators. Network-level intent decomposition (NID) can reduce manual operations and raise the degree of automation; however, there is neither a comprehensive survey nor a clear methodology for complex NID. In this article, to overcome the long manual configuration cycles and inflexible policy scheduling that hamper intelligent network control, we clarify the concept of NID and provide a precise classification of the NID framework. We present a comprehensive survey of NID from both the design-time and run-time perspectives: design time covers intent modeling, intent decomposition, and intent validation, while run time is mainly oriented to intent optimization. We then present an implementation framework for NID. Finally, we design an intelligent NID system for an energy-aware open radio access network and demonstrate its feasibility and effectiveness.
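As an illustrative sketch of the decomposition step the abstract describes, the toy snippet below maps a network-level intent to per-element configuration actions through a hand-written rule table. The intent names, network elements, and parameters are hypothetical assumptions for illustration, not the article's actual NID framework.

```python
# Toy network-level intent decomposition (NID) step. All intent names,
# elements, and parameter values below are illustrative assumptions.

DECOMPOSITION_RULES = {
    # network-level intent -> list of (network element, parameter, value)
    "reduce_energy_consumption": [
        ("base_station", "sleep_mode", "enabled"),
        ("base_station", "tx_power_dbm", 30),
    ],
    "guarantee_low_latency": [
        ("core_router", "queue_policy", "priority"),
        ("base_station", "scheduler", "delay_aware"),
    ],
}

def decompose(intent: str):
    """Map one network-level intent to per-element configuration actions."""
    if intent not in DECOMPOSITION_RULES:
        raise ValueError(f"no decomposition rule for intent: {intent}")
    return DECOMPOSITION_RULES[intent]

actions = decompose("reduce_energy_consumption")
print(actions)
```

A real NID system would replace the static rule table with the learned or ontology-driven decomposition layers the article surveys; the sketch only fixes the interface (intent in, parameter actions out).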
-
Enabling Service-Oriented Programming for Multi-User in Polymorphic Network
Zixi Cui, Le Tian, Peng Yi, Yuxiang Hu, Jiangxing Wu
Keywords: Personal protective equipment, Programming, Hardware, Runtime, Logic, Program processors, Codes, Field programmable gate arrays, Switches, Protocols, Service-oriented architecture, Open Issues, Heterogeneous Devices, Hardware Details, Paradigm Shift, Programming Model, Network Operators, Network Services, Network Applications, Network Resources, Usage In Applications, Hardware Resources, Individual Devices, Intermediate Representation, Hardware Devices, Control Plane, Device Programming, Domain-specific Languages, Virtual Network Functions, Policy Rule, Cache Hit
Abstract: Polymorphic networking (PINet) aims to support the coexistence and evolution of diverse multi-user services in a unified programmable environment. An efficient programming system is crucial to realizing PINet, as it is the means by which network programs are deployed on the underlying heterogeneous devices. This article presents PINet's programming environment (PPE), a service-oriented programming environment for PINet with three major goals: incremental, application-level, and coordination programming. PPE provides an end-to-end service abstraction that allows programmers to express packet-processing logic (e.g., forwarding and computing) without concern for the network topology and hardware details. PPE also proposes a network-wide compiler system with a hierarchical architecture to deploy out-of-the-box services for multiple users. We describe in detail PPE's goals, workflow, and the challenges that motivate it, and discuss implementation details and open issues as future research directions.
-
Programmable Data Planes for Increased Digital Resilience in OT Networks
Filip Holik, Marco M. Cook, Xicheng Li, Awais Aziz Shah, Dimitrios Pezaros
Keywords: Switches, Computer architecture, Security, Computer security, Resilience, Safety, Performance evaluation, Data centers, Logic, Critical infrastructure, Industrial control, Programmable Data Plane, Digital Resilience, Data Center, Critical Infrastructure, Functional Flexibility, Security Functions, Learning Algorithms, Center For Control, Educational Settings, Internet Of Things, Network Topology, Normal Operation, Traditional Network, Anomaly Detection, Industrial Environment, Software Implementation, Smart Grid, Network Devices, Edge Devices, Control Plane, Programmable Logic Controllers, Industrial Internet Of Things, Packet Processing, Flow Table, Flexible Deployment, Network Technology
Abstract: Critical national infrastructure is heavily reliant on operational technology (OT) to monitor and control physical industrial processes. Despite the importance of OT systems, they have inherently been designed to maintain safety assurance rather than mitigate security threats. Recent advances in programmable networks offer highly customizable and flexible solutions for cybersecurity. However, the integration of data plane and network stack programmability into OT environments has not been fully explored. The conventional programmable data plane (PDP) approach based on the P4 language and the reconfigurable match-action tables (RMT) switch model has proven highly effective for data center environments. However, performance is subject to specific target implementations, while the architectural model lacks inherent support for the flexible and stateful functionality required by cybersecurity use cases. Alternative (host) network programmability paradigms, such as the extended Berkeley Packet Filter (eBPF), are gaining traction as more suitable for OT use cases. In this article, we explore the suitability of PDP and offload technologies for improved digital resilience in OT environments. We provide a comprehensive cross-comparison of PDP solutions and advocate that eBPF-based network programmability can achieve flexible, dynamic, and low-cost in-network functions for digital security and resilience. We present an eBPF-based proof-of-concept asset discovery service that is compared with alternative solutions. Furthermore, we discuss the OT network security functions enabled through eBPF to address potential future research directions.
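To make the asset-discovery idea concrete, the sketch below reproduces its core logic in plain Python: build a device inventory purely from passively observed packet metadata. This is an illustrative stand-in only; the article's actual proof of concept runs as an eBPF program in the kernel datapath, and the packet fields and protocol names here are assumptions.

```python
# Illustrative logic of a passive asset-discovery service (Python stand-in
# for an in-kernel eBPF program). Packet tuples below are assumed inputs.

def discover_assets(packets):
    """Build an asset inventory from observed (src_mac, src_ip, proto) tuples."""
    inventory = {}
    for mac, ip, proto in packets:
        entry = inventory.setdefault(mac, {"ips": set(), "protos": set()})
        entry["ips"].add(ip)      # track every IP seen for this device
        entry["protos"].add(proto)  # industrial protocols hint at device role
    return inventory

packets = [
    ("aa:bb", "10.0.0.1", "modbus"),
    ("aa:bb", "10.0.0.1", "arp"),
    ("cc:dd", "10.0.0.2", "enip"),
]
inv = discover_assets(packets)
print(sorted(inv))
```

The appeal of doing this in eBPF, as the article argues, is that the same per-packet map update runs at line rate inside the kernel without mirroring traffic to a separate monitoring host.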
-
GENIO: Synergizing Edge Computing with Optical Network Infrastructures
Carmine Cesarano, Alessio Foggia, Gianluca Roscigno, Luca Andreani, Roberto Natella
Keywords: Cloud computing, Computer architecture, Monitoring, Passive optical networks, Edge computing, Software, Servers, Optical network units, Security, Telecommunication network management, Low latency communication, Network Infrastructure, Edge Computing, Optical Networks, End-users, Computational Resources, Central Office, Resource Management, Simulation Environment, Internet Of Things Devices, Smart City, Access Network, Computational Capabilities, Software Components, Storage Capability, Hardware Components, Internet Service Providers, Threat Model, Edge Server, Control Plane, Cloud Layer, Edge Layer, Virtual Network Functions, Smart Buildings, Telecom Operators, Physical Nodes, Commercial Off-the-shelf, Key Performance Indicators, Task Scheduling, Cloud Computing, Low Latency
Abstract: Edge computing has emerged as a paradigm for bringing low-latency and bandwidth-intensive applications close to end-users. However, edge computing platforms still face challenges related to resource constraints, connectivity, and security. We present GENIO, a novel platform that integrates edge computing within existing passive optical network (PON) infrastructures. GENIO enhances central offices with computational and storage resources, enabling telecom operators to leverage their existing PON networks as a distributed edge computing infrastructure. Through simulations, we show the feasibility of GENIO in supporting real-world edge scenarios and its better performance compared to a traditional edge computing architecture.
-
Large Language Models for Zero Touch Network Configuration Management
Oscar G. Lira, Oscar M. Caicedo, Nelson L. S. da Fonseca
Keywords: Autonomous networks, Transformers, Training, Data models, Large language models, Analytical models, Adaptation models, Prompt engineering, Knowledge engineering, Generators, Network Configuration, Large Language Models, Zero-touch Network, Zero Touch, Natural Language, Human Intervention, Network Management, Network Devices, Minimal Human Intervention, Deep Neural Network, Large Networks, Ability Of The Model, Hallucinations, Anomaly Detection, Language Understanding, Verification Process, Self-attention Mechanism, Device Configuration, User Requests, Translation Task, Network Administrators, Assistant Role, Syntax Errors, Autonomic Network, Management Domain, JavaScript Object Notation
Abstract: The zero-touch network and service management (ZSM) paradigm is a problem-solving approach that responds directly to the increasing complexity of communication networks. In this article, taking advantage of recent advances in generative artificial intelligence, we introduce the network configuration generator (LLM-NetCFG), which architects ZSM configuration agents using large language models (LLMs). LLM-NetCFG can automatically generate configurations, verify them, and configure network devices based on intents expressed in natural language. We also show the automation and verification of network configurations with minimal human intervention. Moreover, we explore the opportunities and challenges of integrating LLMs into the functional areas of network management to fully achieve ZSM.
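The generate-then-verify loop the abstract describes can be sketched in a few lines. The snippet below is a hedged illustration, not LLM-NetCFG itself: the `stub_llm` function stands in for a real model API call, and the configuration schema and validity rules are invented for the example.

```python
import json

# Hedged sketch of an LLM-driven generate/verify/retry configuration loop.
# stub_llm is a placeholder for a real model call; the JSON schema and
# checks are illustrative assumptions.

def stub_llm(intent: str) -> str:
    # Pretend the model translates a natural-language intent into JSON.
    return json.dumps({"interface": "eth0", "vlan": 100, "mtu": 1500})

def verify(config: dict) -> bool:
    # Minimal syntactic/semantic checks before pushing to a device.
    return (
        isinstance(config.get("vlan"), int)
        and 1 <= config["vlan"] <= 4094
        and config.get("mtu", 0) >= 576
    )

def configure(intent: str, llm=stub_llm, max_retries=3):
    for _ in range(max_retries):
        try:
            config = json.loads(llm(intent))
        except json.JSONDecodeError:
            continue  # hallucinated / malformed output: ask the model again
        if verify(config):
            return config
    raise RuntimeError("could not produce a valid configuration")

cfg = configure("put the office port on VLAN 100")
print(cfg)
```

The verification step is the load-bearing part: because model output can contain syntax errors or hallucinated fields, nothing reaches a device until an independent checker accepts it.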
-
Making In-Network Computing a Shared Infrastructure in Datacenters: Architecture, Challenges and Open Issues
Dong Zhou, Shuo Wang, Peiyuan Lin, Siyu Han, Tao Huang
Keywords: Computer architecture, Servers, Performance evaluation, Throughput, Runtime, Programming, Training, Switches, Machine learning, Low latency communication, Programmable control, Open Issues, Shared Infrastructure, In-network Computing, Key Insights, Multiple Applications, Resource Efficiency, Caching, Network Devices, Practical Deployment, Device Programming, Deployment Of Applications, Isolation Problems, Typical Use Case, Distributed Machine Learning, Internet Of Things, Distribution System, Processing Unit, Fault-tolerant, Low Latency, Control Plane, Packet Processing, Resource Block, Parameter Server, Logical Process, Network Path, Consensus Protocol, Round-trip Time, Edge Computing, Demand For Applications
Abstract: Emerging programmable network devices spawn the trend of offloading computations into the network. Nowadays, in-network computing (INC) has been proven to be an essential way to break the performance bottleneck of distributed applications in datacenters. Various applications, such as distributed machine learning, consensus, and caching, have achieved orders-of-magnitude performance improvements via INC. Currently, optimizations of specific applications with INC are widely discussed, but the widespread use of INC is challenging due to the lack of an architecture for deploying multiple applications on a shared INC infrastructure. These problems narrow the generality and practicability of INC when it is deployed in real environments. In this article, we analyze the requirements and challenges of practical INC deployment in datacenters and present our key insights. We propose the shared in-network computing network (SINCN) architecture to enable the practical deployment of multiple applications on a shared INC infrastructure, which aims to extend the use of INC and let more applications benefit from it. In SINCN, INC primitives, programming abstractions, and runtime programmable technology are jointly leveraged to solve the problems of generality, efficiency, flexibility, and isolation. In addition, we identify technical challenges that arise in real deployments and present potential open issues. Finally, a typical use case with SINCN is introduced, and simulation results show the advantages of SINCN in terms of resource efficiency.
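To illustrate the classic INC use case the abstract cites for distributed machine learning, the sketch below models a switch that sums gradient fragments from workers so the parameter server receives one aggregated packet instead of N. It is purely illustrative of the idea; real designs implement this in match-action hardware with fixed-size on-switch resource blocks.

```python
# Toy in-network gradient aggregation. The "switch" sums contributions per
# aggregation slot and forwards the result once every worker has reported.

class AggregatingSwitch:
    def __init__(self, num_workers: int):
        self.num_workers = num_workers
        self.buffer = {}  # slot id -> (count, partial sums)

    def receive(self, slot: int, gradient: list):
        count, sums = self.buffer.get(slot, (0, [0.0] * len(gradient)))
        sums = [s + g for s, g in zip(sums, gradient)]
        count += 1
        if count == self.num_workers:   # all fragments have arrived:
            del self.buffer[slot]       # free the on-switch resource block
            return sums                 # forward the aggregate upstream
        self.buffer[slot] = (count, sums)
        return None                     # still waiting for other workers

switch = AggregatingSwitch(num_workers=3)
first = switch.receive(0, [1.0, 2.0])
second = switch.receive(0, [1.0, 2.0])
final = switch.receive(0, [1.0, 2.0])
print(first, second, final)
```

The isolation problem SINCN targets appears as soon as two applications want `buffer` slots on the same device: the architecture must arbitrate who gets which resource blocks.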
-
MultiAgentNetSim: Empowering Next-Generation Network Modeling with Multi-Agent Simulation
Joshua Shakya, Morgan Chopin, Leila Merghem-Boulahia
Keywords: 5G mobile communication, Data models, Analytical models, Pricing, Complexity theory, Vehicle dynamics, Load modeling, Heuristic algorithms, Next generation networking, Resource management, Simulation, Multi-agent, Complex Network, Realistic Simulation, Network Management, Dynamic Pricing, Pricing Strategy, Network Scenarios, Modern Networks, Network Slicing, Resource Allocation, Service Quality, Dynamic Network, Simulation Parameters, Base Station, Potential Avenues, Mobile Users, Network Behavior, Real-time Conditions, Emergent Behavior, Base Classes, Network Of Agents, Digital Twin, Admission Control, Mobile Network Operators, Mobility Model, Business Decisions, Radio Access Network, Geospatial Model, Traffic Model, Network Congestion
Abstract: The increasing complexity of next-generation networks necessitates advanced simulation techniques, with multi-agent simulation (MAS) emerging as a highly effective solution. MAS enables a thorough analysis of the micro-level interactions and intricate interdependencies inherent in modern networks, along with their implications: complexities that traditional simulation methods are increasingly unable to address effectively. Motivated by this potential, MultiAgentNetSim is proposed, founded on the principles of MAS. A key feature of MultiAgentNetSim is its capacity to facilitate realistic simulations of complex network scenarios, with network slicing serving as a prominent example explored within this article. Beyond serving as a simulation environment, MultiAgentNetSim functions as a decision-support tool, providing dynamic inputs for algorithm training and a robust framework for evaluating algorithmic performance. A notable example is its integration with a dynamic pricing algorithm within the network slicing scenario. The simulation inputs deliver expressive and realistic data, enabling a more informed pricing strategy. This strategy, explored through the metric of operator profit, can be assessed across multiple metrics and continuously optimized using feedback from the platform, making it an invaluable asset in modern network management.
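The interplay of user agents and a pricing agent can be sketched in miniature. The snippet below is a hedged illustration of the MAS idea, not MultiAgentNetSim: agent behaviors, the pricing feedback rule, and all numbers are invented for the example.

```python
import random

# Toy multi-agent loop: users decide whether to buy a slice at the quoted
# price; the operator's pricing agent adapts the price to observed demand.
# All behaviors and parameters below are illustrative assumptions.

class UserAgent:
    def __init__(self, willingness_to_pay: float):
        self.wtp = willingness_to_pay

    def requests_slice(self, price: float) -> bool:
        return price <= self.wtp

class PricingAgent:
    def __init__(self, price: float = 10.0, step: float = 0.5):
        self.price, self.step = price, step

    def update(self, demand: int, capacity: int):
        # Simple feedback rule: raise price under congestion, lower it
        # when capacity sits idle.
        if demand > capacity:
            self.price += self.step
        elif demand < capacity:
            self.price = max(self.step, self.price - self.step)

def simulate(users, operator, capacity, rounds=50):
    profit = 0.0
    for _ in range(rounds):
        buyers = [u for u in users if u.requests_slice(operator.price)]
        served = buyers[:capacity]        # admission control by capacity
        profit += operator.price * len(served)
        operator.update(len(buyers), capacity)
    return profit

random.seed(0)
users = [UserAgent(random.uniform(5, 20)) for _ in range(30)]
operator = PricingAgent()
profit = simulate(users, operator, capacity=10)
print(round(profit, 1), round(operator.price, 1))
```

Even this toy shows the emergent-behavior point: operator profit is a property of the whole agent population's interactions, which is exactly what a MAS platform makes observable and optimizable.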
-
When Machine Learning Meets Knowledge Graph: A New Vision for Designing Network Intelligent Optimization Pipelines and Rules
Lei Feng, Mingwan Qin, Fanqin Zhou, Zhixiang Yang, Wenjing Li
Keywords: Optimization, Location awareness, Knowledge engineering, Decision making, Data models, Anomaly detection, Reliability, Accuracy, Training, Machine learning, Knowledge graphs, Intelligent networks, Machine Learning, Optimal Rule, Intelligent Optimization, Machine Learning Models, Mobile Network, Optimal Network, Feature Engineering, Problem In Networks, Network Management, Operation And Maintenance, Decision-making Algorithm, Performance Of Machine Learning Models, Maintenance Of Networks, Neural Network, Scalable, Convolutional Neural Network, Support Vector Machine, Cognitive Domains, Decision Tree, F1 Score, Double Deep Q-network, Graph Neural Networks, Anomaly Detection, Wireless Networks, Base Station, Network Performance, Model Training Phase, Generative Adversarial Networks, Predictive Capability, Knowledge Extraction
Abstract: With the continuous development of mobile communication networks, machine learning (ML) significantly saves on labor costs and enhances the efficiency of network operations and maintenance through automated decision-making and predictive analysis. However, ML-based network intelligent optimization is usually poorly interpretable and has difficulty capturing correlations when faced with vague network problems or complex data dimensions. In addition, its conclusions are not easily trusted without further validation, which falls short of the reliability requirements of network operations and maintenance. This article provides a comprehensive review of knowledge graph (KG) applications in network intelligent optimization, proposing a novel KG-driven framework to enhance ML-based optimization pipelines. By leveraging the KG to guide feature engineering, the proposed framework improves ML model performance and augments the interpretability of network optimization. Finally, the feasibility and effectiveness of the proposed framework are validated through a case study on KG-driven network coverage optimization, demonstrating its potential for advancing intelligent and trustworthy network management.
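The "KG guides feature engineering" idea can be made concrete with a toy graph. In the sketch below, a handful of (subject, influences, object) triples link configuration parameters to KPIs, and a small traversal selects the features connected to a target KPI. The specific triples and hop depth are illustrative assumptions, not the article's graph.

```python
# Toy knowledge-graph-guided feature selection for a coverage-optimization
# model. The triples below are invented examples of domain knowledge.

KG = {
    # (subject, relation, object): which factors influence which KPIs
    ("antenna_tilt", "influences", "coverage"),
    ("tx_power", "influences", "coverage"),
    ("neighbor_list", "influences", "handover_success"),
    ("scheduler_type", "influences", "throughput"),
    ("tx_power", "influences", "interference"),
    ("interference", "influences", "coverage"),
}

def kg_features(target_kpi: str, depth: int = 2):
    """Collect nodes connected to the target KPI within `depth` hops."""
    selected, frontier = set(), {target_kpi}
    for _ in range(depth):
        frontier = {s for (s, r, o) in KG if r == "influences" and o in frontier}
        selected |= frontier  # includes intermediate factors (e.g. interference)
    return sorted(selected)

feats = kg_features("coverage")
print(feats)
```

The payoff mirrors the article's argument: features enter the model because a stated domain relationship connects them to the KPI, so the resulting pipeline can explain *why* each input was used, unlike purely statistical feature selection.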
-
Machine Learning and Wi-Fi: Unveiling the Path Toward AI/ML-Native IEEE 802.11 Networks
Francesc Wilhelmi, Szymon Szott, Katarzyna Kosek-Szott, Boris Bellalta
Keywords: Wireless fidelity, Artificial intelligence, IEEE 802.11 Standard, Computational modeling, 3GPP, Costs, Standards, Data models, Protocols, Computer architecture, Machine learning, Machine Learning, Enablers, Standardization Efforts, Stages Of Adoption, Data Processing, Data Model, Machine Learning Methods, Machine Learning Models, Interoperability, Network State, Computational Capabilities, Deep Reinforcement Learning, Channel Access, Roaming, Unlicensed Spectrum, Fair Access, Medium Access Control, Received Signal Strength Indicator, Machine Learning Pipeline, Federal Communications Commission, Multi-armed Bandit, Physicalism
Abstract: Artificial intelligence (AI) and machine learning (ML) are nowadays mature technologies considered essential for driving the evolution of future communications systems. Simultaneously, Wi-Fi technology has constantly evolved over the past three decades and incorporated new features generation after generation, thus gaining in complexity. As such, researchers have observed that AI/ML functionalities may be required to address the upcoming Wi-Fi challenges that will be otherwise difficult to solve with traditional approaches. This article discusses the role of AI/ML in current and future Wi-Fi networks, and depicts the ways forward. A roadmap toward AI/ML-native Wi-Fi, key challenges, standardization efforts, and major enablers are also discussed. An exemplary use case is provided to showcase the potential of AI/ML in Wi-Fi at different adoption stages.
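One of the ML-for-802.11 techniques the keywords mention, the multi-armed bandit applied to channel access, is easy to sketch. The epsilon-greedy learner below picks a Wi-Fi channel by balancing exploration and exploitation; the reward model (true mean throughput plus Gaussian noise) is an illustrative assumption, not a standardized mechanism.

```python
import random

# Epsilon-greedy multi-armed bandit for channel selection. Each "arm" is a
# channel; the observed reward is assumed to be throughput with noise.

def run_bandit(true_throughput, rounds=2000, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    n = len(true_throughput)
    counts, means = [0] * n, [0.0] * n
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                       # explore a random channel
        else:
            arm = max(range(n), key=lambda a: means[a])  # exploit the best estimate
        reward = true_throughput[arm] + rng.gauss(0, 1)  # assumed reward model
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean
    return max(range(n), key=lambda a: means[a])

# Channel 2 has the highest mean throughput; the bandit should identify it.
best = run_bandit([10.0, 12.0, 20.0, 8.0])
print(best)
```

Bandits fit the early adoption stages the article discusses because they need no offline training data: a station learns online from the throughput it actually observes on each channel.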
-
Integrating Optimization Theory with Deep Learning for Wireless Network Design
Sinem Coleri, Aysun Gurur Onalan, Marco Di Renzo
Keywords: Optimization, Deep learning, Mathematical models, Training, Data models, Complexity theory, Closed box, Accuracy, Resource management, Computer architecture, Wireless networks, Deep Learning, Wireless Networks, Optimal Theory, Design Of Wireless Networks, Optimal Conditions, Building Blocks, Deep Neural Network, Black Box, Convergence Rate, Iterative Solution, Optimization Problem, Objective Function, Iterative Algorithm, Deep Approach, Decision Variables, Deep Learning Approaches, Energy Harvesting, Path Loss, Input Size, Optimal Allocation, Deep Neural Network Architecture, Optimal Resource Allocation, Digital Twin, Theory-based Approach, Validation Loss, Increase In Runtime, Massive Multiple-input Multiple-output, Explainable Artificial Intelligence, Mathematical Manipulations, Amount Of Training Data
Abstract: Traditional wireless network design relies on optimization algorithms derived from domain-specific mathematical models, which are often inefficient and unsuitable for dynamic, real-time applications due to their high complexity. Deep learning has emerged as a promising alternative for overcoming these complexity and adaptability concerns, but it faces challenges such as accuracy issues, delays, and limited interpretability due to its inherent black-box nature. This article introduces a novel approach that integrates optimization theory with deep learning methodologies to address these issues. The methodology starts by constructing the block diagram of the optimization theory-based solution, identifying key building blocks corresponding to optimality conditions and iterative solutions. Selected building blocks are then replaced with deep neural networks, enhancing the adaptability and interpretability of the system. Extensive simulations show that this hybrid approach not only reduces runtime compared to optimization theory-based approaches, but also significantly improves accuracy and convergence rates, outperforming pure deep learning models.
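The methodology is easiest to see on a textbook resource-allocation problem. The sketch below solves water-filling power allocation with the classic theory-based pipeline; the bisection search for the water level is exactly the kind of iterative building block the article proposes replacing with a deep neural network (the problem instance and parameters here are illustrative, not from the article).

```python
# Water-filling power allocation: maximize sum(log(1 + p_i * g_i))
# subject to sum(p_i) <= total_power, p_i >= 0. The bisection loop that
# finds the water level mu is the "building block" a learned model could
# replace in the article's hybrid methodology.

def water_filling(gains, total_power, iters=100):
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):              # iterative solution: bisect on mu
        mu = (lo + hi) / 2
        powers = [max(0.0, mu - 1.0 / g) for g in gains]
        if sum(powers) > total_power:   # water level too high
            hi = mu
        else:
            lo = mu
    # Optimality (KKT) condition: p_i = max(0, mu - 1/g_i) at the final mu.
    return [max(0.0, lo - 1.0 / g) for g in gains]

p = water_filling([1.0, 0.5, 2.0], total_power=3.0)
print([round(x, 3) for x in p])
```

Keeping the KKT structure explicit and learning only the water level is what preserves interpretability: the network's output still plugs into a formula whose meaning is known, rather than mapping channel gains to powers as a black box.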