The Delay-Tolerant Networking (DTN) architecture describes a type of challenged network in which communications may be significantly affected by long signal propagation delays, frequent link disruptions, or both. The unique characteristics of this environment require a unique approach to network management that supports asynchronous transport, autonomous local control, and a small footprint (in both resources and dependencies) so that it can be deployed on constrained devices.¶
This document describes a DTN management architecture (DTNMA) suitable for managing devices in any challenged environment but, in particular, those communicating using the DTN Bundle Protocol (BP). Operating over BP requires an architecture that neither presumes synchronized transport behavior nor relies on query-response mechanisms. Implementations compliant with this DTNMA should expect to successfully operate in extremely challenging conditions, such as over uni-directional links and other places where BP is the preferred transport.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 30 October 2024.¶
Copyright (c) 2024 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
This document describes a logical, informational DTN management architecture (DTNMA) suitable for operating devices in a challenged network, such as one in which devices communicate using the DTN Bundle Protocol (BPv7) [RFC9171].¶
Challenged networks have certain properties that differentiate them from other kinds of networks. These properties, outlined in Section 2.2.1 of [RFC7228], include lacking end-to-end IP connectivity, having "serious interruptions" to end-to-end connectivity, and exhibiting delays longer than can be tolerated by end-to-end synchronization mechanisms (such as TCP).¶
These challenged properties can be caused by a variety of factors such as physical constraints (e.g., long signal propagation delays and frequent link disruptions), administrative policies (e.g., quality-of-service prioritization, service-level agreements, and traffic management and scheduling), and off-nominal behaviors (e.g., active attackers and misconfigurations). Since these challenges are not solely caused by sparseness, instances of challenged networks will persist even in increasingly connected environments.¶
The Delay-Tolerant Networking (DTN) architecture, described in [RFC4838], has been designed for data exchange in challenged networks. Just as the DTN architecture requires new capabilities for transport and transport security, special consideration is needed for the operation of devices in a challenged network.¶
This document describes how challenged network properties affect the operation of devices in those networks. This description is presented as a logical architecture formed from a union of best practices for operating devices deployed in challenged environments.¶
One important practice captured in this document is the concept of self-operation. Self-operation involves operating a device without human interactivity, without system-in-the-loop synchronous function, and without a synchronous underlying transport layer. This means that devices determine their own schedules for data reporting, manage their own operational configuration, and perform their own error discovery and mitigation.¶
This document includes the information necessary to document existing practices for operating devices in a challenged network in the context of a logical architecture. A logical architecture describes the logical operation of a system by identifying components of the system (such as in a reference model), the behaviors they enable, and use cases describing how those behaviors result in the desired operation of the system.¶
Logical architectures are not functional architectures. Therefore, any functional design for achieving desired behaviors is out of scope for this document. The set of architectural principles presented here is not meant to completely specify interfaces between components.¶
The selection of any particular transport or network layer is outside of the scope of this document. The DTNMA does not require the use of any specific protocol such as IP, BP, TCP, or UDP. In particular, the DTNMA design does not presume the use of BPv7, IPv4 or IPv6.¶
Network features such as naming, addressing, routing, and communications security are out of scope of the DTNMA. It is presumed that any operational network communicating DTNMA messages would implement these services for any payloads carried by that network.¶
The interactions between and amongst the DTNMA and other management approaches are outside of the scope of this document.¶
The remainder of this document is organized into nine sections, described below.¶
This section defines terminology that either is unique to the DTNMA or is necessary for understanding the concepts defined in this specification.¶
The DTNMA provides network management services able to operate in a challenged network environment, such as envisioned by the DTN architecture. This section describes what is meant by the term "challenged network", the important properties of such a network, and observations on impacts to management approaches.¶
Constrained networks are defined as networks where "some of the characteristics pretty much taken for granted with link layers in common use in the Internet at the time of writing are not attainable" [RFC7228]. This broad definition captures a variety of potential issues relating to physical, technical, and regulatory constraints on message transmission. Constrained networks typically include nodes that regularly reboot or are otherwise turned off for long periods of time, transmit at low or asynchronous bitrates, and/or have very limited computational resources.¶
Separately, a challenged network is defined as one that "has serious trouble maintaining what an application would today expect of the end-to-end IP model" [RFC7228]. Links in such networks may be impacted by attenuation, propagation delays, mobility, occultation, and other limitations imposed by energy and mass considerations. Therefore, systems relying on such links cannot guarantee timely end-to-end data exchange.¶
By these definitions, a "challenged" network is a special type of "constrained" network, where constraints prevent timely end-to-end data exchange. As such, "all challenged networks are constrained networks ... but not all constrained networks are challenged networks ... Delay-Tolerant Networking (DTN) has been designed to cope with challenged networks" [RFC7228].¶
Solutions that work in constrained networks might not be solutions that work in challenged networks. In particular, challenged networks exhibit the following properties that impact the way in which the function of network management is considered.¶
The set of constraints that might be present in a challenged network impact both the topology of the network and the services active within that network.¶
Operational networks handle cases where nodes join and leave the network over time. These topology changes may or may not be planned, they may or may not represent errors, and they may or may not impact network services. Challenged networks differ from other networks not in the presence of topological change, but in the likelihood that impacts to topology result in impacts to network services.¶
The difference between topology impacts and service impacts can be expressed in terms of connectivity. Topological connectivity usually refers to the existence of a path between an application message source and destination. Service connectivity, alternatively, refers to the existence of a path between a node and one or more services needed to process (often just-in-time) application messaging. Examples of service connectivity include access to infrastructure services such as a Domain Name System (DNS) or a Certificate Authority (CA).¶
In networks that might be partitioned most of the time, it is less likely that a node would concurrently access both an application endpoint and one or more network service endpoints. For this reason, network services in a challenged network should be designed to allow for asynchronous operation. Accommodating this use case often involves the use of local caching, pre-placing information, and not hard-coding message information at a source that might change when a message reaches its destination.¶
Network operations and management approaches need to adapt to the topology and service impacts encountered in challenged networks. In particular, the roles and responsibilities of "managers" and "agents" need to differ from what would be expected in unchallenged networks.¶
When connectivity to a manager cannot be guaranteed, agents will need to rely on locally available information and local autonomy to react to changes at the node. When an agent uses local autonomy for self-operation, it acts as a local operator serving as a proxy for an absent remote operator.¶
Therefore, the role of a "manager" must become one of a remote operator generating configurations and other essential updates for the local operator "agents" that are co-resident on a managed device.¶
This approach creates a two-tiered management architecture. The first tier is the management of the local operator configuration using any one of a variety of standard mechanisms, models, and protocols. The second tier is the operation of the local device through the local operator.¶
The DTNMA defines the DTNMA Manager (DM) as a remote operator application and the DTNMA Agent (DA) as an agent acting as a local operator application. In this model, which is illustrated in Figure 1, the DM and DA can be viewed as applications with the DM producing new configurations and the DA receiving those configurations from an underlying management mechanism.¶
Two-Tiered Management Architecture¶
In this approach, the configurations produced by the DM are based on the DA features and associated data model. How those configurations are transported between the DM and the DA, and how results are communicated back from the DA to the DM, can be accomplished using whatever mechanism is most appropriate for the network and the device platforms. For example, a NETCONF, RESTCONF, or SNMP server on the managed device could be used to provide configurations to a DA.¶
In addition to disconnectivity, topological change can alter the associations amongst managed and managing devices. Different managing devices might be active in a network at different times or in different partitions. Managed devices might communicate with some, all, or none of these managing devices as a function of their own local configuration and policy.¶
Therefore, a network management architecture for challenged networks should:¶
The following special cases illustrate some of the operational situations that can be encountered in the management of devices in a challenged network.¶
These special cases highlight the need for managed devices to operate without presupposing a dedicated connection to a single managing device. Managing devices in a challenged network might never expect a reply to a command, and communications from managed devices may be delivered much later than the events being reported.¶
This section describes those design properties that are desirable when defining a management architecture operating across challenged links in a network. These properties ensure that network management capabilities are retained even as delays and disruptions in the network scale. Ultimately, these properties are the driving design principles for the DTNMA.¶
The DTNMA should be agnostic of the underlying physical topology, transport protocols, security solutions, and supporting infrastructure of a given network. Due to the likelihood of operating in a frequently partitioned environment, the topology of a network may change over time. Attempts to stabilize an architecture around individual nodes can result in a brittle management framework and the creation of congestion points during periods of connectivity.¶
The DTNMA should not prescribe any association between a DM and a DA other than those defined in this document. There should be no logical limitation on the number of DMs that can control a DA or the number of DMs to which a DA should report, nor any requirement that DMs and DAs exist in one-to-one pairings.¶
The DTNMA should use data models to define the syntactic and semantic contracts for data exchange between a DA and a DM. A given model should have the ability to "inherit" the contents of other models to form hierarchical data relationships.¶
Many network management solutions use data models to specify the semantic and syntactic representation of data exchanged between managed and managing devices. The DTNMA is not different in this regard - information exchanged between DAs and DMs should conform to one or more pre-defined, normative data models.¶
A common best practice when defining a data model is to make it cohesive. A cohesive model is one that includes information related to a single purpose such as managing a single application or protocol. When applying this practice, it is not uncommon to develop a large number of small data models that, together, describe the information needed to manage a device.¶
Another best practice for data model development is the use of inclusion mechanisms to allow one data model to include information from another data model. This ability to include a data model avoids repeating information in different data models. When one data model includes information from another data model, there is an implied model hierarchy.¶
Data models in the DTNMA should allow for the construction of both cohesive models and hierarchically related models. These data models should be used to define all sources of information that can be retrieved, configured, or executed in the DTNMA. This includes supporting DA autonomy functions such as parameterization, filtering, and event driven behaviors. These models will be used to both implement interoperable autonomy engines on DAs and define interoperable report parsing mechanisms on DMs.¶
DAs in the DTNMA architecture should determine when to push information to DMs as a function of their local state.¶
Pull management mechanisms require a managing device to send a query to a managed device and then wait for a response to that specific query. This practice implies some knowledge synchronization between entities (which may be as simple as knowing when a managed device might be powered). However, challenged networks cannot guarantee timely round-trip data exchange. For this reason, pull mechanisms should be avoided in the DTNMA.¶
Push mechanisms, in this context, refer to the ability of DAs to leverage local autonomy to determine when and what information should be sent to which DMs. The push is considered adaptive because a DA determines what information to push (and when) as an adaptation to changes to the DA's internal state. Once pushed, information might still be queued pending connectivity of the DA to the network.¶
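The following minimal Python sketch is illustrative only and is not part of the DTNMA specification; all names (AdaptivePushAgent, report_threshold, flush) are invented. It shows the adaptive push idea: the DA inspects local state, decides on its own whether to generate a report, and queues that report until connectivity exists.¶
import time
from collections import deque

class AdaptivePushAgent:
    def __init__(self, report_threshold):
        self.report_threshold = report_threshold
        self.outbound_queue = deque()   # reports pending connectivity

    def sample_state(self):
        # Placeholder for local monitoring of managed applications.
        return {"error_count": 7, "timestamp": time.time()}

    def evaluate(self):
        state = self.sample_state()
        # The DA, not a remote manager, decides whether a report is needed.
        if state["error_count"] > self.report_threshold:
            self.outbound_queue.append({"report": "error_summary", **state})

    def flush(self, link_is_up):
        # Queued reports are sent only when connectivity exists.
        sent = []
        while link_is_up and self.outbound_queue:
            sent.append(self.outbound_queue.popleft())
        return sent

agent = AdaptivePushAgent(report_threshold=5)
agent.evaluate()                 # decides locally to queue a report
print(agent.flush(link_is_up=True))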
Messages exchanged between a DA and a DM in the DTNMA should be defined in a way that allows for efficient on-the-wire encoding. DTNMA design decisions that result in smaller message sizes should be preferred over those that result in larger message sizes.¶
There is a relationship between message encoding and message processing time at a node. Messages with little or no encoding may simplify node processing, whereas more compact encodings may require additional processing to generate and parse encoded messages. Generally, compressing a message takes processing time at the sender and decompressing a message takes processing time at a receiver. Therefore, there is a design tradeoff between minimizing message sizes and minimizing node processing.¶
There is a significant advantage to smaller DTNMA message sizes in a challenged network. Smaller messages require smaller periods of viable transmission for communication, they incur less re-transmission cost, and they consume less resources when persistently stored en-route in the network.¶
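The size advantage of compact encodings can be seen in the following illustrative Python sketch, which compares a verbose text encoding and a fixed-width binary packing of the same three counters. The counter names and layout are notional and not a DTNMA message format.¶
import json
import struct

counters = {"rx": 1024, "tx": 2048, "err": 3}

text_msg = json.dumps(counters).encode("utf-8")
# Fixed-order, fixed-width binary packing (three unsigned 32-bit integers).
binary_msg = struct.pack("!III", counters["rx"], counters["tx"], counters["err"])

print(len(text_msg), "bytes as JSON text")   # roughly 34 bytes
print(len(binary_msg), "bytes packed")       # 12 bytes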
Data elements within the DTNMA should be uniquely identifiable so that they can be individually manipulated. Further, these identifiers should be universal - the identifier for a data element should be the same regardless of role, implementation, or network instance.¶
Identification schemes that would be relative to a specific DA or specific system configuration might change over time and should be avoided. Relying on relative identification schemes would require resynchronizing relative state when nodes in a challenged network reconnect after periods of partition. This type of resynchronization should be avoided whenever possible.¶
The DTNMA allows for the addition of new data elements to a data model as part of the runtime operation of the management system. These definitions may represent custom data definitions that are applicable only for a particular device or network. Custom definitions should also be able to be removed from the system during runtime.¶
The goal of this approach is to dynamically add or remove data elements to the local runtime schemas as needed - such as the definition of new counters, new reports, or new rules.¶
The custom definition of new data from existing data (such as through data fusion, averaging, sampling, or other mechanisms) provides the ability to communicate desired information in as compact a form as possible.¶
Custom data elements should be calculated and used both as parameters for DA autonomy and for more efficient reporting to DMs. Defining new data elements allows DAs to perform local data fusion, and defining new reporting templates allows DMs to specify desired formats, generally saving link capacity, storage, and processing time.¶
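As an illustration, the Python sketch below derives a new custom data element from existing values so that a single fused value, rather than raw samples, can be reported. The identifiers mirror the EDD/V shorthand used later in this document but are otherwise hypothetical.¶
edd_samples = {"EDD1": [10, 12, 9, 40, 11]}   # raw externally defined data

# Custom definition: V1 is the average of the EDD1 samples.
custom_definitions = {"V1": lambda s: sum(s["EDD1"]) / len(s["EDD1"])}

runtime_data = {name: fn(edd_samples) for name, fn in custom_definitions.items()}
print(runtime_data)   # {'V1': 16.4} -- one fused value reported instead of five samples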
The management of applications by a DA should be achievable using only knowledge local to the DA because DAs might need to operate during times when they are disconnected from a DM.¶
DA autonomy may be used for simple automation of predefined tasks or to support semi-autonomous behavior in determining when to run tasks and how to configure or parameterize tasks when they are run.¶
Important features provided by the DA are listed below. These features work together to accomplish tasks. As such, there is commonality amongst their definitions and nature of their benefits.¶
To understand the contributions of these features to a common behavior, consider the example of a managed device coming online with a pre-installed configuration. In this case, the device's stand-alone operation comes from the pre-configuration of its local autonomy engine. This engine-based behavior allows the system to behave in a deterministic way, and any new configurations will need to be authorized before being adopted.¶
Features such as deterministic processing and engine-based behavior are separate from (but do not preclude the use of) other Artificial Intelligence (AI) and Machine Learning (ML) approaches for device management.¶
Several remote management solutions have been developed for both local-area and wide-area networks. Their capabilities range from simple configuration and report generation to complex modeling of device settings, state, and behavior. Each of these approaches is successful in the domain for which it was built, but they are not all equally functional when deployed in a challenged network.¶
This section describes some of the well-known protocols for remote management and contrasts their purposes with the desirable properties of the DTNMA. The purpose of this comparison is to identify parts of existing approaches that can be adopted or adapted for use in challenged networks and where new capabilities should be created specifically for this environment.¶
An early and widely used example of a remote management protocol is the Simple Network Management Protocol (SNMP) currently at Version 3 [RFC3410]. The SNMP utilizes a request/response model to get and set data values within an arbitrarily deep object hierarchy. Objects are used to identify data such as host identifiers, link utilization metrics, error rates, and counters between application software on managing and managed devices [RFC3411]. Additionally, SNMP supports a model for unidirectional push messages, called event notifications, based on agent-defined triggering events.¶
SNMP relies on logical sessions with predictable round-trip latency to support its "pull" mechanism but a single activity is likely to require many round-trip exchanges. Complex management can be achieved, but only through careful orchestration of real-time, end-to-end, managing-device-generated query-and-response logic.¶
There is existing work that uses the SNMP data model to support some low-fidelity Agent-side processing, to include the Distributed Management Expression MIB [RFC2982] and Definitions of Managed Objects for the Delegation of Management Scripts [RFC3165]. However, Agent autonomy is not an SNMP mechanism, so support for a local agent response to an initiating event is limited. In a challenged network where the delay between a managing device receiving an alert and sending a response can be significant, SNMP is insufficient for autonomous event handling.¶
SNMP separates the representations for managed data models from manager-agent messaging, sequencing, and encoding. Each data model is termed a Management Information Base (MIB) [RFC3418] and uses the Structure of Management Information (SMI) modeling language [RFC2578]. Additionally, the SMI itself is based on the ASN.1 syntax [ASN.1], which is used not just for SMI but also for other, unrelated data structure specifications such as the Cryptographic Message Syntax (CMS) [RFC5652]. Separating data models from messaging and encoding is a best practice in remote management protocols and is also necessary for the DTNMA.¶
Each SNMP MIB is composed of managed object definitions each of which is associated with a hierarchical Object Identifier (OID). Because of the arbitrarily deep nature of MIB object trees, the size of OIDs is not strictly bounded by the protocol (though may be bounded by implementations).¶
The SNMP protocol operations, which are at version 2 [RFC3416], can operate over a variety of transports, including plaintext UDP/IP [RFC3417], SSH/TCP/IP [RFC5592], and DTLS/UDP/IP or TLS/TCP/IP [RFC6353].¶
SNMP uses an abstracted security model to provide authentication, integrity, and confidentiality. Options include the User-based Security Model (USM) [RFC3414], which uses in-message security, and the Transport Security Model (TSM) [RFC5591], which relies on the transport to provide security functions and interfaces.¶
Several network management protocols, including NETCONF [RFC6241], RESTCONF [RFC8040], and CORECONF [I-D.ietf-core-comi], share the same XML information set [xml-infoset] for their hierarchical managed information and use [XPath] expressions to identify nodes of that information model. Since they share the same information model and the same data manipulation operations, together they will be referred to as "*CONF" protocols. Each protocol, however, provides a different encoding of that information set and its related operation-specific data.¶
The YANG modeling language of [RFC7950] is used to define the data model for these management protocols. Currently, YANG represents the IETF standard for defining managed information models.¶
The YANG modeling language defines a syntax and modular semantics for organizing and accessing a device's configuration or operational information. YANG allows subdividing a full managed configuration into separate namespaces defined by separate YANG modules. Once a module is developed, it is used (directly or indirectly) on both the client and server to serve as a contract between the two. A YANG module can be complex, describing a deeply nested and inter-related set of data nodes, actions, and notifications.¶
Unlike the approach described in Section 5.1.1, where the ASN.1 syntax and module semantics are separated from the higher-level SMI data model semantics, YANG defines a text syntax and module semantics together with data model semantics.¶
The YANG language gives the model developer flexibility in the organization of model objects. YANG supports the broad range of data types noted in [RFC6991], the definition of parameterized Remote Procedure Calls (RPCs) and actions to be executed on managed devices, and the definition of event notifications within the model.¶
The use of YANG for data modeling necessarily comes with some side-effects, some of which are described here.¶
Data nodes, RPCs, and notifications within a YANG model are named by a namespace-qualified, text-based path of the module, sub-module, container, and any data nodes such as lists, leaf-lists, or leaves, without any explicit hierarchical organization based on data or object type.¶
Existing efforts to make compressed names for YANG objects, such as the YANG Schema Item iDentifiers (SIDs) from Section 3.2 of [RFC9254], allow a node to be named by a globally unique integer value but are still relatively verbose (up to 8 bytes per item) and still must be translated into text form for things like instance identification (see below). Additionally, when representing a tree of named instances, the child elements can use differential encoding of SID integer values as "delta" integers. The mechanisms for assigning SIDs and the lifecycle of those SIDs are still in development [I-D.ietf-core-sid].¶
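The following short Python sketch illustrates the "delta" idea mentioned above: child items are expressed relative to their parent's SID so that small integers appear on the wire instead of full SID values. The SID numbers are invented for illustration only.¶
parent_sid = 60000
child_sids = [60001, 60002, 60005]

deltas = [sid - parent_sid for sid in child_sids]
print(deltas)                                   # [1, 2, 5]
print([parent_sid + d for d in deltas])         # recovers the child SIDs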
Because the original use of YANG with NETCONF was to model XML information sets, the values and built-in types are necessarily text based. The JSON encoding of YANG data [RFC7951] allows for optimized representations of many built-in types, and similarly the CBOR encoding [RFC9254] allows for different optimized representations.¶
In particular, the YANG built-in types natively support a fixed range of decimal fractions (Section 9.3 of [RFC7950]) but purposefully do not support floating point numbers. There are alternatives, such as the type bandwidth-ieee-float32 from [RFC8294] or using the "binary" type with one of the IEEE-754 encodings.¶
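As a non-normative illustration of the second alternative, the Python sketch below packs a floating point value as a big-endian IEEE-754 binary32 and base64-encodes it, which is how a "binary" leaf appears in the XML and JSON encodings of YANG data. The value and leaf handling are assumptions for illustration.¶
import base64
import struct

value = 12.5
packed = struct.pack("!f", value)               # 4-byte IEEE-754 binary32
leaf_text = base64.b64encode(packed).decode()   # "QUgAAA=="

recovered = struct.unpack("!f", base64.b64decode(leaf_text))[0]
print(leaf_text, recovered)                     # round-trips to 12.5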
A significant amount of existing YANG tooling or modeling presumes the use of YANG data within a management protocol with specific operations available. For example, the access control model of [RFC8341] relies on those operations specific to the *CONF protocols for proper behavior.¶
The emergence of multiple NETCONF-derived protocols may make these presumptions less problematic in the future. Work to more consistently identify different types of YANG modules and their use has been undertaken to disambiguate how YANG modules should be treated [RFC8199].¶
The YANG modeling language continues to evolve as new features are needed by adopting management protocols.¶
NETCONF is a stateful, XML-encoding-based protocol that provides a syntax to retrieve, edit, copy, or delete any data nodes or exposed functionality on a server. It requires that underlying transport protocols support long-lived, reliable, low-latency, sequenced data delivery sessions. A bi-directional NETCONF session needs to be established before any data transfer (or notification) can occur.¶
The XML exchanged within NETCONF messages is structured according to YANG modules supported by the NETCONF agent, and the data nodes reside within one of possibly many datastores in accordance with the Network Management Datastore Architecture (NMDA) of [RFC8342].¶
NETCONF transports are required to provide authentication, data integrity, confidentiality, and replay protection. Currently, NETCONF can operate over SSH/TCP/IP [RFC6242] or TLS/TCP/IP [RFC7589].¶
RESTCONF is a stateless, JSON-encoding-based protocol that provides the same operations as NETCONF, using the same YANG modules for structure and same NMDA datastores, but using RESTful exchanges over HTTP. It uses HTTP-native methods to express its allowed operations: GET, POST, PUT, PATCH, or DELETE data nodes within a datastore.¶
Although RESTCONF is a logically stateless protocol, it does rely on state within its transport protocol to achieve behaviors such as authentication and security sessions. Because RESTCONF uses the same data node semantics as NETCONF, a typical activity can involve several sequential round-trip exchanges to first discover managed device state and then act upon it.¶
CORECONF is an emerging stateless protocol built atop the Constrained Application Protocol (CoAP) [RFC7252], a messaging protocol developed to operate specifically on constrained devices and networks by limiting message size and fragmentation. CoAP also implements a request/response model with GET, POST, PUT, and DELETE methods.¶
Another emerging but not-IETF-affiliated management protocol is the gRPC Network Management Interface (gNMI) [gNMI] which is based on gRPC messaging and uses Protobuf data modeling.¶
The same limitations of RESTCONF listed above apply to gNMI because of its reliance on synchronous HTTP exchanges and TLS security for normal operations, as well as the likely deep nesting of data schemas. There is a capability for gNMI to transport JSON-encoded YANG-modeled data, but this composition is not fully standardized and relies on specific tool integrations to operate.¶
The data managed and exchanged via gNMI is encoded and modeled using Google Protobuf, an encoding and modeling syntax not affiliated with the IETF (although an attempt has been made and abandoned [I-D.rfernando-protocol-buffers]).¶
Because the Protobuf modeling syntax is relatively low-level (comparable to ASN.1 or CBOR), there are efforts within the OpenConfig work [gNMI] to translate YANG modules into Protobuf schemas (similar to the translation to XML or JSON schemas for NETCONF and RESTCONF, respectively), but there is no required interoperability between management via gRPC and any of the *CONF protocols.¶
The message encoding and exchange for gNMI, as the name implies, is the gRPC protocol [gRPC]. gRPC exclusively uses HTTP/2 [RFC9113] for transport and relies on some aspects specific to HTTP/2 for its operations (such as HTTP trailer fields). While not mandated by gRPC itself, TLS is required for transport security when gRPC is used to transport gNMI data.¶
A lower-level remote management protocol, intended to be used to manage hardware devices and network appliances below the operating system (OS), is the Intelligent Platform Management Interface (IPMI) standardized in [IPMI]. The IPMI is focused on health monitoring, event logging, firmware management, and serial-over-LAN (SOL) remote console access in a "pre-OS or OS-absent" host environment. The IPMI operates over a companion Remote Management Control Protocol (RMCP) for messaging, which itself can use UDP for transport.¶
Because the IPMI and RMCP are tailored to low-level and well-connected devices within a datacenter, with typical workflows requiring many messaging round trips or low-latency interactive sessions, they are not suitable for operation over a challenged network.¶
The future of network operations requires more autonomous behavior including self-configuration, self-management, self-healing, and self-optimization. One approach to support this is termed Autonomic Networking [RFC7575].¶
There is a large and growing set of work within the IETF focused on developing an Autonomic Networking Integrated Model and Approach (ANIMA). The ANIMA work has developed a comprehensive reference model for distributing autonomic functions across multiple nodes in an autonomic networking infrastructure [RFC8993].¶
This work, focused on learning the behavior of distributed systems to predict future events, is an emerging network management capability. This includes the development of signalling protocols such as GRASP [RFC8990] and the autonomic control plane (ACP) [RFC8368].¶
Both autonomic and challenged networks require similar degrees of autonomy. However, challenged networks cannot provide the complex coordination between nodes and distributed supporting infrastructure necessary for the frequent data exchanges for negotiation, learning, and bootstrapping associated with the above capabilities.¶
There is some emerging work in ANIMA as to how disconnected devices might join and leave the autonomic control plane over time. However, this work is addressing a different problem than that encountered by challenged networks.¶
Outside of the terrestrial networking community, there are existing and established remote management systems used for deep space mission operations. Examples of two of these are for the New Horizons mission to Pluto [NEW-HORIZONS] and the DART mission to the asteroid Dimorphos [DART].¶
The DTNMA has some heritage in the concepts of deep space autonomy, but each of those mission instantiations use mission-specific data encoding, messaging, and transport as well as mission-specific (or heavily mission-tailored) modeling concepts and languages. Part of the goal of the DTNMA is to take the proven concepts from these missions and standardize a messaging syntax as well as a modular data modeling method.¶
Management mechanisms that provide the complete set of DTNMA desirable properties do not currently exist. This is not surprising since autonomous management in the context of a challenged networking environment is a new and emerging use case.¶
In particular, a management architecture is needed that integrates the following motivating features.¶
Combining these new features with existing mechanisms for message data exchange (such as BP), data representations (such as CBOR) and data modeling languages (such as YANG) will form a pragmatic approach to defining challenged network management.¶
This section describes a reference model for reasoning about network management concepts for challenged networks (generally) and those conforming to the DTN architecture (in particular). The goal of this section is to describe how DTNMA services provide DTNMA desirable properties.¶
Similar to other network management architectures, the DTNMA draws a logical distinction between a managed device and a managing device. Managed devices use a DA to manage resident applications. Managing devices use a DM to both monitor and control DAs.¶
The DTNMA differs from some other management architectures in three significant ways, all related to the need for a device to self-manage when disconnected from a managing device.¶
A DTNMA reference model is provided in Figure 2 below. In this reference model, applications and services on a managing device communicate with a DM which uses pre-shared definitions to create a set of policy directives that can be sent to a managed device's DA via a command-based interface. The DA provides local monitoring and control (commanding) of the applications and services resident on the managed device. The DA also performs local data fusion as necessary to synthesize data products (such as reports) that can be sent back to the DM when appropriate.¶
DTNMA Reference Model¶
This model preserves the familiar concept of "managers" resident on managing devices and "agents" resident on managed devices. However, the DTNMA model is unique in how the DM and DA operate. The DM is used to pre-configure DAs in the network with management policies. It is expected that the DAs, themselves, perform monitoring and control functions on their own. In this way, a properly configured DA may operate without a reliable connection back to a DM.¶
The reference model illustrated in Figure 2 implies the existence of certain logical components whose roles and responsibilities are discussed in this section.¶
By definition, managed applications and services reside on a managed device. These software entities can be controlled through some interface by the DA and their state can be sampled as part of periodic monitoring. It is presumed that the DA on the managed device has the proper data model, control interface, and permissions to alter the configuration and behavior of these software applications.¶
A DA resides on a managed device. As is the case with other network management approaches, this agent is responsible for the monitoring and control of the applications local to that device. Unlike other network management approaches, the agent accomplishes this task without a regular connection to a DTNMA Manager.¶
The DA performs three major functions on a managed device: the monitoring and control of local applications, production of data analytics, and the administrative control of the agent itself.¶
DAs monitor the status of applications running on their managed device and selectively control those applications as a function of that monitoring. The following components are used to perform monitoring and control on an agent.¶
DAs generate new data elements as a function of the current state of the managed device and its applications. These new data products may take the form of individual data values, or new collections of data used for reporting. The logical components responsible for these behaviors are as follows.¶
DAs perform a variety of administrative services in support of their configuration, such as the following.¶
The DTNMA allows for a many-to-many relationship amongst DTNMA Agents and Managers. A single DM may configure multiple DAs, and a single DA may be configured by multiple DMs. Multiple managers may exist in a network for at least two reasons. First, different managers may exist to control different applications on a device. Second, multiple managers increase the likelihood of an agent encountering a manager when operating in a sparse or challenged environment.¶
While multiple managers are necessary for operating in a dynamically partitioned network, their presence allows for the possibility of conflicting information from different managers. Implementations of the DTNMA should consider conflict resolution mechanisms. Such mechanisms might include analyzing managed content, time, agent location, or other relevant information to select one manager input over other manager inputs.¶
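One possible (non-normative) conflict-resolution policy is sketched below in Python: the DA keeps the directive from the highest-priority manager, breaking ties by most recent timestamp. The priority table, manager names, and field names are assumptions, not part of the DTNMA.¶
manager_priority = {"DM-A": 2, "DM-B": 1}   # locally configured policy

directives = [
    {"from": "DM-A", "target": "report_interval", "value": 10, "ts": 100},
    {"from": "DM-B", "target": "report_interval", "value": 60, "ts": 250},
]

def resolve(directives):
    chosen = {}
    # Sort ascending by (priority, timestamp); later entries overwrite
    # earlier ones, so higher priority (and, within a priority, newer) wins.
    for d in sorted(directives, key=lambda d: (manager_priority[d["from"]], d["ts"])):
        chosen[d["target"]] = d
    return chosen

print(resolve(directives)["report_interval"]["value"])   # 10, taken from DM-A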
Managing applications and services reside on a managing device and serve as both the source of DA policy statements and the target of DA reporting. They may operate with or without an operator in the loop.¶
Unlike management applications in unchallenged networks, these applications cannot exert closed-loop control over any managed device application. Instead, they exercise open-loop control by producing policies that can be configured and enforced on managed devices by DAs.¶
A DM resides on a managing device. This manager provides an interface between various managing applications and services and the DAs that enforce their policies. In providing this interface, DMs translate between whatever native interface exists to various managing applications and the autonomy models used to encode management policy.¶
The DM performs three major functions on a managing device: policy encoding, reporting, and administration.¶
DMs translate policy directives from managing applications and services into standardized policy expressions that can be recognized by DAs. The following logical components are used to perform this policy encoding.¶
DMs receive reports on the status of managed devices during periods of connectivity with the DAs on those devices. The following logical components are needed to implement reporting capabilities on a DM.¶
Managers in the DTNMA perform a variety of administrative services, such as the following.¶
A consequence of operating in a challenged environment is the potential inability to negotiate information in real-time. For this reason, the DTNMA requires that managed and managing devices operate using pre-shared definitions rather than relying on data definition negotiation.¶
The three types of pre-shared definitions in the DTNMA are the DA autonomy model, managed application data models, and any runtime data shared by managers and agents.¶
A DTNMA autonomy model represents the data elements and associated autonomy structures that define the behavior of the agent autonomy engine. A standardized autonomy model allows for individual implementations of DAs, and DMs to interoperate. A standardized model also provides guidance to the design and implementation of both managed and managing applications.¶
This section describes the services provided by DTNMA components on both managing and managed devices. Many of the services discussed in this section attempt to provide continuous operation of a managed device through periods of no connectivity with a managing device.¶
DTNMA monitoring is associated with some DA autonomy engine. The term monitoring implies regular access to information such that state changes may be acted upon within some response time period.¶
Predicate autonomy on a managed device should collect state associated with the device at regular intervals and evaluate that collected state for any changes that require a preventative or corrective action. Similarly, this monitoring may cause the device to generate one or more reports destined to a managing device.¶
Similar to monitoring, DTNMA control results in actions by the agent to change the state or behavior of the managed device. All control in the DTNMA is local control. In cases where there exists a timely connection to a manager, received Controls are still evaluated and run locally as part of local autonomy. In this case, the autonomy stimulus is the receipt of the Control and the response is to immediately run the Control. In this way, there is never a dependency on a session or other stateful exchange with any remote entity.¶
DTNMA Fusion services produce new data products from existing state on the managed device. These fusion products can be anything from simple summations of sampled counters to complex calculations of behavior over time.¶
Fusion is an important service in the DTNMA because fusion products are part of the overall state of a managed device. Complete knowledge of this overall state is important for the management of the device and the predicates of rules on a DA may refer to fused data.¶
In-situ data fusion is an important function as it allows for the construction of intermediate summary data, the reduction of stored and transmitted raw data, possibly fewer predicates in rule definitions, and otherwise insulates the data source from conclusions drawn from that data.¶
The DTNMA requires fusion to occur on the managed device itself. If the network is partitioned such that no connection to a managing device is available, then fusion needs to happen locally. Similarly, connections to a managing device might not remain active long enough for round-trip data exchange or may not have the bandwidth to send all sampled data.¶
DTNMA configuration services update the local configuration of a managed device with the intent to impact the behavior and capabilities of that device.¶
The DTNMA configuration service is unique in that the selection of managed device configurations occurs as a function of the state of the device. This implies that management proxies on the device store multiple configuration functions that can be applied as needed without consultation from a managing device.¶
When detecting stimuli, the agent autonomy engine supports a mechanism for evaluating whether application monitoring data or runtime data values are recent enough to indicate a change of state. In cases where data has not been updated recently, it may be considered stale and not used to reliably indicate that some stimulus has occurred.¶
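The staleness check described above might look like the following Python sketch, which uses a locally configured maximum age to decide whether a sampled value may be treated as a state change. The field names and threshold are notional.¶
import time

MAX_AGE_SECONDS = 30.0

def is_fresh(sample, now=None):
    now = time.time() if now is None else now
    return (now - sample["updated_at"]) <= MAX_AGE_SECONDS

sample = {"name": "link_error_rate", "value": 0.02, "updated_at": time.time() - 90}
if is_fresh(sample):
    print("evaluate stimulus against", sample["value"])
else:
    print("value is stale; do not treat it as a state change")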
DTNMA reporting services collect information known to the managed device and prepare it for eventual transmission to one or more managing devices. The contents of these reports, and the frequency at which they are generated, occurs as a function of the state of the managed device, independent of the managing device.¶
Once generated, it is expected that reports might be queued pending a connection back to a managing device. Therefore, reports need to be differentiable as a function of the time they were generated.¶
When reports are sent to a managing device over a challenged network, they may arrive out of order due to taking different paths through the network or being delayed due to retransmissions. A managing device should not infer meaning from the order in which reports are received.¶
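A DM-side sketch of this behavior follows: received reports are ordered by their generation timestamp rather than by arrival, and no meaning is inferred from arrival order. The field names are illustrative only.¶
received = [
    {"agent": "DA-B", "generated_at": 300, "body": "newest, but arrived first"},
    {"agent": "DA-B", "generated_at": 100, "body": "oldest"},
    {"agent": "DA-B", "generated_at": 200, "body": "middle"},
]

# Reconstruct the timeline from generation times, not arrival order.
timeline = sorted(received, key=lambda r: r["generated_at"])
for report in timeline:
    print(report["generated_at"], report["body"])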
Reports may or may not be associated with a specific Control. Some reports may be annotated with the Control that caused the report to be generated. Sometimes, a single report will represent the end state of applying multiple Controls.¶
Both local and remote services provided by the DTNMA affect the behavior of multiple applications on a managed device and may interface with multiple managing devices.¶
Authorization services enforce the potentially complex mapping of other DTNMA services amongst managed and managing devices in the network. For example, fine-grained access control can determine which managing devices receive which reports, and what Controls can be used to alter which managed applications.¶
This is particularly beneficial in networks that either deal with multiple administrative entities or overlay networks that cross administrative boundaries. Allowlists, blocklists, key-based infrastructures, or other schemes may be used for this purpose.¶
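As an illustration of allowlist-based authorization, the Python sketch below delivers a report only to managers named in its associated ACL. The ACL contents and manager names are invented; only the ACL# shorthand mirrors the notation used later in this document.¶
acls = {"ACL1": {"DM-A"}, "ACL2": {"DM-A", "DM-B"}}

def may_receive(manager_id, report):
    # A manager may receive the report only if it appears in the report's ACL.
    return manager_id in acls.get(report["acl"], set())

report = {"name": "RPT(V1)", "acl": "ACL1"}
print(may_receive("DM-A", report))   # True
print(may_receive("DM-B", report))   # False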
An important characteristic of the DTNMA is the shift in the role of a managing device. One way to describe the behavior of the agent autonomy engine is to describe the characteristics of the autonomy model it implements.¶
This section describes a logical autonomy model in terms of the abstract data elements that would comprise the model. Defining abstract data elements allows for an unambiguous discussion of the behavior of an autonomy model without mandating a particular design, encoding, or transport associated with that model.¶
A managing autonomy capability on a potentially disconnected device needs to behave in both an expressive and deterministic way. Expressivity allows for the model to be configured for a wide range of future situations. Determinism allows for the forensic reconstruction of device behavior as part of debugging or recovery efforts. It also is necessary to ensure predictable behavior.¶
The DTNMA autonomy model is a rule-based model in which individual rules associate a pre-identified stimulus with a pre-configured response to that stimulus.¶
Stimuli are identified using one or more predicate logic expressions that examine aspects of the state of the managed device. Responses are implemented by running one or more procedures on the managed device.¶
In its simplest form, a stimulus is a single predicate expression of a condition that examines some aspect of the state of the managed device. When the condition is met, a predetermined response is applied. This behavior can be captured using the construct:¶
IF <condition 1> THEN <response 1>;¶
In more complex forms, a stimulus may include both a common condition shared by multiple rules and a specific condition for each individual rule. If the common condition is not met, the evaluation of the specific condition of each rule sharing the common condition can be skipped. In this way, the total number of predicate evaluations can be reduced. This behavior can be captured using the construct:¶
IF <common condition> THEN
   IF <specific condition 1> THEN <response 1>
   IF <specific condition 2> THEN <response 2>
   IF <specific condition 3> THEN <response 3>¶
DTNMA Autonomy Model¶
The flow of data into and out of the agent autonomy engine is illustrated in Figure 3. In this model, the autonomy engine stores the combination of stimulus conditions and associated responses as a set of "rules" in a rules database. This database is updated through the execution of the autonomy engine and as configured from policy statements received by managers.¶
Stimuli are detected by examining the state of applications as reported through application monitoring interfaces and through any locally-derived data. Local data is calculated in accordance with definitions also provided by managers as part of the runtime data store.¶
Responses to stimuli may include updates to the rules database, updates to the runtime data store, Controls sent to applications, and the generation of reports.¶
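The Python sketch below is a minimal, non-normative illustration of the rule constructs above: rules sharing a common condition are grouped so that their specific conditions are evaluated only when the common condition holds. Conditions and responses are plain callables here; a real DA would evaluate modeled predicates and run Controls.¶
def evaluate_rule_group(state, common_condition, rules):
    if not common_condition(state):
        return []                      # skip all specific conditions
    fired = []
    for specific_condition, response in rules:
        if specific_condition(state):
            fired.append(response(state))
    return fired

state = {"V1": 9}
results = evaluate_rule_group(
    state,
    common_condition=lambda s: s["V1"] > 5,
    rules=[
        (lambda s: s["V1"] > 8, lambda s: "response 1"),
        (lambda s: s["V1"] > 20, lambda s: "response 2"),
    ],
)
print(results)   # ['response 1']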
There are several practical challenges to the implementation of a distributed rule-based system. Large numbers of rules may be difficult to understand, deconflict, and debug. Rules whose conditions are given by fused or other dynamic data may require data logging and reporting for deterministic offline analysis. Rule differences across managed devices may lead to oscillating effects. This section identifies those characteristics of an autonomy model that might help implementations mitigate some of these challenges.¶
There are a number of ways to represent data values, and many data modeling languages exist for this purpose. When considering how to model data in the context of the DTNMA autonomy model there are some modeling features that should be present to enable functionality. There are also some modeling features that should be prevented to avoid ambiguity.¶
Traditional network management approaches favor flexibility in their data models. The DTNMA stresses deterministic behavior that supports forensic analysis of agent activities "after the fact". As such, the following statements should be true of all data representations relating to DTNMA autonomy.¶
The expressive representation of simple data values is fundamental to the successful construction and evaluation of predicates in the DTNMA autonomy model. When defining such values, there are useful distinctions regarding how values are identified and whether values are generated internal or external to the autonomy model.¶
A DTNMA data value should combine a base type (e.g., integer, real, string) representation with relevant semantic information. Base types are used for proper storage and encoding. Semantic information allows for additional typing, constraint definitions, and mnemonic naming. This expanded definition of data value allows for better predicate construction and evaluation and early type checking.¶
Data values may further be annotated based on whether their value is the result of a DA calculation or the result of some external process on the managed device. For example, operators may wish to know which values can be updated by actions on the DA versus which values (such as sensor readings) cannot be reliably changed because they are calculated external to the DA.¶
The DTNMA autonomy model should, as required, report on the state of its managed device (to include the state of the model itself). This reporting should be done as a function of the changing state of the managed device, independent of the connection to any managing device. Queuing reports allows for later forensic analysis of device behavior, which is a desirable property of DTNMA management.¶
DTNMA data reporting consists of the production of some data report instance conforming to a data report schema. The use of schemas allows a report instance to identify the schema to which it conforms instead of carrying the structure in the report itself. This approach can significantly reduce the size of generated reports.¶
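A sketch of schema-referenced reporting follows: the report instance carries only a schema identifier and the values, in schema order, rather than repeating field names in every report. The schema name, contents, and field names are hypothetical.¶
report_schemas = {
    "RPT_SCHEMA_1": ["EDD1", "EDD2", "V1"],   # pre-shared with the DM
}

# Compact instance produced by the DA.
instance = {"schema": "RPT_SCHEMA_1", "values": [42, 7, 16.4], "generated_at": 100}

# DM side: re-associate values with names using the pre-shared schema.
names = report_schemas[instance["schema"]]
print(dict(zip(names, instance["values"])))   # {'EDD1': 42, 'EDD2': 7, 'V1': 16.4}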
The agent autonomy engine requires that managed devices issue commands on themselves as if they were otherwise being controlled by a managing device. The DTNMA implements commanding through the use of Controls and macros.¶
Controls represent parameterized, predefined procedures run by the DA either as directed by the DM or as part of a rule response from the DA autonomy engine. Macros represent ordered sequences of Controls.¶
Controls are conceptually similar to RPCs in that they represent parameterized functions run on the managed device. However, they are conceptually dissimilar from RPCs in that they do not have a concept of a return code because they operate over an asynchronous transport. The concept of return code in an RPC implies a synchronous relationship between the caller of the procedure and the procedure being called, which might not be possible within the DTNMA.¶
The success or failure of a Control may be handled locally by the agent autonomy engine. Local error handling is particularly important in this architecture given the potential for long periods of disconnectivity between a DA and a DM. The failure of one or more Controls is part of the state of the DA and can be used to trigger rules within the DA autonomy engine.¶
The impact of a Control is externally observable via the generation and eventual examination of data reports produced by the managed device.¶
The failure of certain Controls might leave a managed device in an undesired state. Therefore, it is important that there be consideration for Control-specific recovery mechanisms (such as a rollback or safing mechanism). When a Control that is part of a macro (such as in an autonomy response) fails, there may be a need to implement a safe state for the managed device based on the nature of the failure.¶
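The Python sketch below illustrates, under assumed names, how a Control might be run locally with no return code sent to any DM: failure is recorded as DA state, which can itself trigger rules such as a rollback or safing macro. The Control name and recovery action are invented for illustration.¶
def run_control(name, procedure, da_state):
    try:
        procedure()
        da_state["last_control_status"] = (name, "ok")
    except Exception:
        # Failure becomes local DA state, not a reply to a manager.
        da_state["last_control_status"] = (name, "failed")
        da_state.setdefault("failed_controls", []).append(name)

def failing_procedure():
    raise RuntimeError("configuration could not be applied")

da_state = {}
run_control("apply_new_config", failing_procedure, da_state)

# A rule elsewhere in the autonomy engine can key off this state.
if da_state["last_control_status"][1] == "failed":
    print("trigger recovery macro: restore previous configuration")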
As discussed in Section 9.1, the DTNMA rule-based stimulus-response system associates stimulus detection with a predetermined response. Rules may be categorized based on whether their stimuli include generic statements of managed device state or whether they are optimized to only consider the passage of time on the device.¶
State-based rules are those whose stimulus is based on the evaluated state of the managed device. Time-based rules are a unique subset of state-based rules whose stimulus is given only by a time-based event. Implementations might create different structures and evaluation mechanisms for these two different types of rules to achieve more efficient processing on a platform.¶
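As an illustration of the two categories, the sketch below represents a state-based rule as a predicate over device state and a time-based rule as a periodic timer. The structures and field names are notional, not an implementation requirement.¶
import time

state_rule = {
    "condition": lambda state: state["EDD1"] > 10,
    "response": lambda: print("EDD1 exceeded threshold"),
}

time_rule = {
    "period_s": 1.0,
    "next_fire": time.time(),
    "response": lambda: print("produce RPT(EDD1)"),
}

def tick(state, now):
    # State-based rule: evaluate the predicate against current state.
    if state_rule["condition"](state):
        state_rule["response"]()
    # Time-based rule: fire purely on the passage of time.
    if now >= time_rule["next_fire"]:
        time_rule["response"]()
        time_rule["next_fire"] = now + time_rule["period_s"]

tick({"EDD1": 12}, time.time())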
Using the autonomy model defined in Section 9, this section describes flows through sample configurations conforming to the DTNMA. These use cases illustrate remote configuration, local monitoring and control, multiple manager support, and data fusion.¶
The use cases presented in this section are documented with a shorthand notation to describe the types of data sent between managers and agents. This notation, outlined in Table 1, leverages the definitions of autonomy model components defined in Section 9.¶
Term | Definition | Example |
---|---|---|
EDD# | Externally Defined Data - a data value defined external to the DA. | EDD1, EDD2 |
V# | Variable - a data value defined internal to the DA. | V1 = EDD1 + 7 |
EXPR | Predicate expression - used to define a rule stimulus. | V1 > 5 |
ID | DTNMA Object Identifier. | V1, EDD2 |
ACL# | Enumerated Access Control List. | ACL1 |
DEF(ACL,ID,EXPR) | Define ID from expression. Allow managers in ACL to see this ID. | DEF(ACL1, V1, EDD1 + EDD2) |
PROD(P,ID) | Produce ID according to predicate P. P may be a time period (1s) or an expression (EDD1 > 10). | PROD(1s, EDD1) |
RPT(ID) | A report instance containing data named ID. | RPT(EDD1) |
These notations do not imply any implementation approach. They only provide a succinct syntax for expressing the data flows in the use case diagrams in the remainder of this section.¶
This nominal configuration shows a single DM interacting with multiple DAs. The control flows for this scenario are outlined in Figure 4.¶
Serialized Management Control Flow¶
In a serialized management scenario, a single DM interacts with multiple DAs.¶
In this figure, DTNMA Manager A sends a policy to DTNMA Agents A and B to report the value of an EDD (EDD1) every second (step 1). Each DA receives this policy and configures its autonomy engine for this production. Thereafter (step 2), each DA produces reports containing data element EDD1 and sends those reports back to the DM.¶
This behavior continues without any additional communications from the DM.¶
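Expressed in the Table 1 notation, this flow might be summarized as follows (an illustrative rendering only; no wire format is implied).¶

    # Illustrative summary of the serialized flow in the Table 1 notation.
    # Each entry is (sender, receiver, message).
    step1 = [("DM A", "DA A", "PROD(1s, EDD1)"),
             ("DM A", "DA B", "PROD(1s, EDD1)")]
    step2 = [("DA A", "DM A", "RPT(EDD1)"),   # repeated every second,
             ("DA B", "DM A", "RPT(EDD1)")]   # with no further DM messages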
Building from the nominal configuration in Section 10.2, this scenario shows a challenged network in which connectivity between DTNMA Agent B and the DM is temporarily lost. Control flows for this case are outlined in Figure 5.¶
Figure 5: Challenged Management Control Flow¶
In a challenged network, DAs store reports pending a transmit opportunity.¶
In this figure, DTNMA Manager A sends a policy to DTNMA Agents A and B to produce an EDD (EDD1) every second (step 1). Each DA receives this policy and configures its autonomy engine for this production. Produced reports are transmitted when there is connectivity between the DA and the DM (step 2).¶
At some point, DTNMA Agent B loses the ability to transmit in the network (steps 3 and 4). During this time period, DA B continues to produce reports, but they are queued for transmission. This queuing might be done by the DA itself or by a supporting transport such as BP. Eventually (and before the next scheduled production of EDD1), DTNMA Agent B is able to transmit in the network again (step 5) and all queued reports are sent at that time. DTNMA Agent A maintains connectivity with the DM during steps 3-5, and continues to send reports as they are generated.¶
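One way a DA could queue reports during an outage is sketched below; the names are hypothetical, and, as noted above, the queuing could equally be delegated to a supporting transport such as BP.¶

    # Hypothetical sketch of report queuing on a DA during loss of
    # connectivity; queuing could instead be delegated to the transport.
    from collections import deque

    class ReportChannel:
        def __init__(self, send_fn):
            self.send_fn = send_fn           # transmits one report to the DM
            self.connected = True
            self.pending = deque()           # reports awaiting a contact

        def emit(self, report):
            """Produce a report: send now if possible, otherwise queue it."""
            if self.connected:
                self.send_fn(report)
            else:
                self.pending.append(report)

        def contact_lost(self):
            """Connectivity to the DM is lost."""
            self.connected = False

        def contact_restored(self):
            """On a new transmit opportunity, flush all queued reports."""
            self.connected = True
            while self.pending:
                self.send_fn(self.pending.popleft())

In the flow above, reports produced by DA B during steps 3 and 4 would be queued by emit(), and the transmit opportunity at step 5 would flush them via contact_restored().¶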
This scenario illustrates the DTNMA open-loop control paradigm, where DAs manage themselves in accordance with policies provided by DMs, and provide reports to DMs based on these policies.¶
The control flow shown in Figure 6 includes an example of data fusion, where multiple policies configured by a DM result in a single report from a DA.¶
Figure 6: Consolidated Management Control Flow¶
A many-to-one mapping between management policy and device state reporting is supported by the DTNMA.¶
In this figure, DTNMA Manager A sends a policy statement in the form of a rule to DTNMA Agents A and B, which instructs the DAs to produce a report with EDD1 every second (step 1). Each DA receives this policy, stores it in its Rule Database, and configures its autonomy engine. Reports are transmitted by each DA when produced (step 2).¶
At a later time, DTNMA Manager A sends an additional policy to DTNMA Agent B, requesting the production of a report for EDD2 every second (step 3). This policy is added to DTNMA Agent B's Rule Database.¶
Following this policy update, DTNMA Agent A will continue to produce EDD1 and DTNMA Agent B will produce both EDD1 and EDD2 (step 4). However, DTNMA Agent B may provide these values to the DM in a single report rather than as two independent reports. In this way, there is no direct mapping between the single consolidated report sent by DTNMA Agent B (step 4) and the two different policies sent to DTNMA Agent B that caused that report to be generated (steps 1 and 3).¶
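A minimal sketch of this consolidation, assuming a DA that batches all productions due at the same instant into a single report, is shown below; the helper names are illustrative only.¶

    # Illustrative sketch: productions that fire at the same time are
    # consolidated into a single report rather than one report per policy.
    def consolidate(due_productions, sample_fn):
        """Build one report from every production due at this instant.

        due_productions: object IDs whose production predicates are met
                         (e.g., ["EDD1", "EDD2"] on DTNMA Agent B).
        sample_fn:       returns the current value of an object ID.
        """
        return {obj_id: sample_fn(obj_id) for obj_id in due_productions}

    # Example: the two policies from steps 1 and 3 yield one report.
    report = consolidate(["EDD1", "EDD2"], lambda obj_id: 0)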
The managed applications on a DA may be controlled by different administrative entities in a network. The DTNMA allows DAs to communicate with multiple DMs in the network, such as in cases where there is one DM per administrative domain.¶
Whenever a DM sends a policy expression to a DA, that policy expression may be associated with authorization information. One method of representing this is an ACL.¶
The ability of one DM to access the results of policy expressions configured by another DM is limited by the authorization annotations of those policy expressions.¶
An example of multi-manager authorization is illustrated in Figure 7.¶
Figure 7: Multiplexed Management Control Flow¶
Multiple DMs may interface with a single DA, particularly in complex networks.¶
In this figure, both DTNMA Managers A and B send policies to DTNMA Agent A (step 1). DM A defines a variable (V1) whose value is given by the mathematical expression (EDD1 * 2) and is associated with an ACL (ACL1) that restricts access to V1 to DM A only. Similarly, DM B defines a variable (V2) whose value is given by the mathematical expression (EDD2 * 2) and is associated with an ACL (ACL2) that restricts access to V2 to DM B only.¶
Both DTNMA Managers A and B also send policies to DTNMA Agent A to report on the values of their variables at 1-second intervals (step 2). Since DM A can access V1 and DM B can access V2, there is no authorization issue with these policies, and they are both accepted by the autonomy engine on Agent A. Agent A produces reports as expected, sending them to their respective managers (step 3).¶
Later (step 4), DM B attempts to configure DA A to also report the value of V1 to it. Since DM B does not have authorization to view this variable, DA A does not include this in the configuration of its autonomy engine; instead, an indication of the permission error is included in regular reporting back to DM B.¶
DM A also sends a policy to Agent A (step 5) that defines a variable (V3) whose value is given by the mathematical expression (EDD3 * 3) and is not associated with an ACL, indicating that any DM can access V3. In this instance, both DM A and DM B can then send policies to DA A to report the value of V3 (step 6). Since there is no authorization restriction on V3, these policies are accepted by the autonomy engine on Agent A, and reports are sent to both DM A and DM B over time (step 7).¶
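The authorization behavior in steps 4 through 7 can be sketched as a simple ACL check at the DA. The structures below are illustrative only and do not define an ACL representation.¶

    # Illustrative ACL check at a DA.  An absent ACL means any DM may
    # access the object (as with V3); otherwise only listed DMs may.
    definitions = {
        "V1": {"acl": {"DM A"}, "expr": "EDD1 * 2"},
        "V2": {"acl": {"DM B"}, "expr": "EDD2 * 2"},
        "V3": {"acl": None,     "expr": "EDD3 * 3"},   # no ACL: unrestricted
    }

    def authorize(requesting_dm, obj_id):
        acl = definitions[obj_id]["acl"]
        return acl is None or requesting_dm in acl

    assert authorize("DM B", "V1") is False                      # step 4: refused
    assert authorize("DM A", "V3") and authorize("DM B", "V3")   # step 6: accepted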
There are times when a single network device serves both as a DM for other DAs in the network and as a device managed by another DM. This may be the case for nodes serving as gateways or proxies. The DTNMA accommodates this case by allowing a single device to run both a DA and a DM.¶
An example of this configuration is illustrated in Figure 8.¶
Figure 8: Cascading Management Control Flow¶
A device can operate as both a DTNMA Manager and an Agent.¶
In this example, we presume that DA B is able to sample a given EDD (EDD1) and that DA C is able to sample a different EDD (EDD2). Node B houses DM B (which controls DA C) and DA B (which is controlled by DM A). DM A must periodically receive some new value that is calculated as a function of both EDD1 and EDD2.¶
First, DM A sends a policy to DA B to define a variable (V0) whose value is given by the mathematical expression (EDD1 + EDD2) without a restricting ACL. Further, DM A sends a policy to DA B to report on the value of V0 every second (step 1).¶
DA B needs the ability to monitor both EDD1 and EDD2. However, the only way to receive EDD2 values is to have them reported back to Node B by DA C and included in the Node B runtime data stores. Therefore, DM B sends a policy to DA C to report on the value of EDD2 (step 2).¶
DA C receives the policy in its autonomy engine and produces reports on the value of EDD2 every second (step 3).¶
DA B locally samples EDD1 and EDD2 (the latter populated from DA C's reports held in the Node B runtime data store), uses these values to compute V0, and reports on those values at regular intervals to DM A (step 4).¶
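The fusion performed at Node B in step 4 might be sketched as follows; the helper names and values are hypothetical.¶

    # Illustrative fusion at Node B: EDD1 is sampled locally by DA B,
    # while EDD2 values arrive in reports from DA C and are placed in
    # Node B's runtime data store.  Helper names are hypothetical.
    runtime_store = {}                 # populated from DA C's RPT(EDD2)

    def on_report_from_da_c(report):
        runtime_store["EDD2"] = report["EDD2"]

    def sample_edd1():
        return 42                      # stand-in for a local sensor reading

    def produce_v0():
        """Compute V0 = EDD1 + EDD2 for the 1-second report to DM A."""
        return sample_edd1() + runtime_store["EDD2"]

    on_report_from_da_c({"EDD2": 8})
    assert produce_v0() == 50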
While this is a trivial example, the mechanism of associating fusion with the Agent function rather than the Manager function scales with fusion complexity. Within the DTNMA, DAs and DMs are not required to be separate software implementations; a single software application running on Node B may implement both the DM B and DA B roles.¶
This document requires no IANA actions.¶
Security within a DTNMA exists in at least two layers: security in the data model and security in the messaging and encoding of the data model.¶
Data model security refers to the validity and accessibility of data elements. For example, a data element might be available to certain DAs or DMs in a system, whereas the same data element may be hidden from other DAs or DMs. Both verification and authorization mechanisms at DAs and DMs are important to achieve this type of security.¶
The exchange of information between and amongst DAs and DMs in the DTNMA is expected to be accomplished through some secured messaging transport.¶
Brian Sipos of the Johns Hopkins University Applied Physics Laboratory (JHU/APL) provided excellent technical review of the DTNMA concepts presented in this document and additional information related to existing network management techniques.¶