Network Working Group                                            Y. Snir
Request for Comments: 3644                                    Y. Ramberg
Category: Standards Track                                  Cisco Systems
                                                            J. Strassner
                                                              Intelliden
                                                                R. Cohen
                                                               Ntear LLC
                                                                B. Moore
                                                                     IBM
                                                           November 2003

          Policy Quality of Service (QoS) Information Model

Status of this Memo

   This document specifies an Internet standards track protocol for the
   Internet community, and requests discussion and suggestions for
   improvements.  Please refer to the current edition of the "Internet
   Official Protocol Standards" (STD 1) for the standardization state
   and status of this protocol.  Distribution of this memo is unlimited.

Copyright Notice

   Copyright (C) The Internet Society (2003).  All Rights Reserved.

Abstract
This document presents an object-oriented information model for representing Quality of Service (QoS) network management policies. This document is based on the IETF Policy Core Information Model and its extensions. It defines an information model for QoS enforcement for differentiated and integrated services using policy. It is important to note that this document defines an information model, which by definition is independent of any particular data storage mechanism and access protocol.
Table of Contents
   1. Introduction
      1.1. The Process of QoS Policy Definition
      1.2. Design Goals and Their Ramifications
           1.2.1. Policy-Definition Oriented
                  1.2.1.1. Rule-based Modeling
                  1.2.1.2. Organize Information Hierarchically
                  1.2.1.3. Goal-Oriented Policy Definition
           1.2.2. Policy Domain Model
                  1.2.2.1. Model QoS Policy in a Device- and
                           Vendor-Independent Manner
                  1.2.2.2. Use Roles for Mapping Policy to Network
                           Devices
                  1.2.2.3. Reusability
           1.2.3. Enforceable Policy
           1.2.4. QPIM Covers Both Signaled And Provisioned QoS
           1.2.5. Interoperability for PDPs and Management Applications
      1.3. Modeling Abstract QoS Policies
      1.4. Rule Hierarchy
           1.4.1. Use of Hierarchy Within Bandwidth Allocation Policies
           1.4.2. Use of Rule Hierarchy to Describe Drop Threshold
                  Policies
           1.4.3. Restrictions of the Use of Hierarchy Within QPIM
      1.5. Intended Audiences
   2. Class Hierarchies
      2.1. Inheritance Hierarchy
      2.2. Relationship Hierarchy
   3. QoS Actions
      3.1. Overview
      3.2. RSVP Policy Actions
           3.2.1. Example: Controlling COPS Stateless Decision
           3.2.2. Example: Controlling the COPS Replace Decision
      3.3. Provisioning Policy Actions
           3.3.1. Admission Actions: Controlling Policers and Shapers
           3.3.2. Controlling Markers
           3.3.3. Controlling Edge Policies - Examples
      3.4. Per-Hop Behavior Actions
           3.4.1. Controlling Bandwidth and Delay
           3.4.2. Congestion Control Actions
           3.4.3. Using Hierarchical Policies: Examples for PHB Actions
   4. Traffic Profiles
      4.1. Provisioning Traffic Profiles
      4.2. RSVP Traffic Profiles
   5. Pre-Defined QoS-Related Variables
   6. QoS Related Values
   7. Class Definitions: Association Hierarchy
      7.1. The Association "QoSPolicyTrfcProfInAdmissionAction"
           7.1.1. The Reference "Antecedent"
           7.1.2. The Reference "Dependent"
      7.2. The Association "PolicyConformAction"
           7.2.1. The Reference "Antecedent"
           7.2.2. The Reference "Dependent"
      7.3. The Association "QoSPolicyExceedAction"
           7.3.1. The Reference "Antecedent"
           7.3.2. The Reference "Dependent"
      7.4. The Association "PolicyViolateAction"
           7.4.1. The Reference "Antecedent"
           7.4.2. The Reference "Dependent"
      7.5. The Aggregation
           "QoSPolicyRSVPVariableInRSVPSimplePolicyAction"
           7.5.1. The Reference "GroupComponent"
           7.5.2. The Reference "PartComponent"
   8. Class Definitions: Inheritance Hierarchy
      8.1. The Class QoSPolicyDiscardAction
      8.2. The Class QoSPolicyAdmissionAction
           8.2.1. The Property qpAdmissionScope
      8.3. The Class QoSPolicyPoliceAction
      8.4. The Class QoSPolicyShapeAction
      8.5. The Class QoSPolicyRSVPAdmissionAction
           8.5.1. The Property qpRSVPWarnOnly
           8.5.2. The Property qpRSVPMaxSessions
      8.6. The Class QoSPolicyPHBAction
           8.6.1. The Property qpMaxPacketSize
      8.7. The Class QoSPolicyBandwidthAction
           8.7.1. The Property qpForwardingPriority
           8.7.2. The Property qpBandwidthUnits
           8.7.3. The Property qpMinBandwidth
           8.7.4. The Property qpMaxBandwidth
           8.7.5. The Property qpMaxDelay
           8.7.6. The Property qpMaxJitter
           8.7.7. The Property qpFairness
      8.8. The Class QoSPolicyCongestionControlAction
           8.8.1. The Property qpQueueSizeUnits
           8.8.2. The Property qpQueueSize
           8.8.3. The Property qpDropMethod
           8.8.4. The Property qpDropThresholdUnits
           8.8.5. The Property qpDropMinThresholdValue
           8.8.6. The Property qpDropMaxThresholdValue
      8.9. The Class QoSPolicyTrfcProf
      8.10. The Class QoSPolicyTokenBucketTrfcProf
            8.10.1. The Property qpTBRate
            8.10.2. The Property qpTBNormalBurst
            8.10.3. The Property qpTBExcessBurst
      8.11. The Class QoSPolicyIntServTrfcProf
            8.11.1. The Property qpISTokenRate
            8.11.2. The Property qpISPeakRate
            8.11.3. The Property qpISBucketSize
            8.11.4. The Property qpISResvRate
            8.11.5. The Property qpISResvSlack
            8.11.6. The Property qpISMinPolicedUnit
            8.11.7. The Property qpISMaxPktSize
      8.12. The Class QoSPolicyAttributeValue
            8.12.1. The Property qpAttributeName
            8.12.2. The Property qpAttributeValueList
      8.13. The Class QoSPolicyRSVPVariable
      8.14. The Class QoSPolicyRSVPSourceIPv4Variable
      8.15. The Class QoSPolicyRSVPDestinationIPv4Variable
      8.16. The Class QoSPolicyRSVPSourceIPv6Variable
      8.17. The Class QoSPolicyRSVPDestinationIPv6Variable
      8.18. The Class QoSPolicyRSVPSourcePortVariable
      8.19. The Class QoSPolicyRSVPDestinationPortVariable
      8.20. The Class QoSPolicyRSVPIPProtocolVariable
      8.21. The Class QoSPolicyRSVPIPVersionVariable
      8.22. The Class QoSPolicyRSVPDCLASSVariable
      8.23. The Class QoSPolicyRSVPStyleVariable
      8.24. The Class QoSPolicyRSVPIntServVariable
      8.25. The Class QoSPolicyRSVPMessageTypeVariable
      8.26. The Class QoSPolicyRSVPPreemptionPriorityVariable
      8.27. The Class QoSPolicyRSVPPreemptionDefPriorityVariable
      8.28. The Class QoSPolicyRSVPUserVariable
      8.29. The Class QoSPolicyRSVPApplicationVariable
      8.30. The Class QoSPolicyRSVPAuthMethodVariable
      8.31. The Class QosPolicyDNValue
            8.31.1. The Property qpDNList
      8.32. The Class QoSPolicyRSVPSimpleAction
            8.32.1. The Property qpRSVPActionType
   9. Intellectual Property Rights Statement
   10. Acknowledgements
   11. Security Considerations
   12. References
       12.1. Normative References
       12.2. Informative References
   13. Authors' Addresses
   14. Full Copyright Statement
1. Introduction
The QoS Policy Information Model (QPIM) establishes a standard framework and constructs for specifying and representing policies that administer, manage, and control access to network QoS resources. Such policies will be referred to as "QoS policies" in this document. The framework consists of a set of classes and relationships that are organized in an object-oriented information model. It is agnostic of any specific Policy Decision Point (PDP) or Policy Enforcement Point (PEP) (see [TERMS] for definitions) implementation, and independent of any particular QoS implementation mechanism.

QPIM is designed to represent QoS policy information for large-scale policy domains (the term "policy domain" is defined in [TERMS]). A primary goal of this information model is to assist human administrators in their definition of policies to control QoS resources (as opposed to individual network element configuration). The process of creating QPIM data instances is fed by business rules, network topology and QoS methodology (e.g., Differentiated Services).

This document is based on the IETF Policy Core Information Model and its extensions as specified by [PCIM] and [PCIMe]. QPIM builds upon these two documents to define an information model for QoS enforcement for differentiated and integrated services ([DIFFSERV] and [INTSERV], respectively) using policy.

It is important to note that this document defines an information model, which by definition is independent of any particular data storage mechanism and access protocol. This enables various data models (e.g., directory schemata, relational database schemata, and SNMP MIBs) to be designed and implemented according to a single uniform model.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14, RFC 2119 [KEYWORDS].

1.1. The Process of QoS Policy Definition
This section describes the process of using QPIM for the definition of QoS policy for a policy domain. Figure 1 illustrates the information flow, not the actual procedure, which involves several loops and feedback paths that are not depicted.
     ----------        ----------       -----------
    | Business |      | Topology |     |    QoS    |
    |  Policy  |      |          |     |Methodology|
     ----------        ----------       -----------
         |                  |                |
         |                  |                |
          ----------------------------------
                            |
                            V
                    ---------------
                   | QPIM/PCIM(e)  |
                   |   modeling    |
                    ---------------
                            |
                            |       --------------
                            |<-----| Device info, |
                            |      | capabilities |
                            |       --------------
                            V
                    (---------------)
                   (      device     )---)
                  (   configuration   )   )---)
                   (---------------)   )     )
                     (--------------)     )
                       (-------------)

         Figure 1: The QoS definition information flow

The process of QoS policy definition is dependent on three types of information: the topology of the network devices under management, the particular type of QoS methodology used (e.g., DiffServ) and the business rules and requirements for specifying service(s) [TERMS] delivered by the network. Both topology and business rules are outside the scope of QPIM. However, important facets of both must be known and understood for correctly specifying the QoS policy.

Typically, the process of QoS policy definition relies on a methodology based on one or more QoS methodologies. For example, the DiffServ methodology may be employed in the QoS policy definition process.

The topology of the network consists of an inventory of the network elements that make up the network and the set of paths that traffic may take through the network. For example, a network administrator may decide to use the DiffServ architectural model [DIFFSERV] and classify network devices using the roles "boundary" and "core" (see [TERMS] for a definition of role, and [PCIM] for an explanation of
how they are used in the policy framework). While this is not a complete topological view of the network, it often suffices for the purpose of QoS policy definition.

Business rules are informal sets of requirements for specifying the behavior of various types of traffic that may traverse the network. For example, the administrator may be instructed to implement policy such that VoIP traffic manifests behavior that is similar to legacy voice traffic over telephone networks. Note that this business rule (indirectly) prescribes specific behavior for this traffic type (VoIP), for example in terms of minimal delay, jitter and loss. Other traffic types, such as WEB buying transactions, system backup traffic, video streaming, etc., will express their traffic conditioning requirements in different terms. Again, this information is required not by QPIM itself, but by the overall policy management system that uses QPIM.

QPIM is used to help map the business rules into a form that defines the requirements for conditioning different types of traffic in the network. The topology, QoS methodology, and business rules are necessary prerequisites for defining traffic conditioning. QPIM provides a set of tools for specifying traffic conditioning policy in a standard manner.

A standard QoS policy information model such as QPIM is also needed because different devices can have markedly different capabilities. Even the same model of equipment can have different functionality if the network operating system and software running on those devices differ. Therefore, a means is required to specify functionality in a standard way that is independent of the capabilities of different vendors' devices. This is the role of QPIM.

In a typical scenario, the administrator would first determine the role(s) that each interface of each network element plays in the overall network topology.
These roles define the functions supplied by a given network element independent of vendor and device type. The [PCIM] and [PCIMe] documents define the concept of a role. Roles can be used to identify what parts of the network need which type of traffic conditioning. For example, network interface cards that are categorized as "core" interfaces can be assigned the role name "core-interface". This enables the administrator to design policies to configure all interfaces having the role "core-interface" independent of the actual physical devices themselves. QPIM uses roles to help the administrator map a given set of devices or interfaces to a given set of policy constructs.
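The role-based mapping just described can be sketched as follows. This is an illustrative sketch only: the role names, interface identifiers, and data structures are hypothetical, and QPIM itself defines an information model, not an API.

```python
# Sketch of role-based policy mapping (hypothetical names; QPIM and PCIM
# define the model, not this code).

# Interfaces are tagged with role names during topology analysis.
interfaces = {
    "router1/eth0": ["core-interface"],
    "router2/eth0": ["core-interface"],
    "router3/eth1": ["boundary-interface"],
}

# Policy groups carry role selectors (cf. the PolicyRoles property of PCIM).
policies = [
    {"name": "core-phb-policy", "roles": ["core-interface"]},
    {"name": "edge-marking-policy", "roles": ["boundary-interface"]},
]

def policies_for(interface):
    """Return the names of policies whose roles match the interface's roles."""
    tags = set(interfaces[interface])
    return [p["name"] for p in policies if tags & set(p["roles"])]

print(policies_for("router1/eth0"))   # ['core-phb-policy']
print(policies_for("router3/eth1"))   # ['edge-marking-policy']
```

Note that adding a fourth interface with the role "core-interface" changes nothing in the policy definition; the new interface simply matches the existing rule, which is the scalability property the text describes.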
The policy constructs define the functionality required to perform the desired traffic conditioning for particular traffic type(s). The functions themselves depend on the particular type of networking technologies chosen. For example, the DiffServ methodology encourages us to aggregate similar types of traffic by assigning to each traffic class a particular per-hop forwarding behavior on each node. RSVP enables bandwidth to be reserved. These two methodologies can be used separately or in conjunction, as defined by the appropriate business policy. QPIM provides specific classes to enable DiffServ and RSVP conditioning to be modeled.

The QPIM class definitions are used to create instances of various policy constructs such as QoS actions and conditions that may be hierarchically organized in rules and groups (PolicyGroup and PolicyRule as defined in [PCIM] and [PCIMe]). Examples of policy actions are rate limiting, jitter control and bandwidth allocation. Policy conditions are constructs that can select traffic according to a complex Boolean expression.

A hierarchical organization was chosen for two reasons. First, it best reflects the way humans tend to think about complex policy. Second, it enables policy to be easily mapped onto administrative organizations, as the hierarchical organization of policy mirrors most administrative organizations.

It is important to note that the policy definition process described here is done independently of any specific device capabilities and configuration options. The policy definition is completely independent of the details of the implementation and the configuration interface of individual network elements, as well as of the mechanisms that a network element can use to condition traffic.

1.2. Design Goals and Their Ramifications
This section explains the QPIM design goals and how these goals are addressed in this document. It also describes the ramifications of those goals and the design decisions made in developing QPIM.

1.2.1. Policy-Definition Oriented
The primary design goal of QPIM is to model policies controlling QoS behavior in a way that as closely as possible reflects the way humans tend to think about policy. Therefore, QPIM is designed to address the needs of policy definition and management, and not device/network configuration.
There are several ramifications of this design goal. First, QPIM uses rules to define policies, based on [PCIM] and [PCIMe]. Second, QPIM uses hierarchical organizations of policies and policy information extensively. Third, QPIM does not force the policy writer to specify all implementation details; rather, it assumes that configuration agents (PDPs) interpret the policies and match them to suit the needs of device-specific configurations.

1.2.1.1. Rule-based Modeling
Policy is best described using rule-based modeling, as explained in [PCIM] and [PCIMe]. A QoS policy rule is structured as a condition clause and an action clause. The semantics are simple: if the condition clause evaluates to TRUE, then a set of QoS actions (specified in the action clause) can be executed. For example, the rule "WEB traffic should receive at least 50% of the available bandwidth resources, or more when more is available" can be formalized as:

   <If protocol == HTTP> then <minimum BW = 50%>

where the first angle-bracketed clause is a traffic condition and the second angle-bracketed clause is a QoS action. This approach differs from data path modeling, which describes the mechanisms that operate on packet flows to achieve the desired effect.

Note that the approach taken in QPIM specifically did NOT subclass the PolicyRule class. Rather, it uses the SimplePolicyCondition, CompoundPolicyCondition, SimplePolicyAction, and CompoundPolicyAction classes defined in [PCIMe], as well as defining subclasses of the following classes: Policy, PolicyAction, SimplePolicyAction, PolicyImplicitVariable, and PolicyValue. Subclassing the PolicyRule class would have made it more difficult to combine actions and conditions defined within different functional domains [PCIMe] within the same rules.

1.2.1.2. Organize Information Hierarchically
The organization of the information represented by QPIM is designed to be hierarchical. To do this, QPIM utilizes the PolicySetComponent aggregation [PCIMe] to provide an arbitrarily nested organization of policy information. A policy group functions as a container of
policy rules and/or policy groups. A policy rule can also contain policy rules and/or groups, enabling a rule/sub-rule relationship to be realized.

The hierarchical design decision is based on the realization that it is natural for humans to organize policy rules in groups. Breaking down a complex policy into a set of simple rules follows the way people tend to think about and analyze systems. The complexity of the abstract, business-oriented policy is reduced to a hierarchy of simple rules and groupings of simple rules. The hierarchical information organization helps to simplify the definition and readability of data instances based on QPIM.

Hierarchies can also carry additional semantics for QoS actions in a given context. An example, detailed in Section 1.4.1, demonstrates how hierarchical bandwidth allocation policies can be specified in an intuitive form, without the need to specify complex scheduler structures.

1.2.1.3. Goal-Oriented Policy Definition
QPIM facilitates goal-oriented QoS policy definition. This means that the process of defining QoS policy is focused on the desired effect of policies, as opposed to the means of implementing the policy on network elements.

QPIM is intended to define a minimal specification of desired network behavior. It is the role of device-specific configuration agents to interpret policy expressed in a standard way and fill in the necessary configuration details that are required for their particular application. The benefit of using QPIM is that it provides a common lingua franca that each of the device- and/or vendor-specific configuration agents can use. This helps ensure a common interpretation of the general policy, as well as aiding the administrator in specifying a common policy to be implemented across different devices. This is analogous to the fundamental object-oriented paradigm of separating specification from implementation. Using QPIM, traffic conditioning can be specified in a general manner that can help different implementations satisfy a common goal.

For example, a valid policy may include only a single rule that specifies that bandwidth should be reserved for a given set of traffic flows. The rule does not need to include any of the various other details that may be needed for implementing a scheduler that supports this bandwidth allocation (e.g., the queue length required). It is assumed that a PDP or the PEPs would fill in these details using (for example) their default queue length settings. The policy
writer need only specify the main goal of the policy, making sure that the preferred application receives enough bandwidth to operate adequately.

1.2.2. Policy Domain Model
An important design goal of QPIM is to provide a means for defining policies that span numerous devices. This goal differentiates QPIM from device-level information models, which are designed for modeling policy that controls a single device, its mechanisms and capabilities.

This design goal has several ramifications. First, roles [PCIM] are used to define policies across multiple devices. Second, the use of abstract policies frees the policy definition process from having to deal with individual device peculiarities, and leaves interpretation and configuration to be modeled by PDPs or other configuration agents. Third, QPIM allows extensive reuse of all policy building blocks in multiple rules used within different devices.

1.2.2.1. Model QoS Policy in a Device- and Vendor-Independent Manner
QPIM models QoS policy in a way designed to be independent of any particular device or vendor. This enables networks made up of different devices that have different capabilities to be managed and controlled using a single standard set of policies. Using such a single set of policies is important because otherwise the policy itself would reflect the differences between device implementations.

1.2.2.2. Use Roles for Mapping Policy to Network Devices
The use of roles enables a policy definition to be targeted to the network function of a network element, rather than to the element's type and capabilities. The use of roles for mapping policy to network elements provides an efficient and simple method for compact and abstract policy definition. A given abstract policy may be mapped to a group of network elements without the need to specify configuration for each of those elements based on the capabilities of any one individual element. The policy definition is designed to allow aggregating multiple devices within the same role, if desired. For example, if two core network interfaces operate at different rates, one does not have to define two separate policy rules to express the very same abstract policy (e.g., allocating 30% of the interface bandwidth to a given
preferred set of flows). The use of hierarchical context and relative QoS actions in QPIM addresses this and other related problems.1.2.2.3. Reusability
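The advantage of relative actions can be sketched as follows: a single abstract rule ("give the preferred flows 30% of the interface bandwidth") yields a different absolute allocation on each interface. The interface names and rates are hypothetical, and the resolution step shown here is one possible behavior of a configuration agent, not something QPIM prescribes.

```python
# One abstract, relative QoS action applied to two interfaces of
# different speeds (hypothetical values; illustrative sketch only).

SHARE_PERCENT = 30  # the relative action from the single policy rule

interface_rates_bps = {
    "core-if-A": 1_000_000_000,   # 1 Gb/s interface
    "core-if-B": 10_000_000_000,  # 10 Gb/s interface
}

def concrete_allocation(interface):
    # A PDP or configuration agent resolves the relative action
    # against each interface's actual line rate.
    return interface_rates_bps[interface] * SHARE_PERCENT // 100

for name in interface_rates_bps:
    print(name, concrete_allocation(name), "bps")
```

Both interfaces carry the same role and the same abstract rule; only the resolved configuration differs (300 Mb/s versus 3 Gb/s).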
Reusable objects, as defined by [PCIM] and [PCIMe], are the means for sharing policy building blocks, thus allowing central management of global concepts. QPIM provides the ability to reuse all policy building blocks: variables and values, conditions and actions, traffic profiles, and policy groups and policy rules. This provides the flexibility required to manage large sets of policy rules over large policy domains.

For example, the following rule makes use of centrally defined objects that are reused (referenced):

   If <DestinationAddress == FinanceSubNet> then <DSCP = MissionCritical>

In this rule, the condition refers to an object named FinanceSubNet, which is a value (or possibly a set of values) defined and maintained in a reusable-objects container. The QoS action makes use of a value named MissionCritical, which is also a reusable object.

The advantage of specifying a policy in this way is its inherent flexibility. Given the above policy, whenever business needs require a change in the subnet definition for the organization, all that is required is to change the reusable value FinanceSubNet centrally. All referencing rules are immediately affected, without the need to modify them individually. Without this capability, the repository that is used to store the rules would have to be searched for all rules that refer to the finance subnet, and then each matching rule's condition would have to be updated individually. This is not only less efficient, but also more prone to error.

For a complete description of reusable objects, refer to [PCIM] and [PCIMe].

1.2.3. Enforceable Policy
Policy defined by QPIM should be enforceable. This means that a PDP can use QPIM's policy definition in order to make the necessary decisions and enforce the required policy rules. For example, RSVP admission decisions should be made based on the policy definitions specified by QPIM. A PDP should be able to map QPIM policy definitions into PEP configurations, using either standard or proprietary protocols.
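An RSVP admission decision of the kind described above can be sketched as follows. The limits, attribute names, and decision logic here are hypothetical; QPIM models such limits abstractly (through admission actions and traffic profiles), and a real PDP would derive them from QPIM data instances rather than from a hard-coded dictionary.

```python
# Sketch of a PDP making an RSVP admission decision against policy
# limits (hypothetical attributes; not an actual QPIM interface).

policy_limits = {
    "max_sessions": 100,        # maximum concurrent reservations allowed
    "max_rate_bps": 5_000_000,  # per-reservation rate ceiling
}

def admit(active_sessions, requested_rate_bps):
    """Return True if the reservation request is policy-compliant."""
    if active_sessions >= policy_limits["max_sessions"]:
        return False  # session-count limit reached
    if requested_rate_bps > policy_limits["max_rate_bps"]:
        return False  # requested rate exceeds the policy ceiling
    return True

print(admit(99, 1_000_000))   # True: within both limits
print(admit(99, 8_000_000))   # False: exceeds the rate limit
```

The point of the sketch is that the PDP decides from the policy definition alone; how the accepted reservation is then installed on a PEP (via COPS or a proprietary protocol) is a separate, device-specific step.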
QPIM is designed to be agnostic of any particular vendor-dependent technology. However, QPIM's constructs SHOULD always be interpreted so that policy-compliant behavior can be enforced on the network under management. Therefore, there are three fundamental requirements that QPIM must satisfy:

   1. Policy specified by QPIM must be able to be mapped to actual
      network elements.

   2. Policy specified by QPIM must be able to control QoS network
      functions without making reference to a specific type of device
      or vendor.

   3. Policy specified by QPIM must be able to be translated into
      network element configuration.

QPIM satisfies requirements #1 and #2 above by using the concept of roles (specifically, the PolicyRoles property, defined in [PCIM]). By matching roles assigned to policy groups and to network elements, a PDP (or other enforcement agent) can determine what policy should be applied to a given device or devices.

The use of roles in mapping policy to network elements supports model scalability. QPIM policy can be mapped to large-scale policy domains by assigning a single role to a group of network elements. This can be done even when the policy domain contains heterogeneous devices. Thus, a small set of policies can be deployed to large networks without having to re-specify the policy for each device separately. This property of QPIM is important for QoS policy management applications that strive to ease the task of policy definition for large policy domains.

Requirement #2 is also satisfied by making QPIM domain-oriented (see [TERMS] for a definition of "domain"). In other words, the target of the policy is a domain, as opposed to a specific device or interface.

Requirement #3 is satisfied by modeling QoS conditions and actions that are commonly configured on various devices. However, QPIM is extensible to allow modeling of actions that are not included in QPIM.
It is important to note that different PEPs will have different capabilities and functions, which necessitate different individual configurations even if the different PEPs are controlled by the same policy.
1.2.4. QPIM Covers Both Signaled And Provisioned QoS
The two predominant standards-based QoS methodologies developed so far are Differentiated Services (DiffServ) and Integrated Services (IntServ). DiffServ provides a way to enforce policies that apply to a large number of devices in a scalable manner. QPIM provides actions and conditions that control the classification, policing and shaping done within the differentiated services domain boundaries, as well as actions that control the per-hop behavior within the core of the DiffServ network. QPIM does not mandate the use of DiffServ as a policy methodology.

Integrated Services, together with its signaling protocol (RSVP), provides a way for end nodes (and edge nodes) to request QoS from the network. QPIM provides actions that control the reservation of such requests within the network.

As both methodologies continue to evolve, QPIM does not attempt to provide full coverage of all possible scenarios. Instead, QPIM aims to provide policy control modeling for all major scenarios. QPIM is designed to be extensible to allow for incorporation of control over newly developed QoS mechanisms.

1.2.5. Interoperability for PDPs and Management Applications
Another design goal of QPIM is to facilitate interoperability among policy systems such as PDPs and policy management applications. QPIM accomplishes this interoperability goal by standardizing the representation of policy. Producers and consumers of QoS policy need only rely on QPIM-based schemata (and resulting data models) to ensure mutual understanding and agreement on the semantics of QoS policy.

For example, suppose that a QoS policy management application built by vendor A writes its policies based on the LDAP schema that maps QPIM to a directory implementation using LDAP. Now assume that a separately built PDP from vendor B also relies on this same LDAP schema derived from QPIM. Even though these are two different products from two vendors, each may read the schema of the other and "understand" it. This is because both the management application and the PDP were architected to comply with the QPIM specification.

The same is true of two policy management applications. For example, vendor B's policy application may run a validation tool that computes whether there are conflicts within rules specified by the other vendor's policy management application.
Interoperability of QPIM producers/consumers is by definition at a high level, and does not guarantee that the same policy will result in the same PEP configuration. First, different PEPs will have different capabilities and functions, which necessitate different individual configurations even if the different PEPs are controlled by the same policy. Second, different PDPs will also have different capabilities and functions, and may choose to translate the high-level QPIM policy differently depending on the functionality of the PDP, as well as on the capabilities of the PEPs that are being controlled by the PDP. However, the different configurations should still result in the same network behavior as that specified by the policy rules.

1.3. Modeling Abstract QoS Policies
This section discusses QoS policy abstraction and the way QPIM addresses it. As described above, the main goal of QPIM is to create an information model that can be used to help bridge part of the conceptual gap between a human policy maker and a network element that is configured to enforce the policy. Clearly this wide gap implies several translation levels, from the abstract to the concrete.

At the abstract end are the business QoS policy rules. When a human business executive defines network policy, it is usually done using informal business terms and language. For example, a human may utter a policy statement that reads:

   "human resources applications should have better QoS than simple
   web applications"

This might be translated to a slightly more sophisticated form, such as:

   "traffic generated by our human resources applications should have
   a higher probability of communicating with its destinations than
   traffic generated by people browsing the WEB using non-mission-
   critical applications"

While this statement clearly defines QoS policy at the business level, it isn't specific enough to be enforceable by network elements. Translation to "network terms and language" is required. Once the business rules are known, a network administrator must interpret them as network QoS policy and represent this QoS policy using QPIM constructs. QPIM facilitates a formal representation of QoS rules, thus providing the first concretization level: formally representing humanly expressed QoS policy.
On the other end of the scale, a network element functioning as a PEP, such as a router, can be configured with specific commands that determine the operational parameters of its inner working QoS mechanisms. For example, the (imaginary) command "output-queue-depth = 100" may be an instruction to a network interface card of a router to allow up to 100 packets to be stored before subsequent packets are discarded (not forwarded). On a different device within the same network, the same instruction may take another form, because the device was built by a different vendor, or because it has a different set of functions (and hence a different implementation) even though it is from the same vendor. In addition, a particular PEP may not have the ability to create queues that are longer than, say, 50 packets, which may result in a different instruction implementing the same QoS policy.

The first example illustrates 'abstract policy', while the second illustrates 'concrete configuration'. Furthermore, the first example illustrates end-to-end policy, which covers the conditioning of application traffic throughout the network. The second example illustrates configuration for a particular PEP or a set thereof. While an end-to-end policy statement can only be enforced by configuration of PEPs in various parts of the network, the information model of policy and that of the mechanisms that a PEP uses to implement that policy are vastly different.

The translation process from abstract business policy to concrete PEP configuration is roughly expressed as follows:

1. Informal business QoS policy is expressed by a human policy maker (e.g., "All executives' WEB requests should be prioritized ahead of other employees' WEB requests").

2. A network administrator analyzes the policy domain's topology and determines the roles of particular device interfaces. A role may be assigned to a large group of elements, which will result in mapping a particular policy to a large group of device interfaces.

3. The network administrator models the informal policy using QPIM constructs, thus creating a formal representation of the abstract policy. For example, "If a packet's protocol is HTTP and its destination is in the 'EXECUTIVES' user group, then assign IPP 7 to the packet header".

4. The network administrator assigns roles to the policy groups created in the previous step, matching the network elements' roles assigned in step #2 above.
5. A PDP translates the abstract policy constructs created in step #3 into device-specific configuration commands for all devices affected by the new policy (i.e., devices that have interfaces that are assigned a role matching the new policy constructs' roles). In this process, the PDP consults the particular devices' capabilities to determine the appropriate configuration commands implementing the policy.

6. For each PEP in the network, the PDP (or an agent of the PDP) issues the appropriate device-specific instructions necessary to enforce the policy.

QPIM, PCIM and PCIMe are used in step #3 above.

1.4. Rule Hierarchy
Policy is described by a set of policy rules that may be grouped into subsets [PCIMe]. Policy rules and policy groups can be nested within other policy rules, providing a hierarchical policy definition. Nested rules are also called sub-rules, and we use both terms interchangeably in this document. The aggregation PolicySetComponent (defined in [PCIMe]) is used to represent the nesting of a policy rule or group within another policy rule. The hierarchical policy rule definition enhances policy readability and reusability. Within the QoS policy information model, hierarchy is used to model context or scope for the sub-rule actions. Within QPIM, bandwidth allocation policy actions and drop threshold actions use this hierarchical context. First we provide a detailed example of the use of hierarchy in bandwidth allocation policies, and discuss the differences between flat and hierarchical policy representation. The use of hierarchy in drop threshold policies is described in a following subsection. Last but not least, the restrictions on the use of rule hierarchies within QPIM are described.

1.4.1. Use of Hierarchy Within Bandwidth Allocation Policies
Consider the following example, where the informal policy reads:

   On any interface on which these rules apply, guarantee at least
   30% of the interface bandwidth to UDP flows, and at least 40% of
   the interface bandwidth to TCP flows.

The QoS policy information model follows the Policy Core information model by using roles as a way to specify the set of interfaces to which this policy applies. The policy does not assume that all interfaces run at the same speed, or have any other property in common apart from being able to forward packets. Bandwidth is allocated between UDP and TCP flows using percentages of the available interface bandwidth. Assume that we have an available interface bandwidth of 1 Mbit/s. Then this rule will guarantee 300 Kbit/s to UDP flows. However, if the interface bandwidth were instead only 64 Kbit/s, then this rule would correspondingly guarantee 19.2 Kbit/s.

This policy is modeled within QPIM using two policy rules of the form:

   If (IP protocol is UDP) THEN (guarantee 30% of available BW)   (1)
   If (IP protocol is TCP) THEN (guarantee 40% of available BW)   (2)

Assume that these two rules are grouped within a PolicySet [PCIMe] carrying the appropriate role combination. A possible implementation of these rules within a PEP would be to use a Weighted-Round-Robin scheduler with 3 queues. The first queue would be used for UDP traffic, the second queue for TCP traffic, and the third queue for the rest of the traffic. The weights of the Weighted-Round-Robin scheduler would be 30% for the first queue, 40% for the second queue and 30% for the last queue.

The actions specifying the bandwidth guarantee implicitly assume that the bandwidth resource being guaranteed is the bandwidth available at the interface level. A PolicyRoleCollection is a class defined in [PCIMe] whose purpose is to identify the set of resources (in this example, interfaces) that are assigned to a particular role. Thus, the type of managed elements aggregated within the PolicyRoleCollection defines the bandwidth resource being controlled. In our example, interfaces are aggregated within the PolicyRoleCollection. Therefore, the rules specify bandwidth allocation to all interfaces that match a given role. Other behavior could be similarly defined by changing what was aggregated within the PolicyRoleCollection.

Normally, a full specification of the rules would require indicating the direction of the traffic for which bandwidth allocation is being made.
Using the direction variable defined in [PCIMe], the rules can be specified in the following form:

   If (direction is out)
      If (IP protocol is UDP) THEN (guarantee 30% of available BW)
      If (IP protocol is TCP) THEN (guarantee 40% of available BW)

where indentation is used to indicate rule nesting. To save space, we omit the direction condition from further discussion.
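The percentage-based arithmetic above can be checked with a small sketch (Python is used here for illustration only; the function name is ours, not part of QPIM):

```python
def guaranteed_bw(interface_bps, percent):
    """Bandwidth (bit/s) guaranteed by a rule of the form
    'guarantee <percent>% of available BW' on a given interface."""
    return interface_bps * percent // 100

# The same role-based rule yields different absolute guarantees on
# interfaces of different speeds:
#   30% of a 1 Mbit/s interface  -> 300 Kbit/s for UDP flows
#   30% of a 64 Kbit/s interface -> 19.2 Kbit/s
```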
Rule nesting provides the ability to further refine the scope of bandwidth allocation within a given traffic class forwarded via these interfaces. The example below adds two nested rules to refine bandwidth allocation for UDP and TCP applications.

   If (IP protocol is UDP) THEN (guarantee 30% of available BW)    (1)
      If (protocol is TFTP) THEN (guarantee 10% of available BW)   (1a)
      If (protocol is NFS) THEN (guarantee 40% of available BW)    (1b)
   If (IP protocol is TCP) THEN (guarantee 40% of available BW)    (2)
      If (protocol is HTTP) THEN (guarantee 20% of available BW)   (2a)
      If (protocol is FTP) THEN (guarantee 30% of available BW)    (2b)

Sub-rules 1a and 1b specify bandwidth allocation for UDP applications. The total bandwidth resource being partitioned among UDP applications is the bandwidth available for the UDP traffic class (i.e., 30%), not the total bandwidth available at the interface level. Furthermore, TFTP and NFS are guaranteed to get at least 10% and 40%, respectively, of the total available bandwidth for UDP, while other UDP applications aren't guaranteed to receive anything. Thus, TFTP and NFS are guaranteed to get at least 3% and 12% of the total interface bandwidth. Similar logic applies to the TCP applications.

This section shows that a hierarchical policy representation enables bandwidth allocation to be specified at a finer level of granularity than is available using a non-hierarchical policy representation. To see this, let's compare this set of rules with a non-hierarchical (flat) rule representation. In the flat representation, the guaranteed bandwidth for TFTP flows is calculated by taking 10% of the bandwidth guaranteed to UDP flows, resulting in a guarantee of 3% of the total interface bandwidth:
   If (UDP AND TFTP) THEN (guarantee 3% of available BW)     (1a)
   If (UDP AND NFS) THEN (guarantee 12% of available BW)     (1b)
   If (other UDP apps) THEN (guarantee 15% of available BW)  (1c)
   If (TCP AND HTTP) THEN (guarantee 8% of available BW)     (2a)
   If (TCP AND FTP) THEN (guarantee 12% of available BW)     (2b)
   If (other TCP apps) THEN (guarantee 20% of available BW)  (2c)

Are these two representations identical? No, the bandwidth allocation is not the same. For example, within the hierarchical representation, UDP applications are guaranteed 30% of the bandwidth. Suppose a single UDP flow of an application other than NFS or TFTP is running. This application would be guaranteed 30% of the interface bandwidth in the hierarchical representation, but only 15% of the interface bandwidth in the flat representation.
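The mechanical relationship between the two representations can be made concrete with a short sketch (illustrative Python; the dictionary structure is ours, not a QPIM construct). It derives the flat interface-level guarantees from the nested rules, where each sub-rule's percentage is taken relative to its parent's share:

```python
# Nested rules (1)-(2b) above: parent share, then sub-rule shares,
# each relative to the parent's allocation.
RULES = {
    "UDP": (30, {"TFTP": 10, "NFS": 40}),
    "TCP": (40, {"HTTP": 20, "FTP": 30}),
}

def flatten(rules):
    """Interface-level guarantee implied for each sub-rule."""
    flat = {}
    for parent, (share, subs) in rules.items():
        for child, sub_share in subs.items():
            flat[child] = share * sub_share / 100  # e.g., 10% of 30% = 3%
        # in the flat translation, traffic matching the parent class
        # but no sub-rule is given the remainder of the parent's share
        flat["other " + parent] = share * (100 - sum(subs.values())) / 100
    return flat

# flatten(RULES) reproduces rules (1a)-(2c): TFTP 3%, NFS 12%,
# other UDP 15%, HTTP 8%, FTP 12%, other TCP 20%.
```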
A two-stage scheduler is best modeled by a hierarchical representation, whereas a flat representation may be realized by a non-hierarchical scheduler. A schematic hierarchical Weighted-Round-Robin scheduler implementation that supports the hierarchical rule representation is described below.

   --UDP AND TFTP queue--10%--+
                              |
   --UDP AND NFS queue---40%--Scheduler--30%--+
                              |   A1          |
   --Other UDP queue-----50%--+               |
                                              |
   --TCP AND HTTP queue--20%--+               |
                              |               |
   --TCP AND FTP queue---30%--Scheduler--40%--Scheduler--Interface
                              |   A2          |   B
   --Other TCP queue-----50%--+               |
                                              |
   ----Non UDP/TCP traffic--------------30%---+

Scheduler A1 extracts packets from the 3 UDP queues according to the weights specified by the UDP sub-rule policy. Scheduler A2 extracts packets from the 3 TCP queues according to the weights specified by the TCP sub-rule policy. The second-stage scheduler B schedules between UDP, TCP and all other traffic according to the policy specified at the top-most rule level.

Another difference between the flat and hierarchical rule representations is the actual division of bandwidth above the minimal bandwidth guarantee. Suppose two high-rate streams are being forwarded via this interface: an HTTP stream and an NFS stream. Suppose that the rate of each flow is far beyond the capacity of the interface. In the flat scheduler implementation, the ratio between the weights is 8:12 (i.e., HTTP:NFS), and therefore the HTTP stream would consume 40% of the bandwidth while NFS would consume 60% of the bandwidth. In the hierarchical scheduler implementation, the only scheduler that has two queues filled is scheduler B; the ratio between the HTTP (TCP) stream and the NFS (UDP) stream is therefore 40:30, so the HTTP stream would consume approximately 57% of the interface bandwidth while NFS would consume approximately 43%. In both cases both the HTTP and NFS streams get more than their minimal guaranteed bandwidth, but the actual rates forwarded via the interface differ.
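The "above the minimum" behavior can be sketched numerically (illustrative Python, assuming an idealized work-conserving WRR that divides bandwidth among backlogged queues in proportion to their weights):

```python
def wrr_shares(weights, backlogged):
    """Long-run share of each backlogged queue under idealized WRR:
    idle queues give up their bandwidth to the active ones."""
    active = {q: w for q, w in weights.items() if q in backlogged}
    total = sum(active.values())
    return {q: w / total for q, w in active.items()}

# Flat scheduler: HTTP and NFS compete directly with weights 8 and 12.
flat = wrr_shares({"HTTP": 8, "NFS": 12, "TFTP": 3}, {"HTTP", "NFS"})

# Hierarchical: only scheduler B sees two backlogged inputs,
# TCP (weight 40, carrying HTTP) and UDP (weight 30, carrying NFS).
hier = wrr_shares({"TCP": 40, "UDP": 30, "other": 30}, {"TCP", "UDP"})

# flat -> HTTP 40%, NFS 60%; hier -> HTTP ~57%, NFS ~43%
```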
The conclusion is that hierarchical policy representation provides additional structure and context beyond the flat policy representation. Furthermore, policies specifying bandwidth allocation using rule hierarchies should be enforced using hierarchical schedulers where the rule hierarchy level is mapped to the hierarchical scheduler level.
1.4.2. Use of Rule Hierarchy to Describe Drop Threshold Policies
Two major resources govern the per-hop behavior in each node. The first, the bandwidth allocation resource, governs the forwarding behavior of each traffic class: bandwidth allocation policies control a scheduler's priority and weights, as well as the (minimal) number of queues needed for traffic separation. The second resource, which is not controlled by bandwidth allocation policies, is the queuing length and drop behavior. For this purpose, queue length and threshold policies are used.

Rule hierarchy is used to describe the context on which thresholds act. The policy rule's condition describes the traffic class, and the rule's actions describe the bandwidth allocation, the forwarding priority and the queue length. If the traffic class contains different drop-precedence sub-classes that require different thresholds within the same queue, the sub-rules' actions describe these thresholds. Below is an example of the use of rule nesting for threshold control purposes. Let's look at the following rules:

   If (protocol is FTP) THEN (guarantee 10% of available BW)
                             (queue length equals 40 packets)
                             (drop technique is random)
      If (src-ip is from net 2.x.x.x) THEN (min threshold = 30%)
                                           (max threshold = 70%)
      If (src-ip is from net 3.x.x.x) THEN (min threshold = 40%)
                                           (max threshold = 90%)
      If (all other) THEN (min threshold = 20%)
                          (max threshold = 60%)

The rule describes the bandwidth allocation, the queue length and the drop technique assigned to FTP flows. The sub-rules describe the drop threshold priorities within those FTP flows. FTP packets received from all networks apart from networks 2.x.x.x and 3.x.x.x are randomly dropped once the FTP queue fills to 20% of its length. Once the queue fills to 60%, all of these packets are dropped before queuing. The two other sub-rules provide other thresholds for FTP packets coming from the specified two subnets.
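One plausible reading of the threshold semantics above is a RED-style random drop: nothing is dropped below the minimum threshold, everything is dropped at or above the maximum, and the drop probability ramps linearly in between. A sketch (illustrative Python; QPIM does not mandate this exact drop function):

```python
import random

def should_drop(queue_fill, min_th, max_th, rng=random.random):
    """Drop decision for one drop-precedence sub-class.
    queue_fill, min_th and max_th are fractions of the queue length,
    e.g., 0.20 and 0.60 for the 'all other' FTP sub-rule above."""
    if queue_fill < min_th:
        return False          # queue short enough: always enqueue
    if queue_fill >= max_th:
        return True           # over the max threshold: always drop
    # linear ramp between the two thresholds (RED-style assumption)
    return rng() < (queue_fill - min_th) / (max_th - min_th)

# An FTP packet matching the 'all other' sub-rule is never dropped
# below 20% occupancy and always dropped at 60% occupancy and above.
```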
The Assured Forwarding (AF) per-hop behavior is another good example of the use of hierarchy to describe the different drop preferences within a traffic class. This example is provided in a later section.
1.4.3. Restrictions of the Use of Hierarchy Within QPIM
Rule nesting is used within QPIM for two important purposes:

   1) Enhance clarity, readability and reusability.
   2) Provide hierarchical context for actions.

The second point captures the ability to specify context for bandwidth allocation, as well as providing context for drop threshold policies. When does a hierarchy level specify the bandwidth allocation context, when does it specify the drop threshold context, and when is it used merely for clarity and reusability? The answer depends entirely on the actions. Bandwidth control actions within a sub-rule specify how the bandwidth allocated to the traffic class determined by the rule's condition clause should be further divided among the sub-rules. Drop threshold actions control the traffic class's queue drop behavior for each of the sub-rules. The bandwidth control actions have an implicit pointer saying: the bandwidth allocation is relative to the bandwidth resources defined by the higher-level rule. Drop threshold actions have an implicit pointer saying: the thresholds are taken from the queue resources defined by the higher-level rule. Other actions do not have such an implicit pointer, and for these actions hierarchy is used only for reusability and readability purposes.

Each rule that includes a bandwidth allocation action implies that a queue should be allocated to the traffic class defined by the rule's condition clause. Therefore, once a bandwidth allocation action exists within the actions of a sub-rule, a threshold action within this sub-rule cannot refer to thresholds of the parent rule's queue. Instead, it must refer to the queue of the sub-rule itself. Therefore, in order to have a clear and unambiguous definition, refinement of thresholds and refinement of bandwidth allocations within the same sub-rules should be avoided.
If both refinements are needed for the same rule, the threshold refinement rules and the bandwidth refinement rules should each be aggregated into a separate group, and these groups should be aggregated under the policy rule using the PolicySetComponent aggregation.
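A PDP or policy validation tool could enforce this restriction mechanically. A sketch (illustrative Python; the dictionary encoding of rules is ours, not a QPIM schema):

```python
def mixed_refinements(rule):
    """Return the names of sub-rules (at any depth) that carry both a
    bandwidth-allocation action and a drop-threshold action, which the
    restriction above says should be avoided."""
    offenders = []
    for sub in rule.get("subrules", []):
        kinds = {action["kind"] for action in sub.get("actions", [])}
        if {"bandwidth", "threshold"} <= kinds:
            offenders.append(sub["name"])
        offenders += mixed_refinements(sub)   # recurse into nesting
    return offenders

ftp_rule = {
    "name": "ftp", "actions": [{"kind": "bandwidth"}],
    "subrules": [
        {"name": "net2", "actions": [{"kind": "threshold"}]},   # fine
        {"name": "net3", "actions": [{"kind": "bandwidth"},
                                     {"kind": "threshold"}]},   # ambiguous
    ],
}
# mixed_refinements(ftp_rule) flags only "net3".
```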
1.5. Intended Audiences
QPIM is intended for several audiences. The following lists some of the intended audiences and their respective uses:

1. Developers of QoS policy management applications can use this model as an extensible framework for defining policies to control PEPs and PDPs in an interoperable manner.

2. Developers of Policy Decision Point (PDP) systems built to control resource allocation signaled by RSVP requests.

3. Developers of Policy Decision Point (PDP) systems built to create QoS configuration for PEPs.

4. Builders of large organization data and knowledge bases who decide to combine QoS policy information with other networking policy information, assuming all modeling is based on [PCIM] and [PCIMe].

5. Authors of various standards may use constructs introduced in this document to enhance their work. Authors of data models wishing to map a storage-specific technology to QPIM must use this document as well.