
Internet Engineering Task Force (IETF)                  V. Bhuvaneswaran
Request for Comments: 8456                                      A. Basil
Category: Informational                               Veryx Technologies
ISSN: 2070-1721                                             M. Tassinari
                                              Hewlett Packard Enterprise
                                                               V. Manral
                                                                 NanoSec
                                                                S. Banks
                                                          VSS Monitoring
                                                            October 2018


     Benchmarking Methodology for Software-Defined Networking (SDN)
                         Controller Performance

Abstract

   This document defines methodologies for benchmarking the
   control-plane performance of Software-Defined Networking (SDN)
   Controllers.  The SDN Controller is a core component in the SDN
   architecture that controls the behavior of the network.  SDN
   Controllers have been implemented with many varying designs in order
   to achieve their intended network functionality.  Hence, the authors
   of this document have taken the approach of considering an SDN
   Controller to be a black box, defining the methodology in a manner
   that is agnostic to protocols and network services supported by
   controllers.  This document provides a method for measuring the
   performance of all controller implementations.

Status of This Memo

   This document is not an Internet Standards Track specification; it
   is published for informational purposes.

   This document is a product of the Internet Engineering Task Force
   (IETF).  It represents the consensus of the IETF community.  It has
   received public review and has been approved for publication by the
   Internet Engineering Steering Group (IESG).  Not all documents
   approved by the IESG are a candidate for any level of Internet
   Standard; see Section 2 of RFC 7841.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   https://www.rfc-editor.org/info/rfc8456.
Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1. Introduction ....................................................4
      1.1. Conventions Used in This Document ..........................4
   2. Scope ...........................................................4
   3. Test Setup ......................................................4
      3.1. Test Setup - Controller Operating in Standalone Mode .......5
      3.2. Test Setup - Controller Operating in Cluster Mode ..........6
   4. Test Considerations .............................................7
      4.1. Network Topology ...........................................7
      4.2. Test Traffic ...............................................7
      4.3. Test Emulator Requirements .................................7
      4.4. Connection Setup ...........................................8
      4.5. Measurement Point Specification and Recommendation .........9
      4.6. Connectivity Recommendation ................................9
      4.7. Test Repeatability .........................................9
      4.8. Test Reporting .............................................9
   5. Benchmarking Tests .............................................11
      5.1. Performance ...............................................11
           5.1.1. Network Topology Discovery Time ....................11
           5.1.2. Asynchronous Message Processing Time ...............13
           5.1.3. Asynchronous Message Processing Rate ...............14
           5.1.4. Reactive Path Provisioning Time ....................17
           5.1.5. Proactive Path Provisioning Time ...................19
           5.1.6. Reactive Path Provisioning Rate ....................21
           5.1.7. Proactive Path Provisioning Rate ...................23
           5.1.8. Network Topology Change Detection Time .............25
      5.2. Scalability ...............................................26
           5.2.1. Control Sessions Capacity ..........................26
           5.2.2. Network Discovery Size .............................27
           5.2.3. Forwarding Table Capacity ..........................29
      5.3. Security ..................................................31
           5.3.1. Exception Handling .................................31
           5.3.2. Handling Denial-of-Service Attacks .................32
      5.4. Reliability ...............................................34
           5.4.1. Controller Failover Time ...........................34
           5.4.2. Network Re-provisioning Time .......................36
   6. IANA Considerations ............................................37
   7. Security Considerations ........................................38
   8. References .....................................................38
      8.1. Normative References ......................................38
      8.2. Informative References ....................................38
   Appendix A. Benchmarking Methodology Using OpenFlow Controllers ...39
     A.1. Protocol Overview ..........................................39
     A.2. Messages Overview ..........................................39
     A.3. Connection Overview ........................................39
     A.4. Performance Benchmarking Tests .............................40
       A.4.1. Network Topology Discovery Time ........................40
       A.4.2. Asynchronous Message Processing Time ...................42
       A.4.3. Asynchronous Message Processing Rate ...................43
       A.4.4. Reactive Path Provisioning Time ........................44
       A.4.5. Proactive Path Provisioning Time .......................46
       A.4.6. Reactive Path Provisioning Rate ........................47
       A.4.7. Proactive Path Provisioning Rate .......................49
       A.4.8. Network Topology Change Detection Time .................50
     A.5. Scalability ................................................51
       A.5.1. Control Sessions Capacity ..............................51
       A.5.2. Network Discovery Size .................................52
       A.5.3. Forwarding Table Capacity ..............................54
     A.6. Security ...................................................55
       A.6.1. Exception Handling .....................................55
       A.6.2. Handling Denial-of-Service Attacks .....................57
     A.7. Reliability ................................................59
       A.7.1. Controller Failover Time ...............................59
       A.7.2. Network Re-provisioning Time ...........................61
   Acknowledgments ...................................................63
   Authors' Addresses ................................................64

1. Introduction

   This document provides generic methodologies for benchmarking
   Software-Defined Networking (SDN) Controller performance.  To
   achieve the desired functionality, an SDN Controller may support
   many northbound and southbound protocols, implement a wide range of
   applications, and work either alone or as part of a group.

   This document considers an SDN Controller to be a black box,
   regardless of design and implementation.  The tests defined in this
   document can be used to benchmark an SDN Controller for performance,
   scalability, reliability, and security, independently of northbound
   and southbound protocols.  Terminology related to benchmarking SDN
   Controllers is described in the companion terminology document
   [RFC8455].

   These tests can be performed on an SDN Controller running as a
   virtual machine (VM) instance or on a bare-metal server.  This
   document is intended for those who want to measure an SDN
   Controller's performance as well as compare the performance of
   various SDN Controllers.

1.1. Conventions Used in This Document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

2. Scope

   This document defines a methodology for measuring the networking
   metrics of SDN Controllers.  For the purpose of this memo, the SDN
   Controller is a function that manages and controls Network Devices.
   Any SDN Controller without a control capability is out of scope for
   this memo.

   The tests defined in this document enable the benchmarking of SDN
   Controllers in two ways: standalone mode (a standalone controller)
   and cluster mode (a cluster of homogeneous controllers).  These
   tests are recommended for execution in lab environments rather than
   in live network deployments.

   Performance benchmarking of a federation of controllers (i.e., a set
   of SDN Controllers) managing different domains is beyond the scope
   of this document.

3. Test Setup

As noted above, the tests defined in this document enable the measurement of an SDN Controller's performance in standalone mode and cluster mode. This section defines common reference topologies that are referred to in individual tests described later in this document.

3.1. Test Setup - Controller Operating in Standalone Mode

   +-----------------------------------------------------------+
   |               Application-Plane Test Emulator             |
   |                                                           |
   |        +-----------------+       +-------------+          |
   |        |   Application   |       |   Service   |          |
   |        +-----------------+       +-------------+          |
   |                                                           |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 | (Northbound Interface)
                 +-------------------------------+
                 |       +----------------+      |
                 |       | SDN Controller |      |
                 |       +----------------+      |
                 |                               |
                 |    Device Under Test (DUT)    |
                 +-------------------------------+
                                 | (Southbound Interface)
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                           |
   |     +-----------+     +-------------+                     |
   |     |  Network  |     |   Network   |                     |
   |     | Device 2  |--..-| Device n - 1|                     |
   |     +-----------+     +-------------+                     |
   |          /  \              /  \                           |
   |         /    \            /    \                          |
   |    l0  /      X          /      \  ln                     |
   |       /      / \        /        \                        |
   |   +-----------+  +-----------+                            |
   |   |  Network  |  |  Network  |                            |
   |   | Device 1  |..| Device n  |                            |
   |   +-----------+  +-----------+                            |
   |         |              |                                  |
   |   +---------------+ +---------------+                     |
   |   | Test Traffic  | | Test Traffic  |                     |
   |   |   Generator   | |   Generator   |                     |
   |   |     (TP1)     | |     (TP2)     |                     |
   |   +---------------+ +---------------+                     |
   |                                                           |
   |             Forwarding-Plane Test Emulator                |
   +-----------------------------------------------------------+

                             Figure 1

3.2. Test Setup - Controller Operating in Cluster Mode

   +-----------------------------------------------------------+
   |               Application-Plane Test Emulator             |
   |                                                           |
   |        +-----------------+       +-------------+          |
   |        |   Application   |       |   Service   |          |
   |        +-----------------+       +-------------+          |
   |                                                           |
   +-----------------------------+(I2)-------------------------+
                                 |
                                 | (Northbound Interface)
    +---------------------------------------------------------+
    |  +------------------+           +------------------+    |
    |  | SDN Controller 1 | <--E/W--> | SDN Controller n |    |
    |  +------------------+           +------------------+    |
    |                                                         |
    |                 Device Under Test (DUT)                 |
    +---------------------------------------------------------+
                                 | (Southbound Interface)
                                 |
   +-----------------------------+(I1)-------------------------+
   |                                                           |
   |     +-----------+     +-------------+                     |
   |     |  Network  |     |   Network   |                     |
   |     | Device 2  |--..-| Device n - 1|                     |
   |     +-----------+     +-------------+                     |
   |          /  \              /  \                           |
   |         /    \            /    \                          |
   |    l0  /      X          /      \  ln                     |
   |       /      / \        /        \                        |
   |   +-----------+  +-----------+                            |
   |   |  Network  |  |  Network  |                            |
   |   | Device 1  |..| Device n  |                            |
   |   +-----------+  +-----------+                            |
   |         |              |                                  |
   |   +---------------+ +---------------+                     |
   |   | Test Traffic  | | Test Traffic  |                     |
   |   |   Generator   | |   Generator   |                     |
   |   |     (TP1)     | |     (TP2)     |                     |
   |   +---------------+ +---------------+                     |
   |                                                           |
   |             Forwarding-Plane Test Emulator                |
   +-----------------------------------------------------------+

                             Figure 2

4. Test Considerations

4.1. Network Topology

   The test cases SHOULD use a Leaf-Spine topology with at least two
   Network Devices in the topology for benchmarking.  Test traffic
   generators TP1 and TP2 SHOULD be connected to leaf Network Device 1
   and leaf Network Device n.  To achieve a complete performance
   characterization of the SDN Controller, it is recommended that the
   controller be benchmarked for many network topologies and a varying
   number of Network Devices.  Further, care should be taken to make
   sure that a loop-prevention mechanism is enabled in either the SDN
   Controller or the network when the topology contains redundant
   network paths.
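   As an illustration only (not part of this methodology), the
   following Python sketch builds the Leaf-Spine adjacency described
   above as a simple neighbor map, with test ports TP1 and TP2 attached
   to the first and last leaf.  The function name and parameters are
   placeholders, not defined by this document.

      def leaf_spine(num_leaves, num_spines):
          """Return {node: set(neighbors)} for a full Leaf-Spine mesh."""
          leaves = ["leaf%d" % i for i in range(1, num_leaves + 1)]
          spines = ["spine%d" % i for i in range(1, num_spines + 1)]
          adj = {node: set() for node in leaves + spines}
          for leaf in leaves:          # every leaf links to every spine
              for spine in spines:
                  adj[leaf].add(spine)
                  adj[spine].add(leaf)
          # Test traffic generators on the first and last leaf
          adj["TP1"] = {leaves[0]}
          adj[leaves[0]].add("TP1")
          adj["TP2"] = {leaves[-1]}
          adj[leaves[-1]].add("TP2")
          return adj

      # Example: 4 leaves, 2 spines -> 6 Network Devices, 10 links
      topo = leaf_spine(num_leaves=4, num_spines=2)
      print(sum(len(v) for v in topo.values()) // 2, "links")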

4.2. Test Traffic

Test traffic is used to notify the controller about the asynchronous arrival of new flows. The test cases SHOULD use frame sizes of 128, 512, and 1508 bytes for benchmarking. Tests using jumbo frames are optional.
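   The following sketch (illustrative, not normative) shows how the
   recommended frame sizes translate into UDP payload lengths, assuming
   an untagged Ethernet/IPv4/UDP stack and counting the 4-byte FCS in
   the frame size, as is common benchmarking practice:

      HEADERS = 14 + 20 + 8   # Ethernet + IPv4 + UDP, no VLAN tag
      FCS = 4                 # frame check sequence, added by the NIC

      def payload_len(frame_size):
          """UDP payload bytes needed to reach the target frame size."""
          return frame_size - HEADERS - FCS

      for size in (128, 512, 1508):
          print("frame %4d bytes -> %4d-byte payload"
                % (size, payload_len(size)))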

4.3. Test Emulator Requirements

   The test emulator SHOULD timestamp the control messages transmitted
   to and received from the controller on the established network
   connections.  The test cases use these values to compute the
   controller's processing time.
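   A minimal sketch of this timestamping behavior, assuming a
   hypothetical connection object with send() and recv() methods
   (neither is defined by this document): a monotonic timestamp is
   recorded at the emulator interface for each control message, so that
   a test case can later derive processing time as the difference
   between matching receive and transmit timestamps.

      import time

      class TimestampedChannel:
          """Wraps an established controller connection and logs
          (direction, timestamp, message) for every control message."""

          def __init__(self, channel):
              self.channel = channel
              self.log = []

          def send(self, msg):
              self.log.append(("tx", time.monotonic(), msg))
              self.channel.send(msg)

          def recv(self):
              msg = self.channel.recv()
              self.log.append(("rx", time.monotonic(), msg))
              return msg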

4.4. Connection Setup

   There may be controller implementations that support unencrypted and
   encrypted network connections with Network Devices.  Further, the
   controller may be backward compatible with Network Devices running
   older versions of southbound protocols.  It may be useful to measure
   the controller's performance with one or more of the applicable
   connection setup methods defined below (see the sketch following
   this list).  For cases with encrypted communications between the
   controller and the switch, key management and key exchange MUST take
   place before any performance or benchmark measurements.

   1. Unencrypted connection with Network Devices, running the same
      protocol version.

   2. Unencrypted connection with Network Devices, running different
      protocol versions.

      Examples:

      a. Controller running current protocol version and switch
         running older protocol version.

      b. Controller running older protocol version and switch running
         current protocol version.

   3. Encrypted connection with Network Devices, running the same
      protocol version.

   4. Encrypted connection with Network Devices, running different
      protocol versions.

      Examples:

      a. Controller running current protocol version and switch
         running older protocol version.

      b. Controller running older protocol version and switch running
         current protocol version.
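   As a sketch of setup methods 1 and 3 above, the following Python
   fragment opens an unencrypted and an encrypted (TLS) connection
   toward a controller; the host, port, and CA file are placeholders.
   For the encrypted case, the TLS handshake, and with it the key
   exchange, runs to completion inside wrap_socket(), before any
   benchmark traffic is sent:

      import socket
      import ssl

      def connect_plain(host, port):
          # Setup methods 1/2: plain TCP connection to the controller
          return socket.create_connection((host, port))

      def connect_tls(host, port, cafile):
          # Setup methods 3/4: TLS connection; the handshake (key
          # exchange) completes here, prior to any measurement
          ctx = ssl.create_default_context(cafile=cafile)
          sock = socket.create_connection((host, port))
          return ctx.wrap_socket(sock, server_hostname=host)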

4.5. Measurement Point Specification and Recommendation

The accuracy of the measurements depends on several factors, including the point of observation where the indications are captured. For example, the notification can be observed at the controller or test emulator. The test operator SHOULD make the observations/measurements at the interfaces of the test emulator, unless explicitly specified otherwise in the individual test. In any case, the locations of measurement points MUST be reported.

4.6. Connectivity Recommendation

   The SDN Controller in the test setup SHOULD be connected directly
   with the forwarding-plane and management-plane test emulators to
   avoid any delays or failures introduced by intermediate devices
   during benchmarking tests.  When the controller is implemented as a
   virtual machine, details of the physical and logical connectivity
   MUST be reported.

4.7. Test Repeatability

   To increase confidence in the measured results, each test SHOULD be
   repeated a minimum of 10 times.
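   One illustrative way to aggregate the repeated trials (not mandated
   by this document) is to report the mean and sample standard
   deviation of each measured metric, so that run-to-run variation is
   visible in the test report:

      import statistics

      def summarize(samples):
          """Summary statistics for one metric over repeated trials."""
          return {
              "trials": len(samples),
              "mean": statistics.mean(samples),
              "stdev": statistics.stdev(samples) if len(samples) > 1
                       else 0.0,
          }

      # e.g., ten hypothetical topology discovery times in milliseconds
      print(summarize([812, 799, 805, 821, 808, 797, 815, 803, 810, 806]))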

4.8. Test Reporting

   Each test has a reporting format that contains some global and
   identical reporting components, and some individual components that
   are specific to individual tests.  The following parameters for test
   configuration and controller settings MUST be reflected in the test
   report (a sketch of one machine-readable rendering appears at the
   end of this section).

   Test Configuration Parameters:

      1.  Controller name and version

      2.  Northbound protocols and versions

      3.  Southbound protocols and versions

      4.  Controller redundancy mode (standalone or cluster mode)

      5.  Connection setup (unencrypted or encrypted)

      6.  Network Device type (physical, virtual, or emulated)

      7.  Number of nodes

      8.  Number of links

      9.  Data-plane test traffic type

      10. Controller system configuration (e.g., physical or virtual
          machine, CPU, memory, caches, operating system, interface
          speed, storage)

      11. Reference test setup (e.g., the setup shown in Section 3.1)

   Parameters for Controller Settings:

      1. Topology rediscovery timeout

      2. Controller redundancy mode (e.g., active-standby)

      3. Controller state persistence enabled/disabled

   To ensure the repeatability of the test, the following capabilities
   of the test emulator SHOULD be reported:

      1. Maximum number of Network Devices that the forwarding plane
         emulates

      2. Control message processing time (e.g., topology discovery
         messages)

   One way to determine the above two values is to simulate the required
   control sessions and messages from the control plane.
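
   The following is one hypothetical, machine-readable rendering of the
   reporting parameters listed above; the field names and layout are
   illustrative and are not mandated by this document:

      test_report = {
          "controller": {"name": "...", "version": "..."},
          "northbound_protocols": ["..."],
          "southbound_protocols": ["..."],
          "redundancy_mode": "standalone",    # or "cluster"
          "connection_setup": "unencrypted",  # or "encrypted"
          "device_type": "emulated",          # physical | virtual | emulated
          "nodes": 0,
          "links": 0,
          "dataplane_traffic_type": "...",
          "controller_system": {"cpu": "...", "memory": "...", "os": "..."},
          "reference_test_setup": "Section 3.1",
          "controller_settings": {
              "topology_rediscovery_timeout": 0,
              "redundancy_mode": "active-standby",
              "state_persistence": False,
          },
      }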

