6.3. Shaper Tests
Like a queue, a traffic shaper is memory based, but with the added
intelligence of an active traffic scheduler. The same concepts as
those described in Section 6.2 (queue testing) can be applied to
testing a network device shaper. Again, the tests are divided into
two sections: individual shaper benchmark tests and then
full-capacity shaper benchmark tests.

6.3.1. Shaper Individual Tests
A traffic shaper generally has three (3) components that can be
configured:

- Ingress Queue bytes

- Shaper Rate (SR), bps

- Burst Committed (Bc) and Burst Excess (Be), bytes

The Ingress Queue holds burst traffic, and the shaper then meters
traffic out of the egress port according to the SR and Bc/Be
parameters. Shapers generally transmit into policers, so the idea is
for the emitted traffic to conform to the policer's limits.
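The metering behavior described above can be thought of as a token
bucket. The following minimal Python sketch is an illustrative model
only, not a normative description of any DUT's scheduler; the class
and parameter names are assumptions:

   # Illustrative single-rate token-bucket shaper model (assumption:
   # simplified semantics; real shaper implementations vary).
   class Shaper:
       def __init__(self, sr_bps, bc_bytes, be_bytes):
           self.sr_bps = sr_bps                 # Shaper Rate (SR), bps
           self.capacity = bc_bytes + be_bytes  # Bc + Be, bytes
           self.tokens = bc_bytes               # one committed burst to start
           self.last = 0.0                      # last refill time, seconds

       def _refill(self, now):
           # Tokens accrue at SR (bits/s converted to bytes/s),
           # capped at Bc + Be.
           self.tokens = min(self.capacity,
                             self.tokens +
                             (now - self.last) * self.sr_bps / 8)
           self.last = now

       def try_send(self, now, frame_bytes):
           # A frame is emitted only if enough tokens remain;
           # otherwise it waits in the Ingress Queue.
           self._refill(now)
           if self.tokens >= frame_bytes:
               self.tokens -= frame_bytes
               return True
           return False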
6.3.1.1. Testing Shaper with Stateless Traffic
Objective:
Test a shaper by transmitting stateless traffic bursts into the
shaper ingress port and verifying that the egress traffic is shaped
according to the shaper traffic profile.

Test Summary:
The stateless traffic must be burst into the DUT ingress port and
must not exceed the Ingress Queue. The burst can be a single burst
or multiple bursts. If multiple bursts are transmitted, then the
transmission interval (Ti) must be large enough so that the SR is not
exceeded. An example will clarify single-burst and multiple-burst
test cases.

In this example, the shaper's ingress and egress ports are both
full-duplex Gigabit Ethernet. The Ingress Queue is configured to be
512,000 bytes, the SR = 50 Mbps, and both Bc and Be are configured to
be 32,000 bytes.

For a single-burst test, the transmitting test device would burst
512,000 bytes maximum into the ingress port and then stop
transmitting.

If a multiple-burst test is to be conducted, then the burst bytes
divided by the transmission interval between the 512,000-byte bursts
must not exceed the SR. The transmission interval (Ti) must adhere
to a formula similar to the formula described in Section 6.2.1.1 for
queues, namely:

   Ti = Ingress Queue * 8 / SR

For the example from the previous paragraph, the Ti between bursts
must be greater than 82 milliseconds (512,000 bytes * 8 /
50,000,000 bps = 81.92 ms). This yields an average rate of 50 Mbps,
so the Ingress Queue will not overflow.
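This arithmetic is easy to script; a minimal Python sketch (the
helper name is illustrative) reproduces the Ti figure from the
example:

   def burst_interval_s(ingress_queue_bytes, sr_bps):
       # Ti = Ingress Queue * 8 / SR, in seconds
       return ingress_queue_bytes * 8 / sr_bps

   # Example from the text: 512,000-byte bursts shaped to SR = 50 Mbps.
   ti = burst_interval_s(512_000, 50_000_000)
   print(round(ti * 1000, 2))   # 81.92 ms; spacing bursts at or above
                                # this interval keeps the average rate
                                # at 50 Mbps so the Ingress Queue does
                                # not overflow.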
Test Metrics:
The metrics defined in Section 4.1 (LP, OOS, PDV, SR, SBB, and SBI)
SHALL be measured at the egress port and recorded.

Procedure:

1. Configure the DUT shaper ingress QL and shaper egress rate
   parameters (SR, Bc, Be).

2. Configure the tester to generate a stateless traffic burst equal
   to QL and an interval equal to Ti (QL in bits/BB).

3. Generate bursts of QL traffic into the DUT, and measure the
   metrics defined in Section 4.1 (LP, OOS, PDV, SR, SBB, and SBI) at
   the egress port and across the entire Td (default 30-second
   duration).

Reporting Format:
The Shaper Stateless Traffic individual report MUST contain all
results for each QL/SR test run. A recommended format is as follows:

***********************************************************
Test Configuration Summary: Tr, Td

DUT Configuration Summary: Ingress Burst Rate, QL, SR

The results table should contain entries for each test run, as
follows (Test #1 to Test #Tr):

- LP, OOS, PDV, SR, SBB, and SBI
***********************************************************

6.3.1.2. Testing Shaper with Stateful Traffic
Objective:
Test a shaper by transmitting stateful traffic bursts into the shaper
ingress port and verifying that the egress traffic is shaped
according to the shaper traffic profile.

Test Summary:
To provide a more realistic benchmark and to test queues in Layer 4
devices such as firewalls, stateful traffic testing is also
recommended for the shaper tests. Stateful traffic tests will also
utilize the Network Delay Emulator (NDE) from the network setup
configuration in Section 1.2.

The BDP of the TCP test traffic must be calculated as described in
Section 6.2.1.2. To properly stress network buffers and the
traffic-shaping function, the TCP window size (which is the minimum
of the TCP RWND and the sender socket buffer) should be greater than
the BDP. BDP factors of 1.1 to 1.5 are recommended, but the values
are left to the discretion of the tester and should be documented.
The cumulative TCP window size* (RWND at the receiving end and CWND
at the transmitting end) equates to the TCP window size* for each
connection, multiplied by the number of connections.

* As described in Section 3 of [RFC6349], the SSB MUST be large
  enough to fill the BDP.

For example, if the BDP is equal to 256 KB and a window size of 64 KB
is used for each connection, then it would require four (4)
connections to fill the BDP and 5-6 connections (oversubscribing the
BDP) to stress-test the traffic-shaping function, as illustrated in
the sketch below.

Two types of TCP tests MUST be performed: the Bulk Transfer Test and
the Micro Burst Test Pattern, as documented in Appendix B. The Bulk
Transfer Test only bursts during the TCP Slow Start (or Congestion
Avoidance) state, while the Micro Burst Test Pattern emulates
application-layer bursting, which may occur at any time during the
TCP connection.

Other types of tests SHOULD include the following: simple web sites,
complex web sites, business applications, email, and SMB/CIFS file
copy (all of which are also documented in Appendix B).
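The connection-count arithmetic above can be expressed as a small
Python sketch (the function name is illustrative):

   import math

   def connections_to_fill_bdp(bdp_bytes, window_bytes):
       # Minimum concurrent connections whose aggregate window fills
       # the BDP.
       return math.ceil(bdp_bytes / window_bytes)

   bdp = 256 * 1024      # 256 KB BDP from the example
   window = 64 * 1024    # 64 KB TCP window size per connection
   print(connections_to_fill_bdp(bdp, window))  # 4 connections fill
                                                # the BDP; 5-6 of them
                                                # oversubscribe it to
                                                # stress the shaper.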
Test Metrics:
The test results will be recorded per the stateful metrics defined in
Section 4.2 -- primarily the TCP Test Pattern Execution Time (TTPET),
TCP Efficiency, and Buffer Delay.

Procedure:

1. Configure the DUT shaper ingress QL and shaper egress rate
   parameters (SR, Bc, Be).

2. Configure the test generator* with a profile of an emulated
   application traffic mixture.

   - The application mixture MUST be defined in terms of percentage
     of the total bandwidth to be tested.

   - The rate of transmission for each application within the mixture
     MUST also be configurable.

   * To ensure repeatable results, the test generator MUST be capable
     of generating precise TCP test patterns for each application
     specified.

3. Generate application traffic between the ingress (client side) and
   egress (server side) ports of the DUT, and measure the metrics
   (TTPET, TCP Efficiency, and Buffer Delay) per application stream
   and at the ingress and egress ports (across the entire Td, default
   30-second duration).

Reporting Format:
The Shaper Stateful Traffic individual report MUST contain all
results for each traffic scheduler and QL/SR test run. A recommended
format is as follows:

******************************************************************
Test Configuration Summary: Tr, Td

DUT Configuration Summary: Ingress Burst Rate, QL, SR

Application Mixture and Intensities: These are the percentages
configured for each application type.

The results table should contain entries for each test run, with
minimum, maximum, and average per application session, as follows
(Test #1 to Test #Tr):

- Throughput (bps) and TTPET for each application session

- Bytes In and Bytes Out for each application session

- TCP Efficiency and Buffer Delay for each application session
******************************************************************

6.3.2. Shaper Capacity Tests
Objective:
The intent of these scalability tests is to verify shaper performance
in a scaled environment with shapers active on multiple queues on
multiple egress physical ports. These tests will benchmark the
maximum number of shapers as specified by the device manufacturer.

The following sections provide the specific test scenarios,
procedures, and reporting formats for each shaper capacity test.
6.3.2.1. Single Queue Shaped, All Physical Ports Active
Test Summary:
The first shaper capacity test involves per-port shaping with all
physical ports active. Traffic from multiple ingress physical ports
is directed to the same egress physical port. This will cause
oversubscription on the egress physical port; a worked example of
this arithmetic follows the reporting items below. Also, the same
amount of traffic is directed to each egress physical port.

The benchmarking methodologies specified in Sections 6.3.1.1
(stateless) and 6.3.1.2 (stateful) (procedure, metrics, and reporting
format) should be applied here. Since this is a capacity test, the
configuration and report results format (see Section 6.3.1) MUST also
include:

Configuration:

- The number of physical ingress ports active during the test

- The classification marking (DSCP, VLAN, etc.) for each physical
  ingress port

- The traffic rate for stateless traffic and the traffic rate/mixture
  for stateful traffic for each physical ingress port

- The shaped egress port shaper parameters (QL, SR, Bc, Be)

Report Results:

- For each active egress port, the achieved throughput rate and
  shaper metrics for each ingress port traffic stream

Example:

- Egress Port 1: throughput and metrics for ingress streams 1-n

- Egress Port n: throughput and metrics for ingress streams 1-n
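The oversubscription noted in the test summary can be sanity-checked
with simple arithmetic; the Python sketch below uses hypothetical
port counts and rates (not values mandated by this test):

   # Hypothetical scenario: 4 GigE ingress ports each offering
   # 400 Mbps toward one egress port shaped to SR = 800 Mbps.
   ingress_ports = 4
   offered_per_port_bps = 400_000_000
   sr_bps = 800_000_000

   offered_bps = ingress_ports * offered_per_port_bps  # 1.6 Gbps
   oversubscription = offered_bps / sr_bps             # 2.0x
   fair_share_bps = sr_bps / ingress_ports             # 200 Mbps/stream

   # With ideal fair scheduling, each ingress stream should be shaped
   # to ~200 Mbps; the measured shaper metrics (LP, SBB, SBI, etc.)
   # quantify any deviation from this expectation.
   print(oversubscription, fair_share_bps)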
6.3.2.2. All Queues Shaped, Single Port Active

Test Summary:
The second shaper capacity test is conducted with all queues actively
shaping on a single physical port. The benchmarking methodology
described in the per-port shaping test (Section 6.3.2.1) serves as
the foundation for this test. Additionally, each of the SP queues on
the egress physical port is configured with a shaper. For the
highest-priority queue, the maximum amount of bandwidth available is
limited by the bandwidth of the shaper. For the lower-priority
queues, the maximum amount of bandwidth available is limited by the
bandwidth of the shaper and traffic in higher-priority queues.

The benchmarking methodologies specified in Sections 6.3.1.1
(stateless) and 6.3.1.2 (stateful) (procedure, metrics, and reporting
format) should be applied here. Since this is a capacity test, the
configuration and report results format (see Section 6.3.1) MUST also
include:

Configuration:

- The number of physical ingress ports active during the test

- The classification marking (DSCP, VLAN, etc.) for each physical
  ingress port

- The traffic rate for stateless traffic and the traffic rate/mixture
  for stateful traffic for each physical ingress port

- For the active egress port, each of the following shaper queue
  parameters: QL, SR, Bc, Be

Report Results:

- For each queue of the active egress port, the achieved throughput
  rate and shaper metrics for each ingress port traffic stream

Example:

- Egress Port High-Priority Queue: throughput and metrics for ingress
  streams 1-n

- Egress Port Lower-Priority Queue: throughput and metrics for
  ingress streams 1-n
6.3.2.3. All Queues Shaped, All Ports Active
Test Summary:
For the third shaper capacity test (which is a combination of the
tests listed in Sections 6.3.2.1 and 6.3.2.2), all queues will be
actively shaping and all physical ports will be active; the full test
matrix is enumerated in the sketch following the example below.

The benchmarking methodologies specified in Sections 6.3.1.1
(stateless) and 6.3.1.2 (stateful) (procedure, metrics, and reporting
format) should be applied here. Since this is a capacity test, the
configuration and report results format (see Section 6.3.1) MUST also
include:

Configuration:

- The number of physical ingress ports active during the test

- The classification marking (DSCP, VLAN, etc.) for each physical
  ingress port

- The traffic rate for stateless traffic and the traffic rate/mixture
  for stateful traffic for each physical ingress port

- For each of the active egress ports: shaper port parameters and
  per-queue parameters (QL, SR, Bc, Be)

Report Results:

- For each queue of each active egress port, the achieved throughput
  rate and shaper metrics for each ingress port traffic stream

Example:

- Egress Port 1, High-Priority Queue: throughput and metrics for
  ingress streams 1-n

- Egress Port 1, Lower-Priority Queue: throughput and metrics for
  ingress streams 1-n

...

- Egress Port n, High-Priority Queue: throughput and metrics for
  ingress streams 1-n

- Egress Port n, Lower-Priority Queue: throughput and metrics for
  ingress streams 1-n
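Because every queue on every port is now in play, enumerating the
combinations up front helps keep the report complete. The Python
sketch below is one way to do that; the port names, queue labels, and
parameter values are hypothetical:

   # Hypothetical enumeration of the capacity-test matrix: every
   # shaped queue on every active egress port.
   egress_ports = ["egress1", "egress2", "egress3"]  # assumed names
   queues = ["high-priority", "lower-priority"]      # SP queue levels
   shaper = {"QL": 512_000, "SR": 50_000_000,
             "Bc": 32_000, "Be": 32_000}             # per-queue params

   test_matrix = [{"port": p, "queue": q, **shaper}
                  for p in egress_ports for q in queues]

   for entry in test_matrix:
       # One report row per (egress port, queue): achieved throughput
       # plus shaper metrics for each ingress stream 1-n.
       print(entry)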
6.4. Concurrent Capacity Load Tests
As mentioned in Section 3 of this document, it is impossible to
specify the various permutations of concurrent traffic management
functions that should be tested in a device for capacity testing.
However, some profiles are listed below that may be useful for
testing multiple configurations of traffic management functions:

- Policers on ingress and queuing on egress

- Policers on ingress and shapers on egress (not intended for a flow
  to be policed and then shaped; these would be two different flows
  tested at the same time)

The test procedures and reporting formats from Sections 6.1, 6.2, and
6.3 may be modified to accommodate the capacity test profile.

7. Security Considerations
Documents of this type do not directly affect the security of the Internet or of corporate networks as long as benchmarking is not performed on devices or systems connected to production networks. Further, benchmarking is performed on a "black box" basis, relying solely on measurements observable external to the DUT/SUT. Special capabilities SHOULD NOT exist in the DUT/SUT specifically for benchmarking purposes. Any implications for network security arising from the DUT/SUT SHOULD be identical in the lab and in production networks.
8. References
8.1. Normative References
[3GPP2-C_R1002-A]
           3rd Generation Partnership Project 2, "cdma2000 Evaluation
           Methodology", Version 1.0, Revision A, May 2009,
           <http://www.3gpp2.org/public_html/specs/
           C.R1002-A_v1.0_Evaluation_Methodology.pdf>.

[RFC1242]  Bradner, S., "Benchmarking Terminology for Network
           Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242,
           July 1991, <http://www.rfc-editor.org/info/rfc1242>.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <http://www.rfc-editor.org/info/rfc2119>.

[RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
           Network Interconnect Devices", RFC 2544,
           DOI 10.17487/RFC2544, March 1999,
           <http://www.rfc-editor.org/info/rfc2544>.

[RFC2680]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
           Packet Loss Metric for IPPM", RFC 2680,
           DOI 10.17487/RFC2680, September 1999,
           <http://www.rfc-editor.org/info/rfc2680>.

[RFC3148]  Mathis, M. and M. Allman, "A Framework for Defining
           Empirical Bulk Transfer Capacity Metrics", RFC 3148,
           DOI 10.17487/RFC3148, July 2001,
           <http://www.rfc-editor.org/info/rfc3148>.

[RFC4115]  Aboul-Magd, O. and S. Rabie, "A Differentiated Service
           Two-Rate, Three-Color Marker with Efficient Handling of
           in-Profile Traffic", RFC 4115, DOI 10.17487/RFC4115,
           July 2005, <http://www.rfc-editor.org/info/rfc4115>.

[RFC4689]  Poretsky, S., Perser, J., Erramilli, S., and S. Khurana,
           "Terminology for Benchmarking Network-layer Traffic
           Control Mechanisms", RFC 4689, DOI 10.17487/RFC4689,
           October 2006, <http://www.rfc-editor.org/info/rfc4689>.

[RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
           S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
           DOI 10.17487/RFC4737, November 2006,
           <http://www.rfc-editor.org/info/rfc4737>.

[RFC5481]  Morton, A. and B. Claise, "Packet Delay Variation
           Applicability Statement", RFC 5481, DOI 10.17487/RFC5481,
           March 2009, <http://www.rfc-editor.org/info/rfc5481>.

[RFC6349]  Constantine, B., Forget, G., Geib, R., and R. Schrage,
           "Framework for TCP Throughput Testing", RFC 6349,
           DOI 10.17487/RFC6349, August 2011,
           <http://www.rfc-editor.org/info/rfc6349>.

[RFC6703]  Morton, A., Ramachandran, G., and G. Maguluri, "Reporting
           IP Network Performance Metrics: Different Points of View",
           RFC 6703, DOI 10.17487/RFC6703, August 2012,
           <http://www.rfc-editor.org/info/rfc6703>.

[SPECweb2009]
           Standard Performance Evaluation Corporation (SPEC),
           "SPECweb2009 Release 1.20 Benchmark Design Document",
           April 2010, <https://www.spec.org/web2009/docs/design/
           SPECweb2009_Design.html>.

8.2. Informative References

[CA-Benchmark]
           Hamilton, M. and S. Banks, "Benchmarking Methodology for
           Content-Aware Network Devices", Work in Progress,
           draft-ietf-bmwg-ca-bench-meth-04, February 2013.

[CoDel]    Nichols, K., Jacobson, V., McGregor, A., and J. Iyengar,
           "Controlled Delay Active Queue Management", Work in
           Progress, draft-ietf-aqm-codel-01, April 2015.

[MEF-10.3] Metro Ethernet Forum, "Ethernet Services Attributes
           Phase 3", MEF 10.3, October 2013,
           <https://www.mef.net/Assets/Technical_Specifications/
           PDF/MEF_10.3.pdf>.

[MEF-12.2] Metro Ethernet Forum, "Carrier Ethernet Network
           Architecture Framework -- Part 2: Ethernet Services
           Layer", MEF 12.2, May 2014,
           <https://www.mef.net/Assets/Technical_Specifications/
           PDF/MEF_12.2.pdf>.

[MEF-14]   Metro Ethernet Forum, "Abstract Test Suite for Traffic
           Management Phase 1", MEF 14, November 2005,
           <https://www.mef.net/Assets/Technical_Specifications/
           PDF/MEF_14.pdf>.

[MEF-19]   Metro Ethernet Forum, "Abstract Test Suite for UNI Type
           1", MEF 19, April 2007, <https://www.mef.net/Assets/
           Technical_Specifications/PDF/MEF_19.pdf>.

[MEF-26.1] Metro Ethernet Forum, "External Network Network Interface
           (ENNI) - Phase 2", MEF 26.1, January 2012,
           <http://www.mef.net/Assets/Technical_Specifications/
           PDF/MEF_26.1.pdf>.

[MEF-37]   Metro Ethernet Forum, "Abstract Test Suite for ENNI",
           MEF 37, January 2012, <https://www.mef.net/Assets/
           Technical_Specifications/PDF/MEF_37.pdf>.

[PIE]      Pan, R., Natarajan, P., Baker, F., White, G., VerSteeg,
           B., Prabhu, M., Piglione, C., and V. Subramanian, "PIE: A
           Lightweight Control Scheme To Address the Bufferbloat
           Problem", Work in Progress, draft-ietf-aqm-pie-02,
           August 2015.

[RFC2697]  Heinanen, J. and R. Guerin, "A Single Rate Three Color
           Marker", RFC 2697, DOI 10.17487/RFC2697, September 1999,
           <http://www.rfc-editor.org/info/rfc2697>.

[RFC2698]  Heinanen, J. and R. Guerin, "A Two Rate Three Color
           Marker", RFC 2698, DOI 10.17487/RFC2698, September 1999,
           <http://www.rfc-editor.org/info/rfc2698>.

[RFC7567]  Baker, F., Ed., and G. Fairhurst, Ed., "IETF
           Recommendations Regarding Active Queue Management",
           BCP 197, RFC 7567, DOI 10.17487/RFC7567, July 2015,
           <http://www.rfc-editor.org/info/rfc7567>.
Appendix A. Open Source Tools for Traffic Management Testing
This framework specifies that stateless and stateful behaviors SHOULD
both be tested. Some open source tools that can be used to
accomplish many of the tests proposed in this framework are iperf,
netperf (with netperf-wrapper), the "uperf" tool, Tmix,
TCP-incast-generator, and D-ITG (Distributed Internet Traffic
Generator).

iperf can generate UDP-based or TCP-based traffic; a client and
server must both run the iperf software in the same traffic mode.
The server is set up to listen, and then the test traffic is
controlled from the client. Both unidirectional and bidirectional
concurrent testing are supported. The UDP mode can be used for
stateless traffic testing. The target bandwidth, packet size, UDP
port, and test duration can be controlled. A report of bytes
transmitted, packets lost, and delay variation is provided by the
iperf receiver.

iperf (TCP mode), TCP-incast-generator, and D-ITG can be used for
stateful traffic testing to test bulk transfer traffic. The TCP
window size (which is actually the SSB), the number of connections,
the packet size, the TCP port, and the test duration can be
controlled. A report of bytes transmitted and throughput achieved is
provided by the iperf sender, while TCP-incast-generator and D-ITG
provide even more statistics.

netperf is a software application that provides network bandwidth
testing between two hosts on a network. It supports UNIX domain
sockets, TCP, SCTP, and UDP via BSD Sockets. netperf provides a
number of predefined tests, e.g., to measure bulk (unidirectional)
data transfer or request/response performance
(http://en.wikipedia.org/wiki/Netperf). netperf-wrapper is a Python
script that runs multiple simultaneous netperf instances and
aggregates the results.

uperf uses a description (or model) of an application mixture and
generates the load according to the model descriptor. uperf is more
flexible than netperf in its ability to generate request/response
application behavior within a single TCP connection. The application
model descriptor can be based on empirical data, but at the time of
this writing, the import of packet captures is not directly
supported.
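As one example of scripting these tools, the sketch below drives a
stateless UDP run by invoking iperf from Python. The flags shown are
iperf2's (iperf3 options differ slightly), and the host name is a
placeholder:

   import subprocess

   # Prerequisite: start the UDP listener on the receiving host:
   #   iperf -s -u -p 5001
   # Then drive the stateless test from the client side.
   result = subprocess.run(
       ["iperf", "-c", "server.example",  # placeholder server host
        "-u",                             # UDP mode (stateless traffic)
        "-b", "50M",                      # target bandwidth, e.g., SR
        "-l", "1400",                     # UDP payload size, bytes
        "-t", "30",                       # Td, default 30-second duration
        "-p", "5001"],                    # UDP port
       capture_output=True, text=True)
   print(result.stdout)   # report includes bytes transferred, packets
                          # lost, and delay variation (jitter)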
Tmix is another application traffic emulation tool. It uses packet
captures directly to create the traffic profile. The packet trace is
"reverse compiled" into a source-level characterization, called a
"connection vector", of each TCP connection present in the trace.
While most widely used in ns2 simulation environments, Tmix also runs
on Linux hosts.

The traffic generation capabilities of these open source tools
facilitate the emulation of the TCP test patterns discussed in
Appendix B.

Appendix B. Stateful TCP Test Patterns
This framework recommends at a minimum the following TCP test
patterns, since they are representative of real-world application
traffic (Section 5.2.1 describes some methods to derive other
application-based TCP test patterns).

- Bulk Transfer: Generate concurrent TCP connections whose aggregate
  number of in-flight data bytes would fill the BDP. Guidelines from
  [RFC6349] are used to create this TCP traffic pattern.

- Micro Burst: Generate precise burst patterns within a single TCP
  connection or multiple TCP connections. The idea is for TCP to
  establish equilibrium and then burst application bytes at defined
  sizes. The test tool must allow the burst size and burst time
  interval to be configurable.

- Web Site Patterns: The HTTP traffic model shown in Table 4.1.3-1 of
  [3GPP2-C_R1002-A] demonstrates a way to develop these TCP test
  patterns. In summary, the HTTP traffic model consists of the
  following parameters:

  - Main object size (Sm)
  - Embedded object size (Se)
  - Number of embedded objects per page (Nd)
  - Client processing time (Tcp)
  - Server processing time (Tsp)
Web site test patterns are illustrated with the following examples:

- Simple web site: Mimic the request/response and object download
  behavior of a basic web site (small company).

- Complex web site: Mimic the request/response and object download
  behavior of a complex web site (eCommerce site).

Referencing the HTTP traffic model parameters, the following table
was derived (by analysis and experimentation) for simple web site and
complex web site TCP test patterns:

                         Simple          Complex
 Parameter               Web Site        Web Site
 -----------------------------------------------------
 Main object             Ave. = 10KB     Ave. = 300KB
 size (Sm)               Min. = 100B     Min. = 50KB
                         Max. = 500KB    Max. = 2MB

 Embedded object         Ave. = 7KB      Ave. = 10KB
 size (Se)               Min. = 50B      Min. = 100B
                         Max. = 350KB    Max. = 1MB

 Number of embedded      Ave. = 5        Ave. = 25
 objects per page (Nd)   Min. = 2        Min. = 10
                         Max. = 10       Max. = 50

 Client processing       Ave. = 3s       Ave. = 10s
 time (Tcp)*             Min. = 1s       Min. = 3s
                         Max. = 10s      Max. = 30s

 Server processing       Ave. = 5s       Ave. = 8s
 time (Tsp)*             Min. = 1s       Min. = 2s
                         Max. = 15s      Max. = 30s

* The client and server processing time is distributed across the
  transmission/receipt of all of the main and embedded objects.

To be clear, the parameters in this table are reasonable guidelines
for the TCP test pattern traffic generation. The test tool can use
fixed parameters for simpler tests and mathematical distributions for
more complex tests. However, the test pattern must be repeatable to
ensure that the benchmark results can be reliably compared.
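When these parameters are fed to a test tool, a structured encoding
helps keep runs repeatable. The following Python sketch is one
hypothetical schema (values copied from the table above; KB is taken
as 10^3 bytes here):

   # Hypothetical encoding of the "simple web site" TCP test pattern;
   # values are the Ave./Min./Max. guidelines from the table above.
   SIMPLE_WEB_SITE = {
       "Sm":  {"ave": 10_000, "min": 100, "max": 500_000},  # bytes
       "Se":  {"ave": 7_000,  "min": 50,  "max": 350_000},  # bytes
       "Nd":  {"ave": 5,      "min": 2,   "max": 10},       # objects/page
       "Tcp": {"ave": 3.0,    "min": 1.0, "max": 10.0},     # seconds
       "Tsp": {"ave": 5.0,    "min": 1.0, "max": 15.0},     # seconds
   }
   # A fixed-parameter run uses the "ave" values directly; a
   # distribution-based run draws each value between "min" and "max"
   # with a fixed random seed so the pattern stays repeatable across
   # benchmark comparisons.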
- Interactive Patterns: While web site patterns are interactive to a
  degree, they mainly emulate the downloading of web sites of varying
  complexity. Interactive patterns are more chatty in nature, since
  there is a lot of user interaction with the servers. Examples
  include business applications such as PeopleSoft and Oracle, and
  consumer applications such as Facebook and IM. For the interactive
  patterns, the packet capture technique was used to characterize
  some business applications and also the email application.

  In summary, an interactive application can be described by the
  following parameters:

  - Client message size (Scm)
  - Number of client messages (Nc)
  - Server response size (Srs)
  - Number of server messages (Ns)
  - Client processing time (Tcp)
  - Server processing time (Tsp)
  - File size upload (Su)*
  - File size download (Sd)*

  * The file size parameters account for attachments uploaded or
    downloaded and may not be present in all interactive
    applications.
Again using packet capture as a means to characterize, the following
table reflects the guidelines for simple business applications,
complex business applications, eCommerce, and email Send/Receive:

                    Simple        Complex
                    Business      Business
 Parameter          Application   Application  eCommerce*   Email
 --------------------------------------------------------------------
 Client message     Ave. = 450B   Ave. = 2KB   Ave. = 1KB   Ave. = 200B
 size (Scm)         Min. = 100B   Min. = 500B  Min. = 100B  Min. = 100B
                    Max. = 1.5KB  Max. = 100KB Max. = 50KB  Max. = 1KB

 Number of client   Ave. = 10     Ave. = 100   Ave. = 20    Ave. = 10
 messages (Nc)      Min. = 5      Min. = 50    Min. = 10    Min. = 5
                    Max. = 25     Max. = 250   Max. = 100   Max. = 25

 Client processing  Ave. = 10s    Ave. = 30s   Ave. = 15s   Ave. = 5s
 time (Tcp)**       Min. = 3s     Min. = 3s    Min. = 5s    Min. = 3s
                    Max. = 30s    Max. = 60s   Max. = 120s  Max. = 45s

 Server response    Ave. = 2KB    Ave. = 5KB   Ave. = 8KB   Ave. = 200B
 size (Srs)         Min. = 500B   Min. = 1KB   Min. = 100B  Min. = 150B
                    Max. = 100KB  Max. = 1MB   Max. = 50KB  Max. = 750B

 Number of server   Ave. = 50     Ave. = 200   Ave. = 100   Ave. = 15
 messages (Ns)      Min. = 10     Min. = 25    Min. = 15    Min. = 5
                    Max. = 200    Max. = 1000  Max. = 500   Max. = 40

 Server processing  Ave. = 0.5s   Ave. = 1s    Ave. = 2s    Ave. = 4s
 time (Tsp)**       Min. = 0.1s   Min. = 0.5s  Min. = 1s    Min. = 0.5s
                    Max. = 5s     Max. = 20s   Max. = 10s   Max. = 15s

 File size          Ave. = 50KB   Ave. = 100KB Ave. = N/A   Ave. = 100KB
 upload (Su)        Min. = 2KB    Min. = 10KB  Min. = N/A   Min. = 20KB
                    Max. = 200KB  Max. = 2MB   Max. = N/A   Max. = 10MB

 File size          Ave. = 50KB   Ave. = 100KB Ave. = N/A   Ave. = 100KB
 download (Sd)      Min. = 2KB    Min. = 10KB  Min. = N/A   Min. = 20KB
                    Max. = 200KB  Max. = 2MB   Max. = N/A   Max. = 10MB

* eCommerce used a combination of packet capture techniques and
  reference traffic flows as described in [SPECweb2009].

** The client and server processing time is distributed across the
   transmission/receipt of all of the messages. The client
   processing time consists mainly of the delay between user
   interactions (not machine processing).
Again, the parameters in this table are the guidelines for the TCP
test pattern traffic generation. The test tool can use fixed
parameters for simpler tests and mathematical distributions for more
complex tests. However, the test pattern must be repeatable to
ensure that the benchmark results can be reliably compared.

- SMB/CIFS file copy: Mimic a network file copy, both read and write.
  As opposed to FTP, which is a bulk transfer and is only
  flow-controlled via TCP, SMB/CIFS divides a file into application
  blocks and utilizes application-level handshaking in addition to
  TCP flow control.

  In summary, an SMB/CIFS file copy can be described by the following
  parameters:

  - Client message size (Scm)
  - Number of client messages (Nc)
  - Server response size (Srs)
  - Number of server messages (Ns)
  - Client processing time (Tcp)
  - Server processing time (Tsp)
  - Block size (Sb)

  The client and server messages are SMB control messages. The block
  size is the data portion of the file transfer.
Again using packet capture as a means to characterize, the following
table reflects the guidelines for SMB/CIFS file copy:

                     SMB/CIFS
 Parameter           File Copy
 --------------------------------
 Client message      Ave. = 450B
 size (Scm)          Min. = 100B
                     Max. = 1.5KB

 Number of client    Ave. = 10
 messages (Nc)       Min. = 5
                     Max. = 25

 Client processing   Ave. = 1ms
 time (Tcp)          Min. = 0.5ms
                     Max. = 2ms

 Server response     Ave. = 2KB
 size (Srs)          Min. = 500B
                     Max. = 100KB

 Number of server    Ave. = 10
 messages (Ns)       Min. = 10
                     Max. = 200

 Server processing   Ave. = 1ms
 time (Tsp)          Min. = 0.5ms
                     Max. = 2ms

 Block               Ave. = N/A
 size (Sb)*          Min. = 16KB
                     Max. = 128KB

* Depending upon the tested file size, the block size will be
  transferred "n" number of times to complete the example. An
  example would be a 10 MB file test and a 64 KB block size. In this
  case, 160 blocks would be transferred after the control channel is
  opened between the client and server.
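The block count in the footnote follows directly from the file and
block sizes; a short Python check (the helper name is illustrative):

   import math

   def smb_blocks(file_bytes, block_bytes):
       # Number of Sb-sized application blocks needed for the file.
       return math.ceil(file_bytes / block_bytes)

   print(smb_blocks(10 * 1024 * 1024, 64 * 1024))  # 160 blocks for a
                                                   # 10 MB file at
                                                   # Sb = 64 KB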
Acknowledgments
We would like to thank Al Morton for his continuous review and
invaluable input to this document. We would also like to thank Scott
Bradner for providing guidance early in this document's conception,
in the area of the benchmarking scope of traffic management
functions. Additionally, we would like to thank Tim Copley for his
original input, as well as David Taht, Gory Erg, and Toke
Hoiland-Jorgensen for their review and input for the AQM group.

Also, for the formal reviews of this document, we would like to thank
Gilles Forget, Vijay Gurbani, Reinhard Schrage, and Bhuvaneswaran
Vengainathan.

Authors' Addresses
Barry Constantine
JDSU, Test and Measurement Division
Germantown, MD 20876-7100
United States

Phone: +1-240-404-2227
Email: barry.constantine@jdsu.com

Ram (Ramki) Krishnan
Dell Inc.
Santa Clara, CA 95054
United States

Phone: +1-408-406-7890
Email: ramkri123@gmail.com