IETF 81 Proceedings
In addition to this official charter maintained by the IETF Secretariat,
additional information about this working group is available on the Web
at tools.ietf.org/wg/bmwg
Chair(s):
Operations and Management Area Director(s):
Operations and Management Area Advisor:
The Benchmarking Methodology Working Group (BMWG) will continue to
produce a series of recommendations concerning the key performance
characteristics of internetworking technologies, or benchmarks for
network devices, systems, and services. Taking a view of networking
divided into planes, the scope of work includes benchmarks for the
management, control, and forwarding planes.
Each recommendation will describe the class of equipment, system, or
service being addressed; discuss the performance characteristics that
are pertinent to that class; clearly identify a set of metrics that aid
in the description of those characteristics; specify the methodologies
required to collect said metrics; and lastly, present the requirements
for the common, unambiguous reporting of benchmarking results.
The set of relevant benchmarks will be developed with input from the
community of users (e.g., network operators and testing organizations)
and from those affected by the benchmarks when they are published
(networking and test equipment manufacturers). When possible, the
benchmarks and other terminology will be developed jointly with
organizations that are willing to share their expertise. Joint review
requirements for a specific work area will be included in the detailed
description of the task, as listed below.
To better distinguish the BMWG from other measurement initiatives in the
IETF, the scope of the BMWG is limited to the characterization of
implementations of various internetworking technologies
using controlled stimuli in a laboratory environment. Said differently,
the BMWG does not attempt to produce benchmarks for live, operational
networks. Moreover, the benchmarks produced by this WG shall strive to
be vendor independent or otherwise have universal applicability to a
given technology class.
Because the demands of a particular technology may vary from deployment
to deployment, a specific non-goal of the Working Group is to define
acceptance criteria or performance requirements.
An ongoing task is to provide a forum for discussion regarding the
advancement of measurements designed to provide insight on the
capabilities and operation of inter-networking technology
implementations.
The BMWG will communicate with the operations community through
organizations such as NANOG, RIPE, and APRICOT.
In addition to its current work plan, the BMWG is explicitly tasked to
develop benchmarks and methodologies for the following technologies:
* BGP Control-plane Convergence Methodology (Terminology is complete):
With relevant performance characteristics identified, BMWG will prepare
a Benchmarking Methodology Document with review from the Routing Area
(e.g., the IDR working group and/or the RTG-DIR). The Benchmarking
Methodology will be Last-Called in all the groups that previously
provided input, including another round of network operator input during
the last call.
* SIP Networking Devices: Develop new terminology and methods to
characterize the key performance aspects of network devices using
SIP, including signaling-plane scale and service rates under load
conditions on both the signaling and media planes (an illustrative
measurement sketch appears after this list of work items). This work
will be harmonized with related SIP performance metric definitions
prepared by the PMOL working group.
* Flow Export and Collection: Develop terminology and methods to
characterize network devices' flow monitoring, export, and collection.
The goal is a methodology to assess the maximum IP flow rate that a
network device can sustain without losing any IP flow information or
compromising the accuracy of information exported on the IP flows,
and to assess the forwarding plane performance (if the forwarding
function is present) in the presence of Flow Monitoring. A sketch of
the underlying record-comparison check appears after this list of
work items.
* Data Center Bridging Devices:
Some key concepts from BMWG's past work are not meaningful when testing
switches that implement new IEEE specifications in the area of data
center bridging. For example, throughput as defined in RFC 1242 cannot
be measured when testing devices that implement three new IEEE
specifications: priority-based flow control (802.1Qbb); priority groups
(802.1Qaz); and congestion notification (802.1Qau).
Since devices that implement these new congestion-management
specifications should never drop frames, and since the metric of
throughput distinguishes between non-zero and zero drop rates, no
throughput measurement is possible using the existing methodology.
The current emphasis is on the Priority Flow Control aspects of
Data Center Bridging, and the work will include an investigation
into whether TRILL RBridges require any specific treatment in the
methodology; a sketch of the affected throughput search appears after
this list of work items. This work will update RFC 2544, and the WG
will exchange periodic liaisons with the IEEE 802.1 DCB Task Group,
especially at WG Last Call.
* Content Aware Devices:
New classes of network devices that operate above the IP layer of the
network stack require a new methodology to perform adequate
benchmarking. Existing BMWG RFCs (RFC 2647 and RFC 3511) provide useful
measurement and performance statistics, though they may not reflect the
actual performance of the device when deployed in production networks.
Operating within the limitations of the charter, namely blackbox
characterization in laboratory environments, the BMWG will develop a
methodology that more closely relates the performance of these devices
to performance in an operational setting. In order to confirm or
identify key performance characteristics, BMWG will solicit input from
operations groups such as NANOG, RIPE, and APRICOT.
* LDP Dataplane Convergence:
In order to identify key LDP convergence performance characteristics,
BMWG will solicit input from operations groups such as NANOG, RIPE, and
APRICOT. When relevant performance characteristics have been identified,
BMWG will jointly prepare a Benchmarking Terminology Document with the
Routing Area (e.g., the MPLS working group and/or the RTG-DIR), which
would define metrics relevant to LDP convergence. The Benchmark
definition document would be Last-Called in all the working groups that
produced it, with operator input solicited during the last call. The work
will then continue in BMWG to define the test methodology, with input
and review from the aforementioned parties.
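As a rough, non-normative illustration of the signaling-plane
measurements the SIP work item has in mind, the Python sketch below
offers new sessions at a fixed attempt rate and reports the
establishment rate. The helper attempt_session is a hypothetical
stand-in for a real SIP test harness, and none of the names are drawn
from BMWG or PMOL documents.

   import time

   def attempt_session(dut_address):
       """Hypothetical stand-in for a real SIP test harness: send an
       INVITE toward the device under test and return True if the
       session is established before a timeout. A full methodology
       would use a real SIP stack and drive registration and media
       (RTP) load in parallel with the signaling."""
       raise NotImplementedError

   def session_establishment_rate(dut_address, offered_rate, duration_s):
       """Offer new SIP sessions at a fixed attempt rate for a fixed
       duration and report how many were established. Repeating the
       run at increasing offered rates, with and without background
       media load, approximates the 'service rates under load' idea
       in the work item."""
       interval = 1.0 / offered_rate
       attempted = established = 0
       deadline = time.monotonic() + duration_s
       while time.monotonic() < deadline:
           attempted += 1
           if attempt_session(dut_address):
               established += 1
           time.sleep(interval)   # crude pacing of the attempt rate
       return attempted, established, established / duration_s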
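For the flow export and collection item, the essential check is whether
every offered IP flow appears, with accurate counters, in the records
received at the collector. A minimal sketch of that comparison and of a
simple rate sweep follows; run_trial, the flow keys, and the
counter-based accuracy test are assumptions made for illustration, not
part of any BMWG methodology.

   def flow_record_check(offered_flows, collected_records):
       """Compare flows offered to the device under test with the flow
       records received at the collector. Both arguments map a flow key
       (e.g. the 5-tuple) to a packet count: the true count on the
       generator side and the count reported in exported records.
       Returns the number of lost flows and of inaccurate records, the
       two failure conditions named in the work item."""
       lost = sum(1 for k in offered_flows if k not in collected_records)
       inaccurate = sum(1 for k, pkts in offered_flows.items()
                        if k in collected_records
                        and collected_records[k] != pkts)
       return lost, inaccurate

   def max_sustainable_flow_rate(candidate_rates, run_trial):
       """Sweep candidate flow rates (new flows per second) from lowest
       to highest and report the highest rate at which no flow
       information was lost or inaccurate. run_trial is a hypothetical
       callable that drives the generator, the device under test, and
       the collector at one rate and returns (lost, inaccurate)."""
       best = None
       for rate in sorted(candidate_rates):
           lost, inaccurate = run_trial(rate)
           if lost == 0 and inaccurate == 0:
               best = rate
       return best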
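The data center bridging item rests on a specific observation: the
RFC 2544 throughput procedure searches for the highest offered rate
with zero frame loss, and a device that asserts priority-based flow
control instead of dropping frames never fails a trial, so the search
no longer yields a meaningful result. The sketch below shows that
classic binary search under this assumption; run_trial is a
hypothetical harness hook, not an interface defined by BMWG.

   def rfc2544_style_throughput(line_rate, run_trial, resolution=0.001):
       """Classic binary search used for the RFC 2544 throughput
       benchmark: find the highest offered rate (here a fraction of
       line_rate) at which the device under test forwards every frame,
       i.e. a trial shows zero loss.

       run_trial is a hypothetical harness hook returning the number
       of frames lost at a given offered rate. A DCB device that
       asserts priority-based flow control (802.1Qbb) pauses the
       sender rather than dropping frames, so run_trial keeps
       returning 0 and the search simply converges on line_rate,
       which is why the charter notes that the existing throughput
       metric is not meaningful for this device class."""
       low, high = 0.0, line_rate
       while high - low > resolution * line_rate:
           rate = (low + high) / 2.0
           if run_trial(rate) == 0:
               low = rate    # no loss: throughput is at least this rate
           else:
               high = rate   # loss seen: throughput is below this rate
       return low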
Goals and Milestones:
Done | Expand the current Ethernet switch benchmarking methodology draft to define the metrics and methodologies particular to the general class of connectionless, LAN switches.
Done | Edit the LAN switch draft to reflect the input from BMWG. Issue a new version of document for comment. If appropriate, ascertain consensus on whether to recommend the draft for consideration as an RFC.
Done | Take controversial components of multicast draft to mailing list for discussion. Incorporate changes to draft and reissue appropriately.
Done | Submit workplan for initiating work on Benchmarking Methodology for LAN Switching Devices.
Done | Submit workplan for continuing work on the Terminology for Cell/Call Benchmarking draft.
Done | Submit initial draft of Benchmarking Methodology for LAN Switches.
Done | Submit Terminology for IP Multicast Benchmarking draft for AD Review.
Done | Submit Benchmarking Terminology for Firewall Performance for AD review.
Done | Progress ATM benchmarking terminology draft to AD review.
Done | Submit Benchmarking Methodology for LAN Switching Devices draft for AD review.
Done | Submit first draft of Firewall Benchmarking Methodology.
Done | First Draft of Terminology for FIB related Router Performance Benchmarking.
Done | First Draft of Router Benchmarking Framework.
Done | Progress Frame Relay benchmarking terminology draft to AD review.
Done | Methodology for ATM Benchmarking for AD review.
Done | Terminology for ATM ABR Benchmarking for AD review.
Done | Terminology for FIB related Router Performance Benchmarking to AD review.
Done | Firewall Benchmarking Methodology to AD Review
Done | First Draft of Methodology for FIB related Router Performance Benchmarking.
Done | First draft Net Traffic Control Benchmarking Methodology.
Done | Methodology for IP Multicast Benchmarking to AD Review.
Done | Resource Reservation Benchmarking Terminology to AD Review
Done | First I-D on IPsec Device Benchmarking Terminology
Done | EGP Convergence Benchmarking Terminology to AD Review
Done | Resource Reservation Benchmarking Methodology to AD Review
Done | Net Traffic Control Benchmarking Terminology to AD Review
Done | IGP/Data-Plane Terminology I-D to AD Review
Done | IGP/Data-Plane Methodology and Considerations I-Ds to AD Review
Done | Hash and Stuffing I-D to AD Review
Done | IPv6 Benchmarking Methodology to AD Review
Done | IPsec Device Benchmarking Terminology to IESG Review
Done | IPsec Device Benchmarking Methodology to IESG Review
Done | Terminology For Protection Benchmarking to AD Review
Done | Methodology for MPLS Forwarding to AD Review
Done | Networking Device Reset Benchmark (Updates RFC 2544) to IESG Review
Dec 2010 | Methodology For Protection Benchmarking to IESG Review
Feb 2011 | Methodology for Flow Export and Collection Benchmarking to IESG Review
Jun 2011 | Methodology for Data Center Bridging Benchmarking to IESG Review
Jun 2011 | Terminology for SIP Device Benchmarking to IESG Review
Jun 2011 | Methodology for SIP Device Benchmarking to IESG Review
Jul 2011 | Basic BGP Convergence Benchmarking Methodology to IESG Review
Dec 2011 | Terminology for Content Aware Device Benchmarking to IESG Review
Dec 2011 | Methodology for Content Aware Device Benchmarking to IESG Review
Dec 2011 | Terminology for LDP Convergence Benchmarking to IESG Review
Dec 2011 | Methodology for LDP Convergence Benchmarking to IESG Review