Internet Engineering Task Force Jan Novak Internet-Draft Cisco Systems Intended status: Informational March 1, 2009 Expires: September 1, 2009 IP Flow Information Accounting and Export Benchmarking Methodology draft-novak-bmwg-ipflow-meth-02.txt Status of this Memo This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet- Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt. The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html. This Internet-Draft will expire on September 1, 2009. Copyright Notice Copyright (c) 2009 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents in effect on the date of publication of this document (http://trustee.ietf.org/license-info). Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Abstract This document provides methodology and framework for quantifying performance of selective monitoring of IP flows on a network device and export of this information to a collector as specified in the IPFIX documents [RFC5101]. Novak Expires September 1, 2009 [Page 1] Internet-Draft Flow Monitoring Benchmarking March 2009 Table of Contents 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 2. Requirements Language . . . . . . . . . . . . . . . . . . . . 3 3. Scope of the Document . . . . . . . . . . . . . . . . . . . . 3 4. Flow Monitoring Documents . . . . . . . . . . . . . . . . . . 4 4.1 IPFIX Documents Overview. . . . . . . . . . . . . . . . . 4 4.2 PSAMP Document Overview . . . . . . . . . . . . . . . . . 4 5. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 5 5.1 Existing Terminology . . . . . . . . . . . . . . . . . . 5 5.2 New Terminology. . . . . . . . . . . . . . . . . . . . . 5 6. Test Set Up . . . . . . . . . . . . . . . . . . . . . . . . . 7 6.1 Test Topology. . . . . . . . . . . . . . . . . . . . . . 7 6.2 Base DUT Set Up. . . . . . . . . . . . . . . . . . . . . 8 6.3 Flow Monitoring Configuration. . . . . . . . . . . . . . 8 6.4 Collector. . . . . . . . . . . . . . . . . . . . . . . . 9 6.5 Packet Sampling. . . . . . . . . . . . . . . . . . . . . 9 6.6 Frame Formats. . . . . . . . . . . . . . . . . . . . . . 9 6.7 Frame Sizes. . . . . . . . . . . . . . . . . . . . . . .10 6.8 Flow Records . . . . . . . . . . . . . . . . . . . . . .10 7. Flow Monitoring Throughput Measurement Methodology. . . . . .10 7.1 Normal Cache Mode . . . . . . . . . . . . . . . . . . . .11 7.2 Cache Overflow Mode . . . . . . . . . . . . . . . . . . .12 8. RFC2544 Measurements. . . . . . . . . . . . . . . . . . . . .13 8.1 Flow Monitoring Configuration . . . . . . . . . . . . . .13 8.2 Single Traffic Component. . . . . . . . . . . . . . . . .14 8.3 Two Traffic Components. . . . . . . . . . . . . . . . . .14 9. Flow Monitoring Accuracy. . . . . . . . . . . . . . . . . . .15 10. 
Evaluating Flow Monitoring Applicability . . . . . . . . . . .16
   11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . .16
   12. IANA Considerations. . . . . . . . . . . . . . . . . . . . . .16
   13. Security Considerations . . . . . . . . . . . . . . . . . . . .16
   14. References . . . . . . . . . . . . . . . . . . . . . . . . . .17
       14.1 Normative References . . . . . . . . . . . . . . . . . .17
       14.2 Informative References . . . . . . . . . . . . . . . . .17

1. Introduction

Monitoring of IP flows (Flow Monitoring) on network devices is an application with numerous uses in both the service provider and enterprise segments, as detailed in the IPFIX requirements [RFC3917], and it is widely deployed in both areas.

The goal of this document is to address measurement of Flow Monitoring performance in a way that is comparable amongst various implementations, and to provide a methodology for [RFC2544] (Benchmarking Methodology for Network Interconnect Devices) measurements in the presence of Flow Monitoring on the network device.

This document identifies the rate at which flows are created and expired in the device's memory as the main parameter with a major performance impact on network devices, and it proposes a methodology to quantify this impact in a black-box test manner. The impact of Flow Monitoring on the network device's central processor unit utilisation is out of the scope of this document, in order to avoid white-box testing, even though it represents an interesting performance metric.

2. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14, RFC 2119 [RFC2119]. RFC 2119 defines the use of these key words to help make the intent of standards track documents as clear as possible. While this document uses these keywords, this document is not a standards track document.

3. Scope of the Document

The purpose of this draft is to define a methodology for the measurement of Flow Monitoring performance itself. Since Flow Monitoring will in most cases run on network devices that forward packets, a methodology for RFC2544 measurements in the presence of Flow Monitoring is also proposed here.

Flow Monitoring is defined in the IPFIX specification [RFC5101] and related documents (see Section 4 for more details). [RFC2544], [RFC5180] and [MPLS] specify benchmarking of network devices forwarding IPv4, IPv6 and MPLS traffic, respectively. This document also proposes a methodology for performing this kind of benchmarking in the presence of Flow Monitoring, not necessarily restricted to the IPv4, IPv6 and MPLS traffic types. The methodology stays the same for any traffic; the only restriction is what the actual Flow Monitoring implementation supports.

4. Flow Monitoring Documents

4.1 IPFIX Documents Overview

The IPFIX Protocol [RFC5101] provides network administrators with access to IP Flow information. The architecture for the export of measured IP Flow information out of an IPFIX Exporting Process to a Collecting Process is defined in the IPFIX Architecture [IPFIX-ARCH], per the requirements defined in RFC 3917 [RFC3917].
The IPFIX Architecture [IPFIX-ARCH] specifies how IPFIX Data Records and Templates are carried via a congestion-aware transport protocol from IPFIX Exporting Processes to IPFIX Collecting Processes. IPFIX has a formal description of IPFIX Information Elements, their name, type and additional semantic information, as specified in the IPFIX Information Model [RFC5102]. Finally, the IPFIX Applicability Statement [IPFIX-AS] describes what types of applications can use the IPFIX protocol and how they can use the information provided. It furthermore shows how the IPFIX framework relates to other architectures and frameworks.

4.2 PSAMP Documents Overview

The document "A Framework for Packet Selection and Reporting" [PSAMP-FMWK] describes the PSAMP framework for network elements to select subsets of packets by statistical and other methods, and to export a stream of reports on the selected packets to a collector. The set of packet selection techniques (sampling, filtering, and hashing) supported by PSAMP is described in "Sampling and Filtering Techniques for IP Packet Selection" [PSAMP-TECH]. The PSAMP protocol [PSAMP-PROTO] specifies the export of packet information from a PSAMP Exporting Process to a PSAMP Collecting Process. Like IPFIX, PSAMP has a formal description of its information elements, their name, type and additional semantic information. The PSAMP information model is defined in [PSAMP-INFO]. Finally, [PSAMP-MIB] describes the PSAMP Management Information Base.

5. Terminology

5.1 Existing Terminology

   Device Under Test (DUT)   [RFC2285, section 3.1.1]
   Flow                      [RFC5101, section 2]
   Flow Key                  [RFC5101, section 2]
   Flow Record               [RFC5101, section 2]
   Observation Point         [RFC5101, section 2]
   Exporter                  [RFC5101, section 2]
   Collector                 [RFC5101, section 2]
   Control Information       [RFC5101, section 2]
   Data Stream               [RFC5101, section 2]
   Flow Expiration           [IPFIX-ARCH, section 5.1.1]
   Flow Export               [IPFIX-ARCH, section 5.1.2]
   Throughput                [RFC1242, section 3.17]

5.2 New Terminology

5.2.1 Cache

Definition: The memory area held and dedicated by the DUT to store Flow Record information.

5.2.2 Cache Size

Definition: The size of the Cache in terms of how many Flow Record entries it can hold.

Discussion: This term is typically represented as a configurable option in the particular Flow Monitoring implementation. It MUST at least be known in order to define the test circumstances properly. Its highest value will depend on the memory available in the network device.

Measurement units: Number of entries

5.2.3 Active Timeout

Definition: The time interval from when the first packet of a particular Flow was seen until the Flow is expired, while packets belonging to the Flow are still arriving at the DUT.

Discussion: This term is typically represented as a configurable option in the particular Flow Monitoring implementation. See section 5.1.1 of [IPFIX-ARCH] for a more detailed discussion.

Measurement units: Seconds

5.2.4 Inactive Timeout

Definition: The time interval after which a Flow is expired from the Cache if no more packets belonging to the Flow have been seen.

Discussion: This term is typically represented as a configurable option in the particular Flow Monitoring implementation. See section 5.1.1 of [IPFIX-ARCH] for a more detailed discussion.

Measurement units: Seconds
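The interplay of the Cache Size and the two timeouts determines when Flow Records expire. The following minimal sketch (Python; a toy model for illustration only, not any particular implementation) shows the expiration behaviour that the terms above describe:

   class FlowCache:
       """Toy model of a Flow Monitoring cache (illustration only)."""

       def __init__(self, cache_size, active_timeout, inactive_timeout):
           self.cache_size = cache_size              # entries, Section 5.2.2
           self.active_timeout = active_timeout      # seconds, Section 5.2.3
           self.inactive_timeout = inactive_timeout  # seconds, Section 5.2.4
           self.entries = {}   # flow key -> [first_seen, last_seen, packets]
           self.expired = []   # Flow Records handed to the Exporting Process

       def _expire(self, key):
           self.expired.append((key, self.entries.pop(key)))

       def observe(self, key, now):
           """Account one packet carrying the given Flow Key tuple."""
           entry = self.entries.get(key)
           if entry is not None:
               first_seen, _, packets = entry
               if now - first_seen < self.active_timeout:
                   self.entries[key] = [first_seen, now, packets + 1]
                   return
               self._expire(key)                     # Active Timeout expiry
           # expire idle Flows before inserting a new Flow Record
           for k in [k for k, e in self.entries.items()
                     if now - e[1] >= self.inactive_timeout]:
               self._expire(k)                       # Inactive Timeout expiry
           if len(self.entries) >= self.cache_size:
               self._expire(next(iter(self.entries)))  # cache full: emergency expiry
           self.entries[key] = [now, now, 1]

The emergency expiry on a full Cache is the behaviour that the Cache Overflow Mode in Section 7.2 deliberately exercises.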
5.2.5 Flow Expiration Rate

Definition: The number of Flow Records which expire (as defined by the Flow Expiration term) from the Cache within a time interval.

Measurement units: Number of Flow Records per second

5.2.6 Flow Monitoring Throughput

Definition: The maximum Flow Expiration Rate the DUT can sustain without losing any information about any offered Flow, while exporting the Flow information to an external collecting device (Collector), as measured at the Collector.

Discussion: Flow Monitoring Throughput is the equivalent of the [RFC1242] (Benchmarking Terminology for Network Interconnection Devices) packet forwarding throughput: it provides a single number that allows different implementations to be compared. Flow Monitoring requires considerable processing power to collect, store, process and export all the information about the Flows passing through the DUT; the expected value is therefore several orders of magnitude lower than the pure packet forwarding throughput.

Measurement units: Number of Flow Records per second

6. Test Set Up

6.1 Test Topology

The test set-up is identical to the one used by [RFC2544], with just the addition of a Collector to analyse the Flow Export:

   Test topology with unidirectional traffic

                       +-----------+
                       |           |
                       | Collector |
                       |           |
                       +-----------+
                             |
                             |
   +--------+         +-------------+          +----------+
   |        |         |             |          |          |
   | sender |-------->|     DUT     |--------->| receiver |
   |        |         |             |          |          |
   +--------+         +-------------+          +----------+

                            Figure 1

   Test topology with bidirectional traffic

                        +-----------+
                        |           |
                        | Collector |
                        |           |
                        +-----------+
                              |
                              |
   +----------+        +-------------+         +----------+
   |          |        |             |         |          |
   | sender   |------->|             |-------->| receiver |
   |          |        |     DUT     |         |          |
   | receiver |<-------|             |<--------| sender   |
   |          |        |             |         |          |
   +----------+        +-------------+         +----------+

                            Figure 2

The ideal way to implement the test is to use one traffic generator (a device providing the sender and receiver capabilities) with a sending port and a receiving port. This allows an easy check whether all the traffic sent by the sender was transmitted by the DUT and received at the receiver. If the effects of enabling Flow Monitoring on several interfaces are of concern, or if the maximum media speed is less than the DUT throughput, the topology can be expanded with several input and output ports.

6.2 Base DUT Set Up

The base DUT set-up and the way the set-up is reported in the test results are fully specified in Section 7 of [RFC2544]. The base DUT configuration might include other features, like packet filters or quality of service, on the input and/or output interfaces if there is a need to study Flow Monitoring in the presence of those features. The Flow Monitoring measurement procedures do not change in this case. When evaluating measurement results, consideration needs to be given to the possible change of the packet rates offered to the DUT, and to Flow Monitoring, after these features are applied to the configuration.

6.3 Flow Monitoring Configuration

The DUT Observation Points configuration needs to be decided upon depending on the interest and scope of the testing, as follows:

   a. input port/ports only

   b. output port/ports only

   c. both input and output

Generally, the placement of Observation Points depends on the position of the DUT in the deployed network and the purpose of the Flow Monitoring deployment.
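Whichever placement is chosen, the same small set of DUT-side parameters recurs throughout the tests in Section 7 and MUST be recorded with each run. A minimal sketch of how a test harness might capture them (Python; the structure and all field names are illustrative assumptions, not any vendor's configuration syntax):

   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class FlowMonitoringTestConfig:
       """DUT-side Flow Monitoring parameters recorded with each test run."""
       observation_points: List[str]   # e.g. ["input:port1"], ["output:port2"], or both
       cache_size: int                 # entries, Section 5.2.2
       active_timeout_s: int           # Section 5.2.3
       inactive_timeout_s: int         # Section 5.2.4
       flow_keys: List[str] = field(default_factory=lambda: [
           "src_ip", "dst_ip", "src_port", "dst_port", "protocol"])

   # Example: normal cache mode parameters per Section 7.1.1 (values are examples)
   normal_mode = FlowMonitoringTestConfig(
       observation_points=["input:port1"],
       cache_size=500_000,         # maximum available on the DUT
       active_timeout_s=60,
       inactive_timeout_s=15,      # minimum configurable value
   )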
The testing procedures are otherwise the same for all these possible configurations.

To measure input and output simultaneously on one DUT port, the topology in Figure 1 can be used, provided the traffic sender and receiver allow full duplex operation; i.e., if the sender can simultaneously receive and analyse traffic, and the receiver can simultaneously also send traffic, then the topology can be used as it is. The only change is in the configurations of the sender and receiver to allow full duplex operation.

The Cache Size available to the DUT operation MUST be known and taken into account when designing the test, as specified in Section 7.

Flow Export MUST be configured in such a way that all Flow Record information from all configured Observation Points is exported. This ensures that the Flow Expiration Rate, as measured from the Collector data, includes all the Flow information handled by the DUT at the time of the measurement. The measurement time to be used to capture Flow Export data and to calculate the Flow Expiration Rate is defined in Sections 7.1 and 7.2.

Various Flow Monitoring implementations might use different default values regarding the export of Control Information. The Flow Export corresponding to Control Information SHOULD be analysed and reported as a separate item in the test report.

6.4 Collector

The Collector MUST be capable of capturing, at the full rate, the export packets sent from the DUT without losing any of them. It does not need any Flow Export decoding capabilities itself; it just needs to provide the captured data in hexadecimal format for an off-line analysis, which can be done after each performed measurement. During the analysis the Flow Export data need to be decoded and the received Flow information counted. The Flow Expiration Rate is obtained from this data as the number of Flows seen in the captured data at the Collector divided by the measurement time, as specified in Section 7 (a simplified off-line counting sketch follows Section 6.6).

The Collector SHOULD support an Ethernet type of interface to connect to the DUT, but any media which allow data capturing and analysis can be used. The capture buffer MUST be cleared before each measurement. The calculated Flow Expiration Rate SHOULD include the Flow Export corresponding to the Control Information.

6.5 Packet Sampling

A Flow Monitoring implementation might provide the capability to analyse the flows after packet sampling is performed. The possible procedures and ways of packet sampling are described in [PSAMP-PROTO] and [PSAMP-TECH], and only those should be used for the measurements. This document does not intend to study the effects of packet sampling itself on network devices; packet sampling can simply be applied as part of the Flow Monitoring configuration on the DUT, and the measurements performed as specified in the later sections. When evaluating measurement results, consideration needs to be given to the change of the packet rates offered to the DUT, and especially to Flow Monitoring, after packet sampling is applied.

6.6 Frame Formats

Flow Monitoring itself is not dependent in any way on the media used on the input and output ports; any media can be used as supported by the DUT and the test equipment. The most common media and frame formats for IPv4, IPv6 and MPLS traffic are specified within [RFC2544], [RFC5180] and [MPLS].
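As an illustration of the off-line analysis described in Section 6.4, the sketch below counts Flow Records in a captured stream of IPFIX export messages ([RFC5101] wire format) and derives the Flow Expiration Rate. It is a simplified example resting on stated assumptions: the capture has already been stripped down to raw IPFIX message payloads, only fixed-length Information Elements are used, Options Templates are ignored, and every Template is seen before the Data Records it describes.

   import struct

   def count_flow_records(messages):
       """Count IPFIX Data Records in a list of raw IPFIX message payloads (bytes)."""
       templates = {}          # template ID -> data record length in bytes
       data_records = 0
       for msg in messages:
           version, msg_len = struct.unpack("!HH", msg[:4])
           assert version == 10                    # IPFIX, RFC 5101
           offset = 16                             # 16-byte IPFIX message header
           while offset + 4 <= msg_len:
               set_id, set_len = struct.unpack("!HH", msg[offset:offset + 4])
               if set_len < 4:
                   break                           # malformed set, stop parsing
               body = msg[offset + 4:offset + set_len]
               if set_id == 2:                     # Template Set
                   pos = 0
                   while pos + 4 <= len(body):
                       tmpl_id, field_count = struct.unpack("!HH", body[pos:pos + 4])
                       pos += 4
                       rec_len = 0
                       for _ in range(field_count):
                           ie_id, ie_len = struct.unpack("!HH", body[pos:pos + 4])
                           pos += 8 if ie_id & 0x8000 else 4   # skip enterprise number
                           rec_len += ie_len
                       templates[tmpl_id] = rec_len
               elif set_id >= 256 and set_id in templates:     # Data Set
                   data_records += (set_len - 4) // templates[set_id]
               offset += set_len
       return data_records

   # Flow Expiration Rate (Section 5.2.5): records counted over the measurement time
   # flow_expiration_rate = count_flow_records(captured_msgs) / measurement_time_s

Options Template Sets and other Control Information are not counted here; per Section 6.3 they SHOULD be analysed and reported separately.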
6.7 Frame Sizes

The frame sizes to use are specified in [RFC2544], Section 9, for Ethernet type interfaces (64, 128, 256, 1024, 1280, 1518 bytes) and in [RFC5180], Section 5, for Packet over SONET interfaces (47, 64, 128, 256, 1024, 1280, 1518, 2048, 4096 bytes).

6.8 Flow Records

The Flow Record definition is very implementation specific. A Flow Monitoring implementation might allow only a fixed Flow Record definition, based on the most common IP parameters in the IPv4 or IPv6 headers - like source and destination IP addresses, IP protocol numbers or transport level port numbers. Another implementation might allow the user to define a completely arbitrary Flow Record to monitor the traffic. The only requirement for the tests defined in this document is the need for a large number of Flow Records in the Cache. The Flow Keys needed to achieve that will typically be the source and destination IP addresses and the transport level port numbers.

   Example Flow Record:

   Flow Key fields
      Source IP address
      Destination IP address
      Transport layer source port
      Transport layer destination port
      IP protocol
      IP Type of Service or IP flow label

   Other fields
      Packet counter
      Byte counter

7. Flow Monitoring Throughput Measurement Methodology

Objective: To define and measure metrics fully expressing the Flow Monitoring performance, which can be used to compare different implementations.

Definition: Section 5.2.6

The Flow Expiration Rate the DUT can achieve without losing any Flow information depends on the combination of a number of Flow Monitoring parameters and the traffic patterns offered to the DUT. This makes it a complex task to define and measure a single metric which would characterise the Flow Monitoring performance. There are two Flow Monitoring operational modes which need to be distinguished and measured separately, while using the same metrics.

7.1 Normal Cache Mode

In the normal cache mode the DUT MUST hold fewer Flow Records in the Cache than the available Cache Size during the whole test.

7.1.1 Flow Monitoring Configuration Parameters

Cache Size:
   Maximum available value on the network device.

Inactive Timeout:
   Minimum possible value on the network device.

Active Timeout:
   A value higher than or equal to the Inactive Timeout.

Flow Keys Definition:
   Needs to allow large numbers of unique Flow Records to be created in the Cache by incrementing one or several Flow Keys. The number of unique combinations of Flow Keys MUST be larger than the DUT Cache Size, or larger than the product of the offered packet rate and the Inactive Timeout, so that the Flows in the Cache never get updated before they expire.

7.1.2 Traffic Configuration Parameters

Traffic Generation:
   The traffic generator, when sending the packets, needs to increment the packet header values defined as Flow Keys in the Flow Record with each sent packet. Each packet then represents one Flow Record in the DUT Cache, and the packet rate equals the Flow Expiration Rate.

Maximum Packet Rate:
   The maximum packet rate which can be used for the normal cache mode measurement is the Cache Size divided by the Inactive Timeout. If the Flow Monitoring implementation allows an Inactive Timeout of 0 to be configured, then any packet rate can be used. This makes sure the Flows expire from the Cache before it becomes full.
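The constraints in Sections 7.1.1 and 7.1.2 reduce to two simple arithmetic checks. A small sketch of how a test script might validate a planned normal cache mode run before starting it (Python; function and variable names are illustrative):

   def normal_mode_max_packet_rate(cache_size, inactive_timeout_s):
       """Maximum packet rate (= Flow Expiration Rate) usable in normal cache mode."""
       if inactive_timeout_s == 0:
           return float("inf")              # any rate is possible (Section 7.1.2)
       return cache_size / inactive_timeout_s

   def check_normal_mode(cache_size, inactive_timeout_s, packet_rate, unique_flow_keys):
       """Verify the Section 7.1 pre-conditions before starting a measurement."""
       max_rate = normal_mode_max_packet_rate(cache_size, inactive_timeout_s)
       assert packet_rate <= max_rate, "Cache would fill up before Flows expire"
       # Flows must never be refreshed before they expire (Section 7.1.1)
       assert unique_flow_keys > min(cache_size, packet_rate * inactive_timeout_s), \
           "Not enough unique Flow Key combinations"

   # Example: a 500 000-entry Cache with a 15 s Inactive Timeout allows at most
   # ~33 333 packets (Flows) per second in normal cache mode.
   check_normal_mode(500_000, 15, 33_000, unique_flow_keys=2_000_000)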
7.1.3 Measurement Time

The measurement time to be used to calculate the Flow Monitoring Throughput, as specified in Section 6.4, is equal to the time interval during which traffic is offered to the DUT. The Collector MUST continue capturing export data for at least the Inactive Timeout period after the traffic generation has stopped. This ensures that all the Flows created in the DUT Cache are exported and analysed.

7.1.4 Procedure

The measurement procedure is the same as the Throughput measurement in Section 26.1 of [RFC2544] for the traffic sending side. The output analysis is not done on the DUT receiving side; the analysed quantity for the measurement of the maximum rate value is the Flow count as provided by the Flow Export data captured at the Collector, as specified in Section 6.4.

If the maximum packet rate (as specified in Section 7.1.2) does not cause the DUT to drop any Flow, then the reported Flow Monitoring Throughput SHOULD be equal to the maximum packet rate. The limitation here is that the configurable values of the Cache Size and the Inactive Timeout do not allow testing with higher Flow Expiration Rates in normal cache mode.

7.2 Cache Overflow Mode

In the cache overflow mode the DUT MUST have the Cache fully occupied by Flow Records at all times during the measurement.

7.2.1 Flow Monitoring Configuration Parameters

Cache Size:
   The Cache Size should be configured to a moderate size, according to the expected DUT position in the network. Its value may be limited by the fact that the number of unique Flow Key sets which the traffic generator (sender) can provide should be several times larger than the Cache Size, to make sure no Flow gets refreshed by another packet before it expires from the Cache.

   Example Cache Sizes:
      Small office device   - 5 000 to 10 000 entries
      Medium access device  - 50 000 to 100 000 entries
      Backbone device       - 500 000 entries and higher

Inactive Timeout:
   Maximum possible value on the network device. The value has to be larger than the Cache Size divided by the test packet rate offered to the DUT, to make sure the Flows do not expire from the Cache before the number of Flows in the Cache reaches the Cache Size.

Active Timeout:
   A value higher than or equal to the Inactive Timeout.

Flow Keys Definition:
   Needs to allow large numbers of unique Flow Records to be created in the Cache by incrementing one or several Flow Keys. The number of unique combinations of Flow Key values MUST be at least two times larger than the DUT Cache Size. This ensures that an incoming packet never refreshes an already existing Flow in the Cache and instead causes another Flow to expire from the Cache. This is sometimes called emergency expiry, as opposed to the inactive or active timeout of a Flow Record.

7.2.2 Traffic Configuration Parameters

Traffic Generation:
   The traffic generator needs to increment the Flow Key values with each sent packet; this way each packet represents one Flow in the DUT Cache, and the packet rate equals the Flow Expiration Rate.

7.2.3 Measurement Time

The measurement time to be used to calculate the Flow Monitoring Throughput, as specified in Section 6.4, is equal to the time interval during which traffic is offered to the DUT minus the time needed to fully fill the Cache, calculated as the Cache Size divided by the test traffic packet rate used during the measurement.

The Collector MUST stop capturing export data immediately when the traffic generation is stopped. This ensures that the Flow Records exported later from the fully occupied Cache do not distort the measured Flow Monitoring Throughput.
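The Section 7.2.3 rule can be applied mechanically when post-processing the capture. A minimal sketch (Python; names are illustrative) of the cache overflow mode calculation, using a Flow Record counting function such as the one sketched after Section 6.6:

   def cache_overflow_throughput(flow_records_counted, traffic_time_s,
                                 cache_size, packet_rate):
       """Flow Expiration Rate measured in cache overflow mode (Section 7.2.3)."""
       cache_fill_time_s = cache_size / packet_rate    # time to fully occupy the Cache
       measurement_time_s = traffic_time_s - cache_fill_time_s
       if measurement_time_s <= 0:
           raise ValueError("Traffic was not offered long enough to fill the Cache")
       return flow_records_counted / measurement_time_s

   # Example: a 100 000-entry Cache filled at 50 000 packets/s takes 2 s to fill;
   # a 60 s traffic run therefore yields a 58 s measurement time.
   rate = cache_overflow_throughput(flow_records_counted=2_900_000,
                                    traffic_time_s=60,
                                    cache_size=100_000,
                                    packet_rate=50_000)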
7.2.4 Procedure

The measurement procedure is the same as the Throughput measurement in Section 26.1 of [RFC2544] for the traffic sending side. The output analysis is not done on the DUT receiving side; the analysed quantity for the measurement of the maximum rate value is the Flow count as provided by the Flow Export data captured at the Collector, as specified in Section 6.4.

8. RFC2544 Measurements

Objective: Provide the RFC2544 network device characteristics in the presence of Flow Monitoring on the DUT.

RFC2544 studies numerous characteristics of network devices. The purely forwarding characteristics measured without Flow Monitoring present on the DUT can change significantly once Flow Monitoring is deployed on the network device. The objective of this section is to define a controlled environment for Flow Monitoring during all the tests, with all the parameters as specified below included in the test report.

8.1 Flow Monitoring Configuration

The Flow Monitoring configuration needs to be applied in the same way as discussed in Section 7, depending on the desired Cache test mode. RFC2544 measurements which do not involve a Throughput measurement can be performed with a fixed Flow Expiration Rate, or at the value of it corresponding to the Flow Monitoring Throughput.

The RFC2544 Throughput measurement with Flow Monitoring enabled represents a two-dimensional measurement, and it depends on the DUT capabilities and the test set-up whether the maximum of both variables can be found at the same time, as specified below.

8.2 Single Traffic Component

Section 12 of [RFC2544] discusses the use of protocol source and destination addresses for the tests defined there. In order to perform all the RFC2544 type measurements with Flow Monitoring enabled, the defined Flow Keys MUST contain the IP source and destination addresses. The RFC2544 type measurements can then be executed under these additional conditions:

   a. the test traffic does not use just a single unique pair of source and destination addresses

   b. the sender allows the test traffic to be defined as follows:

      1) define the test exactly as specified in Section 7.1 or 7.2

      2) allow for a parameter which says: send N (where N is an integer starting at 1 and incremented in small steps) packets with IP addresses A and B before changing both IP addresses to the next value

This test traffic definition allows the IP Flow Monitoring tests defined above to be executed with one defined Flow Expiration Rate while measuring the DUT RFC2544 characteristics at the same time. This set-up is the better option, since it best simulates the live network traffic scenario, with Flows containing more than just one packet.

The initial packet rate at N equal to 1 defines the Flow Expiration Rate for the whole measurement procedure. Subsequent increases of N do not change it any more; the timing and Cache characteristics of the test traffic stay the same. The initial rate needs to be chosen small enough to keep the test traffic sufficiently granular, while still allowing a large enough Flow Expiration Rate for the measurement. The best procedure is to measure the Flow Monitoring Throughput and the RFC2544 Throughput independently first, and then design the rates for the mutual test.
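A sketch of the Section 8.2 traffic definition follows (Python; a generator-independent pseudo-driver whose names and address encoding are illustrative assumptions). The per-Flow rate stays fixed while N scales the aggregate packet rate:

   import itertools

   def single_component_traffic(base_src, base_dst, flow_rate_pps, n_packets_per_flow):
       """Yield (src, dst, departure_time) tuples per Section 8.2: N packets per
       address pair before stepping to the next pair; the Flow Expiration Rate
       stays equal to flow_rate_pps for any N."""
       packet_interval = 1.0 / (flow_rate_pps * n_packets_per_flow)
       t = 0.0
       for i in itertools.count():
           src = base_src + i      # incrementing Flow Key values (addresses as integers)
           dst = base_dst + i
           for _ in range(n_packets_per_flow):
               yield src, dst, t
               t += packet_interval

   # N = 1 reproduces the pure Flow Monitoring test of Section 7; larger N raises
   # the packet rate (RFC2544 load) without changing the Flow Expiration Rate.
   stream = single_component_traffic(base_src=0x0A000001, base_dst=0xC0A80001,
                                     flow_rate_pps=20_000, n_packets_per_flow=4)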
8.3 Two Traffic Components

The test traffic set-up in Section 8.2 might be difficult to achieve with commercial traffic generators. The way around this is to define two traffic components in the test traffic - one to populate the Flow Monitoring Cache and a second one to execute the RFC2544 tests.

Flow Monitoring Test Traffic Component - the exact traffic definition as specified in Sections 7.1 and 7.2.

RFC2544 Test Traffic Component - test traffic as specified by [RFC2544], but under the condition that it MUST create just one Flow Record in the DUT Cache. In the particular set-up discussed here this would mean a traffic stream with just one unique pair of source and destination IP addresses (this could be avoided if the Flow Keys were, for example, the UDP/TCP source and destination ports and the Flow Record did not contain the addresses).

The first traffic component exercises the DUT in terms of Flow activity, while the second traffic component measures the RFC2544 characteristics. The traffic rate to be reported as the Throughput is the sum of the rates of both components; the other RFC2544 procedures do not need any change.

9. Flow Monitoring Accuracy

The pure Flow Monitoring tests in Section 7 provide the capability to verify the Flow Monitoring accuracy in terms of the exported Flow Record data. Since every Flow Record created in the Cache is populated by just one packet, the full set of data captured at the Collector can be parsed (i.e., providing the exported Flow Record contents, not only the Flow Record count), and each set of parameters from each Flow Record can be checked against the parameters configured on the traffic generator and set in the packets sent to the DUT.

The exported Flow Record is considered accurate if:

   1. all the Flow Record fields are present in each exported Flow Record

   2. all the Flow Record field values match the value ranges as set by the traffic generator (for example, an IP address falls within the range of the IP address increments on the traffic generator)

   3. all the possible Flow Record field values as defined at the traffic generator have been found in the captured export data at the Collector. This check needs to be offset against any packet losses detected during the test.

If Packet Sampling is deployed, then only the verifications in points 1 and 2 can be performed. This verification can be eased by collecting some Flow Monitoring statistics from the DUT.
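A sketch of the three accuracy checks above (Python), assuming the off-line analysis has already decoded the export into one dictionary per Flow Record; the field names and the decoded representation are assumptions of this example, not a prescribed format:

   def check_accuracy(decoded_records, expected_fields, generator_ranges, sampling=False):
       """Apply the accuracy checks of Section 9 to decoded Flow Records.

       decoded_records:  list of dicts, one per exported Flow Record
       expected_fields:  set of field names every record must carry (check 1)
       generator_ranges: dict field -> set/range of values sent by the generator (checks 2/3)
       """
       seen = {f: set() for f in generator_ranges}
       for record in decoded_records:
           # check 1: every configured Flow Record field is present
           missing = expected_fields - record.keys()
           assert not missing, f"missing fields: {missing}"
           # check 2: every value lies within the generator-configured range
           for f, allowed in generator_ranges.items():
               assert record[f] in allowed, f"{f}={record[f]} out of range"
               seen[f].add(record[f])
       if not sampling:
           # check 3: every generated value was exported; offsetting against detected
           # packet loss is left to the caller (e.g. prune generator_ranges beforehand)
           for f, allowed in generator_ranges.items():
               assert seen[f] == set(allowed), f"values never exported for {f}"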
10. Evaluating Flow Monitoring Applicability

The results discussed here, obtained for a certain DUT, allow a preliminary analysis of a Flow Monitoring deployment, based on Internet traffic analysis data provided by organisations like [CAIDA] or obtained from measurements performed on the network in question.

The data needed to estimate whether a certain network device can handle a particular amount of live traffic with Flow Monitoring enabled are, for example:

   Average packet size:                       350 bytes
   Number of packets per IP Flow:             20
   Expected data rate on the network device:  1 Gbit/s

This results in:

   Expected packet rate:  357 000 pps
      (1 Gbit/s divided by 350 bytes per packet)

   Flows per second:      18 000
      (packet rate of 357 000 pps divided by 20 packets per IP Flow)

Under a constant load of traffic with the parameters above, the network device will always run in the cache overflow mode (if the network is large enough, the Flows will hardly ever repeat themselves within the few seconds needed to fill up a Cache of, let us say, 300 000 entries - provided the Flow Keys contain parameters like IP addresses or transport level protocols and ports).

11. Acknowledgements

This work was possible thanks to the patience and support of the Cisco Systems NetFlow development team, namely Paul Aitken, Paul Atkins and Andrew Johnson. Thanks belong also to Aamer Akhter for initiating this work and to Benoit Claise for careful reviews and presentations of the draft.

12. IANA Considerations

This document requires no IANA considerations.

13. Security Considerations

Documents of this type do not directly affect the security of the Internet or of corporate networks as long as benchmarking is not performed on devices or systems connected to operating networks.

Benchmarking activities as described in this memo are limited to technology characterization using controlled stimuli in a laboratory environment, with dedicated address space and the constraints specified in the sections above. The benchmarking network topology will be an independent test setup and MUST NOT be connected to devices that may forward the test traffic into a production network, or misroute traffic to the test management network.

Further, benchmarking is performed on a "black-box" basis, relying solely on measurements observable external to the DUT. Special capabilities SHOULD NOT exist in the DUT specifically for benchmarking purposes. Any implications for network security arising from the DUT SHOULD be identical in the lab and in production networks.

14. References

14.1. Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC5101]  Claise, B., "Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of IP Traffic Flow Information", RFC 5101, January 2008.

14.2. Informative References

   [RFC1242]  Bradner, S., "Benchmarking Terminology for Network Interconnection Devices", RFC 1242, July 1991.

   [RFC2285]  Mandeville, R., "Benchmarking Terminology for LAN Switching Devices", RFC 2285, November 1998.

   [RFC2544]  Bradner, S., "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, March 1999.

   [RFC5180]  Popoviciu, C., Hamza, A., Dugatkin, D., and G. Van de Velde, "IPv6 Benchmarking Methodology for Network Interconnect Devices", RFC 5180, May 2008.

   [RFC3917]  Quittek, J., "Requirements for IP Flow Information Export (IPFIX)", RFC 3917, October 2004.

   [RFC5102]  Quittek, J., Bryant, S., Claise, B., Aitken, P., and J. Meyer, "Information Model for IP Flow Information Export", RFC 5102, January 2008.
   [IPFIX-ARCH]  Sadasivan, G., Brownlee, N., Claise, B., and J. Quittek, "Architecture Model for IP Flow Information Export", draft-ietf-ipfix-architecture-12, Work in Progress, September 2006.

   [IPFIX-AS]  Zseby, T., Boschi, E., Brownlee, N., and B. Claise, "IPFIX Applicability", draft-ietf-ipfix-as-12, Work in Progress, February 2007.

   [PSAMP-INFO]  Dietz, T., Dressler, F., Carle, G., and B. Claise, "Information Model for Packet Sampling Exports", draft-ietf-psamp-info-11, Work in Progress, October 2008.

   [PSAMP-PROTO]  Claise, B., Quittek, J., and A. Johnson, "Packet Sampling (PSAMP) Protocol Specifications", draft-ietf-psamp-protocol-09, Work in Progress, December 2007.

   [PSAMP-FMWK]  Chiou, D., Claise, B., Duffield, N., Greenberg, A., Grossglauser, M., Marimuthu, P., Rexford, J., and G. Sadasivan, "A Framework for Packet Selection and Reporting", draft-ietf-psamp-framework-13, Work in Progress, June 2008.

   [PSAMP-TECH]  Zseby, T., Molina, M., Duffield, N., and F. Raspall, "Sampling and Filtering Techniques for IP Packet Selection", draft-ietf-psamp-sample-tech-11, Work in Progress, July 2008.

   [PSAMP-MIB]  Dietz, T. and B. Claise, "Definitions of Managed Objects for Packet Sampling", Work in Progress, June 2006.

   [MPLS]  Akhter, A., "MPLS Forwarding Benchmarking Methodology", Work in Progress, November 2008.

   [CAIDA]  Claffy, K., "The nature of the beast: recent traffic measurements from an Internet backbone", http://www.caida.org/publications/papers/1998/Inet98/Inet98.html

Author's Addresses

   Jan Novak (editor)
   Cisco Systems
   Edinburgh, UK

   Phone: +44 7740 925889
   Email: janovak@cisco.com