Enhanced Transmission Selection

Enhanced Transmission Selection is defined in IEEE P802.1Qaz/D2.3, Virtual Bridged Local Area Networks, Amendment XX: Enhanced Transmission Selection for Bandwidth Sharing Between Traffic Classes. This IEEE 802.1Qaz standard also defines one of the DCBX versions supported by the ExtremeXOS software.

ETS, and similar features in the Baseline DCBX standard, define methods for managing bandwidth allocation among traffic classes (called Priority Groups (PGs) in Baseline DCBX) and for mapping 802.1p CoS traffic to those traffic classes.

The rest of this section provides general guidelines for configuring the ExtremeXOS QoS feature to conform to the ETS requirements. After you configure QoS, DCBX advertises the ETS-compatible configuration to DCBX peers on all DCBX-enabled ports.

ETS configuration is affected by the following set of QoS objects:
  • QoS scheduler
  • QoS profile
  • dot1p

By default, the scheduling is set to strict-priority.

The following command enables ETS-compatible (weighted) scheduling:

configure qosscheduler [strict-priority | weighted-round-robin | weighted-deficit-round-robin] {ports [port_list | port_group | all]}

Each QoS profile supports an IEEE ETS traffic class (TC) or a Baseline DCBX priority group (PG). To determine which QoS profile serves a TC or PG, add the number 1 to the TC or PG number. For example, TC 0 and PG 0 are served by QoS profile 1. ExtremeXOS switches support up to eight QoS profiles and can therefore support up to eight TCs or PGs. The following QoS configuration changes affect the ETS/PG configuration:
  • QoS profile:
    • When you create or delete a QoS profile, you add or remove support for the corresponding TC or PG.

    • The weight configuration helps determine the bandwidth for a TC or PG.

    • The use-strict-priority configuration overrides ETS scheduling and selects strict priority scheduling for the corresponding TC or PG.

    • The dot1p configuration maps each 802.1p priority, and the associated TC and PG, to a QoS profile. If you change the 802.1p mapping, it will change which QoS profile services each TC or PG.

  • Per port configuration parameters:
    • minbw: Sets a minimum guaranteed bandwidth in percent.

  • maxbw: Sets a maximum bandwidth limit in percent.

    • committed_rate: Sets a minimum guaranteed bandwidth in Kbps or Mbps.

  • peak_rate: Sets a maximum bandwidth limit in Kbps or Mbps.
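
The mapping from TC or PG number to QoS profile described earlier in this section (add 1 to the TC or PG number) can be expressed as a one-line calculation. The following Python sketch is purely illustrative; the qp1 through qp8 names follow the convention used in this section:

def qosprofile_for(tc_or_pg):
    """Return the name of the QoS profile that serves an IEEE ETS traffic
    class (TC) or Baseline DCBX priority group (PG) numbered 0 through 7."""
    if not 0 <= tc_or_pg <= 7:
        raise ValueError("TC/PG number must be between 0 and 7")
    return "qp{}".format(tc_or_pg + 1)

print(qosprofile_for(0))  # qp1 serves TC 0 / PG 0
print(qosprofile_for(4))  # qp5 serves TC 4 / PG 4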

For example, the following set of commands creates a QoS profile (qp5) in preparation to support iSCSI traffic, maps packets with 802.1p priority 4 to QoS profile 5, indicates that QoS profile 8 should use strict priority, and sets the weight for the ETS classes:

create qosprofile qp5
configure dot1p type 4 qosprofile qp5
configure qosprofile qp1 weight 1
configure qosprofile qp5 weight 2
configure qosprofile qp8 use-strict-priority
 
Note

All Extreme Networks DCB-capable switches are configured with qp1 and qp8 by default, and some platforms support additional QoS profiles by default. When stacking is used for Summit switches, qp7 is created by default for internal control communications, and is always set to strict priority.

DCBX only advertises the bandwidth for ETS classes, so in the example, the available bandwidth is divided only between qp1 and qp5. The total bandwidth for all ETS classes must add up to 100%, so if the weights don't divide evenly, one or more of the reported bandwidth numbers are rounded to satisfy this requirement. With this in mind, the above configuration results in reported bandwidth guarantees of 33% for TC/PG 0 (qp1) and 67% for TC/PG 4 (qp5).
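
As an illustration of the arithmetic behind these reported percentages, the following Python sketch divides 100% among the ETS classes by weight and rounds so the total is exactly 100. It models only the example above; the switch's internal rounding may differ:

def reported_bandwidth(weights):
    """weights: {qos_profile: weight} for the ETS (non-strict) classes.
    Returns {qos_profile: percent} such that the percentages total 100."""
    total = sum(weights.values())
    # Round each share down, then give any leftover percent to the
    # largest-weight class so the total adds up to exactly 100.
    percents = {p: (w * 100) // total for p, w in weights.items()}
    percents[max(weights, key=weights.get)] += 100 - sum(percents.values())
    return percents

# Weights 1 (qp1) and 2 (qp5) are reported as 33% and 67%, as described above.
print(reported_bandwidth({"qp1": 1, "qp5": 2}))  # {'qp1': 33, 'qp5': 67}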

Weighted round robin scheduling is packet based, so when packets are queued for both classes 0 and 4, the above configuration results in two TC/PG 4 packets being transmitted for each TC/PG 0 packet. As such, the exact percentages are realized only when the average packet sizes for both classes are the same and the measurement is taken over a long enough period of time. Another consideration is that using the lowest weights possible to achieve the desired ratios results in a more even distribution of packets within a class (that is, less jitter). For example, using weights 1 and 2 is usually preferable to using weights 5 and 10, even though the resulting bandwidth percentages are the same.
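
To apply the guidance above about using the lowest possible weights, a configured set of weights can be reduced by its greatest common divisor. A small illustrative Python sketch:

from functools import reduce
from math import gcd

def lowest_equivalent_weights(weights):
    """Scale WRR/WDRR weights down to the smallest integers that preserve
    the same bandwidth ratio (and therefore reduce per-class jitter)."""
    divisor = reduce(gcd, weights)
    return [w // divisor for w in weights]

print(lowest_equivalent_weights([5, 10]))  # [1, 2], the same ratio as 5 and 10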

Enhanced Transmission Selection allows you to configure QoS scheduling to be weighted-deficit-round-robin. In this approach, you can configure a weight in the range of 1–127 on the QoS profiles. The difference between weighted-round-robin (WRR) and weighted-deficit-round-robin (WDRR) is that, in the latter approach, the algorithm uses a “credit counter” mechanism.

The algorithm works in slightly different ways on different platforms:

Platform:

Summit X480, X460, X440 series switches; BlackDiamond 8800 series switches with 8900-G96T-c, 8900-10G24X-c, 8900-MSM128, 8900-G48T-xl, 8900-G48X-xl, and 8900-10G8X-xl modules; E4G-400, E4G-200 cell site routers.

Methodology:

  • Credit counter—A token bucket that keeps track of bandwidth overuse relative to each queue's specified weight.

  • Weight—Relative bandwidth allocation to be serviced from a queue in each round compared with other queues. Range is between 1 and 127. A weight of 1 equals a unit of 128 bytes.

  • MTU Quantum Value—2 Kbytes.

  1. Set credit counter to quantum value for all queues.
  2. Service queues in round robin order, according to the weight value. When a packet from a queue is sent, the size of the packet is subtracted from the credit counter. A queue is serviced until it is either empty or its credit counter is negative.
  3. When all queues are either empty or their credit counter is less than 0, replenish credits by: MTU quantum value × weight of queue. No queue's credit can ever be more than quantum value × weight.

Repeat steps two and three until all queues are empty.
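
A minimal Python sketch of the credit-counter behavior described in the steps above. Queues are modeled as lists of packet sizes in bytes with the 2 Kbyte MTU quantum; this is a simplified illustration of the algorithm, not the hardware implementation:

from collections import deque

QUANTUM = 2048  # MTU quantum value: 2 Kbytes

def wdrr_schedule(queues, weights):
    """queues: {name: deque of packet sizes in bytes}; weights: {name: 1-127}.
    Returns the order in which packets are transmitted."""
    credit = {q: QUANTUM for q in queues}  # step 1: credit = quantum for all queues
    sent = []
    while any(queues.values()):
        serviced = False
        for q in queues:  # step 2: service queues in round robin order
            # A queue is serviced until it is empty or its credit is negative.
            while queues[q] and credit[q] >= 0:
                packet = queues[q].popleft()
                credit[q] -= packet  # subtract the packet size from the credit
                sent.append((q, packet))
                serviced = True
        if not serviced:
            # Step 3: all queues are empty or negative; replenish by
            # quantum x weight, capped at quantum x weight.
            for q in queues:
                credit[q] = min(credit[q] + QUANTUM * weights[q],
                                QUANTUM * weights[q])
    return sent

# With weights 1 and 2, queue "b" is granted twice the credit of "a" per round.
order = wdrr_schedule({"a": deque([1000] * 6), "b": deque([1000] * 6)},
                      {"a": 1, "b": 2})
print(order)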

Platform:

Summit X670, X460-G2, X670-G2, and X770 series switches; BlackDiamond 8800 series switches with 8900-40G6X-xm module; BlackDiamond X8 series switches with BDX-MM1, BDXA-FM960, BDXA-FM480, BDXA-40G24X, and BDXA-40G12X modules.

Methodology:

  • Credit counter—A token bucket used to keep track of bandwidth overuse relative to each queue's specified weight.

  • Weight—Relative bandwidth allocation to be serviced from a queue in each round compared with other queues. Range is between 1 and 127.

  • K—Minimum value required to make all credit counters positive. This value is recalculated after each round.

  1. Set credit counter for each queue to the queue's weight value.
  2. Service queues in round robin order, according to the weight value. When a packet from a queue is sent, the size of the packet is subtracted from the credit counter. A queue is serviced until it is either empty or its credit counter is negative.
  3. When all queues are either empty or their credit counter is less than 0, replenish credits by: 2^K × weight of queue. K is calculated so that it is the minimum value required to make all credit counters positive. No queue's credit can ever be more than 2^K × weight of queue.

Repeat steps two and three until all queues are empty.
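
As described above, K is the smallest value for which adding 2^K × weight makes every credit counter positive. A short Python sketch of that replenishment step (illustrative only):

def replenish(credit, weights):
    """credit: {queue: current, possibly negative, credit};
    weights: {queue: 1-127}. Returns (K, new credit counters)."""
    k = 0
    # Find the minimum K that makes every counter positive.
    while any(c + (2 ** k) * weights[q] <= 0 for q, c in credit.items()):
        k += 1
    # Replenish by 2^K x weight, capped at 2^K x weight per queue.
    return k, {q: min(c + (2 ** k) * weights[q], (2 ** k) * weights[q])
               for q, c in credit.items()}

print(replenish({"a": -1500, "b": -300}, {"a": 1, "b": 2}))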

Platform:

BlackDiamond 8800 series switches with G48Te, G48Te2, G24Xc, G48Xc, G48Tc, 10G4Xc, 10G8Xc, MSM-48, S-G8Xc, S-10G1Xc, and S-10G2Xc modules.

Methodology:

These modules have a weight range of 1 to 15. Credit is replenished by 2^(weight - 1) × 10 KB.

The number of bytes that can be transmitted in a single round is:
  • Weight 0 = Strict Priority

  • Weight 1 = 10 KB

  • Weight 2 = 20 KB

  • Weight 3 = 40 KB

  • Weight 4 = 80 KB

  • Weight 5 = 160 KB

  • Weight 6 = 320 KB

  • Weight 7 = 640 KB

  • Weight 8 = 1,280 KB

  • Weight 9 = 2,560 KB

  • Weight 10 = 5,120 KB

  • Weight 11 = 10 MB

  • Weight 12 = 20 MB

  • Weight 13 = 40 MB

  • Weight 14 = 80 MB

  • Weight 15 = 160 MB
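
The weight-to-bytes values listed above follow directly from the 2^(weight - 1) × 10 KB replenishment rule. A quick illustrative Python sketch (weight 0 is treated as strict priority rather than a byte count):

def kbytes_per_round(weight):
    """KB a queue may transmit in a single round on these modules."""
    if weight == 0:
        return "strict priority"
    if not 1 <= weight <= 15:
        raise ValueError("weight must be between 0 and 15")
    return 10 * 2 ** (weight - 1)

print(kbytes_per_round(8))   # 1280 KB
print(kbytes_per_round(15))  # 163840 KB, that is, 160 MB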

When ETS scheduling is used without a minbw or committed_rate configured, packets from strict priority classes always preempt packets from ETS classes, so the reported percentages reflect the distribution of the bandwidth after strict priority classes use what they need.

Because of this, consider limiting the bandwidth for any strict priority classes using the maxbw parameter. For example, the following command limits TC/PG 7 to 20% of the interface bandwidth:
configure qosprofile qp8 maxbw 20 ports 1-24

The per-port bandwidth settings described above can also be used to either limit or guarantee bandwidth for an ETS class.

For example, the following command guarantees 40% of the bandwidth to TC/PG 0:
configure qosprofile qp1 minbw 40 ports 1-24

The DCBX protocol takes these minimum and maximum bandwidth guarantees into account when calculating the reported bandwidth. With the addition of this minimum bandwidth configuration, the reported bandwidth would change to 40% for class 0 (qp1) and 60% for class 4 (qp5).
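
The following Python sketch uses a simplified model that reproduces the example above: a class whose weight-based share falls below its configured minbw is reported at the minbw, and the remainder is split among the other classes by weight. This model is an assumption for illustration, not the switch's exact DCBX calculation:

def reported_with_minbw(weights, minbw):
    """weights: {profile: ETS weight}; minbw: {profile: guaranteed percent}.
    Returns reported percentages under the simplified model described above
    (assumes at least one class is not pinned at its minbw)."""
    total = sum(weights.values())
    share = {p: w * 100.0 / total for p, w in weights.items()}
    # Classes whose weight-based share is below their minbw are pinned at minbw.
    pinned = {p for p in weights if share[p] < minbw.get(p, 0)}
    reserved = sum(minbw[p] for p in pinned)
    rest_weight = sum(w for p, w in weights.items() if p not in pinned)
    return {p: minbw[p] if p in pinned
            else round(weights[p] * (100 - reserved) / rest_weight)
            for p in weights}

# qp1 (weight 1, minbw 40) and qp5 (weight 2) are reported as 40% and 60%.
print(reported_with_minbw({"qp1": 1, "qp5": 2}, {"qp1": 40}))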

The following are some important considerations when using minimum and maximum bandwidth guarantees:
  • They change the scheduling dynamic such that a class with a minbw will have priority over other classes (including strict priority classes) until the minbw is met, which differs from the standard ETS scheduling behavior described in IEEE 802.1Qaz.

  • If the minbw is set on multiple classes such that the total is 100%, these classes can starve other classes that do not have a configured minbw. So, for example, if the minbw for both class 0 and class 4 is set to 50% (100% total), traffic from these classes can starve class 7 traffic. This can lead to undesirable results since DCBX and other protocols are transmitted on class 7. In particular, DCBX may report the peer TLV as expired. This effect can be magnified when an egress port shaper is used to limit the egress bandwidth.

  • If all ETS classes have a maxbw set, and the total is less than 100%, the total bandwidth reported by DCBX will be less than 100%. Extreme does not report an error in this case, but some DCBX peers may report an error.

  • Packet size is a factor in the minimum and maximum bandwidth guarantees.

In light of these considerations, use the following guidelines when applying minimum and maximum bandwidth guarantees (a short sketch after the list shows one way to check a configuration against them):
  • If minbw guarantees are used for ETS classes, and strict priority classes exist:
    • Make sure that the total minbw reserved is less than 100%.

    • Configure minbw for the strict priority classes.

  • If strict priority classes exist, you may want to configure a maxbw for the strict priority classes so they don't starve the ETS classes.

  • If maxbw is configured on some ETS classes, ensure that either the total of the maxbw settings for all ETS classes is equal to 100%, or at least one ETS class does not have a maxbw configured.
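
One way to sanity-check a planned configuration against these guidelines is sketched below in Python. The data structure is hypothetical, not a switch API, and the checks only mirror the guidelines listed above:

def check_ets_guidelines(classes):
    """classes: list of dicts with keys 'name', 'strict' (bool), and optional
    'minbw'/'maxbw' percentages. Returns a list of warning strings."""
    warnings = []
    ets = [c for c in classes if not c["strict"]]
    strict = [c for c in classes if c["strict"]]
    if sum(c.get("minbw", 0) for c in classes) >= 100:
        warnings.append("total minbw is 100% or more; classes without a minbw may starve")
    if strict and any("minbw" in c for c in ets) and any("minbw" not in c for c in strict):
        warnings.append("ETS classes use minbw; configure minbw on strict priority classes too")
    if any("maxbw" not in c for c in strict):
        warnings.append("consider a maxbw on strict priority classes so they do not starve ETS classes")
    if ets and all("maxbw" in c for c in ets) and sum(c["maxbw"] for c in ets) < 100:
        warnings.append("all ETS classes have a maxbw totalling under 100%; some DCBX peers may report an error")
    return warnings

# Example: the 50% + 50% minbw configuration discussed above, with qp8 strict.
print(check_ets_guidelines([
    {"name": "qp1", "strict": False, "minbw": 50},
    {"name": "qp5", "strict": False, "minbw": 50},
    {"name": "qp8", "strict": True},
]))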

For more information on the QoS features that support ETS, see QoS.