Line-rate throughput tests on 10-Gigabit Ethernet interfaces

  [KB9798]

Why can a small amount of packet loss occasionally occur when running line-rate throughput tests on 10-Gigabit Ethernet interfaces?

Occasionally, while performing line-rate throughput tests on a 10-Gigabit Ethernet interface, a small amount of packet loss can be observed. Initially this was thought to be hardware dependent, since certain revisions of the 10GE PIC appeared to work while others did not.

In reality, the loss is caused by the transmit Physical Medium Attachment (PMA) and Ethernet chips, which have a timing tolerance of ±100 parts per million (ppm). This may sound small, but it can actually affect a throughput test.

Unlike SONET/SDH interfaces, Ethernet interfaces are not synchronized to a stratum-3 clock source.

So, in the worst-case scenario:
  • The data tester transmits at 10 Gb/s + 100 ppm = 10.001000 Gb/s.
  • The receiving PIC has no problem accepting all of the data and storing it in the PFE.
  • The outgoing PIC transmits at 10 Gb/s - 100 ppm = 9.999000 Gb/s.
  • This leaves 10.001000 - 9.999000 = 0.002000 Gb/s, or 2 Mb/s, worth of traffic that will queue up and get dropped on the outgoing PIC.
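The worst-case arithmetic above can be checked with a short calculation (a minimal sketch; the only inputs taken from the article are the 10 Gb/s nominal rate and the ±100 ppm tolerance, and the variable names are illustrative):

```python
# Worst-case clock mismatch between the ingress and egress 10GE interfaces,
# each allowed a +/-100 ppm timing tolerance.

NOMINAL_RATE_BPS = 10e9   # 10 Gb/s nominal line rate
TOLERANCE_PPM = 100       # +/-100 ppm clock tolerance

# Worst case: the tester's clock runs fast while the outgoing PIC's
# clock runs slow, so the two tolerances add up.
ingress_bps = NOMINAL_RATE_BPS * (1 + TOLERANCE_PPM / 1e6)  # 10.001000 Gb/s
egress_bps = NOMINAL_RATE_BPS * (1 - TOLERANCE_PPM / 1e6)   # 9.999000 Gb/s

# Traffic arriving faster than it can be sent queues up until drops start.
excess_bps = ingress_bps - egress_bps

print(f"ingress: {ingress_bps / 1e9:.6f} Gb/s")
print(f"egress:  {egress_bps / 1e9:.6f} Gb/s")
print(f"excess:  {excess_bps / 1e6:.3f} Mb/s")
```

At 2 Mb/s of excess, even a generous buffer fills quickly: a hypothetical 1 MB queue (8,000,000 bits) would absorb only 8,000,000 / 2,000,000 = 4 seconds of sustained line-rate traffic before drops begin.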
Since a throughput test is considered successful only when there is zero packet loss, you can never achieve 100% line rate.

This problem is not a hardware failure.

Care must be taken when testing 10 Gb/s interfaces at full line rate, as a small amount of packet loss can be expected.
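One way to account for this in test plans (a sketch, not a recommendation from this article; the 200 ppm margin is simply the combined worst-case tolerance described above) is to compute the highest offered load that a worst-case slow egress clock can still sustain:

```python
# Highest loss-free offered load, as a fraction of the nominal line rate,
# assuming the tester clock is +100 ppm fast and the egress clock is
# -100 ppm slow (the worst-case combination).

TOLERANCE_PPM = 100

safe_fraction = (1 - TOLERANCE_PPM / 1e6) / (1 + TOLERANCE_PPM / 1e6)
print(f"safe offered load: {safe_fraction * 100:.4f}% of line rate")
```

In other words, backing the tester off to roughly 99.98% of line rate leaves enough margin that the slowest conforming transmitter can keep up, and a zero-loss result becomes achievable.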