When To Use Quality of Service In An EtherNet/IP Network

QoS in an EIP network

In two earlier articles on Quality of Service and EtherNet/IP (Introduction to EtherNet/IP QoS and Why Quality of Service on the Factory Floor Can Be So Frustrating), I introduced what Quality of Service means to the EtherNet/IP control engineer working with AB ControlLogix controllers. In this article, I’d like to continue that discussion with some practical guidance on how to know when it becomes important to implement Quality of Service.

It is my long-time contention that a well-architected EtherNet/IP network uses managed switches and full-duplex connections to every device in the network. In this kind of network, there are no collisions because every link is its own collision domain. The network collision domains are illustrated in red in the attached network drawing (Figure 1).

Quality of Service Network Drawing

In control networks architected like this, no Ethernet device has to contend with any other device for access to the network. When only control devices are present, there is almost no risk of congestion: an EtherNet/IP control network simply cannot generate enough messages to stress any industrial managed switch on the market today.
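A back-of-the-envelope calculation illustrates why. The figures below (device count, frame size, requested packet interval) are illustrative assumptions, not measurements, but they show how far a typical cell’s cyclic I/O traffic sits below the capacity of even a 100 Mbit/s link:

```python
# Rough uplink load for a hypothetical cell; all numbers are illustrative.
DEVICES = 50                # I/O adapters producing cyclic data
FRAME_BYTES = 550           # on-the-wire frame size incl. Ethernet/IP headers
RPI_SECONDS = 0.010         # requested packet interval of 10 ms
LINK_BPS = 100_000_000      # 100 Mbit/s link

packets_per_second = DEVICES * (1 / RPI_SECONDS)   # 5,000 frames/s
load_bps = packets_per_second * FRAME_BYTES * 8    # aggregate bit rate
utilization = load_bps / LINK_BPS

print(f"{load_bps / 1e6:.0f} Mbit/s -> {utilization:.0%} of the link")
# -> 22 Mbit/s, about 22% of a 100 Mbit/s link
```

Even this deliberately busy cell leaves the link mostly idle, and a gigabit uplink would make the margin ten times larger.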

The caveat is that IIoT devices are now being added to control networks. These devices generate a growing volume of non-control traffic, and that traffic contends for the uplink to the router and the corporate network. It consumes bandwidth on the switch’s uplink port, the port that connects the control network to its router. If a control engineer can’t avoid having this traffic, it is likely time to start managing the control traffic on the network. Because non-control traffic will only grow, control engineers should plan to ensure that control traffic is handled preferentially and that the timing jitter of control messages through that uplink port is kept low.
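Preferential handling starts with marking control traffic so switches and routers can recognize it. As a preview of that mechanism, here is a minimal sketch of setting a DSCP code point on a socket’s traffic. The DSCP value of 27 shown here is an assumption for illustration (it is often cited for CIP explicit messaging); the values actually recommended for each EtherNet/IP traffic class come from the ODVA specification, and in practice the controller firmware, not user code, does this marking:

```python
import socket

# DSCP occupies the upper six bits of the IP TOS byte, so the TOS value
# passed to the socket option is dscp << 2.
DSCP_EXPLICIT = 27  # illustrative value; consult the ODVA spec for your class

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EXPLICIT << 2)
# ... the socket would then connect to the target on TCP port 44818,
# the registered EtherNet/IP explicit-messaging port.
```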

Lab testing indicates that this is typically not a concern until the uplink reaches about 80% of its theoretical capacity. At that point, a FIFO queue accumulates enough messages that there is a real advantage to moving important control messages to the front of the transmission queue of that overstressed link.
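The benefit is easy to see in a toy model of a congested egress port. The backlog size and frame sizes below are illustrative assumptions; the point is only the contrast between FIFO ordering and strict-priority dequeuing:

```python
import heapq

# Toy model: 40 best-effort 1,500-byte frames are queued ahead of one
# 64-byte control frame on a 100 Mbit/s egress port.
LINK_BPS = 100_000_000
BULK_TX_S = 1500 * 8 / LINK_BPS       # 120 us to transmit one bulk frame
backlog = [("bulk", BULK_TX_S)] * 40 + [("control", 64 * 8 / LINK_BPS)]

# FIFO: the control frame waits behind the entire backlog.
fifo_wait = sum(tx for name, tx in backlog[:-1])

# Strict priority: the control frame jumps the queue (at worst it waits
# for the single frame already on the wire, ignored in this toy model).
prio = [(0 if name == "control" else 1, i, name)
        for i, (name, tx) in enumerate(backlog)]
heapq.heapify(prio)
first_out = heapq.heappop(prio)[2]

print(f"FIFO wait for the control frame: {fifo_wait * 1e6:.0f} us")
print(f"First frame out under strict priority: {first_out}")
```

In the FIFO case the control frame waits roughly 4.8 ms behind the backlog; with priority queuing it goes out next, and its jitter no longer depends on how much bulk traffic happens to be queued.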

Of course, if no control messages cross from cell to cell or zone to zone, there are no control messages on the uplink port, and this won’t concern you. But in many cases, EtherNet/IP messages must be exchanged between networks at different locations in the plant. For example, an assembly machine may need to notify the controller of the stamping press that feeds it to stop production when the assembly machine’s parts buffer is full.

In this kind of situation (Figure 2), messages must traverse not only the control network and its Ethernet infrastructure devices (routers and switches) but also the Ethernet infrastructure of the plant. The more router hops between message source and destination, the greater the chance of meeting congestion. The plant’s routers and switches are not managed by the controls team, and it is more than a minor concern that when control messages contend with traffic from many other departments of the corporation, congestion and delays can result.

EIP message exchange

When that happens, each router and switch along the path between the two control devices must decide which messages to buffer and which to transmit out the egress port. Delays to control messages traversing IT network infrastructure can present real problems for the control engineer. Production equipment can halt while waiting for data, lose connections to other devices, and sometimes bring an entire production system to its knees.

Control system engineers must concern themselves with worst-case operating conditions. Avoiding network congestion by implementing high-bandwidth Ethernet links in the control network is the first line of defense. Implementing prioritization to minimize the effects of congestion when it does occur on those high-bandwidth links is a prudent second step.

In the next article in this series, I’ll talk about how messages can be marked to do just that.