There are certain terms in automation that are used all the time but not always clearly understood. We use these terms with EtherCAT, EtherNet/IP, PROFINET IO, Modbus TCP and with other networks but not always correctly. In this article, I’m going to define some of those terms and provide a little background that might make them more clear. In the end, I’ll discuss how these terms apply to the technologies that we use every day on the plant floor.
Speed – Network speed strictly refers to the number of bits we can get through a wire in one second. You can also think of it as the time a bit is present on a wire. At very slow baud rates, like 300 baud (bits per second), the bit is present on the wire for about 3 milliseconds. At 1 meg (1,000,000 bits per second), the bit is on the wire for exactly 1 microsecond. Network speeds are measured in bits per second (bps), kilobits per second (kbps, 1,000 bps), megabits per second (Mbps, 1,000 kbps) and gigabits per second (Gbps, 1,000 Mbps).
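The relationship between bit rate and bit time is just a reciprocal. A quick sketch of the arithmetic behind the numbers above:

```python
# Bit duration is the reciprocal of the bit rate.
def bit_time_seconds(bits_per_second):
    return 1.0 / bits_per_second

# At 300 baud each bit sits on the wire for roughly 3.3 milliseconds;
# at 1 Mbps it lasts exactly 1 microsecond.
print(bit_time_seconds(300))
print(bit_time_seconds(1_000_000))
```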
Bandwidth – Bandwidth is the capacity of the network to move data. A 50 megabit network can move 50 megabits of data per second, but that doesn’t mean that your device is getting all that capacity. If there is another device on the network, you each can get 25 megabits of capacity. If there are ten devices, you each can get 5 megabits of capacity per second and so on. Even though any individual message moves very fast (perhaps at 1 Gbps), your device only gets access to that very fast pipe intermittently. The share of the capacity your device can actually use is its bandwidth.
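The division described above can be written out directly. This is an idealized even split; as noted later in the article, real networks rarely share capacity this evenly:

```python
# Idealized even split of a link's capacity across the devices sharing it.
def per_device_share_mbps(link_mbps, device_count):
    return link_mbps / device_count

print(per_device_share_mbps(50, 2))   # two devices: 25 Mbps each
print(per_device_share_mbps(50, 10))  # ten devices: 5 Mbps each
```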
Throughput – Throughput is the actual amount of data that is transferred across a network link. A network link may be rated for 50 Mbps, but the actual amount of data it can transfer may be limited to something much less. Throughput is reduced by the number of devices you have on the network or network segment, the protocol you are using and many other factors. A common way to visualize this is as a congested three-lane highway. It has the capacity to move cars at 55 mph and it does when there are only a moderate number of cars. But at peak traffic times or when the weather is bad or when a traffic cop is writing a ticket, the cars move at slower speeds. That’s how throughput gets degraded. Any number of things can cause the throughput on your network to degrade.
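One of the "many other factors" is protocol overhead: every frame carries headers and framing that don’t count as your data. A rough sketch of how that eats into throughput, where the 78-byte overhead figure is an illustrative assumption (Ethernet framing plus IP and TCP headers), not a measured value:

```python
# Effective throughput: only the payload portion of each frame carries your
# data. The overhead_bytes default of 78 (Ethernet framing, IP and TCP
# headers) is an assumption for illustration, not a measured value.
def effective_throughput_mbps(link_mbps, payload_bytes, overhead_bytes=78):
    return link_mbps * payload_bytes / (payload_bytes + overhead_bytes)

# A 100-byte payload on a 50 Mbps link: only ~28 Mbps of it is real data.
print(effective_throughput_mbps(50, 100))
```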
Latency – You can think of latency as another word for delay. When we talk about latency with an industrial network like EtherNet/IP, PROFINET IO or even Modbus TCP, we’re talking about all of the delay accumulated as a packet makes its way from the Master device (EtherNet/IP Scanner, PROFINET IO Controller or Modbus TCP Master) to the Slave device and back to the Master. There can be delays in gateways, routers and in the end device itself. If the Slave device takes 50 msecs to respond to an acyclic request, that 50 msecs is added to all the other delays before the controller’s request is finally satisfied by a response from the target device.
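Since latency is accumulated delay, the accounting is a simple sum. The individual stage delays below are hypothetical, chosen only to show how a single slow device dominates the round trip:

```python
# Round-trip latency is the sum of every delay along the path.
# These stage values are hypothetical, picked only to show the accounting.
delays_ms = {
    "controller stack": 1.0,
    "switch forwarding": 0.1,
    "gateway": 5.0,
    "slave acyclic response": 50.0,  # the 50 msec device delay from the text
    "return path": 6.1,
}
total_latency_ms = sum(delays_ms.values())
print(total_latency_ms)  # one slow device dominates the total
```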
Networks like EtherNet/IP and PROFINET IO use cyclic communications. This means that messages are transmitted on a schedule. Typically, every 10 msecs both the controller side device and the end device send packets out to the other one. An EtherNet/IP Scanner with 50 Adapter devices or a PROFINET IO controller with 50 end devices sends 50 messages – one to each of its end devices. There is little latency in those messages as they all go one direction: the output messages move from the controller to the end devices and the input messages from the end devices to the controller. All that traffic does eat up much of the bandwidth. The saving grace is that there is often little else on those network segments.
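A back-of-the-envelope estimate shows how that cyclic traffic eats bandwidth. The 84-byte wire size (a minimum Ethernet frame plus preamble and inter-frame gap) is an assumption for illustration; real cyclic frames vary with the I/O data size:

```python
# Rough load from cyclic traffic: 50 devices, 10 msec cycle, one frame in
# each direction per cycle. The 84-byte wire size (minimum Ethernet frame
# plus preamble and inter-frame gap) is an assumption for illustration.
devices = 50
cycle_seconds = 0.010
frame_wire_bytes = 84

frames_per_second = devices * 2 / cycle_seconds   # inputs + outputs
load_mbps = frames_per_second * frame_wire_bytes * 8 / 1e6
print(frames_per_second, load_mbps)
```

Even with minimum-size frames, that is 10,000 frames per second of steady background traffic on the segment.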
Acyclic messages, the kind a controller sends on demand (a command to decrease a drive’s ramp-up time, for example), are going to encounter latency. The message goes from the controller, gets digested by the end device, and the end device forms a response and sends it back. The total delay through any gateways, routers and switches, plus the end device delay, is the latency of the network. Modbus TCP messages, and the messages of all other networks that use acyclic messaging, experience this latency.
EtherCAT is one of a class of hardware-augmented, real-time Ethernet communication protocols. Every EtherCAT slave device contains a special ASIC (currently limited to 100 Mbps) that processes the single telegram issued by a Master device. The EtherCAT slave ASIC extracts new output data from the network and inserts new input data onto the network – all while the telegram is moving from its input RJ45 jack to its output RJ45 jack. With only a single telegram and nearly non-existent latency as each message is processed, EtherCAT makes the best use of bandwidth and has the lowest latency and the highest throughput of all the network protocols.
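The extract-and-insert behavior can be sketched as a toy model. This is purely illustrative: real EtherCAT slaves do this in ASIC hardware while the frame is still being forwarded bit by bit, and the slot layout here is a made-up simplification:

```python
# Toy model of EtherCAT's on-the-fly processing: the master's single
# telegram carries one data slot per slave; each slave copies its output
# data out of the frame and writes fresh input data into the same slot as
# the frame passes through. Illustrative only - real slaves do this in
# ASIC hardware while the frame is still being forwarded.
def run_cycle(telegram, slaves):
    for idx, slave in enumerate(slaves):
        slave["last_output"] = telegram[idx]   # slave consumes its outputs
        telegram[idx] = slave["input_value"]   # and inserts its inputs
    return telegram  # arrives back at the master full of input data

slaves = [{"input_value": 10 * i, "last_output": None} for i in range(3)]
inputs = run_cycle([1, 2, 3], slaves)
print(inputs)                                  # input data for the master
print([s["last_output"] for s in slaves])      # outputs each slave received
```

One frame serves every device on the segment, which is why the overhead per device is so much lower than sending 50 separate request/response pairs.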
Note that I am simplifying here. There are other issues that result in some devices getting more of the available bandwidth than other devices.