Ethernet is now approaching 50 years old and, like us humans, it looks pretty different than it did when Bob Metcalfe birthed it in 1973. Back then, Ethernet was conceived as a flat network with a group of devices all sharing a common link. If more than one device transmitted at the same time, the messages would be garbled, and each device would wait a random interval before resending. The first hubs facilitated this kind of architecture, and the term collision domain was born.
We now regularly segment devices into separate networks using routers. All the devices on each port of a router form a network with a common broadcast domain and a common address space.
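The idea that each router port defines its own address space can be sketched with Python's standard `ipaddress` module. The 192.168.1.0/24 network below is an illustrative choice (it matches the example network used later in the article), not a required configuration.

```python
import ipaddress

# An illustrative /24 network, as might sit behind one router port.
network = ipaddress.ip_network("192.168.1.0/24")

# Two hosts on the same router port share its address space...
print(ipaddress.ip_address("192.168.1.10") in network)   # True
print(ipaddress.ip_address("192.168.1.200") in network)  # True

# ...while a host behind a different router port does not.
print(ipaddress.ip_address("192.168.2.10") in network)   # False
```

Traffic between the 192.168.1.x and 192.168.2.x address spaces must pass through the router; the router will not forward broadcast traffic between them.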
Switches limit collisions on a network by buffering messages sent from device to device. Unlike Ethernet's original flat network, switches store and forward messages on each of their ports. What switches don't do is limit broadcast traffic. Routers, unlike switches, prevent broadcast traffic from moving between networks.
Every device on a network (each port on a router is a network) receives ALL the broadcast traffic on that network. Broadcast traffic is traffic that must be sent to and consumed by every device on the network. The most important broadcast traffic is ARP (Address Resolution Protocol) traffic. When a device wants to send an Ethernet message, it needs to identify the destination device's MAC (Media Access Control) address. A MAC address is a 48-bit address assigned by the device manufacturer: 24 bits identify the manufacturer and 24 bits are a sequence number. Switches forward messages from port to port using that MAC address (known as the Layer 2 address).
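The two halves of a MAC address described above can be pulled apart with a few lines of Python. The MAC address in the example is an arbitrary illustrative value, not tied to any particular vendor.

```python
def split_mac(mac: str) -> tuple[str, str]:
    """Split a MAC address into its manufacturer (OUI) and serial halves."""
    octets = mac.split(":")
    assert len(octets) == 6, "expected six colon-separated octets"
    oui = ":".join(octets[:3])      # first 24 bits: manufacturer identifier
    serial = ":".join(octets[3:])   # last 24 bits: sequence number
    return oui, serial

# An arbitrary example address for illustration.
oui, serial = split_mac("00:1d:9c:c7:b0:11")
print(oui, serial)  # 00:1d:9c c7:b0:11

# ARP requests are addressed to the all-ones broadcast MAC, which is
# why every device on the network must receive and process them.
BROADCAST_MAC = "ff:ff:ff:ff:ff:ff"
```

Because the ARP request is sent to the broadcast address, every device on the network spends CPU time examining it, even though only the device that owns the requested IP address will reply.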
As the number of devices in a switched network (aka broadcast domain) increases, the volume of broadcast messages increases to the point that it consumes more and more of the processing bandwidth of the devices in that network. To limit that traffic, network designers either form more networks with smaller broadcast domains or virtually subdivide the network into VLANs (Virtual Local Area Networks).
A VLAN (Virtual LAN) is a virtual network composed of some of the ports on one or more switches in a network. A VLAN acts just like a regular network: broadcast messages are limited to the VLAN, and it has its own address range, which can even use an entirely different address class. In the diagram above, two VLANs might be appropriate: VLAN 10 might consist of the red devices (the marketing team), while VLAN 20 might consist of the green devices (the accounting team).
Forming VLANs within a network provides a number of benefits:
- Security – Without a VLAN, in networks like the 192.168.1.x network above, all devices on the network can communicate with all other devices on the network. Separating the 192.168.1.x network into two VLANs as described above makes it impossible for the marketing team to communicate with any of the accounting team client devices. Only routers and Layer 3 switches configured for inter-VLAN routing can move packets between VLANs. (Crossing VLANs without such routing, known as VLAN hopping, is an attack, not a feature.)
- Broadcast Domain – Each VLAN forms its own broadcast domain. The number of broadcast packets sent grows linearly with the number of devices in the network, and all these broadcast packets can congest the network and degrade its performance. Splitting the traffic into VLANs reduces broadcast traffic and limits network congestion.
- Performance – VLANs allow the network designer to segregate higher priority traffic on its own network. For example, VOIP phones are typically placed on their own VLAN.
- Logical Groupings – VLANs enable groups of client devices dispersed around an office or plant to be logically connected to a common network. Network services particular to a logical group can be made available only to the client devices on the VLAN.
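Under the hood, switches keep VLAN traffic separated by inserting a 4-byte 802.1Q tag into each Ethernet frame that crosses a trunk link between switches. The sketch below packs and unpacks that tag with Python's standard `struct` module; the VLAN IDs 10 and 20 are the illustrative marketing and accounting VLANs from the example above.

```python
import struct

TPID = 0x8100  # EtherType value that marks a frame as 802.1Q-tagged

def make_vlan_tag(vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Pack the 4-byte 802.1Q tag: TPID, then priority/DEI/VLAN ID."""
    assert 0 <= vid < 4096 and 0 <= pcp < 8 and dei in (0, 1)
    tci = (pcp << 13) | (dei << 12) | vid  # 3-bit priority, 1-bit DEI, 12-bit VID
    return struct.pack("!HH", TPID, tci)

def parse_vlan_tag(tag: bytes) -> int:
    """Return the VLAN ID carried in a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not an 802.1Q tag"
    return tci & 0x0FFF  # low 12 bits hold the VLAN ID

marketing_tag = make_vlan_tag(vid=10)
accounting_tag = make_vlan_tag(vid=20)
print(parse_vlan_tag(marketing_tag))   # 10
print(parse_vlan_tag(accounting_tag))  # 20
```

The 3-bit priority field in the same tag is also how the VOIP prioritization mentioned above is carried: voice frames can be tagged with a higher priority so switches service them first.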
This article is the second in my series (read part 1 here) of articles on what every control engineer needs to know about IT. In the next article in this series, you will learn about hidden performance problems you may have on your EtherNet/IP industrial network.