OPC UA and the Internet

If you know anything about me at all, you know that I am a big fan of OPC UA. I have been following UA closely now for almost ten years. I love so much about this technology:

  • It has an object model that is much more sophisticated than EtherNet/IP, PROFINET IO, Modbus TCP, or anything else I’ve ever seen.
  • It has a discovery mechanism where clients can discover what data is available in a device.
  • It has an impressive security model that withstood an in-depth security analysis by Germany's Federal Office for Information Security (BSI).
  • It offers a number of different transports and encodings.
  • It’s scalable to support everything from really small sensors to large servers.
  • It’s extensible. As technology changes, new encodings, transports, and security models can be incorporated.
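To make the object-model and discovery points concrete, here is a minimal sketch, in Python, of the idea behind an OPC UA address space: typed nodes organized in a browsable hierarchy, so a Client can discover what data a device offers instead of relying on a hard-coded register map. The node names (`Boiler1`, `Temperature`) and the `browse` helper are hypothetical illustrations, not part of any OPC UA SDK.

```python
from dataclasses import dataclass, field

# Hypothetical miniature of an OPC UA-style address space: nodes
# carry a type and children, so a client can browse the hierarchy
# and discover what data exists.

@dataclass
class Node:
    name: str
    node_type: str                      # e.g., "Object" or "Variable"
    children: list = field(default_factory=list)

def browse(node, path=""):
    """Recursively list every browse path under a node."""
    path = f"{path}/{node.name}"
    paths = [f"{path} ({node.node_type})"]
    for child in node.children:
        paths.extend(browse(child, path))
    return paths

# A tiny example hierarchy: a boiler object exposing two variables.
root = Node("Boiler1", "Object", [
    Node("Temperature", "Variable"),
    Node("Pressure", "Variable"),
])

for p in browse(root):
    print(p)
```

Contrast this with Modbus TCP, where a client sees only numbered registers and must be told out-of-band what each one means.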

What’s not to like? I like it so much that I wrote two books on it. The one I always recommend is “OPC UA – Unified Architecture: The Everyman’s Guide to the Most Important Information Technology in Industrial Automation.” This book provides a deep dive into a lot of the technology behind OPC UA.

But the big question is about OPC UA and the Internet. There is a lot of controversy about the best way to move data from the factory floor to the Cloud. Some people are partial to MQTT. Others really like HTTP and the REST architecture. For a while, some of the trade associations claimed that you could use EtherNet/IP or PROFINET IO for Cloud communication, but it is now widely recognized how impractical that is.

There are literally dozens of ways to move factory floor data to a Client on the Internet. If we could design a new technology, what would we want it to do? Here’s my list:

1. It needs to support a number of different encodings. Most Servers in the Cloud can decode XML and JSON, so at a minimum it must support both.
2. It needs to provide a superior security system.
3. It must pass easily through firewalls without requiring any pinholes or special configuration of a factory's routers and switches. This means it must support an outbound, device-initiated connection.
4. It must use some sort of connected messaging. UDP is fine for applications where lost packets don’t matter, but that’s not the case when I’m moving pharmaceutical quality data to a Cloud database.
5. It must be scalable, robust, and not constrain the bandwidth of the underlying media or create latency issues.
6. It must be available in a public Cloud, private Cloud server, or some sort of hybrid Cloud. The system architect should have all options on the table.
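To illustrate requirement 1, here is a minimal sketch of how a factory-floor reading might be serialized as JSON with enough context (units, quality, timestamp) for a generic Cloud server to interpret it. The field names are my own illustration, not the standardized OPC UA JSON encoding.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of requirement 1: encode a reading as JSON
# with enough context for a generic Cloud server to interpret it.
# The field names are illustrative, not a standard encoding.

def encode_reading(tag, value, units, quality, timestamp):
    return json.dumps({
        "tag": tag,
        "value": value,
        "units": units,
        "quality": quality,          # e.g., "Good" or "Bad"
        "timestamp": timestamp,      # ISO 8601, UTC
    })

msg = encode_reading(
    "Line3/Reactor/Temperature",
    81.4,
    "degC",
    "Good",
    datetime(2024, 1, 15, 12, 0, tzinfo=timezone.utc).isoformat(),
)
print(msg)
```

Carrying the units and quality alongside the value is exactly the kind of data context that a bare MQTT payload does not guarantee.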

If you review this list, you'll see that almost everything we use today fails for one reason or another – including OPC UA. MQTT doesn't provide any common data context. HTTP/REST doesn't scale well and creates bandwidth issues. And OPC UA, the way it operates today, meets most of these requirements but doesn't provide that outbound connection.

System architects overcome that by using proxies that have a Client connection to the factory floor and another Client connection to the Cloud. Or, often, they use OPC UA-to-MQTT gateways. That's problematic for all sorts of reasons, most notably the deficiencies in the MQTT architecture.
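The proxy pattern can be sketched in a few lines: a process inside the plant acts as a client on both sides, polling the factory-floor server and pushing readings outward over a connection it initiates, so the firewall needs no inbound pinhole. `FloorSource` and `CloudSink` here are stand-ins for real OPC UA and Cloud client libraries, not actual APIs.

```python
# Minimal sketch of the proxy pattern: a process inside the plant
# polls the factory-floor server as a client and forwards readings
# over an outbound connection *it* initiates. FloorSource and
# CloudSink are stand-ins for real client libraries.

class FloorSource:
    """Stand-in for a client session to a factory-floor server."""
    def __init__(self, readings):
        self._readings = list(readings)

    def poll(self):
        """Return the next reading, or None when drained."""
        return self._readings.pop(0) if self._readings else None

class CloudSink:
    """Stand-in for an outbound (client-initiated) Cloud connection."""
    def __init__(self):
        self.published = []

    def publish(self, reading):
        self.published.append(reading)

def run_proxy(source, sink):
    """Forward every available reading from the floor to the Cloud."""
    while (reading := source.poll()) is not None:
        sink.publish(reading)

source = FloorSource([("Temperature", 81.4), ("Pressure", 2.1)])
sink = CloudSink()
run_proxy(source, sink)
print(sink.published)
```

Every hop here is client-initiated, which is what lets the data cross the plant firewall without special configuration – at the cost of one more moving part between the floor and the Cloud.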

Even though I love OPC UA, I have to admit that it requires a workaround when you want to move data to the Cloud. I wish I had a different argument to make, but that's just the truth.