Build an OPC UA bridge in Python
During development, one often comes across a gap in a project that is simple to fix in essence, but where other factors are at play. One such example from my work is the need for an OPC UA bridge. In this post we will explore the use cases I have come across and build an OPC UA bridge in Python.
Use cases for OPC UA
During the development of a project, the first challenge is to get the basics in place and test whether the proposed solution will work. Other times, one needs a temporary solution during a migration process.
The technical challenges include network configuration, admin restrictions, and software functionality restrictions. There are also the three classic project management constraints to consider: scope, time, and cost. So, one needs to prove quickly that the solution works before spending too much on it, be it people hours and/or software costs.
There are several good OPC UA tools out there that can be used in many configurations. They do, however, take a bit of time to figure out and require the purchase of a license (some offer an hour or two of runtime as a free trial). They may be a good start, but sometimes you just need something quick and customisable.
Historian migration use case
The first use case I encountered was switching over from one type of historian to another, with both historians needing to receive the same data during the transition. The data was coming in via JSON to a FastAPI webserver that I built in a previous post. While one could set up a proper API connection, that would take considerable effort for a temporary solution.
I decided to add a function to the FastAPI server I created previously that writes the tag values directly to an OPC UA server. Both historians read from this server during the migration, and it will remain the permanent connection for the new historian.
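As a rough sketch, the extra function could look something like the following, using the asyncua package. The endpoint URL, route, and payload model are hypothetical placeholders rather than the actual details from the earlier post.

```python
from asyncua import Client
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
OPC_URL = "opc.tcp://datahub:4840"  # hypothetical OPC UA server endpoint


class TagValue(BaseModel):
    node_id: str  # e.g. "ns=2;s=Reactor.Temperature"
    value: float


@app.post("/tags")
async def write_tag(tag: TagValue):
    # Connect, write the incoming value to the matching node, disconnect.
    # A long-lived shared client would scale better; this keeps the sketch simple.
    async with Client(url=OPC_URL) as client:
        node = client.get_node(tag.node_id)
        await node.write_value(tag.value)
    return {"status": "ok"}
```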
The temporary solution is working well and caters for about 300 tags coming in every minute. I suspect that scaling may become an issue once you reach a couple of thousand tags and higher data acquisition frequencies. I have not tried to break it, yet 🙂
New historian configuration use case
The second use case was setting up a new historian (unrelated to the use case above) for a new R&D facility. The facility has two OPC UA servers from two different PLCs, which are also on two different network ranges. As it is an R&D facility the network infrastructure is still being configured, but the team wanted to start historising data to perform analysis on the process.
The current setup is a physical data acquisition PC at the facility with three network interfaces: two for the two PLCs and one for the local network. On the local network we have a physical PC that we use as the historian. During the next phase, some servers will be virtualised to provide redundancy.
The challenge is that the historian can reach the data acquisition PC, but not the PLC OPC UA servers directly. Thus the need for an OPC UA bridge on the data acquisition PC to bridge the gap. Again, the key consideration was to get data flowing quickly with minimal effort. The historian's OPC UA connector was also new to me, so I had to figure out the configuration it needed.
Building the Python interface
We will create an OPC UA client that reads tags from the PLC OPC UA servers and writes the tag values to a local OPC UA server. The historian will be able to connect to the local OPC UA server using its native connector. With this setup, when switching over to the final production servers, only the IP address of the connection needs to be changed on the historian side; all other configuration should remain the same.
We will build the OPC UA bridge using the opcua-asyncio library, which provides asyncio-based asynchronous OPC UA client and server implementations.
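As a quick illustration of the client side, a one-off read might look like this (the endpoint URL and node id are made-up placeholders):

```python
import asyncio
from asyncua import Client

async def read_once():
    # Connect to a (hypothetical) PLC OPC UA server and read a single node
    async with Client(url="opc.tcp://192.168.10.5:4840") as client:
        node = client.get_node('ns=3;s="DB_Reactor"."Temperature"')
        print(await node.read_value())

asyncio.run(read_once())
```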
First, we create a tags.json file that contains the source node ids and the destination node ids. The source node ids include characters that are not compatible with the historian connector, so the tags need to be renamed. The file also makes adding new tags quick and easy.
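The exact layout is up to you. A minimal version might look like the following, with made-up Siemens-style source node ids (the embedded quotes are the kind of characters the historian connector rejects) and a server field, which I add here so the bridge knows which PLC each tag lives on:

```json
[
  {
    "server": "opc.tcp://192.168.10.5:4840",
    "source": "ns=3;s=\"DB_Reactor\".\"Temperature\"",
    "destination": "Reactor.Temperature"
  },
  {
    "server": "opc.tcp://192.168.20.5:4840",
    "source": "ns=3;s=\"DB_Utilities\".\"Flow\"",
    "destination": "Utilities.Flow"
  }
]
```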
The main function connects to the source and destination OPC UA servers. It then creates a subscription handler and subscribes to all the source node ids that we defined in the tags.json file, with a publishing interval of 5 seconds.
Once a value changes, the subscription handler is triggered. The handler gets the source value, looks up the destination node id based on the source node id, and writes the value to the local OPC UA server.
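In code, the handler can be as small as the sketch below, matching the mapping built in the main function above:

```python
class SubHandler:
    """Forwards data-change notifications from the sources to the local server."""

    def __init__(self, mapping):
        # mapping: source node id string -> destination Node on the local server
        self.mapping = mapping

    async def datachange_notification(self, node, val, data):
        # asyncua calls this on every value change; look up the destination
        # node via the source node id and write the new value across
        dest = self.mapping.get(node.nodeid.to_string())
        if dest is not None:
            await dest.write_value(val)
```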
I have tested the setup with a couple of hundred tags and everything works well. There are few tags as this is an R&D facility, so I have not stress-tested the solution with a couple of thousand tags, but it is good enough for now.
Running Python offline
Another challenge on this network is that the server has no internet access. This means we need to download all the packages on a PC with internet access and copy the files over to the offline PC. I use pipenv to manage my Python environments, and this is quite easy to achieve following the steps below.
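The steps are roughly as follows; this is a sketch assuming pip is available on both PCs and a recent pipenv (older versions export the lock file with pipenv lock -r instead of pipenv requirements):

```bash
# On the PC with internet access:
pipenv requirements > requirements.txt          # export pinned dependencies
pip download -r requirements.txt -d ./packages  # download wheels into ./packages

# Copy requirements.txt and ./packages to the offline PC, then:
pip install --no-index --find-links=./packages -r requirements.txt
```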
Conclusion
The ability to create a quick OPC UA bridge, or even just a client or server, sped up my workflow and deliverables without spending too much effort or cost on additional software. With the ability to quickly test the data flow, we were also able to implement firewall rules for the new virtualised environment. For the final solution we will be reading the OPC UA servers directly using the native historian connector, once the PLCs are on the correct network range.