gRPC has been coming up a lot in my circles recently. NI is integrating it into a large number of its projects, and so the broader NI ecosystem is seeing the ripple effect of that.
It is a remote procedure call (RPC) framework conceived by Google that allows different systems to communicate over the network, with one system requesting that the other perform certain procedures. The interface is defined in a definition file, which is used to generate template code for clients and servers.
While I had been aware of its existence, I hadn't looked at it in great detail. I did some basic testing of the NI tool earlier in the year, which showed some interesting benefits for instrument control: in particular, the promise of reliable and high-performance communication with minimal programming effort.
I've been working on a new project to add a remote interface to an instrument. For the first implementation, I reached for a basic TCP protocol that I've used a lot, but the server (written in Rust) was designed so that the protocol layer could be swapped out easily.
So I used the tonic library in Rust to create a gRPC server, and the results were great.
Why gRPC for Instrument Control
There are a few standout features that caught my eye:
Proto Definition Files & Code Generation
gRPC was designed to make the interface definition easy and accessible. At the core of this is the proto file.
This is a separate file which defines the data types and service calls available on the server. You can then generate the server or client code from the proto file.
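To make that concrete, here is a hypothetical proto file for a simple instrument. The service, message, and field names are illustrative inventions for this post, not from my actual project:

```proto
// instrument.proto (hypothetical example)
syntax = "proto3";

package instrument;

// Each rpc here becomes a method in the generated
// client and server code.
service Instrument {
  // A typical call-and-return request.
  rpc Configure (ConfigRequest) returns (ConfigReply);

  // A server-streaming call: the server sends Measurement
  // messages until it closes the stream.
  rpc StreamMeasurements (StreamRequest) returns (stream Measurement);
}

message ConfigRequest {
  double sample_rate = 1;
  uint32 channel = 2;
}

message ConfigReply {
  bool ok = 1;
}

message StreamRequest {
  uint32 channel = 1;
}

message Measurement {
  double timestamp = 1;
  double value = 2;
}
```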
The code generation means that, rather than building libraries for every eventual client, I can now provide the proto files to anyone who might need to talk to the instrument, and they can generate code in their preferred software platform.
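With tonic, for example, the generation step is just a small build script (this assumes the hypothetical proto file above and tonic-build as a build dependency):

```rust
// build.rs — tonic generates Rust client and server code
// from the proto file at build time.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("proto/instrument.proto")?;
    Ok(())
}
```

For the proto above, this produces an `Instrument` trait to implement on the server side and an `InstrumentClient` for callers.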
You can also immediately use test clients like BloomRPC without writing any software (though beware: the streaming performance of such tools will probably not be very high).
Streaming APIs
As well as the typical call-and-return responses, gRPC supports streaming responses. I believe these are intended more for asynchronous loading of a large result set, but my testing showed this worked well as long as the libraries you are using can detect a closed stream. I still want to test this more, and I'm not sure if there is a gotcha I'm missing since this doesn't seem to be quite what they were designed for, but we can combine call-and-return configuration messages with streamed data messages once the task starts.
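As a minimal server-side sketch (assuming the hypothetical proto above plus the tonic, tokio, and tokio-stream crates), a handler can push measurements into a channel and close the stream simply by dropping the sender when the task finishes:

```rust
// main.rs — hypothetical tonic server combining a call-and-return
// configure step with a server-streaming measurement call.
use std::pin::Pin;
use tokio_stream::{wrappers::ReceiverStream, Stream};
use tonic::{transport::Server, Request, Response, Status};

pub mod instrument {
    tonic::include_proto!("instrument"); // from build.rs codegen
}
use instrument::instrument_server::{Instrument, InstrumentServer};
use instrument::{ConfigReply, ConfigRequest, Measurement, StreamRequest};

#[derive(Default)]
struct MyInstrument;

#[tonic::async_trait]
impl Instrument for MyInstrument {
    // Typical call-and-return configuration message.
    async fn configure(
        &self,
        _req: Request<ConfigRequest>,
    ) -> Result<Response<ConfigReply>, Status> {
        Ok(Response::new(ConfigReply { ok: true }))
    }

    type StreamMeasurementsStream =
        Pin<Box<dyn Stream<Item = Result<Measurement, Status>> + Send>>;

    // Streaming measurement data once the task starts.
    async fn stream_measurements(
        &self,
        _req: Request<StreamRequest>,
    ) -> Result<Response<Self::StreamMeasurementsStream>, Status> {
        let (tx, rx) = tokio::sync::mpsc::channel(64);
        tokio::spawn(async move {
            // Stand-in for reading from real hardware; the stream
            // length is not known to the client in advance.
            for i in 0..1000 {
                let m = Measurement { timestamp: i as f64, value: 42.0 };
                if tx.send(Ok(m)).await.is_err() {
                    break; // client disconnected
                }
            }
            // Dropping tx here ends the stream cleanly.
        });
        Ok(Response::new(Box::pin(ReceiverStream::new(rx))))
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    Server::builder()
        .add_service(InstrumentServer::new(MyInstrument::default()))
        .serve("127.0.0.1:50051".parse()?)
        .await?;
    Ok(())
}
```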
Whether this worked for an unknown stream length was a key part of my test, and I'm pleased to say it looks very good.
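On the client side, tonic's generated stream returns `None` once the server closes it, so a simple loop copes with the unknown length. Again, the names come from the hypothetical proto above:

```rust
// client.rs — hypothetical client consuming the stream until
// the server closes it.
use instrument::instrument_client::InstrumentClient;
use instrument::StreamRequest;

pub mod instrument {
    tonic::include_proto!("instrument");
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut client = InstrumentClient::connect("http://127.0.0.1:50051").await?;

    let mut stream = client
        .stream_measurements(StreamRequest { channel: 0 })
        .await?
        .into_inner();

    // message() yields Some(item) per measurement and None once
    // the server drops its sender, i.e. the stream is closed.
    while let Some(m) = stream.message().await? {
        println!("t={} v={}", m.timestamp, m.value);
    }
    println!("stream closed by server");
    Ok(())
}
```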
Performance
The project I tested on doesn't have demanding performance requirements, so I haven't stressed this personally. However, evidence from other users suggests that very high throughput is achievable with the streaming APIs.
Battle-Tested
Compared to building your own TCP protocol, gRPC is battle-tested. Its libraries are widely used across many languages, so I would expect a higher level of testing, resilience, and security than anything I would roll myself.
Downsides
Why would I avoid it? Where it is well supported, I don't see many reasons to, but these are the things I will keep in mind:
- Because, like HTTP, it has no built-in session concept, you must implement your own logic layer to manage multiple client connections: either blocking additional clients by some identifier or timeout, or tracking the different clients yourself (see the sketch after this list).
- I suspect that at the very top end of performance you will see a difference between this and raw TCP, but I think that bar is pretty high. If I get a higher-performance project, though, I will test this first to be sure.
- With LabVIEW, the server support is still in development and doesn't appear ready for production yet.
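To illustrate the first point, here is a rough sketch of one approach with tonic: an interceptor that lets the first client claim the instrument and rejects others. The `client-id` metadata key and the claiming behaviour are my own invention, not something gRPC provides out of the box:

```rust
use std::sync::{Arc, Mutex};
use tonic::{service::Interceptor, Request, Status};

// Lets the first "client-id" seen claim the instrument and rejects
// any other id. A real version would also release the claim on
// disconnect or after a timeout.
#[derive(Clone, Default)]
struct SingleClientGate {
    active: Arc<Mutex<Option<String>>>,
}

impl Interceptor for SingleClientGate {
    fn call(&mut self, req: Request<()>) -> Result<Request<()>, Status> {
        let id = req
            .metadata()
            .get("client-id")
            .and_then(|v| v.to_str().ok())
            .ok_or_else(|| Status::unauthenticated("missing client-id"))?
            .to_owned();

        let mut active = self.active.lock().unwrap();
        if active.is_none() {
            // First caller claims the instrument.
            *active = Some(id.clone());
        }
        if active.as_deref() == Some(id.as_str()) {
            Ok(req)
        } else {
            Err(Status::resource_exhausted("instrument in use"))
        }
    }
}
```

The gate would then be attached when registering the service, e.g. `InstrumentServer::with_interceptor(MyInstrument::default(), SingleClientGate::default())`.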
Summary
gRPC looks like a great option for complex instrument control with key benefits over TCP or HTTP.
On many projects, I expect significant time savings in protocol development, testing, and client library development. It will also make it easier to interface with different teams using varied technology stacks.
For this test project, I can currently switch between TCP and gRPC control, and I expect I'll switch fully to gRPC after some more testing.