Hello,

It was quite interesting. On our side, we have done some work in the past with gRPC, and with Envoy proxy translating calls from REST to gRPC; the problem there was storing browser settings remotely.
- https://blog.envoyproxy.io/envoy-and-grpc-web-a-fresh-new-alternative-to-rest-6504ce7eb880
- https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/grpc_json_transcoder_filter
I hadn't realized the need for an 'in-memory' gRPC, since an interprocess shared-memory message queue abstraction would suffice:
1. serialize the message with protobuf into the queue
2. wake up the receiver
3. the receiver reads the message from a queue slot and deserializes the protobuf

This can happen without passing through the slower Unix domain sockets or network sockets. It might be an interesting project to provide an abstraction along these lines:
- local://endpoint:port - does local RPC via an interprocess message queue
- remote://endpoint:port - does remote RPC via gRPC

As a side note, I would like to share this: https://www.alluxio.io/blog/moving-from-apache-thrift-to-grpc-a-perspective-from-alluxio/ It more or less matches our experience, and it's the reason I asked whether you needed streaming in gRPC. These are the two main strong points for gRPC vs. Thrift:
- Streaming
- Use of HTTP/2 (https://grpc.io/blog/grpc-on-http2/)

Just 1c,
Giorgio
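To make the three steps concrete, here is a minimal single-slot sketch in Python. It is only an illustration of the shape of the idea: a thread stands in for the second process, a plain bytearray stands in for the shared-memory segment, and pickle stands in for protobuf serialization (both just produce bytes). A real implementation would use an actual shared-memory segment (e.g. multiprocessing.shared_memory) and protobuf's SerializeToString/ParseFromString, plus a ring of slots instead of one.

```python
import pickle
import struct
import threading

SLOT_SIZE = 4096              # one fixed-size queue slot
slot = bytearray(SLOT_SIZE)   # stand-in for a shared-memory segment
cond = threading.Condition()  # wakeup mechanism for the receiver
ready = False                 # "a message is in the slot" flag

def send(msg):
    """Steps 1 and 2: serialize into the slot, then wake the receiver."""
    global ready
    payload = pickle.dumps(msg)                  # step 1: serialize
    slot[0:4] = struct.pack("I", len(payload))   # length prefix
    slot[4:4 + len(payload)] = payload
    with cond:
        ready = True
        cond.notify()                            # step 2: wake up the receiver

def receive():
    """Step 3: sleep until woken, then deserialize from the slot."""
    with cond:
        cond.wait_for(lambda: ready)
    (length,) = struct.unpack("I", bytes(slot[0:4]))
    return pickle.loads(bytes(slot[4:4 + length]))  # step 3: deserialize

received = []
t = threading.Thread(target=lambda: received.append(receive()))
t.start()
send({"method": "Echo", "body": "hello"})
t.join()
print(received[0])
```

The length prefix matters because the slot is fixed-size: the receiver must know how many of the slot's bytes are actual payload before handing them to the deserializer.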
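The local:// vs. remote:// split above could be as simple as a dispatch on the URL scheme. A sketch, where LocalQueueChannel and GrpcChannel are hypothetical stubs (a real remote channel would wrap something like grpc.insecure_channel):

```python
from urllib.parse import urlparse

class LocalQueueChannel:
    """Stub: would carry RPCs over an interprocess message queue."""
    def __init__(self, host, port):
        self.host, self.port = host, port

class GrpcChannel:
    """Stub: would wrap a real gRPC channel over HTTP/2."""
    def __init__(self, host, port):
        self.host, self.port = host, port

def make_channel(target):
    """Pick a transport from the URL scheme; same RPC interface either way."""
    parsed = urlparse(target)
    if parsed.scheme == "local":
        return LocalQueueChannel(parsed.hostname, parsed.port)
    if parsed.scheme == "remote":
        return GrpcChannel(parsed.hostname, parsed.port)
    raise ValueError(f"unsupported scheme: {parsed.scheme!r}")

print(type(make_channel("local://cache:9000")).__name__)   # LocalQueueChannel
print(type(make_channel("remote://cache:9000")).__name__)  # GrpcChannel
```

The point of the abstraction is that caller code never changes: only the target string decides whether a call crosses the network or just a queue in shared memory.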
