I have a large message type with relatively complicated fields (nested 
repeated bytes, etc.). The recent change to a 4 MB frame limit broke a lot 
of things and is making life rather difficult for us. We just happen to 
have a lot of messages in the 4-30 MB range. Note that there's currently no 
way to increase the 4 MB limit on the Python server side 
(github.com/grpc/grpc/issues/7927).

Our current thinking is to take existing messages that could be > 4MB:

message Foo {
  bytes file1 = 1;                // chunkable
  bytes file2 = 2;                // chunkable
  repeated bytes listOfFiles = 3; // chunkable
  int64 someField = 4;
}

and add a union-like serialized bytes field that stays empty except when 
used for gRPC transport:

message Foo {
  bytes file1 = 1;                // chunkable
  bytes file2 = 2;                // chunkable
  repeated bytes listOfFiles = 3; // chunkable
  int64 someField = 4;
  bytes serialized = 9999;
}

So we'd do something like:

# server side
f = Foo(file1=someData, file2=someData2)
g = Foo()  # empty Foo
g.serialized = f.SerializeToString()  # all the other fields of g stay empty
server.Send(g)  # use the streaming API calls with a chunk size of 4 MB
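
Concretely, the sending side could be a generator feeding a server-streaming 
response. A minimal sketch, assuming a hypothetical stream_foo helper and a 
CHUNK_SIZE kept slightly under 4 MB to leave headroom for the field tag and 
length prefix:

CHUNK_SIZE = 4 * 1024 * 1024 - 1024  # just under 4 MB; leave room for framing overhead

def stream_foo(f):
    # One Foo per chunk, each carrying a slice of the full serialized payload.
    payload = f.SerializeToString()
    for offset in range(0, len(payload), CHUNK_SIZE):
        g = Foo()
        g.serialized = payload[offset:offset + CHUNK_SIZE]
        yield g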

Then on the client side we'd deserialize this similarly via original_f = 
Foo() followed by original_f.ParseFromString(new_g.serialized) to get back 
our message.
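
For completeness, reassembly on the client could look like this (again a 
sketch; stub.GetFoo stands in for whatever server-streaming method we'd 
define):

# client side
buf = bytearray()
for g in stub.GetFoo(request):
    buf.extend(g.serialized)
original_f = Foo()
original_f.ParseFromString(bytes(buf))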

Granted, we'd need to hold everything in RAM along the way, but that's not 
a big issue for us. There's also a fair bit of boilerplate involved (and 
we'd need to add the rather unfortunate serialized field to all our 
messages).

Is there a better way?
