Protobuf is not designed to handle large messages; see https://developers.google.com/protocol-buffers/docs/techniques#large-data
Some details: Suppose we create a 100 MB protobuf message. Protobuf and gRPC internally create multiple 100 MB buffers, and these buffers won't be released until the entire message is processed. If we instead stream the message as one hundred 1 MB sub-messages, the buffers for the earlier sub-messages can be released while the later sub-messages are being processed.

Tsz-Wo

On Wed, Sep 9, 2020 at 11:10 AM Rui Wang <[email protected]> wrote:
>
> Thanks Tsz-Wo!
>
> Can you share why that API is better for large messages? Are there some
> links or articles that I can read to understand why streaming large
> messages is better (maybe in terms of memory requirements)?
>
>
> -Rui
>
> On Wed, Sep 9, 2020 at 10:32 AM Tsz Wo Sze <[email protected]> wrote:
>
> > It is for streaming large messages from client to server in order to
> > avoid allocating large buffers. In Ozone, we often send large messages,
> > which leads to high memory requirements.
> >
> > Tsz-Wo
> >
> > On Wed, Sep 9, 2020 at 10:20 AM Rui Wang <[email protected]> wrote:
> >
> > > Friendly raise of attention on this thread.
> > >
> > >
> > > -Rui
> > >
> > > On Fri, Sep 4, 2020 at 2:58 PM Rui Wang <[email protected]> wrote:
> > >
> > > > Hi Community,
> > > >
> > > > Out of curiosity, what is the context for adding StreamApi support
> > > > in Ratis [1]? This interface looks relatively new. Is there known
> > > > usage of StreamApi?
> > > >
> > > > [1]:
> > > > https://github.com/apache/incubator-ratis/blob/master/ratis-client/src/main/java/org/apache/ratis/client/api/StreamApi.java#L29
> > > >
> > > > -Rui
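The chunking arithmetic described above can be sketched as follows. This is a minimal, hypothetical illustration (the class and constant names are invented, not Ratis or gRPC APIs): splitting a 100 MB payload into 1 MB sub-messages bounds the per-buffer allocation by the chunk size rather than the full message size.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of chunked streaming: instead of one message-sized
// buffer, the sender emits fixed-size sub-messages, so each buffer can be
// released as soon as its chunk has been processed.
public class ChunkedStreamSketch {
    static final int CHUNK_SIZE = 1 << 20; // 1 MB sub-messages

    // Returns the sizes of the sub-messages a payload of totalBytes
    // would be split into; the last chunk may be smaller.
    static List<Integer> chunkSizes(int totalBytes) {
        List<Integer> sizes = new ArrayList<>();
        for (int offset = 0; offset < totalBytes; offset += CHUNK_SIZE) {
            sizes.add(Math.min(CHUNK_SIZE, totalBytes - offset));
        }
        return sizes;
    }

    public static void main(String[] args) {
        int total = 100 << 20; // a 100 MB message
        List<Integer> sizes = chunkSizes(total);
        // One hundred 1 MB sub-messages; peak buffer need is 1 MB, not 100 MB.
        System.out.println(sizes.size());
        System.out.println(sizes.get(0));
    }
}
```

The memory win is that only the chunk currently in flight must be buffered; earlier chunks are eligible for release while later ones are still being produced or consumed.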
