Thanks for your reply :).

Actually, in my application I CAN allocate enough RAM for the in-memory message object. What I really want is to restrict the size of the messages sent/received between client and server. For now, ZeroCopyOutputStream works.

But my app is essentially an RPC system: each message that is sent/received must parse into a complete but smaller object. I don't think ZeroCopyOutputStream alone can guarantee that.
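
For the receive side, what I have in mind is a sketch like the following, assuming CodedInputStream::SetTotalBytesLimit behaves as documented (the limit values, socket_fd, and Req here are illustrative):

    #include <google/protobuf/io/coded_stream.h>
    #include <google/protobuf/io/zero_copy_stream_impl.h>

    using google::protobuf::io::CodedInputStream;
    using google::protobuf::io::FileInputStream;

    // Sketch: refuse to finish parsing a message larger than the limit.
    bool ParseWithLimit(int socket_fd, Req* req) {
      FileInputStream raw(socket_fd);
      CodedInputStream coded(&raw);
      // Hard limit 100MB, warning threshold 64MB -- both illustrative.
      coded.SetTotalBytesLimit(100 << 20, 64 << 20);
      return req->ParseFromCodedStream(&coded);  // false once the limit is hit
    }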

Thanks for your hints. I will rethink the message definitions :).

LinkedIn: http://www.linkedin.com/in/dirlt


On 2010/6/4 1:39, Jason Hsueh wrote:
Ah, one option I missed is using an implementation of io::ZeroCopyOutputStream like io::FileOutputStream, which uses a fixed size buffer and flushes data to the file (socket) when the buffer is full. Then serializing a large message won't consume a lot of memory. Perhaps this is what you really wanted, rather than truncating the message?
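
For instance, a minimal sketch (socket_fd, Req, and the 8KB buffer size are illustrative):

    #include <google/protobuf/io/zero_copy_stream_impl.h>

    // Sketch: serialize straight to a socket through a fixed-size buffer,
    // so the full serialized form never sits in memory at once.
    bool SendReq(int socket_fd, const Req& req) {
      google::protobuf::io::FileOutputStream out(socket_fd, 8192 /* buffer */);
      if (!req.SerializeToZeroCopyStream(&out)) return false;
      return out.Flush();  // push any remaining buffered bytes to the fd
    }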

However, you still need enough ram for the in-memory message object, and those are typically larger than the serialized form. Also, this approach may or may not work with your RPC system. It is probably still worthwhile for you to look at reworking your message definition so that you transmit smaller messages.

On Thu, Jun 3, 2010 at 9:44 AM, [email protected] wrote:

    I think what you mean is: "I should design a protocol that implements
    the streaming functionality on top of protobuf, INSTEAD OF expecting
    protobuf to implement it."

    What I originally had in mind was "App -> Protobuf -> Stream
    Functionality" [protobuf provides the streaming directly; at the top,
    my app faces one large protobuf].
    And I think what you mean is "App -> Stream Functionality -> Protobuf"
    [I implement the streaming myself, but each stream packet is a
    protobuf. At the top, my app faces many small stream packets, each
    encoded as a protobuf].
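
    A rough sketch of that second layering, using varint length prefixes
    as the framing (the framing choice and the Packet message are my own
    illustration, not something protobuf prescribes):

        #include <google/protobuf/io/coded_stream.h>
        #include <google/protobuf/io/zero_copy_stream_impl.h>

        using namespace google::protobuf;

        // Writer: length-prefix each small protobuf so the reader can
        // find packet boundaries in the byte stream.
        void WritePacket(io::ZeroCopyOutputStream* raw, const Packet& msg) {
          io::CodedOutputStream coded(raw);
          coded.WriteVarint32(msg.ByteSize());
          msg.SerializeToCodedStream(&coded);
        }

        // Reader: read one length prefix, then parse exactly that many
        // bytes as one packet.
        bool ReadPacket(io::ZeroCopyInputStream* raw, Packet* msg) {
          io::CodedInputStream coded(raw);
          uint32 size;
          if (!coded.ReadVarint32(&size)) return false;
          io::CodedInputStream::Limit limit = coded.PushLimit(size);
          if (!msg->ParseFromCodedStream(&coded)) return false;
          coded.PopLimit(limit);
          return true;
        }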

    LinkedIn: http://www.linkedin.com/in/dirlt


    On 2010/6/4 0:21, Jason Hsueh wrote:
    This really needs to be handled in the application since protobuf
    has no idea which fields are expendable or can be truncated. What
    I was trying to suggest earlier was to construct many Req
    protobufs and serialize those individually. i.e., instead of 1
    Req proto with 1,000,000 page ids, construct 1000 Req protos,
    each containing 1000 page ids. You can serialize each of those
    individually, stopping when you hit your memory budget.
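
    A minimal sketch of that chunking (Req, userid, and pageids follow
    the example in this thread; the chunk size of 1000 comes from the
    numbers above):

        #include <algorithm>
        #include <string>
        #include <vector>

        // Split one huge request into many small Req protos and
        // serialize each chunk on its own. The caller can stop
        // consuming chunks whenever a memory budget is reached.
        std::vector<std::string> SerializeInChunks(
            int userid, const std::vector<PageID>& pageids) {
          std::vector<std::string> chunks;
          for (size_t start = 0; start < pageids.size(); start += 1000) {
            Req req;
            req.set_userid(userid);
            size_t end = std::min(start + 1000, pageids.size());
            for (size_t i = start; i < end; ++i)
              req.add_pageid(pageids[i]);
            chunks.push_back(std::string());
            req.SerializeToString(&chunks.back());
          }
          return chunks;
        }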

    That being said, I would suggest redesigning your protocol so
    that you don't have to construct enormous messages. It sounds
    like what you really want is something like the streaming
    functionality in the rpc service - rather than sending one large
    protobuf you would want to stream each page id.

    On Thu, Jun 3, 2010 at 6:27 AM, dirlt <[email protected]> wrote:

        Thanks for your reply :). For the first one, I think your answer
        is quite clear.

        But for the second one, I want to produce the serialization of Req.

        Let me explain again :). Assume my application is like this:
        0. The server app wants to send 1,000,000 page ids to the client.
        1. If the server app packs all 1,000,000 page ids and serializes
        them, it will cost 1GB of memory.
        2. But the server app can only allocate 100MB of memory, so
        obviously it can't send all 1,000,000 page ids to the client.
        3. Meanwhile, suppose the server app's protobuf is very clever.
        It [protobuf] can calculate that "if the server app has 100MB, it
        can hold 100,000 page ids at most". So protobuf tells the server:
        "Hi server, if you only have 100MB of memory, I can only hold
        100,000 page ids."
        4. So the server app knows this, and it serializes only 100,000
        page ids into memory instead of 1,000,000 (a sketch of this idea
        follows below).
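
        In code, what I imagine the app doing is something like this
        sketch (the budget constant is illustrative, and ByteSize()
        re-walks the whole message on every call, so this is quadratic
        and only meant to show the idea):

            const int kBudgetBytes = 100 << 20;  // 100MB, illustrative
            Req req;
            req.set_userid(userid);
            for (size_t i = 0; i < all_pageids.size(); ++i) {
              req.add_pageid(all_pageids[i]);
              if (req.ByteSize() > kBudgetBytes) {   // O(message) per call
                req.mutable_pageid()->RemoveLast();  // drop the overflowing id
                break;
              }
            }
            // req now holds as many page ids as the budget allows.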

        I hope that clarifies it. If protobuf doesn't implement this, do
        you have any ideas about it?


        On Jun 3, 12:40 am, Jason Hsueh <[email protected]> wrote:
        > On Tue, Jun 1, 2010 at 6:21 AM, bnh <[email protected]> wrote:
        > > I'm using protobuf as the protocol for a distributed system.
        > > But now I have some questions about protobuf:
        >
        > > a. Does protobuf provide an interface for a user-defined
        > > allocator? Sometimes I find 'malloc' costs too much. I've
        > > tried TCMalloc, but I think I can optimize the memory
        > > allocation for my application.
        >
        > No, there are no hooks for providing an allocator. You'd need to
        > override malloc the way TCMalloc does if you want to use your
        > own allocator.
        >
        > > b. Does protobuf provide a way to serialize a class/object
        > > partially [or do you have any ideas about it]? My application
        > > is very sensitive to memory usage. For example, a class:
        > > class Req {
        > >   int userid;
        > >   vector<PageID> pageid;
        > > };
        >
        > > I want to pack 1000 page ids into the Req. But if I pack all
        > > of them, the Req's size is about 1GB [hypothetically]. I only
        > > have 100MB of memory, so I plan to pack as many page ids as
        > > possible until the memory usage of Req is about 100MB
        > > ['serialize the object partially according to memory usage'].
        >
        > Are you talking about producing the serialization of Req, with a
        > large number of PageIds, or parsing such a serialization into an
        > in-memory object? For the former, you can serialize in smaller
        > pieces, and just concatenate the serializations:
        > http://code.google.com/apis/protocolbuffers/docs/encoding.html#optional
        > For the latter, there is no way for you to tell the parser to
        > stop parsing when memory usage reaches a certain limit. However,
        > you can do this yourself if you split the serialization into
        > multiple pieces.
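        >
        > A minimal sketch of the first approach (Req as in your class
        > sketch, with pageid as a repeated field; ids and the chunk size
        > are illustrative):
        >
        >     std::string out;
        >     Req chunk;
        >     for (size_t start = 0; start < ids.size(); start += 1000) {
        >       chunk.Clear();
        >       size_t end = std::min(start + 1000, ids.size());
        >       for (size_t i = start; i < end; ++i)
        >         chunk.add_pageid(ids[i]);
        >       chunk.AppendToString(&out);  // appends; does not clear 'out'
        >     }
        >     // 'out' parses as ONE Req with all the page ids, because
        >     // concatenated serializations of repeated fields concatenate.
        >     // Singular fields like userid take the last value seen, so
        >     // set them in only one chunk (or identically in all).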
