I think what you mean is: "I should redesign the protocol so that it implements stream functionality on top of protobuf, INSTEAD OF expecting protobuf to implement it."

What I used to think was "App -> Protobuf -> Stream Functionality" [protobuf provides stream functionality directly; at the top, my app faces one large protobuf]. And I think what you mean is "App -> Stream Functionality -> Protobuf" [I have to implement the stream myself, but each stream packet is encoded with protobuf; at the top, my app faces many small stream packets, each one a protobuf].
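For what it's worth, the second layering can be sketched as a thin length-prefixed framing layer, where the payload of each packet would be one small serialized protobuf. Everything here is illustrative — the 4-byte prefix and the function names are framing choices I picked for the sketch, not anything protobuf itself provides:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

// Append one "packet" (e.g. one small serialized protobuf message) to a
// stream buffer, prefixed with its length so the receiver knows where it ends.
void writePacket(std::string& stream, const std::string& payload) {
    uint32_t len = payload.size();
    // 4-byte little-endian length prefix (a common framing choice).
    for (int i = 0; i < 4; ++i)
        stream.push_back(static_cast<char>((len >> (8 * i)) & 0xff));
    stream.append(payload);
}

// Read the next packet starting at `pos`; advances `pos` past it.
// Returns false when no complete packet remains in the buffer.
bool readPacket(const std::string& stream, size_t& pos, std::string& payload) {
    if (stream.size() - pos < 4) return false;
    uint32_t len = 0;
    for (int i = 0; i < 4; ++i)
        len |= static_cast<uint32_t>(static_cast<unsigned char>(stream[pos + i])) << (8 * i);
    pos += 4;
    if (stream.size() - pos < len) return false;
    payload = stream.substr(pos, len);
    pos += len;
    return true;
}
```

The app would then call `ParseFromString` on each payload as it arrives, instead of parsing one giant message.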


LinkedIn: http://www.linkedin.com/in/dirlt


On 2010/6/4 0:21, Jason Hsueh wrote:
This really needs to be handled in the application since protobuf has no idea which fields are expendable or can be truncated. What I was trying to suggest earlier was to construct many Req protobufs and serialize those individually. i.e., instead of 1 Req proto with 1,000,000 page ids, construct 1000 Req protos, each containing 1000 page ids. You can serialize each of those individually, stopping when you hit your memory budget.
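A rough sketch of that batching loop, under stand-in assumptions: `serializeBatch` takes the place of filling a Req proto and calling `req.SerializeToString()`, and the names, batch size, and budget check are all illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Stand-in for building one Req proto holding a batch of page ids and
// serializing it. In real code this would be req.SerializeToString(&out).
std::string serializeBatch(const std::vector<uint64_t>& ids) {
    std::string out;
    for (uint64_t id : ids) out.append(std::to_string(id)).push_back(',');
    return out;
}

// Split `pageIds` into batches of `batchSize`, serialize each batch
// individually, and stop once the serialized bytes reach `budgetBytes`.
std::vector<std::string> packUpToBudget(const std::vector<uint64_t>& pageIds,
                                        size_t batchSize, size_t budgetBytes) {
    std::vector<std::string> out;
    size_t used = 0;
    for (size_t i = 0; i < pageIds.size(); i += batchSize) {
        size_t end = std::min(i + batchSize, pageIds.size());
        std::vector<uint64_t> batch(pageIds.begin() + i, pageIds.begin() + end);
        std::string s = serializeBatch(batch);
        if (used + s.size() > budgetBytes) break;  // memory budget reached
        used += s.size();
        out.push_back(std::move(s));
    }
    return out;
}
```

Each element of the result can be sent (or concatenated) independently, so the full 1GB serialization never has to exist in memory at once.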

That being said, I would suggest redesigning your protocol so that you don't have to construct enormous messages. It sounds like what you really want is something like the streaming functionality in the rpc service - rather than sending one large protobuf you would want to stream each page id.
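The reason sending small pieces works at all is a wire-format property: concatenating the serializations of two messages merges their repeated fields, so the receiver can parse the concatenation as if it were one message. A hand-rolled illustration of that property for a repeated varint field (the field number 2 is arbitrary, and real code would of course use the protobuf library rather than encoding by hand):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Minimal varint encoder, only to illustrate the wire format.
void putVarint(std::string& out, uint64_t v) {
    while (v >= 0x80) {
        out.push_back(static_cast<char>((v & 0x7f) | 0x80));
        v >>= 7;
    }
    out.push_back(static_cast<char>(v));
}

// Encode `ids` as a non-packed repeated field: for each element, a key
// byte (field number 2, wire type 0 = varint) followed by the value.
std::string encodePageIds(const std::vector<uint64_t>& ids) {
    std::string out;
    for (uint64_t id : ids) {
        putVarint(out, (2u << 3) | 0);  // key: field 2, varint
        putVarint(out, id);
    }
    return out;
}
```

Because each repeated element is an independent key/value record, encoding the ids in pieces and concatenating the pieces yields byte-for-byte the same serialization as encoding them all at once — which is exactly what makes the chunked/streaming approach transparent to the parser.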

On Thu, Jun 3, 2010 at 6:27 AM, dirlt <dirtysalt1...@gmail.com <mailto:dirtysalt1...@gmail.com>> wrote:

    Thanks for your reply :). For the first one, I think your answer is
    quite clear.

    But for the second one, I want to produce the serialization of Req.

    Let me explain again :). Assume my application is like this:
    0. the server app wants to send 1,000,000 page ids to the client
    1. if the server app serializes all 1,000,000 page ids, the
    serialization will cost 1GB of memory

    2. but the server app can only allocate 100MB of memory, so obviously
    it can't send all 1,000,000 page ids to the client at once

    3. meanwhile, the server app's protobuf is very clever: it can
    calculate that "with 100MB of memory, it can hold 10,000 page ids at
    most". So protobuf tells the server: "Hi server, if you only have
    100MB of memory, I can only hold 10,000 page ids"

    4. so the server app knows this, and serializes only 10,000 page ids
    into memory instead of 1,000,000.

    I hope that clarifies it. If protobuf doesn't implement this, do you
    have any idea how to do it?


    On Jun 3, 12:40 am, Jason Hsueh <jas...@google.com
    <mailto:jas...@google.com>> wrote:
    > On Tue, Jun 1, 2010 at 6:21 AM, bnh <baoneng...@gmail.com
    <mailto:baoneng...@gmail.com>> wrote:
    > > I'm using protobuf as the protocol for a distributed system. But
    > > now I have some questions about protobuf:
    >
    > > a. Does protobuf provide an interface for a user-defined
    > > allocator? Sometimes I find that 'malloc' costs too much. I've
    > > tried TCmalloc, but I think I can optimize the memory allocation
    > > according to my application.
    >
    > No, there are no hooks for providing an allocator. You'd need to
    > override malloc the way TCmalloc does if you want to use your own
    > allocator.
    >
    > > b. Does protobuf provide a way to serialize a class/object
    > > partially [or do you have any ideas about it]? My application is
    > > very sensitive to memory usage. For example, a class:
    >
    > > class Req {
    > >   int userid;
    > >   vector<PageID> pageid;
    > > };
    >
    > > I want to pack 1000 page ids into the Req. But if I pack all of
    > > them, the Req's size is about 1GB [hypothetically]. I only have
    > > 100MB of memory, so I plan to pack as many page ids as possible
    > > until the memory usage of Req is about 100MB ["serialize the
    > > object partially according to memory usage"].
    >
    > Are you talking about producing the serialization of Req, with a
    > large number of PageIds, or parsing such a serialization into an
    > in-memory object? For the former, you can serialize in smaller
    > pieces, and just concatenate the serializations:
    > http://code.google.com/apis/protocolbuffers/docs/encoding.html#optional
    > For the latter, there is no way for you to tell the parser to stop
    > parsing when memory usage reaches a certain limit. However, you can
    > do this yourself if you split the serialization into multiple
    > pieces.




--
You received this message because you are subscribed to the Google Groups "Protocol Buffers" group.
To post to this group, send email to proto...@googlegroups.com.
To unsubscribe from this group, send email to protobuf+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/protobuf?hl=en.
