Thank you for your reply.

Yes, I understand that this requirement would add a great deal of complication. I just wanted to know whether the designers had any ideas about how to implement it.

Actually, I have tested the performance of protobuf, Thrift, and some other open-source products. protobuf's serialization speed is higher and its serialized size is smaller. I think protobuf is a great open-source project. :)

LinkedIn: http://www.linkedin.com/in/dirlt


On 2010/6/4 1:40, Kenton Varda wrote:
That's correct. Sorry, but encoding large chunks of data that cannot be parsed or serialized all at once is an explicit non-goal of protocol buffers. Fulfilling such needs involves adding a great deal of complication to the system. Your case may seem relatively simple, but other, more complicated cases require things like random access, search indexes, etc. We chose to focus only on small messages.

It is certainly possible and useful to use protocol buffers as a building block when designing a format for large data sets.
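One common building-block design of the kind Kenton describes (not part of the protobuf library itself; the helper names here are hypothetical) is to write each small serialized message prefixed by its length as a varint, so a large data set becomes a stream of independently parseable records:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Append a base-128 varint (the protobuf wire-format integer encoding).
void AppendVarint(std::string* out, uint64_t n) {
  while (n >= 0x80) {
    out->push_back(static_cast<char>((n & 0x7F) | 0x80));
    n >>= 7;
  }
  out->push_back(static_cast<char>(n));
}

// Frame each serialized message with a varint length prefix.
std::string FrameMessages(const std::vector<std::string>& messages) {
  std::string out;
  for (const std::string& m : messages) {
    AppendVarint(&out, m.size());
    out += m;
  }
  return out;
}

// Split a framed stream back into the individual serialized messages.
std::vector<std::string> UnframeMessages(const std::string& stream) {
  std::vector<std::string> out;
  size_t pos = 0;
  while (pos < stream.size()) {
    uint64_t len = 0;
    int shift = 0;
    while (true) {  // decode one varint length prefix
      uint8_t b = static_cast<uint8_t>(stream[pos++]);
      len |= static_cast<uint64_t>(b & 0x7F) << shift;
      if (!(b & 0x80)) break;
      shift += 7;
    }
    out.push_back(stream.substr(pos, len));
    pos += len;
  }
  return out;
}
```

A reader can then stop at any record boundary, which a single giant message cannot offer. In real code the `std::string` payloads would come from calling `SerializeToString()` on each small message.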

On Thu, Jun 3, 2010 at 9:44 AM, [email protected] wrote:

    I think what you mean is that I should design a protocol that
    implements streaming functionality on top of protobuf, INSTEAD OF
    expecting protobuf to implement it.

    What I originally had in mind was "App -> Protobuf -> Streaming
    Functionality" [protobuf provides streaming directly; at the top,
    my app sees one large protobuf].
    What I think you mean is "App -> Streaming Functionality ->
    Protobuf" [I implement the streaming myself, but each stream
    packet is a protobuf; at the top, my app sees many small protobuf
    packets].

    LinkedIn: http://www.linkedin.com/in/dirlt


    On 2010/6/4 0:21, Jason Hsueh wrote:
    This really needs to be handled in the application since protobuf
    has no idea which fields are expendable or can be truncated. What
    I was trying to suggest earlier was to construct many Req
    protobufs and serialize those individually. i.e., instead of 1
    Req proto with 1,000,000 page ids, construct 1000 Req protos,
    each containing 1000 page ids. You can serialize each of those
    individually, stopping when you hit your memory budget.

    That being said, I would suggest redesigning your protocol so
    that you don't have to construct enormous messages. It sounds
    like what you really want is something like the streaming
    functionality in the rpc service - rather than sending one large
    protobuf you would want to stream each page id.
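Jason's batching suggestion can be sketched as follows. This is a hypothetical outline, not real generated-protobuf code: `EncodeBatch` stands in for serializing one small Req, hand-encoding the page ids as a non-packed repeated varint field (field number 2, to match the Req example later in the thread):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Base-128 varint, as in the protobuf wire format.
void AppendVarint(std::string* out, uint64_t n) {
  while (n >= 0x80) {
    out->push_back(static_cast<char>((n & 0x7F) | 0x80));
    n >>= 7;
  }
  out->push_back(static_cast<char>(n));
}

// Stand-in for serializing one small Req proto: encode the batch as a
// non-packed repeated varint field (field number 2 => tag byte 0x10).
std::string EncodeBatch(const std::vector<uint64_t>& page_ids) {
  std::string out;
  for (uint64_t id : page_ids) {
    AppendVarint(&out, (2 << 3) | 0);  // tag: field 2, wire type 0
    AppendVarint(&out, id);
  }
  return out;
}

// Split page ids into batches, serialize each batch independently, and
// stop once adding another serialized batch would exceed the budget.
std::vector<std::string> SerializeWithinBudget(
    const std::vector<uint64_t>& page_ids, size_t batch_size,
    size_t budget) {
  std::vector<std::string> chunks;
  size_t used = 0;
  for (size_t i = 0; i < page_ids.size(); i += batch_size) {
    size_t end = std::min(i + batch_size, page_ids.size());
    std::vector<uint64_t> batch(page_ids.begin() + i,
                                page_ids.begin() + end);
    std::string chunk = EncodeBatch(batch);
    if (used + chunk.size() > budget) break;  // memory budget reached
    used += chunk.size();
    chunks.push_back(chunk);
  }
  return chunks;
}
```

In real code each batch would be a generated Req message and `EncodeBatch` would be `Req::SerializeToString()`; the budgeting logic stays the same.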

    On Thu, Jun 3, 2010 at 6:27 AM, dirlt <[email protected]> wrote:

        Thank you for your reply. :) For the first question, I think
        your answer is quite clear.

        But for the second one, I want to produce the serialization of
        Req.

        Let me explain again. :) Assume my application works like this:

        0. The server app wants to send 1,000,000 page ids to the
        client.

        1. If the server app serializes all 1,000,000 page ids, that
        will cost 1GB of memory.

        2. But the server app can only allocate 100MB of memory, so
        obviously it can't send all 1,000,000 page ids to the client.

        3. Now suppose protobuf were clever: it could calculate that
        "with 100MB, the server app can hold at most 10,000 page ids"
        and tell the server so.

        4. Knowing that, the server app would serialize only 10,000
        page ids into memory instead of 1,000,000.

        I hope that clarifies it. If protobuf doesn't implement this,
        do you have any ideas about it?
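If protobuf won't do step 3 for you, the application can do the accounting itself: in the wire format each page id costs one tag byte plus a base-128 varint, so the number of ids that fit in a budget can be computed up front. A hypothetical helper (it ignores Req's other fields and assumes a single-byte tag):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Number of bytes a value occupies as a base-128 varint.
size_t VarintSize(uint64_t n) {
  size_t size = 1;
  while (n >= 0x80) {
    n >>= 7;
    ++size;
  }
  return size;
}

// How many leading page ids fit in `budget` bytes, assuming each is
// encoded as a 1-byte tag plus a varint (non-packed repeated field).
size_t IdsThatFit(const std::vector<uint64_t>& ids, size_t budget) {
  size_t used = 0, count = 0;
  for (uint64_t id : ids) {
    size_t cost = 1 + VarintSize(id);  // tag byte + value bytes
    if (used + cost > budget) break;
    used += cost;
    ++count;
  }
  return count;
}
```

With generated code you would not need the hand-rolled sizing: calling the message's byte-size method incrementally gives the same running total.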


        On Jun 3, 12:40 am, Jason Hsueh <[email protected]> wrote:
        > On Tue, Jun 1, 2010 at 6:21 AM, bnh <[email protected]> wrote:
        > > I'm using protobuf as the protocol for a distributed
        > > system, but now I have some questions about it.
        >
        > > a. Does protobuf provide an interface for a user-defined
        > > allocator? I sometimes find 'malloc' costs too much. I've
        > > tried TCMalloc, but I think I can optimize memory
        > > allocation according to my application.
        >
        > No, there are no hooks for providing an allocator. You'd
        need to override
        > malloc the way TCmalloc does if you want to use your own
        allocator.
        >
        > > b. Does protobuf provide a way to serialize a
        > > class/object partially [or do you have any ideas about
        > > it]? My application is very sensitive to memory usage.
        > > For example, a class:
        >
        > > class Req {
        > >   int userid;
        > >   std::vector<PageID> pageid;
        > > };
        >
        > > I want to pack 1000 page ids into the Req. But if I pack
        > > all of them, the Req's size is about 1GB [hypothetically].
        > > I only have 100MB of memory, so I plan to pack as many
        > > page ids as possible until the memory usage of Req is
        > > about 100MB ['serialize the object partially according to
        > > memory usage'].
        >
        > Are you talking about producing the serialization of Req,
        > with a large number of PageIds, or parsing such a
        > serialization into an in-memory object? For the former, you
        > can serialize in smaller pieces and just concatenate the
        > serializations:
        > http://code.google.com/apis/protocolbuffers/docs/encoding.html#optional
        > For the latter, there is no way for you to tell the parser
        to stop parsing
        > when memory usage reaches a certain limit. However, you can
        do this yourself
        > if you split the serialization into multiple pieces.
        >
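The concatenation trick in the linked encoding document can be seen with a few bytes of hand-rolled wire format (a sketch, not generated code; field number 2 and varint page ids are assumptions matching the Req example): serializing a repeated field in pieces and concatenating the pieces yields byte-for-byte the serialization of the combined message.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Base-128 varint, as in the protobuf wire format.
void AppendVarint(std::string* out, uint64_t n) {
  while (n >= 0x80) {
    out->push_back(static_cast<char>((n & 0x7F) | 0x80));
    n >>= 7;
  }
  out->push_back(static_cast<char>(n));
}

// Encode a non-packed repeated varint field: tag, value, tag, value...
// Field number 2, wire type 0 => tag byte 0x10.
std::string EncodePageIds(const std::vector<uint64_t>& ids) {
  std::string out;
  for (uint64_t id : ids) {
    AppendVarint(&out, (2 << 3) | 0);
    AppendVarint(&out, id);
  }
  return out;
}
```

Because each repeated element carries its own tag, a parser reading `EncodePageIds({1, 2, 3}) + EncodePageIds({4, 5})` sees exactly the same bytes as one serialization of all five ids, which is why serializing in pieces under a memory budget works.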
        > > --
        > > You received this message because you are subscribed to
        > > the Google Groups "Protocol Buffers" group.
        > > To post to this group, send email to
        > > [email protected].
        > > To unsubscribe from this group, send email to
        > > [email protected].
        > > For more options, visit this group at
        > > http://groups.google.com/group/protobuf?hl=en.






