Note that writing a 100GB file using CodedStream is probably a bad idea
because:
- Readers will have to read the entire file sequentially; they will not be
able to seek to particular parts.
- One bit of corruption anywhere in the file could potentially render the
entire rest of the file unreadable.

Remember that this stuff was designed for small messages.  You should really
use some sort of seekable, fault-tolerant container format for 100GB of
data.  You can still encode each individual message using protobufs, which
is useful as it allows the container format to treat each message as a
simple byte blob.
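To make the "byte blob in a container" idea concrete, here is a minimal sketch of length-delimited framing in plain Java. The class and method names (`BlobContainer`, `writeBlob`, `readBlob`) are made up for illustration, and this is deliberately not protobuf's own API: each record's payload would simply be one serialized message (`message.toByteArray()`), and the container never parses it. A real seekable, fault-tolerant format would add at least an index of record offsets and per-record checksums on top of this.

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

public class BlobContainer {
    // Append one record as [4-byte big-endian length][payload bytes].
    public static void writeBlob(DataOutputStream out, byte[] blob) throws IOException {
        out.writeInt(blob.length);
        out.write(blob);
    }

    // Read the next record, or return null at end of stream.
    public static byte[] readBlob(DataInputStream in) throws IOException {
        int len;
        try {
            len = in.readInt();
        } catch (EOFException eof) {
            return null;  // clean end of container
        }
        byte[] blob = new byte[len];
        in.readFully(blob);
        return blob;
    }

    public static void main(String[] args) throws IOException {
        // Write two opaque blobs; with protobufs these would be
        // serialized messages the container treats as raw bytes.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        writeBlob(out, "first message".getBytes("UTF-8"));
        writeBlob(out, "second message".getBytes("UTF-8"));

        // Read them back in order.
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        List<String> records = new ArrayList<>();
        byte[] blob;
        while ((blob = readBlob(in)) != null) {
            records.add(new String(blob, "UTF-8"));
        }
        System.out.println(records);  // [first message, second message]
    }
}
```

Because every record carries its own length prefix, a reader can skip records without decoding them, and an external index of file offsets is enough to seek straight to record N.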

On Thu, Jun 3, 2010 at 12:43 PM, Evan Jones <[email protected]> wrote:

> On Jun 3, 2010, at 15:29 , Nader Salehi wrote:
>
>> It is not a single object; I am writing into a coded output stream
>> file which could grow to much larger than 2GB (it's more like 100GB).
>> I also have to read from this file.
>>
>> Is there a performance hit in the above-mentioned scenario?
>>
>
> No, this should work just fine. On the input side, you'll need to call
> CodedInputStream.resetSizeCounter() after each message; otherwise you'll run
> into the size limit.
>
>
> Evan
>
> --
> Evan Jones
> http://evanjones.ca/
>
> --
> You received this message because you are subscribed to the Google Groups
> "Protocol Buffers" group.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to
> [email protected].
> For more options, visit this group at
> http://groups.google.com/group/protobuf?hl=en.
>
>
