OK.  But I believe Yingfeng's results were impossibly fast, unless the code
has changed since I saw it.  His data set is a vector of 10,000,000 strings,
each with a random size in the range [3,100] bytes.  At an average of 51.5
bytes per string, that comes out to 515,000,000
bytes (491MB) of string data.  If we totally ignore overhead of the vector,
malloc costs, etc., just reading that much data in 0.05 seconds works out to
roughly 10GB/s, which is close to the theoretical maximum memory bandwidth of
the highest-end PC RAM available today.
So either I missed something, Yingfeng's code has changed, or boost's
"binary" mode isn't really encoding the entire data set.

On Mon, Mar 30, 2009 at 9:24 PM, Alek Storm <alek.st...@gmail.com> wrote:

> I think Yingfeng is referring to the archive formats described here:
> http://www.boost.org/doc/libs/1_38_0/libs/serialization/doc/archives.html#archive_models.
> The binary format, however, appears to be non-portable, so it doesn't seem
> to serve the same purpose as Protocol Buffers, and should be faster anyway,
> since it encodes directly to native types.
>
> --
> Alek Storm
>
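
For reference, my mental model of what such a test looks like with the boost
binary archive is roughly the sketch below (my guess at the shape of the
benchmark, not Yingfeng's actual code; the filename is made up):

  #include <fstream>
  #include <string>
  #include <vector>
  #include <boost/archive/binary_oarchive.hpp>
  #include <boost/serialization/string.hpp>
  #include <boost/serialization/vector.hpp>

  int main() {
    std::vector<std::string> data;  // imagine 10,000,000 random strings here
    data.push_back("example");

    std::ofstream out("data.bin", std::ios::binary);
    boost::archive::binary_oarchive ar(out);
    ar << data;  // writes lengths plus raw bytes in the native representation
  }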
