I think Boost has a special optimization for data such as vector<int>; it
performs almost the same as a direct memcpy.
However, if we serialize a slightly more complicated data structure, such
as vector<pair<int, int> > or vector<MyData>, where MyData refers to

struct MyData
{
    int a;
    int b;
    int c;
    int d;
};

then it no longer performs as well.
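
If MyData is plain-old-data like the struct above, one thing that may
help (a rough sketch I haven't benchmarked, relying on
Boost.Serialization's array optimization for binary archives) is to
declare the type bitwise serializable, which should let binary archives
copy the whole vector as raw bytes, much like the vector<int> case:

#include <sstream>
#include <vector>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/serialization/is_bitwise_serializable.hpp>
#include <boost/serialization/vector.hpp>

struct MyData
{
    int a;
    int b;
    int c;
    int d;

    // Needed by archives that serialize member by member
    // (e.g. text archives); binary archives with the bitwise
    // trait below can bypass this for whole arrays.
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/)
    {
        ar & a & b & c & d;
    }
};

// Tell Boost that MyData may be copied as raw bytes.  Binary
// archives can then use the same array optimization they apply
// to vector<int>, writing the buffer in one shot.
BOOST_IS_BITWISE_SERIALIZABLE(MyData)

int main()
{
    MyData d = { 1, 2, 3, 4 };
    const std::vector<MyData> v(10000000, d);

    std::ostringstream os;
    boost::archive::binary_oarchive oa(os);
    oa << v;  // should approach memcpy speed for POD elements
    return 0;
}

Of course this only deepens the portability trade-off Alek mentions
below, since the archive then contains the machine's native byte
layout.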


On Wed, Apr 1, 2009 at 1:13 AM, Kenton Varda <ken...@google.com> wrote:

> OK.  But I believe Yingfeng's results were impossibly fast, unless the code
> has changed since I saw it.  His data set is a vector of 10,000,000 strings,
> each with a random size in the range [3, 100].  That comes out to 515,000,000
> bytes (491MB) of string data.  If we totally ignore the overhead of the
> vector, malloc costs, etc., just reading that much data in 0.05 seconds
> means reading about 10GB/s, which is pretty close to the theoretical maximum
> throughput of the highest-end PC RAM available today.
> So either I missed something, Yingfeng's code has changed, or boost's
> "binary" mode isn't really encoding the entire data set.
>
> On Mon, Mar 30, 2009 at 9:24 PM, Alek Storm <alek.st...@gmail.com> wrote:
>
>> I think Yingfeng is referring to the archive formats described here:
>> http://www.boost.org/doc/libs/1_38_0/libs/serialization/doc/archives.html#archive_models.
>> The binary format, however, appears to be non-portable, so it doesn't seem
>> to serve the same purpose as Protocol Buffers, and should be faster anyway,
>> since it encodes directly to native types.
>>
>> --
>> Alek Storm
>>
>>
>>
>
