Re: [protobuf] Is it possible to generate Sorbet enums from a protobuf?

2021-11-30 Thread 'Chris Nakamura' via Protocol Buffers
Yes!

On Tuesday, November 30, 2021 at 12:55:11 PM UTC-5 Derek Perez wrote:

> Could you file an issue here:
> https://github.com/protocolbuffers/protobuf
>
> We can look into it.
>
> On Tue, Nov 30, 2021 at 9:31 AM 'Chris Nakamura' via Protocol Buffers <
> prot...@googlegroups.com> wrote:
>
>> The protobuf docs
>> <https://developers.google.com/protocol-buffers/docs/reference/ruby-generated#enum>
>> mention the following:
>>
>> > Since Ruby does not have native enums, we create a module for each enum
>> > with constants to define the values. Given the .proto file:
>>
>> However, it is possible to define enums in Ruby with Sorbet
>> <https://sorbet.org/docs/tenum>. Is it possible to automatically
>> generate Sorbet enums from a .proto file, instead of passing around symbols
>> and ints?
>>
>> Thanks!
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to protobuf+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/protobuf/dcf2f733-ab8c-44e7-91d1-97432bbab409n%40googlegroups.com.


[protobuf] Is it possible to generate Sorbet enums from a protobuf?

2021-11-30 Thread 'Chris Nakamura' via Protocol Buffers
The protobuf docs
<https://developers.google.com/protocol-buffers/docs/reference/ruby-generated#enum>
mention the following:

> Since Ruby does not have native enums, we create a module for each enum
> with constants to define the values. Given the .proto file:

However, it is possible to define enums in Ruby with Sorbet
<https://sorbet.org/docs/tenum>. Is it possible to automatically generate
Sorbet enums from a .proto file, instead of passing around symbols and ints?

Thanks!
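For context, here is a plain-Ruby sketch of the module-of-constants pattern the docs describe (the enum name and values are invented for illustration, not from a real .proto). A generated Sorbet T::Enum would replace these bare integers with typed values, which today would presumably require a custom protoc plugin:

```ruby
# Illustrative only: for an enum like
#   enum PhoneType { MOBILE = 0; HOME = 1; WORK = 2; }
# protoc's Ruby output effectively gives you a module of integer
# constants, so call sites end up passing ints and symbols around.
module PhoneType
  MOBILE = 0
  HOME   = 1
  WORK   = 2

  # Resolve a symbol to its numeric value, roughly what the generated
  # lookup helpers do; returns nil for an unknown name.
  def self.resolve(sym)
    const_defined?(sym) ? const_get(sym) : nil
  end
end

puts PhoneType.resolve(:HOME) # => 1
```

A Sorbet `T::Enum` subclass would instead give each value a distinct type, so a type checker can catch a stray symbol or int at the call site.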



Re: [protobuf] Protobuf Compiler in a C# Shared Library

2021-04-18 Thread Chris Langlois
Thanks for the recommendations. I can add .proto files to the shared 
project. What I cannot do is add a build action of "Protobuf compiler" to 
enable automatic compilation in Visual Studio; for some reason that doesn't 
seem to work within a shared project in particular. I tried manually 
adding the PackageReference entries for protobuf and gRPC that appear in 
the client and server projects to the shared project's .projitems file, and 
it doesn't seem to make any difference.
Thanks
On Sunday, April 18, 2021 at 12:56:38 PM UTC-6 marc.g...@gmail.com wrote:

> What is stopping you from adding them to the shared project? What TFM is 
> it targeting? This should JustWorkTM. You could also look at the csproj 
> changes in the client and server, and try to apply the same changes in the 
> shared project - it might tell you why it isn't happy.
>
> On Sun, 18 Apr 2021, 19:43 Chris Langlois,  wrote:
>
>> I'm using Protobuf and Visual Studio C#. I have a Client project, a Server 
>> project, and a shared project. I want my proto file and messages to live 
>> within the shared project. I've installed the NuGet packages for protobuf, 
>> protobuf tools, gRPC, gRPC core, and gRPC tools into both my client and 
>> server projects (you can't add them to a shared project). The build action 
>> for the proto file in my shared project cannot be changed to "Protobuf 
>> compiler" for automatic compilation like it can if I put the .proto in 
>> either the client or server project.
>>
>> Any thoughts on a better architecture, or what I might be doing wrong 
>> here? This seems like it should be a common architecture, but it seems to 
>> be a real struggle.
>>
>> Thanks!
>>



[protobuf] Protobuf Compiler in a C# Shared Library

2021-04-18 Thread Chris Langlois
I'm using Protobuf and Visual Studio C#. I have a Client project, a Server 
project, and a shared project. I want my proto file and messages to live 
within the shared project. I've installed the NuGet packages for protobuf, 
protobuf tools, gRPC, gRPC core, and gRPC tools into both my client and 
server projects (you can't add them to a shared project). The build action 
for the proto file in my shared project cannot be changed to "Protobuf 
compiler" for automatic compilation like it can if I put the .proto in 
either the client or server project.

Any thoughts on a better architecture, or what I might be doing wrong here? 
This seems like it should be a common architecture, but it seems to be a 
real struggle.

Thanks!
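For reference, a sketch of the pieces that enable the "Protobuf compiler" build action in the regular client/server projects (versions and paths below are placeholders). Shared projects (.projitems) cannot carry PackageReference items, which is why the build action never appears there; one commonly suggested setup is to move the .proto into an ordinary class library that both the client and server reference:

```xml
<!-- Sketch: in a regular class library (not a shared project) that both
     the client and server reference. Versions are placeholders. -->
<ItemGroup>
  <PackageReference Include="Google.Protobuf" Version="..." />
  <PackageReference Include="Grpc.Tools" Version="..." PrivateAssets="All" />
</ItemGroup>
<ItemGroup>
  <!-- Grpc.Tools is what defines the Protobuf item type / build action. -->
  <Protobuf Include="Protos\*.proto" GrpcServices="Both" />
</ItemGroup>
```

Because the class library is a normal project, NuGet restore and the Grpc.Tools code generation both run for it, and the generated message and service classes flow to the client and server through an ordinary project reference.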



[protobuf] JS bundle bloat due to unoptimizable proto extensions

2020-08-12 Thread 'Chris Scribner' via Protocol Buffers
Hi,

I was checking into Messages for Web's bundle size again today and noticed 
we still have large amounts of bloat coming from binary serialization code 
retained due to protobuf extensions (as described in b/146146114).

I noticed this issue hasn't been assigned to anyone for over half a year. 
Could anyone take a look or offer guidance?

Thanks,

Chris



[protobuf] Re: Adding elements to a repeated field from C++

2019-01-02 Thread Chris Dams

Dear all,

I have now succeeded in making this work by using the AddMessage method
of the Reflection class instead.

Happy new year,
Chris
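Sketched out, the working approach looks roughly like this (the function name and shape are illustrative, not from the original post; Reflection::AddMessage appends a new element to a repeated message field and returns it as a base-class Message pointer, so no concrete generated type needs to be named):

```cpp
#include <string>

#include <google/protobuf/descriptor.h>
#include <google/protobuf/message.h>

// Sketch: append an element to a repeated message field generically.
google::protobuf::Message* AddElement(google::protobuf::Message* message,
                                      const std::string& field_name) {
  const google::protobuf::FieldDescriptor* field =
      message->GetDescriptor()->FindFieldByName(field_name);
  const google::protobuf::Reflection* reflection = message->GetReflection();
  // AddMessage constructs the new element with the right concrete type
  // internally and hands back a Message*.
  return reflection->AddMessage(message, field);
}
```

The returned pointer can then be filled in through further reflection calls, keeping the calling code independent of the specific element type.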



[protobuf] Adding elements to a repeated field from C++

2018-12-21 Thread Chris Dams

Dear all,

I am trying to use reflection from C++ to, among other things, add elements
to a repeated protobuf field. This works as such, but I need to use the
specific type of the data in the repeated field. I would rather work with
just the base type Message, because that allows me to write generic code.

// This code works ("SpecificElement" stands in for the concrete
// generated type of the repeated field).
::google::protobuf::RepeatedPtrField<SpecificElement>* repeated
    = reflection_->MutableRepeatedPtrField<SpecificElement>(
        &message_, message_.GetDescriptor()->FindFieldByName(name));
Message* element = repeated->Add();

// This code segfaults in the Add call.
::google::protobuf::RepeatedPtrField<Message>* repeated
    = reflection_->MutableRepeatedPtrField<Message>(
        &message_, message_.GetDescriptor()->FindFieldByName(name));
Message* element = repeated->Add();

Any ideas how I can make the second code fragment work?

Have a nice day,
Chris



Re: [protobuf] Protobuf Java Generated Code field name has underscore at the end.

2018-09-06 Thread Chris Zhang
Thanks for the help. So far we are trying to avoid converting the protobuf 
to JSON/BSON and then converting back, since that might cause a performance 
issue.

It would be great if we could convert directly.

Thank you very much.

On Thursday, September 6, 2018 at 2:22:16 PM UTC-4, Adam Cozzette wrote:
>
> There is some documentation and examples here 
> <https://developers.google.com/protocol-buffers/docs/javatutorial> showing 
> the basics of how to parse and serialize protos in Java. Ilia is right that 
> doing this and using the official protobuf binary format would be the most 
> reliable way to go.
>
> However, it sounds like you want to be able to make MongoDB queries 
> referencing fields in your proto. In that case, storing serialized protos 
> will not work, because from what I understand MongoDB will just see them as 
> opaque binary blobs. If you need to query based on proto fields then I 
> believe you would need to convert the proto into BSON before storing it. 
> Probably the easiest way to do that would be to first convert to JSON using 
> JsonFormat 
> <https://github.com/protocolbuffers/protobuf/blob/master/java/util/src/main/java/com/google/protobuf/util/JsonFormat.java>
> and then presumably it's straightforward to go from JSON to BSON. I don't have 
> any experience doing this, though, so I can't say for sure how well it will 
> work. It might also be feasible to use the reflection API to serialize and 
> parse directly to and from BSON. Making protos queryable in MongoDB is not 
> a use case we have really thought about much, though.
>
> On Thu, Sep 6, 2018 at 10:56 AM Chris Zhang  > wrote:
>
>> Sure, but do you know which util is best for converting the stored protobuf 
>> data back to a Java object?
>>
>> On Thursday, September 6, 2018 at 1:39:12 PM UTC-4, Ilia Mirkin wrote:
>>>
>>> Why not just store the serialized data of the protobuf instead? That's 
>>> kind of the whole point of protobuf... 
>>>
>>> On Thu, Sep 6, 2018 at 1:27 PM, Chris Zhang  wrote: 
>>> > Hi Adam, 
>>> > 
>>> > Thanks for the response. 
>>> > 
>>> > We are trying to persist the protobuf generated java object into 
>>> mongoDB 
>>> > using Spring framework. 
>>> > However, when doing the querying from database, the spring framework 
>>> does 
>>> > not support any field name with underscore. 
>>> > 
>>> > Is there any way we can work around this? 
>>> > 
>>> > Thanks. 
>>> > 
>>> > 
>>> > On Thursday, September 6, 2018 at 1:05:38 PM UTC-4, Adam Cozzette 
>>> wrote: 
>>> >> 
>>> >> There is no way to remove the underscores without changing protoc. 
>>> But why 
>>> >> do you want to get rid of the underscores anyway? Those variables are 
>>> just a 
>>> >> private implementation detail and make no difference to the public 
>>> API. 
>>> >> 
>>> >> On Wed, Sep 5, 2018 at 1:07 PM Chris Zhang  
>>> wrote: 
>>> >>> 
>>> >>> Hi, 
>>> >>> 
>>> >>> I am new to Protobuf, and recently I found out that the Java 
>>> >>> generated code by protobuf has an underscore at the end of each 
>>> >>> field name. 
>>> >>> 
>>> >>> For example, 
>>> >>> 
>>> >>> protobuf message file look like this: 
>>> >>> 
>>> >>> message DummyMessage { 
>>> >>> 
>>> >>> string some_id = 1; 
>>> >>> bool is_active = 2; 
>>> >>> } 
>>> >>> 
>>> >>> The generated java code is like this: 
>>> >>> 
>>> >>> class DummyMessage { 
>>> >>> 
>>> >>> String someId_; 
>>> >>> boolean isActive_; 
>>> >>> 
>>> >>> } 
>>> >>> 
>>> >>> Is there any way to get rid of the underscore of each field? 
>>> >>> 
>>> >>> Thanks, 
>>> >>> 
>>> >>> 

Re: [protobuf] Protobuf Java Generated Code field name has underscore at the end.

2018-09-06 Thread Chris Zhang
Sure, but do you know which util is best for converting the stored protobuf 
data back to a Java object?

On Thursday, September 6, 2018 at 1:39:12 PM UTC-4, Ilia Mirkin wrote:
>
> Why not just store the serialized data of the protobuf instead? That's 
> kind of the whole point of protobuf... 
>
> On Thu, Sep 6, 2018 at 1:27 PM, Chris Zhang  > wrote: 
> > Hi Adam, 
> > 
> > Thanks for the response. 
> > 
> > We are trying to persist the protobuf generated java object into mongoDB 
> > using Spring framework. 
> > However, when doing the querying from database, the spring framework 
> does 
> > not support any field name with underscore. 
> > 
> > Is there any way we can work around this? 
> > 
> > Thanks. 
> > 
> > 
> > On Thursday, September 6, 2018 at 1:05:38 PM UTC-4, Adam Cozzette wrote: 
> >> 
> >> There is no way to remove the underscores without changing protoc. But 
> why 
> >> do you want to get rid of the underscores anyway? Those variables are 
> just a 
> >> private implementation detail and make no difference to the public API. 
> >> 
> >> On Wed, Sep 5, 2018 at 1:07 PM Chris Zhang  wrote: 
> >>> 
> >>> Hi, 
> >>> 
> >>> I am new to Protobuf, and recently I found out that the Java generated 
> >>> code by protobuf has an underscore at the end of each field name. 
> >>> 
> >>> For example, 
> >>> 
> >>> protobuf message file look like this: 
> >>> 
> >>> message DummyMessage { 
> >>> 
> >>> string some_id = 1; 
> >>> bool is_active = 2; 
> >>> } 
> >>> 
> >>> The generated java code is like this: 
> >>> 
> >>> class DummyMessage { 
> >>> 
> >>> String someId_; 
> >>> boolean isActive_; 
> >>> 
> >>> } 
> >>> 
> >>> Is there any way to get rid of the underscore of each field? 
> >>> 
> >>> Thanks, 
> >>> 
> >>> 



[protobuf] Re: Protobuf Java Generated Code field name has underscore at the end.

2018-09-06 Thread Chris Zhang
So in our case, we can store the protobuf Java object in MongoDB, and 
inside MongoDB you can see the object represented as BSON with field names 
that have underscores. This causes a problem when we try to query the 
database by passing field criteria, since Spring cannot recognize the 
fields with underscores.


On Wednesday, September 5, 2018 at 4:07:56 PM UTC-4, Chris Zhang wrote:
>
> Hi,
>
> I am new to Protobuf, and recently I found out that the Java generated 
> code by protobuf has an underscore at the end of each field name.
>
> For example,
>
> protobuf message file look like this:
>
> message DummyMessage {
> 
> string some_id = 1;
> bool is_active = 2;
> }
>
> The generated java code is like this:
>
> class DummyMessage {
>
> String someId_;
> boolean isActive_;
>
> }
>
> Is there any way to get rid of the underscore of each field?
>
> Thanks,
>
>
>



Re: [protobuf] Protobuf Java Generated Code field name has underscore at the end.

2018-09-06 Thread Chris Zhang
Hi Adam,

Thanks for the response. 

We are trying to persist the protobuf-generated Java object into MongoDB 
using the Spring framework. However, when querying the database, the 
Spring framework does not support any field name with an underscore.

Is there any way we can work around this?

Thanks.


On Thursday, September 6, 2018 at 1:05:38 PM UTC-4, Adam Cozzette wrote:
>
> There is no way to remove the underscores without changing protoc. But why 
> do you want to get rid of the underscores anyway? Those variables are just 
> a private implementation detail and make no difference to the public API.
>
> On Wed, Sep 5, 2018 at 1:07 PM Chris Zhang  > wrote:
>
>> Hi,
>>
>> I am new to Protobuf, and recently I found out that the Java generated 
>> code by protobuf has an underscore at the end of each field name.
>>
>> For example,
>>
>> protobuf message file look like this:
>>
>> message DummyMessage {
>> 
>> string some_id = 1;
>> bool is_active = 2;
>> }
>>
>> The generated java code is like this:
>>
>> class DummyMessage {
>>
>> String someId_;
>> boolean isActive_;
>>
>> }
>>
>> Is there any way to get rid of the underscore of each field?
>>
>> Thanks,
>>
>>



[protobuf] Protobuf Java Generated Code field name has underscore at the end.

2018-09-05 Thread Chris Zhang
Hi,

I am new to Protobuf, and recently I found out that the Java code generated 
by protobuf has an underscore at the end of each field name.

For example,

A protobuf message file looks like this:

message DummyMessage {

string some_id = 1;
bool is_active = 2;
}

The generated Java code looks like this:

class DummyMessage {

String someId_;
boolean isActive_;

}

Is there any way to get rid of the underscore on each field?

Thanks,
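For what it's worth, the trailing underscore comes from protoc's Java name mapping: a snake_case proto field becomes a camelCase private member ending in "_", while the public API exposes getSomeId()-style accessors, so the underscore never appears in calling code. A self-contained sketch of the mapping (illustrative helper code, not protoc's actual implementation):

```java
// Sketch of protoc's Java field-name mapping: snake_case proto field ->
// camelCase private member with a trailing underscore, plus a public
// getter without it. Illustrative only.
public class FieldNaming {
    // Convert snake_case to camelCase, e.g. "some_id" -> "someId".
    static String toCamel(String snake) {
        StringBuilder out = new StringBuilder();
        boolean upper = false;
        for (char c : snake.toCharArray()) {
            if (c == '_') { upper = true; continue; }
            out.append(upper ? Character.toUpperCase(c) : c);
            upper = false;
        }
        return out.toString();
    }

    // Private member name in the generated class, e.g. "someId_".
    static String memberName(String protoField) {
        return toCamel(protoField) + "_";
    }

    // Public accessor name, e.g. "getIsActive()".
    static String getterName(String protoField) {
        String camel = toCamel(protoField);
        return "get" + Character.toUpperCase(camel.charAt(0))
                + camel.substring(1) + "()";
    }

    public static void main(String[] args) {
        System.out.println(memberName("some_id"));   // someId_
        System.out.println(getterName("is_active")); // getIsActive()
    }
}
```

Since only the underscored members are private implementation details, code (and frameworks) that stick to the generated getters never see the underscore.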




[protobuf] Reflection of well known types in C++

2018-07-27 Thread Chris Buffett
I'm wondering if protobuf supports a way to determine whether a type is a 
well-known type via reflection in C++. I'm working on a marshalling layer 
for a custom encoding format and I need to unbox well-known types into 
their primitive type (e.g., Double to double). I've so far been unable to 
find any information on whether it's possible to definitively tell if a 
type is well known or not, other than .

For example, if I have the following protobuf,

message A
{
    message B
    {
        double val = 1;
        google.protobuf.DoubleValue prev_val = 2;
    }

    repeated B values = 1;
}

I need to convert it into something with the following format:

class B
{
    double val;
    double prev_val;
}

class A
{
    list<B> values;
}

The issue I see is that based on the protobuf format, the output would be 
ambiguous. I could also generate the following because I'm unable to tell 
whether I should unbox the type or generate a wrapping class.

class B
{
double val;
class C
{
double prev_val;
}
}



Re: [protobuf] Status of protobuf-java 2.x / 3.x compatibility

2017-12-05 Thread Chris Thunes
Thanks Feng. It seems like the GeneratedMessage / GeneratedMessageV3 split
introduced between 3.0.0-beta-4 and the 3.0.0 final release caused the
java/compatibility_tests suite to start failing when run against 2.5.0 /
2.6.1 ("java/compatibility_tests/v2.5.0/test.sh 2.5.0" passes at git tag
v3.0.0-beta-4 but fails at v3.0.0). In 3.0.0 and newer, what is the role of
the non-V3 variants of GeneratedMessage, SingleFieldBuilder, etc.? Are these
classes simply vestigial at this point, or do they still provide some
benefit to compatibility (even if not 100%)?

Thanks again,
Chris

On Mon, Dec 4, 2017 at 8:24 PM, Feng Xiao <xiaof...@google.com> wrote:

>
>
> On Mon, Dec 4, 2017 at 9:00 AM, Chris Thunes <cthu...@brewtab.com> wrote:
>
>>
>> I'm looking at options for moving some applications that currently depend
>> on protobuf-java 2.5.0 to a more recent version. This is made complicated
>> by the fact that we have a mixture of internal and external dependencies
>> (Hadoop & HBase) which depend on protobuf-java. My understanding is that
>> this will require these dependencies to move to a 3.x release synchronously
>> (i.e. regenerate using a 3.x protoc and update protobuf-java to a
>> corresponding release).
>>
>> However, looking through release notes and protobuf source code it seems
>> like some attempts have been made to address the source and binary
>> compatibility issues between 2.5/2.6.1 and 3.x. Specifically,
>>
>>- The 3.0.0-beta-4 release notes
>><https://github.com/google/protobuf/blob/v3.0.0-beta-4/CHANGES.txt>
>>mention runtime updates "to be compatible with 2.5.0/2.6.1 generated
>>protos".
>>- A number of classes have "V3" variants where the non-V3 variants
>>appear to exist solely in an attempt to maintain binary compatibility with
>>pre-3.x generated code.
>>
>> Running the compatibility tests in java/compatibility_tests/v2.5.0 it
>> appears that source and binary incompatibilities still exist.
>>
>>
>> I'm curious if anyone can shed some light on this effort and its status
>> or provide suggestions for migrating to a recent protobuf release under
>> these circumstances.
>>
> Protobuf 2.5.0/2.6.1 should be compatible with 3.0.0-beta-4 if you only
> use protobuf public APIs. That's unfortunately not the case with Hadoop &
> HBase, though. They introduced a class into the com.google.protobuf package
> with the sole purpose of accessing protobuf package-private classes
> <https://github.com/apache/hbase/blob/master/hbase-protocol/src/main/java/com/google/protobuf/HBaseZeroCopyByteString.java>.
> As such, there is no way you can upgrade to protobuf 3.x if the version of
> Hadoop & HBase you use still depends on protobuf 2.5/2.6 private symbols.
>
>
>> Thanks,
>>
>> Chris
>>
>>



[protobuf] Status of protobuf-java 2.x / 3.x compatibility

2017-12-04 Thread Chris Thunes

I'm looking at options for moving some applications that currently depend 
on protobuf-java 2.5.0 to a more recent version. This is made complicated 
by the fact that we have a mixture of internal and external dependencies 
(Hadoop & HBase) which depend on protobuf-java. My understanding is that 
this will require these dependencies to move to a 3.x release synchronously 
(i.e. regenerate using a 3.x protoc and update protobuf-java to a 
corresponding release).

However, looking through release notes and protobuf source code it seems 
like some attempts have been made to address the source and binary 
compatibility issues between 2.5/2.6.1 and 3.x. Specifically,

   - The 3.0.0-beta-4 release notes 
   <https://github.com/google/protobuf/blob/v3.0.0-beta-4/CHANGES.txt> 
   mention runtime updates "to be compatible with 2.5.0/2.6.1 generated 
   protos".
   - A number of classes have "V3" variants where the non-V3 variants 
   appear to exist solely in an attempt to maintain binary compatibility with 
   pre-3.x generated code.

Running the compatibility tests in java/compatibility_tests/v2.5.0 it 
appears that source and binary incompatibilities still exist.


I'm curious if anyone can shed some light on this effort and its status or 
provide suggestions for migrating to a recent protobuf release under these 
circumstances.


Thanks,

Chris




Re: [protobuf] protobuf 2.4.1 crash during ParseFromArray

2017-10-27 Thread Chris Vavruska
Yes, very early on, during initialization of the library. My only concern 
with upgrading to a newer version of protobuf is backward compatibility. 
Should I stick with 2.6 or go with 3.x? I know I can't use any of the new 
features, but will all my current messages be recognizable across the 
different versions?

Thanks

On Thursday, October 26, 2017 at 3:28:05 PM UTC-4, Adam Cozzette wrote:
>
> When you say early on in application initialization, do you mean before 
> main() begins? Because if so then that might make things a little more 
> complicated. Using the non-lite version would probably not help very much 
> with debugging, since actually it adds more complexity by including 
> descriptors.
>
> It sounds like you may be stuck using an old version now, but if it's all 
> possible then you might want to consider upgrading, because it could well 
> be that this is a bug we've already fixed some time in the past few years.
>
> On Wed, Oct 25, 2017 at 11:46 AM, Chris Vavruska <vavr...@gmail.com 
> > wrote:
>
>> Hey all,
>>
>> I am using 2.4.1-lite in an embedded project. I know it is a bit old, but 
>> it is what I have to live with for now. We send a simple synchronous 
>> request from one application to another to get a list of files. It 
>> returns the following message.
>>
>> message return_m {
>>     optional string    data      = 1;
>>     repeated info_m    file_info = 3;
>>     optional uint32    error     = 4;
>>     optional volsize_m size      = 5;
>> }
>> When we parse the response occasionally, I see the following crash which 
>> occurs early on in application initialization.
>>
>> [bt] linux-vdso32.so.1(__kernel_sigtramp_rt32+0x0)[0x100370] [bt] [0x9] 
>> [bt] 
>> /usr/lib/libprotobuflite.so.7(_ZN6google8protobuf8internal14WireFormatLite10ReadStringEPNS0_2io16CodedInputStreamEPSs+0x8c)[0xa46f8dc]
>>  
>>
>> [bt] 
>> /opt/appfs/lib/cp/libpbcpp.so.1(_ZN5filer6info_m27MergePartialFromCodedStreamEPN6google8protobuf2io16CodedInputStreamE+0x244)[0xa755450]
>>  
>>
>> [bt] 
>> /opt/appfs/lib/cp/libpbcpp.so.1(_ZN6google8protobuf8internal14WireFormatLite20ReadMessageNoVirtualIN5filer6info_mEEEbPNS0_2io16CodedInputStreamEPT_+0xa4)[0xa75bdb8]
>>  
>>
>> [bt] 
>> /opt/appfs/lib/cp/libpbcpp.so.1(_ZN5filer8return_m27MergePartialFromCodedStreamEPN6google8protobuf2io16CodedInputStreamE+0x25c)[0xa75754c]
>> [bt] 
>> /usr/lib/libprotobuf-lite.so.7(_ZN6google8protobuf11MessageLite14ParseFromArrayEPKvi+0xb8)[0xa46d428]
>>
>> Looking at the core file, it appears to have some sort of corruption. Would 
>> using the non-lite version give some additional information/protection? If 
>> so, then I am getting the following during init.
>>
>> #0  0x in ?? ()
>> #1  0x0f124a74 in __cxxabiv1::__dynamic_cast (src_ptr=0xf1ec340 
>> <(anonymous namespace)::num_put_c>, 
>> src_type=0xf1e5f94 , 
>> dst_type=0xf1e24ac > std::__gnu_cxx_ldbl128::num_put<char, std::ostreambuf_iterator<char, 
>> std::char_traits > >>, src2dst=0)
>> at 
>> /opt/codesourcery/powerpc-linux-gnu/src/gcc/libstdc++-v3/libsupc++/dyncast.cc:60
>> #2  0x0f157e30 in std::has_facet<std::__gnu_cxx_ldbl128::num_put<char, 
>> std::ostreambuf_iterator<char, std::char_traits > > > (__loc=...)
>> at 
>> /opt/codesourcery/powerpc-linux-gnu/src/generated/gcc/powerpc-linux-gnu/libstdc++-v3/include/bits/locale_classes.tcc:110
>> #3  0x0f149320 in std::basic_ios<char, std::char_traits 
>> >::_M_cache_locale (
>> this=this@entry=0xf1eb990 <std::cout+4>, __loc=...)
>> at 
>> /opt/codesourcery/powerpc-linux-gnu/src/generated/gcc/powerpc-linux-gnu/libstdc++-v3/include/bits/basic_ios.tcc:164
>> #4  0x0f1494cc in std::basic_ios<char, std::char_traits >::init 
>> (this=this@entry=0xf1eb990 <std::cout+4>, 
>> __sb=__sb@entry=0xf1ebbec <__gnu_internal::buf_cout_sync>)
>> at 
>> /opt/codesourcery/powerpc-linux-gnu/src/generated/gcc/powerpc-linux-gnu/libstdc++-v3/include/bits/basic_ios.tcc:132
>> #5  0x0f1384f0 in basic_ostream (__sb=, this=> out>, __in_chrg=, 
>> __vtt_parm=)
>> at 
>> /opt/codesourcery/powerpc-linux-gnu/src/generated/gcc/powerpc-linux-gnu/libstdc++-v3/include/ostream:85
>> #6  std::ios_base::Init::Init (this=)
>> at 
>> /opt/codesourcery/powerpc-linux-gnu/src/gcc/libstdc++-v3/src/c++98/ios_init.cc:91
>> ---Type  to continue, or q  to quit---
>> #7  0x0f288970 in __static_initializatio

[protobuf] protobuf 2.4.1 crash during ParseFromArray

2017-10-25 Thread Chris Vavruska
Hey all,

I am using 2.4.1-lite in an embedded project. I know it is a bit old, but it 
is what I have to live with for now. We send a simple synchronous request 
from one application to another to get a list of files. It returns the 
following message.

message return_m {
    optional string    data      = 1;
    repeated info_m    file_info = 3;
    optional uint32    error     = 4;
    optional volsize_m size      = 5;
}
Occasionally, when we parse the response, I see the following crash, which 
occurs early on in application initialization.

[bt] linux-vdso32.so.1(__kernel_sigtramp_rt32+0x0)[0x100370] [bt] [0x9] 
[bt] 
/usr/lib/libprotobuflite.so.7(_ZN6google8protobuf8internal14WireFormatLite10ReadStringEPNS0_2io16CodedInputStreamEPSs+0x8c)[0xa46f8dc]
 

[bt] 
/opt/appfs/lib/cp/libpbcpp.so.1(_ZN5filer6info_m27MergePartialFromCodedStreamEPN6google8protobuf2io16CodedInputStreamE+0x244)[0xa755450]
 

[bt] 
/opt/appfs/lib/cp/libpbcpp.so.1(_ZN6google8protobuf8internal14WireFormatLite20ReadMessageNoVirtualIN5filer6info_mEEEbPNS0_2io16CodedInputStreamEPT_+0xa4)[0xa75bdb8]
 

[bt] 
/opt/appfs/lib/cp/libpbcpp.so.1(_ZN5filer8return_m27MergePartialFromCodedStreamEPN6google8protobuf2io16CodedInputStreamE+0x25c)[0xa75754c]
[bt] 
/usr/lib/libprotobuf-lite.so.7(_ZN6google8protobuf11MessageLite14ParseFromArrayEPKvi+0xb8)[0xa46d428]

Looking at the core file, it appears to have some sort of corruption. Would 
using the non-lite version give some additional information/protection? When 
I try it, I get the following during init.

#0  0x in ?? ()
#1  0x0f124a74 in __cxxabiv1::__dynamic_cast (src_ptr=0xf1ec340 <(anonymous 
namespace)::num_put_c>, 
src_type=0xf1e5f94 , 
dst_type=0xf1e24ac  > >>, src2dst=0)
at 
/opt/codesourcery/powerpc-linux-gnu/src/gcc/libstdc++-v3/libsupc++/dyncast.cc:60
#2  0x0f157e30 in std::has_facet > > (__loc=...)
at 
/opt/codesourcery/powerpc-linux-gnu/src/generated/gcc/powerpc-linux-gnu/libstdc++-v3/include/bits/locale_classes.tcc:110
#3  0x0f149320 in std::basic_ios::_M_cache_locale (
this=this@entry=0xf1eb990 , __loc=...)
at 
/opt/codesourcery/powerpc-linux-gnu/src/generated/gcc/powerpc-linux-gnu/libstdc++-v3/include/bits/basic_ios.tcc:164
#4  0x0f1494cc in std::basic_ios::init 
(this=this@entry=0xf1eb990 , 
__sb=__sb@entry=0xf1ebbec <__gnu_internal::buf_cout_sync>)
at 
/opt/codesourcery/powerpc-linux-gnu/src/generated/gcc/powerpc-linux-gnu/libstdc++-v3/include/bits/basic_ios.tcc:132
#5  0x0f1384f0 in basic_ostream (__sb=, this=, __in_chrg=, 
__vtt_parm=)
at 
/opt/codesourcery/powerpc-linux-gnu/src/generated/gcc/powerpc-linux-gnu/libstdc++-v3/include/ostream:85
#6  std::ios_base::Init::Init (this=)
at 
/opt/codesourcery/powerpc-linux-gnu/src/gcc/libstdc++-v3/src/c++98/ios_init.cc:91
---Type <return> to continue, or q <return> to quit---
#7  0x0f288970 in __static_initialization_and_destruction_0 
(__initialize_p=1, __priority=65535)
at 
/opt/corp/projects/cdntools/embedtools/codebench-2014.05-53-powerpc-linux-gnu/powerpc-linux-gnu/include/c++/4.8.3/iostream:74
#8  _GLOBAL__sub_I_zero_copy_stream_impl.cc(void) ()
at 
/usr/src/debug/protobuf/2.4.1-r0/protobuf-2.4.1/src/google/protobuf/io/zero_copy_stream_impl.cc:470
#9  0x0f326cac in __do_global_ctors_aux () from 
/vob/cb_releases/ppc-cetus-stable/usr/lib/libprotobuf.so.7
#10 0x0f28869c in _init () from 
/vob/cb_releases/ppc-cetus-stable/usr/lib/libprotobuf.so.7
#11 0xb7b2d5c8 in call_init (l=0xb7b0f020, argc=argc@entry=3, 
argv=argv@entry=0xbfdee474, 
env=env@entry=0xbfdee484) at dl-init.c:69
#12 0xb7b2d778 in call_init (env=, argv=, 
argc=, l=)
at dl-init.c:36
#13 _dl_init (main_map=0xb7b3f908, argc=3, argv=0xbfdee474, env=0xbfdee484) 
at dl-init.c:132
#14 0xb7b35fb8 in got_label () at ../sysdeps/powerpc/powerpc32/dl-start.S:66
Backtrace stopped: frame did not save the PC
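
[Editor's note] Corruption like this usually enters through the transport, and 
ParseFromArray (lite or full) cannot reliably detect a buffer that was 
truncated or overwritten in flight; it can crash, as above, or silently 
succeed. A length-plus-checksum framing layer can reject damaged payloads 
before they reach the parser. A minimal sketch in Python (the helper names are 
illustrative, not from the poster's code):

```python
import struct
import zlib

def frame(payload: bytes) -> bytes:
    """Prefix a serialized message with its byte length and CRC32."""
    return struct.pack("<II", len(payload), zlib.crc32(payload)) + payload

def unframe(buf: bytes) -> bytes:
    """Validate the frame and return the payload; raise on corruption."""
    if len(buf) < 8:
        raise ValueError("frame too short")
    length, crc = struct.unpack("<II", buf[:8])
    payload = buf[8:8 + length]
    if len(payload) != length or zlib.crc32(payload) != crc:
        raise ValueError("corrupt frame")
    return payload  # now safe to hand to ParseFromArray/ParseFromString

wire = frame(b"\x0a\x05hello")  # some serialized message bytes
assert unframe(wire) == b"\x0a\x05hello"
```

The same idea translates to the C++ side of the poster's setup; the point is 
that a checksum mismatch fails loudly instead of feeding garbage to the parser.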

-- 
You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to protobuf+unsubscr...@googlegroups.com.
To post to this group, send email to protobuf@googlegroups.com.
Visit this group at https://groups.google.com/group/protobuf.
For more options, visit https://groups.google.com/d/optout.


[protobuf] Re: 'python setup.py test' failed

2015-07-08 Thread Chris
Kenton Varda kenton at google.com writes:

 
 
 I haven't seen this problem, but my wild guess is that for some reason 
your environment is using distutils instead of setuptools.  They are 
similar, but distutils lacks some features, such as the test command. 
 I'm not sure why this would happen, though.
 On Tue, Nov 9, 2010 at 1:05 AM, cy oramah at gmail.com wrote:
 
 As told in protobuf-2.3.0/python/README.TXT, I ran '$ python setup.py
 test'. However, it failed as below:
 /usr/lib/python2.6/distutils/dist.py:266: UserWarning: Unknown
 distribution option: 'namespace_packages'
   warnings.warn(msg)
 /usr/lib/python2.6/distutils/dist.py:266: UserWarning: Unknown
 distribution option: 'test_suite'
   warnings.warn(msg)
 usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
    or: setup.py --help [cmd1 cmd2 ...]
    or: setup.py --help-commands
    or: setup.py cmd --help
 error: invalid command 'test'
 What's the problem?
 
 
 
 

Simply installing setuptools per the above solution did indeed work for me. 
Thanks!
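
[Editor's note] For anyone hitting this later: the `test` command (and options 
like `test_suite` and `namespace_packages`) comes from setuptools, which 
distutils alone does not provide. A small probe (illustrative, not from the 
protobuf sources) confirms which one your environment will use:

```python
import importlib.util

def setuptools_available() -> bool:
    """True if setuptools can be imported; without it, plain distutils
    handles setup.py and rejects `test` with "error: invalid command 'test'"."""
    return importlib.util.find_spec("setuptools") is not None

if __name__ == "__main__":
    if setuptools_available():
        print("setuptools found: `python setup.py test` should be recognized")
    else:
        print("setuptools missing: install it first (e.g. pip install setuptools)")
```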



[protobuf] Re: Protobuf Buffers v3.0.0-alpha-1

2015-05-08 Thread Chris
Thanks for your great work.

I'm considering migrating to version 3. Is existing proto2 data (i.e. 
serialized messages) transparently loadable in a proto3-aware application?

On Wednesday, December 10, 2014 at 11:51:01 PM UTC-5, Feng Xiao wrote:

 Hi all,

 I just published protobuf v3.0.0-alpha-1 on our github site:
 https://github.com/google/protobuf/releases/tag/v3.0.0-alpha-1

 This is the first alpha release of protobuf v3.0.0. In protobuf v3.0.0, we 
 will add a new protobuf language version (aka proto3) and support a wider 
 range of programming languages (to name a few: ruby, php, node.js, 
 objective-c). This alpha version contains C++ and Java implementation with 
 partial proto3 support (see below for details). In future releases we will 
 add support for more programming languages and implement the full proto3 
 feature set. Besides proto3, this alpha version also includes two other new 
 features: map fields and arena allocation. They are implemented for both 
 proto3 and the old protobuf language version (aka proto2).

 We are currently working on the documentation of these new features and 
 when it's ready it will be updated to our protobuf developer guide 
 https://developers.google.com/protocol-buffers/docs/overview. For the 
 time being if you have any questions regarding proto3 or other new 
 features, please post your question in the discussion group.

 CHANGES
 =======
 Version 3.0.0-alpha-1 (C++/Java):

   General
   * Introduced Protocol Buffers language version 3 (aka proto3).

 When protobuf was initially opensourced it implemented Protocol Buffers
 language version 2 (aka proto2), which is why the version number
 started from v2.0.0. From v3.0.0, a new language version (proto3) is
 introduced while the old version (proto2) will continue to be 
 supported.

 The main intent of introducing proto3 is to clean up protobuf before
 pushing the language as the foundation of Google's new API platform.
 In proto3, the language is simplified, both for ease of use and  to
 make it available in a wider range of programming languages. At the
 same time a few features are added to better support common idioms
 found in APIs.

 The following are the main new features in language version 3:

   1. Removal of field presence logic for primitive value fields, removal
      of required fields, and removal of default values. This makes proto3
      significantly easier to implement with open struct representations,
      as in languages like Android Java, Objective C, or Go.
   2. Removal of unknown fields.
   3. Removal of extensions, which are instead replaced by a new standard
      type called Any.
   4. Fix semantics for unknown enum values.
   5. Addition of maps.
   6. Addition of a small set of standard types for representation of time,
      dynamic data, etc.
   7. A well-defined encoding in JSON as an alternative to binary proto
      encoding.

 This release (v3.0.0-alpha-1) includes partial proto3 support for C++ and
 Java. Items 6 (well-known types) and 7 (JSON format) in the above feature
 list are not implemented.

 A new notion "syntax" is introduced to specify whether a .proto file
 uses proto2 or proto3:

   // foo.proto
   syntax = "proto3";
   message Bar {...}

 If omitted, the protocol compiler will generate a warning and proto2 will
 be used as the default. This warning will be turned into an error in a
 future release.

 We recommend that new Protocol Buffers users use proto3. However, we do not
 generally recommend that existing users migrate from proto2 to proto3 due
 to API incompatibility, and we will continue to support proto2 for a long
 time.

   * Added support for map fields (implemented in C++/Java for both proto2
     and proto3).

 Map fields can be declared using the following syntax:

   message Foo {
     map<string, string> values = 1;
   }

 Data of a map field will be stored in memory as an unordered map and it
 can be accessed through generated accessors.

   C++
   * Added arena allocation support (for both proto2 and proto3).

 Profiling shows memory allocation and deallocation constitutes a significant
 fraction of CPU-time spent in protobuf code, and arena allocation is a
 technique introduced to reduce this cost. With arena allocation, new
 objects will be allocated from a large piece of preallocated memory and
 deallocation of these objects is almost free. Early adoption shows 20% to
 50% improvement in some Google binaries.

 To enable arena support, add the following option to your .proto file:

   option cc_enable_arenas = true;

 Protocol compiler will generate additional code to make the generated
 message classes work with arenas. This does not change the existing API
 of protobuf messages and does 

[protobuf] Re: Building for RTEMS ARM

2015-01-22 Thread Chris Johns
Following up my own post ...

On Wednesday, January 21, 2015 at 5:04:25 PM UTC+11, Chris Johns wrote:


 The first are errors related to atomics. We are adding -fpermissive to 
 turn them into warnings. Here are a few of the many warnings we are seeing:

  from ../../protobuf.git/src/google/protobuf/arenastring.h
 :40,
  from ../../protobuf.git/src/google/protobuf/descriptor.pb
 .h:23,
  from ../../protobuf.git/src/google/protobuf/compiler/ruby
 /ruby_generator.cc:36:
 ../../protobuf.git/src/google/protobuf/stubs/once.h: In function 'void 
 google::protobuf::GoogleOnceInit(google::protobuf::ProtobufOnceType*, void 
 (*)())':
 ../../protobuf.git/src/google/protobuf/stubs/once.h:125:34: warning: 
 invalid conversion from 'google::protobuf::ProtobufOnceType* {aka int*}' 
 to 'const volatile Atomic32* {aka const volatile long int*}' [-fpermissive
 ]
   if (internal::Acquire_Load(once) != ONCE_STATE_DONE) {
  ^

 In file included from ../../protobuf.git/src/google/ 


I have tracked the warnings down to a gcc/newlib interaction as described 
here https://sourceware.org/ml/newlib/2014/msg00724.html. We are now using 
this code as part of the test to see if we have cleaned things up.

This warning and the atomics code have made me wonder: why is C11 not checked 
for and stdatomic.h used, or even C++11 used where possible?

Chris



[protobuf] Building for RTEMS ARM

2015-01-21 Thread Chris Johns
Hello,

I am building Protocol Buffers from github for RTEMS 
(http://www.rtems.org/) to run on a Xilinx Zynq (ARM Cortex-A9), and I am 
making some progress, but I have some questions.

I am using the current development head for RTEMS which is called 4.11 and 
it is stable and almost about to be released. The gcc is:

$ arm-rtems4.11-gcc -v
Using built-in specs.
COLLECT_GCC=/opt/work/rtems/4.11/bin/arm-rtems4.11-gcc
COLLECT_LTO_WRAPPER=/opt/work/rtems/4.11/libexec/gcc/arm-rtems4.11/4.9.2/lto
-wrapper
Target: arm-rtems4.11 
Configured with: ../gcc-4.9.2/configure --prefix=/opt/work/rtems/4.11 --
bindir=/opt/work/rtems/4.11/bin --exec_prefix=/opt/work/rtems/4.11 --
includedir=/opt/work/rtems/4.11/include --libdir=/opt/work/rtems/4.11/lib --
libexecdir=/opt/work/rtems/4.11/libexec --mandir=/opt/work/rtems/4.11/share/man 
--infodir=/opt/work/rtems/4.11/share/info --datadir=/opt/work/rtems/4.11/share 
--build=x86_64-freebsd10.0 --host=x86_64-freebsd10.0 --target=arm-rtems4.11 
--disable-libstdcxx-pch --with-gnu-as --with-gnu-ld --verbose --with-newlib 
--with-system-zlib --disable-nls --without-included-gettext 
--disable-win32-registry 
--enable-version-specific-runtime-libs --disable-lto 
--enable-newlib-io-c99-formats 
--enable-newlib-iconv --enable-newlib-iconv-encodings=big5,cp775,cp850,cp852
,cp855,cp866,euc_jp,euc_kr,euc_tw,iso_8859_1,iso_8859_10,iso_8859_11,
iso_8859_13,iso_8859_14,iso_8859_15,iso_8859_2,iso_8859_3,iso_8859_4,
iso_8859_5,iso_8859_6,iso_8859_7,iso_8859_8,iso_8859_9,iso_ir_111,koi8_r,
koi8_ru,koi8_u,koi8_uni,ucs_2,ucs_2_internal,ucs_2be,ucs_2le,ucs_4,
ucs_4_internal,ucs_4be,ucs_4le,us_ascii,utf_16,utf_16be,utf_16le,utf_8,
win_1250,win_1251,win_1252,win_1253,win_1254,win_1255,win_1256,win_1257,win_1258
 
--enable-threads --disable-plugin --enable-obsolete --enable-languages=c,c++ 
Thread model: rtems 
gcc version 4.9.2 20141030 (RTEMS 4.11, RSB 
e7cbf74fe2c94f0a233b8a7efcac3e75c239333c, Newlib 
de616601501c4f82968683e80c112604a2d40222) (GCC)


The first are errors related to atomics. We are adding -fpermissive to 
turn them into warnings. Here are a few of the many warnings we are seeing:

 from ../../protobuf.git/src/google/protobuf/arenastring.h:
40,
 from ../../protobuf.git/src/google/protobuf/descriptor.pb.h
:23,
 from ../../protobuf.git/src/google/protobuf/compiler/ruby/
ruby_generator.cc:36:
../../protobuf.git/src/google/protobuf/stubs/once.h: In function 'void 
google::protobuf::GoogleOnceInit(google::protobuf::ProtobufOnceType*, void 
(*)())':
../../protobuf.git/src/google/protobuf/stubs/once.h:125:34: warning: 
invalid conversion from 'google::protobuf::ProtobufOnceType* {aka int*}' to 
'const 
volatile Atomic32* {aka const volatile long int*}' [-fpermissive]
  if (internal::Acquire_Load(once) != ONCE_STATE_DONE) {
 ^

In file included from ../../protobuf.git/src/google/protobuf/stubs/atomicops
.h:205:0,
 from ../../protobuf.git/src/google/protobuf/stubs/
atomic_sequence_num.h:33,
 from ../../protobuf.git/src/google/protobuf/arena.h:38,
 from ../../protobuf.git/src/google/protobuf/descriptor.pb.h
:22,
 from ../../protobuf.git/src/google/protobuf/compiler/ruby/
ruby_generator.cc:36:
../../protobuf.git/src/google/protobuf/stubs/atomicops_internals_arm_gcc.h:
136:17: note: initializing argument 1 of 'google::protobuf::internal::Atomic32 
google::protobuf::internal::Acquire_Load(const volatile Atomic32*)'


The other issue is that RTEMS is not self-hosting and so is always 
cross-compiled. Is there a way to disable running the tests? It makes no 
sense to run them when the build and host systems are not the same.

Thanks

Chris



[protobuf] ANN: ProtoStream - A NodeJS module to emit ProtoBuf messages from a stream, SAX-style.

2014-11-05 Thread Chris Dew
This module uses ProtocolBuffers to de-frame protocol buffer messages 
within a (TCP) stream. 

https://github.com/chrisdew/protostream

Hope it's useful to others too,

Chris.



Re: [protobuf] c++ why no set_allocated for repeated nested messages?

2014-10-03 Thread chris


On Friday, October 3, 2014 10:51:09 AM UTC-7, Feng Xiao wrote:

 On Thu, Oct 2, 2014 at 8:58 PM, ch...@ochsnet.com wrote:

 Always having to obtain a new instance of a repeated nested message from 
 its parent is really cumbersome. I fail to see the logic behind it, 
 given that singular message fields have set_allocated.

 We can add an add_allocated() method for repeated message fields but 
 that's not necessary because you can do:
 foo->mutable_repeated_message()->AddAllocated(bar);
  



Thanks for the tip!  Didn't notice that you could do that.

Chris 



[protobuf] c++ why no set_allocated for repeated nested messages?

2014-10-02 Thread chris
Always having to obtain a new instance of a repeated nested message from 
its parent is really cumbersome. I fail to see the logic behind it, 
given that singular message fields have set_allocated.

Chris



[protobuf] Re: Issue 502 in protobuf: protoc generated Java code has numerous javadoc warnings

2014-08-25 Thread Chris Berst
Hello, 
I am new to this forum and hope I'm posting properly.
Do you know whether 2.6.0 will address the issue of Java 8 javadoc warnings 
for GPB 2.5.0 generated code?
Specifically, when running Java 8 javadoc I now see lots of 'no @return' or 
'no @param' warnings on code generated from my .proto file.
Best regards,
Chris Berst
Software Engineer
Daniel K. Inouye Solar Telescope


On Wednesday, April 17, 2013 12:26:15 PM UTC-6, prot...@googlecode.com 
wrote:

 Status: New 
 Owner: liu...@google.com 
 Labels: Type-Defect Priority-Medium 

 New issue 502 by jonathan...@hotmail.com: protoc generated Java code has 
   
 numerous javadoc warnings 
 http://code.google.com/p/protobuf/issues/detail?id=502 

 What steps will reproduce the problem? 
 1. Enable javadoc warnings for project 
 2. Use protoc to generate java code and add to project 
 3. Note the numerous warnings 

 What is the expected output? What do you see instead? 
 Should not be any java doc warnings 

 What version of the product are you using? On what operating system? 
 2.5.0 

 Please provide any additional information below. 
 Trivial fix.  Just add this line of code: 

printer->Print("@SuppressWarnings(\"javadoc\")\n\n"); 

 to java_file.cc, line 196, just before this: 

printer->Print( 
  "public final class $classname$ {\n" 
  "  private $classname$() {}\n", 
  "classname", classname_); 
printer->Indent(); 

 Attachments: 
 java_file.cc  18.1 KB 





Re: [protobuf] Re: Portable protobuf compiler

2014-06-25 Thread Chris Beams
On Jun 24, 2014, at 5:31 PM, John Calcote john.calc...@gmail.com wrote:

 I'm looking for a true native java port of the protoc compiler. It's a pretty 
 trivial compiler and the C source code is available. I guess I'll have to do 
 it myself.

I'd like to see this as well, and would be happy to help with early testing. 
Would be good to think about reaching out to the authors of the IDEA [1] and 
Gradle [2] protobuf plugins too.

[1]: https://github.com/nnmatveev/idea-plugin-protobuf
[2]: https://github.com/aantono/gradle-plugin-protobuf


signature.asc
Description: Message signed with OpenPGP using GPGMail


[protobuf] Customizing protoc to support an existing wire format

2014-05-27 Thread Chris Beams
Hello,

I'd like to see if any prior work has been done in customizing protobuf 
compilation to support message encoding/decoding against a legacy wire 
format.

Put another way, I'm interested in:

 1. specifying an existing protocol using protobuf's .proto file syntax, and
 2. reusing protobuf's .proto file parsing and code generation 
infrastructure, while
 3. replacing protobuf's default encoding algorithm and replacing it with 
one that conforms to an existing format.

This discussion from 2013 is the closest thing I've found to a similar 
question on this mailing list. Unfortunately it doesn't go into much detail:

 https://groups.google.com/forum/#!topic/protobuf/zvughVLk6BU

Some context will probably be of use. The existing wire format in question 
is that of Bitcoin's peer-to-peer network protocol. These messages and 
their binary representations are defined in this document:

 https://en.bitcoin.it/wiki/Protocol_specification#Message_types

Note that protocol buffers were considered for use during Bitcoin's initial 
development, but rejected on concerns around complexity and security:

 https://bitcointalk.org/index.php?topic=632.msg7090#msg7090

Whether or not those concerns were well-founded, Bitcoin's resulting wire 
format works well today, and for this reason, changing it is not considered 
to be an option.

The impetus for this question, then, is that there are an increasing number 
of implementations of the Bitcoin protocol under development today, and in 
order to participate in the peer-to-peer network, each must faithfully 
re-implement handling this custom wire format. Typically this work is done 
through a combination of studying the documentation above and carefully 
transcribing code from the Bitcoin Core reference implementation. This 
creates a significant barrier to entry as well as a potential source of 
bugs that can threaten network stability.

To avoid this tedious and error-prone work, there is a desire to codify the 
message formats in such a way that language-specific bindings may be 
generated rather than hand-coded.

The encoding algorithm and code generation for each specific language would 
of course have to be custom developed, but the idea is to do so within an 
otherwise widely-used framework such as protocol buffers, minimizing the 
need to re-invent as much as possible.
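
[Editor's note] To make concrete what swapping the encoding algorithm entails, 
here is an illustrative Python comparison of protobuf's base-128 varint with 
Bitcoin's CompactSize integer, the kind of primitive a custom backend would 
have to emit instead (a sketch for discussion, not production code):

```python
import struct

def protobuf_varint(n: int) -> bytes:
    """Protobuf varint: 7 bits per byte, least-significant group first,
    high bit set on every byte except the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def bitcoin_compactsize(n: int) -> bytes:
    """Bitcoin CompactSize: one raw byte for small values, otherwise a
    marker byte followed by a fixed-width little-endian integer."""
    if n < 0xFD:
        return struct.pack("<B", n)
    if n <= 0xFFFF:
        return b"\xfd" + struct.pack("<H", n)
    if n <= 0xFFFFFFFF:
        return b"\xfe" + struct.pack("<I", n)
    return b"\xff" + struct.pack("<Q", n)

# The same value takes different bytes on each wire:
assert protobuf_varint(300) == b"\xac\x02"
assert bitcoin_compactsize(300) == b"\xfd\x2c\x01"
```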

I have not yet looked deeply at the extension points within protocol 
buffers to assess the feasibility of this idea. I have seen that protoc 
supports plugins [1], but don't know whether anyone has gone so far with 
them as to replace fundamental assumptions about wire format. I have also 
noticed Custom Options [2], which may help in expressing particular 
quirks or nuances of the existing protocol within .proto files.

At this point, I'd simply like to see whether anyone has been down this 
road before, and whether there are reasons for dismissing the idea 
completely before digging in too much further.

- Chris

P.S: Please note that in posting this question I am in no way presuming to 
represent the Bitcoin Core development team.

[1]: 
https://developers.google.com/protocol-buffers/docs/reference/cpp/google.protobuf.compiler.plugin.pb
[2]: https://developers.google.com/protocol-buffers/docs/proto#options



[protobuf] On the wire compatibility with protoc options

2014-03-19 Thread Chris Large
Hi folks,

I'm exploring the new javanano_out option available from the repository, 
since I have some big issues right now with the size of the generated Java 
code for the protobuf messages.
My question is, is the wire format affected by the options you use in the 
protoc compiler?  For example, if I have a server that uses full blown 
proto definitions (built with --java_out) but I have a limited client where 
I want to use --javanano_out will they interoperate?  I'm assuming that the 
format on the wire is not affected and the two will talk just fine but 
wanted to be sure and didn't have an easy way to test this.
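
[Editor's note] One way to convince yourself without two full builds: the 
bytes for a given field are fixed by the encoding spec, not by the generator. 
For example, a varint field with number 1 and value 150 must serialize to 
`08 96 01` whether the classes came from --java_out, --javanano_out, or any 
other conformant runtime. A pure-Python sketch of the tag-plus-varint rule 
(illustrative only):

```python
def encode_varint_field(field_number: int, value: int) -> bytes:
    """Serialize a varint-typed field the way the protobuf encoding spec
    dictates: tag = (field_number << 3) | wire_type, then the value."""
    def varint(n: int) -> bytes:
        out = bytearray()
        while True:
            out.append((n & 0x7F) | (0x80 if n > 0x7F else 0))
            n >>= 7
            if not n:
                return bytes(out)
    tag = (field_number << 3) | 0  # wire type 0 = varint
    return varint(tag) + varint(value)

# Field 1, value 150: the canonical example from the encoding docs.
assert encode_varint_field(1, 150) == b"\x08\x96\x01"
```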

Thanks.



[protobuf] Re: Dependencies passed to FileDescriptor.buildFrom() don't match those listed in the FileDescriptorProto.

2014-03-14 Thread Chris Johnson
Hi Nik,
Were you ever able to resolve this? I am looking to do the same and I 
cannot seem to get descriptor.proto to be included. It is easy to do on the 
command line, but not with the plugin.
Thanks!
Thanks!



[protobuf] Fast Native C Protocol Buffers from Python

2014-03-14 Thread Chris Diehl
Hi Everyone, 

I found the following post from 2011 about accelerating decoding by setting 
PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp. 
Does this still represent current practice for accelerating performance, or 
is there no need to do anything now after installing the Python package? 
http://yz.mit.edu/wp/fast-native-c-protocol-buffers-from-python/
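
[Editor's note] The switch described in that post is still just an environment 
variable, and it must be set before the first google.protobuf import. The 
check below degrades gracefully when protobuf is not installed; note that 
api_implementation is an internal module whose location may change between 
releases:

```python
import os

# Must be set before any `google.protobuf` import is executed:
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "cpp"

def active_backend() -> str:
    """Report which backend the protobuf runtime selected."""
    try:
        from google.protobuf.internal import api_implementation
        return api_implementation.Type()  # e.g. 'cpp' or 'python'
    except ImportError:
        return "protobuf not installed"
```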

Thanks for your help, 

Chris



Re: [protobuf] segfault when deserializing under high concurrency

2013-10-16 Thread chris
I haven't reported it yet. Getting gdb working with java is kind of a 
PITA, but I'll probably have to break down and do it.

Chris

On Tuesday, October 15, 2013 11:14:41 PM UTC-7, Marc Gravell wrote:

 Well, there isn't much I can do to avoid underlying mono bugs. Have you 
 reported it?

 Marc
 On 16 Oct 2013 05:07, ch...@ochsnet.com wrote:

 I was using 668, but this is a mono bug with reading streams; I confirmed 
 it by just reading from the stream myself.

 Rather annoying; I updated to the latest mono 3 master because of a 
 threading bug that was fixed 2 weeks ago, and now this :)

 Chris

 On Tuesday, October 15, 2013 10:45:46 AM UTC-7, Marc Gravell wrote:

 Firstly: exactly what version is this? There was a bug in 663 relating 
 to threading (which only exhibited after extended usage) that was fixed 
 in something like 668. If you are using something >= 663 and < 668, then 
 please update and retry.

 Marc
 (protobuf-net)
 On 15 Oct 2013 16:22, ch...@ochsnet.com wrote:

 So part of this is a mono bug; segfaults shouldn't be happening in 
 managed code, period.  

 I'm embedding mono 3 in a java app and consistently get the following 
 when deserializing a simple protobuf message.

 Stacktrace:

   at unknown 0x
   at (wrapper managed-to-native) System.Buffer.BlockCopyInternal 
 (System.Array,int,System.Array,int,int) 0x
   at System.Buffer.BlockCopy (System.Array,int,System.Array,int,int) 0x0006b
   at System.IO.MemoryStream.Read (byte[],int,int) 0x000ff
   at ProtoBuf.ProtoReader.Ensure (int,bool) 0x00237
   at ProtoBuf.ProtoReader.TryReadUInt32VariantWithoutMoving (bool,uint) 0x00043
   at ProtoBuf.ProtoReader.TryReadUInt32Variant (uint) 0x0001f
   at ProtoBuf.ProtoReader.ReadFieldHeader () 0x00057
   at (wrapper dynamic-method) 
 com.game_machine.entity_system.generated.Entity.proto_2 (object,ProtoBuf.ProtoReader) 0x02802
   at ProtoBuf.Serializers.CompiledSerializer.ProtoBuf.Serializers.IProtoSerializer.Read 
 (object,ProtoBuf.ProtoReader) 0x0003f
   at ProtoBuf.Meta.RuntimeTypeModel.Deserialize (int,object,ProtoBuf.ProtoReader) 0x00150
   at ProtoBuf.Meta.TypeModel.DeserializeCore (ProtoBuf.ProtoReader,System.Type,object,bool) 0x00064
   at ProtoBuf.Meta.TypeModel.Deserialize 
 (System.IO.Stream,object,System.Type,ProtoBuf.SerializationContext) 0x0009b
   at ProtoBuf.Meta.TypeModel.Deserialize (System.IO.Stream,object,System.Type) 0x0001f
   at ProtoBuf.Serializer.Deserialize<T> (System.IO.Stream) 0x00043
   at GameMachine.Actor.ByteArrayToEntity (byte[]) 0x00047
   at GameMachine.TestActor.OnReceive (byte[]) 0x0006b
   at (wrapper runtime-invoke) Module.runtime_invoke_void__this___object 
 (object,intptr,intptr,intptr) 0x


 This goes away if I just run in a single thread.  It also takes an 
 average of 10,000 iterations or so to trigger this error.  I also tried 
 wrapping all deserialization calls in a mutex but that had no effect.

 The threading model is such that none of the objects I am deserializing 
 are accessed concurrently by different threads.

 Is protobuf-net completely reentrant, or does it try to reuse objects 
 anywhere? 

 This could very well all be a mono bug, but thought I would check here 
 first to see if someone had any ideas.

 Chris

 -- 
 You received this message because you are subscribed to the Google 
 Groups Protocol Buffers group.
 To unsubscribe from this group and stop receiving emails from it, send 
 an email to protobuf+u...@googlegroups.com.
 To post to this group, send email to prot...@googlegroups.com.
 Visit this group at http://groups.google.com/group/protobuf.
 For more options, visit https://groups.google.com/groups/opt_out.






[protobuf] segfault when deserializing under high concurrency

2013-10-15 Thread chris
So part of this is a mono bug; segfaults shouldn't be happening in managed 
code, period.

I'm embedding mono 3 in a java app and consistently get the following when 
deserializing a simple protobuf message.

Stacktrace:

  at unknown 0x
  at (wrapper managed-to-native) System.Buffer.BlockCopyInternal 
(System.Array,int,System.Array,int,int) 0x
  at System.Buffer.BlockCopy (System.Array,int,System.Array,int,int) 
0x0006b
  at System.IO.MemoryStream.Read (byte[],int,int) 0x000ff
  at ProtoBuf.ProtoReader.Ensure (int,bool) 0x00237
  at ProtoBuf.ProtoReader.TryReadUInt32VariantWithoutMoving (bool,uint) 
0x00043
  at ProtoBuf.ProtoReader.TryReadUInt32Variant (uint) 0x0001f
  at ProtoBuf.ProtoReader.ReadFieldHeader () 0x00057
  at (wrapper dynamic-method) 
com.game_machine.entity_system.generated.Entity.proto_2 
(object,ProtoBuf.ProtoReader) 0x02802
  at 
ProtoBuf.Serializers.CompiledSerializer.ProtoBuf.Serializers.IProtoSerializer.Read
 
(object,ProtoBuf.ProtoReader) 0x0003f
  at ProtoBuf.Meta.RuntimeTypeModel.Deserialize 
(int,object,ProtoBuf.ProtoReader) 0x00150
  at ProtoBuf.Meta.TypeModel.DeserializeCore 
(ProtoBuf.ProtoReader,System.Type,object,bool) 0x00064
  at ProtoBuf.Meta.TypeModel.Deserialize 
(System.IO.Stream,object,System.Type,ProtoBuf.SerializationContext) 
0x0009b
  at ProtoBuf.Meta.TypeModel.Deserialize 
(System.IO.Stream,object,System.Type) 0x0001f
  at ProtoBuf.Serializer.DeserializeT (System.IO.Stream) 0x00043
  at GameMachine.Actor.ByteArrayToEntity (byte[]) 0x00047
  at GameMachine.TestActor.OnReceive (byte[]) 0x0006b
  at (wrapper runtime-invoke) Module.runtime_invoke_void__this___object 
(object,intptr,intptr,intptr) 0x


This goes away if I just run in a single thread.  It also takes an average 
of 10,000 iterations or so to trigger this error.  I also tried wrapping 
all deserialization calls in a mutex but that had no effect.

The threading model is such that none of the objects I am deserializing are 
accessed concurrently by different threads.

Is protobuf-net completely reentrant, or does it try to reuse objects 
anywhere? 

This could very well all be a mono bug, but thought I would check here 
first to see if someone had any ideas.

Chris



Re: [protobuf] segfault when deserializing under high concurrency

2013-10-15 Thread chris
I was using 668, but this is a mono bug with reading streams; I confirmed it 
by just reading from the stream myself.

Rather annoying, I updated to the latest mono 3 master because of a 
threading bug that was fixed 2 weeks ago, now this:)

Chris

On Tuesday, October 15, 2013 10:45:46 AM UTC-7, Marc Gravell wrote:

 Firstly: exactly what version is this? There was a bug in 663 relating to 
 threading (and which only exhibited after extended usage) that was fixed in 
 something like 668. If you are using something >= 663 and < 668 then please 
 update and retry.

 Marc
 (protobuf-net)
 On 15 Oct 2013 16:22, ch...@ochsnet.com wrote:

 So part of this is a mono bug,  segfaults shouldn't be happening in 
 managed code period.  

 I'm embedding mono 3 in a java app and consistently get the following 
 when deserializing a simple protobuf message.

 Stacktrace:

   at unknown 0x
   at (wrapper managed-to-native) System.Buffer.BlockCopyInternal 
 (System.Array,int,System.Array,int,int) 0x
   at System.Buffer.BlockCopy (System.Array,int,System.Array,int,int) 
 0x0006b
   at System.IO.MemoryStream.Read (byte[],int,int) 0x000ff
   at ProtoBuf.ProtoReader.Ensure (int,bool) 0x00237
   at ProtoBuf.ProtoReader.TryReadUInt32VariantWithoutMoving (bool,uint) 
 0x00043
   at ProtoBuf.ProtoReader.TryReadUInt32Variant (uint) 0x0001f
   at ProtoBuf.ProtoReader.ReadFieldHeader () 0x00057
   at (wrapper dynamic-method) 
 com.game_machine.entity_system.generated.Entity.proto_2 
 (object,ProtoBuf.ProtoReader) 0x02802
   at 
 ProtoBuf.Serializers.CompiledSerializer.ProtoBuf.Serializers.IProtoSerializer.Read
  
 (object,ProtoBuf.ProtoReader) 0x0003f
   at ProtoBuf.Meta.RuntimeTypeModel.Deserialize 
 (int,object,ProtoBuf.ProtoReader) 0x00150
   at ProtoBuf.Meta.TypeModel.DeserializeCore 
 (ProtoBuf.ProtoReader,System.Type,object,bool) 0x00064
   at ProtoBuf.Meta.TypeModel.Deserialize 
 (System.IO.Stream,object,System.Type,ProtoBuf.SerializationContext) 
 0x0009b
   at ProtoBuf.Meta.TypeModel.Deserialize 
 (System.IO.Stream,object,System.Type) 0x0001f
   at ProtoBuf.Serializer.DeserializeT (System.IO.Stream) 0x00043
   at GameMachine.Actor.ByteArrayToEntity (byte[]) 0x00047
   at GameMachine.TestActor.OnReceive (byte[]) 0x0006b
   at (wrapper runtime-invoke) Module.runtime_invoke_void__this___object 
 (object,intptr,intptr,intptr) 0x


 This goes away if I just run in a single thread.  It also takes an 
 average of 10,000 iterations or so to trigger this error.  I also tried 
 wrapping all deserialization calls in a mutex but that had no effect.

 The threading model is such that none of the objects I am deserializing 
 are accessed concurrently by different threads.

 Is protobuf-net completely reentrant, or does it try to reuse objects 
 anywhere? 

 This could very well all be a mono bug, but thought I would check here 
 first to see if someone had any ideas.

 Chris






[protobuf] Design question - expressing typed data for arbitrary

2013-09-16 Thread Chris Akins
Hi all.

I'm working on exposing an experimental graph database library for RPC, and 
am going with 0MQ/GPB for transport for lack of obviously better ideas and 
interest in doing an extended analysis. One of the fundamental operations 
in the database takes a value of any Java type - including POJOs and Java 
Beans - stores it according to its internal magic, and hands back a Handle 
object that can be serialized to/from a UUID and used to retrieve the 
original value. For sanity's sake I'm restricting the exposed API to types 
handled by GPB, which gives me the appended nastiness in my .proto file and 
a bunch of stupid boilerplate code on every end to do what GPB tries to do 
for me already. Is this caused by having to work around a very liberal API, or 
am I missing something obvious and abusing the facilities?

Thanks,
Chris

enum MsgDataType{
double=0;
float=1;
int32=2;
int64=3;
uint32=4;
uint64=5;
sint32=6;
sint64=7;
fixed32=8;
fixed64=9;
sfixed32=10;
sfixed64=11;
bool=12;
string=13;
bytes=14;
repeated=15;
}
message MsgData{
required MsgDataType type=1;
optional double=0;
optional float=1;
optional int32=2;
optional int64=3;
optional uint32=4;
optional uint64=5;
optional sint32=6;
optional sint64=7;
optional fixed32=8;
optional fixed64=9;
optional sfixed32=10;
optional sfixed64=11;
optional bool=12;
optional string=13;
optional bytes=14;
repeated repeated=15;
}
message MsgRequest {
  required MsgFunction function = 1;
  repeated MsgData data = 2;
}
message MsgReply {
  repeated MsgData result =1;
  optional string error = 2;
}
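The MsgData listing above appears to have lost its field names somewhere along the way; as written, neither the enum nor the message is valid .proto syntax (type keywords such as double cannot double as identifiers, and every field needs a name). A syntactically valid proto2 sketch of the same tagged-union idea, with assumed field names, would be:

```proto
enum MsgDataType {
  DOUBLE = 0;
  INT64 = 1;
  BOOL = 2;
  STRING = 3;
  BYTES = 4;
  LIST = 5;
}

message MsgData {
  required MsgDataType type = 1;
  optional double double_value = 2;
  optional int64 int64_value = 3;
  optional bool bool_value = 4;
  optional string string_value = 5;
  optional bytes bytes_value = 6;
  repeated MsgData list_value = 7;  // the "repeated" case nests recursively
}
```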



[protobuf] Where is the (meta) .proto file which describes .desc files?

2013-09-05 Thread Chris Dew
Hi,

Where is the (meta) .proto file which describes .desc files?

I make .desc files with: protoc --descriptor_set_out=foo.desc 
--include_imports foo.proto.

Am I correct in believing that the .desc files are in protobuf format?

If so, where can I get the .proto file which describes their format?

Thanks,

Chris.

P.S. Cross posted to 
http://stackoverflow.com/questions/18636887/where-is-the-meta-proto-file-which-describes-desc-files



Re: [protobuf] Where is the (meta) .proto file which describes .desc files?

2013-09-05 Thread Chris Dew
Hi Oliver,

That works perfectly.  If you're a stackoverflow user, I'll accept your 
answer if you paste it there.

Thanks,

Chris. 

On Thursday, 5 September 2013 14:09:10 UTC+1, Oliver wrote:

 On Thu, Sep 5, 2013 at 1:52 PM, Chris Dew cms...@gmail.com wrote:

 Where is the (meta) .proto file which describes .desc files?


 I make .desc files with: protoc --descriptor_set_out=foo.desc 
 --include_imports foo.proto.



 https://code.google.com/p/protobuf/source/browse/trunk/src/google/protobuf/descriptor.proto

 Look for FileDescriptorSet.

 Oliver
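 Since the .desc output really is just a serialized FileDescriptorSet, even its
 outer layer can be peeked at by hand, which makes the answer concrete. A minimal
 Python sketch (assuming a well-formed descriptor set; real code should use the
 protoc-generated descriptor classes rather than hand-rolled varint parsing):

```python
def read_varint(buf, i):
    """Decode a base-128 varint starting at buf[i]; return (value, next_index)."""
    shift = result = 0
    while True:
        b = buf[i]
        i += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, i
        shift += 7

def file_names(desc_bytes):
    """List the file names in a serialized FileDescriptorSet.

    FileDescriptorSet has a single field: repeated FileDescriptorProto file = 1;
    and FileDescriptorProto's first field is: optional string name = 1;
    """
    names, i = [], 0
    while i < len(desc_bytes):
        tag, i = read_varint(desc_bytes, i)
        length, i = read_varint(desc_bytes, i)
        if tag >> 3 == 1 and tag & 7 == 2:  # field 1, wire type 2 (length-delimited)
            fdp = desc_bytes[i:i + length]
            if fdp[:1] == b'\x0a':  # first embedded field: name (tag 0x0A)
                nlen, j = read_varint(fdp, 1)
                names.append(fdp[j:j + nlen].decode('utf-8'))
        i += length
    return names
```

 Running file_names over the bytes of a foo.desc produced with
 --descriptor_set_out lists the .proto files it contains.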





[protobuf] protobuf_spec: RSpec matchers and Cucumber step defs for testing Protocol Buffers

2013-08-28 Thread Chris Busbey
Hi there!

We've just released an open source library for inspecting and building 
protocol buffers to be used in Cucumber scenarios.  

Check it out here:

https://github.com/connamara/protobuf_spec

We think this would be a great addition to the Third-Party Add-Ons wiki 
(https://code.google.com/p/protobuf/wiki/ThirdPartyAddOns).

Our company, Connamara Systems, is positioned in the Financial Technology 
space to deliver made-to-measure software using agile and test driven 
development methodologies.  A lot of that software depends on protocol 
buffers, and we make heavy use of Cucumber for our acceptance testing. 
 protobuf_spec leverages ruby-protobuf 
(https://github.com/macks/ruby-protobuf) and the very nifty json_spec 
(https://github.com/collectiveidea/json_spec) library for exploring 
protocol buffers with an easy-to-use path syntax.  The library has been 
hardened over many development cycles and is now in production.

If you use BDD and protocol buffers, protobuf_spec could be for you.   
Contributions are encouraged!

Chris Busbey

Connamara Systems, llc
www.connamara.com



[protobuf] Tool to generate MySQL/SQLite schemas from .proto files.

2013-08-07 Thread Chris Dew
Hi,

I'm looking for a tool which will generate MySQL/SQLite schemas from .proto 
files.

http://stackoverflow.com/questions/18082871/im-looking-for-a-tool-which-processes-proto-files-into-mysql-sqlite-schemas

Thanks,

Chris.





Re: [protobuf] Digest for protobuf@googlegroups.com - 11 Messages in 7 Topics

2013-08-06 Thread Chris Dew
Hi,

I'm looking for a tool which will generate MySQL/SQLite schemas from .proto
files.

http://stackoverflow.com/questions/18082871/im-looking-for-a-tool-which-processes-proto-files-into-mysql-sqlite-schemas

Thanks,

Chris.


On 6 August 2013 13:42, protobuf@googlegroups.com wrote:

   Today's Topic Summary

 Group: http://groups.google.com/group/protobuf/topics

- Issue 541 in protobuf: Double decode in
  google.protobuf.text_format._CUnescape [1 Update]
- ByteString using N bytes from an InputStream? [4 Updates]
- protobuf mingw error [1 Update]
- Question about size/speed of protobufs with different formats [2 Updates]
- Using Options [1 Update]
- Protocol Buffers Specification and the syntax keyword. [1 Update]
- Issue 540 in protobuf: Add protoc-gen-haxe to ThirdPartyAddOns wiki
  page, please [1 Update]

   Issue 541 in protobuf: Double decode in
 google.protobuf.text_format._CUnescape
 http://groups.google.com/group/protobuf/t/86c0c99e91cd1958

proto...@googlecode.com Aug 06 10:53AM

Status: New
Owner: 
Labels: Type-Defect Priority-Medium

New issue 541 by matt.k...@undue.org: Double decode in
google.protobuf.text_format._CUnescape
http://code.google.com/p/protobuf/issues/detail?id=541

What steps will reproduce the problem?

>>> print google.protobuf.text_format._CUnescape('\\x5c')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 691, in _CUnescape
    return result.decode('string_escape')
ValueError: Trailing \ in string

What is the expected output? What do you see instead?

The expected output is a single backslash. _CUnescape works if the
input
is instead given in octal:

>>> print google.protobuf.text_format._CUnescape('\\134')
\

When the input is given in hex the escaped backslash is unescaped
_twice_,
once in the re.sub() and once in the str.decode().

I'm not using the trunk HEAD but I can see that the issue is still
present.
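The double decode is easy to reproduce outside protobuf. A Python 3 sketch of
the same two-pass pattern (the report above is Python 2, where the
'string_escape' codec plays the role unicode_escape plays here; the function
name is illustrative):

```python
import re

def cunescape_buggy(text):
    # pass 1: the substitution already turns '\x5c' into a literal backslash
    result = re.sub(r'\\x([0-9a-fA-F]{2})',
                    lambda m: chr(int(m.group(1), 16)), text)
    # pass 2: the escape codec then tries to decode that backslash *again*;
    # a lone trailing backslash is an error, hence "Trailing \ in string"
    return result.encode('latin-1').decode('unicode_escape')

# cunescape_buggy('\\x5c') raises, exactly like the report above
```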

--
You received this message because this project is configured to send
all
issue notifications to this address.
You may adjust your notification preferences at:
https://code.google.com/hosting/settings



   ByteString using N bytes from an InputStream?
 http://groups.google.com/group/protobuf/t/1903436bc567615e

V.B. vidalborro...@gmail.com Aug 05 03:23PM -0700

Greetings all,
We are using version 2.5. What is the most efficient way (*i.e.*
single
copy operation, no extra byte arrays) to construct a ByteString from a
specific number of bytes in an InputStream? The various versions of
ByteString.readFrom() drain the stream completely, which is not what
we
need; any data past *N* bytes should remain in the stream. The
ByteString.readChunk() method looks like it will work if we simply
give it *N* as the chunkSize parameter. Unfortunately, ByteString.readChunk()
is declared private, so that method is not currently an option. Is there
another option that I just haven't found in the source code yet?

(Thanks for taking the time to read this question.)




Feng Xiao xiaof...@google.com Aug 05 04:32PM -0700

 ByteString.readChunk() method looks like it will work if we simply
give
 it *N* as the chunkSize parameter. Unfortunately,
ByteString.readChunk() is declared private, so that method is not currently
an option. Is there
 another option that I just haven't found in the source code yet?

How about creating a wrapper InputStream that only reads N bytes from the
original InputStream and providing the wrapper to ByteString.readFrom()?
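Feng Xiao's wrapper idea can be sketched in a few lines. The shape is the same
in Java (wrap the InputStream, report EOF after N bytes); shown here in Python
for brevity, with illustrative names:

```python
import io

class BoundedReader(io.RawIOBase):
    """Expose at most `limit` bytes of an underlying stream.

    A "read everything" consumer sees EOF after N bytes, and whatever
    follows stays unread in the original stream.
    """

    def __init__(self, raw, limit):
        self._raw = raw
        self._remaining = limit

    def readable(self):
        return True

    def read(self, size=-1):
        if self._remaining <= 0:
            return b""  # pretend EOF once the budget is spent
        if size is None or size < 0 or size > self._remaining:
            size = self._remaining
        chunk = self._raw.read(size)
        self._remaining -= len(chunk)
        return chunk
```

Note that this still costs one copy into the consumer's buffer, which is
exactly the remaining complaint in the follow-up below.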






V.B. vidalborro...@gmail.com Aug 05 09:28PM -0700

Hi Feng Xiao! Thanks for the response.
That's actually our backup plan. We were hoping to avoid it, though,
since the wrappers would each contain an extra copy of the data
internally.
Our ideal case is for the data to get copied in a single step directly
from
an InputStream to a ByteString with no intermediate copies along the
way.
Question: You would know best... Would the safety of ByteStrings be
preserved if the readChunk() method were to be made public? If so,
I'll
open a feature request on the issue tracker.

On Monday, August 5, 2013 7:32:49 PM UTC-4, Feng Xiao wrote:




V.B. vidalborro...@gmail.com Aug 05 09:31PM -0700

... Actually, I just now took a closer look at the readChunk() method.
Even
that method makes an internal copy, so it looks like readChunk() isn't
what
we are looking for after all. Hmmm.

On Tuesday, August 6

[protobuf] implications of compiling protocol buffers without stl debugging info

2012-08-08 Thread Chris Morris
I have some file. Let's call it *Msg.proto*
I use Google's protoc.exe compiler to take my proto file and it generates a 
*Msg.h* file, which contains the definition for a *Msg* class.
When I *delete* a *Msg* object it can take a really long time to deallocate the 
memory (when the debugger is attached). This is because it is using the STL 
debug library. So, I want to disable STL debugging when I *delete* *Msg* 
objects, but 
I want to keep STL debugging *for the rest of my project*. This leads me to 
consider compiling the protocol buffers project without STL debugging info.

What are the implications of this?

This means that all *Msg* objects will not have STL debugging info, right? 
Consequently, no STL objects created with STL debugging info can be passed 
into any of *Msg's* functions/constructors/etc, right?




[protobuf] Re: implications of compiling protocol buffers without stl debugging info

2012-08-08 Thread Chris Morris
I'm updating my question:

I have some file. Let's call it *Msg.proto*

I use Google's Protocol Buffer protoc.exe compiler to take my proto file 
and it generates a *Msg.h* file, which contains the definition for a *Msg*
 class.

When I delete a *Msg* object it can take a really long time to deallocate the 
memory (when the debugger is attached). This is because it is using the STL 
debug library. So, I want to disable STL debugging when I delete *Msg* objects, 
but I want to keep STL debugging for the rest of my project. This leads me 
to consider turning off STL debugging info for *Msg.h* and for the Google 
Protocol Buffer project (because this is used by the *Msg* class, and only 
by the *Msg* class).

What are the implications of this?

What I'm guessing is:

   1. no STL objects created with STL debugging info can be passed into any 
   of *Msg*'s functions/constructors/etc, because that would mean that an 
   STL object created w/ one version of the STL library is then being passed 
   to a portion of the code that uses a different version of the STL library
   2. others?




Re: [protobuf] implications of compiling protocol buffers without stl debugging info

2012-08-08 Thread Chris Morris
Let's pretend that file *X* is neither *Msg.h* nor any file in the Google 
Protocol Buffer library. And let's say that file *Y* is either *Msg.h* or 
some file in the Google Protocol Buffer library. In this case, any STL 
object created in *X* *cannot* be passed to *Y*, and vice versa, correct?

The way to use iterator debugging or not is with the following 
symbol: _HAS_ITERATOR_DEBUGGING

Are you saying that whether or not this symbol is set to 0 or 1 will cause 
a different version of the C++ runtime library to be used?




Re: [protobuf] implications of compiling protocol buffers without stl debugging info

2012-08-08 Thread Chris Morris
So how do I ensure that the STL containers are destructed w/ the proper STL 
library?
 

 Let me second this. Microsoft themselves is very clear that if the 
 destructor doesn't do its cleanup on an STL container that was built 
 with debug features, bad things will happen. 





Re: [protobuf] protobuf == SQL

2012-05-23 Thread Chris Dew
I would use:

protoc --descriptor_set_out=foo.desc --include_imports foo.proto

to generate a description of the protocol in protobuf format.

Then I would transform every message type into a table.

If you want more information, email me at cms...@gmail.com

All the best,

Chris.
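The message-to-table step described above can be sketched as a toy generator.
This version assumes a hand-written (field name, proto type) list per message
rather than a parsed descriptor, and the type mapping is illustrative:

```python
# illustrative proto-scalar -> SQL type mapping
SQL_TYPES = {
    "int32": "INTEGER", "int64": "INTEGER", "sint32": "INTEGER",
    "sint64": "INTEGER", "bool": "INTEGER",
    "double": "REAL", "float": "REAL",
    "string": "TEXT", "bytes": "BLOB",
}

def create_table(message_name, fields):
    """Emit a CREATE TABLE statement for one message type."""
    cols = ", ".join(
        f"{name} {SQL_TYPES.get(ptype, 'BLOB')}" for name, ptype in fields
    )
    return f"CREATE TABLE {message_name} ({cols});"
```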

On Wednesday, 21 April 2010 19:53:03 UTC+1, ury wrote:

 Hi, 

 I wonder if someone has an idea for a generic way of storing protobuf 
 messages in an SQL database (not as a blob, but in a way the data can 
 be queried with SQL statements) and how to construct the protobuf 
 messages back from the SQL database. 

 Thank you 







[protobuf] How do you use extensions across multiple .proto files?

2012-01-05 Thread Chris Dew
How do you use extensions across multiple .proto files?

http://code.google.com/apis/protocolbuffers/docs/proto.html#extensions

see line:  // This can even be in a different file.

I've tried:
protoc -I=./ --java_out=./ ./wrapper.proto ./attach.proto

but get the error: attach.proto:6:8:
Wrapper is not defined.

I feel I need some form of #include, but have not seen any
documentation for this.

Thanks,

Chris.


wrapper.proto:

package pbtest;

option java_package = "pbtest";
option java_outer_classname = "WrapperProtos";

message Wrapper {
  required string address = 1;
  optional sfixed64 timestamp = 2;
  required int64 sequence = 3;
  optional bool ack_not_required = 4;
  extensions 7 to 100;
}


attach.proto:

package pbtest;

option java_package = "pbtest";
option java_outer_classname = "WrapperProtos";

extend Wrapper {
  optional Attach attach = 8;
}

message Attach {
  required sfixed32 pv = 1;
  required sfixed32 attempt = 2;
}
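The missing piece in the error above is almost certainly an import: .proto has no #include, but it does have an import statement, and extending a message defined in another file requires importing that file. A sketch of attach.proto under that assumption (also giving it its own java_outer_classname, since two files generating the same outer class into one Java package would collide):

```proto
package pbtest;

import "wrapper.proto";  // makes pbtest.Wrapper visible in this file

option java_package = "pbtest";
option java_outer_classname = "AttachProtos";  // assumed name, distinct from WrapperProtos

extend Wrapper {
  optional Attach attach = 8;
}

message Attach {
  required sfixed32 pv = 1;
  required sfixed32 attempt = 2;
}
```

With the import in place, `protoc -I=./ --java_out=./ wrapper.proto attach.proto` should resolve Wrapper.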




[protobuf] release date for next version

2011-08-30 Thread Chris Morris
When is the next version coming out?




Re: [protobuf] Out-of-date Extensions Documentation?

2011-07-26 Thread Chris Morris
So should SetExtension() be explicitly listed in the Language Guide, or 
not? 




Re: [protobuf] Failed test from tests.exe and lite-test.exe

2011-07-26 Thread Chris Morris
Silly me. Of course. I can't believe I missed that. Thanks.




Re: [protobuf] Out-of-date Extensions Documentation?

2011-07-26 Thread Chris Morris
Fair enough.




Re: [protobuf] Scala protocol buffers protoc plugin

2010-09-04 Thread Chris Kuklewicz
Required fields having an explicit default do not affect serialization
but may affect how a language instantiates a message.

I wrote the Haskell version of protocol-buffers.  And I used the Maybe
type constructor for optional fields.  When serializing this means all
Nothing fields can be efficiently detected and omitted and when
de-serializing the omitted fields can be efficiently assigned Nothing.

There is a type class MessageAPI that provides uniform access to all
fields (including extension keys) with a getVal method.  The
MessageAPI also defines an isSet test that returns True or False.
The getVal look for a (Just value) in the message, then an explicit
default (Just value) then uses the default-default value.

In writing this e-mail it has occurred to me that another API function
might be useful.  A function that converted a message with optional
types to a message where there are no missing optional types.  Perhaps
this would make a sibling Message type with all the Maybe removed, but
perhaps just using the same Message type and guaranteeing that there are
no Nothing entries would be useful enough.  Hmmm



On 03/09/2010 02:34, Jeff Plaisance wrote:
 Does default on required fields currently have a meaning?  It compiles
 but I'm not sure if there are cases where the default is ever usable. 
 Maybe required with default specified could mean use the default if not
 explicitly set.  This change would also allow you to get rid of unused
 required fields over 2 releases.  This might be too serious of an api
 change at this point, but it seems like required has fallen out of favor
 anyway, so maybe this wouldn't be too bad.
 
  On Tue, Aug 24, 2010 at 4:47 PM, Kenton Varda ken...@google.com wrote:
 
  On Wed, Aug 18, 2010 at 7:07 AM, Jeff Plaisance jeffplaisa...@gmail.com wrote:
 
 It seems like the issue here is that optional has been
 overloaded to mean two different things:
 
 1) Not really optional but in order to do rolling upgrades we're
 making it optional.  The default should be used if it is not
 set.  In my opinion, in this case there should be no has
 method because either its result is irrelevant or it is being
 used to overload some other meaning on top of optional.
 2) Optional in the sense of Option, Maybe, Nullable, empty, can
 be null, whatever you want to call it.  In my opinion this
 should be encapsulated in the type so that the programmer is
 forced to handle all possible cases.  The has method should not
 be used for this because it is too easy to ignore.
 
 
 Yes, I think you're right, and I see how it makes sense to
 distinguish these two by the presence or absence of an explicit
 default value.
 
 
 -- 
 You received this message because you are subscribed to the Google
 Groups Protocol Buffers group.
 To post to this group, send email to proto...@googlegroups.com.
 To unsubscribe from this group, send email to
 protobuf+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/protobuf?hl=en.




[protobuf] Announcing Haskell protocol-buffers 1.8.0

2010-09-03 Thread Chris Kuklewicz
I have fixed a few reported bugs and I am happy to release the Haskell
version of protocol buffers, version 1.8.0

This is split across three Haskell packages:
http://hackage.haskell.org/package/protocol-buffers
http://hackage.haskell.org/package/protocol-buffers-descriptor
http://hackage.haskell.org/package/hprotoc

All three come from a darcs source control repository at
http://code.haskell.org/protocol-buffers/

What is new in 1.8.0 ?

Submitted bug fixes!
Fix for compiling generated haskell that uses packed fields.
Fix to mangling default value Enum names.
Fix for using group when in plug-in mode.

What is new in 1.7.0 ?

Since version 1.7.0 this can operate in plugin as well as standalone mode,
thanks to a patch from George van den Driessche.  To use as a plugin: copy
the hprotoc binary to be named protoc-gen-haskell (not a symlink) and execute it as

 /opt/protobuf-2.3.0/bin/protoc --plugin=./protoc-gen-haskell
--haskell_out=DirOut test.proto

Cheers,
  Chris Kuklewicz




[protobuf] ANN: Haskell protocol buffers 1.6.0

2010-01-26 Thread Chris Kuklewicz
I am pleased to announce that I have updated the Haskell language
version of the protocol buffers library and .proto compilation tool to
version 1.6.0.

The new versions are on hackage in three pieces:

http://hackage.haskell.org/package/protocol-buffers
http://hackage.haskell.org/package/protocol-buffers-descriptor
http://hackage.haskell.org/package/hprotoc

This version is now caught up with the official protobuf-2.3.0 release.
 The highlights of the changes are (cribbing from Kenton's announcement):

   General
   * Parsers for repeated numeric fields now always accept both packed and
 unpacked input.  The [packed=true] option only affects serializers.
 Therefore, it is possible to switch a field to packed format without
 breaking backwards-compatibility -- as long as all parties are using
 protobuf 2.3.0 or above, at least.

and

   * inf, -inf, and nan can now be used as default values for float and double
 fields.

have been added to 1.6.0.

I did not add support for plugin code generators or for writing directly
to a compressed zip or jar file.  No service related code is ever
generated so the option *_generic_services changes were ignored.

Cheers,
  Chris Kuklewicz




Re: Haskell version and [protobuf] 2.3.0 released

2010-01-17 Thread Chris Kuklewicz
Thanks for the reply, Kenton.  Another issue occurs to me as I get ready
to shut off for the night.  Reading repeated extension keys gets more
annoying, especially for keys not known at code generation time.  I will
sleep on it and then maybe the details will be clear.


On 16/01/2010 20:32, Kenton Varda wrote:
 Have you considered refactoring your compiler into the form of a code
 generator plugin?  Then you would not have to maintain your own parser
 anymore, you'd get the .zip output feature for free, and you could add
 insertion points to your code for someone else to extend it to support
 an RPC system.
 
  On Sat, Jan 16, 2010 at 7:47 AM, Chris Kuklewicz turingt...@gmail.com wrote:
 
 A question for Kenton or someone else who may know: Since repeated
 fields can be read as packed or unpacked, does the packed serializer
 ever use the unpacked format when it would be more efficient?  Saving a
  single packed datum is more verbose than a single unpacked datum.
 
 
 No, the official implementations do not do this.  A couple arguments
 against:
 - People who have to interact with pre-2.3.0 code cannot use such an
 optimization, so it would have to be optional, which it probably isn't
 worth.
 - The optimization you describe would only be useful in the one-element
 case, and in that case it would only save one byte.  Since this case is
 probably relatively unlikely for packed repeated fields (which are
 typically large), the extra overhead of simply checking for this case
 probably isn't worth the savings it would bring.





Haskell version and [protobuf] 2.3.0 released

2010-01-16 Thread Chris Kuklewicz
I want to mention that I will be updating the Haskell version of
protocol-buffers to be compatible with the new protobuf-2.3.0 release.

If people are interested in it getting updated sooner then I will get it
updated a bit quicker, otherwise it will take a little while.

A question for Kenton or someone else who may know: Since repeated
fields can be read as packed or unpacked, does the packed serializer
ever use the unpacked format when it would be more efficient?  Saving a
single packed datum is more verbose than a single unpacked datum.

I have annotated the protobuf changes below with how it will affect the
Haskell version:

On 09/01/2010 00:51, Kenton Varda wrote:
 I've pushed the final release of Protobuf 2.3.0:
 
 http://code.google.com/p/protobuf/downloads/list
 
 Documentation updates are still in review but I hope to have them up Monday.
 
 2009-01-08 version 2.3.0:
 
   General
   * Parsers for repeated numeric fields now always accept both packed and
 unpacked input.  The [packed=true] option only affects serializers.
 Therefore, it is possible to switch a field to packed format without
 breaking backwards-compatibility -- as long as all parties are using
 protobuf 2.3.0 or above, at least.

This will be made compatible.  It will even be just as efficient.
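For concreteness, the difference between the two wire formats can be sketched by hand in a few lines of Python. The field number and values are arbitrary, and the varint encoder is hand-rolled rather than any library API; this also shows the one-byte cost for a single small packed element discussed in the thread above:

```python
def varint(n):
    # Encode a non-negative int as a protobuf base-128 varint.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def encode_unpacked(field, values):
    # Unpacked: one key (wire type 0 = varint) before every element.
    key = varint((field << 3) | 0)
    return b"".join(key + varint(v) for v in values)

def encode_packed(field, values):
    # Packed: one key (wire type 2 = length-delimited), a length, then
    # all elements concatenated.
    payload = b"".join(varint(v) for v in values)
    return varint((field << 3) | 2) + varint(len(payload)) + payload

# A single small element: packed costs one extra byte (the length prefix).
assert encode_unpacked(1, [7]) == b"\x08\x07"
assert encode_packed(1, [7]) == b"\x0a\x01\x07"
```

For large repeated fields the per-element key bytes of the unpacked form dominate, which is why packed is the win in the typical case.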

   * The generic RPC service code generated by the C++, Java, and Python
 generators can be disabled via file options:
   option cc_generic_services = false;
   option java_generic_services = false;
   option py_generic_services = false;
 This allows plugins to generate alternative code, possibly specific to 
 some
 particular RPC implementation.

The Haskell version does no RPC code generation already.  It parses and
comprehends the specification but does nothing with it.

 
   protoc
   * Now supports a plugin system for code generators.  Plugins can generate
 code for new languages or inject additional code into the output of other
 code generators.  Plugins are just binaries which accept a protocol buffer
 on stdin and write a protocol buffer to stdout, so they may be written in
 any language.  See src/google/protobuf/compiler/plugin.proto.
 **WARNING**:  Plugins are experimental.  The interface may change in a
 future version.

This is not going to be added to the Haskell version.

   * If the output location ends in .zip or .jar, protoc will write its output
 to a zip/jar archive instead of a directory.  For example:
   protoc --java_out=myproto_srcs.jar --python_out=myproto.zip 
 myproto.proto
 Currently the archive contents are not compressed, though this could 
 change
 in the future.

This is not going to be added to the Haskell version, barring real demand.

   * inf, -inf, and nan can now be used as default values for float and double
 fields.

This will be added to the Haskell version.  (I added to the Lexer a
special case to recognize -inf as a single token.)





[protobuf] Re: protoc plugin compiler extension framework

2010-01-06 Thread Chris
Is the plugin framework already part of 2.3.0? I can't find any
documentation for this new feature besides some early brainstorming
posts.

On Dec 22 2009, 7:28 pm, Kenton Varda ken...@google.com wrote:
 The plugin framework is not meant for this.  Plugins can only insert code at
 points that have explicitly been declared by the original generator.  For
 example, in Java, the code generator generates one insertion point in each
 class.  So, you can add new methods to a message type, but you cannot stick
 javadoc comments on the existing methods.

 I think that a system which let you arbitrarily edit the generated code
 would be too fragile -- any change to the code generator would potentially
 break plugins.  In fact, I'm even worried that the current system is risky
 because it allows plugins to get access to private members which could
 change, but I don't see any way around that.

 All this said, I think it would be great if the protocol compiler supported
 some format for documentation comments and automatically copied those
 comments into the generated code.  But no one has actually worked on this
 yet.

  On Tue, Dec 22, 2009 at 6:42 AM, Christopher Piggott cpigg...@gmail.com wrote:

  Hmm maybe I can use the UninterpretedOption message to do this.
  Would something like this work?

  message ChrisMessage {
   option javadoc = "This is an object representing Chris's Message";
   repeated int32 field1 = 1 [javadoc = "This is a javadoc for field 1"];
   repeated int32 field2 = 2 [javadoc = "This is a javadoc for field 2"];
  }

  Then write a plug-in that finds those and writes the ones whose
  NamePart.equals("javadoc") in as a /** comment */

  Possible?







[protobuf] Message References

2009-12-08 Thread Chris
Hey,

there are some threads in this group hinting at special solutions in
cases where you want some kind of references built in to - or on top
of - protobuf messages, for example to model DAG (or even cyclic
graph) structures.
I was thinking about how I would want such techniques to be
implemented/usable and came up with this:

For messages types A and B where A messages are supposed to have
references to B messages:

message A {
   required int32 ownvalue = 1;
   required ptr_B bref = 2;
}
message B{
   required bytes UUID = 1;
   required int32 othervalue = 2;
}
message ptr_B {
   required bytes refUUID = 1;
}

where ptr_B obviously is a reference variable message to a specific
B message (where the join, of course, is on refUUID and UUID).

So far this is no problem, as I can easily have a script generate
ptr_X messages for each message X containing a defined attribute (e.g.
required bytes UUID).
The second part however is where I am not so sure. I would want to
have some type safe way of navigating/dereferencing those references.
Like:

A a; // this is what I received before
ptr_B reference = a.getBref(); // get the reference
B b = foo(reference); // dereference

A standard protocol buffer message type ptr_B, from which a flat Java
class will be generated, does not seem adequate to do so.
Instead, some new abstract class RefMessage<T> extends Message with a
method T getReferencedObject() would be the solution. ptr_X
messages would simply have to be transformed to Java classes extending this
superclass.

My questions so far:
I could write my own code generator for that purpose, right?
Has anybody tried to implement some similar techniques for reference
types on top of protocol buffer?
Is there a completely different approach for this problem?
Am I getting the design and purpose of protobuf wrong when I need/want
such a solution?

Chris





[protobuf] Re: Message References

2009-12-08 Thread Chris
Thanks for the replies so far.
To explain a little more what I need all this for probably gives a
better overview:

Basically I want to write a framework for mobile devices
(Android, iPhone, ...) to easily enable data exchange between a device
and some server component(s). On top of this framework one could
implement one's business logic/application without worrying about how
data actually gets from the server to the device. So part of the whole
idea is that only parts of the total object world would be
physically present on the device. I thought I could use protobuf for
the underlying data transfer method as it's platform independent,
light and fast.
So I don't want to just model graphs as such, but rather arbitrary
dependency/attribute relations between objects. That's what I need
references for, and that's why a complete lookup table won't do for
me...

@Marc:
I don't think that the unique IDs will be a problem in my case, as the
server could enforce uniqueness.
Your concerns about the cache may be right - that's one of the things
I will be researching (context-aware/probabilistic prefetching).
Interop isn't really an issue for me either: all clients will be known
and under control anyway...

On Dec 8, 6:58 pm, Adam Vartanian flo...@google.com wrote:
  A standard protocol buffer message type ptr_B, from which a flat Java
  class will be generated, does not seem adequate to do so.
  Instead, some new abstract class RefMessage<T> extends Message with a
  method T getReferencedObject() would be the solution. ptr_X
  messages would simply have to be transformed to Java classes extending this
  superclass.

  My questions so far:
  I could write my own code generator for that purpose, right?
  Has anybody tried to implement some similar techniques for reference
  types on top of protocol buffer?
  Is there a completely different approach for this problem?
  Am i getting the design and purpose of protobuf wrong when I need/want
  such a solution?

 When I've seen a similar thing done, it's generally been done like this:

 message A {
   required int32 ownvalue = 1;
   required int32 bref = 2;

 }

 message B {
   required int32 othervalue = 1;

 }

 message Overall {
   repeated B b = 1;
   repeated A a = 2;

 }

 And then you do

 Overall overall;
 A a;
 B b = overall.getB(a.getBref());

 Basically, you first send a lookup table, then you send the remaining
 items, with references as indexes into the table.  To serialize a
 graph using that technique, you could send a list of nodes and then a
 list of edges.  I don't know that it's the best way of doing it, but
 it's worked out well where I've seen it.

 - Adam





[protobuf] Receiving/Parsing Messages

2009-12-03 Thread Chris
Hey,

I just started looking into protobuf for a project of mine.
From the Java API page I could not really find how to parse a
generated (the compiled .proto is present) but unknown message.

So for example: I have message types MessageA and MessageB. The
client component receives some bytes representing a message of type A
OR B. Do I have to add information about the type of message that's been
sent, or is there an easy way of automatically parsing the message
like:

byte[] b; // holds the byte representation of the message
Message message = foo(b); // parse message
if (message instanceof MessageA)
  System.out.println("was type A");
else
  System.out.println("was type B");

Thank you in advance,
 Chris





[protobuf] Re: Receiving/Parsing Messages

2009-12-03 Thread Chris
I see.

Another option is to send some kind of identifier preceding the
message (just like the total size of the message), right?
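That framing could look like the following sketch: a fixed-size prefix carrying the length and a one-byte type identifier ahead of the serialized bytes. The type-id assignments and helper names are made up for illustration:

```python
import struct

# Illustrative type ids; real code would assign one per message type.
TYPE_A, TYPE_B = 1, 2

def frame(type_id: int, payload: bytes) -> bytes:
    # 4-byte big-endian length + 1-byte type id, then the message bytes.
    return struct.pack(">IB", len(payload), type_id) + payload

def unframe(data: bytes):
    size, type_id = struct.unpack(">IB", data[:5])
    return type_id, data[5:5 + size]

msg = b"\x08\x01"  # stand-in for some serialized message bytes
t, body = unframe(frame(TYPE_B, msg))
assert (t, body) == (TYPE_B, msg)
```

The receiver then dispatches on the type id to the right `parseFrom`, avoiding the wrapper-message approach entirely.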

 -- Chris

On Dec 3, 5:23 pm, Adam Vartanian flo...@google.com wrote:
  I just started looking into protobuf for a project of mine.
  From the Java Api page I could not really find how to parse a
  generated (compiled .proto is present) but unknown message.

  So for example: I have messages types MessageA and MessageB. The
  client component receives some bytes representing a message of type A
  OR B. Do I have to add information of the type of message that's been
  send, or is there an easy way of automatically parsing the message
  like:

  byte[] b; // holds the byte representation of the message
  Message message = foo(b); // parse message
  if (message instanceof MessageA)
   System.out.println("was type A");
  else
   System.out.println("was type B");

 No, there's no way to do this, because the wire format doesn't include
 information about the type of message.  It's even possible that the
 same set of bytes could be a valid message of both types.

 The usual way to handle this is to create a wrapper message that can
 hold either, like so:

 // Only one field may be filled out
 message AorB {
   optional MessageA message_a = 1;
   optional MessageB message_b = 2;
 }

 And then you can parse it via:

 AorB result = AorB.parseFrom(data);
 if (result.hasMessageA()) {
   System.out.println("Was type A");
 } else if (result.hasMessageB()) {
   System.out.println("Was type B");
 }

 - Adam





ANN Haskell version 1.5.0 released

2009-06-15 Thread Chris Kuklewicz

Hello all,

  I have just uploaded version 1.5.0 of the Haskell version to hackage.
 The links for the three pieces are:

http://hackage.haskell.org/package/protocol-buffers
http://hackage.haskell.org/package/protocol-buffers-descriptor
http://hackage.haskell.org/package/hprotoc

This catches up to Google's version 2.1.0, as described below:

  * Support for packed repeated fields of primitive types (good arrays!).
    Note that using [packed=true] on an invalid field type will generate an error.
  * NO support yet for the *_FIELD_NUMBER style constants
  * It is now an error to define a default value for a repeated field.
  * Fields can now be marked deprecated (does nothing)
  * The type name resolver will no longer resolve type names to fields.
    Note that this applies to the types of both normal and extension fields.

A lexer bug was found and fixed by George van den Driessche, triggered
when a numeric literal in a proto file was followed immediately by a
newline character.

Cheers,
  Chris




Haskell implementation status

2009-05-19 Thread Chris

As for the improved name resolution:

Kenton Varda wrote:
 On Sun, May 17, 2009 at 6:57 AM, Chris Kuklewicz turingt...@gmail.com wrote:


 What do people think?


 You're right, this should have been handled too.  Oh well, I'll stick
 it on my TODO list for a later release.
I am quite happy to have helped.  The two name resolution functions were
side by side in my code; making the decision to fix only one looked
odd.  I will immediately support resolving extendee names to Messages,
ignoring Fields and other things.

As for the packed fields, I just now got my Haskell version to the
next stage:
  (1) the new runtime and converter both compile with packed support
  (2) it can convert the new unittest.proto into Haskell code with
packed support
  (3) the generated Haskell code compiles against new runtime with
packed support
  (4) it has regenerated its own descriptor.proto and been recompiled
 (enums needed an extra line to get packed fields efficiently)

So the next stage is to test the behavior and see if it can
inter-operate with itself and with packed files from protobuf-2.1.0.

Making the extension fields also packable was tedious but did not
require redesigning anything.  Whew.  The unknown field support did
not need updating at all.

As for the newly exposed field number constants:  I cannot make them a
proper enum data type in Haskell because those are closed definitions
and so could not include any of the extension fields outside the
message's own proto file.   I could still make them type safe constants,
but these could not be used as targets of a case statement.  The data is
available through reflection, so I will wait to implement anything else
until an actual person comes to me with a use case that I can make
design decisions for.

As for delimiting messages by prepending the length: I already had these
commands, so all I did was change the documentation from author's
extension to compatible with protobuf-2.1.0.  Not that I actually
tested it...

-- 
Chris





Re: 2.1.0 release is up

2009-05-17 Thread Chris Kuklewicz

I am patching the Haskell implementation and I have a follow up
question to this:

On May 14, 12:06 am, Kenton Varda ken...@google.com wrote:
   * The type name resolver will no longer resolve type names to fields.  For
     example, this now works:
       message Foo {}
       message Bar {
         optional int32 Foo = 1;
         optional Foo baz = 2;
       }
     Previously, the type of baz would resolve to Bar.Foo, and you'd get
     an error because Bar.Foo is a field, not a type.  Now the type of baz
     resolves to the message type Foo.  This change is unlikely to make a
     difference to anyone who follows the Protocol Buffers style guide.

You did not fix this similar case, where the int32 Baz field causes
an error when trying to extend the message Baz:

package test_resolve;

message Foo {
  optional int32 Baz = 2;

  extend Baz {
    optional int32 nonsense = 76335;
  }
}

message Baz {
  extensions 100 to max;
}

I will make the Haskell version compatible with protoc-2.1.0 but
perhaps you want to make the above a legal proto file in the future.

What do people think?



Re: Backwards Compatibility of sizes and encodings.

2009-04-16 Thread Chris Brumgard


Thanks for getting back to me on this.  It's been a while, but I
believe I've seen several posts that uses something akin to the
following:

message A
{
  .
}

message B
{
  .
}


message wrapper
{
 required fixed32 size = 1;
 required fixed32 type = 2;

 optional A a = 3;
 optional B b = 4;
}


So message wrapper would be used for the actual sending of messages A
or B. You would peek at the size with an initial ParseFromString()
covering just size and type, then go back and deserialize the whole
message given by size.  Does this not place a requirement on knowing
how much to read to cover size and type, or have I missed something?
I assume this would have a similar problem to the one that you
mentioned. I wanted to do something similar but separate out size and
type (and a few others) into a separate message. It seems like all of
the optional fields would slow down parsing.


I can agree that it might be poor design in general, but I'm not sure
whether this case, given the common need for type and size, shouldn't
be allowed to break that rule.  As a developer of protocol buffers,
what was the rationale for leaving type and size out of the protocol
and requiring users to specify them themselves?  Was it just to avoid
breaking compatibility with 1.0, or for speed?

By the way, how is writing the message size and type as independent
protocol buffer varints any different than something like message
header { required int32 size = 1; required int32 type = 2; }?  Is this
just a design philosophy?  In truth, you're still creating a message
buffer; it's just implicitly defined and unnamed, as opposed to
explicitly defined in a .proto file somewhere, and you can't add
fields to it either without updating all the pre-existing client code
(I guess you could add an option count int to allow for more fields
later before the main message, but that gets complicated for something
that should be simple).  I'm not opposed to just the varints.  I can
see how you would do this in the C++ and Java APIs, but how would you
do it in Python?  The OutputStream and InputStream classes in the
internal directory?
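In Python, one option is simply to write the two varints by hand before the serialized header bytes; this is a hedged sketch of that framing, not the library's internal stream classes:

```python
def write_varint(n: int) -> bytes:
    # Encode a non-negative int as a base-128 varint.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def read_varint(buf: bytes, pos: int = 0):
    # Decode a varint starting at pos; return (value, new_pos).
    shift = result = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, pos
        shift += 7

# Prefix a serialized header with its size and a type id as raw varints.
header = b"\x08\x2a"  # stand-in for header.SerializeToString()
stream = write_varint(len(header)) + write_varint(7) + header
size, pos = read_varint(stream)
msg_type, pos = read_varint(stream, pos)
assert (size, msg_type) == (2, 7)
assert stream[pos:pos + size] == header
```

This keeps the prefix self-delimiting without pinning the header message to a fixed serialized size.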

Thanks for your patience.



On Apr 16, 8:25 pm, Kenton Varda ken...@google.com wrote:
 We will absolutely maintain backwards-compatability of the wire format in
 future versions.  A version of protocol buffers that wasn't backwards
 compatible would be thoroughly useless.
 However, our idea of compatibility means that newer versions of the code
 can successfully parse messages produced by older versions and vice-versa.
  Although it seems unlikely that the encoded size of a message (containing
 exactly the same data) would change in future versions of the serialization
 code, this isn't a guarantee I feel comfortable making.  Even if you use
 only fixed-width field types, there are many different technically-valid
 ways to encode the data which could very well have different sizes (e.g. by
 using overlong varints when encoding tags, or by splitting an optional
 sub-message into multiple parts).

 But I think assuming that messages of a particular type will always be the
 same size is a bad idea anyway, even if you stick with the same version of
 protocol buffers.  If you make this assumption, not only do you have to
 avoid using variable-width fields, but you can never add new fields to your
 message definition.  This defeats one of the most valuable features of
 protocol buffers.

 I think you should just write the size of your header message to the stream
 before the message itself.  If you write it as a varint, this will probably
 only cost you a byte, and you'll probably save at least a byte by using
 varints inside your message rather than fixed-width fields.

 On Thu, Apr 16, 2009 at 3:55 PM, Chris Brumgard chris.brumg...@gmail.com wrote:



  I have question regarding the future direction of protocol buffers.
  Is Google planning on adding features or changing the encoding of data
  types in any way that would break backwards compatibility?  I've read
  through the posts and it appears that the developers will try to
  maintain compatibility as much as possible.  My primary concern is
  that I plan on using a header message type that includes various
  fields to describe the next message including type and size.  Because
  I would be using fixed integer sizes (no varints) in the header, I
  will know in advance the size of the header therefore I wouldn't need
  to give the size in the stream.  However, this makes the assumption
  that future version of Protocol Buffers will not change the size of
  the serialized header or the individual fields.   Since the header has
  more than just size data information, I would prefer to use a protocol
  buffer message instead of straight binary as it makes it easier for
  languages that do not make it easy to convert binary to native data
  types and removes concerns about endianness and data type sizes (work
  is already done for me).  My other option

Re: Generating subclasses of a protocol buffer serialized class

2009-01-18 Thread Chris

Thanks Mark, Its java.  So far people keep recommending me what I am
already doing (delegation) which is itself not maintainable.  Sounds
like there is a need for a code generator to generate the delegation
of those methods you want to expose :-}

On Jan 18, 2:17 am, Marc Gravell marc.grav...@gmail.com wrote:
 What language are you using? In C#, partial classes are a viable way
 of adding extra logic into generated classes - the protobuf-net
 generator allows this fairly well. In the more general sense, consider
 encapsulation over inheritance, or simply keep the two separate (for
 example, passing the generated object into static methods defined in
 the business class etc),

 Marc Gravell



Re: Generating subclasses of a protocol buffer serialized class

2009-01-17 Thread Chris

Yes, after I pressed the button I realized the error of my ways (re
reduced maintainability).  I partially agree with you about the
exposing of the internal nastiness, however that is a choice to be made
by the developer on a per-use basis.  To not have that there is like saying
we don't offer you object-oriented programming because you will make
mistakes... well, kinda (not meant to be a deeply religious argument,
open for internet flaming).

C

On Jan 17, 11:53 am, Henner Zeller h.zel...@acm.org wrote:
 Hi,

 On Sat, Jan 17, 2009 at 9:30 AM, Chris chrisjcoll...@gmail.com wrote:

  Thanks Steve and Alek.  As I mentioned, wrappers delegating access to a
  containing class and simple copy to/from a business class is what we
  do today, and it's a mess: it's error prone and totally manual. Am I
  missing something?  Every new attribute needs a few more lines
  added to the business class to get and set its values.

 Typically, a business class should not expose all the internal state
 (i.e. the attributes of the underlying data) in the first place. You
 should be very specific which attributes you expose.

 Anyway, having said that, you might consider just returning a
 reference or pointer to the protocol buffer kept in the business class
 in case you _really_ have to access the internal data
   business_object->data().get_foobar().
 This way you don't have to modify anything in case you add fields to
 the data. However, this way you always expose all properties of the
 internal data which is a bad idea - but again, this would have
 happened with the inheritance approach as well.
 One advantage as well is that the access code looks sufficiently ugly
 that users will avoid using internal data fields directly :)

  In
  a prior job I did use a system where persisted classes were generated
  as *Base classes where you were expected to implement the * extending
  *Base.  This worked very well and reduced maintainability.

 You said the right thing here, but I guess you meant to say 'reduced
 maintenance' .. ;)

 -h

  I can of
  course imagine that for languages that are not OO it would prove to be
  a challenge (yes, I don't use Python).

  Thanks

  C

  On Jan 17, 12:20 am, Alek Storm alek.st...@gmail.com wrote:
  I agree with Shane.  Wrapping the data is a great way to separate the
  business logic from the data.  Actually, if you're using Python,
  you're not allowed to subclass generated message types (though there's
  no 'sealed' modifier) - it screws with the metaclass machinery.

  Cheers,
  Alek
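The two approaches discussed in this thread (exposing the embedded data object vs. hand-written delegation) might be sketched like this; `PersonProto` is a stand-in for any protoc-generated class, not a real API:

```python
class PersonProto:
    """Stand-in for a protoc-generated message class (hypothetical)."""
    def __init__(self):
        self.name = ""
        self.email = ""

class Person:
    """Business class: keeps the data object private, exposes chosen fields."""
    def __init__(self, proto=None):
        self._proto = proto or PersonProto()

    @property
    def name(self):                 # delegate only what callers should see
        return self._proto.name

    @name.setter
    def name(self, value):
        self._proto.name = value

    def greeting(self):             # business logic lives here, not in the proto
        return "Hello, %s!" % self.name

p = Person()
p.name = "Ada"
print(p.greeting())  # Hello, Ada!
```

The property boilerplate in `Person` is exactly the part the thread wishes a code generator would emit.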



Generating subclasses of a protocol buffer serialized class

2009-01-16 Thread Chris

Hi, sorry if this is a dumb question.  I have class A which I want to
serialize but equally want to add logic to (which I can't today
because it's generated). I was wondering if there was:

- An ability that if I created class of type B that extends A, where
no member variables of B would be serialized.
- Then provide the correct handling during serialization.

Basically B contains my business logic, A contains all the serialized
fields.

This technique is not unknown; object-relational mapping faces a
similar problem.

Currently it seems a little messy: we are either copying fields to and
from the serialized form into a richer object, or using lots of
delegation, neither of which is that clean.

Best

ChRiS



Re: Protocol buffer compatibility across library versions

2009-01-12 Thread Chris

shanibr...@gmail.com wrote:
 Hi,
Lets say my proto (abcd.proto) is compiled with the protoc compiler
 version 2.0.3 and my application also links to the corresponding
 runtime libraries. I now run my app and store the  bytes generated
 from the proto object to some persistent store (say a database).

 A month from now, I decide to upgrade to protocol buffer version 2.0.4
 (which say is released) and recompile abcd.proto using the new
 compiler to generate the new cpp classes. I hook my app to the new
 runtime libraries and try to read the older bytes (v2.0.3) from the
 persistent store. Will it be able to?

 In other words, are the bytes that are written out (for an unchanged
 proto file) compatible across protobuf library versions? Will this
 always be guaranteed?

 Thanks.
   
If abcd.proto is unchanged and version 2.0.3 is bug free and version 
2.0.4 is bug free then saving with 2.0.3 libraries and loading with 
2.0.4 libraries will work.  Also, saving with 2.0.4 should let you load 
with 2.0.3 as well.

It is possible that the c++ class definitions change slightly, so the 
actual in-memory object may be different.  But the data should 
roundtrip, regardless of the computer language used.

If fields are deleted from abcd.proto and/or non-required fields are 
added to abcd.proto, then it should still let you load old data into 
the new format.  Removed fields get stored or dropped in a 
language-library dependent way.  New non-required fields will be unset 
and have their default values.  The only big difficulty is loading old 
data that is missing a field declared as required by the current code.

Cheers,
  Chris





Announcing Haskell protocol-buffers 1.4.0 (the smashing recursive edition)

2009-01-09 Thread Chris

Hello,

  What is Haskell protocol-buffers?
This provides a program 'hprotoc', which compiles a .proto file 
defining messages to Haskell modules; the protocol-buffers API, which 
accesses them and converts back and forth to the binary wire protocol; 
and protocol-buffers-descriptor, which provides messages that describe 
.proto files and allow for runtime reflection of annotated message 
definitions.

  The big addition to this version (1.4.0) over the previous 
version (1.2.0) is support for modules in a dependency loop.  
The most common reason this happens is when a message is extended in 
the same proto file but outside of the message itself (e.g. at the top 
level).  This was solved in previous versions by telling the user to 
create boilerplate header files (.hs-boot files) and add a few {-# 
SOURCE #-} pragmas.  This was primitive, and could not cover all the 
corner cases.

  Those days are gone.

  The new version of hprotoc uses the cutting-edge version of 
haskell-src-exts (0.4.8) to generate not only the Haskell modules but 
also the hs-boot files and {-# SOURCE #-} pragmas!  This is truly a 
glorious way to start the New Year.

  But wait, there is more!  If more than two messages define 
extensions of each other in a strongly connected dependency graph, then 
the hs-boot files are not enough.  For these strange cases hprotoc will 
now generate modules ending with 'Key.hs that separately define the 
extensions.  You do not need to lift a finger, and you never need to import 
these modules yourself; this all exists behind the scenes.  Also, 
hprotoc goes way out of its way to reduce the number of .hs-boot and 
'Key.hs files, and it uses only a minimal set of {-# SOURCE #-} 
pragmas.  It is so painless that if I did not put this into this 
announcement you might not even know.

  Now all generated code should compile with no changes or additions.
  Of course, hprotoc still generates nothing for services and methods.

Where to get the new shiny packages? hackage:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/protocol-buffers

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/protocol-buffers-descriptor

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hprotoc

And you will need haskell-src-exts, version 0.4.8, by Niklas Broberg:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/haskell-src-exts-0.4.8

(past versions do not work and future versions will probably change 
enough to break compilation of hprotoc)

Happy New Year,
  Chris Kuklewicz





Haskell protocol buffers 1.2.2 release announcement

2008-12-07 Thread Chris

Hi everyone,

To keep up with protocol-buffers 2.0.3 here is an improved Haskell 
hprotoc version 1.2.2 :

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/protocol-buffers
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/protocol-buffers-descriptor
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hprotoc

These ought to be compatible with the previous version 1.2.1.

The two changes are support for the field-option-like syntax for 
EnumValueOptions, and adjacent string literals are concatenated.  Note 
that strings are checked for valid utf8 encoding after concatenation, 
not individually.  But backslash escape codes are decoded for each 
individual string, not after concatenation.  If these quirks become 
problems they can be changed.

The protoc bug fixed in 2.0.3:
   * Fixed bug where .proto files which use custom options but don't 
 actually define them (i.e. they import another .proto file 
 defining the options) had to explicitly import descriptor.proto.
did not affect hprotoc, which already did the Right Thing™ (I just 
tested to be sure).

The protoc bug fixed in 2.0.3:
   * If an input file is a Windows absolute path (e.g. C:\foo\bar.proto) 
 and the import path only contains . (or contains . but does not 
 contain the file), protoc incorrectly thought that the file was 
 under ., because it thought that the path was relative (since it 
 didn't start with a slash).  This has been fixed.
is unlikely to be present in hprotoc.  But since I never run or test 
on Windows I make no promises hprotoc is finding the correct relative 
paths on Windows.

Cheers,
  Chris Kuklewicz




Version 1.0.0 of Haskell port

2008-11-15 Thread Chris

Hello one and all,

Amid much editing, my Haskell version of protocol-buffer is now released 
at version 1.0.0.  This version supports the features of Google's 
version 2.0.2 including the new extensible options.  It can also dump a 
binary version of the FileDescriptorSet like protoc, but this version 
does not yet take such a binary file as input.  And a few bugs have been 
fixed.

What is this for?  What does it do?  Why?

  It generates Haskell data types that can be converted back and forth 
to lazy ByteStrings that interoperate with Google's generated code in 
C++/Java/python.

  The data types are defined in a .proto text file which is translated 
into the target language.

  My code is a pure Haskell re-implementation of the Google code at
http://code.Google.com/apis/protocolbuffers/docs/overview.html
  which is "...a language-neutral, platform-neutral, extensible way of 
serializing structured data for use in communications protocols, data 
storage, and more."
  Google's project produces C++, Java, and Python code.  This one 
produces Haskell code.

Where is the code?

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/protocol-buffers
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/protocol-buffers-descriptor
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hprotoc

And it needs to be built and installed in the above order.  The first is 
the support library (Text.ProtocolBuffers).  The second is the 
self-describing descriptor library (Text.DescriptorProtos[.Options]).  
The third is the 'hprotoc' executable, which translates .proto 
files into Haskell code.  This works similarly to the protoc program 
from the original Google project.

The 'hprotoc' program works on descriptor.proto to produce the above. It 
has also been tested with unittest.proto, for which I have tested the 
roundtrip to and from the wire format, and with 
unittest_custom_options.proto, for which I have retrieved and tested the 
stored options (see the new Text.DescriptorProtos.Options for an API to 
help access these).

Why is this not documented better?

  Because no one is using it.  Email me and this list if you use it and 
get stuck.  Note that hprotoc's options are very similar to protoc.  
Hopefully the Haskell code docs from haddock will be enough to get 
started with the libraries.

Cheers,
  Chris Kuklewicz

PS: Small example of testing the custom options from 
unittest_custom_options.proto :

import Text.ProtocolBuffers
import Text.DescriptorProtos.Options
import Protobuf_unittest

test4 :: EnumOptions
test4 = maybe (error "Nothing") options $
  toDP "TestMessageWithCustomOptions" fileDescriptorProto
    >>= descend "AnEnum"

testVal = getVal test4 enum_opt1 == (-789)  -- Should be True





Bug report and nearing completion hprotoc on par with protocol-buffers 2.0.2

2008-11-10 Thread Chris Kuklewicz

Kenton,

I am nearly ready with the Haskell update to protoc [1] that will
support the user defined options introduced by protocol-buffers 2.0.2
(protoc).

To have a hope of testing my code, I have redesigned my processing to
have hprotoc produce a binary FileDescriptorSet that I could compare
to the output of protoc.  This should also allow hprotoc to consume
the binary FileDescriptorSet output of protoc.

I have a few questions and two bug reports against protoc-2.0.2 that
all arise from me examining the FileDescriptorSet output with protoc's
decoding:

BUG * The user-defined options have the wrong value for some 32-bit
values.  You store 64-bit values:
unittest_custom_options.proto: optional int32 message_opt1 = 7739036;
unittest_custom_options.proto: option (message_opt1) = -56;
protoc:  7739036: 18446744073709551560
hprotoc:7739036: 4294967240
There is another problem which is seen in the raw output :
unittest_custom_options.proto:
message DummyMessageContainingEnum {
  enum TestEnumType {
TEST_OPTION_ENUM_TYPE1 = 22;
TEST_OPTION_ENUM_TYPE2 = -23;
  }
}
protoc:
  2 {
1: TEST_OPTION_ENUM_TYPE2
2: 18446744073709551593
  }
hprotoc:
  2 {
1: TEST_OPTION_ENUM_TYPE2
2: 4294967273
  }
The negative enum value reveals that this is stored as a 64-bit
number instead of 32 bits. This obviously makes the already-inefficient
negative values about twice as bad as they would otherwise be, and
threatens to cause errors when read into other implementations that
expect only 32 bits.
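The two printed values are just the two's-complement reinterpretations of the same negative constant at different bit widths, which is easy to check directly:

```python
def twos_complement(value, bits):
    """Reinterpret a (possibly negative) integer as an unsigned value of the given width."""
    return value & ((1 << bits) - 1)

# message_opt1 = -56 from unittest_custom_options.proto
print(twos_complement(-56, 64))  # 18446744073709551560 (protoc's 64-bit output)
print(twos_complement(-56, 32))  # 4294967240 (hprotoc's 32-bit output)

# TEST_OPTION_ENUM_TYPE2 = -23
print(twos_complement(-23, 64))  # 18446744073709551593
print(twos_complement(-23, 32))  # 4294967273
```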

BUG * The user-defined options from unittest_custom.proto have
repetitions in the output from protoc that are not present in
the .proto file.  Not all fields are repeated (apparently just the
fixed width ones), but this looks dangerous in the presence of
repeated fields. Example from the raw output from protoc:
  4 {
1: CustomOptionMinIntegerValues
7 {
  7706090: 0
  7705709: 18446744071562067968
  7705542: 9223372036854775808
  7704880: 0
  7702367: 0
  7701568: 4294967295
  7700863: 18446744073709551615
  7700307: 0x
  7700307: 0x
  7700194: 0x
  7700194: 0x
  7698645: 0x8000
  7698645: 0x8000
  7685475: 0x8000
  7685475: 0x8000
}
  }


* The default_value of bytes and string types are stored differently.
The bytes are stored in a raw form at the same escaping level as the
proto file.  A string is stored after the escape codes have been
interpreted.
** Why, oh why, are they stored with different escape conventions?
** Is this documented anywhere?

* The name field of the FileDescriptorProto seems to be the file
path passed on the command line or the filepath in the import
statement.
** I have not checked, but if I were on windows would the file path
from the command line have \ instead of / ?
** Is this documented anywhere?

Thanks for your attention,
  Chris

[1] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/protocol-buffers



Re: Any way to dissect ProtBuf serialized data without knowing the structure in advance?

2008-10-23 Thread Chris

[EMAIL PROTECTED] wrote:
 I'm trying to consume data from an app that generates output
 serialized via Protocol Buffers but do not have the original spec for
 the specific structures that have been encoded. Is there a relatively
 straight-forward path to deserializing, or even just decoding, the
 serialized data stream without knowing its structure in advance?
   
There is no straight-forward path.  The wire format is not self-describing.
You get the outermost field numbers and wire types and data chunks for free.
But the numeric wire types do not tell you how to interpret them: 
signed vs. unsigned, double/float vs. integer, whether the value is 
zigzag encoded, the byte size of the field, whether it is an enum 
(never mind which one).
The length-encoded fields are slightly better.  If the data chunk parses 
as a valid message then it is probably a message.  If it parses as valid 
UTF8 then it is probably a string.  Otherwise it must be a byte array.

If the same set of field numbers + wire types comes up repeatedly, then 
they may be the same message type.  Many identical fields 
in a row are probably a repeated field, and as such you can assume 
the contents are the same type.  The multiple values give you a hand in 
picking how to decode them.
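The "free" part of the dissection described above (outermost field numbers, wire types, and raw data chunks) can be pulled out with a short scanner; this is a minimal sketch, not a full decoder:

```python
def read_varint(buf, pos):
    """Decode one base-128 varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):
            return result, pos

def scan_fields(buf):
    """List (field_number, wire_type, raw_value) for a message's top level."""
    pos, fields = 0, []
    while pos < len(buf):
        key, pos = read_varint(buf, pos)
        field_no, wire_type = key >> 3, key & 7
        if wire_type == 0:              # varint
            val, pos = read_varint(buf, pos)
        elif wire_type == 1:            # 64-bit
            val, pos = buf[pos:pos + 8], pos + 8
        elif wire_type == 2:            # length-delimited
            n, pos = read_varint(buf, pos)
            val, pos = buf[pos:pos + n], pos + n
        elif wire_type == 5:            # 32-bit
            val, pos = buf[pos:pos + 4], pos + 4
        else:
            raise ValueError("unsupported wire type %d" % wire_type)
        fields.append((field_no, wire_type, val))
    return fields

# field 1 (varint) = 150, field 2 (length-delimited) = b"hi"
sample = bytes([0x08, 0x96, 0x01, 0x12, 0x02]) + b"hi"
print(scan_fields(sample))  # [(1, 0, 150), (2, 2, b'hi')]
```

Interpreting what each chunk actually means (signed vs. unsigned, string vs. bytes vs. nested message) is the part that needs the .proto file or heuristics.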

For a self-describing binary type you have to look elsewhere (e.g. HDF5).

-- 
Chris





Re: Major update submitted; 2.0.2 release soon

2008-10-06 Thread Chris

Kenton Varda wrote:
 I'll be updating the documentation to explain this better soon.  It's 
 a bit complicated.  Check out unittest_custom_options.proto for an 
 example.  Let me try to explain briefly...


The unittest_custom_options.proto is useful.  It certainly helps show 
the constraints at work.

The design rationale I infer is that normal field names, unlike 
extensions, will never need to be qualified, so there is no need for 
parentheses around those names.  This is promoted to a hard rule: 
normal field names are never allowed to be in parentheses.  Thus the 
parentheses are present if and only if the thing named is an extension 
field.

Since every single existing normal field name of an option message is 
NOT a message/group type, the first name part being unparenthesized 
means that there will be no '.' and no additional name parts.

I have updated my Lexer and Parser (in my development version) to 
recognize the new proto file syntax and fill in the UninterpretedOption 
fields.  This has been tested on unittest_custom_options.proto.  No 
further processing of the new options is done yet: no name resolution, 
no conversion to extension field value.  This will eventually get done.

The normal field names are cooked into the Parser as in the previous 
version.  Once the extension bits are finished, I might go back and 
bootstrap these so they are both more reflective and able to handle 
message/group types (even though none currently exist).

Cheers,
  Chris





Proto file format problem

2008-09-22 Thread Chris

There is a disagreement between the documentation at
http://code.google.com/apis/protocolbuffers/docs/proto.html#enum
and the behavior of proto.

The documentation states that
Enumerator constants must be in the range [0, 2147483647].

But the protoc program allows negative values.

I happened to write my Haskell version according to the documentation, and 
noticed this when unittest.proto defined SPARSE constants that were 
negative.  My code rejected this as invalid.

Should the documentation be changed (and my Haskell code) or the protoc 
program?

Thanks,
  Chris





Re: Followup: EBNF grammar for .proto files

2008-09-22 Thread Chris

Yegor wrote:
 Hi, everyone,

 I am following up on the discussion about the EBNF grammar for .proto
 files: 
 http://groups.google.com/group/protobuf/browse_thread/thread/1cccfc624cd612da

 I am now trying port this grammar to ANTLR format and make it generate
 the lexers and parsers, but so far no luck.

 Does anyone know how to translate /[^\0\n]/ to ANTLR format? I'm not
 even sure what it means. It's from the definition of strLit.
   
Not a null byte (0) and not a newline byte (10).
 Also can anyone tell me what's wrong with the following grammar? You
 should be able to just copy and paste the following in ANTLRWorks.
 (NOTE: I simplified strLit (STR_LIT) as I couldn't translate the regex
 above.) Thanks.
   

Many things are wrong.  I stopped using the EBNF that was posted to the 
list when making my lexer.

Negative constant values are allowed (for default values), but are not 
in the grammar below (including oct and hex constants).
The  ('.' | DIGIT+)? in FLOAT_LIT is just wrong. And they ought to be 
allowed to be negative.

The opening and closing QUOTE of strings must match.  You should not 
accept an opening single quote and a closing double quote.
Inside a single-quoted string you are allowed unescaped double quotes.  
Inside a double-quoted string you are allowed unescaped single quotes.

The grammar should allow internal use of a period character to allow for 
qualified names in defaults for enums from imported packages.  Two 
periods in a row are not permitted, however.




 /*
  * ANTLR grammar file for Google Protocol Buffers
  */

 grammar proto;

 proto
   : ( message | extend | enum | pimport | package | option | ';' )*
   ;

 pimport
   : 'import' STR_LIT ';'
   ;

 package
   : 'package' IDENT ( '.' IDENT )* ';'
 ;

 option
   : 'option' optionBody ';'
   ;

 optionBody
   : IDENT ( '.' IDENT )* '=' constant
   ;

 message
   : 'message' IDENT messageBody
   ;

 extend
   : 'extend' userType '{' ( field | group | ';' )* '}'
   ;

 enum
   : 'enum' IDENT '{' ( option | enumField | ';' )* '}'
   ;

 enumField
   : IDENT '=' INT_LIT ';'
   ;

 service
   : 'service' IDENT '{' ( option | rpc | ';' )* '}'
   ;

 rpc
   : 'rpc' IDENT '(' userType ')' 'returns' '(' userType ')' ';'
   ;

 messageBody
   : '{' ( field | enum | message | extend | extensions | group | option
 | ':' )* '}'
   ;

 group
   : modifier 'group' camelIdent '=' INT_LIT messageBody
   ;

 // tag number must be 2^28-1 or lower
 field
   : modifier type IDENT '=' INT_LIT ( '[' fieldOption ( ','
 fieldOption )* ']' )? ';'
   ;

 fieldOption
   : optionBody | 'default' '=' constant
   ;

 extensions
   : extRange ( ',' extRange )* ';'
   ;

 extRange
   : INT_LIT ( 'to' ( INT_LIT | 'max' ) )?
   ;

 // Kenton: I would either call this label or cardinality
 modifier
   : 'required' | 'optional' | 'repeated'
   ;

 type
   : 'double' | 'float' | 'int32' | 'int64' | 'uint32' |
 'uint64' | 'sint32' | 'sint64' | 'fixed32' | 'fixed64' |
 'sfixed32' | 'sfixed64' | 'bool' | 'string' | 'bytes' | userType
   ;

 // leading dot for identifiers means they're fully qualified
 // Kenton: userType ::= .? ident ( . ident )*
 userType
   : '.'? IDENT ( '.' IDENT )*
   ;

 constant
   : IDENT | INT_LIT | FLOAT_LIT | STR_LIT | BOOL_LIT
   ;

 IDENT
   : ('a'..'z'|'A'..'Z'|'_')('A'..'Z'|'a'..'z'|'0'..'9'|'_')*
   ;

 // according to parser.cc, group names must start with a capital
 letter as a
 // hack for backwards-compatibility
 camelIdent
   : ('A'..'Z')('A'..'Z'|'a'..'z'|'0'..'9'|'_')*
   ;

 INT_LIT
   : DEC_INT | HEX_INT | OCT_INT
   ;

 DEC_INT
   : '1'..'9' DIGIT*
   ;

 HEX_INT
   : '0' ('x' | 'X') ('A'..'F' | 'a'..'f' | DIGIT)+
   ;

 OCT_INT
   : '0' ('0'..'7')+
   ;

 // allow_f_after_float_ is disabled by default in tokenizer.cc
 FLOAT_LIT
   : DIGIT+ ('.' | DIGIT+)? (('E' | 'e') ('+' | '-')? DIGIT+)?
   ;

 DIGIT
   : '0'..'9'
   ;

 BOOL_LIT
   : 'true' | 'false'
   ;

 STR_LIT
   : QUOTE ( HEX_ESCAPE | OCT_ESCAPE | CHAR_ESCAPE | 'a'..'z' | 'A'..'Z'
 | '0'..'9' | ' ' ) QUOTE
   ;

 QUOTE
   : '\'' | '"'
   ;

 HEX_ESCAPE
   : '\\' ('X' | 'x') ('A'..'F' | 'a'..'f' | '0'..'9'){1,2}
   ;

 OCT_ESCAPE
   : '\\' '0'? ('0'..'7'){1,3}
   ;

 CHAR_ESCAPE
   : '\\' ('a' | 'b' | 'f' | 'n' | 'r' | 't' | 'v' | '\\' | '\?' | ('\\'
  '\'') | ('\\' '"'))
   ;

 
   



Re: Scientific Applications

2008-09-12 Thread Chris

Nicolas wrote:
 Can anyone with some experience in these matters, and especially of
 alternative formats, e.g. netCDF, comment on this and recommend a
 standard well-supported solution?
I do not think scientific data should be stored with protocol-buffers.

I would suggest, since netCDF-4 now encloses HDF5, that you also look at 
HDF5 (e.g. Wikipedia is always a good 
source of links).

netCDF and HDF5 are both self-describing data file formats, unlike 
protocol-buffers' wire format. 

It is impossible to read back and analyze a protocol-buffer without the 
.proto file description because the wire format does not hold the actual 
type of any of the data (e.g. bool vs int vs unsigned int vs 
enumeration, or string vs bytes vs embedded message).

The protocol-buffer wire format for repeated elements, such as you need 
for scientific data vectors and arrays, is relatively inefficient since 
it includes the field# + wire tag before each and every single number 
in your file.  This kills efficient bulk reading and writing unless you 
create a new format and embed it as a binary blob.
So you end up using HDF5 or something else anyway.
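The overhead claim above is easy to quantify: an unpacked repeated fixed64 (double) field repeats its key before every element, so for small field numbers each 8-byte value costs 9 bytes on the wire. A rough sketch, for illustration only:

```python
def unpacked_repeated_fixed64_size(field_number, count):
    """Wire size of an unpacked repeated fixed64 field with `count` elements."""
    key = (field_number << 3) | 1   # wire type 1 = 64-bit
    key_len = 1
    while key > 0x7F:               # varint length of the key
        key >>= 7
        key_len += 1
    return count * (key_len + 8)    # every element carries its own key

print(unpacked_repeated_fixed64_size(1, 1_000_000))  # 9000000 (vs 8000000 bytes of raw doubles)
```

(Later protobuf releases added a "packed" encoding for repeated scalars that writes the key once for the whole run, which removes most of this per-element overhead.)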

Cheers,
  Chris

