Extensions becoming unknown fields
I have been trying to use protocol buffers to communicate between two processes over a network. In doing so, I'm running into a problem where I create a protocol buffer message and set its extension, but when I receive it on the other process, the extension doesn't register; it turns up as an unknown field. I've gotten around it by checking the unknown fields, but I'd prefer not to. What could I be doing wrong?

Here is basically what the .proto file looks like:

message Message {
  optional string message = 1;
  extensions 10 to 100;
}

message Extension {
  extend Message {
    optional Extension extensionName = 10;
  }
  optional int32 data = 1;
}

what the code on the sender side looks like:

// init builders
Message.Builder msgBuilder = Message.newBuilder();
Extension.Builder extensionBuilder = Extension.newBuilder();
// set extension
msgBuilder.setExtension(Extension.extensionName, extensionBuilder.setData(1).build());
// build, convert to a byte array, and send
send(msgBuilder.build().toByteArray());

and on the receiver side:

// receive the message
byte[] buffer = receive();
// parse the message from the array
Message msg = Message.parseFrom(buffer);
// check if message has the extension
if (msg.hasExtension(Extension.extensionName))  // this returns false =(
    // blah blah blah

Thanks for any insight!

--~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups "Protocol Buffers" group. To post to this group, send email to protobuf@googlegroups.com To unsubscribe from this group, send email to [EMAIL PROTECTED] For more options, visit this group at http://groups.google.com/group/protobuf?hl=en -~--~~~~--~~--~--~---
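[Editor's note: the sketch below is not from the original thread. It hand-encodes the wire format of the Message above in pure Python (not the protobuf library) to show why the receiver files the extension under unknown fields: on the wire an extension is just an ordinary tagged field, so a parser that was given no ExtensionRegistry entry for field 10 can only record it as unknown. In the Java API the usual remedy is to build a registry, register the extension on it, and call Message.parseFrom(buffer, registry).]

```python
def varint(n):
    # Encode a non-negative integer as a protobuf varint.
    out = bytearray()
    while True:
        bits = n & 0x7F
        n >>= 7
        out.append(bits | (0x80 if n else 0))
        if not n:
            return bytes(out)

def tag(field_number, wire_type):
    # A field key is (field_number << 3) | wire_type, varint-encoded.
    return varint((field_number << 3) | wire_type)

# optional int32 data = 1;  -> field 1, varint wire type (0), value 1
extension_payload = tag(1, 0) + varint(1)

# optional string message = 1  -> field 1, length-delimited (wire type 2)
# extensionName = 10           -> field 10, also just length-delimited
wire = (
    tag(1, 2) + varint(2) + b"hi"
    + tag(10, 2) + varint(len(extension_payload)) + extension_payload
)

def scan_fields(data):
    """Walk the top-level field keys and return the field numbers seen."""
    fields, i = [], 0
    while i < len(data):
        key = shift = 0
        while True:                      # decode the varint key
            b = data[i]; i += 1
            key |= (b & 0x7F) << shift
            shift += 7
            if not b & 0x80:
                break
        number, wtype = key >> 3, key & 7
        fields.append(number)
        if wtype == 2:                   # length-delimited: skip the payload
            length = shift = 0
            while True:
                b = data[i]; i += 1
                length |= (b & 0x7F) << shift
                shift += 7
                if not b & 0x80:
                    break
            i += length
        elif wtype == 0:                 # varint: skip bytes until MSB clear
            while data[i] & 0x80:
                i += 1
            i += 1
    return fields

print(scan_fields(wire))  # -> [1, 10]
```

Field 10 is indistinguishable from a regular field here, which is why a registry-less parse can still round-trip it, just only as unknown-field bytes.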
Re: Adding options without adding dependencies
On Tue, Oct 28, 2008 at 1:28 PM, Jon Skeet <[EMAIL PROTECTED]> wrote:
> Two issues have arisen:
> 1) (Fairly simple to resolve, probably) - I think it would be worth
> creating a repository of "known" extensions for descriptor.proto.

Or at least a list of who has reserved what field numbers. Note that descriptor.proto contains instructions for how to reserve public-use extension numbers -- currently, it says to e-mail me. :)

> 2) (More important.) There's no real reason why anything other than C#
> protogen needs to understand these extensions - it would be really
> nice if protoc could avoid adding dependencies from the "business"
> proto file to descriptor.proto and csharp_options.proto. ProtoGen will
> load the descriptor set with the relevant extension registry anyway -
> there's no need to actually mention it in the dependency list.

Instead of extending the language to support this, how about just adding some code to your code generator which detects when a dependency is only used for custom options, and does not generate language-specific imports in that case?
Re: Standard for RPC proto
On Tue, Oct 28, 2008 at 9:07 PM, Pavel Shramov <[EMAIL PROTECTED]> wrote:
> By the way, one of the simplest ways to do RPC is to use HTTP transport.
> It has some limitations (e.g. large overhead for small messages) but
> also some benefits (e.g. many libraries for performing HTTP calls and
> simple proxying)

Speaking of ugly hacks, one of the intermediate versions of RPC I used was, actually, protocol buffers over XML-RPC. Messages were serialized and then wrapped into xmlrpc.Binary.

--
This message represents the official view of the voices in my head.
Re: Status of C# Projects?
> Cool. We'll take it out for a spin and see what happens.

Note that I haven't added any documentation yet; if you just want person.proto, it is in the repo. If you need more, let me know and I'll write up the process. It isn't tricky - just not written down at the moment ;-p

Marc
Re: Standard for RPC proto
On Tue, Oct 28, 2008 at 11:46:19AM -0700, Kenton Varda wrote:
> Ever notice how practically no one uses HTTP auth? :)

By the way, one of the simplest ways to do RPC is to use HTTP transport. It has some limitations (e.g. large overhead for small messages) but also some benefits (e.g. many libraries for performing HTTP calls and simple proxying).

All the RPC implementations mentioned on the wiki use a raw link to exchange data. So I've created a simple RPC over HTTP, but the description [1] is a bit... incomplete :) If somebody is interested in this approach I'll expand the documentation. Code may be found at [2, 3].

Pavel

P.S. Sorry for the self-advertising :)

--
[1] http://grid.pp.ru/wiki/pbufrpc
[2] http://grid.pp.ru/git/?p=psha/pbufrpc/.git
[3] git://grid.pp.ru/psha/pbufrpc
Adding options without adding dependencies
I've been using the new options features to add options for C# code generation - similar to the existing Java options. Two issues have arisen:

1) (Fairly simple to resolve, probably) - I think it would be worth creating a repository of "known" extensions for descriptor.proto. For instance, here's my csharp_options.proto:

import "google/protobuf/descriptor.proto";

package google.protobuf;

option (CSharpNamespace) = "Google.ProtocolBuffers.DescriptorProtos";
option (CSharpUmbrellaClassname) = "CSharpOptions";

extend FileOptions {
  optional string CSharpNamespace = 20000;
  optional string CSharpUmbrellaClassname = 20001;
  optional bool CSharpMultipleFiles = 20002;
  optional bool CSharpNestClasses = 20003;
  optional bool CSharpPublicClasses = 20004;
}

It would be nice if no-one else used 20,000-20,099 (to allow room to grow). That way different options would be interoperable.

2) (More important.) There's no real reason why anything other than the C# protogen needs to understand these extensions - it would be really nice if protoc could avoid adding dependencies from the "business" proto file to descriptor.proto and csharp_options.proto. ProtoGen will load the descriptor set with the relevant extension registry anyway - there's no need to actually mention it in the dependency list.

How reasonable would it be to have certain dependencies ignored on output (i.e. removed from the descriptor set's FileDescriptorProto.dependency list)? For instance, instead of:

import "google/protobuf/descriptor.proto";
import "google/protobuf/csharp_options.proto";

messages could have:

import transient "google/protobuf/descriptor.proto";
import transient "google/protobuf/csharp_options.proto";

I suspect we'd have to make sure (somehow!) that the dependencies were only used for extensions, but it would solve a definite pain point.

Any thoughts?

Jon
Re: Status of C# Projects?
On Oct 28, 12:04 am, honce <[EMAIL PROTECTED]> wrote:
> We are looking at the dotnet-protobufs for a new and are unable to
> build the generated C# code. I downloaded the latest code out of git
> and have been successful in building a C# file.
>
> We have been running:
> protoc -operson.bin person.proto
> protogen person.bin
>
> We add person.cs to our solution and rebuild with VS2008sp1 targeted
> for .Net 3.5sp1 on WinXP sp3. We then get the following error:
> Person.cs(7,29): error CS0441: 'Person': a class cannot be both static
> and sealed
>
> Any suggestions on what we're doing wrong?

Got it - the problem here is that it's generating a Person type as the "umbrella" class, and also Person as the message. You could either specify the name of the umbrella class as an option, or just rename person.proto to (say) person_protofile.proto. It would then create PersonProtoFile.cs with a PersonProtoFile umbrella class, and a Person class for the Person message. The resulting file definitely does compile :)

I'll try to remember to add a check for that situation - I suspect I won't be able to catch everything like that, but I'll do what I can.

Jon
Re: Standard for RPC proto
Ever notice how practically no one uses HTTP auth? :)

On Tue, Oct 28, 2008 at 1:16 AM, Paul P. Komkoff Jr <[EMAIL PROTECTED]> wrote:
> On Oct 28, 2:02 am, "Kenton Varda" <[EMAIL PROTECTED]> wrote:
> > I don't really have a stake in the design of a protobuf-based RPC format.
> > However, I'd like to point out that the design philosophy we tend to prefer
> > at Google is to keep each layer of the system as simple as possible, and
> > implement orthogonal features using separate layers. Authentication is a
> > great example of something that I would not want to make part of an RPC
> > protocol itself, but rather implement as a layer under it, similar to the
> > way HTTP can operate over SSL. If you keep the system separate in this way,
>
> First, I'm talking about something similar to simple HTTP auth, which
> allows us to authenticate by a key/value pair and does not include TLS.
> With support for a "struct user_credentials" passed to the server method, so
> we can "impersonate" the user.
>
> Also, even before considering the paragraph above, if you have the system
> separated that way it will produce incompatible wire formats. My goal
> is to have, at least, a lowest common denominator which could be
> implemented in, at least, twisted-python and something-java, in order
> to bootstrap my project now. It would be wonderful if this LCD format
> had some notion of authentication (or authorization, if authentication
> is performed by a separate coexisting entity that produces auth cookies).
>
> > it's much easier for people to avoid the overhead of features they don't
> > need, find alternative ways of implementing individual features, and to
> > reuse code in general.
> > Just my opinion.
Re: reading one message at a time
On Tue, Oct 28, 2008 at 6:35 AM, Moonstruck <[EMAIL PROTECTED]> wrote:
> you mean we should write the file like this?
> (sizeof a message) | (serialized message) | (sizeof another message) | (another serialized message) | ... and so on and so forth
>
> while reading, we'd first read the message size, then read data with
> the specified size to a stream, after that I can get it parsed;
>
> is that so?

Yes.
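[Editor's note: the framing described in this thread can be sketched without protobuf at all. Below is a minimal Python illustration (the byte strings stand in for msg.SerializeToString() output, and the 4-byte little-endian size prefix is just one reasonable choice; a varint prefix would work equally well):]

```python
import struct
from io import BytesIO

def write_messages(stream, payloads):
    # Each record: a 4-byte little-endian size, then the message bytes.
    for p in payloads:
        stream.write(struct.pack("<I", len(p)))
        stream.write(p)

def read_messages(stream):
    # Read the size prefix first, then exactly that many payload bytes,
    # so each message can be parsed individually.
    messages = []
    while True:
        header = stream.read(4)
        if not header:                     # clean end of file
            break
        (size,) = struct.unpack("<I", header)
        messages.append(stream.read(size))
    return messages

buf = BytesIO()
write_messages(buf, [b"first", b"second", b"third"])
buf.seek(0)
print(read_messages(buf))  # -> [b'first', b'second', b'third']
```

In the real application each recovered byte string would then be handed to Person.ParseFromString / parseFrom, one message at a time.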
Re: protoc and python imports
The model used by the protocol compiler is to assume that the .proto files are located in a tree that parallels the Python package tree. We don't want to get into relative imports because they can get complicated and error-prone.

If you don't want to put your .proto files into a tree matching your Python package tree, you could alternatively map them into such a tree virtually, like so:

protoc --proto_path=mypkg=proto

This maps the virtual directory "mypkg" to the physical directory "proto". You would then have to write your imports like:

import "mypkg/a.proto"

You can also map individual files.

If this is insufficient, then I guess we need a way to specify the Python package explicitly in the .proto file, similar to the java_package option, rather than just inferring it from the location of the .proto file.

On Tue, Oct 28, 2008 at 7:49 AM, Alan Kligman <[EMAIL PROTECTED]> wrote:
> I need the line to look like:
>
> from .. import a_pb2.py
>
> The reason this is a problem is because I'm building the protocol
> buffers into the middle of an existing project. The problem is that
> protoc assumes that the output is either at the top of the package, or
> that the related files are all in the same sub-package (which is
> rarely true). Python 2.5 supports relative intra-package imports (like
> the one above). More details here:
> http://docs.python.org/tut/node8.html#SECTION00842.
>
> I think this is probably worth fixing. The workaround is to do some
> post-processing on the output from protoc, which could get nasty.
>
> On Oct 27, 5:44 pm, "Kenton Varda" <[EMAIL PROTECTED]> wrote:
> > I'm not sure I understand. What would you expect the import line importing
> > a_pb2 to look like? My understanding is that Python imports are absolute,
> > not relative to the importing file.
> >
> > On Sat, Oct 25, 2008 at 7:11 PM, Alan Kligman <[EMAIL PROTECTED]> wrote:
> > > I'm having a problem with protoc where python imports are not done
> > > correctly. Here's the situation:
> > >
> > > I have a directory structure like this:
> > >
> > > proto/a.proto
> > > proto/a/b.proto
> > > proto/a/c.proto
> > >
> > > a.proto provides some common definitions for both b.proto and c.proto.
> > > I build the output like this:
> > >
> > > protoc --proto_path=. --python_out=../dist *.proto
> > > protoc --proto_path=. --python_out=../dist a/*.proto
> > >
> > > assuming that proto is the current directory. Because a.proto is
> > > included in both b.proto and c.proto, they both import it like this:
> > >
> > > import "a.proto";  # relative to the current directory
> > >
> > > After building the protobuf files with protoc, the resulting python
> > > output has import statements for a_pb2.py that look like:
> > >
> > > import a_pb2.py
> > >
> > > which is wrong, because a_pb2.py is actually in the directory one
> > > above b_pb2.py and c_pb2.py. Is there a way to get protoc to do this
> > > properly? Is it a bug? Python 2.5 handles relative imports, but there
> > > is no nice way to do it in Python 2.4.
> > >
> > > Thoughts?
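[Editor's note: the model Kenton describes - generated modules living in a tree that parallels the Python package tree, imported absolutely rather than relatively - can be demonstrated without protoc. In this sketch the module names (mypkg, a_pb2, b_pb2) are illustrative stand-ins for real generated code:]

```python
import os
import sys
import tempfile

# Build a throwaway package tree that mirrors what protoc's Python
# output assumes: mypkg/a_pb2.py and mypkg/b_pb2.py side by side,
# with b_pb2 importing its dependency by its absolute module path.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()

with open(os.path.join(pkg, "a_pb2.py"), "w") as f:
    f.write("MESSAGE_NAME = 'A'\n")          # stand-in for generated code

with open(os.path.join(pkg, "b_pb2.py"), "w") as f:
    # b.proto would import "mypkg/a.proto", so the generated b_pb2
    # imports mypkg.a_pb2 absolutely - never "from .. import a_pb2".
    f.write("import mypkg.a_pb2\n")
    f.write("DEP = mypkg.a_pb2.MESSAGE_NAME\n")

sys.path.insert(0, root)
import mypkg.b_pb2

print(mypkg.b_pb2.DEP)  # -> A
```

Because the import is absolute, the generated files work from anywhere on sys.path, which is exactly why the .proto tree has to match the intended package tree (or be mapped onto it with --proto_path=mypkg=proto).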
Re: speed - python implementation
Jeremy Leader wrote:
> Might it be possible to use the XS wrappers generated by protobuf-perlxs
> from Python?

Aaah, not enough caffeine yet. I somehow confused XS (Perl-specific) with SWIG (which supports Perl, Python, and many others). Never mind!

--
Jeremy Leader
[EMAIL PROTECTED]
Re: Status of C# Projects?
On Oct 28, 4:53 pm, honce <[EMAIL PROTECTED]> wrote:
> We added "package tex;" and got the same results. We'll take a better
> look at the unit tests next. FYI -- I did notice there are no proto
> files in the *.Test directories I downloaded with git.

No, they're in the "protos" directory.

Jon
Re: speed - python implementation
Might it be possible to use the XS wrappers generated by protobuf-perlxs from Python?

--
Jeremy Leader
[EMAIL PROTECTED]

andres wrote:
> Hi,
>
> I would like to use protocol buffers in my python code but currently
> the serialization and parsing methods are too slow compared to
> cPickle. I've read several posts stating that this is because the
> python implementation has not been optimized for speed yet. Are there
> plans to improve the performance of proto buffers in python? Does
> anybody know of a C++ extension/wrapper module which lets you access
> C++ compiled protocol buffers directly from python code?
>
> Thanks,
> Andres
Re: Status of C# Projects?
Cool. We'll take it out for a spin and see what happens.

/jwh

On Oct 28, 12:35 am, Marc Gravell <[EMAIL PROTECTED]> wrote:
> For what it is worth, protobuf-net now has code generation support*,
> and handles that file fine;
>
> http://code.google.com/p/protobuf-net/source/browse/trunk/Examples/pe...
>
> Marc
>
> [*=OK, it is a work in progress - I haven't added handlers for all the
> combinations yet, but doesn't need much more work]
Re: Status of C# Projects?
We added "package tex;" and got the same results. We'll take a better look at the unit tests next. FYI -- I did notice there are no proto files in the *.Test directories I downloaded with git.

/jwh

On Oct 28, 4:11 am, Jon Skeet <[EMAIL PROTECTED]> wrote:
> On Oct 28, 12:04 am, honce <[EMAIL PROTECTED]> wrote:
> > We are looking at the dotnet-protobufs for a new and are unable to
> > build the generated C# code. I downloaded the latest code out of git
> > and have been successful in building a C# file.
> >
> > We have been running:
> > protoc -operson.bin person.proto
> > protogen person.bin
> >
> > We add person.cs to our solution and rebuild with VS2008sp1 targeted
> > for .Net 3.5sp1 on WinXP sp3. We then get the following error:
> > Person.cs(7,29): error CS0441: 'Person': a class cannot be both static
> > and sealed
>
> Odd. It certainly sounds like a straight bug, but I'm amazed that I
> haven't seen it elsewhere.
>
> Do you not specify a package or namespace anywhere? That may be
> relevant (although it's still a bug in protogen, of course).
> Look at the unit tests for examples of how to specify things - I'm
> hoping to make it simpler in terms of the csharp_options.proto and
> descriptor.proto dependencies over time.
>
> Jon
Re: protoc and python imports
I need the line to look like:

from .. import a_pb2.py

The reason this is a problem is because I'm building the protocol buffers into the middle of an existing project. The problem is that protoc assumes that the output is either at the top of the package, or that the related files are all in the same sub-package (which is rarely true). Python 2.5 supports relative intra-package imports (like the one above). More details here: http://docs.python.org/tut/node8.html#SECTION00842.

I think this is probably worth fixing. The workaround is to do some post-processing on the output from protoc, which could get nasty.

On Oct 27, 5:44 pm, "Kenton Varda" <[EMAIL PROTECTED]> wrote:
> I'm not sure I understand. What would you expect the import line importing
> a_pb2 to look like? My understanding is that Python imports are absolute,
> not relative to the importing file.
>
> On Sat, Oct 25, 2008 at 7:11 PM, Alan Kligman <[EMAIL PROTECTED]> wrote:
> > I'm having a problem with protoc where python imports are not done
> > correctly. Here's the situation:
> >
> > I have a directory structure like this:
> >
> > proto/a.proto
> > proto/a/b.proto
> > proto/a/c.proto
> >
> > a.proto provides some common definitions for both b.proto and c.proto.
> > I build the output like this:
> >
> > protoc --proto_path=. --python_out=../dist *.proto
> > protoc --proto_path=. --python_out=../dist a/*.proto
> >
> > assuming that proto is the current directory. Because a.proto is
> > included in both b.proto and c.proto, they both import it like this:
> >
> > import "a.proto";  # relative to the current directory
> >
> > After building the protobuf files with protoc, the resulting python
> > output has import statements for a_pb2.py that look like:
> >
> > import a_pb2.py
> >
> > which is wrong, because a_pb2.py is actually in the directory one
> > above b_pb2.py and c_pb2.py. Is there a way to get protoc to do this
> > properly? Is it a bug? Python 2.5 handles relative imports, but there
> > is no nice way to do it in Python 2.4.
> >
> > Thoughts?
Re: Standard for RPC proto
Authentication doesn't really belong here. You should either use an authenticated transport (like HTTPS), or handle it in the layer above (this is what I'm currently doing).

On Oct 27, 8:54 pm, "Paul P. Komkoff Jr" <[EMAIL PROTECTED]> wrote:
> On Oct 26, 02:53, Alan Kligman <[EMAIL PROTECTED]> wrote:
> > I haven't had much to add recently. Protobuf-rpc is based heavily on
> > json-rpc, so there's really nothing new behind it. It works well for
> > my own use and is generic enough to probably work well for most other
> > people.
> >
> > Is there a great deal of interest in devising a standard rpc protocol
> > definition?
>
> Yes it is.
> Since everyone is trying to design their own RPC format, running into
> the same flaws as everyone else.
> For example, I haven't seen (in protobuf-rpc nor in protorpc) a
> single word about authentication.
Re: reading one message at a time
you mean we should write the file like this?

(sizeof a message) | (serialized message) | (sizeof another message) | (another serialized message) | ... and so on and so forth

while reading, we'd first read the message size, then read data with the specified size to a stream, after that I can get it parsed;

is that so?

On Oct 28, 5:36 am, "Kenton Varda" <[EMAIL PROTECTED]> wrote:
> The protocol buffer format expects you to remember where the message ends;
> it cannot figure that out for itself. So, you need to write the size of each
> message to your file before you write the message itself.
>
> On Mon, Oct 27, 2008 at 11:42 AM, Amit Gupta <[EMAIL PROTECTED]> wrote:
> > I have a message defined as
> >
> > message Person
> > {
> >   required int32 id = 1;
> > }
> >
> > and then, after protoc-compiling, I dump 500 million such buffers
> > using a C++ application into a file.
> >
> > Now, when I read it back using a different C++ application (and
> > ParseFromIstream), I get an error "message too long and look at the
> > file ..."
> >
> > My intended behavior is to read the pb-messages one at a time, analyze
> > each one's content and then read the next message. For some reason, when I
> > deserialize from the iostream, pb is reading and deserializing the full
> > content of the file.
> >
> > How can I make the deserialization work online, one message at a
> > time, from the istream?
> >
> > My code is attached below.
> >
> > Many Thanks, Amit
> >
> > #include <iostream>
> > #include <fstream>
> > #include "temp.pb.h"
> >
> > using namespace std;
> >
> > #define MAX 5
> >
> > void write()
> > {
> >   fstream output("out.db", ios::out | ios::trunc | ios::binary);
> >
> >   for(int num = 0; num < MAX; ++num)
> >   {
> >     Person p;
> >     p.set_id(num);
> >     p.SerializeToOstream(&output);
> >   }
> > }
> >
> > void read()
> > {
> >   fstream input("out.db", ios::in | ios::binary);
> >
> >   int num = 0;
> >   for(; num < MAX; ++num)
> >   {
> >     Person p;
> >     p.ParseFromIstream(&input);
> >   }
> >   cout << endl << num << endl;
> > }
> >
> > int main()
> > {
> >   write();
> >   //cout << "Done Writing";
> >   read();
> >   cout << "Done reading" << endl;
> >   return 0;
> > }
Re: Status of C# Projects?
On Oct 28, 12:04 am, honce <[EMAIL PROTECTED]> wrote:
> We are looking at the dotnet-protobufs for a new and are unable to
> build the generated C# code. I downloaded the latest code out of git
> and have been successful in building a C# file.
>
> We have been running:
> protoc -operson.bin person.proto
> protogen person.bin
>
> We add person.cs to our solution and rebuild with VS2008sp1 targeted
> for .Net 3.5sp1 on WinXP sp3. We then get the following error:
> Person.cs(7,29): error CS0441: 'Person': a class cannot be both static
> and sealed

Odd. It certainly sounds like a straight bug, but I'm amazed that I haven't seen it elsewhere.

Do you not specify a package or namespace anywhere? That may be relevant (although it's still a bug in protogen, of course). Look at the unit tests for examples of how to specify things - I'm hoping to make it simpler in terms of the csharp_options.proto and descriptor.proto dependencies over time.

Jon
Re: Standard for RPC proto
Paul - I haven't looked at protobuf-rpc, but protorpc uses a .proto message for the payload. One upshot of that is that it should be (in theory at least) fine to add extension properties to the message - i.e. you could add a security object as an extended property. A given server could check for expected extension properties and act accordingly.

The only issue I have with this is that you'd need to put the options *after* the main body, or push the tags out of order... No idea if that is a good idea or not...

Marc
Re: Standard for RPC proto
On Oct 28, 2:02 am, "Kenton Varda" <[EMAIL PROTECTED]> wrote:
> I don't really have a stake in the design of a protobuf-based RPC format.
> However, I'd like to point out that the design philosophy we tend to prefer
> at Google is to keep each layer of the system as simple as possible, and
> implement orthogonal features using separate layers. Authentication is a
> great example of something that I would not want to make part of an RPC
> protocol itself, but rather implement as a layer under it, similar to the
> way HTTP can operate over SSL. If you keep the system separate in this way,

First, I'm talking about something similar to simple HTTP auth, which allows us to authenticate by a key/value pair and does not include TLS. With support for a "struct user_credentials" passed to the server method, so we can "impersonate" the user.

Also, even before considering the paragraph above, if you have the system separated that way it will produce incompatible wire formats. My goal is to have, at least, a lowest common denominator which could be implemented in, at least, twisted-python and something-java, in order to bootstrap my project now. It would be wonderful if this LCD format had some notion of authentication (or authorization, if authentication is performed by a separate coexisting entity that produces auth cookies).

> it's much easier for people to avoid the overhead of features they don't
> need, find alternative ways of implementing individual features, and to
> reuse code in general.
>
> Just my opinion.
Re: Standard for RPC proto
I'm currently working on the guts of a protorpc layer for protobuf-net; so yes, any conversations here are very valued. Especially re test rigs ;-p

I don't have any huge bias for/against either of the cited specs (protorpc/protobuf-rpc). I just want something that works ;-p

Marc
Re: Status of C# Projects?
For what it is worth, protobuf-net now has code generation support*, and handles that file fine:

http://code.google.com/p/protobuf-net/source/browse/trunk/Examples/person.cs

Marc

[* = OK, it is a work in progress - I haven't added handlers for all the combinations yet, but it doesn't need much more work]