Echoing Chris's messages, we don't really want to get into supporting arbitrary types in the core implementation. Various language-specific annotations could be added, but doing this portably across languages would be difficult. Supporting these types would also complicate the reflection implementations.
A fairly straightforward approach would be to use a custom option and write a plugin, keeping the values as raw bytes on the wire. You could use the generated class insertion points to insert wrapper methods that provide a friendlier API around the field.

On Fri, Jul 8, 2011 at 5:06 PM, Christopher Smith <[email protected]> wrote:
> On Fri, Jul 8, 2011 at 9:46 AM, Eric Hopper <[email protected]> wrote:
> > I guess. This is an interesting and general problem. Practically every
> > system like protobuf needs to solve it. Python itself, for example,
> > solves it for pickle by allowing you to write custom methods for your
> > classes to reduce them to well-known types like tuples.
>
> That's a very different system, and a language-specific solution. A
> lot of the success of protobufs is due to keeping the feature set very
> slim. Type aliasing adds a fair bit of complexity and really doesn't
> add much: you can always have common message types with locked-in
> fields and code which knows how to transform those messages into
> whatever representation your internal runtime has.
>
> Truth is, solutions like you are describing will have a lot of
> language-specific issues, and I think it'd be hard to make a case for
> all that added effort. For many languages, you don't need hooks in the
> library/generated code in order to handle this problem... for others you
> need all kinds of work. At some point, you add all the "interesting
> and general" problems and protobufs start to look like ASN.1 or XML.
> ;-)
>
> > I don't think the custom translation can be avoided. But I do think it
> > can be better integrated into the system.
>
> Modules seem like the logical way to do custom translations. Not sure
> what is wrong with that?
>
> > Protobuf's integer type can already represent integers of arbitrary
> > precision, it's just that not every language has an arbitrary-
> > precision integer type. My idea would solve this problem by requiring
> > you to specify the (for example) C++ type to use when deserializing a
> > large integer. If you didn't, the protobuf compiler would generate an
> > error.
>
> In C++, you can accomplish this simply by overloading the conversion
> operator for whatever type you want that integer cast to, no? Yes, it
> requires an intermediate state, but already having a mechanism for
> custom transformations is going to cost you all kinds of
> performance-optimization opportunities, so I don't feel there is much
> lost there.
>
> --
> Chris
>
> --
> You received this message because you are subscribed to the Google Groups "Protocol Buffers" group.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to [email protected].
> For more options, visit this group at http://groups.google.com/group/protobuf?hl=en.
