Your metadata repository could be defined like this:
  // metadata_repository.proto

  import "google/protobuf/descriptor.proto"

  message MetadataRequest {
    repeated string filename = 1;
  }

  message MetadataReply {
    repeated google.protobuf.FileDescriptorProto descriptor = 1;
  }

  service MetadataServer {
    rpc GetMetadata(MetadataRequest) returns(MetadataReply);
  }

Basically, the client sends a MetadataRequest listing a set of .proto files,
and the server sends back a reply which is a list of FileDescriptorProtos
representing those files (and perhaps any files they depend on).
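
The server side could look something like this (just a rough sketch,
assuming the server was compiled against the .proto files it serves;
the handler name and the generated header "metadata_repository.pb.h"
are placeholders):

  #include <set>
  #include <string>
  #include <google/protobuf/descriptor.h>
  #include <google/protobuf/descriptor.pb.h>
  #include "metadata_repository.pb.h"  // placeholder name for the generated header

  using google::protobuf::DescriptorPool;
  using google::protobuf::FileDescriptor;

  // Copy |file| and everything it imports into the reply, dependencies
  // first, so the client can rebuild the files in order.
  void AddFileAndDeps(const FileDescriptor* file,
                      std::set<std::string>* seen,
                      MetadataReply* reply) {
    if (!seen->insert(file->name()).second) return;  // already added
    for (int i = 0; i < file->dependency_count(); i++) {
      AddFileAndDeps(file->dependency(i), seen, reply);
    }
    file->CopyTo(reply->add_descriptor());
  }

  void HandleGetMetadata(const MetadataRequest& request, MetadataReply* reply) {
    std::set<std::string> seen;
    for (int i = 0; i < request.filename_size(); i++) {
      const FileDescriptor* file =
          DescriptorPool::generated_pool()->FindFileByName(request.filename(i));
      if (file != NULL) AddFileAndDeps(file, &seen, reply);
    }
  }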

You could then construct a DescriptorPool out of these, and then
DynamicMessages based on them.
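
On the client side, something along these lines should work (again just
a sketch; the message name "mylog.UserLoginLogEntry" and the field name
"username" are placeholders):

  #include <google/protobuf/descriptor.h>
  #include <google/protobuf/descriptor.pb.h>
  #include <google/protobuf/dynamic_message.h>
  #include <google/protobuf/message.h>
  #include "metadata_repository.pb.h"  // placeholder name for the generated header

  using namespace google::protobuf;

  void UseMetadata(const MetadataReply& reply) {
    DescriptorPool pool;
    for (int i = 0; i < reply.descriptor_size(); i++) {
      // Each file's dependencies must already be in the pool, hence the
      // dependency-first ordering on the server side.
      pool.BuildFile(reply.descriptor(i));
    }

    const Descriptor* type = pool.FindMessageTypeByName("mylog.UserLoginLogEntry");
    if (type == NULL) return;

    // The factory must outlive any messages it creates.
    DynamicMessageFactory factory(&pool);
    Message* message = factory.GetPrototype(type)->New();

    // Fields are accessed through reflection, by FieldDescriptor rather
    // than by a name carried in the wire format.
    const Reflection* reflection = message->GetReflection();
    const FieldDescriptor* field = type->FindFieldByName("username");
    if (field != NULL) reflection->SetString(message, field, "foo");

    // From here the message can be serialized, or its fields read back
    // the same way.
    delete message;
  }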

On Wed, Feb 18, 2009 at 7:36 AM, <h.a.s...@gmail.com> wrote:

>
> Thank you both for the replies. I have outlined the thoughts I have
> had after reading your emails. You can read them below. In short, I
> have come to the conclusion that I should find a way to embed what I
> want into statically generated protobuf messages.
>
> I have indeed read the protocol documentation. The over-the-wire
> system I wrote also embeds only the message_identifier + an array
> indexing into the body of the message (each index represents the upper
> and lower boundaries of the data type it represents). In both cases
> self-description is not part of the over-the-wire format.
>
> The angle I am coming from is that if there were a centralised source
> (preferably a service of some sort, but it could just be proto files)
> that held the type signatures, then both end-points could be
> decoupled from the static code requirements.
>
> The problem of fixed-size integral types mapping to varints can be
> handled at runtime, since the library (at both endpoints) theoretically
> knows about the entry types it holds.
>
> The reason I am interested in this is that the project I am working
> on has a service which delivers lots of telemetry data to a relational
> database. This imposes the requirement of having access to both parts
> of each dictionary entry (name and value), since the field name is used
> when composing the SQL batch which does the inserts.
>
> Just to clarify, this is what my telemetry system looks like:
>        ___________________________
>       | type Meta-Data repository |
>        ---|--------------------|----
>      ____|_____       ________|_________       ____________
>     |  Client  |-----| Delivery Service |-----|  Database  |
>     ------------     --------------------     --------------
>
> However, now that I have thought on the matter, I could re-implement
> this functionality on top of protocol buffers. The client could
> register the message signatures with the delivery service, and I could
> then embed my messages into a protobuf message as a string. This
> approach is a bit horrible though, for performance reasons. I should go
> and look at the "DynamicMessage" Kenton mentioned, and perhaps I won't
> need to embed it as a string.
>
> Regards
>
> Hassan Syed
>
> On Feb 18, 2:44 pm, Chris <turingt...@gmail.com> wrote:
> > h.a.s...@gmail.com wrote:
> > > Something along the lines of :
> >
> > > #define LOGIN_EVENT_TYPE 0
> >
> > > char * userNameEntry="foo";
> >
> > > RpcChannel logDaemonConnection=new RpcChannel("/tmp/Logdaemon");
> >
> > > Message logMsg= new Message("UserLoginLogEntry");
> >
> > > logMsg.add("username",foo);
> >
> > > logMsg.add("eventType",LOGIN_EVENT_TYPE);
> >
> > > logMsg.add("timestamp","12-02-88 17:22:03);
> >
> > > logDaemonConnection.deliver(logMsg);
> >
> > Please note that the field names, such as "timestamp", are never in the
> > binary data.  The field numbers are what the binary format holds.  So
> > the above API is wrong, as the strings are unused and there is no field
> > number.
> >
> > And the binary format is not self-describing; see
> > http://code.google.com/apis/protocolbuffers/docs/encoding.html#structure
> > for the overlapping formats.  A "varint" could be decoded as one of
> > "int32, int64, uint32, uint64, sint32, sint64, bool, enum". And
> > "Length-delimited" binary data cannot reliably distinguish binary data
> > from submessages.
> >
> > Thus protocol-buffers only makes sense if the message types are
> > statically known and fully decoded, or if the messages
> > are not being decoded at all, such as in a proxy.  The flexibility with
> > extension fields is only useful in special cases for compatibility, such
> > as adding annotations to the options in the descriptor.proto itself.
> >
> > There is room for a reflection based API that reads a "proto" file (or a
> > descriptor set binary) at runtime.
> >
>
