[protobuf] Using libcurl to send protocol buffer messages over HTTP

2010-01-17 Thread samarules
Hi,

I am trying to send protocol buffer messages over HTTP using libcurl
in C++. I am POSTing in binary mode, but the data appears to get
corrupted when it is posted.

Here is the code snippet:

---
std::string st;
campaignpool.SerializeToString(&st);  // SerializeToString takes a pointer

// POST using curl

struct curl_slist *headers = NULL;
headers = curl_slist_append(headers,
                            "Content-Type: application/octet-stream");

/* post binary data */
easyhandle = curl_easy_init();
// Pass the raw bytes, not the std::string object itself.
curl_easy_setopt(easyhandle, CURLOPT_POSTFIELDS, st.data());
// Give the size explicitly so embedded '\0' bytes are not truncated.
curl_easy_setopt(easyhandle, CURLOPT_POSTFIELDSIZE, (long)st.size());
curl_easy_setopt(easyhandle, CURLOPT_HTTPHEADER, headers);
...
curl_easy_perform(easyhandle); /* post away! */

--
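
For reference, a self-contained sketch of the same POST with the elided
pieces filled in. The function name PostMessage and the URL argument are
placeholders added for illustration, and the error handling is only
indicative:

---
#include <curl/curl.h>
#include <google/protobuf/message.h>
#include <string>

// POSTs the serialized form of any generated message to the given URL.
bool PostMessage(const google::protobuf::Message& msg, const char* url) {
  std::string body;
  if (!msg.SerializeToString(&body)) return false;

  CURL* handle = curl_easy_init();
  if (handle == NULL) return false;

  struct curl_slist* headers = NULL;
  headers = curl_slist_append(headers,
                              "Content-Type: application/octet-stream");

  curl_easy_setopt(handle, CURLOPT_URL, url);
  curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
  curl_easy_setopt(handle, CURLOPT_POSTFIELDS, body.data());
  curl_easy_setopt(handle, CURLOPT_POSTFIELDSIZE, (long)body.size());

  CURLcode res = curl_easy_perform(handle);  // body stays alive until here

  curl_slist_free_all(headers);
  curl_easy_cleanup(handle);
  return res == CURLE_OK;
}
---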

Any help would be highly appreciated.


Regards,
Zia




[protobuf] Re: How can I reset a FileInputStream?

2010-01-17 Thread Jacob Rief
Hello Kenton,

 What makes you think it is inefficient?  It does mean the buffer has to be
 re-allocated but with a decent malloc implementation that shouldn't take
 long.  Certainly the actual reading from the file would take longer.  Have
 you seen performance problems with this approach?

Well, in order to see any performance penalties, I would have to
implement FileInputStream::Reset() and compare the results with the
current implementation (I can do that if there is enough interest).
I reviewed the implementation and saw that by re-instantiating a
FileInputStream object, 3 destructors and 3 constructors have to be
called, and one of them (CopyingInputStreamAdaptor) invalidates a buffer
that then has to be reallocated by the very next call to Next(). A
Reset() function would avoid these unnecessary steps.

 If there really is a performance problem with allocating new objects, then
 sure.

From the performance point of view, it's certainly not a big issue, but
from the code-cleanliness point of view, it is.
I have written a class named LzipInputStream, which offers a Reset()
function so that any part of the uncompressed input stream can be
accessed randomly without having to decompress everything. That Reset()
function is therefore called quite often, and it has to destroy and
recreate its lower layer, i.e. the FileInputStream. If each stackable
...InputStream offered a Reset() function, the upper layer would only
have to call Reset() on the lower layer, instead of keeping track of
how to reconstruct the lower-layer FileInputStream object.
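
To illustrate the difference, a rough sketch follows. The Reset() at the
bottom is hypothetical and does not exist in libprotobuf; rewinding via
the file descriptor is just one way to rebuild the stack:

---
#include <unistd.h>   // lseek
#include <google/protobuf/io/zero_copy_stream_impl.h>

using google::protobuf::io::FileInputStream;

// Today: rewinding means tearing down and rebuilding the stream object,
// which runs the destructors/constructors mentioned above and forces the
// adaptor's buffer to be reallocated on the next call to Next().
FileInputStream* Rewind(FileInputStream* old_input, int fd) {
  delete old_input;                 // destructors run, buffer discarded
  lseek(fd, 0, SEEK_SET);           // rewind the underlying descriptor
  return new FileInputStream(fd);   // constructors run, buffer reallocated
}

// Proposed: each stackable ...InputStream exposes Reset(), so an upper
// layer such as LzipInputStream could simply call
//   lower_layer->Reset();   // hypothetical: rewind and reuse the buffer
// without knowing how to reconstruct a FileInputStream.
---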

Regards, Jacob




Re: Haskell version and [protobuf] 2.3.0 released

2010-01-17 Thread Chris Kuklewicz
Thanks for the reply Kenton.  Another issue occurs to me as I get ready
to shut off for the night.  Reading repeated extension keys gets more
annoying, especially for keys not known at code generation time.  I will
sleep on it and then maybe the details will be clear.


On 16/01/2010 20:32, Kenton Varda wrote:
 Have you considered refactoring your compiler into the form of a code
 generator plugin?  Then you would not have to maintain your own parser
 anymore, you'd get the .zip output feature for free, and you could add
 insertion points to your code for someone else to extend it to support
 an RPC system.
 
 On Sat, Jan 16, 2010 at 7:47 AM, Chris Kuklewicz turingt...@gmail.com wrote:
 
 A question for Kenton or someone else who may know: Since repeated
 fields can be read as packed or unpacked, does the packed serializer
 ever use the unpacked format when it would be more efficient?  Saving a
 single packed datum is more verbose than a single unpacked datum.
 
 
 No, the official implementations do not do this.  A couple of arguments
 against:
 - People who have to interact with pre-2.3.0 code cannot use such an
 optimization, so it would have to be optional, which probably isn't
 worth it.
 - The optimization you describe would only be useful in the one-element
 case, and in that case it would only save one byte.  Since this case is
 probably relatively unlikely for packed repeated fields (which are
 typically large), the extra overhead of simply checking for this case
 probably isn't worth the savings it would bring.
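
To make the one-byte difference concrete, take a field declared as
repeated int32 xs = 1 [packed = true]; holding the single value 5 (the
field number and value are chosen purely for illustration):

  unpacked: 08 05      (tag: field 1, wire type 0 = varint; value 5)   -> 2 bytes
  packed:   0A 01 05   (tag: field 1, wire type 2 = length-delimited;
                        payload length 1; value 5)                     -> 3 bytes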





Re: [protobuf] C++ Parsing and de-serialization

2010-01-17 Thread Henner Zeller
Hi,
On Sun, Jan 17, 2010 at 13:21, Johan freddielunchb...@gmail.com wrote:
 Hi,

 I read data messages that come in on a TCP/IP socket.
 These messages can be of different types, for example Person, Project etc.,
 for which I have defined the corresponding .proto files.

 I can do:

    string s((char*)message,  strlen(message));
    Person p;
    p.ParseFromString(s);

Be careful here: the buffer can contain '\0' characters, so strlen
would return a shorter length. You need to transfer the length of the
message by some other means.
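
A minimal sketch of parsing with an explicit length instead of strlen().
The names message and message_len, and the idea that the peer sends the
length (for example as a fixed-size prefix), are assumptions here, not
something protobuf prescribes:

---
// message_len must be the exact number of payload bytes received from
// the socket; protobuf does not delimit messages by itself.
std::string s(reinterpret_cast<const char*>(message), message_len);
Person p;
if (!p.ParseFromString(s)) {
  // handle malformed input
}

// Or, avoiding the intermediate copy:
// p.ParseFromArray(message, message_len);
---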


 and then access the object Person.

 However, I would like to have it more generic.
 I cannot know from the beginning if the incoming message is a Person
 or a Project or whatever.
 Also, I am not so interested in the access methods, as I plan to loop
 through the fields using reflection to decode the object.

 Is there a way to decode the 'message' into a generic object (i.e. a
 Message) that I can then inspect using reflection?

 I can accept linking in the different .proto definitions at compile time.

 I hope this makes sense... and thanks for a great protocol!


 BR
 johan
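
For the generic-decoding part of the question, one possible sketch using
the C++ descriptor/reflection API. It assumes the generated classes are
linked in and that the fully qualified type name arrives alongside the
payload (that type-name transport is an assumption, not part of
protobuf); the function name DecodeGeneric is made up for illustration:

---
#include <string>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/message.h>

// Returns a newly allocated message of the named type parsed from the
// buffer, or NULL on failure.  The caller owns the result and can
// inspect it through msg->GetDescriptor() / msg->GetReflection().
google::protobuf::Message* DecodeGeneric(const std::string& type_name,
                                         const void* data, int size) {
  using namespace google::protobuf;
  const Descriptor* descriptor =
      DescriptorPool::generated_pool()->FindMessageTypeByName(type_name);
  if (descriptor == NULL) return NULL;  // type was not compiled/linked in
  const Message* prototype =
      MessageFactory::generated_factory()->GetPrototype(descriptor);
  Message* msg = prototype->New();
  if (!msg->ParseFromArray(data, size)) {
    delete msg;
    return NULL;
  }
  return msg;
}
---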




[protobuf] Re: A new ActionScript 3 protobuf implementation

2010-01-17 Thread Atry
I will translate it to English when I have time.

On Jan 18, 7:51 am, Kenton Varda ken...@google.com wrote:
 I've added this.  It's exciting to see the plugin system being used
 successfully already!

 Unfortunately I don't know enough Chinese to read your docs.  :/





 On Sun, Jan 17, 2010 at 2:33 AM, Atry pop.a...@gmail.com wrote:
  Hello,

  I wanted to announce an ActionScript 3 protobuf implementation:

 https://code.google.com/p/protoc-gen-as3/

  Currently it has all protobuf 2.3 features except services and groups,
  more than any existing AS3 implementation. A message using extensions or
  packed repeated fields can also be encoded and decoded. The compiler
  is written in Java, based on the protobuf 2.3 plugin system.

  I wish it would be added to
 http://code.google.com/p/protobuf/wiki/ThirdPartyAddOns





[protobuf] Why not allow [packed = true] for all repeated fields?

2010-01-17 Thread Atry
When I set [packed = true] on a nested message field, I receive this error:

[packed = true] can only be specified for repeated primitive fields.


I wonder why this is disallowed. The output for a packed nested message
field would look like the following:

The field number and wire type LENGTH_DELIMITED.
Length of all elements.
Length of the first element.
Content of the first element.
Length of the second element.
Content of the second element.
...
Length of the nth element.
Content of the nth element.
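
As a concrete illustration of that layout (the bytes below are
hypothetical; protoc rejects packed message fields, so this is not valid
wire format), take repeated Foo foos = 1; with two elements whose
serialized bodies are 3 bytes each:

  current (unpacked) encoding:  0A 03 <foo1 bytes>  0A 03 <foo2 bytes>
  proposed packed encoding:     0A 08  03 <foo1 bytes>  03 <foo2 bytes>

where 0A is the field-1/LENGTH_DELIMITED tag, 08 is the length of all
elements (1+3+1+3), and each 03 is the length of one element.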