>> pbuf = MyProtoBuf()
>> pbuf.string_field = ""  # to make sure pbuf initialization stuff
>> # works (sets _has_string_field, etc)
>> pbuf._value_string_field = "bad utf8"
>> f = pbuf.DESCRIPTOR.fields_by_number[pbuf.STRING_FIELD_NUMBER]
>> f.type = f.TYPE_BYTES
>
> I don't think this will
On Mon, May 17, 2010 at 4:41 PM, JT Olds wrote:
It looks like I figured out a solution, though I'm not sure this is
the best way.
I have:
pbuf = MyProtoBuf()
pbuf.string_field = ""  # to make sure pbuf initialization stuff
# works (sets _has_string_field, etc)
pbuf._value_string_field = "bad utf8"
f = pbuf.DESCRIPTOR.fields_by_number[pbuf.STRING_FIELD_NUMBER]
f.type = f.TYPE_BYTES
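(For anyone puzzling over why retyping the field to TYPE_BYTES helps: a string field's payload gets treated as UTF-8 text by the runtime, while a bytes field passes through untouched. A minimal standalone sketch of that distinction, plain Python with no protobuf dependency; the byte string is just an example of malformed UTF-8:)

```python
# 0xff can never appear in well-formed UTF-8, so this payload is invalid text.
bad = b"bad \xff utf8"

def is_valid_utf8(data):
    """What a string field effectively requires: the bytes decode as UTF-8."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

print(is_valid_utf8(bad))      # False: text handling would choke on this
print(is_valid_utf8(b"fine"))  # True: plain ASCII is valid UTF-8
```

A bytes field never runs any such decode, which is why the descriptor swap sidesteps the problem.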
Okay, well it's slightly more complicated. My C++ application needs to
actually accept the technically invalid code points U+ and U+FFFE.
Otherwise, I need my server application to know when invalid UTF-8 has
happened. That's all fine. I have that all implemented. That's good.
The problem is I
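(On the "server needs to know when invalid UTF-8 has happened" point, one sketch of a pre-flight check; the helper name is mine, not from this thread. Note that strict UTF-8 decoding accepts noncharacters such as U+FFFE, since they are structurally valid UTF-8, which happens to match the requirement above:)

```python
def is_acceptable_text(data):
    """Return True if data (bytes) is structurally valid UTF-8.

    Hypothetical helper for illustration. Noncharacters such as U+FFFE
    pass this check; only malformed byte sequences are rejected."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

print(is_acceptable_text(b"hello"))                   # True
print(is_acceptable_text(b"\xff"))                    # False: invalid byte
print(is_acceptable_text(u"\ufffe".encode("utf-8")))  # True: noncharacter, but valid UTF-8
```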
If you compile with the macro GOOGLE_PROTOBUF_UTF8_VALIDATION_ENABLED
defined, the C++ code will do UTF-8 validation. However, it doesn't prevent
the data from being serialized or parsed; it simply logs an error message.
How would you like it to fail?
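(If logging is too soft a failure mode, one possible answer to "how would you like it to fail?" is to validate at the application layer and raise before serializing. A hedged sketch; this guard function is mine, not part of the protobuf API:)

```python
def require_utf8(value):
    """Raise ValueError if value (bytes) is not valid UTF-8; else return it.

    Hypothetical application-level guard, not a protobuf feature."""
    try:
        value.decode("utf-8")
    except UnicodeDecodeError as e:
        raise ValueError("invalid UTF-8 in string field: %s" % e)
    return value

print(require_utf8(b"ok"))  # passes through unchanged
```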
On Mon, May 17, 2010 at 3:15 PM, JT Olds wrote: