Probably we could, but I haven't done my research on that yet.  I'll need
to tinker a bit with it to get a better understanding.  At least I know
that in proto3, unknown fields are no longer present when serializing a
previously de-serialized message:

  https://developers.google.com/protocol-buffers/docs/proto3#updating
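
A minimal sketch of the behavior I mean (the messages are made up:
NewTokenPB is OldTokenPB plus one extra field numbered 5):

  #include <string>
  #include "token.pb.h"  // hypothetical generated proto3 messages

  bool UnknownFieldSurvives(const NewTokenPB& new_token) {
    std::string wire;
    new_token.SerializeToString(&wire);

    OldTokenPB old_token;             // same message, minus field 5
    old_token.ParseFromString(wire);  // field 5 arrives as "unknown"

    std::string rewire;
    old_token.SerializeToString(&rewire);
    // Under the proto3 semantics described at the link above, the
    // unknown field is silently dropped, so 'rewire' comes back
    // shorter than 'wire'.
    return rewire.size() == wire.size();
  }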

Also, I'm not sure whether it would be possible to differentiate between
compatible and non-compatible extensions.  Or is the proposal that if new
fields are found in a token, the token shouldn't be accepted at all?  In
that case it might be an issue during a rolling upgrade, I think.


Best regards,

Alexey

On Wed, Jan 25, 2017 at 3:14 PM, Dan Burkert <[email protected]> wrote:

> That's an interesting idea - say if the format evolved to have an
> additional field which restricts use of the token?  Could we use the
> protobuf unknown fields API to recognize if this happened?
>
> - Dan
>
> On Wed, Jan 25, 2017 at 3:03 PM, Alexey Serbin <[email protected]>
> wrote:
>
>> I like this idea.
>>
>> Probably it's too early at this point, but consider adding a notion of
>> compat/non-compat feature flags to tokens.  Imagine a new version of the
>> token has some additional restriction field which older code does not
>> understand.  In that case, even if the token signature is correct, the
>> older code should not accept the new token, since an unsupported
>> non-compat feature flag is present.
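>>
>> Roughly the check I have in mind on the verifier side (the
>> 'incompatible_features' field and the helper function are made up
>> for illustration):
>>
>>   bool CanUseToken(const TokenPB& token) {
>>     for (uint32_t feature : token.incompatible_features()) {
>>       if (!IsSupportedFeature(feature)) {
>>         // The token carries a restriction this build does not
>>         // understand, so reject it even though the signature
>>         // verifies.
>>         return false;
>>       }
>>     }
>>     return true;
>>   }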
>>
>> But in essence this looks great, IMO.
>>
>>
>> Best regards,
>>
>> Alexey
>>
>> On Wed, Jan 25, 2017 at 12:52 PM, Todd Lipcon <[email protected]> wrote:
>>
>>> Actually had one more idea... how about:
>>> message AuthenticationToken {
>>> };
>>>
>>> message AuthorizationToken {
>>> };
>>>
>>> message TokenPB {
>>>   // The time at which this token expires, in seconds since the
>>>   // unix epoch.
>>>   optional int64 expire_unix_epoch_seconds = 1;
>>>
>>>   oneof token {
>>>     AuthenticationToken authn = 2;
>>>     AuthorizationToken authz = 3;
>>>   }
>>> };
>>>
>>> message SignedTokenPB {
>>>   // The actual token contents. This is a serialized TokenPB protobuf.
>>> However, we use a
>>>   // 'bytes' field, since protobuf doesn't guarantee that if two
>>> implementations serialize
>>> a protobuf, they'll necessarily get bytewise identical results,
>>> particularly in the
>>>   // presence of unknown fields.
>>>   optional bytes token_contents = 2;
>>>
>>>   // The cryptographic signature of 'token_contents'.
>>>   optional bytes signature = 3;
>>>
>>>   // The sequence number of the key which produced 'signature'.
>>>   optional int64 signing_key_seq_num = 4;
>>> };
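>>>
>>> And a sketch of how signing would work with this layout (the key
>>> type and its Sign/seq_num methods are made up):
>>>
>>> #include "token.pb.h"  // generated from the messages above
>>>
>>> SignedTokenPB SignToken(const TokenPB& token, const SigningKey& key) {
>>>   SignedTokenPB signed_pb;
>>>   // Serialize the TokenPB exactly once and sign those bytes; the
>>>   // verifier checks the signature over 'token_contents' before
>>>   // parsing it back into a TokenPB.
>>>   token.SerializeToString(signed_pb.mutable_token_contents());
>>>   signed_pb.set_signature(key.Sign(signed_pb.token_contents()));
>>>   signed_pb.set_signing_key_seq_num(key.seq_num());
>>>   return signed_pb;
>>> }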
>>>
>>>
>>>
>>> On Wed, Jan 25, 2017 at 12:44 PM, Todd Lipcon <[email protected]> wrote:
>>>
>>>> On Wed, Jan 25, 2017 at 12:40 PM, Dan Burkert <[email protected]> wrote:
>>>>
>>>>> I think it must go in the 'token_contents' itself, otherwise it can be
>>>>> modified by a malicious client.  Other than that, looks good.
>>>>>
>>>>>
>>>> well, if it went into a separate field, then we'd have something like:
>>>>
>>>> optional bytes token_contents = 1;
>>>> optional int64 expiration_timestamp = 2;
>>>>
>>>> // Signature of the string: '<32-bit big-endian length of
>>>> token_contents> token_contents <64-bit big-endian expiration>'
>>>> optional bytes signature = 3;
>>>>
>>>> so they could try to modify it, but the signature would fail.
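>>>>
>>>> For concreteness, building that byte string might look like this
>>>> (hand-rolled big-endian packing; a real version would use an
>>>> endian utility):
>>>>
>>>> #include <cstdint>
>>>> #include <string>
>>>>
>>>> std::string SignatureInput(const std::string& token_contents,
>>>>                            int64_t expiration) {
>>>>   std::string buf;
>>>>   // 32-bit big-endian length of token_contents.
>>>>   uint32_t len = static_cast<uint32_t>(token_contents.size());
>>>>   for (int i = 3; i >= 0; --i) {
>>>>     buf.push_back(static_cast<char>((len >> (8 * i)) & 0xff));
>>>>   }
>>>>   buf.append(token_contents);
>>>>   // 64-bit big-endian expiration timestamp.
>>>>   uint64_t exp = static_cast<uint64_t>(expiration);
>>>>   for (int i = 7; i >= 0; --i) {
>>>>     buf.push_back(static_cast<char>((exp >> (8 * i)) & 0xff));
>>>>   }
>>>>   return buf;
>>>> }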
>>>>
>>>>
>>>>
>>>>> On Wed, Jan 25, 2017 at 12:37 PM, Todd Lipcon <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Hey folks
>>>>>>
>>>>>> I'm working on the token signing/verification stuff at the moment.
>>>>>> Curious to solicit some opinions on this:
>>>>>>
>>>>>>
>>>>>> message TokenPB {
>>>>>>   // The actual token contents. This is typically a serialized
>>>>>>   // protobuf of its own. However, we use a 'bytes' field, since
>>>>>>   // protobuf doesn't guarantee that if two implementations serialize
>>>>>>   // a protobuf, they'll necessarily get bytewise identical results,
>>>>>>   // particularly in the presence of unknown fields.
>>>>>>   optional bytes token_contents = 1;
>>>>>>
>>>>>>   // The cryptographic signature of 'token_contents'.
>>>>>>   optional bytes signature = 2;
>>>>>>
>>>>>>   // The sequence number of the key which produced 'signature'.
>>>>>>   optional int64 signing_key_seq_num = 3;
>>>>>> };
>>>>>>
>>>>>> The thing that's currently missing is an expiration timestamp of the
>>>>>> signature. I have two options here:
>>>>>>
>>>>>> *Option A*) say that the TokenPB itself doesn't capture expiration,
>>>>>> and if a particular type of token needs expiration, it would have to
>>>>>> put an 'expiration time' in its token contents itself.
>>>>>>
>>>>>> *pros:*
>>>>>> - token signing/verification is just a simple operation on the
>>>>>> 'token_contents' string
>>>>>>
>>>>>> *Cons:*
>>>>>> - would likely end up with redundant code between AuthN and AuthZ
>>>>>> tokens, both of which need expiration. However, that code isn't very
>>>>>> complicated (just a timestamp comparison) so maybe not a big deal?
>>>>>>
>>>>>> *Option B)* add an expiration timestamp field to TokenPB
>>>>>> *pros:*
>>>>>> - consolidate the expiration checking code into TokenVerifier
>>>>>> *cons:*
>>>>>> - now in order to sign/verify a token, we actually need to be signing
>>>>>> something like a concatenation of 'token_contents + expiration'. Not too
>>>>>> hard to construct this concatenation, but it does add some complexity.
>>>>>>
>>>>>> Any strong opinions either way?
>>>>>>
>>>>>> -Todd
>>>>>> --
>>>>>> Todd Lipcon
>>>>>> Software Engineer, Cloudera
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Todd Lipcon
>>>> Software Engineer, Cloudera
>>>>
>>>
>>>
>>>
>>> --
>>> Todd Lipcon
>>> Software Engineer, Cloudera
>>>
>>
>>
>
