[
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15860965#comment-15860965
]
Ewan Higgs commented on HDFS-11026:
-----------------------------------
[~chris.douglas], I think [~daryn] was sketching out a version of #2, since he
didn't peek at the value; he actually consumed it. Solution #3 would be something
like making the protobuf look like:
{code}
enum BlockTokenSecretIndicationByte {
  MAGIC_VALUE = 1;
}

message BlockTokenSecretProto {
  required BlockTokenSecretIndicationByte magic = 1 [default = MAGIC_VALUE];
  optional uint64 expiryDate = 2;
  optional uint32 keyId = 3;
  optional string userId = 4;
  optional string blockPoolId = 5;
  optional uint64 blockId = 6;
  repeated AccessModeProto modes = 7;
}
{code}
Then detecting {{MAGIC_VALUE}} would involve peeking at the first byte as I
currently do.
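For illustration, peeking at the first byte (rather than consuming it) could be sketched roughly as below. This is a hedged sketch, not the actual patch: the class and method names are hypothetical, and it assumes the token bytes arrive on a mark-supported stream.
{code:java}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical sketch: distinguish a protobuf-serialized token from a
// legacy WritableUtils-serialized one by peeking at the first byte.
public class TokenFormatPeek {
    static boolean looksLikeProtobuf(DataInputStream in) throws IOException {
        in.mark(1);                 // remember the current position
        byte first = in.readByte();
        in.reset();                 // un-consume the byte
        // A protobuf tag byte for field numbers 1..15 is positive, while a
        // legacy token's leading WritableUtils vlong byte is <= 0 for the
        // large expiry timestamps seen in practice.
        return first > 0;
    }

    public static void main(String[] args) throws IOException {
        // 0x08 is the protobuf tag for field 1, wire type 0 (varint)
        DataInputStream pb = new DataInputStream(
            new ByteArrayInputStream(new byte[] { 0x08, 0x01 }));
        System.out.println(looksLikeProtobuf(pb)); // true
    }
}
{code}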
I propose we move forward with the current approach. I will add an updated
patch with the documentation added to the protobuf message:
{code}
/**
 * Secret information for the BlockKeyProto. This is not sent on the wire
 * as such; instead it is packed into a byte array, encrypted, and put in
 * BlockKeyProto.bytes.
 * When adding further fields, make sure they are optional as they would
 * otherwise not be backwards compatible.
 *
 * Note: As part of the migration from WritableUtils based tokens (aka "legacy")
 * to Protocol Buffers, we use the first byte to determine the type. If the
 * first byte is <= 0 then it is a legacy token. This means that when using
 * protobuf tokens, the first field sent must have a `field_number` less
 * than 16 to make sure that the first byte is positive. Otherwise it could be
 * parsed as a legacy token. See HDFS-11026 for more discussion.
 */
{code}
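The `field_number < 16` constraint above follows from proto2's varint tag encoding (tag = `(field_number << 3) | wire_type`): fields 1 through 15 fit in a single positive tag byte, while field 16 and above need a multi-byte varint whose first byte has the continuation bit set, making it negative as a signed byte. A small sketch (not Hadoop code) of that arithmetic:
{code:java}
// Demonstrates why protobuf fields 1..15 yield a positive first byte
// while field 16+ would look like a legacy (<= 0) token byte.
public class TagByteDemo {
    // First byte of the varint encoding of the field tag.
    static byte firstTagByte(int fieldNumber, int wireType) {
        int tag = (fieldNumber << 3) | wireType;
        // varint: low 7 bits first; continuation bit 0x80 if more follow
        return (byte) (tag < 0x80 ? tag : (tag & 0x7F) | 0x80);
    }

    public static void main(String[] args) {
        // Field 1, wire type 0 (varint): tag byte 0x08 -> positive
        System.out.println(firstTagByte(1, 0));  // 8
        // Field 15 is the largest single-byte tag: 0x78 -> still positive
        System.out.println(firstTagByte(15, 0)); // 120
        // Field 16 needs a two-byte varint tag; the first byte carries the
        // continuation bit, so as a signed byte it is negative
        System.out.println(firstTagByte(16, 0)); // -128
    }
}
{code}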
I also have two more unit tests that check empty messages (e.g. when
expiryDate isn't explicitly set).
> Convert BlockTokenIdentifier to use Protobuf
> --------------------------------------------
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
> Issue Type: Task
> Components: hdfs, hdfs-client
> Affects Versions: 2.9.0, 3.0.0-alpha1
> Reporter: Ewan Higgs
> Assignee: Ewan Higgs
> Fix For: 3.0.0-alpha3
>
> Attachments: blocktokenidentifier-protobuf.patch,
> HDFS-11026.002.patch, HDFS-11026.003.patch, HDFS-11026.004.patch,
> HDFS-11026.005.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}}
> (basically a {{byte[]}}) and manual serialization to get data into and out of
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g.
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded
> more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able
> to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]