[https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858158#comment-15858158]
Ewan Higgs commented on HDFS-11026:
-----------------------------------
Thanks for taking a look, [~daryn].
{quote}
The first byte trick is clever. I'd prefer an equality check for a definitive
magic byte that can't occur or represents something so large it can't/won't
occur . Perhaps something like -1 – I haven't checked if that's impossible for
the varint.
{quote}
Do you want to inject a magic byte or detect one? If we're going to inject a
magic value, then we can choose any positive byte. It seems to me that this
would just complicate the scheme, though it would be more explicit. Maybe
[~chris.douglas] can comment, as he devised the scheme with [~owen.omalley]'s
advice.
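For context, the "first byte trick" works because the legacy format leads with {{expiryDate}} written via {{WritableUtils.writeVLong}}: any realistic millisecond timestamp needs a multi-byte encoding, whose length-marker first byte is negative, while a protobuf message leads with a field tag byte ({{(fieldNumber << 3) | wireType}}) that is small and positive. A rough, self-contained sketch of the dispatch (the {{writeVLong}} lead-byte logic is re-implemented here for illustration; the real method lives in {{org.apache.hadoop.io.WritableUtils}}):

```java
public class TokenFormatSniff {
    // First byte produced by Hadoop's WritableUtils.writeVLong encoding
    // (re-implemented for illustration; see org.apache.hadoop.io.WritableUtils).
    static byte firstByteOfVLong(long i) {
        if (i >= -112 && i <= 127) {
            return (byte) i;       // small values fit in a single byte
        }
        int len = -112;
        if (i < 0) {
            i ^= -1L;              // one's complement for negative values
            len = -120;
        }
        long tmp = i;
        while (tmp != 0) {         // count how many bytes the value needs
            tmp = tmp >> 8;
            len--;
        }
        return (byte) len;         // multi-byte values lead with a negative marker
    }

    // Hypothetical dispatch: a negative lead byte means legacy Writable
    // encoding, a non-negative one means a protobuf field tag.
    static boolean looksLikeLegacy(byte first) {
        return first < 0;
    }

    public static void main(String[] args) {
        // Legacy BlockTokenIdentifier leads with expiryDate (ms since epoch),
        // which always takes a multi-byte VLong.
        byte legacyFirst = firstByteOfVLong(1486000000000L);
        // A protobuf message leads with a tag byte, e.g. field 1, varint wire type.
        byte protobufFirst = (1 << 3) | 0;
        System.out.println("legacy lead byte: " + legacyFirst
            + " -> legacy? " + looksLikeLegacy(legacyFirst));
        System.out.println("proto lead byte:  " + protobufFirst
            + " -> legacy? " + looksLikeLegacy(protobufFirst));
    }
}
```

The scheme only holds as long as the legacy lead value is guaranteed large (an expiry timestamp is) and every protobuf field tag stays in the positive byte range.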
{quote}
Curiosity, why do different jdks throw an IOException or RuntimeException
during incorrect decoding? I'd expect a deterministic exception.
{quote}
Oracle Java gives the following exception (after adding an
{{e.printStackTrace()}}):
{code}
Received exception reading protobuf message: java.io.IOException: value too long to fit in integer
java.io.IOException: value too long to fit in integer
	at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:331)
	at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:194)
	at org.apache.hadoop.hdfs.security.token.block.TestBlockToken.testProtobufBlockTokenBytesIsProtobuf(TestBlockToken.java:544)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}
OpenJDK gives:
{code}
Received exception reading protobuf message: java.lang.NegativeArraySizeException
java.lang.NegativeArraySizeException
	at org.apache.hadoop.io.WritableUtils.readString(WritableUtils.java:124)
	at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:195)
	at org.apache.hadoop.hdfs.security.token.block.TestBlockToken.testProtobufBlockTokenBytesIsProtobuf(TestBlockToken.java:544)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}
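If I read the traces right, the divergence is less about the JDK itself than about which check trips first on the garbage bytes: {{readVInt}} validates the decoded value and throws {{IOException}} ("value too long to fit in integer"), whereas {{readString}} takes a decoded (and here negative) length straight into an array allocation, which the JVM rejects with {{NegativeArraySizeException}}. A minimal sketch of that second failure mode (the {{-5}} length is made up for illustration):

```java
public class NegativeLenDemo {
    // Mimics readString's allocation step: the varint "length" was really
    // protobuf tag/data bytes, so it can decode to a negative number.
    static byte[] allocate(int decodedLength) {
        return new byte[decodedLength];   // NegativeArraySizeException if < 0
    }

    public static void main(String[] args) {
        try {
            allocate(-5);                 // hypothetical garbage length
            System.out.println("no exception");
        } catch (NegativeArraySizeException e) {
            System.out.println("caught NegativeArraySizeException");
        }
    }
}
```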
> Convert BlockTokenIdentifier to use Protobuf
> --------------------------------------------
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
> Issue Type: Task
> Components: hdfs, hdfs-client
> Affects Versions: 2.9.0, 3.0.0-alpha1
> Reporter: Ewan Higgs
> Assignee: Ewan Higgs
> Fix For: 3.0.0-alpha3
>
> Attachments: blocktokenidentifier-protobuf.patch,
> HDFS-11026.002.patch, HDFS-11026.003.patch, HDFS-11026.004.patch,
> HDFS-11026.005.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}}
> (basically a {{byte[]}}) and manual serialization to get data into and out of
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g.
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded
> more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able
> to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)