[
https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101347#comment-16101347
]
Ewan Higgs commented on HDFS-12151:
-----------------------------------
{quote}Should we use {{nst > 0}} rather than {{targetStorageTypes.length > 0}}
(amended) here for clarity?{quote}
Yes.
{quote}
Should the {{targetStorageTypes.length > 0}} check really be {{nsi > 0}}? We
could elide it then since it's already captured in the outside if.
{quote}
This does look redundant, since {{targetStorageIds.length}} will be either 0 or
equal to {{targetStorageTypes.length}}.
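For concreteness, a sketch of that invariant using the suggested names (this is
illustrative only, not the actual patch):
{code}
final int nst = targetStorageTypes.length;
final int nsi = targetStorageIds.length;
// Invariant: nsi is either 0 (an older client sends no storage IDs)
// or equal to nst, so inside "if (nst > 0)" a second
// targetStorageTypes.length > 0 check adds nothing; the only
// meaningful distinction left is nsi > 0.
if (nst > 0) {
  if (nsi > 0) {
    // validate the storage IDs alongside the storage types
  }
}
{code}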
{quote}
Finally, I don't understand why we need to add the targeted ID/type for
checkAccess. Each DN only needs to validate itself, yea? BTSM#checkAccess
indicates this in its javadoc, but it looks like we run through ourselves and
the targets each time:
{quote}
That seems like a good simplification. I think I had assumed that the types
checked in the BTI ({{BlockTokenIdentifier}}) and in the request should be the
same kind (String against String, uint64 against uint64), but I don't see a
reason why they have to be. [~chris.douglas], what do you think?
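A rough sketch of that simplification, where each DN checks only its own
storage rather than looping over itself plus all the targets (the signature and
getter names here are assumptions for illustration, not the actual API):
{code}
// Hypothetical: validate only this DataNode's storage against the token.
void checkAccess(BlockTokenIdentifier id, ExtendedBlock block,
    AccessMode mode, StorageType myStorageType, String myStorageId)
    throws InvalidToken {
  // ... existing block ID / access mode checks ...
  if (!Arrays.asList(id.getStorageTypes()).contains(myStorageType)) {
    throw new InvalidToken("Block token does not cover storage type "
        + myStorageType);
  }
  // Storage IDs may be absent entirely (e.g. a Hadoop 2 client).
  if (id.getStorageIds().length > 0
      && !Arrays.asList(id.getStorageIds()).contains(myStorageId)) {
    throw new InvalidToken("Block token does not cover storage ID "
        + myStorageId);
  }
}
{code}
Each DN would then compare only its own type/ID against the lists carried in
the token, which sidesteps the String-to-String / uint64-to-uint64 pairing
question entirely.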
> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> --------------------------------------------------------
>
> Key: HDFS-12151
> URL: https://issues.apache.org/jira/browse/HDFS-12151
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: rolling upgrades
> Affects Versions: 3.0.0-alpha4
> Reporter: Sean Mackrory
> Assignee: Sean Mackrory
> Attachments: HDFS-12151.001.patch
>
>
> Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently
> fails. On the client side it looks like this:
> {code}
> 17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
>     at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449){code}
> But on the DataNode side there's an ArrayIndexOutOfBoundsException because
> there aren't any targetStorageIds (a guard sketch follows the trace below):
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 0
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
>     at java.lang.Thread.run(Thread.java:745){code}
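For illustration, a minimal sketch of the kind of guard that avoids the
out-of-bounds access when the array is empty (the names are assumptions, not
the actual patch):
{code}
// Hypothetical excerpt around DataXceiver.java:815: a Hadoop 2 client
// sends no target storage IDs, so indexing targetStorageIds[0]
// unconditionally throws ArrayIndexOutOfBoundsException.
String storageId = targetStorageIds.length > 0
    ? targetStorageIds[0]
    : null; // older client: fall back to "no storage ID"
{code}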