[ https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101028#comment-16101028 ]
Andrew Wang commented on HDFS-12151:
------------------------------------

LGTM, though it'd also be good for [~ehiggs] to take a look as the original author of this code. We could really use a unit test that calls writeBlock with the optional arguments unset, since this is a very similar bug to HDFS-11956 (a rough sketch follows at the end of this comment).

While looking at this area of the code again, I also had some questions (mainly directed at [~ehiggs] and [~chris.douglas]) that are not related to this patch. We might end up addressing them in a different JIRA, but I'm putting the comments here since the changes might be simple enough to fold into this patch:

{code}
    int nst = targetStorageTypes.length;
    StorageType[] storageTypes = new StorageType[nst + 1];
    storageTypes[0] = storageType;
    if (targetStorageTypes.length > 0) {
      System.arraycopy(targetStorageTypes, 0, storageTypes, 1, nst);
    }
{code}

Should we use {{nst > 0}} rather than {{targetStorageTypes.length > 0}} here for clarity? This block could also use a comment explaining that we're creating {{storageTypes}} and {{storageIds}} just for passing to checkAccess. It'd be even better to reduce the scope (move this inside checkAccess?) for additional clarity.

{code}
    // To support older clients, we don't pass in empty storageIds
    final int nsi = targetStorageIds.length;
    final String[] storageIds;
    if (nsi > 0) {
      storageIds = new String[nsi + 1];
      storageIds[0] = storageId;
      if (targetStorageTypes.length > 0) {
        System.arraycopy(targetStorageIds, 0, storageIds, 1, nsi);
      }
    } else {
      storageIds = new String[0];
    }
{code}

Should the {{targetStorageTypes.length > 0}} check here really be {{nsi > 0}}? If so, we could elide it entirely, since that condition is already captured by the enclosing if. Both of these are sketched below.

Finally, I don't understand why we need to add the targets' IDs/types for checkAccess. Each DN only needs to validate itself, yea? BTSM#checkAccess indicates this in its javadoc, but it looks like we run through ourselves and the targets each time:

{code}
 * The current node can only verify that one of the storage [Type|ID] is
 * available. The rest will be on different nodes.
{code}
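To make the first two questions concrete, here's a rough sketch of how the two blocks might read with the checks tightened up. Same variable names as the patch, but treat it as an illustration rather than a drop-in replacement:

{code}
    // storageTypes/storageIds exist only to pass the local storage plus
    // the targets down to checkAccess.
    final int nst = targetStorageTypes.length;
    final StorageType[] storageTypes = new StorageType[nst + 1];
    storageTypes[0] = storageType;
    if (nst > 0) {
      System.arraycopy(targetStorageTypes, 0, storageTypes, 1, nst);
    }

    // To support older clients, we don't pass in empty storageIds.
    final int nsi = targetStorageIds.length;
    final String[] storageIds;
    if (nsi > 0) {
      storageIds = new String[nsi + 1];
      storageIds[0] = storageId;
      System.arraycopy(targetStorageIds, 0, storageIds, 1, nsi);
    } else {
      storageIds = new String[0];
    }
{code}

Strictly speaking the remaining {{nst > 0}} guard could also be dropped, since {{System.arraycopy}} with a zero length is a no-op; I left it in to stay close to the patch.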
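On the last point, if each DN really does only need to validate its own storage, the call could plausibly shrink to something like the following. This is hypothetical: it assumes the DataXceiver {{checkAccess}} overload that takes the type/ID arrays, and it sidesteps the older-client compatibility handling above:

{code}
    // Hypothetical: validate only this DN's storage, per the
    // BTSM#checkAccess javadoc, rather than prepending ourselves to the
    // full target arrays.
    checkAccess(replyOut, isClient, block, blockToken, Op.WRITE_BLOCK,
        BlockTokenIdentifier.AccessMode.WRITE,
        new StorageType[] { storageType },
        new String[] { storageId });
{code}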
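And for the unit test, something along these lines is what I have in mind. A sketch only (imports elided), with the {{Sender#writeBlock}} argument order following my reading of trunk, so treat the details as unverified:

{code}
  // Sketch of a regression test (not from any posted patch): speak raw
  // DataTransferProtocol to a single DN the way a 2.x client effectively
  // does, i.e. with storageId empty and targetStorageIds absent, and
  // assert the DN replies instead of dropping the connection.
  @Test
  public void testWriteBlockWithoutOptionalStorageIds() throws Exception {
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
      DistributedFileSystem fs = cluster.getFileSystem();
      Path path = new Path("/testWriteBlock");
      DFSTestUtil.createFile(fs, path, 512L, (short) 1, 0L);
      ExtendedBlock first = DFSTestUtil.getFirstBlock(fs, path);
      // Use a fresh block id in the same pool so PIPELINE_SETUP_CREATE
      // can proceed on the DN.
      ExtendedBlock blk = new ExtendedBlock(first.getBlockPoolId(),
          first.getBlockId() + 1, 0, first.getGenerationStamp());
      DataNode dn = cluster.getDataNodes().get(0);
      try (Socket s = new Socket("127.0.0.1", dn.getXferPort())) {
        DataOutputStream out = new DataOutputStream(s.getOutputStream());
        DataInputStream in = new DataInputStream(s.getInputStream());
        new Sender(out).writeBlock(blk, StorageType.DEFAULT,
            BlockTokenSecretManager.DUMMY_TOKEN, "testClient",
            new DatanodeInfo[0], new StorageType[0], null,
            BlockConstructionStage.PIPELINE_SETUP_CREATE, 0, 0L, 0L, 0L,
            DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512),
            CachingStrategy.newDefaultStrategy(), false, false,
            new boolean[0], "" /* storageId */,
            new String[0] /* targetStorageIds */);
        out.flush();
        // Pre-fix, the AIOOBE tears down the connection and this read
        // hits a premature EOF; post-fix we should get a normal reply.
        BlockOpResponseProto reply =
            BlockOpResponseProto.parseFrom(PBHelperClient.vintPrefixed(in));
        assertEquals(Status.SUCCESS, reply.getStatus());
      }
    } finally {
      cluster.shutdown();
    }
  }
{code}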
> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> --------------------------------------------------------
>
>                 Key: HDFS-12151
>                 URL: https://issues.apache.org/jira/browse/HDFS-12151
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rolling upgrades
>    Affects Versions: 3.0.0-alpha4
>            Reporter: Sean Mackrory
>            Assignee: Sean Mackrory
>         Attachments: HDFS-12151.001.patch
>
>
> Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently fails. On the client side it looks like this:
> {code}
> 17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
>         at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
> {code}
> But on the DataNode side there's an ArrayIndexOutOfBoundsException because there aren't any targetStorageIds:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 0
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
>         at java.lang.Thread.run(Thread.java:745)
> {code}