[
https://issues.apache.org/jira/browse/HDFS-872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Todd Lipcon updated HDFS-872:
-----------------------------
Attachment: hdfs-793-branch20.txt
Here's a version of HDFS-793 that I think should be compatible.
One question, though: as I understand the old code, the replies in the ack
packet actually arrived in reverse-depth order. That is, if the pipeline is
Client -> DN1 -> DN2 -> DN3, the replies come back in the order DN3, DN2, DN1.
The client code, however, was acting as if they came back as DN1, DN2, DN3.
This patch uses the latter order, since that's what clients expect and what I
think makes more sense. This is a bit "incompatible", but since the recovery
never worked right anyhow, I don't think it matters.
I also took the liberty of adding some comments that explain my understanding
of the seqno=-2 stuff. Let me know if I've got it wrong.
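The ordering mismatch described above can be sketched as follows. This is an illustrative toy, not the actual PipelineAck wire code; the class and method names (AckOrderSketch, replyOrderOnWire, orderClientExpects) are hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class AckOrderSketch {
    // Pipeline: Client -> DN1 -> DN2 -> DN3
    static final List<String> PIPELINE = Arrays.asList("DN1", "DN2", "DN3");

    // Old datanode behavior as described above: each reply was appended as
    // the ack travelled back upstream, so the wire order was DN3, DN2, DN1.
    static List<String> replyOrderOnWire() {
        List<String> reversed = new ArrayList<>(PIPELINE);
        Collections.reverse(reversed);
        return reversed;
    }

    // The client indexed replies as if they arrived in pipeline order
    // (DN1, DN2, DN3) -- the order the patch adopts.
    static List<String> orderClientExpects() {
        return PIPELINE;
    }

    public static void main(String[] args) {
        System.out.println("wire order:   " + replyOrderOnWire());
        System.out.println("client order: " + orderClientExpects());
    }
}
```

The practical consequence: with a mismatch, the client attributes a datanode's ack status to the wrong node, so pipeline recovery evicts the wrong replica.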
> DFSClient 0.20.1 is incompatible with HDFS 0.20.2
> -------------------------------------------------
>
> Key: HDFS-872
> URL: https://issues.apache.org/jira/browse/HDFS-872
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node, hdfs client
> Affects Versions: 0.20.1, 0.20.2
> Reporter: Bassam Tabbara
> Assignee: Todd Lipcon
> Fix For: 0.20.2
>
> Attachments: hdfs-793-branch20.txt, hdfs-872.txt
>
>
> After upgrading to the latest HDFS 0.20.2 (r896310 from
> /branches/branch-0.20), old DFS clients (0.20.1) no longer work.
> HBase uses the 0.20.1 hadoop core jars, and the HBase master will no longer
> start up. Here is the exception from the HBase master log:
> {code}
> 2010-01-06 09:59:46,762 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read:
> java.io.IOException: Could not obtain block: blk_3380512596555557728_1002 file=/hbase/hbase.version
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1788)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1616)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1743)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1673)
> 	at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:320)
> 	at java.io.DataInputStream.readUTF(DataInputStream.java:572)
> 	at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:189)
> 	at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:208)
> 	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:208)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 	at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1241)
> 	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1282)
> 2010-01-06 09:59:46,763 FATAL org.apache.hadoop.hbase.master.HMaster: Not starting HMaster because:
> java.io.IOException: Could not obtain block: blk_3380512596555557728_1002 file=/hbase/hbase.version
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1788)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1616)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1743)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1673)
> 	at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:320)
> 	at java.io.DataInputStream.readUTF(DataInputStream.java:572)
> 	at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:189)
> 	at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:208)
> 	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:208)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 	at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1241)
> 	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1282)
> {code}
> If I switch the hadoop jars in the hbase/lib directory to the 0.20.2 version,
> it works well, which is what led me to open this bug here and not in the
> HBASE project.