>> Don't mix an append-enabled HDFS client with an older HDFS, though; the RPC
>> versions won't match.
Ah, I see.

I am getting the following in the master's logs (I guess the master is unable
to even open the WAL of the failed regionserver because append isn't supported;
I was under the impression it was enough to just change HADOOP_CLASSPATH):


2010-08-23 17:35:59,566 DEBUG org.apache.hadoop.hbase.master.RegionServerOperationQueue: Processing todo: ProcessServerShutdown of b3130047.yst.yahoo.net,60020,1282459637402
2010-08-23 17:35:59,566 INFO org.apache.hadoop.hbase.master.RegionServerOperation: Process shutdown of server b3130047.yst.yahoo.net,60020,1282459637402: logSplit: false, rootRescanned: false, numberOfMetaRegions: 1, onlineMetaRegions.size(): 1
2010-08-23 17:35:59,569 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Splitting 1 hlog(s) in hdfs://b3130080.yst.yahoo.net:4600/hbase/.logs/b3130047.yst.yahoo.net,60020,1282459637402
2010-08-23 17:35:59,569 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLog: Splitting hlog 1 of 1: hdfs://b3130080.yst.yahoo.net:4600/hbase/.logs/b3130047.yst.yahoo.net,60020,1282459637402/67.195.50.35%3A60020.1282459654895, length=0
2010-08-23 17:35:59,569 INFO org.apache.hadoop.hbase.util.FSUtils: Recovering file hdfs://b3130080.yst.yahoo.net:4600/hbase/.logs/b3130047.yst.yahoo.net,60020,1282459637402/67.195.50.35%3A60020.1282459654895
2010-08-23 17:35:59,571 WARN org.apache.hadoop.hbase.master.RegionServerOperationQueue: Failed processing: ProcessServerShutdown of b3130047.yst.yahoo.net,60020,1282459637402; putting onto delayed todo queue
java.io.IOException: Failed to open hdfs://b3130080.yst.yahoo.net:4600/hbase/.logs/b3130047.yst.yahoo.net,60020,1282459637402/67.195.50.35%3A60020.1282459654895 for append
        at org.apache.hadoop.hbase.util.FSUtils.recoverFileLease(FSUtils.java:640)
        at org.apache.hadoop.hbase.regionserver.wal.HLog.splitLog(HLog.java:1303)
        at org.apache.hadoop.hbase.regionserver.wal.HLog.splitLog(HLog.java:1191)
        at org.apache.hadoop.hbase.master.ProcessServerShutdown.process(ProcessServerShutdown.java:299)
        at org.apache.hadoop.hbase.master.RegionServerOperationQueue.process(RegionServerOperationQueue.java:147)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:517)
Caused by: java.io.IOException: java.io.IOException: Append to hdfs not supported. Please refer to dfs.support.append configuration parameter.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1160)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:392)
        at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:960)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:956)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:954)
        at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:94)
        at org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:48)
        at org.apache.hadoop.hbase.RemoteExceptionHandler.checkIOException(RemoteExceptionHandler.java:66)
        at org.apache.hadoop.hbase.util.FSUtils.recoverFileLease(FSUtils.java:623)
        ... 5 more
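
The NameNode's "Append to hdfs not supported" message points at dfs.support.append. On a Hadoop build that actually ships the append/sync (HDFS-200) code, that flag would typically be enabled in hdfs-site.xml along these lines (a sketch of the one relevant property, not a full config; flipping it on a build without the append code does not add the feature):

```xml
<!-- hdfs-site.xml fragment; only meaningful on an append-capable Hadoop build -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```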



On 8/23/10 10:46 AM, "Jean-Daniel Cryans" <[email protected]> wrote:

It should; we detect via reflection whether HDFS-200 is in place, and if it
isn't we don't call syncFs. Don't mix an append-enabled HDFS client with an
older HDFS, though; the RPC versions won't match. This means you have to
replace the hadoop jar in HBase's lib directory.

J-D
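
The reflection-based detection J-D describes can be sketched as follows. This is an illustrative probe for a syncFs() method on a stream object; the class and method layout here are assumptions for demonstration, not HBase's actual code:

```java
import java.lang.reflect.Method;

// Probe an output-stream object at runtime for a syncFs() method and only
// call it when present, so the same client code runs against both
// append-capable (HDFS-200) and older HDFS builds.
public class SyncFsProbe {

    /** Returns true if the given object exposes a public no-arg syncFs(). */
    static boolean hasSyncFs(Object stream) {
        try {
            stream.getClass().getMethod("syncFs");
            return true;
        } catch (NoSuchMethodException e) {
            // Older HDFS client without HDFS-200: caller should skip syncFs.
            return false;
        }
    }

    // Stand-in for an append-capable stream, for demonstration only.
    static class AppendCapableStream {
        public void syncFs() { /* would flush edits out to the datanodes */ }
    }

    public static void main(String[] args) {
        System.out.println(hasSyncFs(new AppendCapableStream())); // true
        System.out.println(hasSyncFs(new Object()));              // false
    }
}
```

The point of doing this once at startup (rather than per call) is that the presence of the method depends only on which hadoop jar is on the classpath, which is also why the jar in HBase's lib directory must match the cluster.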

On Mon, Aug 23, 2010 at 10:40 AM, Vidhyashankar Venkataraman
<[email protected]> wrote:
> Can 0.89.x work with a hadoop version that doesn't support append?
>
> Vidhya
>
