Ivan:

This should work -- or at least, it's worked for me in the past as long as the shutdown was clean (as you say yours was). The errors below don't help much. Perhaps retry with DEBUG enabled and send us over a log (see the FAQ for how to enable DEBUG).
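For reference, enabling DEBUG is usually a one-line addition to HBase's log4j configuration; this is a sketch assuming the default conf/ layout of the 0.2.x release (the logger name may differ in other versions):

```
# conf/log4j.properties -- emit DEBUG-level output from all HBase classes
log4j.logger.org.apache.hadoop.hbase=DEBUG
```

Restart the daemons after the change so the new level takes effect.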

Thanks,
St.Ack


Ivan wrote:
The situation is quite simple: I'm just trying to launch an HBase instance from 
an hbase directory in HDFS which had remained inactive for some period of 
time (in fact, HBase had just been running from a different directory on the same 
HDFS; it was something like a backup). But for some reason there were 
problems launching it, which resulted in exceptions, mostly like this:

2008-09-18 12:35:53,995 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 60020 caught: java.nio.channels.ClosedChannelException
        at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:125)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:294)
        at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:594)
        at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:654)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:917)

and this:

2008-09-18 12:53:35,097 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Processing message (Retry: 1)
java.net.SocketTimeoutException: timed out waiting for rpc response
        at org.apache.hadoop.ipc.Client.call(Client.java:559)
        at org.apache.hadoop.hbase.ipc.HbaseRPC$Invoker.invoke(HbaseRPC.java:230)
        at $Proxy0.regionServerReport(Unknown Source)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:314)
        at java.lang.Thread.run(Thread.java:595)

The snapshot was made by cleanly stopping HBase and copying (or maybe 
moving, I'm not really sure) its folder to another location in DFS.
Even the web interface of HBase fails to launch, but it seems that HBase 
is doing something when it tries to start: assigning some regions, 
creating/removing some blocks in HDFS, etc. But finally it fails...
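Roughly, the commands involved looked like this (a sketch only; the paths here are representative, not my exact ones):

```
# Stop HBase cleanly so all edits are flushed before copying.
bin/stop-hbase.sh
# Copy the HBase root directory to another HDFS location
# (Hadoop 0.17-era dfs shell; substitute the actual hbase.rootdir path).
bin/hadoop dfs -cp /hbase /hbase-backup
```

If the folder was moved rather than copied, it would have been -mv in place of -cp.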

Does anyone have any ideas about the reasons for such behavior?

P.S.: Hadoop 0.17.1, HBase 0.2.0, Debian Etch

Thanks,
Ivan Blinkov
