No, I do not see anything obviously suspicious in the regionserver log. Here
it is (note that all of my server processes are on the same machine, since I
am running in pseudo-distributed mode). Any other hints? Thanks. (Per your
question about HDFS, a sketch of the check I use is at the bottom of this
mail.)
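
For what it's worth, this is roughly how I probe the cluster from the client
side (just a sketch against the 0.90 client API; the class name is made up,
and hbase.zookeeper.quorum=localhost mirrors the hbase-site.xml quoted
further down):

ProbeHBase.java (sketch) ==>

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class ProbeHBase {
  public static void main(String[] args) throws Exception {
    // Picks up hbase-site.xml from the classpath; the quorum is set
    // explicitly here just to be safe (assumption: localhost, as configured).
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "localhost");
    // Throws MasterNotRunningException / ZooKeeperConnectionException
    // if the master cannot be reached via ZooKeeper.
    HBaseAdmin.checkHBaseAvailable(conf);
    System.out.println("HBase master is reachable");
  }
}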



regionserver.log ==>

2011-05-28 22:55:38,982 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2011-05-28 22:55:38,984 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2011-05-28 22:55:38,984 INFO
org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics:
Initialized
2011-05-28 22:55:39,008 DEBUG
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor service
name=RS_OPEN_REGION-localhost,60020,1306648534687, corePoolSize=3,
maxPoolSize=3
2011-05-28 22:55:39,009 DEBUG
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor service
name=RS_OPEN_ROOT-localhost,60020,1306648534687, corePoolSize=1,
maxPoolSize=1
2011-05-28 22:55:39,009 DEBUG
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor service
name=RS_OPEN_META-localhost,60020,1306648534687, corePoolSize=1,
maxPoolSize=1
2011-05-28 22:55:39,009 DEBUG
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor service
name=RS_CLOSE_REGION-localhost,60020,1306648534687, corePoolSize=3,
maxPoolSize=3
2011-05-28 22:55:39,009 DEBUG
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor service
name=RS_CLOSE_ROOT-localhost,60020,1306648534687, corePoolSize=1,
maxPoolSize=1
2011-05-28 22:55:39,009 DEBUG
org.apache.hadoop.hbase.executor.ExecutorService: Starting executor service
name=RS_CLOSE_META-localhost,60020,1306648534687, corePoolSize=1,
maxPoolSize=1
2011-05-28 22:55:39,107 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2011-05-28 22:55:39,192 INFO org.apache.hadoop.http.HttpServer: Added global
filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2011-05-28 22:55:39,196 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
Opening the listener on 60030
2011-05-28 22:55:39,196 INFO org.apache.hadoop.http.HttpServer:
listener.getLocalPort() returned 60030
webServer.getConnectors()[0].getLocalPort() returned 60030
2011-05-28 22:55:39,196 INFO org.apache.hadoop.http.HttpServer: Jetty bound
to port 60030
2011-05-28 22:55:39,197 INFO org.mortbay.log: jetty-6.1.26
2011-05-28 22:55:39,472 INFO org.mortbay.log: Started
[email protected]:60030
2011-05-28 22:55:39,473 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
Responder: starting
2011-05-28 22:55:39,473 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
Responder: starting
2011-05-28 22:55:39,475 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 0 on 60020: starting
2011-05-28 22:55:39,475 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
listener on 60020: starting
2011-05-28 22:55:39,476 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 1 on 60020: starting
2011-05-28 22:55:39,476 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 2 on 60020: starting
2011-05-28 22:55:39,476 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 3 on 60020: starting
2011-05-28 22:55:39,476 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 4 on 60020: starting
2011-05-28 22:55:39,476 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 5 on 60020: starting
2011-05-28 22:55:39,477 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 6 on 60020: starting
2011-05-28 22:55:39,477 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 7 on 60020: starting
2011-05-28 22:55:39,501 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 8 on 60020: starting
2011-05-28 22:55:39,503 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC
Server handler 1 on 60020: starting
2011-05-28 22:55:39,503 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC
Server handler 0 on 60020: starting
2011-05-28 22:55:39,503 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC
Server handler 2 on 60020: starting
2011-05-28 22:55:39,503 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC
Server handler 3 on 60020: starting
2011-05-28 22:55:39,504 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 9 on 60020: starting
2011-05-28 22:55:39,504 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC
Server handler 4 on 60020: starting
2011-05-28 22:55:39,504 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC
Server handler 5 on 60020: starting
2011-05-28 22:55:39,504 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC
Server handler 6 on 60020: starting
2011-05-28 22:55:39,505 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC
Server handler 7 on 60020: starting
2011-05-28 22:55:39,512 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC
Server handler 8 on 60020: starting
2011-05-28 22:55:39,512 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as
localhost,60020,1306648534687, RPC listening on /127.0.1.1:60020,
sessionid=0x1303a5253dc0002
2011-05-28 22:55:39,513 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC
Server handler 9 on 60020: starting
2011-05-28 22:55:39,520 INFO org.apache.hadoop.hbase.regionserver.StoreFile:
Allocating LruBlockCache with maximum size 199.4m
2011-05-28 23:00:39,529 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=957.86 KB,
free=198.43 MB, max=199.36 MB, blocks=0, accesses=0, hits=0, hitRatio=�%,
cachingAccesses=0, cachingHits=0, cachingHitsRatio=�%, evictions=0,
evicted=0, evictedPerRun=NaN
[The same LruBlockCache stats line repeats unchanged every five minutes from
23:05 through 23:35: total=957.86 KB, free=198.43 MB, blocks=0, accesses=0,
hits=0, evictions=0.]
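
Since everything runs on one box, name resolution might matter here: the
regionserver line above says it is listening on /127.0.1.1:60020, while the
master (in the quoted log below) was dialing /127.0.0.1:60020. Here is a
quick plain-JDK check of how names resolve locally (sketch only, class name
made up):

ResolveCheck.java (sketch) ==>

import java.net.InetAddress;

public class ResolveCheck {
  public static void main(String[] args) throws Exception {
    // Address that "localhost" resolves to (the master dialed 127.0.0.1:60020)
    System.out.println("localhost -> "
        + InetAddress.getByName("localhost").getHostAddress());
    // Address the machine's own hostname resolves to (the regionserver
    // above reports listening on /127.0.1.1:60020)
    InetAddress self = InetAddress.getLocalHost();
    System.out.println(self.getHostName() + " -> " + self.getHostAddress());
  }
}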







2011/5/29 Ferdy Galema <[email protected]>

> 2011-05-28 23:23:35,292 INFO org.apache.hadoop.ipc.HbaseRPC: Server at /
> 127.0.0.1:60020 could not be reached after 1 tries, giving up.
>
> This means the regionserver could not be reached. Check the regionserver
> logs to see why. Perhaps it failed to start? Is the HDFS fully functional?
>
> Ferdy.
>
> On 05/29/2011 08:28 AM, Sean Bigdatafun wrote:
> > I am trying 0.90.1 (hbase-0.90.1-CDH3B4) in pseudo-distributed mode, and
> > ran into a problem with HMaster crashing. Here is what I did.
> >
> > I. First, I installed a Hadoop pseudo cluster (hadoop-0.20.2-CDH3B4) with
> > the following conf files edited.
> >
> > 1) core-site.xml ==>
> > <property>
> >   <name>fs.default.name</name>
> >   <value>hdfs://localhost:9000</value>
> > </property>
> >
> > 2) hdfs-site.xml ==>
> >   <property>
> >     <name>dfs.replication</name>
> >     <value>1</value>
> >   </property>
> >
> > (With the above confs, start-all.sh was run, and the Hadoop pseudo
> > cluster started up happily.)
> >
> >
> > Secondly, I installed hbase-0.90.1-CDH3B4 with the following conf edited.
> >
> > hbase-site.xml ==>
> >   <property>
> >     <name>hbase.rootdir</name>
> >     <value>hdfs://localhost:9000/hbase</value>
> >   </property>
> >
> >   <property>
> >     <name>hbase.cluster.distributed</name>
> >     <value>true</value>
> >   </property>
> >
> >   <property>
> >     <name>hbase.zookeeper.quorum</name>
> >     <value>localhost</value>
> >   </property>
> >
> >   <property>
> >     <name>dfs.replication</name>
> >     <value>1</value>
> >     <description>The replication count for HLog and HFile storage. Should
> > not be greater than HDFS datanode count.
> >     </description>
> >   </property>
> >
> > (With the above conf, I ran hbase-start.sh, and realised that HMaster
> > did not function well -- I can't access localhost:60010.)
> >
> >
> > II. Here is the HMaster error log:
> >
> > 2011-05-28 23:22:55,292 WARN
> > org.apache.hadoop.hbase.master.AssignmentManager: Unable to find a viable
> > location to assign region -ROOT-,,0.70236052
> > 2011-05-28 23:23:35,291 INFO
> > org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition
> > timed out:  -ROOT-,,0.70236052 state=OFFLINE, ts=1306650175292
> > 2011-05-28 23:23:35,291 INFO
> > org.apache.hadoop.hbase.master.AssignmentManager: Region has been OFFLINE
> > for too long, reassigning -ROOT-,,0.70236052 to a random server
> > 2011-05-28 23:23:35,291 DEBUG
> > org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE;
> > was=-ROOT-,,0.70236052 state=OFFLINE, ts=1306650175292
> > 2011-05-28 23:23:35,291 DEBUG
> > org.apache.hadoop.hbase.master.AssignmentManager: Using pre-existing plan
> > for region -ROOT-,,0.70236052; plan=hri=-ROOT-,,0.70236052, src=,
> > dest=localhost,60020,1306648534687
> > 2011-05-28 23:23:35,291 DEBUG
> > org.apache.hadoop.hbase.master.AssignmentManager: Assigning region
> > -ROOT-,,0.70236052 to localhost,60020,1306648534687
> > 2011-05-28 23:23:35,291 DEBUG
> org.apache.hadoop.hbase.master.ServerManager:
> > New connection to localhost,60020,1306648534687
> > 2011-05-28 23:23:35,292 INFO org.apache.hadoop.ipc.HbaseRPC: Server at /
> > 127.0.0.1:60020 could not be reached after 1 tries, giving up.
> > 2011-05-28 23:23:35,292 WARN
> > org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of
> > -ROOT-,,0.70236052 to serverName=localhost,60020,1306648534687,
> > load=(requests=0, regions=0, usedHeap=22, maxHeap=996), trying to assign
> > elsewhere instead; retry=0
> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting
> up
> > proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to /
> > 127.0.0.1:60020 after attempts=1
> >         at
> > org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:355)
> >         at
> >
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:954)
> >         at
> >
> org.apache.hadoop.hbase.master.ServerManager.getServerConnection(ServerManager.java:606)
> >         at
> >
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:541)
> >         at
> >
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:901)
> >         at
> >
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:730)
> >         at
> >
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:710)
> >         at
> >
> org.apache.hadoop.hbase.master.AssignmentManager$TimeoutMonitor.chore(AssignmentManager.java:1605)
> >         at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
> > Caused by: java.net.ConnectException: Connection refused
> >         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> >         at
> > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
> >         at
> >
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> >         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
> >         at
> >
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:328)
> >         at
> >
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:883)
> >         at
> > org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:750)
> >         at
> > org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
> >         at $Proxy6.getProtocolVersion(Unknown Source)
> >         at
> org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:419)
> >         at
> org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:393)
> >         at
> org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:444)
> >         at
> > org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:349)
> >         ... 8 more
> > 2011-05-28 23:23:35,292 WARN
> > org.apache.hadoop.hbase.master.AssignmentManager: Unable to find a viable
> > location to assign region -ROOT-,,0.70236052
> >
> >
> >
> > III. Here is the zk status from http://localhost:60010/zk.jsp
> >
> > HBase is rooted at /hbase
> > Master address: sean-PowerEdge:60000
> > Region server holding ROOT: null
> > Region servers:
> >  sean-PowerEdge:60020
> > Quorum Server Statistics:
> >  localhost:2181
> >   Zookeeper version: 3.3.2-CDH3B4--1, built on 02/21/2011 20:16 GMT
> >   Clients:
> >    /127.0.0.1:42221[0](queued=0,recved=1,sent=0)
> >    /127.0.0.1:44071[1](queued=0,recved=39,sent=44)
> >    /127.0.0.1:44078[1](queued=0,recved=23,sent=24)
> >    /127.0.0.1:44085[1](queued=0,recved=23,sent=23)
> >    /127.0.0.1:44077[1](queued=0,recved=19,sent=19)
> >
> >   Latency min/avg/max: 0/6/164
> >   Received: 105
> >   Sent: 110
> >   Outstanding: 0
> >   Zxid: 0x148
> >   Mode: standalone
> >   Node count: 12
> >
> >
> > What's the problem causing the above symptom?
> >
> > Thanks,
>
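
Regarding the question above about whether HDFS is fully functional: this is
roughly the check I run against it (a sketch only; the URI and path come from
the core-site.xml and hbase.rootdir quoted above, and the class name is made
up):

HdfsCheck.java (sketch) ==>

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCheck {
  public static void main(String[] args) throws Exception {
    // fs.default.name from core-site.xml: hdfs://localhost:9000
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"),
        new Configuration());
    // hbase.rootdir is hdfs://localhost:9000/hbase, so /hbase should exist
    System.out.println("/hbase exists: " + fs.exists(new Path("/hbase")));
    fs.close();
  }
}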



-- 
--Sean
