[
https://issues.apache.org/jira/browse/HBASE-7668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562270#comment-13562270
]
张双福 commented on HBASE-7668:
----------------------------
From the log hbase-root-regionserver-yun1.c******ft.com.cn.log:
2 IOException, will wait for 5281.083434829379 msec.
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:351)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:113)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:266)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:197)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
at java.lang.Thread.run(Thread.java:662)
and:
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:644)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:437)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:577)
at java.io.DataInputStream.read(DataInputStream.java:132)
at java.io.DataInputStream.readFully(DataInputStream.java:178)
at java.io.DataInputStream.readFully(DataInputStream.java:152)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1781)
at org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1746)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1695)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1709)
at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:58)
at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:166)
at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:659)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:846)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:759)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:384)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:351)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:113)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:266)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:197)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
at java.lang.Thread.run(Thread.java:662)
and:
2013-01-25 09:53:14,261 DEBUG org.apache.hadoop.hbase.regionserver.SplitLogWorker: tasks arrived or departed
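
The second trace fails in DFSInputStream.chooseDataNode/blockSeekTo, which means the region server could not read the WAL blocks from any datanode, so HLogSplitter keeps hitting IOException and retrying ("will wait for ... msec"). A minimal check sketch from the OS shell, assuming the default /hbase root directory in HDFS (adjust the path to your hbase.rootdir):

  $ hadoop dfsadmin -report                        # how many datanodes are live and reporting blocks
  $ hadoop fsck /hbase -files -blocks -locations   # look for missing/corrupt blocks under the (assumed) HBase root

If fsck reports missing blocks under the .logs directories, log splitting cannot finish. Note also that hadoop dfsadmin -safemode leave only forces the namenode out of safemode; it does not recover missing blocks.
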
> I have a hbase problem
> ----------------------
>
> Key: HBASE-7668
> URL: https://issues.apache.org/jira/browse/HBASE-7668
> Project: HBase
> Issue Type: Bug
> Reporter: 张双福
>
> I have a problem that has had me stuck for weeks; can somebody help me?
> With full appreciation.
> The problem is below:
> I wrote about 1 GB of data to HBase, and then it stopped working.
> After I ran hadoop dfsadmin -safemode leave, HBase could run the "list"
> command, but when I use "count 'tableTest'", or get, put, and so on, it
> finally tells me the following:
> hbase(main):002:0> count 'zsfTest'
> ERROR: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: -ROOT-,,0
> at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2862)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1768)
> at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
> at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)
> Here is some help for this command:
> Count the number of rows in a table. This operation may take a LONG
> time (Run '$HADOOP_HOME/bin/hadoop jar hbase.jar rowcount' to run a
> counting mapreduce job). Current count is shown every 1000 rows by
> default. Count interval may be optionally specified. Scan caching
> is enabled on count scans by default. Default cache size is 10 rows.
> If your rows are small in size, you may want to increase this
> parameter. Examples:
> hbase> count 't1'
> hbase> count 't1', INTERVAL => 100000
> hbase> count 't1', CACHE => 1000
> hbase> count 't1', INTERVAL => 10, CACHE => 1000
> hbase(main):003:0>
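
The NotServingRegionException quoted above says that -ROOT-,,0 was never reassigned, which fits the failing WAL splitting in the log excerpts. A hedged way to confirm what the cluster is actually serving, using standard commands from the 0.92/0.94-era tooling:

  hbase> status 'detailed'   # lists the regions each region server is carrying; -ROOT-,,0 should appear on one of them

and from the OS shell:

  $ hbase hbck               # reports inconsistencies such as regions that are not deployed on any server

Until the log splitting above completes successfully, the master will typically keep -ROOT- (and therefore .META. and the user tables) offline, and shell commands like count, get, and put will keep failing with this exception.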