This is the piece of code related to your stack trace:

              // check periodically to see if a system stop is requested
              if (Store.closeCheckInterval > 0) {
                bytesWritten += kv.getLength();
                if (bytesWritten > Store.closeCheckInterval) {
                  bytesWritten = 0;
                  isInterrupted(store, writer);
                }
              }
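For context, here is a minimal, self-contained sketch of the periodic interrupt-check pattern that snippet implements. The class, loop, and constant below are illustrative stand-ins, not HBase's actual compaction loop:

```java
import java.io.InterruptedIOException;

public class InterruptCheckDemo {
    // Illustrative stand-in for Store.closeCheckInterval (bytes between checks)
    static final long CHECK_INTERVAL = 1024;

    public static void main(String[] args) throws Exception {
        Thread compactor = new Thread(() -> {
            long bytesWritten = 0;
            try {
                while (true) {
                    bytesWritten += 128; // pretend we just wrote one KeyValue
                    if (bytesWritten > CHECK_INTERVAL) {
                        bytesWritten = 0;
                        // HBase's Compactor.isInterrupted() does essentially this check
                        if (Thread.currentThread().isInterrupted()) {
                            throw new InterruptedIOException(
                                "Aborting compaction because it was interrupted.");
                        }
                    }
                }
            } catch (InterruptedIOException e) {
                System.out.println("compaction interrupted: " + e.getMessage());
            }
        });
        compactor.start();
        Thread.sleep(200);     // let the "compaction" run for a bit
        compactor.interrupt(); // e.g. a region close requesting the stop
        compactor.join();
    }
}
```

The point of checking only every `closeCheckInterval` bytes is to keep the interrupt check off the per-KeyValue hot path while still reacting to a close request reasonably quickly.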


It seems something asked the region to close. Can you check the master log
at the same time to see if the balancer started? Your region was
compacting, and I think that if the balancer starts while a region is
compacting, it asks the compaction to stop.
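A quick way to check is sketched below. The master log path here is an assumption (CDH layouts vary), so adjust it for your install:

```shell
# Assumed master log location; adjust for your deployment.
LOG="${HBASE_MASTER_LOG:-/var/log/hbase/hbase-master.log}"
if [ -f "$LOG" ]; then
  # Balancer (or split) activity around 16:23 would explain the interrupt.
  grep -iE "balancer|split" "$LOG" | grep "2014-04-29 16:2"
else
  echo "master log not found at $LOG"
fi
```

If the balancer turns out to be the culprit, you can disable it temporarily from the hbase shell with `balance_switch false` while the bulk load runs, then re-enable it afterwards.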

This is most probably not what caused your writes to be stuck.

JM


2014-04-29 21:56 GMT-04:00 jingych <[email protected]>:

>  I found the compaction action in the region server log file:
>
> 2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820608, currentsize=13.9 M/14607128 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 2325ms, sequenceid=819260, compaction requested=true
> 2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
> 2014-04-29 16:23:25,374 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 4 file(s) in i of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. into tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp, seqid=819260, totalSize=120.0 M
> 2014-04-29 16:23:25,387 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426: CompoundBloomFilterWriter
> 2014-04-29 16:23:31,431 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. due to global heap pressure
> 2014-04-29 16:23:31,471 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b: CompoundBloomFilterWriter
> 2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b)
> 2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819276, memsize=41.8 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b
> 2014-04-29 16:23:35,116 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/i/18d58013c52f413497aca2ff1fd50c6b, entries=204648, sequenceid=819276, filesize=14.7 M
> 2014-04-29 16:23:35,131 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820792, currentsize=16.7 M/17528408 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 3700ms, sequenceid=819276, compaction requested=false
> 2014-04-29 16:23:35,278 INFO org.apache.hadoop.hbase.regionserver.SplitTransaction: Starting split of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
> 2014-04-29 16:23:36,622 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426)
>
> 2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426)
>
> 2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction interrupted
>
> java.io.InterruptedIOException: Aborting compaction of store i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. because it was interrupted.
>
> at org.apache.hadoop.hbase.regionserver.Compactor.isInterrupted(Compactor.java:230)
> at org.apache.hadoop.hbase.regionserver.Compactor.compact(Compactor.java:203)
> at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:1081)
> at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1336)
> at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:303)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
>
> 2014-04-29 16:23:36,627 INFO org.apache.hadoop.hbase.regionserver.HRegion: Running close preflush of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
>
> 2014-04-29 16:23:36,628 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: aborted compaction: regionName=gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b., storeName=i, fileCount=4, fileSize=120.0 M (75.9 M, 14.7 M, 14.7 M, 14.7 M), priority=3, time=2838185051945976; duration=11sec
>
> 2014-04-29 16:23:36,647 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6: CompoundBloomFilterWriter
>
> 2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6)
>
> 2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819286, memsize=25.1 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6
>
> Is the InterruptedIOException normal?
>
> Thanks!
>
> ------------------------------
>
> Jing Yucheng (井玉成)
>
> Basic Software Business Unit
>
> Neusoft Corporation
>
> Mobile: 13889491801
>
> Tel: 0411-84835702
>
> Room 217, Building D1, No. 901 Huangpu Road, Ganjingzi District, Dalian
>
> Postcode: 116085
>
> Email: [email protected]
>
>  *From:* jingych <[email protected]>
> *Date:* 2014-04-30 09:50
> *To:* Jean-Marc Spaggiari <[email protected]>; 
> user<[email protected]>
> *Subject:* Re: Re: Java Client Write Data blocked
>  Thanks JM.
>
> But the log is too big. How can I post the log file?
>
> Queries from HBase are slower too.
>
> The network is OK, I'm sure.
>
> Does GC have a log file? And how can I check swap usage?
>
> Sorry, I'm a rookie.
>
>
>
>
> jingych
>
> From: Jean-Marc Spaggiari
> Date: 2014-04-30 09:30
> To: user; jingych
> Subject: Re: Java Client Write Data blocked
> Any logs?
>
>
> Garbage collection on the server side? Network issue? Swap?
>
>
> Please share your master and region server logs so we can provide feedback.
>
> JM
>
>
>
>
> 2014-04-29 21:26 GMT-04:00 jingych <[email protected]>:
>
> Hi, All!
>
> I need help!
>
> I ran a Java client to write 3 million records into HBase, but after
> writing almost 1 million, the process blocked without any exception.
>
> Does anyone know a possible reason, so I can find a solution?
>
> Thanks All!
>
> By the way, the HBase version is 0.94.6-cdh4.5.0!
>
> Thanks again!
>
>
>
>
> jingych
>
> ---------------------------------------------------------------------------------------------------
>
> Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) is intended only for the use of the intended recipient and may be confidential and/or privileged of Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying is strictly prohibited, and may be unlawful. If you have received this communication in error, please immediately notify the sender by return e-mail, and delete the original message and all copies from your system. Thank you.
>
>
>
>
