Hi,

The table definitions:

{NAME => 'lbc_zte_1', COPROCESSOR$1 => 
'/zsmar/zsmar.jar|com.zsmar.hbase.query.rowkey.RowKey3Endpoint|536870911', 
DEFERRED_LOG_FLUSH => 'true', MAX_FILESIZE => '0', FAMILIES => [{NAME => 'F', 
BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'true'}]}

{NAME => 'lbc_zte_1_imei_index', DEFERRED_LOG_FLUSH => 'true', MAX_FILESIZE => 
'107374182400', FAMILIES => [{NAME => 'F', BLOOMFILTER => 'ROW', IN_MEMORY => 
'true'}]}
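
For reference, here is a minimal sketch of how these two definitions could be created programmatically. This assumes a 0.92/0.94-era client API (the WritableRpcEngine frames in the stack trace below suggest that vintage); every name and value is taken from the definitions above, and 536870911 is Coprocessor.PRIORITY_USER:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.regionserver.StoreFile;

    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HTableDescriptor htd = new HTableDescriptor("lbc_zte_1");
    htd.setDeferredLogFlush(true);
    htd.setMaxFileSize(0L); // copied verbatim from the definition above
    htd.addCoprocessor("com.zsmar.hbase.query.rowkey.RowKey3Endpoint",
        new Path("/zsmar/zsmar.jar"), 536870911, null);

    HColumnDescriptor fam = new HColumnDescriptor("F");
    fam.setBloomFilterType(StoreFile.BloomType.ROW);
    fam.setMaxVersions(1);
    fam.setInMemory(true);
    htd.addFamily(fam);

    admin.createTable(htd); // or createTable(htd, splits) to pre-split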

I used HexStringSplit to pre-split the table:

The start-key/end-key pairs:

    -          051EB851
    051EB851   0A3D70A2
    0A3D70A2   0F5C28F3
    0F5C28F3   147AE144
    . . . . . .
    F5C28F30   FAE14781
    FAE14781   -

50 regions in total.
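
A minimal sketch of producing those 50 HexStringSplit boundaries from Java, assuming the stock org.apache.hadoop.hbase.util.RegionSplitter that ships with HBase; split(50) returns the 49 split points listed above:

    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.hbase.util.RegionSplitter;

    // HexStringSplit spaces the boundaries evenly over the
    // 8-hex-character key space 00000000..FFFFFFFF.
    RegionSplitter.HexStringSplit algo = new RegionSplitter.HexStringSplit();
    byte[][] splits = algo.split(50); // 49 boundaries for 50 regions
    for (byte[] s : splits) {
        System.out.println(Bytes.toString(s)); // 051EB851, 0A3D70A2, ...
    }
    // admin.createTable(htd, splits); // pre-split at creation time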

 

And the rowkeys look like this:

    0000175622867EBCF052CD83973A91C3000000
    0000175622867EBCF052CD83973A91C3235959
    . . . . . .
    00006C60F4538B20CF2248E5455D119E000000
    00006C60F4538B20CF2248E5455D119E235959
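
As a hedged aside: HBase places a row by plain lexicographic byte comparison against the split points, so sample keys shaped like these (which all start with "0000") sort before the first boundary 051EB851 and land in the very first region. A quick check:

    import org.apache.hadoop.hbase.util.Bytes;

    byte[] firstSplit = Bytes.toBytes("051EB851");
    String[] samples = {
        "0000175622867EBCF052CD83973A91C3000000",
        "00006C60F4538B20CF2248E5455D119E235959"
    };
    for (String k : samples) {
        // A negative result means the key sorts before the first split
        // point, i.e. it belongs to the first region (start key "-").
        System.out.println(k + " -> " + Bytes.compareTo(Bytes.toBytes(k), firstSplit));
    }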

 

The table's regions and region count after inserting data are shown below:

 

Table lbc_zte_1 regions: [image omitted]

Table lbc_zte_1 region count: [image omitted]
 

So the result is that the number of regions increases automatically, and the client throws exceptions:

 

2013-08-14 10:05:26,228 INFO  [pool-2-thread-7] put.PutTask (PutTask.java:doTask(225)) - Loaded so far: 45300000
2013-08-14 10:05:46,493 WARN  [pool-2-thread-11] client.HConnectionManager$HConnectionImplementation (HConnectionManager.java:processBatchCallback(1574)) - Failed all from region=lbc_zte_1_imei_index,3333332A,1376379199776.b11321169ec6cdd765486773640523e3., hostname=phd03.hadoop.audaque.com, port=60020
java.util.concurrent.ExecutionException: java.io.IOException: Call to phd03.hadoop.audaque.com/172.16.1.93:60020 failed on local exception: java.io.IOException: Connection reset by peer
         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1544)
         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1396)
         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:918)
         at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:770)
         at org.apache.hadoop.hbase.client.HTable.put(HTable.java:757)
         at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.put(HTablePool.java:399)
         at com.zsmulti.put.PutTask.processPut(PutTask.java:262)
         at com.zsmulti.put.PutTask.doTask(PutTask.java:212)
         at dev.toolkit.concurrent.Task2.run(Task2.java:61)
         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
         at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Call to phd03.hadoop.audaque.com/172.16.1.93:60020 failed on local exception: java.io.IOException: Connection reset by peer
         at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:1056)
         at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1025)
         at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
         at $Proxy7.multi(Unknown Source)
         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1373)
         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1371)
         at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:210)
         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1380)
         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1368)
         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
         ... 3 more
Caused by: java.io.IOException: Connection reset by peer
         at sun.nio.ch.FileDispatcher.read0(Native Method)
         at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
         at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
         at sun.nio.ch.IOUtil.read(IOUtil.java:171)
         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
         at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159)
         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.FilterInputStream.read(FilterInputStream.java:116)
         at java.io.FilterInputStream.read(FilterInputStream.java:116)
         at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:399)
         at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
         at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
         at java.io.DataInputStream.readInt(DataInputStream.java:370)
         at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:672)
         at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:606)

From: Jean-Marc Spaggiari [mailto:[email protected]]
Sent: Thursday, August 15, 2013 20:42
To: [email protected]
Subject: Re: regionserver died when using Put to insert data

 

Hi,

What is your key, and how have you split your regions?

2000 regions for 77 GB of data means 40 MB regions... The default region size is 10 GB. What's your table definition? There is something wrong here. You should have ended up with fewer than 15 regions.
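
(Working JM's numbers: 77 GB / 2000 regions ≈ 40 MB per region.) Note that the first table definition above sets MAX_FILESIZE => '0'. If the intent was the 10 GB default JM mentions, here is a hedged sketch of setting it explicitly with the same era's admin API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    byte[] name = Bytes.toBytes("lbc_zte_1");
    HTableDescriptor htd = admin.getTableDescriptor(name);
    htd.setMaxFileSize(10L * 1024 * 1024 * 1024); // 10 GB, the default JM cites
    admin.disableTable(name); // the table must be offline to modify
    admin.modifyTable(name, htd);
    admin.enableTable(name);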

Also, there is something you need to keep in mind. Here is an example.

Let's say your keys are something like this:

000ABCDE
000ABCDF
000ABCDG
000BBCDH
000BBCDI
000BBCDJ
000CBCDK
000CBCDE
000CBCDF
000DBCDG
000DBCDH
000DBCDI
000EBCDJ
000EBCDK

And so on. If you pre-split your table into, let's say, 256 regions,

starting from:
#00 to #01
#01 to #02
etc.
0 to 1
1 to 2
2 to 3
etc.
a to b
b to c
c to d
etc.

All your keys are going to end up in the same single region, even with the pre-split. So this region will have to split many times, while the others stay empty.
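
A tiny sketch of that point: with a 256-way split on the first two hex characters, every example key above starts with "00", so they all map to the same (first) region:

    // Region index = value of the key's first two hex characters (00..FF).
    String[] keys = {"000ABCDE", "000BBCDH", "000CBCDK", "000DBCDG", "000EBCDJ"};
    for (String k : keys) {
        int region = Integer.parseInt(k.substring(0, 2), 16);
        System.out.println(k + " -> region #" + region); // region #0 for every key
    }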

So, to summarize:
1) What is your key?
2) How have you split your regions?
3) What is your table definition?

JM

2013/8/15 <[email protected]>

Hi Jean-Marc,

 

I understand what you are saying, and I created the table with 50 empty regions first.

 

But while inserting data into HBase, the region count increases rapidly; it can grow beyond 2000 regions.

 

I have run this several times, and it happens the same way each time. And soon after that, the regionserver dies.

 

I don't know why the regions increase so rapidly. When I restart the regionserver and stop inserting data, the region count drops back to 50 over a period of time.

 

Best wishes.

 

 

 
