wangxianbin created PHOENIX-999:
-----------------------------------

             Summary: SocketTimeoutException under high concurrent write access 
to phoenix indexed table
                 Key: PHOENIX-999
                 URL: https://issues.apache.org/jira/browse/PHOENIX-999
             Project: Phoenix
          Issue Type: Bug
    Affects Versions: 4.0.0
         Environment: HBase 0.98.1-SNAPSHOT, Hadoop 2.3.0-cdh5.0.0
            Reporter: wangxianbin
            Priority: Critical


We have a small HBase cluster with one master and six slaves. We are testing concurrent write performance against a Phoenix indexed table using four write clients; each client runs 100 threads, and each thread holds its own Phoenix JDBC connection. Under this load we hit the SocketTimeoutException below, and the failed index writes keep retrying for a very long time. How can we deal with this issue?
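For reference, here is a minimal sketch of what one of the four client processes looks like. Only the table name IPHOENIX10M (from the log) and the one-connection-per-thread layout come from the report; the ZooKeeper quorum, column names, row counts, and commit batch size are placeholders.

// Sketch of one write client: 100 threads, each with its own Phoenix JDBC connection.
// Column names, quorum, and batch sizes are hypothetical; only IPHOENIX10M is from the report.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;

public class PhoenixIndexWriteTest {
    private static final int THREADS = 100;                     // threads per client process
    private static final int ROWS_PER_THREAD = 100_000;         // assumed workload size
    private static final String URL = "jdbc:phoenix:zk-host";   // placeholder ZK quorum

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        for (int t = 0; t < THREADS; t++) {
            pool.submit(() -> {
                // Each thread holds its own Phoenix JDBC connection for the whole run.
                try (Connection conn = DriverManager.getConnection(URL)) {
                    conn.setAutoCommit(false);
                    PreparedStatement ps = conn.prepareStatement(
                            "UPSERT INTO IPHOENIX10M (ID, VAL) VALUES (?, ?)");
                    for (int i = 0; i < ROWS_PER_THREAD; i++) {
                        ps.setLong(1, ThreadLocalRandom.current().nextLong());
                        ps.setString(2, "v" + i);
                        ps.execute();
                        if (i % 1000 == 0) {
                            conn.commit();   // flush a batch of mutations to the region servers
                        }
                    }
                    conn.commit();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}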

2014-05-22 17:22:58,490 INFO  
[storm4.org,60020,1400750242045-index-writer--pool3-t10] client.AsyncProcess: 
#16016, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
tableName=IPHOENIX10M
2014-05-22 17:23:00,436 INFO  
[storm4.org,60020,1400750242045-index-writer--pool3-t6] client.AsyncProcess: 
#16027, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
tableName=IPHOENIX10M
2014-05-22 17:23:00,440 INFO  
[storm4.org,60020,1400750242045-index-writer--pool3-t1] client.AsyncProcess: 
#16013, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
tableName=IPHOENIX10M
2014-05-22 17:23:00,449 INFO  
[storm4.org,60020,1400750242045-index-writer--pool3-t7] client.AsyncProcess: 
#16028, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
tableName=IPHOENIX10M
2014-05-22 17:23:00,473 INFO  
[storm4.org,60020,1400750242045-index-writer--pool3-t8] client.AsyncProcess: 
#16020, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
tableName=IPHOENIX10M
2014-05-22 17:23:00,494 INFO  [htable-pool20-t13] client.AsyncProcess: #16016, 
table=IPHOENIX10M, attempt=12/350 failed 1 ops, last exception: 
java.net.SocketTimeoutException: Call to storm3.org/172.16.2.23:60020 failed 
because java.net.SocketTimeoutException: 2000 millis timeout while waiting for 
channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/172.16.2.24:52017 remote=storm3.org/172.16.2.23:60020] on 
storm3.org,60020,1400750242156, tracking started Thu May 22 17:21:32 CST 2014, 
retrying after 20189 ms, replay 1 ops.
2014-05-22 17:23:02,439 INFO  
[storm4.org,60020,1400750242045-index-writer--pool3-t4] client.AsyncProcess: 
#16022, waiting for some tasks to finish. Expected max=0, tasksSent=13, 
tasksDone=12, currentTasksDone=12, retries=11 hasError=false, 
tableName=IPHOENIX10M
2014-05-22 17:23:02,496 INFO  [htable-pool20-t3] client.AsyncProcess: #16013, 
table=IPHOENIX10M, attempt=12/350 failed 1 ops, last exception: 
java.net.SocketTimeoutException: Call to storm3.org/172.16.2.23:60020 failed 
because java.net.SocketTimeoutException: 2000 millis timeout while waiting for 
channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/172.16.2.24:52017 remote=storm3.org/172.16.2.23:60020] on 
storm3.org,60020,1400750242156, tracking started Thu May 22 17:21:32 CST 2014, 
retrying after 20001 ms, replay 1 ops.
2014-05-22 17:23:02,496 INFO  [htable-pool20-t16] client.AsyncProcess: #16028, 
table=IPHOENIX10M, attempt=12/350 failed 1 ops, last exception: 
java.net.SocketTimeoutException: Call to storm3.org/172.16.2.23:60020 failed 
because java.net.SocketTimeoutException: 2000 millis timeout while waiting for 
channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/172.16.2.24:52017 remote=storm3.org/172.16.2.23:60020] on 
storm3.org,60020,1400750242156, tracking started Thu May 22 17:21:37 CST 2014, 
retrying after 20095 ms, replay 1 ops.


