Hi Shapoor,

Moving the conversation to the users list.

Have you solved your issue? Sorry you haven't gotten a response sooner -- I
think everyone is working overtime to get 0.96 released.

I'm assuming each put is independent of the others -- you're not putting
100 million times to the same row, are you? I'm also curious: did you
pre-split your table before starting all of those inserts?
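If the table wasn't pre-split, it starts as a single region, so every client
hammers one RegionServer until splits catch up. A rough sketch of computing
evenly spaced split boundaries (plain Java, no HBase dependencies; it assumes
hypothetical zero-padded keys like doc-id-00866604 -- note your keys appear
unpadded, so lexicographic splits would not be even) that you could then pass
to HBaseAdmin.createTable's splitKeys overload:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitKeys {
    // Generate (regions - 1) boundary keys dividing ids in
    // [0, maxId) into roughly equal, lexicographically ordered ranges.
    static List<String> splitKeys(long maxId, int regions) {
        List<String> keys = new ArrayList<>();
        for (int i = 1; i < regions; i++) {
            long boundary = maxId / regions * i;
            // Zero-pad so string order matches numeric order.
            keys.add(String.format("doc-id-%08d", boundary));
        }
        return keys;
    }

    public static void main(String[] args) {
        // 100,000,000 ids across 10 regions -> 9 boundaries
        for (String k : splitKeys(100_000_000L, 10)) {
            System.out.println(k);
        }
    }
}
```

With boundaries like these, the 100M puts spread across all RegionServers
from the start instead of queueing on one host.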

In the log you pasted, it looks like host kcs-testhadoop02 is the one that
times out. Can you reproduce the event and send us the RegionServer logs
from that machine, covering the time of the event? Please use a pastebin
service rather than pasting to the list directly.

Thanks,
Nick

On Tuesday, July 9, 2013, shapoor wrote:

> hello,
>
> I am doing a lot of saves in HBase: about 100,000,000 documents, each
> 100KB. Before I start the program there are almost 18 connections, after
> starting my cluster of 2 region servers and one master. The connections
> are to ZooKeeper, HDFS, and HBase. As the saving process runs, I
> repeatedly get more connections (I guess one for each flush) until I
> reach almost 80 connections, and that's when the following exception
> appears. HBase does manage to save the data somehow, but it is not
> efficient with so many connections. How do I solve this problem?
>
> regards, shapoor
>
> 13/07/09 13:31:48 WARN client.HConnectionManager$HConnectionImplementation:
> Failed all from
> region=table2,doc-id-866604,1373369430484.09001c90b3d2a4c20b56c35bb976ff91.,
> hostname=kcs-testhadoop02, port=60020
> java.util.concurrent.ExecutionException: java.net.SocketTimeoutException:
> Call to kcs-testhadoop02/192.168.111.211:60020 failed on socket timeout
> exception: java.net.SocketTimeoutException: 60000 millis timeout while
> waiting for channel to be ready for read. ch :
> java.nio.channels.SocketChannel[connected local=/192.168.111.72:55354
> remote=kcs-testhadoop02/192.168.111.211:60020]
>         at java.util.concurrent.FutureTask$Sync.innerGet(Unknown Source)
>         at java.util.concurrent.FutureTask.get(Unknown Source)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1598)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1450)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>         at at.knowcenter.backends.HBaseStorage.flush(HBaseStorage.java:324)
>         at at.knowcenter.evaltool.Evaluate.save(Evaluate.java:155)
>         at at.knowcenter.evaltool.Evaluate.performSaveEvaluation(Evaluate.java:100)
>         at at.knowcenter.evaltool.Evaluate.evaluate(Evaluate.java:77)
>         at at.knowcenter.evaltool.EvaluationTool.execute(EvaluationTool.java:144)
>         at at.knowcenter.evaltool.EvaluationTool.main(EvaluationTool.java:199)
> Caused by: java.net.SocketTimeoutException: Call to
> kcs-testhadoop02/192.168.111.211:60020 failed on socket timeout exception:
> java.net.SocketTimeoutException: 60000 millis timeout while waiting for
> channel to be ready for read. ch :
> java.nio.channels.SocketChannel[connected local=/192.168.111.72:55354
> remote=kcs-testhadoop02/192.168.111.211:60020]
>         at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:1026)
>         at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:999)
>         at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>         at com.sun.proxy.$Proxy6.multi(Unknown Source)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1427)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1425)
>         at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:215)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1434)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1422)
>         at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
>         at java.util.concurrent.FutureTask.run(Unknown Source)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>         at java.lang.Thread.run(Unknown Source)
> Caused by: java.net.SocketTimeoutException: 60000 millis timeout while
> waiting for channel to be ready for read. ch :
> java.nio.channels.SocketChannel[connected local=/192.168.111.72:55354
> remote=kcs-testhadoop02/192.168.111.211:60020]
>         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>         at java.io.FilterInputStream.read(Unknown Source)
>         at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:373)
>         at java.io.BufferedInputStream.fill(Unknown Source)
>         at java.io.BufferedInputStream.read(Unknown Source)
>         at java.io.DataInputStream.readInt(Unknown Source)
>         at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:646)
>         at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:580)
>
>
>
