Can you upgrade?  That release is > 18 months old.  A bunch has
happened in the meantime.

For retries exhausted, check what's going on on the remote regionserver
that you are trying to write to.  It's probably struggling and that's
why requests are not going through -- or the client missed the fact
that the region moved (all stuff that should be working better in latest
hbase).
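The ClosedByInterruptException lines in your traces are also worth a look: NIO channels are interruptible, so if something in the container (e.g. a request-timeout mechanism in Tomcat interrupting the worker thread) interrupts the thread while the HBase client is blocked on the RPC socket, the channel gets closed and every retry on that connection fails the same way.  A minimal sketch of that mechanism, using plain java.nio and nothing HBase-specific (the class and method names here are made up for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.Pipe;

public class InterruptDemo {

    // Blocks a thread on an NIO read, interrupts it, and reports what it saw.
    public static String demo() throws Exception {
        Pipe pipe = Pipe.open();       // empty pipe: read() will block,
                                       // like an RPC waiting on a slow regionserver
        final String[] seen = new String[1];

        Thread reader = new Thread(() -> {
            try {
                pipe.source().read(ByteBuffer.allocate(1)); // blocks: no data
            } catch (ClosedByInterruptException e) {
                // Interrupting a thread blocked on an interruptible channel
                // closes the channel and raises this exception.
                seen[0] = e.getClass().getSimpleName();
            } catch (Exception e) {
                seen[0] = "unexpected: " + e;
            }
        });
        reader.start();
        Thread.sleep(200);   // give the reader time to block in read()
        reader.interrupt();  // what a container-level interrupt would do
        reader.join();
        return seen[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());  // prints "ClosedByInterruptException"
    }
}
```

If that is what is happening, the regionserver may be healthy and the fix is on the client side -- find out what is interrupting the Tomcat request threads, or keep HTable calls off threads that get interrupted.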

St.Ack

On Tue, Jun 28, 2011 at 9:51 PM, Srikanth P. Shreenivas
<[email protected]> wrote:
> Hi,
>
> We are using HBase 0.20.3 (hbase-0.20-0.20.3-1.cloudera.noarch.rpm) cluster 
> in distributed mode with Hadoop 0.20.2 (hadoop-0.20-0.20.2+320-1.noarch).
> We are using pretty much the default configuration; the only thing we have 
> customized is that we have allocated 4 GB of RAM in 
> /etc/hbase-0.20/conf/hbase-env.sh.
>
> In our setup, we have a web application that reads a record from HBase and 
> writes a record as part of each web request.   The application is hosted in 
> Apache Tomcat 7 and is a stateless web application providing a REST-like web 
> service API.
>
> We are observing that our reads and writes time out once in a while.  This 
> happens more often for writes.
> We see the exceptions below in our application logs:
>
>
> Exception Type 1 - During Get:
> ---------------------------------------
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact 
> region server 10.1.68.36:60020 for region 
> employeedata,be8784ac8b57c45625a03d52be981b88097c2fdc,1308657957879, row 
> 'd51b74eb05e07f96cee0ec556f5d8d161e3281f3', but failed after 10 attempts.
> Exceptions:
> java.io.IOException: Call to /10.1.68.36:60020 failed on local exception: 
> java.nio.channels.ClosedByInterruptException
> java.nio.channels.ClosedByInterruptException
> java.nio.channels.ClosedByInterruptException
> java.nio.channels.ClosedByInterruptException
> java.nio.channels.ClosedByInterruptException
> java.nio.channels.ClosedByInterruptException
> java.nio.channels.ClosedByInterruptException
> java.nio.channels.ClosedByInterruptException
> java.nio.channels.ClosedByInterruptException
> java.nio.channels.ClosedByInterruptException
>
>        at 
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:1048)
>        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:417)
>     <snip>
>
> Exception Type 2 - During Put:
> ---------------------------------------------
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying 
> to contact region server 10.1.68.34:60020 for region 
> audittable,,1309183872019, row '2a012017120f80a801b28f5f66a83dc2a8882d1b', 
> but failed after 10 attempts.
> Exceptions:
> java.io.IOException: Call to /10.1.68.34:60020 failed on local exception: 
> java.nio.channels.ClosedByInterruptException
> java.io.IOException: Call to /10.1.68.34:60020 failed on local exception: 
> java.nio.channels.ClosedByInterruptException
> java.io.IOException: Call to /10.1.68.34:60020 failed on local exception: 
> java.nio.channels.ClosedByInterruptException
> java.io.IOException: Call to /10.1.68.34:60020 failed on local exception: 
> java.nio.channels.ClosedByInterruptException
> java.io.IOException: Call to /10.1.68.34:60020 failed on local exception: 
> java.nio.channels.ClosedByInterruptException
> java.io.IOException: Call to /10.1.68.34:60020 failed on local exception: 
> java.nio.channels.ClosedByInterruptException
> java.io.IOException: Call to /10.1.68.34:60020 failed on local exception: 
> java.nio.channels.ClosedByInterruptException
> java.io.IOException: Call to /10.1.68.34:60020 failed on local exception: 
> java.nio.channels.ClosedByInterruptException
> java.io.IOException: Call to /10.1.68.34:60020 failed on local exception: 
> java.nio.channels.ClosedByInterruptException
> java.io.IOException: Call to /10.1.68.34:60020 failed on local exception: 
> java.nio.channels.ClosedByInterruptException
>
>        at 
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:1048)
>        at 
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers$3.doCall(HConnectionManager.java:1239)
>        at 
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers$Batch.process(HConnectionManager.java:1161)
>        at 
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:1247)
>        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:609)
>        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:474)
>     <snip>
>
> Any inputs on why this is happening, or how to rectify it, will be of immense 
> help.
>
> Thanks,
> Srikanth
>
>
>
> Srikanth P Shreenivas | Principal Consultant | MindTree Ltd. | Global Village, 
> RVCE Post, Mysore Road, Bangalore-560 059, INDIA | Voice +91 80 26264000 / Fax 
> +91 80 2626 4100 | Mob: 9880141059 | email: [email protected] | 
> www.mindtree.com |
>
>
> ________________________________
>
> http://www.mindtree.com/email/disclaimer.html
>
