The other important timeout is Phoenix-specific: phoenix.query.timeoutMs. Set it in the hbase-site.xml on the client side to the number of milliseconds you're willing to wait for the query to finish. I might be wrong, but I believe the hbase.rpc.timeout config parameter needs to be set in the hbase-site.xml on the server side (i.e. on each region server).
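As a sketch, the relevant hbase-site.xml entries might look like the following (the 600000 ms value is purely illustrative; tune it to your workload):

```xml
<!-- Client-side hbase-site.xml -->
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value> <!-- e.g. 10 minutes -->
</property>

<!-- Server-side hbase-site.xml (each region server, restart required) -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value>
</property>
```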
On Tue, Sep 15, 2015 at 6:29 PM, Ravi Kiran <maghamraviki...@gmail.com> wrote:

> Hi James,
>    You need to increase the value of hbase.rpc.timeout in hbase-site.xml
> on your client end.
> http://hbase.apache.org/book.html#trouble.client.lease.exception
>
> Ravi
>
> On Tue, Sep 15, 2015 at 12:56 PM, James Heather <james.heat...@mendeley.com> wrote:
>
>> I'm a bit lost as to what I need to change, and where I need to change
>> it, to bump up timeouts for this kind of error:
>>
>> Caused by: org.apache.phoenix.exception.PhoenixIOException:
>> org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
>>
>> Tue Sep 15 18:48:13 UTC 2015, null, java.net.SocketTimeoutException:
>> callTimeout=60000, callDuration=60304: row '�>' on table 'LOADTEST.TESTING'
>> at region=LOADTEST.TESTING,\x03\x00\x00\x00\x00\x00\x00\x00\x00,1442332822105.b6b3682074d6c65bd4efa3f1e2b58ffa.,
>> hostname=ip-172-31-31-177.ec2.chonp.net,60020,1442309899160, seqNum=2
>>
>>     at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
>>     at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:538)
>>     at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>>     at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>>     at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>>     at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>>     at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>>     at org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
>>     at org.apache.phoenix.iterate.LimitingResultIterator.next(LimitingResultIterator.java:47)
>>     at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:764)
>>     at org.skife.jdbi.v2.Query$4.munge(Query.java:176)
>>     at org.skife.jdbi.v2.QueryResultSetMunger.munge(QueryResultSetMunger.java:42)
>>     at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1340)
>>     ... 20 more
>>
>> Caused by: java.util.concurrent.ExecutionException:
>> org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
>>
>> Tue Sep 15 18:48:13 UTC 2015, null, java.net.SocketTimeoutException:
>> callTimeout=60000, callDuration=60304: row '�>' on table 'LOADTEST.TESTING'
>> at region=LOADTEST.TESTING,\x03\x00\x00\x00\x00\x00\x00\x00\x00,1442332822105.b6b3682074d6c65bd4efa3f1e2b58ffa.,
>> hostname=ip-172-31-31-177.ec2.chonp.net,60020,1442309899160, seqNum=2
>>
>>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>     at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>>     at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:534)
>>     ... 31 more
>>
>> Is this a client-side timeout, or do I need to change something
>> HBase-related on the server and restart the cluster? On master, or all
>> region servers?
>>
>> If it's a client-side thing, where (in JDBC terms) do I do this?
>>
>> I've tried various things, but I always hit this timeout, and it always
>> says the timeout is 60000 (ms, presumably).
>>
>> James
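On the "in JDBC terms" part of the question above: besides a client-side hbase-site.xml on the classpath, client settings can also be passed as java.util.Properties when opening the Phoenix JDBC connection. A minimal sketch, assuming a Phoenix driver on the classpath; the ZooKeeper host in the URL and the 600000 ms value are placeholders:

```java
import java.util.Properties;

public class PhoenixTimeoutExample {
    public static void main(String[] args) {
        // Client-side overrides, applied to connections created
        // with this Properties object.
        Properties props = new Properties();
        props.setProperty("phoenix.query.timeoutMs", "600000"); // e.g. 10 min
        props.setProperty("hbase.rpc.timeout", "600000");

        // The connection itself would be obtained like this; it needs a
        // running cluster, so it is only shown here, not executed:
        // Connection conn = DriverManager.getConnection(
        //         "jdbc:phoenix:zookeeper-host:2181", props);

        System.out.println(props.getProperty("phoenix.query.timeoutMs"));
    }
}
```

Note the exception above reports callTimeout=60000, which matches the default 60-second RPC timeout, so the override is evidently not reaching the layer doing the scan; checking which hbase-site.xml is actually on the client classpath is a good first step.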