Hmm, my second problem is probably caused by the region servers running out of memory: two out of three region servers reached the memory limit of 4 GB.

So my settings are -Xmx4G and

  phoenix.query.maxServerCacheBytes = 512000000
  phoenix.query.maxGlobalMemoryPercentage = 50

I'll try lowering maxGlobalMemoryPercentage to 25, but I don't think it will help. Doesn't Phoenix cap memory usage on heavy queries? What is using the memory?
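For reference, a sketch of how those two settings would look as hbase-site.xml properties, with the lowered 25% value I'm about to try (exact semantics per the Phoenix tuning docs; treat the comments as my understanding, not gospel):

```xml
<!-- hbase-site.xml -->
<!-- Size cap (bytes) for the server cache used by hash joins etc. -->
<property>
  <name>phoenix.query.maxServerCacheBytes</name>
  <value>512000000</value>
</property>
<!-- Max percentage of the heap that Phoenix memory tracking will hand out -->
<property>
  <name>phoenix.query.maxGlobalMemoryPercentage</name>
  <value>25</value>
</property>
```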

On 28/09/15 15:49, Lukáš Lalinský wrote:
You need to set "hbase.rpc.timeout" to the same value as you have for "phoenix.query.timeoutMs".

It seems that in the pre-Apache version of Phoenix it was set automatically:

https://issues.apache.org/jira/browse/PHOENIX-269?focusedCommentId=14681924&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14681924
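Concretely, that would mean something like this in the client-side hbase-site.xml, reusing the 6000000 ms value from the original mail (the key point being that both values agree):

```xml
<!-- hbase-site.xml (client side) -->
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>6000000</value>
</property>
<!-- Must match phoenix.query.timeoutMs; per PHOENIX-269 it is no longer
     derived automatically in the Apache releases -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>6000000</value>
</property>
```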

Lukas


On Mon, Sep 28, 2015 at 4:41 PM, Konstantinos Kougios <[email protected]> wrote:

    I have a fairly big table on a not-so-fairly-powerful cluster, so
    it takes a lot of time for queries to respond. I don't mind that,
    but many queries time out:

    0: jdbc:phoenix:nn.lan> select count(*) from words;
    +------------------------------------------+
    |                 COUNT(1)                 |
    +------------------------------------------+
    java.lang.RuntimeException:
    org.apache.phoenix.exception.PhoenixIOException:
    org.apache.phoenix.exception.PhoenixIOException: Failed after
    attempts=1, exceptions:
    Mon Sep 28 15:32:25 BST 2015,
    RpcRetryingCaller{globalStartTime=1443450685716, pause=100,
    retries=1},
    org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed
    after attempts=1, exceptions:
    Mon Sep 28 15:32:25 BST 2015,
    RpcRetryingCaller{globalStartTime=1443450685716, pause=100,
    retries=1},
    org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed
    after attempts=1, exceptions:
    Mon Sep 28 15:32:25 BST 2015,
    RpcRetryingCaller{globalStartTime=1443450685716, pause=100,
    retries=1}, java.io.IOException: Call to d2.lan/192.168.0.30:16020
    failed on local exception:
    org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=842,
    waitTime=60001, operationTimeout=60000 expired.



        at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
        at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
        at sqlline.SqlLine.print(SqlLine.java:1653)
        at sqlline.Commands.execute(Commands.java:833)
        at sqlline.Commands.sql(Commands.java:732)
        at sqlline.SqlLine.dispatch(SqlLine.java:808)
        at sqlline.SqlLine.begin(SqlLine.java:681)
        at sqlline.SqlLine.start(SqlLine.java:398)
        at sqlline.SqlLine.main(SqlLine.java:292)
    0: jdbc:phoenix:nn.lan> Closing:
    org.apache.phoenix.jdbc.PhoenixConnection


    I did try some settings in hbase-site.xml, without luck:

      phoenix.query.timeoutMs = 6000000
      hbase.client.operation.timeout = 1200000
      hbase.client.backpressure.enabled = true
      hbase.client.retries.number = 1
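In hbase-site.xml property form, the settings I tried look like this; note that hbase.rpc.timeout is not among them, which would be consistent with the 60000 ms default visible in the stack trace above (operationTimeout=60000):

```xml
<!-- hbase-site.xml: settings tried, without luck -->
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>6000000</value>
</property>
<property>
  <name>hbase.client.operation.timeout</name>
  <value>1200000</value>
</property>
<property>
  <name>hbase.client.backpressure.enabled</name>
  <value>true</value>
</property>
<!-- Retries lowered to fail fast while debugging -->
<property>
  <name>hbase.client.retries.number</name>
  <value>1</value>
</property>
```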

    Any ideas how this can be fixed?

    It seems the problem is that the timeout for iterating through
    results is 60 seconds. I assume that if it doesn't get a result
    within that period, it times out. Since this is a count(*) query
    that returns only one row, after scanning the whole table, it does
    time out.

    Thanks


