When I query a very large table,
I get the following error:
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
exceptions:

Fri May 13 16:15:29 CST 2016, null, java.net.SocketTimeoutException: 
callTimeout=60000, callDuration=60307: row '' on table 'IC_WHOLESALE_PRICE' at 
region=IC_WHOLESALE_PRICE,,1463121082822.e7e7cbd63c1831df75aee8842df5c7f6., 
hostname=hadoop09,60020,1463119404385, seqNum=2

        at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
        at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
        at sqlline.SqlLine.print(SqlLine.java:1653)
        at sqlline.Commands.execute(Commands.java:833)
        at sqlline.Commands.sql(Commands.java:732)
        at sqlline.SqlLine.dispatch(SqlLine.java:808)
        at sqlline.SqlLine.begin(SqlLine.java:681)
        at sqlline.SqlLine.start(SqlLine.java:398)
        at sqlline.SqlLine.main(SqlLine.java:292)



Is this a client-side timeout, or do I need to change an HBase setting 
on the server side and restart the cluster? If the latter, does it go on 
the master only, or on all region servers?
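In case the context helps: the `callTimeout=60000` in the trace matches the default 60-second HBase client RPC timeout, which suggests the limit being hit is on the client side. Below is a sketch of the `hbase-site.xml` properties that are commonly raised for long-running Phoenix scans; the values shown are illustrative, not recommendations, and the file would need to be on the client's classpath for the client-side settings to take effect:

```xml
<!-- hbase-site.xml on the Phoenix/HBase client classpath (illustrative values) -->
<configuration>
  <!-- Overall Phoenix query timeout, in milliseconds -->
  <property>
    <name>phoenix.query.timeoutMs</name>
    <value>600000</value>
  </property>
  <!-- HBase client RPC timeout (the 60000 ms default appears in the trace) -->
  <property>
    <name>hbase.rpc.timeout</name>
    <value>600000</value>
  </property>
  <!-- How long a scanner may go between calls before it is considered dead -->
  <property>
    <name>hbase.client.scanner.timeout.period</name>
    <value>600000</value>
  </property>
</configuration>
```

I'd still be interested in whether the server side needs matching changes.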
