The SQuirreL SQL client probably connects to the host server if your Phoenix and 
HBase are deployed in distributed mode. So I think it should work to configure 
your Phoenix server by modifying hbase-site.xml and then reconnect Phoenix 
through SQuirreL. 
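
For reference, reconnecting from SQuirreL would use the usual Phoenix JDBC URL 
pointing at your ZooKeeper quorum, something like the line below (the host and 
port are placeholders for your own setup):

    jdbc:phoenix:your-zookeeper-host:2181

As far as I know, SQuirreL also needs the updated hbase-site.xml (alongside the 
Phoenix client jar) on its classpath for the new setting to take effect.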

Thanks,
Sun

From: Abe Weinograd
Date: 2014-10-09 00:15
To: user
Subject: Re: count on large table
Good point.  I have to figure out how to do that in a SQL tool like SQuirreL or 
Workbench.

Is there any obvious thing I can do to help tune this?  I know that's a loaded 
question.  My client scanner batches are 1000 (I also tried 10000 with no luck).

Thanks,
Abe

On Tue, Oct 7, 2014 at 9:09 PM, [email protected] <[email protected]> 
wrote:
Hi, Abe
Maybe setting the following property would help...
<property> 
    <name>phoenix.query.timeoutMs</name> 
    <value>3600000</value> 
</property>
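
If it helps, that snippet sits inside the usual <configuration> root of 
hbase-site.xml; a minimal sketch of the file (the value is just an example; 
3600000 ms is one hour):

<?xml version="1.0"?>
<configuration>
  <!-- Phoenix query timeout in milliseconds; 3600000 ms = 1 hour -->
  <property>
    <name>phoenix.query.timeoutMs</name>
    <value>3600000</value>
  </property>
</configuration>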

Thanks,
Sun

From: Abe Weinograd
Date: 2014-10-08 04:34
To: user
Subject: count on large table
I have a table with 1B rows.  I know this can be very specific to my 
environment, but just doing a SELECT COUNT(1) on the table never finishes.

We have a 10-node cluster with the RS heap size at 26 GiB, skewed towards 
the block cache.  In the RS logs, I see a lot of these:

2014-10-07 16:27:04,942 WARN org.apache.hadoop.ipc.RpcServer: 
(responseTooSlow): 
{"processingtimems":22770,"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","client":"10.10.0.10:44791","starttimems":1412713602172,"queuetimems":0,"class":"HRegionServer","responsesize":8,"method":"Scan"}

They stop eventually, but the query times out and the query tool reports: 
org.apache.phoenix.exception.PhoenixIOException: 187541ms passed since the last 
invocation, timeout is currently set to 60000

Any ideas of where I can start in order to figure this out?

Using Phoenix 4.1 on CDH 5.1 (HBase 0.98.1)

Thanks,
Abe
