You're on 4.2.2, Sun? Have you overridden either phoenix.stats.guidepost.width or phoenix.stats.guidepost.per.region? These control the size of each parallel scan. I assume you've run a major compaction on the table at some point?
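For reference, a minimal sketch of how those properties could be set in the client-side hbase-site.xml — the values below are purely illustrative, not recommendations for your table:

```xml
<!-- hbase-site.xml (client side); values are illustrative only -->
<property>
  <!-- Target bytes of table data per guidepost; a smaller width
       yields more, smaller parallel scans per query -->
  <name>phoenix.stats.guidepost.width</name>
  <value>104857600</value> <!-- 100 MB -->
</property>
<property>
  <!-- Alternative knob: target number of guideposts per region.
       As I understand it, when this is set to a positive value it
       takes precedence over the width setting -->
  <name>phoenix.stats.guidepost.per.region</name>
  <value>10</value>
</property>
```

Note that guideposts are derived from the statistics collected for the table, which is why the major-compaction question matters: stats are (re)gathered at compaction time, so a table that has never been major-compacted may have stale or missing guideposts.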
Thanks,
James

On Wed, Jan 14, 2015 at 7:06 PM, [email protected] <[email protected]> wrote:
> Hi, all
>
> When counting on a large table, we got the following exception:
> org.apache.hadoop.hbase.ipc.RpcClient$CallTimeoutException: Call id=,
> waitTime=69714 rpcTimeout=60000
>
> How can that be resolved? The table size comes to 17.3G per hdfs dfs -du.
> The table has 90+ columns and only one column family, F. The compression
> codec is Snappy.
>
> Thanks,
> Sun.
>
> ________________________________
>
> CertusNet
