Hi Pedro,
I was querying the COUNT just as a first dumb query to test whether everything
was OK... and indeed I had to increase 4 timeouts in order to answer that query
without errors.
By the way, I think that the row count is something very useful to know about a
table and, IMHO, it should always be available.
Flavio, I get the same behaviour: a count(*) over 180M records needs a couple of
minutes to complete for a table with 10 regions and 4 region servers serving it.
Why are you evaluating robustness in terms of full scans? As Anil said, I
wouldn't expect a NoSQL database to run quick counts on hundreds of
millions or billions of rows.
I don’t think the HBase row_counter job is going to be faster than a
count(*) query. Both require a full table scan, so neither will be
particularly fast.
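For reference, the row counter mentioned above is a MapReduce job that ships with HBase. A sketch of how it is launched (MY_TABLE is a placeholder table name; this assumes the `hbase` launcher script is on the PATH of a cluster node):

```shell
# Runs a MapReduce full scan over the table; like count(*), it reads
# every region, so don't expect it to be fast on 170M+ rows.
hbase org.apache.hadoop.hbase.mapreduce.RowCounter MY_TABLE
```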
A couple of alternatives if you’re ok with an approximate count: 1) enable
stats collection (but you can leave off usage to parallelize
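For completeness, Phoenix statistics (guideposts) are collected or refreshed with an UPDATE STATISTICS statement; a sketch, assuming a table named MY_TABLE:

```sql
-- Collect/refresh guideposts for MY_TABLE so that subsequent queries
-- can be split into finer-grained parallel scans.
UPDATE STATISTICS MY_TABLE;
```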
Hi Anil,
Obviously I'm not using HBase just for the count query... Most of the time I do
INSERTs and selective queries; I was just trying to figure out whether my
HBase + Phoenix installation is robust enough to deal with a huge amount of
data.
On Thu, Feb 1, 2018 at 5:07 PM, anil gupta
I was able to make it work by changing the following params (both on the server
and client side, and restarting HBase), and now the query answers in about 6
minutes:
hbase.rpc.timeout (to 60)
phoenix.query.timeoutMs (to 60)
hbase.client.scanner.timeout.period (from 1m to 10m)
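For anyone else hitting this: these properties go into hbase-site.xml (on the server side, and on the client side for the Phoenix one). A sketch; the millisecond values below are illustrative, not the exact ones from this thread (which only says "to 60"):

```xml
<!-- hbase-site.xml: illustrative values, adjust to your cluster -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value><!-- ms -->
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value><!-- ms; read by the Phoenix client -->
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>600000</value><!-- ms (10 minutes) -->
</property>
```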
I have the same problem; even after I increased hbase.rpc.timeout the result is
the same. The difference is that I use 4.12.
On Thu, Feb 1, 2018 at 8:23 PM, Flavio Pompermaier
wrote:
Hi to all,
I'm trying to use the brand-new Phoenix 4.13.2-cdh5.11.2 over HBase and
everything was fine while the data was quite small (a few million rows). Since
I inserted 170M rows into my table I cannot get the row count anymore
(using SELECT COUNT(*)) because of timeout errors.