Hi all,
Do you know of any integration approach to stream documents from Phoenix to
Solr in a similar way to what Lily HBase Indexer does?
Thanks!
That kind of message may happen when there were queries that utilize the
memory manager (usually joins and GROUP BY) and they timed out or
failed for some reason. So the message itself is hardly related to CPU
usage or GC.
But that may mean that your region servers are unable to handle
I don't think the HBase RowCounter job is going to be faster than a
count(*) query. Both require a full table scan, so neither will be
particularly fast.
A couple of alternatives if you’re ok with an approximate count: 1) enable
stats collection (but you can leave off usage to parallelize
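If stats collection is enabled, one way to get a rough count without a full scan is to read the guidepost statistics Phoenix keeps in its SYSTEM.STATS table. A sketch (the table name MY_TABLE is a placeholder; GUIDE_POSTS_ROW_COUNT is only populated in 4.x releases that collect row counts along with the guideposts, and its accuracy depends on how fresh the stats are):

```sql
-- Refresh the guidepost stats first if they may be stale
UPDATE STATISTICS MY_TABLE;

-- Approximate row count summed over the collected guideposts
-- (reads SYSTEM.STATS only, no scan of the data table)
SELECT SUM(GUIDE_POSTS_ROW_COUNT)
FROM SYSTEM.STATS
WHERE PHYSICAL_NAME = 'MY_TABLE';
```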
Hi Anil,
Obviously I'm not using HBase just for the count query. Most of the time I
do INSERTs and selective queries; I was just trying to figure out whether my
HBase + Phoenix installation is robust enough to deal with a huge amount of
data.
On Thu, Feb 1, 2018 at 5:07 PM, anil gupta
I was able to make it work by changing the following params (on both the
server and the client side, restarting HBase) and now the query answers in
about 6 minutes:
hbase.rpc.timeout (to 60)
phoenix.query.timeoutMs (to 60)
hbase.client.scanner.timeout.period (from 1m to 10m)
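For reference, these properties go in hbase-site.xml (and phoenix.query.timeoutMs must be visible on the client classpath as well). The values below are just an illustration of a 10-minute timeout expressed in milliseconds, not a recommendation; tune them to your workload:

```xml
<!-- hbase-site.xml: example 10-minute timeouts (600000 ms) -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value>
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>600000</value>
</property>
```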
I have the same problem; even after I increased hbase.rpc.timeout the
result is the same. The difference is that I use 4.12.
On Thu, Feb 1, 2018 at 8:23 PM, Flavio Pompermaier
wrote:
> Hi to all,
> I'm trying to use the brand-new Phoenix 4.13.2-cdh5.11.2 over HBase and
>
Hi to all,
I'm trying to use the brand-new Phoenix 4.13.2-cdh5.11.2 over HBase and
everything was fine while the data was quite small (a few million rows).
Since I inserted 170M rows into my table I cannot get the row count anymore
(using SELECT COUNT(*)) because of