Hello,
Using a portion of exported production data, we have attempted to add secondary
indexes in our testing environment. (This question is unrelated to my
previous question on the mailing list, which used a different dataset.)
The primary row-key in HBase is a 30-byte binary value: 1 Byte
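For context, secondary indexes in Phoenix are declared with CREATE INDEX. A minimal sketch follows; the table, column names, and types are placeholder assumptions, not the thread's actual schema:

```sql
-- Hypothetical table; a VARBINARY primary key stands in for the
-- 30-byte binary row key described above.
CREATE TABLE EXAMPLE_TABLE (
    PK  VARBINARY NOT NULL PRIMARY KEY,
    NAME VARCHAR,
    VAL  BIGINT
);

-- Global secondary index on NAME, covering VAL so that queries
-- filtering on NAME can be served from the index alone.
CREATE INDEX EXAMPLE_IDX ON EXAMPLE_TABLE (NAME) INCLUDE (VAL);
```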
Hi,
We are currently using Cloudera as a package manager for our Hadoop cluster,
with Phoenix 4.7.0 (CLABS_PHOENIX) and HBase 1.2.0-cdh5.7.6. Phoenix 4.7.0
appears to be the latest supported version
(http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/), even though
it's old.
The
Dear Phoenix Team/Community,
The column mapping is not working in the latest version of the Phoenix Hive
storage handler. The issue is outlined in the Jira below, and although it was
marked as resolved, it does not appear to be:
https://issues.apache.org/jira/browse/PHOENIX-3346
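For readers hitting the same thing: the mapping in question is the phoenix.column.mapping table property on a Hive table backed by the Phoenix storage handler. A sketch of the documented usage; the table names, columns, and ZooKeeper quorum are placeholders:

```sql
-- Hypothetical Hive table backed by an existing Phoenix table.
CREATE EXTERNAL TABLE hive_t (
    id   STRING,
    name STRING
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
    "phoenix.table.name"       = "PHOENIX_T",
    "phoenix.zookeeper.quorum" = "zk-host",
    "phoenix.rowkeys"          = "id",
    -- PHOENIX-3346 is about this property apparently being ignored:
    "phoenix.column.mapping"   = "id:ID,name:NAME"
);
```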
This happens
Please start by sharing the version of Phoenix that you're using.
Did you search Jira to see whether someone else has already reported
this issue?
On 7/23/19 4:24 PM, Alexander Batyrshin wrote:
Hello all,
Got this:
alter table TEST_TABLE SET APPEND_ONLY_SCHEMA=true;
Hi Chinmay,
Queries are of the select * from … where name=value type; they are not
complex and have no joins. From the profiler I see that lots of CPU time gets
consumed while instantiating PhoenixInputPartition.createPartitionReader().
Please check the profiler picture I have attached to know
Hi friend, have you fixed this problem?
I have a problem when I mimic SumAggregateFunction. I call mine
ZSumAggregateFunction, and I also added some classes such as ZDecimalAggregator
and ZSumAggregateParseNode, each modeled on its SumAggregateFunction counterpart.
So now, I can't find what place will