Writing to HDFS with a columnar format like Parquet will always be faster
than writing to HBase. How about random access of a row? If you're not
doing point lookups and small range scans, you probably don't want to use
HBase (and Phoenix). HBase writes more information than is written when
using …
Any answer on this?
On Fri, Jan 12, 2018 at 10:38 AM, Flavio Pompermaier wrote:
> Hi to all,
> looking at the documentation (https://phoenix.apache.org/tuning_guide.html),
> in the writing section, there is the following sentence: "Phoenix uses
> commit() instead of …"
Hi to all,
I've tested a program that writes (UPSERTs) to Phoenix using executeBatch().
In the logs I see "Sent batch of 2 for SOMETABLE".
Is this correct? I fear that the batch is not actually executed as a batch
but statement by statement. The code within
PhoenixStatement.executeBatch() is:
for (i
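For context, the usual batched-upsert pattern through the Phoenix JDBC driver looks roughly like the sketch below. This is an illustration only, not tested against a live cluster: the ZooKeeper quorum, table name, and batch size are made-up placeholders, and it assumes the standard JDBC `addBatch()`/`executeBatch()` API with Phoenix's explicit `commit()` (auto-commit off), as the tuning guide describes.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PhoenixBatchUpsert {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string; replace with your ZooKeeper quorum.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zkhost:2181")) {
            // Phoenix buffers mutations client-side until commit(),
            // so turn off auto-commit and commit per batch.
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO SOMETABLE (ID, VAL) VALUES (?, ?)")) {
                int batchSize = 1000; // made-up value; tune for your workload
                for (int i = 0; i < 10_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.addBatch();
                    if (i % batchSize == 0) {
                        ps.executeBatch();
                        conn.commit(); // flush buffered mutations to HBase
                    }
                }
                ps.executeBatch();
                conn.commit(); // flush the final partial batch
            }
        }
    }
}
```

Whether `executeBatch()` sends the statements as a single round trip or loops over them one by one (as the quoted `for` loop in `PhoenixStatement.executeBatch()` suggests) is exactly the question raised above; either way, the mutations are only flushed to the region servers on `commit()`.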