P.S. I meant to say normalizing rather than de-normalizing.
On 21/10/2016 10:36 AM, F21 wrote:
Hey all,
Normally, rather than de-normalizing my data, I prefer to have the data
duplicated in two tables. With transactions, it is quite simple to ensure
atomic updates to those two tables (especially for read-heavy apps). This
also makes things easier to query and avoids the memory limits of
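The dual-table write described above could be sketched as follows. This is a minimal illustration, not the poster's actual code: the ZooKeeper quorum in the JDBC URL and the table names (`MSG_BY_ID`, `MSG_BY_USER`) are assumptions, and Phoenix transactions only apply if the tables were created with `TRANSACTIONAL=true` and transactions are enabled in the cluster config.

```java
import java.sql.*;

public class AtomicDualWrite {
    // Assumed connection string; point it at your ZooKeeper quorum.
    static final String URL = "jdbc:phoenix:localhost:2181";

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL)) {
            conn.setAutoCommit(false); // group both writes into one transaction
            try (PreparedStatement byId = conn.prepareStatement(
                     "UPSERT INTO MSG_BY_ID (ID, BODY) VALUES (?, ?)");
                 PreparedStatement byUser = conn.prepareStatement(
                     "UPSERT INTO MSG_BY_USER (USER_ID, ID, BODY) VALUES (?, ?, ?)")) {
                // Same row written under two different primary keys.
                byId.setLong(1, 1L);
                byId.setString(2, "hello");
                byId.executeUpdate();

                byUser.setString(1, "f21");
                byUser.setLong(2, 1L);
                byUser.setString(3, "hello");
                byUser.executeUpdate();

                conn.commit(); // both rows become visible atomically
            } catch (SQLException e) {
                conn.rollback(); // on failure, neither table is updated
                throw e;
            }
        }
    }
}
```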
Hi,
How do I get the schema out of HBase using the Phoenix driver?
metadata.getSchemas() produces..
http://stackoverflow.com/questions/40123547/phoenixioexception-nomethodexception-deferredfileoutputstramstring-string-fi
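For reference, `getSchemas()` is the standard JDBC `DatabaseMetaData` call, and Phoenix also keeps its metadata in the `SYSTEM.CATALOG` table, which can be queried directly. A minimal sketch (the JDBC URL is an assumption; adjust the ZooKeeper quorum for your cluster):

```java
import java.sql.*;

public class ListSchemas {
    // Assumed connection string for a local Phoenix/HBase setup.
    static final String URL = "jdbc:phoenix:localhost:2181";

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL)) {
            // Standard JDBC metadata call; Phoenix implements DatabaseMetaData.
            DatabaseMetaData md = conn.getMetaData();
            try (ResultSet rs = md.getSchemas()) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_SCHEM"));
                }
            }
            // Alternative: query Phoenix's own catalog table directly.
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT DISTINCT TABLE_SCHEM FROM SYSTEM.CATALOG")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```

Note that tables created without a schema name show up with a null `TABLE_SCHEM`, which may be why the call appears to produce nothing.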
On Thu, Oct 20, 2016 at 11:31 AM, Sanooj Padmakumar wrote:
Hi James,
We are loading data from Phoenix tables into an in-memory database. Based on
the query, we find the number of Phoenix input splits (similar to what
happens inside Phoenix MR) and load the data into the in-memory database
in parallel. So we are looking for ways to further parallelize the
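The split-based parallel load described above might be skeletonized as below. This is a sketch under assumptions: the per-split queries are hypothetical placeholders (real ones would come from the Phoenix query plan's split points, as the Phoenix MR integration does), and the loader is a pluggable callback so the threading structure can be exercised without a cluster.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Consumer;

public class ParallelSplitLoader {
    // Runs one loader task per split query and blocks until all finish.
    // In a real setup each query would be a Phoenix JDBC SELECT covering one
    // input split (e.g. one salt bucket or region key range).
    static void loadInParallel(List<String> splitQueries,
                               Consumer<String> loader,
                               int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<?>> futures = new ArrayList<>();
            for (String q : splitQueries) {
                futures.add(pool.submit(() -> loader.accept(q)));
            }
            for (Future<?> f : futures) {
                try {
                    f.get(); // propagate any loader failure
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(e);
                } catch (ExecutionException e) {
                    throw new RuntimeException(e.getCause());
                }
            }
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        // Hypothetical per-split range queries for illustration only.
        List<String> splits = Arrays.asList(
            "SELECT * FROM T WHERE PK >= 'a' AND PK < 'm'",
            "SELECT * FROM T WHERE PK >= 'm'");
        Set<String> loaded = ConcurrentHashMap.newKeySet();
        loadInParallel(splits, loaded::add, 2);
        System.out.println("loaded " + loaded.size() + " splits");
    }
}
```

Raising the thread count per host only helps until the region servers saturate; beyond that, further parallelism usually has to come from adding more client hosts and partitioning the splits among them.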