I have successfully deployed Phoenix and the Phoenix Query Server into a
toy HBase cluster.
I am currently running the HTTP query server on all regionservers,
but I think it would be much better if I could run the HTTP query
servers on separate docker containers or machines. This way, I can
I'm wondering if somebody can provide some guidance on how to use
CsvBulkLoadTool from within a Java class, instead of via the command line as
shown in the documentation. I'd like to determine if CsvBulkLoadTool ran
without throwing any exceptions. However, exceptions generated by
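Since CsvBulkLoadTool implements Hadoop's Tool interface, one approach is to drive it with ToolRunner and check the returned exit code. This is a sketch, assuming Phoenix and Hadoop are on the classpath; the table name and input path are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.mapreduce.CsvBulkLoadTool;

public class BulkLoadRunner {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Same arguments you would pass on the command line.
        String[] toolArgs = {
            "--table", "MY_TABLE",        // hypothetical table name
            "--input", "/data/input.csv"  // hypothetical HDFS path
        };
        int exitCode = ToolRunner.run(conf, new CsvBulkLoadTool(), toolArgs);
        if (exitCode != 0) {
            throw new IllegalStateException(
                "CsvBulkLoadTool failed with exit code " + exitCode);
        }
    }
}
```

Note that failures inside the MapReduce tasks generally surface as a non-zero exit code from the tool rather than as an exception thrown into your calling code, so checking the return value is usually more reliable than a try/catch alone.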
Hey Rafa,
So in terms of the hbase-site.xml, I just need the entries for the
location of the ZooKeeper quorum and the ZooKeeper znode for the cluster,
right?
Cheers!
On 17/12/2015 9:48 PM, rafa wrote:
Hi F21,
You can install the Query Server on any server that has a network connection
with your
think so. Copy the hbase-site.xml from the cluster onto the new Query
Server machine and add the directory where the XML resides to the classpath
of the Query Server. That should be enough.
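For reference, a minimal hbase-site.xml along those lines might look like this (the hostnames and znode value are placeholders — use your cluster's actual quorum and parent znode):

```xml
<configuration>
  <!-- ZooKeeper quorum of the HBase cluster (placeholder hostnames) -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <!-- Root znode the cluster registered under (placeholder value) -->
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
</configuration>
```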
Regards
rafa
On Thu, Dec 17, 2015 at 12:21 PM, F21 wrote:
> Hey Rafa,
>
> So in
I am trying to ingest a 575MB CSV file with 192,444 lines using the
CsvBulkLoadTool MapReduce job. When running this job, I find that I have to
boost the max Java heap space to 48GB (24GB fails with Java out of memory
errors).
I'm concerned about scaling issues. It seems like it shouldn't
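One thing worth checking is which JVM is actually running out of memory: the client JVM that submits the job, or the mapper/reducer JVMs. These are tuned separately; as an illustrative sketch (values and paths are placeholders, not recommendations):

```shell
# Heap for the client JVM that drives the job submission (illustrative value).
export HADOOP_CLIENT_OPTS="-Xmx8g"

# Launch the bulk load; table name and input path are placeholders.
hadoop jar phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  --table MY_TABLE --input /data/input.csv
```

Mapper and reducer heaps, by contrast, come from the MapReduce configuration (e.g. mapreduce.map.java.opts), so raising the client heap alone will not help if the tasks themselves are failing.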
Greetings,
I've been reading about Phoenix with an eye toward implementing a "versioned
database" on Hadoop. It looks pretty slick, especially the ability to query at
a past timestamp. But I can't figure out what happens with deleted records. Are
all versions deleted, or can I still go back
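For the point-in-time part of this, Phoenix exposes past-timestamp queries through the CurrentSCN connection property. A minimal sketch, assuming a Phoenix JDBC driver on the classpath; the ZooKeeper host, table name, and timestamp are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class TimeTravelQuery {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // View the data as of this epoch-millis timestamp (placeholder value).
        props.setProperty("CurrentSCN", Long.toString(1450000000000L));
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:phoenix:zk1.example.com", props); // placeholder quorum
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM MY_TABLE")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

Whether rows deleted after that timestamp remain visible depends on the underlying HBase column-family settings (e.g. KEEP_DELETED_CELLS and VERSIONS) and on whether a major compaction has already purged the old cells, so those settings are worth checking for a versioned-database use case.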