…that I'm missing quite a bit of data and I'm going to have to start over with this table…

From: Gabriel Reid [mailto:gabriel.r...@gmail.com]
Sent: Tuesday, June 23, 2015 2:57 AM
To: user@phoenix.apache.org
Subject: Re: How To Count Rows In Large Phoenix Table?
Hi Zack,

Would it be possible to provide a few more details on what kinds of failures you're getting, both with the CsvBulkLoadTool and with the SELECT COUNT(*) query?

About question #1, there aren't any known bugs (that I'm aware of) that would cause some records to go missing in the…
I had a very large Hive table that I needed in HBase.

After asking around, I came to the conclusion that my best bet was to:

1 - export the Hive table to a CSV 'file'/folder on HDFS
2 - use the org.apache.phoenix.mapreduce.CsvBulkLoadTool to import the data.

I found that if I tried to pass…
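The two steps above can be sketched as shell commands. This is a minimal sketch, not Zack's actual invocation: the table names, HDFS path, Phoenix client jar name, and ZooKeeper address are all placeholders, and it assumes a Hive version (0.11+) that supports ROW FORMAT in INSERT OVERWRITE DIRECTORY.

```shell
# Step 1 (sketch): export the Hive table as comma-delimited text on HDFS.
# MY_TABLE and /tmp/my_table_csv are hypothetical names.
hive -e "INSERT OVERWRITE DIRECTORY '/tmp/my_table_csv'
         ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
         SELECT * FROM my_table;"

# Step 2 (sketch): bulk-load the exported CSV into the Phoenix table.
# Replace <version> with your Phoenix release and zk-host with your quorum.
hadoop jar phoenix-<version>-client.jar \
    org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    --table MY_TABLE \
    --input /tmp/my_table_csv \
    --zookeeper zk-host:2181
```

The bulk-load tool runs a MapReduce job that writes HFiles directly, which is why it is usually preferred over row-at-a-time upserts for very large tables.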
On Mon, Jun 22, 2015 at 11:34 AM, Riesland, Zack <zack.riesl...@sensus.com> wrote:
For #2: You can use the RowCounter MapReduce job of HBase to count the rows of a large table. You don't need to write any code.

Here is a sample command to invoke it:

hbase org.apache.hadoop.hbase.mapreduce.RowCounter TABLE_NAME
~Anil
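For comparison, the two counting approaches discussed in the thread can be sketched side by side. This is a sketch only: MY_TABLE, the ZooKeeper address, and the temp-file path are hypothetical, and both commands assume a reachable HBase/Phoenix cluster.

```shell
# HBase-level count: a MapReduce job whose result appears in the
# job's counters (look for the ROWS counter in the job output).
hbase org.apache.hadoop.hbase.mapreduce.RowCounter MY_TABLE

# Phoenix-level count: the SELECT COUNT(*) Gabriel refers to,
# run non-interactively by passing a SQL file to sqlline.py.
echo "SELECT COUNT(*) FROM MY_TABLE;" > /tmp/count.sql
sqlline.py zk-host:2181 /tmp/count.sql
```

RowCounter scans HBase directly, so it counts raw HBase rows regardless of Phoenix metadata; a mismatch between the two numbers is one way to tell whether a bulk load silently dropped records.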
On Mon, Jun 22, 2015 at 12:08 PM, Ciureanu Constantin