Re: Loading via MapReduce, Not Moving HFiles to HBase

2016-09-29 Thread Gabriel Reid
Hi Ravi,

I see in your output that the final upload of created HFiles is failing due
to the number of HFiles created per region. I also just noticed that you're
supplying the hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily
config parameter.

Could you post the exact, complete command that you're using to run this
import?

Also, be aware that overriding the max hfiles per region setting is
probably not the best way to get around this -- the fact that you've got so
many HFiles per region probably indicates that you should have more
regions. See this discussion in an earlier thread[1] for more info.
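
The arithmetic behind that suggestion can be sketched as follows (a rough check, assuming the 176 HFiles mentioned in this thread and HBase's default limit of 32 HFiles per region per family; the variable names are illustrative):

```shell
# Back-of-the-envelope check: how many regions would keep the bulk load
# under HBase's default of 32 HFiles per region per family?
HFILES=176
MAX_PER_REGION=32
# Ceiling division: minimum number of regions needed.
REGIONS=$(( (HFILES + MAX_PER_REGION - 1) / MAX_PER_REGION ))
echo "minimum regions needed: $REGIONS"
```

So with these numbers the table would need at least 6 regions before the default limit stops being a problem, which is why pre-splitting (or salting) the table is usually a better fix than raising the limit.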

- Gabriel

1.
https://lists.apache.org/list.html?user@phoenix.apache.org:lte=3M:CsvBulkLoadTool%20with%20%7E75GB%20file

On Thu, Sep 29, 2016 at 5:16 PM, Ravi Kumar Bommada <braviku...@juniper.net>
wrote:

> Hi Gabriel,
>
> Please find the logs attached.
>
> R’s
>
> Ravi Kumar B


RE: Loading via MapReduce, Not Moving HFiles to HBase

2016-09-29 Thread Ravi Kumar Bommada
Hi Gabriel,

Please find the logs attached.

R’s
Ravi Kumar B

From: Gabriel Reid [mailto:gabriel.r...@gmail.com]
Sent: Wednesday, September 28, 2016 5:51 PM
To: user@phoenix.apache.org
Subject: Re: Loading via MapReduce, Not Moving HFiles to HBase

Hi Ravi,

It looks like those log file entries you posted are from a mapreduce task. 
Could you post the output of the command that you're using to start the actual 
job (i.e. console output of "hadoop jar ...").

- Gabriel


logs.docx
Description: logs.docx


Re: Loading via MapReduce, Not Moving HFiles to HBase

2016-09-28 Thread Gabriel Reid
Hi Ravi,

It looks like those log file entries you posted are from a mapreduce task.
Could you post the output of the command that you're using to start the
actual job (i.e. console output of "hadoop jar ...").

- Gabriel

On Wed, Sep 28, 2016 at 1:49 PM, Ravi Kumar Bommada
wrote:

> Hi All,
>
> I’m trying to load data via phoenix mapreduce referring to below screen:

Loading via MapReduce, Not Moving HFiles to HBase

2016-09-28 Thread Ravi Kumar Bommada
Hi All,

I'm trying to load data via Phoenix MapReduce, referring to the screen below:



HFiles are getting created; each HFile is about 300 MB in size and there are 176
such HFiles. But after that, the files are not moving to HBase, i.e. when I'm
querying HBase I'm not able to see the data. According to the logs below, the
data commit was successful.

Please suggest if I'm missing any configuration.

Provided:

Using property: -Dhbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily=1024

Last Few Logs:
2016-09-27 07:27:35,845 INFO [main] org.apache.hadoop.io.compress.CodecPool: 
Got brand-new decompressor [.snappy]
2016-09-27 07:27:35,846 INFO [main] org.apache.hadoop.io.compress.CodecPool: 
Got brand-new decompressor [.snappy]
2016-09-27 07:27:35,846 INFO [main] org.apache.hadoop.io.compress.CodecPool: 
Got brand-new decompressor [.snappy]
2016-09-27 07:27:35,846 INFO [main] org.apache.hadoop.mapred.Merger: Merging 64 
intermediate segments out of a total of 127
2016-09-27 07:28:21,238 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 64 segments left of total size: -40111574372 bytes
2016-09-27 07:30:24,933 INFO [main] org.apache.hadoop.mapred.Merger: Merging 
179 sorted segments
2016-09-27 07:30:24,965 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 0 segments left of total size: 4736 bytes
2016-09-27 07:30:24,967 INFO [main] org.apache.hadoop.mapred.Merger: Merging 
179 sorted segments
2016-09-27 07:30:24,999 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 0 segments left of total size: 4736 bytes
2016-09-27 07:30:25,000 INFO [main] org.apache.hadoop.mapred.Merger: Merging 
179 sorted segments
2016-09-27 07:30:25,033 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 0 segments left of total size: 4736 bytes
2016-09-27 07:30:25,035 INFO [main] org.apache.hadoop.mapred.Merger: Merging 
179 sorted segments
2016-09-27 07:30:25,068 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 0 segments left of total size: 4736 bytes
2016-09-27 07:30:25,723 INFO [main] org.apache.hadoop.mapred.Task: 
Task:attempt_1467713708066_29809_m_16_0 is done. And is in the process of 
committing
2016-09-27 07:30:25,788 INFO [main] org.apache.hadoop.mapred.Task: Task 
'attempt_1467713708066_29809_m_16_0' done.


Regards,

Ravi Kumar B
Mob: +91 9591144511





Recall: Loading via MapReduce, Not Moving HFiles to HBase

2016-09-28 Thread Ravi Kumar Bommada
Ravi Kumar Bommada would like to recall the message, "Loading via MapReduce, 
Not Moving HFiles to HBase".

Loading via MapReduce, Not Moving HFiles to HBase

2016-09-28 Thread Ravi Kumar Bommada
Hi All,

I'm trying to load data via Phoenix MapReduce, referring to the screen below:



HFiles are getting created; each HFile is about 300 MB in size and there are 176
such HFiles. But after that, the files are not moving to HBase, i.e. when I'm
querying HBase I'm not able to see the data. According to the logs below, the
data commit was successful.

Please suggest if I'm missing any configuration.

Command Executed:

HADOOP_CLASSPATH=/opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol-1.0.0-cdh5.5.1.jar:/opt/cloudera/parcels/CLABS_PHOENIX-4.5.2-1.clabs_phoenix1.2.0.p0.774/lib/phoenix/phoenix-1.2.0-client.jar:/opt/cloudera/parcels/CDH/lib/hbase/conf
 hadoop jar jtac-case-import-0.0.1-SNAPSHOT.jar 
net.juniper.cs.sr.bulkload.edge.index.InstallBaseIndexBulkLoadTool 
-Dhbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily=1024 --table 
 --input /tmp/ibase/details/chunks/07 --output 
/tmp/output/ibindexedgedeltaload10 --zookeeper 
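
(Not part of the original message, but for completeness: when a bulk-load job leaves HFiles sitting in the output directory without loading them, the standard HBase completebulkload step can usually be run by hand against that directory. A sketch, with a placeholder table name, shown as an echoed dry run -- drop the `echo` to actually run it on a cluster:)

```shell
# Illustrative only: manually load already-written HFiles into an HBase table.
# TABLE is a hypothetical placeholder; OUTPUT matches the --output dir above.
TABLE="MY_TABLE"
OUTPUT="/tmp/output/ibindexedgedeltaload10"
# Echoed as a dry run; remove "echo" to execute on a real cluster.
echo hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles "$OUTPUT" "$TABLE"
```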

Last Few Logs:
2016-09-27 07:27:35,845 INFO [main] org.apache.hadoop.io.compress.CodecPool: 
Got brand-new decompressor [.snappy]
2016-09-27 07:27:35,846 INFO [main] org.apache.hadoop.io.compress.CodecPool: 
Got brand-new decompressor [.snappy]
2016-09-27 07:27:35,846 INFO [main] org.apache.hadoop.io.compress.CodecPool: 
Got brand-new decompressor [.snappy]
2016-09-27 07:27:35,846 INFO [main] org.apache.hadoop.mapred.Merger: Merging 64 
intermediate segments out of a total of 127
2016-09-27 07:28:21,238 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 64 segments left of total size: -40111574372 bytes
2016-09-27 07:30:24,933 INFO [main] org.apache.hadoop.mapred.Merger: Merging 
179 sorted segments
2016-09-27 07:30:24,965 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 0 segments left of total size: 4736 bytes
2016-09-27 07:30:24,967 INFO [main] org.apache.hadoop.mapred.Merger: Merging 
179 sorted segments
2016-09-27 07:30:24,999 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 0 segments left of total size: 4736 bytes
2016-09-27 07:30:25,000 INFO [main] org.apache.hadoop.mapred.Merger: Merging 
179 sorted segments
2016-09-27 07:30:25,033 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 0 segments left of total size: 4736 bytes
2016-09-27 07:30:25,035 INFO [main] org.apache.hadoop.mapred.Merger: Merging 
179 sorted segments
2016-09-27 07:30:25,068 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
the last merge-pass, with 0 segments left of total size: 4736 bytes
2016-09-27 07:30:25,723 INFO [main] org.apache.hadoop.mapred.Task: 
Task:attempt_1467713708066_29809_m_16_0 is done. And is in the process of 
committing
2016-09-27 07:30:25,788 INFO [main] org.apache.hadoop.mapred.Task: Task 
'attempt_1467713708066_29809_m_16_0' done.


Regards,

Ravi Kumar B
Mob: +91 9591144511