Namenode shutdown due to long GC Pauses

2016-02-24 Thread Gokulakannan M (Engineering - Data Platform)
Hi,

It is known that the NameNode shuts down when a long GC pause occurs while the NN
is writing edits to the JournalNodes - the NameNode concludes that the JournalNodes
did not respond in time, when the delay was actually caused by its own GC pause.
Any pointers on solving this issue?
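
One direction I am considering (just a sketch, not yet verified on our cluster):
raise the QJM write timeout in hdfs-site.xml so a pause does not immediately blow
past it, and tune the NameNode GC in hadoop-env.sh. The heap size and pause-time
values below are only examples, not recommendations:

  <!-- hdfs-site.xml: allow longer pauses before a QJM write is declared failed -->
  <property>
    <name>dfs.qjournal.write-txns.timeout.ms</name>
    <value>60000</value>  <!-- default is 20000 ms -->
  </property>

  # hadoop-env.sh: example GC settings for the NameNode
  export HADOOP_NAMENODE_OPTS="-Xms8g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps ${HADOOP_NAMENODE_OPTS}"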


hadoop distcp and hbase ExportSnapshot hdfs replication factor question.

2016-02-24 Thread Mark Selby
I have a primary Hadoop cluster (2.6.0) running MapReduce and HBase. I 
am backing up to a remote data center that has many fewer machines but 
a higher per-disk density.


The default HDFS replication factor on the primary is 3.
The default HDFS replication factor on the backup is 2.

When I run distcp on the primary cluster, specifying the remote as the 
destination, and I do NOT pass the preserve-replication argument, I 
still get 3 replicas on the remote.


All my HBase snapshots that are copied from the primary to the backup 
also end up with HFiles that have a replication factor of 3.


As a test I ran distcp from the backup, pulling from the primary, and this 
did result in a replication factor of 2. However, I have many fewer resources 
on the backup and think it would be faster to drive the large copy from the 
cluster with the larger number of machines.


Also, I cannot pull HBase snapshots from the backup cluster; the 
ExportSnapshot utility does not support pulling.


Does anyone know if it is possible to distcp to another cluster that has 
a smaller default replication factor and have that factor take effect?
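
One workaround I am thinking of trying (a sketch only; the hostnames and paths
below are placeholders): override dfs.replication on the distcp command line so
the copy tasks create the target files with the backup cluster's factor, or lower
the replication after the fact with setrep:

  # push from the primary, asking the copy tasks to write 2 replicas on the target
  hadoop distcp -Ddfs.replication=2 hdfs://primary-nn:8020/data hdfs://backup-nn:8020/data

  # or, after the copy, reduce the replication of what already landed on the backup
  hdfs dfs -setrep -w 2 hdfs://backup-nn:8020/data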


Thanks!




[Query]: How to read null values in Spark 1.5.2

2016-02-24 Thread Divya Gehlot
Hi,
I have a data set (the source is a database) which has null values.
When I define a custom schema with any type other than StringType,
I get a NumberFormatException on the null values.
Has anybody come across this kind of scenario?
I would really appreciate it if you could share your resolution or workaround.
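
The workaround I am experimenting with in the meantime (a sketch for the
spark-shell, where sc and sqlContext are predefined; the column names and values
are made up) is to declare the problematic fields as StringType and cast them
afterwards, since a failed cast produces null instead of throwing:

  import org.apache.spark.sql.Row
  import org.apache.spark.sql.types._

  // In the real job this would be the data read from the source, with every
  // field declared as StringType, nullable = true.
  val schema = StructType(Seq(
    StructField("id", StringType, nullable = true),
    StructField("amount", StringType, nullable = true)))
  val rows = sc.parallelize(Seq(Row("1", "10.5"), Row(null, null)))
  val raw = sqlContext.createDataFrame(rows, schema)

  // Casting a string column turns unparseable values (including nulls) into null
  // instead of throwing a NumberFormatException.
  val typed = raw.select(
    raw("id").cast(IntegerType).as("id"),
    raw("amount").cast(DoubleType).as("amount"))

  typed.printSchema()
  typed.show()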

Thanks,
Divya


Re: MapReduce job doesn't make any progress for a very very long time after one Node become unusable.

2016-02-24 Thread Namikaze Minato
Hello.

I have seen very rare cases of map and/or reduce tasks stalling or progressing at
a very slow pace (0.1% in an hour). Just a piece of advice here: instead of
killing the whole job, kill only the slow/stuck task attempt; it will be restarted
and this time (hopefully) will not get stuck.
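
Something along these lines usually does it from the command line (the job and
attempt IDs below are placeholders):

  # list the attempt IDs of the running map tasks, then kill just the stuck attempt
  mapred job -list-attempt-ids job_1456300000000_0042 MAP running
  mapred job -kill-task attempt_1456300000000_0042_m_000017_0
  # a killed attempt (unlike a failed one) does not count against the task's max attempts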

Regards,
LLoyd

On 24 February 2016 at 08:56, Varun Saxena wrote:

> Hi Silnov,
>
>
>
> Can you check your AM logs and compare them with the MAPREDUCE-6513 scenario?
>
> I suspect it is the same issue.
>
> MAPREDUCE-6513 is marked to go into 2.7.3.
>
>
>
> Regards,
>
> Varun Saxena.
>
>
>
>
>
> From: Silnov [mailto:sil...@sina.com]
> Sent: 24 February 2016 14:52
> To: user
> Subject: MapReduce job doesn't make any progress for a very very long
> time after one Node become unusable.
>
>
>
> Hello everyone! I am new to Hadoop.
>
> I have one question and am seeking your help!
>
>
>
>
>
> I have some nodes running Hadoop 2.6.0.
> The cluster's configuration is largely left at the defaults.
> I run jobs on the cluster every day (in particular, some jobs that process a
> lot of data).
> Sometimes a job stays at the same progress value for a very, very long time,
> so I have to kill the job manually and re-submit it to the cluster.
> This worked well before (the re-submitted job would run to the end), but
> something went wrong today.
> After I re-submitted the same job 3 times, every run got stuck (the progress
> value stops changing for a long time, and each run stalls at a different
> value, e.g. 33.01%, 45.8%, 73.21%).
>
> I checked the Hadoop web UI and found 98 map tasks pending while the running
> reduce tasks had consumed all of the available memory. I stopped YARN, added
> the configuration below to yarn-site.xml, and restarted YARN:
>
>   <property>
>     <name>yarn.app.mapreduce.am.job.reduce.rampup.limit</name>
>     <value>0.1</value>
>   </property>
>   <property>
>     <name>yarn.app.mapreduce.am.job.reduce.preemption.limit</name>
>     <value>1.0</value>
>   </property>
>
> (the intention being that YARN would preempt the reduce tasks' resources to
> run the pending map tasks)
> After restarting YARN, I submitted the job with the property
> mapreduce.job.reduce.slowstart.completedmaps=1, but the same thing happened
> again!! (the job stays at the same progress value for a very, very long time)
>
> I checked the Hadoop web UI again and found that the pending map tasks are new
> attempts created with the note: "TaskAttempt killed because it ran on
> unusable node node02:21349".
> Then I checked the ResourceManager's log and found these messages:
> Deactivating Node node02:21349 as it is now LOST.
> node02:21349 Node Transitioned from RUNNING to LOST.
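>
> (To see which nodes the RM currently considers lost, I believe the following
> command can be used to confirm:)
>   yarn node -list -states LOST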
>
> I think this may have happened because the network across the cluster is not
> good, which caused the RM not to receive the NM's heartbeats in time.
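>
> (For reference, the RM-side setting that controls this is, as far as I know,
> the one below in yarn-site.xml; the value shown is just the default:)
>
>   <property>
>     <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
>     <!-- 600000 ms = 10 minutes by default; a larger value tolerates longer
>          heartbeat gaps before the node is marked LOST -->
>     <value>600000</value>
>   </property>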
>
> But I wonder why the YARN framework doesn't preempt the running reduce tasks'
> resources to run the pending map tasks? (This is what causes the job to stay
> at the same progress value for a very, very long time.)
>
> Can anyone help?
> Thank you very much!
>