Hey, Arthur:
Could you show me the error message for rm2, please?
Thanks
Xuan Gong
On Mon, Aug 11, 2014 at 10:17 PM, arthur.hk.c...@gmail.com wrote:
Hi
I am running Hadoop 2.4.1 with YARN HA enabled (two name nodes, NM1 and NM2).
When verifying ResourceManager failover, I used “kill -9” to terminate the
ResourceManager on name node 1 (NM1). If I then run the test job, it seems that
the ResourceManager failover keeps retrying NM1 and NM2
Hi,
If I have TWO nodes for ResourceManager HA, what should be the correct steps
and commands to start and stop the ResourceManagers in a ResourceManager HA cluster?
Unlike ./sbin/start-dfs.sh (which can start all NameNodes from one node), it seems that
./sbin/start-yarn.sh can only start YARN on one node at a time.
Hey, Arthur:
Did you use a single-node cluster or a multi-node cluster? Could you
share your configuration file (yarn-site.xml)? This looks like a
configuration issue.
Thanks
Xuan Gong
On Mon, Aug 11, 2014 at 9:45 AM, arthur.hk.c...@gmail.com wrote:
Hi,
It is a multi-node cluster with two master nodes (rm1 and rm2); below is my
yarn-site.xml.
At the moment, the ResourceManager HA works if:
1) on rm1, I run ./sbin/start-yarn.sh
yarn rmadmin -getServiceState rm1
active
yarn rmadmin -getServiceState rm2
14/08/12 07:47:59 INFO ipc.Client:
Some questions:
Q1) I need to start YARN on EACH master separately; is this normal? Is there
a way to run ./sbin/start-yarn.sh on rm1 alone and have the
STANDBY ResourceManager on rm2 started as well?
No, you need to start each ResourceManager separately.
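To make that concrete, here is a minimal sketch of the start/verify sequence for an RM HA pair in Hadoop 2.4.x. The hostnames rm1/rm2 come from this thread; the HADOOP_HOME path and passwordless ssh between the masters are assumptions, and the script only echoes each command (a dry run) rather than executing it:

```shell
#!/bin/sh
# Dry-run sketch: in Hadoop 2.4.x, start-yarn.sh starts the ResourceManager
# only on the local node, so each RM in an HA pair is started separately
# with yarn-daemon.sh. Drop the "echo" prefix to actually execute.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}   # illustrative path

start_rm_ha() {
    # Start the RM on each master (run locally on rm1, via ssh for rm2):
    echo "rm1\$ $HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager"
    echo "rm1\$ ssh rm2 $HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager"
    # Verify which RM is active and which is standby:
    echo "rm1\$ yarn rmadmin -getServiceState rm1"
    echo "rm1\$ yarn rmadmin -getServiceState rm2"
    # To stop, the same command with "stop resourcemanager" on each node.
}
start_rm_ha
```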
Q2) How can I get alerts (e.g. by email) if the ACTIVE
Hi,
Thank you very much!
At the moment, if I run ./sbin/start-yarn.sh on rm1, the STANDBY
ResourceManager on rm2 is not started accordingly. Please advise what could be
wrong? Thanks
Regards
Arthur
On 12 Aug, 2014, at 1:13 pm, Xuan Gong xg...@hortonworks.com wrote:
Hi
I have set up Hadoop 2.4.1 with HDFS High Availability using the Quorum
Journal Manager.
I am verifying Automatic Failover: I manually used the “kill -9” command to disable
all running Hadoop services on the active node (NN-1), and I can see that the Standby
node (NN-2) now becomes ACTIVE.
You need additional settings to enable automatic failover for the ResourceManager.
http://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html
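As a rough guide, the page linked above describes yarn-site.xml settings along these lines for RM automatic failover in 2.4.x (a sketch only; the cluster-id, hostnames, and ZooKeeper quorum below are illustrative values, not taken from Arthur's configuration):

```xml
<!-- Sketch of RM HA auto-failover settings; values are illustrative. -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yarn-cluster</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>rm1</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
```

With automatic failover enabled, the RMs use ZooKeeper-based leader election, so no manual transitionToActive is needed after a failure.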
JobHistoryServer does not have an automatic failover feature.
Regards,
Akira
(2014/08/05 20:15), arthur.hk.c...@gmail.com wrote:
Hi
I