RE: anyone seen this weird "setXIncludeAware is not supported" error?

2016-07-29 Thread Benjamin Ross
First thought: check whether you're somehow pulling in a Xerces library that your 
version of Hadoop wasn't built against. Can you provide your pom file? Also, I 
would run mvn dependency:list and see if something looks off. You should probably 
paste its output for others on this list as well.
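To narrow it down, mvn dependency:tree -Dincludes=xerces:xercesImpl will show 
which artifact is dragging Xerces in. Once you find the culprit, an exclusion 
along the lines of this sketch usually clears it up (the hbase-client coordinates 
here are only an example, not taken from your pom):

    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.1.2</version>
        <exclusions>
            <!-- keep the old Xerces DocumentBuilderFactory off the classpath -->
            <exclusion>
                <groupId>xerces</groupId>
                <artifactId>xercesImpl</artifactId>
            </exclusion>
        </exclusions>
    </dependency>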

Ben





anyone seen this weird "setXIncludeAware is not supported" error?

2016-07-29 Thread Frank Luo
OK, this is driving me nuts.

I have a JUnit test case as simple as the one below:

    @Before
    public void setup() throws IOException {
        Job job = Job.getInstance();
        Configuration config = job.getConfiguration();

And I get an exception at Job.getInstance():

java.lang.UnsupportedOperationException: setXIncludeAware is not supported on this JAXP implementation or earlier: class org.apache.xerces.jaxp.DocumentBuilderFactoryImpl
    at javax.xml.parsers.DocumentBuilderFactory.setXIncludeAware(DocumentBuilderFactory.java:614)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2523)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2492)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2405)
    at org.apache.hadoop.conf.Configuration.get(Configuration.java:981)
    at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2069)
    at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:447)
    at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:175)
    at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:156)
    at com.merkleinc.crkb.match.keymatching.KeyMatchingTest.setup(KeyMatchingTest.java:52)

What is strange is that the same code works on Windows but not on Linux. Even on 
Linux, only one class hits the problem; other classes run the exact same code 
fine. In the problematic class there are four test methods: one succeeds and 
three fail.

Has anyone had a similar experience?


I have Hadoop 2.7.1, Hive 1.2.1, and HBase 1.1.2, and I am running with Maven 
3.3.3 and/or 3.3.9.
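One way to see which JAXP implementation the JVM actually resolves, and which jar 
it comes from, is a small standalone probe along these lines (a diagnostic 
sketch, not code from my test suite):

    import java.security.CodeSource;
    import javax.xml.parsers.DocumentBuilderFactory;

    public class JaxpProbe {
        public static void main(String[] args) {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            // Fully qualified name of the resolved implementation, e.g.
            // org.apache.xerces.jaxp.DocumentBuilderFactoryImpl
            System.out.println(dbf.getClass().getName());
            // Jar the class was loaded from; null means the JDK's built-in copy
            CodeSource src = dbf.getClass().getProtectionDomain().getCodeSource();
            System.out.println(src == null ? "JDK built-in" : src.getLocation());
        }
    }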



Re: AM Container exits with code 2

2016-07-29 Thread Sunil Govind
Hi Rahul,
From the given log, I do not think YARN is killing the container due to a memory
issue; usage is under the limits. However, the full log was not shared, so you
should verify whether memory was still under the limit at the time the AM launch
failed.
Which application are you trying to run?
It would also help to have the application master container log; the *sysout* and
*syserr* of that launch will have more information.
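If log aggregation is enabled, you can pull all container logs for the
application, including the AM's stdout/stderr, with the standard YARN CLI (the
application id below is derived from the container id in your log):

    yarn logs -applicationId application_1469709900068_0002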

Thanks
Sunil



FSDataInputStream

2016-07-29 Thread Kristoffer Sjögren
Hi

We're seeing exceptions when closing an FSDataInputStream. I'm not sure
how to interpret the exception. Is there anything that can be done to
avoid it?

Cheers,
-Kristoffer

[2016-07-29 09:28:20,162] ERROR Error closing hdfs://hdpcluster/tmp/kafka-connect/logs/sting_actions_inscreen/83/log. (io.confluent.connect.hdfs.TopicPartitionWriter:328)
org.apache.kafka.connect.errors.ConnectException: Error closing hdfs://hdpcluster/tmp/kafka-connect/logs/sting_actions_inscreen/83/log
    at io.confluent.connect.hdfs.wal.FSWAL.close(FSWAL.java:156)
    at io.confluent.connect.hdfs.TopicPartitionWriter.close(TopicPartitionWriter.java:326)
    at io.confluent.connect.hdfs.DataWriter.close(DataWriter.java:296)
    at io.confluent.connect.hdfs.HdfsSinkTask.close(HdfsSinkTask.java:109)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:290)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:421)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.access$1100(WorkerSinkTask.java:54)
    at org.apache.kafka.connect.runtime.WorkerSinkTask$HandleRebalance.onPartitionsRevoked(WorkerSinkTask.java:465)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:283)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:212)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:305)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:222)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-141202528-10.3.138.26-1448020478061:blk_1098384937_24779008 does not exist or is not under Constructionnull
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6344)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6411)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:870)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:955)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy48.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:877)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy49.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1266)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:594)


Oozie distcp failed in secure cluster

2016-07-29 Thread kumar r
Hi,

I have configured hadoop-2.7.2 and oozie-4.2.0 with Kerberos security
enabled.

A DistCp Oozie action was submitted as a workflow job (a sketch of the action is
at the end of this message). When the Oozie launcher runs, I get the following
exception:


2016-07-29 12:39:04,394 ERROR [uber-SubtaskRunner] org.apache.hadoop.tools.DistCp: Exception encountered
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token can be issued only with kerberos or web authentication
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:6635)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:563)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:987)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

    at org.apache.hadoop.ipc.Client.call(Client.java:1475)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy14.getDelegationToken(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:933)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.getDelegationToken(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:1029)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:1542)
    at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:530)
    at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:508)
    at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2228)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
    at org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:121)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:183)
    at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.oozie.action.hadoop.DistcpMain.run(DistcpMain.java:64)
    at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47)
    at org.apache.oozie.action.hadoop.DistcpMain.main(DistcpMain.java:34)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:236)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:380)
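The action in the workflow is a plain DistCp action, roughly like the sketch
below (the action name and paths are illustrative, not from my actual job):

    <action name="distcp-copy">
        <distcp xmlns="uri:oozie:distcp-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <arg>${nameNode}/user/src</arg>
            <arg>${nameNode}/user/dest</arg>
        </distcp>
        <ok to="end"/>
        <error to="fail"/>
    </action>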

AM Container exits with code 2

2016-07-29 Thread Rahul Chhiber
Hi all,

I have launched an application on a YARN cluster with the following configuration:
Master (Resource Manager) - 16 GB RAM + 8 vCPU
Slave 1 (Node Manager 1) - 8 GB RAM + 4 vCPU

Intermittently, the AM (2 GB, 1 core) exits with code 2 and the trace below. I am 
not able to find anything about exit code 2.

The last log line is:
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 22504 for container-id container_1469709900068_0002_01_01: 203.8 MB of 2 GB physical memory used; 2.8 GB of 4.2 GB virtual memory used

Does this have anything to do with my application logic, or is it possible that 
the container is killed for exceeding the memory limits?

2016-07-28 17:08:50,672 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1469709900068_0002_01_01 and exit code: 2
ExitCodeException exitCode=2:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 17:08:50,674 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from container-launch.
2016-07-28 17:08:50,674 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Container id: container_1469709900068_0002_01_01
2016-07-28 17:08:50,674 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exit code: 2
2016-07-28 17:08:50,674 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Stack trace: ExitCodeException exitCode=2:
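For reference, the 4.2 GB virtual limit in the ContainersMonitorImpl line above 
is the 2 GB container allocation multiplied by yarn.nodemanager.vmem-pmem-ratio, 
which defaults to 2.1. The NodeManager properties governing these memory checks 
are shown below with their default values (a reference sketch, not taken from 
this cluster's yarn-site.xml):

    <property>
        <!-- enforce the physical memory limit on containers -->
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>true</value>
    </property>
    <property>
        <!-- enforce the virtual memory limit on containers -->
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>true</value>
    </property>
    <property>
        <!-- virtual limit = physical allocation x 2.1 (2 GB x 2.1 = 4.2 GB) -->
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>2.1</value>
    </property>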

Thanks,
Rahul Chhiber