Hi,

Usually this is caused by having multiple jars in the sharelib containing
the same class.
To address this issue I would suggest:
1) set yarn.nodemanager.delete.debug-delay-sec to 3600 in yarn-site.xml.
This will cause localized folders to stay around for one hour after the
application is finished.
2) start the workflow and check the logs to find the local folder (it's
usually something in /yarn/application_id/attempt/container_id )
3) save all the jars from that folder and check for multiple versions of
the same library, or check their contents to see whether the same class
file appears in different jars.
If you find two or more versions of the same jar, you should decide which
one to keep.
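
To make step 3 less tedious, a small script can do the comparison for you.
This is just a sketch (assuming Python is available on the node; pass it the
jar paths you saved from the localized folder): it lists every .class entry
in each jar and reports entries that show up in more than one jar.

```python
import sys
import zipfile
from collections import defaultdict

def find_duplicate_classes(jar_paths):
    """Map each .class entry to the jars containing it and return
    only the entries that appear in more than one jar."""
    owners = defaultdict(list)
    for jar in jar_paths:
        # Jar files are plain zip archives, so zipfile can read them.
        with zipfile.ZipFile(jar) as zf:
            for name in zf.namelist():
                if name.endswith(".class"):
                    owners[name].append(jar)
    return {cls: jars for cls, jars in owners.items() if len(jars) > 1}

if __name__ == "__main__":
    # Usage: python find_dups.py /path/to/saved/jars/*.jar
    for cls, jars in sorted(find_duplicate_classes(sys.argv[1:]).items()):
        print(f"{cls} appears in: {', '.join(jars)}")
```

If it prints something like org/apache/hadoop/mapred/TaskLog.class in two
different jars, that's a strong hint you've found the conflicting pair.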
I hope this helps,
gp


On Tue, Oct 18, 2016 at 1:25 AM, Saurabh Malviya (samalviy) <
samal...@cisco.com> wrote:

> Hi,
>
> I am facing a similar problem in EMR, and no luck even with the below
> approach. Any suggestions?
>
> Env: EMR 4.7.2. The Oozie workflow is very simple and works in another
> environment.
>
>
>             Summary: Spark action failed with error starting MRAppMaster
>                  Key: OOZIE-2389
>                  URL: https://issues.apache.org/jira/browse/OOZIE-2389
>              Project: Oozie
>           Issue Type: Bug
>     Affects Versions: 4.2.0
>             Reporter: Hunt Tang
>             Priority: Blocker
>
>
> I used spark-examples-1.5.1-hadoop2.6.0.jar to generate a test spark
> action in Oozie, it succeeded sometimes, but in most of the times, it
> failed.
> I checked the Hadoop job history, and it said
> {quote}
> ERROR [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error
> starting MRAppMaster
> java.lang.NoSuchMethodError: org.apache.hadoop.mapred.
> TaskLog.createLogSyncer()Ljava/util/concurrent/ScheduledExecutorService;
>         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.<init>(
> MRAppMaster.java:244)
>         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.<init>(
> MRAppMaster.java:227)
>         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(
> MRAppMaster.java:1412)
> 2015-10-22 17:01:56,203 INFO [main] org.apache.hadoop.util.ExitUtil:
> Exiting with status 1
>
>
> Detailed log -------------
>
>
>
> 2016-10-17 23:23:06,652 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Created MRAppMaster for application appattempt_1476310567008_2495_000001
>
> 2016-10-17 23:23:06,881 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: mapreduce.job.end-
> notification.max.retry.interval;  Ignoring.
>
> 2016-10-17 23:23:06,892 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> yarn.nodemanager.local-dirs;  Ignoring.
>
> 2016-10-17 23:23:06,893 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;
> Ignoring.
>
> 2016-10-17 23:23:07,000 WARN [main] org.apache.hadoop.util.NativeCodeLoader:
> Unable to load native-hadoop library for your platform... using
> builtin-java classes where applicable
>
> 2016-10-17 23:23:07,011 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Executing with tokens:
>
> 2016-10-17 23:23:07,037 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.
> security.AMRMTokenIdentifier@ba54932)
>
> 2016-10-17 23:23:07,038 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Kind: RM_DELEGATION_TOKEN, Service: 10.0.1.106:8032, Ident: 0a 04 72 6f
> 6f 74 12 0e 6f 6f 7a 69 65 20 6d 72 20 74 6f 6b 65 6e 1a 05 6f 6f 7a 69 65
> 20 8e ac d1 a7 fd 2a 28 8e b4 83 c8 ff 2a 30 ff 30 38 07
>
> 2016-10-17 23:23:07,044 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> The specific max attempts: 2 for application: 2495. Attempt num: 1 is last
> retry: false
>
> 2016-10-17 23:23:07,187 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: mapreduce.job.end-
> notification.max.retry.interval;  Ignoring.
>
> 2016-10-17 23:23:07,192 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> yarn.nodemanager.local-dirs;  Ignoring.
>
> 2016-10-17 23:23:07,193 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;
> Ignoring.
>
> 2016-10-17 23:23:07,574 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> OutputCommitter set in config org.apache.hadoop.mapred.
> DirectFileOutputCommitter
>
> 2016-10-17 23:23:07,576 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> OutputCommitter is org.apache.hadoop.mapred.DirectFileOutputCommitter
>
> 2016-10-17 23:23:07,615 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for
> class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>
> 2016-10-17 23:23:07,616 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> JobEventDispatcher
>
> 2016-10-17 23:23:07,616 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> TaskEventDispatcher
>
> 2016-10-17 23:23:07,617 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class 
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> TaskAttemptEventDispatcher
>
> 2016-10-17 23:23:07,617 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType
> for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
>
> 2016-10-17 23:23:07,618 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class 
> org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> SpeculatorEventDispatcher
>
> 2016-10-17 23:23:07,618 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class org.apache.hadoop.mapreduce.
> v2.app.rm.ContainerAllocator$EventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
>
> 2016-10-17 23:23:07,619 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> ContainerLauncherRouter
>
> 2016-10-17 23:23:07,685 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class 
> org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> JobFinishEventHandler
>
> 2016-10-17 23:23:07,983 INFO [main] 
> org.apache.hadoop.metrics2.impl.MetricsConfig:
> loaded properties from hadoop-metrics2.properties
>
> 2016-10-17 23:23:07,993 INFO [main] com.amazon.ws.emr.hadoop.
> metrics2.sink.cloudwatch.CloudWatchSink: Initializing the CloudWatchSink
> for metrics.
>
> 2016-10-17 23:23:08,055 INFO [main] 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter:
> Sink cloudwatch started
>
> 2016-10-17 23:23:08,119 INFO [main] 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> Scheduled snapshot period at 300 second(s).
>
> 2016-10-17 23:23:08,120 INFO [main] 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> MRAppMaster metrics system started
>
> 2016-10-17 23:23:08,130 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> Adding job token for job_1476310567008_2495 to jobTokenSecretManager
>
> 2016-10-17 23:23:08,213 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> Uberizing job job_1476310567008_2495: 1m+0r tasks (0 input bytes) will run
> sequentially on single node.
>
> 2016-10-17 23:23:08,232 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> Input size for job job_1476310567008_2495 = 0. Number of splits = 1
>
> 2016-10-17 23:23:08,232 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> Number of reduces for job job_1476310567008_2495 = 0
>
> 2016-10-17 23:23:08,232 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1476310567008_2495Job Transitioned from NEW to INITED
>
> 2016-10-17 23:23:08,232 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> MRAppMaster uberizing job job_1476310567008_2495 in local container
> ("uber-AM") on node ip-10-0-1-143.ec2.internal:8041.
>
> 2016-10-17 23:23:08,259 INFO [Socket Reader #1 for port 45951]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45951
>
> 2016-10-17 23:23:08,275 INFO [main] org.apache.hadoop.yarn.
> factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol
> org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
>
> 2016-10-17 23:23:08,275 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>
> 2016-10-17 23:23:08,275 INFO [IPC Server listener on 45951]
> org.apache.hadoop.ipc.Server: IPC Server listener on 45951: starting
>
> 2016-10-17 23:23:08,276 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService:
> Instantiated MRClientService at ip-10-0-1-143/10.0.1.143:45951
>
> 2016-10-17 23:23:08,327 INFO [main] org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
>
> 2016-10-17 23:23:08,368 INFO [main] org.apache.hadoop.http.HttpServer:
> Added global filter 'safety' (class=org.apache.hadoop.http.
> HttpServer$QuotingInputFilter)
>
> 2016-10-17 23:23:08,371 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Error starting MRAppMaster
>
> java.lang.NoSuchMethodError: org.apache.hadoop.yarn.webapp.
> util.WebAppUtils.getProxyHostsAndPortsForAmFilter(Lorg/apache/hadoop/conf/
> Configuration;)Ljava/util/List;
>
>         at org.apache.hadoop.yarn.server.webproxy.amfilter.
> AmFilterInitializer.initFilter(AmFilterInitializer.java:40)
>
>         at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:272)
>
>         at org.apache.hadoop.yarn.webapp.WebApps$Builder$2.<init>(
> WebApps.java:222)
>
>         at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.
> java:219)
>
>         at org.apache.hadoop.mapreduce.v2.app.client.MRClientService.
> serviceStart(MRClientService.java:136)
>
>         at org.apache.hadoop.service.AbstractService.start(
> AbstractService.java:193)
>
>         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.
> serviceStart(MRAppMaster.java:1058)
>
>         at org.apache.hadoop.service.AbstractService.start(
> AbstractService.java:193)
>
>         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(
> MRAppMaster.java:1445)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>
>         at org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1491)
>
>         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.
> initAndStartAppMaster(MRAppMaster.java:1441)
>
>         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(
> MRAppMaster.java:1374)
>
> 2016-10-17 23:23:08,374 INFO [Thread-1] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> MRAppMaster received a signal. Signaling RMCommunicator and
> JobHistoryEventHandler.
>
> 2016-10-17 23:23:08,374 WARN [Thread-1] 
> org.apache.hadoop.util.ShutdownHookManager:
> ShutdownHook 'MRAppMasterShutdownHook' failed,
> java.lang.NullPointerException
>
> java.lang.NullPointerException
>
>         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> ContainerAllocatorRouter.setSignalled(MRAppMaster.java:827)
>
>         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> MRAppMasterShutdownHook.run(MRAppMaster.java:1395)
>
>         at org.apache.hadoop.util.ShutdownHookManager$1.run(
> ShutdownHookManager.java:54)
>
>
>


-- 
Peter Cseh
Software Engineer
<http://www.cloudera.com>
