[jira] [Created] (YARN-7975) Add an optional arg to yarn cluster -list-node-labels to list all nodes collection partitioned by labels

2018-02-26 Thread Shen Yinjie (JIRA)
Shen Yinjie created YARN-7975:
-

 Summary: Add an optional arg to yarn cluster -list-node-labels to 
list all nodes collection partitioned by labels
 Key: YARN-7975
 URL: https://issues.apache.org/jira/browse/YARN-7975
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Shen Yinjie


We already have "yarn cluster -lnl" to print node label info, but that is not 
enough; we should also be able to list the collection of nodes partitioned by 
label, especially in a large cluster.

So I propose adding an optional argument "-nodes" to "yarn cluster -lnl" to 
achieve this.

e.g.

[yarn@docker1 ~]$ yarn cluster -lnl -nodes
Node Labels Num: 3
              Labels                                               Nodes
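The grouping behind such an option is just an inversion of the node-to-labels 
mapping that the RM already tracks. A minimal plain-Java sketch of that 
inversion (illustrative names only, not the actual YARN client API):

```java
import java.util.*;

public class LabelPartition {
    // Invert a node -> labels mapping into label -> nodes, which is
    // the per-label listing that "-lnl -nodes" would print.
    public static Map<String, Set<String>> byLabel(Map<String, Set<String>> nodeToLabels) {
        Map<String, Set<String>> out = new TreeMap<>();
        for (Map.Entry<String, Set<String>> e : nodeToLabels.entrySet()) {
            for (String label : e.getValue()) {
                out.computeIfAbsent(label, k -> new TreeSet<>()).add(e.getKey());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> n2l = new HashMap<>();
        n2l.put("docker1:45454", new HashSet<>(Arrays.asList("gpu")));
        n2l.put("docker2:45454", new HashSet<>(Arrays.asList("gpu", "ssd")));
        byLabel(n2l).forEach((label, nodes) ->
            System.out.println(label + " -> " + nodes));
    }
}
```

The real implementation would presumably reuse the RM's existing labels-to-nodes
mapping rather than recompute it client-side.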
 

[jira] [Created] (YARN-7974) Allow updating application tracking url after registration

2018-02-26 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-7974:
---

 Summary: Allow updating application tracking url after registration
 Key: YARN-7974
 URL: https://issues.apache.org/jira/browse/YARN-7974
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Jonathan Hung


Normally an application's tracking url is set on AM registration. We have a use 
case for updating the tracking url after registration (e.g. the UI is hosted on 
one of the containers).

We have added an {{updateTrackingUrl}} API to ApplicationClientProtocol.

We'll post the patch soon, assuming there are no issues with this.
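As a sketch of the intended semantics (class and method names here are 
illustrative, not the actual patch): today the tracking URL is recorded once at 
AM registration, and the proposed API simply lets the client overwrite it 
later:

```java
// Minimal sketch of the proposed semantics. The tracking URL is
// recorded at AM registration and may be overwritten afterwards.
// Names are illustrative, not the actual YARN classes.
public class AppReport {
    private String trackingUrl;

    // Today: set once when the AM registers.
    public void register(String url) {
        this.trackingUrl = url;
    }

    // Proposed: allow updating after registration.
    public void updateTrackingUrl(String url) {
        this.trackingUrl = url;
    }

    public String getTrackingUrl() {
        return trackingUrl;
    }

    public static void main(String[] args) {
        AppReport r = new AppReport();
        r.register("http://am-host:8080");
        // e.g. the UI turns out to be hosted on one of the containers
        r.updateTrackingUrl("http://container-3:9090");
        System.out.println(r.getTrackingUrl());
    }
}
```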



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[DISCUSS] 2.9+ stabilization branch

2018-02-26 Thread Subru Krishnan
Folks,

We (i.e. Microsoft) have started stabilization of 2.9 for our production
deployment. During planning, we realized that we need to backport 3.x
features to support GPUs (and more resource types like network IO) natively
as part of the upgrade. We'd like to share that work with the community.

Instead of stabilizing the base release and cherry-picking fixes back to
Apache, we want to work publicly and push fixes directly into
trunk/.../branch-2 for a stable 2.10.0 release. Our goal is to create a
bridge release for our production clusters to the 3.x series and to address
scalability problems in large clusters (N*10k nodes). As we find issues, we
will file JIRAs and track resolution of significant regressions/faults in
wiki. Moreover, LinkedIn also has committed plans for a production
deployment of the same branch. We welcome broad participation, particularly
since we'll be stabilizing relatively new features.

The exact list of YARN features we would like to backport is:

   - Support for Resource types [1][2]
   - Native support for GPUs [3]
   - Absolute Resource configuration in CapacityScheduler [4]


With regards to HDFS, we are currently looking mainly at fixes to Router-based
Federation and Windows-specific fixes, which should in any case flow in
normally.

Thoughts?

Thanks,
Subru/Arun

[1] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg27786.html
[2] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg28281.html
[3] https://issues.apache.org/jira/browse/YARN-6223
[4] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg28772.html


[VOTE] Merging branch HDFS-7240 to trunk

2018-02-26 Thread Jitendra Pandey
Dear folks,
   We would like to start a vote to merge HDFS-7240 branch into trunk. 
The context can be reviewed in the DISCUSSION thread, and in the jiras (See 
references below).
  
HDFS-7240 introduces Hadoop Distributed Storage Layer (HDSL), which is a 
distributed, replicated block layer.
The old HDFS namespace and NN can be connected to this new block layer as 
we have described in HDFS-10419.
We also introduce a key-value namespace called Ozone built on HDSL.
  
The code is in a separate module and is turned off by default. In a secure 
setup, HDSL and Ozone daemons cannot be started.

The detailed documentation is available at 
 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Distributed+Storage+Layer+and+Applications


I will start with my vote.
+1 (binding)


Discussion Thread:
  https://s.apache.org/7240-merge
  https://s.apache.org/4sfU

Jiras:
   https://issues.apache.org/jira/browse/HDFS-7240
   https://issues.apache.org/jira/browse/HDFS-10419
   https://issues.apache.org/jira/browse/HDFS-13074
   https://issues.apache.org/jira/browse/HDFS-13180

   
Thanks
jitendra





DISCUSSION THREAD SUMMARY :

On 2/13/18, 6:28 PM, "sanjay Radia"  wrote:

Sorry, the formatting got messed up by my email client. Here it is 
again.


Dear Hadoop Community Members,

   We had multiple community discussions, a few meetings in 
smaller groups and also jira discussions with respect to this thread. We 
express our gratitude for participation and valuable comments. 

The key questions raised were the following:
1) How do the new block storage layer and OzoneFS benefit HDFS? We 
were asked to chalk out a roadmap towards the goal of a scalable namenode 
working with the new storage layer.
2) We were asked to provide a security design.
3) There were questions around stability, given that Ozone brings in a 
large body of code.
4) Why can't they be separate projects forever, or be merged in 
when production ready?

We have responded to all the above questions with detailed 
explanations and answers on the jira as well as in the discussions. We believe 
that should sufficiently address the community's concerns. 

Please see the summary below:

1) The new code base benefits HDFS scaling, and a roadmap has 
been provided. 

Summary:
  - The new block storage layer addresses the scalability of the 
block layer. We have shown how the existing NN can be connected to the new 
block layer, and what the benefits are. We have shown two milestones; the 
first milestone is much simpler than the second while giving almost the same 
scaling benefits. Originally we had proposed only milestone 2, and the 
community felt that removing the FSN/BM lock was a fair amount of work and a 
simpler solution would be useful.
  - We provide a new K-V namespace called Ozone FS with 
FileSystem/FileContext plugins to allow users to use the new system. Note that 
Hive and Spark work very well on K-V namespaces in the cloud. This will 
facilitate stabilizing the new block layer. 
  - The new block layer has a new Netty-based protocol engine 
in the Datanode which, when stabilized, can be used by the old HDFS block 
layer. See details below on sharing of code.


2) Stability impact on the existing HDFS code base, and code 
separation. The new block layer and OzoneFS are in modules that are separate 
from the old HDFS code; currently there are no calls from HDFS into Ozone 
except for the DN starting the new block layer module if configured to do so. 
It does not add instability (the instability argument has been raised many 
times). Over time, as we share code, we will ensure that the old HDFS 
continues to remain stable. (For example, we plan to stabilize the new 
Netty-based protocol engine in the new block layer before sharing it with 
HDFS's old block layer.)


3) In the short and medium term, the new system and HDFS 
will be used side by side by users: side by side in the short term for 
testing, and side by side in the medium term for actual production use until 
the new system has feature parity with the old HDFS. During this time, sharing 
the DN daemon and admin functions between the two systems is operationally 
important: 

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-02-26 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/147/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs mvnsite unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Unreaped Processes :

   hadoop-hdfs:20 
   bkjournal:5 
   hadoop-yarn-server-resourcemanager:1 
   hadoop-yarn-client:4 
   hadoop-yarn-applications-distributedshell:1 
   hadoop-mapreduce-client-jobclient:9 
   hadoop-distcp:4 
   hadoop-extras:1 

Failed junit tests :

   hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap 
   hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes 
   hadoop.hdfs.web.TestFSMainOperationsWebHdfs 
   
hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestDockerContainerRuntime
 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing 
   hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.conf.TestNoDefaultsJobConf 
   hadoop.mapred.TestJobSysDirWithDFS 
   hadoop.tools.TestIntegration 
   hadoop.tools.TestDistCpViewFs 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 

Timed out junit tests :

   org.apache.hadoop.hdfs.TestLeaseRecovery2 
   org.apache.hadoop.hdfs.TestRead 
   org.apache.hadoop.security.TestPermission 
   org.apache.hadoop.hdfs.web.TestWebHdfsTokens 
   org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream 
   org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade 
   org.apache.hadoop.hdfs.TestFileAppendRestart 
   org.apache.hadoop.hdfs.TestReadWhileWriting 
   org.apache.hadoop.hdfs.security.TestDelegationToken 
   org.apache.hadoop.hdfs.TestDFSMkdirs 
   org.apache.hadoop.hdfs.TestDFSOutputStream 
   org.apache.hadoop.hdfs.web.TestWebHDFS 
   org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs 
   org.apache.hadoop.hdfs.web.TestWebHDFSXAttr 
   org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs 
   org.apache.hadoop.hdfs.TestDistributedFileSystem 
   org.apache.hadoop.hdfs.TestReplaceDatanodeFailureReplication 
   org.apache.hadoop.hdfs.TestDFSShell 
   org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperSpeculativeRead 
   
org.apache.hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher
 
   
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore 
   org.apache.hadoop.yarn.client.TestRMFailover 
   org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClientWithReservation 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMClient 
   
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell 
   org.apache.hadoop.fs.TestFileSystem 
   org.apache.hadoop.mapred.TestMiniMRClasspath 
   org.apache.hadoop.mapred.TestClusterMapReduceTestCase 
   org.apache.hadoop.mapred.TestMRIntermediateDataEncryption 
   org.apache.hadoop.mapred.TestMRTimelineEventHandling 
   org.apache.hadoop.mapred.join.TestDatamerge 
   org.apache.hadoop.mapred.TestReduceFetchFromPartialMem 
   org.apache.hadoop.mapred.TestLazyOutput 
   org.apache.hadoop.mapred.TestReduceFetch 
   org.apache.hadoop.tools.TestDistCpWithAcls 
   org.apache.hadoop.tools.TestDistCpSync 
   org.apache.hadoop.tools.TestDistCpSyncReverseFromTarget 
   org.apache.hadoop.tools.TestDistCpSyncReverseFromSource 
   org.apache.hadoop.tools.TestCopyFiles 
  

   cc:

   

[jira] [Created] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-02-26 Thread Shane Kumpf (JIRA)
Shane Kumpf created YARN-7973:
-

 Summary: Support ContainerRelaunch for Docker containers
 Key: YARN-7973
 URL: https://issues.apache.org/jira/browse/YARN-7973
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Shane Kumpf


Prior to YARN-5366, {{container-executor}} would remove the Docker container 
when it exited. The removal is now handled by the 
{{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse the 
workdir from the previous attempt and does not call {{cleanupContainer}} prior 
to {{launchContainer}}. The container ID is reused as well. As a result, the 
previous Docker container still exists, resulting in an error from Docker 
indicating that a container by that name already exists.
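One plausible shape for a fix, sketched in plain Java (the set below stands in 
for the Docker daemon's container-name registry; all names are illustrative, 
not the actual runtime code): remove any leftover container with the same name 
before relaunching under the reused container ID:

```java
import java.util.HashSet;
import java.util.Set;

public class RelaunchSketch {
    // Stand-in for the Docker daemon's registry of container names.
    private final Set<String> existing = new HashSet<>();

    public void create(String name) {
        // Docker errors out when a container by that name already exists.
        if (!existing.add(name)) {
            throw new IllegalStateException("container " + name + " already exists");
        }
    }

    public void relaunch(String name) {
        existing.remove(name); // remove the previous attempt's container first
        create(name);          // then relaunch under the reused name
    }

    public static void main(String[] args) {
        RelaunchSketch d = new RelaunchSketch();
        d.create("container_e01_0001_01_000002");   // first attempt
        d.relaunch("container_e01_0001_01_000002"); // succeeds: stale one removed
        System.out.println("relaunched");
    }
}
```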






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-02-26 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/

No changes




-1 overall


The following subsystems voted -1:
findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 234] 

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   hadoop.yarn.client.api.impl.TestTimelineClientV2Impl 
   hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/diff-compile-javac-root.txt
  [280K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/whitespace-eol.txt
  [9.2M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/whitespace-tabs.txt
  [288K]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [320K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [48K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/704/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
  [8.0K]

Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org


[jira] [Created] (YARN-7972) Support inter-app placement constraints for allocation tags

2018-02-26 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7972:
-

 Summary: Support inter-app placement constraints for allocation 
tags
 Key: YARN-7972
 URL: https://issues.apache.org/jira/browse/YARN-7972
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Per discussion in [this 
comment|https://issues.apache.org/jira/browse/YARN-6599?focusedCommentId=16319662&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16319662]
 in YARN-6599, we need to support inter-app placement constraints (PC) for 
allocation tags.

This will help to do better placement when dealing with applications competing 
for resources, e.g. don't place two TensorFlow workers from two different 
applications on the same node.
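The key difference from intra-app constraints is the scope of the tag lookup: 
an inter-app anti-affinity check must consider tags placed by any application, 
not just the requesting one. A minimal plain-Java sketch (illustrative names, 
not the actual scheduler code):

```java
import java.util.*;

public class AntiAffinitySketch {
    // node -> allocation tags currently present on that node,
    // aggregated across ALL applications (the inter-app part).
    private final Map<String, Set<String>> nodeTags = new HashMap<>();

    // Inter-app anti-affinity: allow placement only if no container on
    // the node already carries the tag, regardless of which app placed it.
    public boolean canPlace(String node, String tag) {
        return !nodeTags.getOrDefault(node, Collections.emptySet()).contains(tag);
    }

    public void place(String node, String tag) {
        if (!canPlace(node, tag)) {
            throw new IllegalStateException(tag + " already on " + node);
        }
        nodeTags.computeIfAbsent(node, k -> new HashSet<>()).add(tag);
    }

    public static void main(String[] args) {
        AntiAffinitySketch s = new AntiAffinitySketch();
        s.place("node1", "tf-worker");                        // app A's worker
        System.out.println(s.canPlace("node1", "tf-worker")); // app B: denied
        System.out.println(s.canPlace("node2", "tf-worker")); // app B: allowed
    }
}
```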






[jira] [Created] (YARN-7971) add COOKIE when pass through headers in WebAppProxyServlet

2018-02-26 Thread Fan Yunbo (JIRA)
Fan Yunbo created YARN-7971:
---

 Summary: add COOKIE when pass through headers in WebAppProxyServlet
 Key: YARN-7971
 URL: https://issues.apache.org/jira/browse/YARN-7971
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.6.4
Reporter: Fan Yunbo


I am using Spark on YARN and I have added some authentication filters in the 
Spark web server.

The filters need a query string added for authentication, like

https://RM:8088/proxy/application_xxx_xxx?user.name=xxx&q1=xxx...

The filters add cookies to the response headers when the web server responds 
to the request.

However, the query string needs to be added to the URL every time I access 
the web server, because the app proxy servlet in YARN doesn't pass the cookies 
through in the headers.
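The proxy already forwards a whitelist of request headers; the proposal is to 
add Cookie to that list. A plain-Java sketch of the idea (the header names 
shown are an illustrative subset, not the servlet's actual list):

```java
import java.util.*;

public class ProxyHeaderSketch {
    // Headers the proxy forwards to the AM. The first entries stand in
    // for the existing whitelist; "Cookie" is the proposed addition.
    private static final Set<String> PASS_THROUGH = new HashSet<>(Arrays.asList(
        "User-Agent", "Accept", "Accept-Encoding", // illustrative existing subset
        "Cookie"                                   // proposed addition
    ));

    // Copy only whitelisted headers onto the proxied request.
    public static Map<String, String> forward(Map<String, String> incoming) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : incoming.entrySet()) {
            if (PASS_THROUGH.contains(e.getKey())) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> req = new HashMap<>();
        req.put("Cookie", "hadoop.auth=token");
        req.put("Host", "rm:8088"); // not forwarded
        System.out.println(forward(req)); // {Cookie=hadoop.auth=token}
    }
}
```

With Cookie forwarded, the authentication filter can recognize the session and
the query string no longer has to be repeated on every request.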






[jira] [Created] (YARN-7970) Compatibility issue: throw RpcNoSuchMethodException when run mapreduce job

2018-02-26 Thread Jiandan Yang (JIRA)
Jiandan Yang  created YARN-7970:
---

 Summary: Compatibility issue: throw RpcNoSuchMethodException when 
run mapreduce job
 Key: YARN-7970
 URL: https://issues.apache.org/jira/browse/YARN-7970
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 3.0.0
Reporter: Jiandan Yang 


Running teragen with a Hadoop 3.1 client failed against an HDFS 2.8 server.
The reason for the failure is that HDFS 2.8 does not have setErasureCodingPolicy.
The detailed exception trace is:
```
2018-02-26 11:22:53,178 INFO mapreduce.JobSubmitter: Cleaning up the staging 
area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1518615699369_0006
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException):
 Unknown method setErasureCodingPolicy called on 
org.apache.hadoop.hdfs.protocol.ClientProtocol protocol.
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:436)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:846)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:789)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2457)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491)
at org.apache.hadoop.ipc.Client.call(Client.java:1437)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy11.setErasureCodingPolicy(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setErasureCodingPolicy(ClientNamenodeProtocolTranslatorPB.java:1583)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy12.setErasureCodingPolicy(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSClient.setErasureCodingPolicy(DFSClient.java:2678)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$63.doCall(DistributedFileSystem.java:2665)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$63.doCall(DistributedFileSystem.java:2662)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.setErasureCodingPolicy(DistributedFileSystem.java:2680)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.disableErasureCodingForPath(JobResourceUploader.java:882)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:174)
at 
org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:131)
at 
org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:102)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:197)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
at org.apache.hadoop.examples.terasort.TeraGen.run(TeraGen.java:304)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.examples.terasort.TeraGen.main(TeraGen.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
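A defensive client-side workaround for this kind of incompatibility, sketched 
in plain Java (the exception class and interface below are stand-ins for the 
real Hadoop types; this is not the actual fix): treat "method unknown" from a 
pre-EC server as "nothing to disable" instead of failing job submission:

```java
public class EcCompatSketch {
    // Stand-in for org.apache.hadoop.ipc.RpcNoSuchMethodException.
    static class RpcNoSuchMethodException extends RuntimeException {
        RpcNoSuchMethodException(String m) { super(m); }
    }

    // Stand-in for the relevant slice of the HDFS client API.
    interface Hdfs { void setErasureCodingPolicy(String path, String policy); }

    // A 2.8 NameNode does not know setErasureCodingPolicy.
    static final Hdfs OLD_SERVER = (path, policy) -> {
        throw new RpcNoSuchMethodException("Unknown method setErasureCodingPolicy");
    };

    // Defensive client: a server that lacks the method predates erasure
    // coding, so the staging dir cannot be erasure coded anyway.
    static void disableErasureCodingForPath(Hdfs fs, String path) {
        try {
            fs.setErasureCodingPolicy(path, null); // null = plain replication
        } catch (RpcNoSuchMethodException e) {
            // Pre-EC server: ignore and proceed with job submission.
        }
    }

    public static void main(String[] args) {
        disableErasureCodingForPath(OLD_SERVER, "/tmp/hadoop-yarn/staging");
        System.out.println("job submission can proceed");
    }
}
```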