[jira] [Created] (HADOOP-16026) Replace incorrect use of system property user.name

2019-01-02 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HADOOP-16026:
--

 Summary: Replace incorrect use of system property user.name
 Key: HADOOP-16026
 URL: https://issues.apache.org/jira/browse/HADOOP-16026
 Project: Hadoop Common
  Issue Type: Improvement
 Environment: Kerberized
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


This jira has been created to track the suggested changes for Hadoop Common as 
identified in HDFS-14176.

The following occurrences need to be corrected:
Common/PseudoAuthenticator L85
Common/FileSystem L2233
Common/AbstractFileSystem L451
Common/KMSWebApp L91
Common/SFTPConnectionPool L146
Common/SshFenceByTcpPort L239
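
A minimal sketch of the pattern at issue, assuming the fix follows the approach suggested in HDFS-14176 (the class name here is hypothetical): in a Kerberized cluster the JVM property {{user.name}} holds the OS login of the process, not the authenticated Kerberos principal, so the call sites above pick up the wrong identity.

```java
// Sketch of the anti-pattern this jira targets (assumption: the fix mirrors
// HDFS-14176). The JVM property user.name is the OS login of the process,
// not the authenticated Kerberos principal.
public class UserNameCheck {
    public static void main(String[] args) {
        // What the flagged call sites effectively do today:
        String osUser = System.getProperty("user.name");
        System.out.println("OS-level user.name = " + osUser);

        // What Hadoop code should use instead (hadoop-common API, shown as a
        // comment so this sketch compiles without Hadoop on the classpath):
        // String user = UserGroupInformation.getCurrentUser().getShortUserName();
    }
}
```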



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: getting Yetus and github to be friends

2019-01-02 Thread lqjacklee
Thanks Steve, I like the GitHub PR style. When should we start the
change?

On Thu, Jan 3, 2019 at 7:07 AM Steve Loughran 
wrote:

>
> The new gitbox repo apparently does 2 way linking from github: you can
> commit a PR there and it'll make its way back
>
> this could be really slick, and would be a big change to our review process.
>
> Before we can go near it though, we need to get Yetus doing its review &
> test of github PRs, which is not working right now
>
> What will it take to do that? And that means not "what does AW have to
> do", but "how can we help get this done?"
>
> -steve
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


getting Yetus and github to be friends

2019-01-02 Thread Steve Loughran


The new gitbox repo apparently does 2 way linking from github: you can commit a 
PR there and it'll make its way back

this could be really slick, and would be a big change to our review process. 

Before we can go near it though, we need to get Yetus doing its review & test 
of github PRs, which is not working right now

What will it take to do that? And that means not "what does AW have to do", but 
"how can we help get this done?"

-steve





[jira] [Resolved] (HADOOP-14493) YARN distributed shell application fails, when RM failed over or Restarts

2019-01-02 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14493.
-
Resolution: Invalid

> YARN distributed shell application fails, when RM failed over or Restarts
> -
>
> Key: HADOOP-14493
> URL: https://issues.apache.org/jira/browse/HADOOP-14493
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sathish
>Priority: Minor
>  Labels: distributedshell, yarn
>
> YARN Distributed shell application fails when doing RM failover or RM 
> restarts.
> Exception trace:
> 17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedAction 
> as:mapr (auth:SIMPLE) 
> from:org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
> 17/05/30 11:57:38 DEBUG security.UserGroupInformation: 
> PrivilegedActionException as:mapr (auth:SIMPLE) cause:java.io.IOException: 
> Invalid source or target
> 17/05/30 11:57:38 ERROR distributedshell.ApplicationMaster: Not able to add 
> suffix (.bat/.sh) to the shell script filename
> java.io.IOException: Invalid source or target
>   at com.mapr.fs.MapRFileSystem.rename(MapRFileSystem.java:1132)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1036)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1032)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1400(ApplicationMaster.java:167)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$LaunchContainerRunnable.run(ApplicationMaster.java:953)
>   at java.lang.Thread.run(Thread.java:748)
> The DS application tries to launch an additional container and fails to 
> rename Execscript.sh, because the script was already renamed by a previous 
> container at the same filesystem path.
> I will upload the logs and path details soon.
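
The failure quoted above comes from a second container attempting the same rename after a first one already moved the script. A hedged sketch of the defensive check, using plain java.nio rather than the ApplicationMaster's actual FileSystem code (class and path names are hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: tolerate a rename that a previous container already
// performed, the situation behind the "Invalid source or target" trace above.
public class RenameOnce {
    static boolean renameIfNeeded(Path src, Path dst) throws IOException {
        if (!Files.exists(src) && Files.exists(dst)) {
            // A previous worker already renamed the script; nothing to do.
            return false;
        }
        Files.move(src, dst); // still fails fast on genuinely bad paths
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("ds");
        Path src = dir.resolve("ExecScript");
        Path dst = dir.resolve("ExecScript.sh");
        Files.write(src, new byte[0]);
        System.out.println(renameIfNeeded(src, dst)); // prints true
        System.out.println(renameIfNeeded(src, dst)); // prints false
    }
}
```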






[jira] [Reopened] (HADOOP-14493) YARN distributed shell application fails, when RM failed over or Restarts

2019-01-02 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-14493:
-

> YARN distributed shell application fails, when RM failed over or Restarts
> -
>
> Key: HADOOP-14493
> URL: https://issues.apache.org/jira/browse/HADOOP-14493
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sathish
>Priority: Minor
>  Labels: distributedshell, yarn
>
> YARN Distributed shell application fails when doing RM failover or RM 
> restarts.
> Exception trace:
> 17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedAction 
> as:mapr (auth:SIMPLE) 
> from:org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
> 17/05/30 11:57:38 DEBUG security.UserGroupInformation: 
> PrivilegedActionException as:mapr (auth:SIMPLE) cause:java.io.IOException: 
> Invalid source or target
> 17/05/30 11:57:38 ERROR distributedshell.ApplicationMaster: Not able to add 
> suffix (.bat/.sh) to the shell script filename
> java.io.IOException: Invalid source or target
>   at com.mapr.fs.MapRFileSystem.rename(MapRFileSystem.java:1132)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1036)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1032)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1400(ApplicationMaster.java:167)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$LaunchContainerRunnable.run(ApplicationMaster.java:953)
>   at java.lang.Thread.run(Thread.java:748)
> The DS application tries to launch an additional container and fails to 
> rename Execscript.sh, because the script was already renamed by a previous 
> container at the same filesystem path.
> I will upload the logs and path details soon.






[jira] [Resolved] (HADOOP-16021) SequenceFile.createWriter appendIfExists codec cause NullPointerException

2019-01-02 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16021.
-
Resolution: Duplicate

> SequenceFile.createWriter appendIfExists codec cause NullPointerException
> -
>
> Key: HADOOP-16021
> URL: https://issues.apache.org/jira/browse/HADOOP-16021
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: windows10 or Linux-centos , hadoop2.7.3, jdk8
>Reporter: asin
>Priority: Major
>  Labels: bug
> Attachments: 055.png, 62.png, CompressionType.BLOCK-Not 
> supported-error log.txt, CompressionType.NONE-NullPointerException-error 
> log.txt
>
>
>  
> I want to append data to a file. When I use SequenceFile.appendIfExists, it 
> throws a NullPointerException at 
> org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1119).
> When I remove the 'appendIfExists' option it works, but it overwrites the old 
> file.
>  
> When I try CompressionType.RECORD or CompressionType.BLOCK, a "not 
> supported" exception is thrown.
>  
> {code:java}
> // my code
> SequenceFile.Writer writer = null; 
> writer = SequenceFile.createWriter(conf, 
> SequenceFile.Writer.file(path), 
> SequenceFile.Writer.keyClass(Text.class), 
> SequenceFile.Writer.valueClass(Text.class), 
> SequenceFile.Writer.appendIfExists(true) );
> {code}
>  
> {code:java}
> // all my code
> public class Writer1 implements VoidFunction<Iterator<Tuple2<String, String>>> {
> private static Configuration conf = new Configuration();
> private int MAX_LINE = 3; // little num,for test
> @Override
> public void call(Iterator<Tuple2<String, String>> iterator) throws 
> Exception {
> int partitionId = TaskContext.get().partitionId();
> int count = 0;
> SequenceFile.Writer writer = null;
> while (iterator.hasNext()) {
> Tuple2<String, String> tp = iterator.next();
> Path path = new Path("D:/tmp-doc/logs/logs.txt");
> if (writer == null)
> writer = SequenceFile.createWriter(conf, 
> SequenceFile.Writer.file(path),
> SequenceFile.Writer.keyClass(Text.class),
> SequenceFile.Writer.valueClass(Text.class),
> SequenceFile.Writer.appendIfExists(true)
> );
> writer.append(new Text(tp._1), new Text(tp._2));
> count++;
> if (count > MAX_LINE) {
> IOUtils.closeStream(writer);
> count = 0;
> writer = SequenceFile.createWriter(... // same as above
> }
> }
> if (count > 0) {
> IOUtils.closeStream(writer);
> }
> IOUtils.closeStream(writer);
> }
> }
> {code}
>  // above code call by below
> {code:java}
> import com.xxx.algo.hadoop.Writer1
> import com.xxx.algo.utils.Utils
> import kafka.serializer.StringDecoder
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.streaming.kafka.KafkaUtils
> import org.apache.spark.streaming.{Durations, StreamingContext}
> import org.apache.spark.{SparkConf, SparkContext}
> object KafkaSparkStreamingApp {
>   def main(args: Array[String]): Unit = {
> val kafka = "192.168.30.4:9092,192.168.30.5:9092,192.168.30.6:9092"
> val zk = "192.168.30.4:2181,192.168.30.5:2181,192.168.30.6:2181"
> val topics = Set("test.aries.collection.appevent.biz")
> val tag = "biz"
> val durationSeconds = 5000
> val conf = new SparkConf()
> conf.setAppName("user-log-consumer")
>   .set("spark.serializer","org.apache.spark.serializer.KryoSerializer")
>   .set("spark.kryo.registrationRequired", "true")
>   .set("spark.default.parallelism","2")
>   .set("spark.rdd.compress","true")
>   .setMaster("local[2]")
> val sc = new SparkContext(conf)
> val session = SparkSession.builder()
>   .config(conf)
>   .getOrCreate()
> val ssc = new StreamingContext(sc, 
> Durations.milliseconds(durationSeconds))
> val kafkaParams = Map[String, String](
>   "metadata.broker.list" -> kafka,
>   "bootstrap.servers" -> kafka,
>   "zookeeper.connect" -> zk,
>   "group.id" -> "recommend_stream_spark",
>   "key.serializer" -> 
> "org.apache.kafka.common.serialization.StringSerializer",
>   "key.deserializer" -> 
> "org.apache.kafka.common.serialization.StringDeserializer",
>   "value.deserializer" -> 
> "org.apache.kafka.common.serialization.StringDeserializer"
> )
> val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, 
> StringDecoder](
>   ssc,
>   kafkaParams,
>   topics
> )
> val timeFieldName = "log_time"
> stream.foreachRDD(rddMsg => {
>   rddMsg.map(msg => {
> val 

Re: [NOTICE] Move to gitbox

2019-01-02 Thread Elek, Marton
Thanks for the report, Ayush.

The bogus repository has been removed by INFRA:

https://issues.apache.org/jira/browse/INFRA-17526

And the cwiki page [1] has been updated to use the gitbox URL instead of
git.apache.org.

Marton

[1] https://cwiki.apache.org/confluence/display/HADOOP/Git+And+Hadoop
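
For anyone with an existing checkout, repointing the clone at gitbox is one command. A sketch (the remote name "origin" is an assumption about your local setup; the snippet demonstrates it in a throwaway repo so it is self-contained):

```shell
# Sketch: repoint an existing clone at the new gitbox remote. "origin" is
# an assumption about the remote name; verify with `git remote -v` first.
# (Demonstrated in a throwaway repo so the snippet stands alone.)
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git remote add origin https://git-wip-us.apache.org/repos/asf/hadoop.git
# The actual migration step:
git remote set-url origin https://gitbox.apache.org/repos/asf/hadoop.git
git remote -v
```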

On 1/2/19 8:56 AM, Ayush Saxena wrote:
> Hi Akira
> 
> I guess the mirror at git.apache.org/hadoop.git hasn’t been updated with the 
> new location.
> 
> It is still pointing to https://git-wip-us.apache.org/repos/asf/hadoop.git
> 
> The instructions for checking out the source still mention it, as:
> git clone git://git.apache.org/hadoop.git 
> 
> Or does this link need to be updated?
> 
> Can you give it a check?
> 
> -Ayush
> 
>> On 31-Dec-2018, at 3:33 AM, Akira Ajisaka  wrote:
>>
>> Hi all,
>>
>> The migration has been finished.
>> All the jenkins jobs under the hadoop view and ozone view were also
>> updated except the beam_PerformanceTests_* jobs.
>>
>> Thank you Elek and ASF infra team for your help!
>>
>> Regards,
>> Akira
>>
>> 2018年12月25日(火) 13:27 Akira Ajisaka :
>>>
>>> Hi all,
>>>
>>> The Apache Hadoop git repository will be migrated to gitbox at 9PM UTC
>>> in 30th December.
>>> After the migration, the old repository cannot be accessed. Please use
>>> the new repository for committing. The migration is pretty much atomic
>>> and it will take up to a few minutes.
>>>
>>> Old repository: https://git-wip-us.apache.org/repos/asf?p=hadoop.git
>>> New repository: https://gitbox.apache.org/repos/asf?p=hadoop.git
>>>
>>> The GitHub repository (https://github.com/apache/hadoop) is not affected.
>>>
>>> Elek will update the jenkins jobs and I'll update the source code and
>>> documentation as soon as the migration is finished.
>>>
>>> Discussion: 
>>> https://lists.apache.org/thread.html/8b37cd69191648f1163ee23e3498f33da1c44ac876c6225b429dc835@%3Ccommon-dev.hadoop.apache.org%3E
>>>
>>> JIRA:
>>> - https://issues.apache.org/jira/browse/HADOOP-16003
>>> - https://issues.apache.org/jira/browse/INFRA-17448
>>>
>>> Happy Holidays!
>>>
>>> -Akira
>>
>> -
>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>>




Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-02 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.ssl.TestSSLFactory 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup 
   hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId 
   hadoop.hdfs.server.mover.TestMover 
   
hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes 
   hadoop.hdfs.server.namenode.ha.TestHASafeMode 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   hadoop.hdfs.server.namenode.TestListCorruptFileBlocks 
   hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.namenode.TestReencryption 
   hadoop.yarn.client.cli.TestRMAdminCLI 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [48K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [168K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1004/artifact/out/patch-unit-hadoop-common-project_hadoop-registry.txt
  [12K]
   

[jira] [Resolved] (HADOOP-14493) YARN distributed shell application fails, when RM failed over or Restarts

2019-01-02 Thread Sathish (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sathish resolved HADOOP-14493.
--
Resolution: Fixed

The issue is specific to the MapR file system. Closing this, since the fix 
would require a change in that filesystem's behaviour.

> YARN distributed shell application fails, when RM failed over or Restarts
> -
>
> Key: HADOOP-14493
> URL: https://issues.apache.org/jira/browse/HADOOP-14493
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sathish
>Priority: Minor
>  Labels: distributedshell, yarn
>
> YARN Distributed shell application fails when doing RM failover or RM 
> restarts.
> Exception trace:
> 17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedAction 
> as:mapr (auth:SIMPLE) 
> from:org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
> 17/05/30 11:57:38 DEBUG security.UserGroupInformation: 
> PrivilegedActionException as:mapr (auth:SIMPLE) cause:java.io.IOException: 
> Invalid source or target
> 17/05/30 11:57:38 ERROR distributedshell.ApplicationMaster: Not able to add 
> suffix (.bat/.sh) to the shell script filename
> java.io.IOException: Invalid source or target
>   at com.mapr.fs.MapRFileSystem.rename(MapRFileSystem.java:1132)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1036)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1032)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1400(ApplicationMaster.java:167)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$LaunchContainerRunnable.run(ApplicationMaster.java:953)
>   at java.lang.Thread.run(Thread.java:748)
> The DS application tries to launch an additional container and fails to 
> rename Execscript.sh, because the script was already renamed by a previous 
> container at the same filesystem path.
> I will upload the logs and path details soon.


