[jira] [Updated] (HADOOP-15881) Update JUnit section in LICENSE.txt

2018-10-24 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15881:
---
Description: 
HADOOP-14775 updated the JUnit 4 version from 4.11 to 4.12 and added JUnit 5 to 
the dependency management section.
* We need to update the JUnit 4 version in LICENSE.txt.
* For now, JUnit 5 is not used in any module; however, if we start using JUnit 5 
in any module, we will need to add a JUnit 5 section to LICENSE.txt (a sketch of 
the source-level migration follows below).
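
A minimal sketch of what the JUnit 4 to JUnit 5 migration looks like at the 
source level, assuming the JUnit 5 (Jupiter) artifacts that HADOOP-14775 added 
to dependency management; the test class here is hypothetical, not from the 
Hadoop codebase.

{code:java}
// JUnit 5 (Jupiter) imports replace org.junit.Test / org.junit.Assert
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class SampleJupiterTest {     // hypothetical test class
  @Test                              // JUnit 4 equivalent: org.junit.Test
  void lowerCases() {                // Jupiter tests may be package-private
    // Jupiter's assertEquals puts the failure message last;
    // JUnit 4's org.junit.Assert.assertEquals puts it first
    assertEquals("abc", "ABC".toLowerCase(), "expected lower-cased input");
  }
}
{code}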

> Update JUnit section in LICENSE.txt
> ---
>
> Key: HADOOP-15881
> URL: https://issues.apache.org/jira/browse/HADOOP-15881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> HADOOP-14775 updated the JUnit 4 version from 4.11 to 4.12 and added JUnit 5 
> to the dependency management section.
> * We need to update the JUnit 4 version in LICENSE.txt.
> * For now, JUnit 5 is not used in any module; however, if we start using 
> JUnit 5 in any module, we will need to add a JUnit 5 section to LICENSE.txt.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15881) Update JUnit section in LICENSE.txt

2018-10-24 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15881:
--

 Summary: Update JUnit section in LICENSE.txt
 Key: HADOOP-15881
 URL: https://issues.apache.org/jira/browse/HADOOP-15881
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Akira Ajisaka









[jira] [Updated] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2018-10-24 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14775:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-14693

> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
> --
>
> Key: HADOOP-14775
> URL: https://issues.apache.org/jira/browse/HADOOP-14775
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Ajay Kumar
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: junit5
> Fix For: 3.3.0
>
> Attachments: HADOOP-14775.01.patch, HADOOP-14775.02.patch, 
> HADOOP-14775.03.patch, HADOOP-14775.04.patch, HADOOP-14775.05.patch, 
> HADOOP-14775.06.patch
>
>
> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 






[jira] [Updated] (HADOOP-14693) Upgrade JUnit from 4 to 5

2018-10-24 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14693:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HADOOP-11123)

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Commented] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved

2018-10-24 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663157#comment-16663157
 ] 

Wei-Chiu Chuang commented on HADOOP-15864:
--

+1 on the rev003 patch

> Job submitter / executor fail when SBN domain name can not resolved
> ---
>
> Key: HADOOP-15864
> URL: https://issues.apache.org/jira/browse/HADOOP-15864
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Critical
> Fix For: 2.7.8, 3.3.0
>
> Attachments: HADOOP-15864-branch.2.7.001.patch, 
> HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, 
> HADOOP-15864.branch.2.7.004.patch
>
>
> Job submission and task execution fail if the Standby NameNode domain name 
> cannot be resolved on HDFS HA with the DelegationToken feature.
> This issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} 
> instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA 
> mode with security. In HDFS HA mode the UGI needs to include a separate token 
> for each NameNode in order to deal with an Active-Standby switch; the two 
> tokens' contents are of course the same.
> However, when #setTokenService is called in 
> {{HAUtil.cloneDelegationTokenForLogicalUri}}, it checks whether the address 
> of the NameNode has been resolved; if not, it throws an 
> #IllegalArgumentException, and the job submitter / task executor fails.
> HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think those two 
> tickets resolve it completely.
> Another question many people raise is why a NameNode domain name cannot be 
> resolved at all. There are many scenarios, for instance replacing a node 
> after a fault, or a DNS refresh. In any case, a Standby NameNode failure 
> should not impact Hadoop cluster stability, in my opinion.
> a. code ref: org.apache.hadoop.security.SecurityUtil line373-386
> {code:java}
>   public static Text buildTokenService(InetSocketAddress addr) {
> String host = null;
> if (useIpForTokenService) {
>   if (addr.isUnresolved()) { // host has no ip address
> throw new IllegalArgumentException(
> new UnknownHostException(addr.getHostName())
> );
>   }
>   host = addr.getAddress().getHostAddress();
> } else {
>   host = StringUtils.toLowerCase(addr.getHostName());
> }
> return new Text(host + ":" + addr.getPort());
>   }
> {code}
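> A minimal, self-contained sketch of why this path throws, using only JDK 
> classes; the standby host name is hypothetical:
> {code:java}
> import java.net.InetSocketAddress;
> 
> public class UnresolvedAddressDemo {
>   public static void main(String[] args) {
>     // createUnresolved skips the DNS lookup, mirroring a Standby NameNode
>     // whose domain name cannot currently be resolved
>     InetSocketAddress addr =
>         InetSocketAddress.createUnresolved("standby-nn.example.com", 8020);
>     System.out.println(addr.isUnresolved()); // true
>     System.out.println(addr.getAddress());   // null, so buildTokenService
>                                              // throws IllegalArgumentException
>   }
> }
> {code}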
> b. exception log ref:
> {code}
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Couldn't create proxy provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303)
> at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
> at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:665)
> ... 35 more
> Caused by: java.lang.reflect.InvocationTargetException
> at 

[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663108#comment-16663108
 ] 

Sunil Govindan commented on HADOOP-15815:
-

Hi [~borisvu] [~bharatviswa] [~busbey]

3.2 release prep is ongoing and the RC is almost ready. Could you please help 
confirm the status of this issue? Is this a blocker for any release from the 
Hadoop line? If so, when can we land it, and what impacts would it have, if 
any?

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.






[jira] [Assigned] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reassigned HADOOP-15815:
---

Assignee: Boris Vulikh  (was: Sunil Govindan)

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.






[jira] [Assigned] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reassigned HADOOP-15815:
---

Assignee: Sunil Govindan  (was: Boris Vulikh)

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.






[jira] [Comment Edited] (HADOOP-15880) WASB doesn't honor fs.trash.interval and this fails to auto purge trash folder

2018-10-24 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663079#comment-16663079
 ] 

Arpit Agarwal edited comment on HADOOP-15880 at 10/25/18 1:04 AM:
--

Hi [~Sunilkc], I don't expect any of the connectors (e.g. WASB, S3A) to honor 
_fs.trash.interval_. The purge functionality works for HDFS because it is 
implemented in the HDFS NameNode. The cloud object stores have no idea about 
this setting.
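
A client-side workaround sketch, assuming the public 
{{org.apache.hadoop.fs.Trash}} API; since the connectors have no server-side 
emptier, the purge has to be driven from a client. The wasb URI below is 
hypothetical.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class PurgeWasbTrash {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // hypothetical container and account; any FileSystem URI works the same way
    FileSystem fs = FileSystem.get(
        new Path("wasb://container@account.blob.core.windows.net/").toUri(), conf);
    Trash trash = new Trash(fs, conf);
    trash.expunge();     // deletes checkpoints older than fs.trash.interval
    trash.checkpoint();  // rolls .Trash/Current into a new checkpoint
  }
}
{code}

The CLI equivalent is {{hadoop fs -expunge}}, which could be scheduled 
externally against the store as a stopgap.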


was (Author: arpitagarwal):
Hi [~Sunilkc], I don't expect any of the connectors (e.g. WASB, S3A) to honor 
_fs.trash.interval_. The purge functionality is implemented in the HDFS 
NameNode. The cloud object stores have no idea about this setting.

> WASB doesn't honor fs.trash.interval and this fails to auto purge trash folder
> --
>
> Key: HADOOP-15880
> URL: https://issues.apache.org/jira/browse/HADOOP-15880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.3
> Environment: Any HDInsight cluster pointing to WASB. 
>Reporter: Sunil Kumar Chakrapani
>Priority: Major
>  Labels: WASB
>
> When "fs.trash.interval" is set to a value, trash for the local HDFS gets 
> cleared, whereas the trash folder on WASB is not deleted and files pile up 
> on the WASB store.
> WASB doesn't pick up the fs.trash.interval value, so it fails to auto-purge 
> the trash folder on the WASB store.
>  
> *Issue : WASB doesn't honor fs.trash.interval and this fails to auto purge 
> trash folder*
> *Steps to reproduce Scenario:*
> *Delete any file stored on HDFS*
> hdfs dfs -D "fs.default.name=hdfs://mycluster/" -rm /hivestore.txt
> 18/10/23 06:18:05 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://mycluster/hivestore.txt' to trash at: 
> hdfs://mycluster/user/sshuser/.Trash/Current/hivestore.txt
> *When deleted the file is moved to trash folder* 
> hdfs dfs -rm wasb:///hivestore.txt
> 18/10/23 06:19:13 INFO fs.TrashPolicyDefault: Moved: 
> 'wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/hivestore.txt'
>  to trash at: 
> wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/hivestore.txt
> *Reduced the fs.trash.interval from 360 to 1 and restarted all related 
> services.*
> *Trash for the local hdfs gets cleared honoring the "fs.trash.interval" 
> value.*
> hdfs dfs -D "fs.default.name=hdfs://mycluster/" -ls 
> hdfs://mycluster/user/sshuser/.Trash/Current/
> ls: File hdfs://mycluster/user/sshuser/.Trash/Current does not exist.
> *Where as the trash for WASB doesn't get cleared.*
> hdfs dfs -ls 
> wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/
> Found 1 items
> -rw-r--r-- 1 sshuser supergroup 1084 2018-10-23 06:19 
> wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/hivestore.txt
>  






[jira] [Commented] (HADOOP-15880) WASB doesn't honor fs.trash.interval and this fails to auto purge trash folder

2018-10-24 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663079#comment-16663079
 ] 

Arpit Agarwal commented on HADOOP-15880:


Hi [~Sunilkc], I don't expect any of the connectors (e.g. WASB, S3A) to honor 
_fs.trash.interval_. The purge functionality is implemented in the HDFS 
NameNode. The cloud object stores have no idea about this setting.

> WASB doesn't honor fs.trash.interval and this fails to auto purge trash folder
> --
>
> Key: HADOOP-15880
> URL: https://issues.apache.org/jira/browse/HADOOP-15880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.3
> Environment: Any HDInsight cluster pointing to WASB. 
>Reporter: Sunil Kumar Chakrapani
>Priority: Major
>  Labels: WASB
>
> When "fs.trash.interval" is set to a value, trash for the local HDFS gets 
> cleared, whereas the trash folder on WASB is not deleted and files pile up 
> on the WASB store.
> WASB doesn't pick up the fs.trash.interval value, so it fails to auto-purge 
> the trash folder on the WASB store.
>  
> *Issue : WASB doesn't honor fs.trash.interval and this fails to auto purge 
> trash folder*
> *Steps to reproduce Scenario:*
> *Delete any file stored on HDFS*
> hdfs dfs -D "fs.default.name=hdfs://mycluster/" -rm /hivestore.txt
> 18/10/23 06:18:05 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://mycluster/hivestore.txt' to trash at: 
> hdfs://mycluster/user/sshuser/.Trash/Current/hivestore.txt
> *When deleted the file is moved to trash folder* 
> hdfs dfs -rm wasb:///hivestore.txt
> 18/10/23 06:19:13 INFO fs.TrashPolicyDefault: Moved: 
> 'wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/hivestore.txt'
>  to trash at: 
> wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/hivestore.txt
> *Reduced the fs.trash.interval from 360 to 1 and restarted all related 
> services.*
> *Trash for the local hdfs gets cleared honoring the "fs.trash.interval" 
> value.*
> hdfs dfs -D "fs.default.name=hdfs://mycluster/" -ls 
> hdfs://mycluster/user/sshuser/.Trash/Current/
> ls: File hdfs://mycluster/user/sshuser/.Trash/Current does not exist.
> *Where as the trash for WASB doesn't get cleared.*
> hdfs dfs -ls 
> wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/
> Found 1 items
> -rw-r--r-- 1 sshuser supergroup 1084 2018-10-23 06:19 
> wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/hivestore.txt
>  






[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-10-24 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663065#comment-16663065
 ] 

ASF GitHub Bot commented on HADOOP-15870:
-

Github user lqjack commented on the issue:

https://github.com/apache/hadoop/pull/433
  
@steveloughran I have updated the code, please help review and comment, 
thanks.


> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Priority: Major
>
> Otherwise `remainingInFile` will not change after `seek`.
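
A sketch of the proposed one-line fix, assuming the {{contentLength}}, {{pos}}, 
and {{nextReadPos}} fields of S3AInputStream as in the Hadoop source:

{code:java}
// S3A seeks lazily: seek() only records nextReadPos, and pos advances on
// read(), so "remaining" must be measured from where the next read will start
public synchronized long remainingInFile() {
  // previously returned contentLength - pos, which is stale right after seek()
  return this.contentLength - this.nextReadPos;
}
{code}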






[GitHub] hadoop issue #433: HADOOP-15870

2018-10-24 Thread lqjack
Github user lqjack commented on the issue:

https://github.com/apache/hadoop/pull/433
  
@steveloughran I have updated the code, please help review and comment, 
thanks.





[jira] [Created] (HADOOP-15880) WASB doesn't honor fs.trash.interval and this fails to auto purge trash folder

2018-10-24 Thread Sunil Kumar Chakrapani (JIRA)
Sunil Kumar Chakrapani created HADOOP-15880:
---

 Summary: WASB doesn't honor fs.trash.interval and this fails to 
auto purge trash folder
 Key: HADOOP-15880
 URL: https://issues.apache.org/jira/browse/HADOOP-15880
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 2.7.3
 Environment: Any HDInsight cluster pointing to WASB. 
Reporter: Sunil Kumar Chakrapani


When "fs.trash.interval" is set to a value, trash for the local HDFS gets 
cleared, whereas the trash folder on WASB is not deleted and files pile up on 
the WASB store.

WASB doesn't pick up the fs.trash.interval value, so it fails to auto-purge 
the trash folder on the WASB store.

 

*Issue : WASB doesn't honor fs.trash.interval and this fails to auto purge 
trash folder*

*Steps to reproduce Scenario:*

*Delete any file stored on HDFS*

hdfs dfs -D "fs.default.name=hdfs://mycluster/" -rm /hivestore.txt
18/10/23 06:18:05 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://mycluster/hivestore.txt' to trash at: 
hdfs://mycluster/user/sshuser/.Trash/Current/hivestore.txt

*When deleted the file is moved to trash folder* 
hdfs dfs -rm wasb:///hivestore.txt
18/10/23 06:19:13 INFO fs.TrashPolicyDefault: Moved: 
'wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/hivestore.txt'
 to trash at: 
wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/hivestore.txt

*Reduced the fs.trash.interval from 360 to 1 and restarted all related 
services.*

*Trash for the local hdfs gets cleared honoring the "fs.trash.interval" value.*

hdfs dfs -D "fs.default.name=hdfs://mycluster/" -ls 
hdfs://mycluster/user/sshuser/.Trash/Current/
ls: File hdfs://mycluster/user/sshuser/.Trash/Current does not exist.

*Where as the trash for WASB doesn't get cleared.*

hdfs dfs -ls 
wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/
Found 1 items
-rw-r--r-- 1 sshuser supergroup 1084 2018-10-23 06:19 
wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/hivestore.txt

 






[jira] [Comment Edited] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662946#comment-16662946
 ] 

Bharat Viswanadham edited comment on HADOOP-15815 at 10/24/18 10:48 PM:


From the Jira comments:

MSHADE-242 happens when minifying the jar; this issue happens when relocating 
classes of jars with a module descriptor. This will also mean that it'll break 
the intended strong encapsulation.
Java 9 will not provide a solution for that yet, so I guess we'll have to log a 
warning as well.

It might impact Java 9, as asm 6 supports Java 9. But I have not completely 
understood the issue, or whether it will affect Java 8.

Is this what you are asking, [~busbey]?

Ping [~ajisakaa] and [~tasanuma0829] for help on this, to know what impact, if 
any, just upgrading the maven-shade-plugin version and ignoring this warning 
would create.


was (Author: bharatviswa):
From the Jira comments:

MSHADE-242 happens when minifying the jar; this issue happens when relocating 
classes of jars with a module descriptor. This will also mean that it'll break 
the intended strong encapsulation.
Java 9 will not provide a solution for that yet, so I guess we'll have to log a 
warning as well.

It might impact Java 9, as asm 6 supports Java 9. But I have not completely 
understood the issue, or whether it will affect Java 8.

Is this what you are asking, [~busbey]?

Tagging [~ajisakaa] for help on this, to know what impact, if any, just 
upgrading the maven-shade-plugin version and ignoring this warning would 
create.

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.






[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662946#comment-16662946
 ] 

Bharat Viswanadham commented on HADOOP-15815:
-

From the Jira comments:

MSHADE-242 happens when minifying the jar; this issue happens when relocating 
classes of jars with a module descriptor. This will also mean that it'll break 
the intended strong encapsulation.
Java 9 will not provide a solution for that yet, so I guess we'll have to log a 
warning as well.

It might impact Java 9, as asm 6 supports Java 9. But I have not completely 
understood the issue, or whether it will affect Java 8.

Is this what you are asking, [~busbey]?

Tagging [~ajisakaa] for help on this, to know what impact, if any, just 
upgrading the maven-shade-plugin version and ignoring this warning would 
create.

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.






[jira] [Updated] (HADOOP-15872) ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2

2018-10-24 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-15872:
--
Description: 
This update to the latest REST version (2018-11-09) will make the following 
changes to the ABFS driver:

1) The ABFS implementation of getFileStatus currently requires read permission. 
 According to HDFS permissions guide, it should only require execute on the 
parent folders (traversal access).  A new REST API has been introduced in REST 
version "2018-11-09" of ADLS Gen 2 to fix this problem.

2) The new "2018-11-09" REST version introduces support to i) automatically 
translate UPNs to OIDs when setting the owner, owning group, or ACL and ii) 
optionally translate OIDs to UPNs in the responses when getting the owner, 
owning group, or ACL.  Configuration will be introduced to optionally translate 
OIDs to UPNs in the responses.  Since translation has a performance impact, the 
default will be to perform no translation and return the OIDs.
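
An illustration of the traversal rule the first change refers to, using a 
generic FileSystem and hypothetical paths: under the HDFS model, getFileStatus 
needs only execute (--x) on each ancestor directory.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TraversalDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // HDFS semantics: requires execute on /, /data and /data/logs only;
    // before the 2018-11-09 REST version, ABFS additionally demanded read
    // permission on the file itself
    FileStatus st = fs.getFileStatus(new Path("/data/logs/app.log"));
    System.out.println(st.getOwner() + " " + st.getPermission());
  }
}
{code}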

  was:
The ABFS implementation of getFileStatus currently requires read permission.  
According to HDFS permissions guide, it should only require execute on the 
parent folders (traversal access).

 

getFileStatus should only require execute permission on the parent folders


> ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2
> -
>
> Key: HADOOP-15872
> URL: https://issues.apache.org/jira/browse/HADOOP-15872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Attachments: HADOOP-15872-001.patch
>
>
> This update to the latest REST version (2018-11-09) will make the following 
> changes to the ABFS driver:
> 1) The ABFS implementation of getFileStatus currently requires read 
> permission.  According to HDFS permissions guide, it should only require 
> execute on the parent folders (traversal access).  A new REST API has been 
> introduced in REST version "2018-11-09" of ADLS Gen 2 to fix this problem.
> 2) The new "2018-11-09" REST version introduces support to i) automatically 
> translate UPNs to OIDs when setting the owner, owning group, or ACL and ii) 
> optionally translate OIDs to UPNs in the responses when getting the owner, 
> owning group, or ACL.  Configuration will be introduced to optionally 
> translate OIDs to UPNs in the responses.  Since translation has a performance 
> impact, the default will be to perform no translation and return the OIDs.






[jira] [Updated] (HADOOP-15872) ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2

2018-10-24 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-15872:
--
Summary: ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2  
(was: ABFS: )

> ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2
> -
>
> Key: HADOOP-15872
> URL: https://issues.apache.org/jira/browse/HADOOP-15872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Attachments: HADOOP-15872-001.patch
>
>
> The ABFS implementation of getFileStatus currently requires read permission.  
> According to HDFS permissions guide, it should only require execute on the 
> parent folders (traversal access).
>  
> getFileStatus should only require execute permission on the parent folders






[jira] [Updated] (HADOOP-15872) ABFS:

2018-10-24 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-15872:
--
Summary: ABFS:   (was: ABFS: getFileStatus should only require execute 
permission on the parent folders)

> ABFS: 
> --
>
> Key: HADOOP-15872
> URL: https://issues.apache.org/jira/browse/HADOOP-15872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Attachments: HADOOP-15872-001.patch
>
>
> The ABFS implementation of getFileStatus currently requires read permission.  
> According to HDFS permissions guide, it should only require execute on the 
> parent folders (traversal access).






[jira] [Updated] (HADOOP-15872) ABFS:

2018-10-24 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-15872:
--
Description: 
The ABFS implementation of getFileStatus currently requires read permission.  
According to HDFS permissions guide, it should only require execute on the 
parent folders (traversal access).

 

getFileStatus should only require execute permission on the parent folders

  was:The ABFS implementation of getFileStatus currently requires read 
permission.  According to HDFS permissions guide, it should only require 
execute on the parent folders (traversal access).


> ABFS: 
> --
>
> Key: HADOOP-15872
> URL: https://issues.apache.org/jira/browse/HADOOP-15872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Attachments: HADOOP-15872-001.patch
>
>
> The ABFS implementation of getFileStatus currently requires read permission.  
> According to HDFS permissions guide, it should only require execute on the 
> parent folders (traversal access).
>  
> getFileStatus should only require execute permission on the parent folders






[jira] [Commented] (HADOOP-13916) Document how downstream clients should make use of the new shaded client artifacts

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662922#comment-16662922
 ] 

Sean Busbey commented on HADOOP-13916:
--

Moved down from Critical to Major to reflect current prioritization.

> Document how downstream clients should make use of the new shaded client 
> artifacts
> --
>
> Key: HADOOP-13916
> URL: https://issues.apache.org/jira/browse/HADOOP-13916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
>
> provide a quickstart that walks through using the new shaded dependencies 
> with Maven to create a simple downstream project.
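
A sketch of the kind of minimal downstream program such a quickstart might 
build, assuming it compiles against only the shaded {{hadoop-client-api}} 
artifact and runs with {{hadoop-client-runtime}}:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShadedClientQuickstart {
  public static void main(String[] args) throws Exception {
    // only public Hadoop API classes appear here, so no third-party
    // dependencies leak into the downstream project's classpath
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      for (FileStatus status : fs.listStatus(new Path("/"))) {
        System.out.println(status.getPath());
      }
    }
  }
}
{code}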






[jira] [Commented] (HADOOP-11656) Classpath isolation for downstream clients

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662921#comment-16662921
 ] 

Sean Busbey commented on HADOOP-11656:
--

bq. Is there a user guide on how to use the new client for HDFS/YARN/MapReduce 
apps?

There is not yet; it's tracked in HADOOP-13916.

> Classpath isolation for downstream clients
> --
>
> Key: HADOOP-11656
> URL: https://issues.apache.org/jira/browse/HADOOP-11656
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: classloading, classpath, dependencies, scripts, shell
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-11656_proposal.md
>
>
> Currently, Hadoop exposes downstream clients to a variety of third party 
> libraries. As our code base grows and matures we increase the set of 
> libraries we rely on. At the same time, as our user base grows we increase 
> the likelihood that some downstream project will run into a conflict while 
> attempting to use a different version of some library we depend on. This has 
> already happened with i.e. Guava several times for HBase, Accumulo, and Spark 
> (and I'm sure others).
> While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to 
> off and they don't do anything to help dependency conflicts on the driver 
> side or for folks talking to HDFS directly. This should serve as an umbrella 
> for changes needed to do things thoroughly on the next major version.
> We should ensure that downstream clients
> 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that 
> doesn't pull in any third party dependencies
> 2) only see our public API classes (or as close to this as feasible) when 
> executing user provided code, whether client side in a launcher/driver or on 
> the cluster in a container or within MR.
> This provides us with a double benefit: users get less grief when they want 
> to run substantially ahead or behind the versions we need and the project is 
> freer to change our own dependency versions because they'll no longer be in 
> our compatibility promises.
> Project specific task jiras to follow after I get some justifying use cases 
> written in the comments.






[jira] [Updated] (HADOOP-13916) Document how downstream clients should make use of the new shaded client artifacts

2018-10-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13916:
-
Priority: Major  (was: Critical)

> Document how downstream clients should make use of the new shaded client 
> artifacts
> --
>
> Key: HADOOP-13916
> URL: https://issues.apache.org/jira/browse/HADOOP-13916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
>
> provide a quickstart that walks through using the new shaded dependencies 
> with Maven to create a simple downstream project.






[jira] [Updated] (HADOOP-13916) Document how downstream clients should make use of the new shaded client artifacts

2018-10-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13916:
-
Issue Type: Improvement  (was: Bug)

> Document how downstream clients should make use of the new shaded client 
> artifacts
> --
>
> Key: HADOOP-13916
> URL: https://issues.apache.org/jira/browse/HADOOP-13916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> provide a quickstart that walks through using the new shaded dependencies 
> with Maven to create a simple downstream project.






[jira] [Comment Edited] (HADOOP-11656) Classpath isolation for downstream clients

2018-10-24 Thread Dagang Wei (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662919#comment-16662919
 ] 

Dagang Wei edited comment on HADOOP-11656 at 10/24/18 10:05 PM:


Is there a user guide on how to use the new client for HDFS/YARN/MapReduce apps?


was (Author: functicons):
Is there a user guide describing how to use the new client for 
HDFS/YARN/MapReduce apps?

> Classpath isolation for downstream clients
> --
>
> Key: HADOOP-11656
> URL: https://issues.apache.org/jira/browse/HADOOP-11656
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: classloading, classpath, dependencies, scripts, shell
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-11656_proposal.md
>
>
> Currently, Hadoop exposes downstream clients to a variety of third party 
> libraries. As our code base grows and matures we increase the set of 
> libraries we rely on. At the same time, as our user base grows we increase 
> the likelihood that some downstream project will run into a conflict while 
> attempting to use a different version of some library we depend on. This has 
> already happened with i.e. Guava several times for HBase, Accumulo, and Spark 
> (and I'm sure others).
> While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to 
> off and they don't do anything to help dependency conflicts on the driver 
> side or for folks talking to HDFS directly. This should serve as an umbrella 
> for changes needed to do things thoroughly on the next major version.
> We should ensure that downstream clients
> 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that 
> doesn't pull in any third party dependencies
> 2) only see our public API classes (or as close to this as feasible) when 
> executing user provided code, whether client side in a launcher/driver or on 
> the cluster in a container or within MR.
> This provides us with a double benefit: users get less grief when they want 
> to run substantially ahead or behind the versions we need and the project is 
> freer to change our own dependency versions because they'll no longer be in 
> our compatibility promises.
> Project specific task jiras to follow after I get some justifying use cases 
> written in the comments.






[jira] [Commented] (HADOOP-11656) Classpath isolation for downstream clients

2018-10-24 Thread Dagang Wei (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662919#comment-16662919
 ] 

Dagang Wei commented on HADOOP-11656:
-

Is there a user guide describing how to use the new client for 
HDFS/YARN/MapReduce apps?

> Classpath isolation for downstream clients
> --
>
> Key: HADOOP-11656
> URL: https://issues.apache.org/jira/browse/HADOOP-11656
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: classloading, classpath, dependencies, scripts, shell
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-11656_proposal.md
>
>
> Currently, Hadoop exposes downstream clients to a variety of third party 
> libraries. As our code base grows and matures we increase the set of 
> libraries we rely on. At the same time, as our user base grows we increase 
> the likelihood that some downstream project will run into a conflict while 
> attempting to use a different version of some library we depend on. This has 
> already happened with i.e. Guava several times for HBase, Accumulo, and Spark 
> (and I'm sure others).
> While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to 
> off and they don't do anything to help dependency conflicts on the driver 
> side or for folks talking to HDFS directly. This should serve as an umbrella 
> for changes needed to do things thoroughly on the next major version.
> We should ensure that downstream clients
> 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that 
> doesn't pull in any third party dependencies
> 2) only see our public API classes (or as close to this as feasible) when 
> executing user provided code, whether client side in a launcher/driver or on 
> the cluster in a container or within MR.
> This provides us with a double benefit: users get less grief when they want 
> to run substantially ahead or behind the versions we need and the project is 
> freer to change our own dependency versions because they'll no longer be in 
> our compatibility promises.
> Project specific task jiras to follow after I get some justifying use cases 
> written in the comments.






[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662896#comment-16662896
 ] 

Hudson commented on HADOOP-15823:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15313 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15313/])
HADOOP-15823. ABFS: Stop requiring client ID and tenant ID for MSI (templedf: 
rev e374584479b687e41d5379bb6d827dcae620e123)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java


> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15823-001.patch, HADOOP-15823-002.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?
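
A minimal sketch of why MSI needs neither value: the Azure Instance Metadata 
Service (IMDS) endpoint below is the documented fixed address, and it 
identifies the VM implicitly; a client ID only disambiguates between multiple 
user-assigned identities.

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MsiTokenDemo {
  public static void main(String[] args) throws Exception {
    // fixed link-local IMDS endpoint, reachable only from inside an Azure VM;
    // no tenant ID, client ID, or secret appears in the request
    URL url = new URL("http://169.254.169.254/metadata/identity/oauth2/token"
        + "?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com%2F");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Metadata", "true"); // mandatory marker header
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
      System.out.println(in.readLine()); // JSON body containing access_token
    }
  }
}
{code}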






[jira] [Updated] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-24 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-15823:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thank you for the patch, [~DanielZhou] and for the reviews, [~mackrorysd], 
[~shwetayakkali], and [~tmarquardt]!  Committed to trunk.

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15823-001.patch, HADOOP-15823-002.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?






[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662861#comment-16662861
 ] 

Sean Busbey commented on HADOOP-15815:
--

Sounds good to me. Anything in the release notes for the maven-shade-plugin 
versions we'll pass through that looks like it'll need investigating?

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.






[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-24 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662857#comment-16662857
 ] 

Daniel Templeton commented on HADOOP-15823:
---

Thanks for testing the patch, [~shwetayakkali].  The patch looks good to me as 
well.  +1  I'll go ahead and commit shortly.

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch, HADOOP-15823-002.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?






[jira] [Comment Edited] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-24 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662814#comment-16662814
 ] 

Shweta edited comment on HADOOP-15823 at 10/24/18 8:38 PM:
---

Thank you [~mackrorysd], [~DanielZhou], [~tmarquardt] for working on this. 
I was able to test patch v002 in my environment with the help of [~twu], and 
it passes the tests. I turned off the System Identity on all VMs and only had 
the User Identity, so that the client ID matches expected behavior.

Note: One of the VMs reported an error for Managed Identity after a restart of 
the VM with the system-assigned identity removed and a user-assigned identity 
added, possibly due to an old identity being stuck on the VM. From an offline 
sync, the reason was understood to be how the “default” identity caching works 
in IMDS.

However, this issue doesn’t seem to be a blocker for this JIRA.

+1 from my side.


was (Author: shwetayakkali):
Thank you [~mackrorysd], [~DanielZhou], [~tmarquardt] for working on this. 
I was able to test patch v002 in my environment, and it passes the tests. I 
turned off the System Identity on all VMs and only had the User Identity, so 
that the client ID matches expected behavior.

Note: One of the VMs reported an error for Managed Identity after a restart of 
the VM with the system-assigned identity removed and a user-assigned identity 
added, possibly due to an old identity being stuck on the VM. From an offline 
sync, the reason was understood to be how the “default” identity caching works 
in IMDS.

However, this issue doesn’t seem to be a blocker for this JIRA.

+1 from my side.

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch, HADOOP-15823-002.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?






[jira] [Comment Edited] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-24 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662814#comment-16662814
 ] 

Shweta edited comment on HADOOP-15823 at 10/24/18 8:37 PM:
---

Thank you [~mackrorysd], [~DanielZhou], [~tmarquardt] for working on this. 
I was able to test patch v002 in my environment, and it passes the tests. I 
turned off the System Identity on all VMs and only had the User Identity, so 
that the client ID matches expected behavior.

Note: One of the VMs reported an error for Managed Identity after a restart of 
the VM with the system-assigned identity removed and a user-assigned identity 
added, possibly due to an old identity being stuck on the VM. From an offline 
sync, the reason was understood to be how the “default” identity caching works 
in IMDS.

However, this issue doesn’t seem to be a blocker for this JIRA.

+1 from my side.


was (Author: shwetayakkali):
Thank you [~mackrorysd], [~DanielZhou], [~tmarquardt] for working on this. 
I was able to test patch v002 in my environment, and it passes the tests. I 
turned off the System Identity on all VMs and only had the User Identity, so 
that the client ID matches expected behavior.

Note: One of the VMs reported an error for Managed Identity after a restart of 
the VM with the system-assigned identity removed and a user-assigned identity 
added, possibly due to an old identity being stuck on the VM. From an offline 
sync, the reason was understood to be how the “default” identity is cached in 
IMDS.

However, this issue doesn’t seem to be a blocker for this JIRA.

+1 from my side.

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch, HADOOP-15823-002.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?






[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-24 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662814#comment-16662814
 ] 

Shweta commented on HADOOP-15823:
-

Thank you [~mackrorysd], [~DanielZhou], [~tmarquardt] for working on this. 
I was able to test patch v002 in my environment, and it passes the tests. I 
turned off the System Identity on all VMs and only had the User Identity, so 
that the client ID matches expected behavior.

Note: One of the VMs reported an error for Managed Identity after a restart of 
the VM with the system-assigned identity removed and a user-assigned identity 
added, possibly due to an old identity being stuck on the VM. From an offline 
sync, the reason was understood to be how the “default” identity is cached in 
IMDS.

However, this issue doesn’t seem to be a blocker for this JIRA.

+1 from my side.

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch, HADOOP-15823-002.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15875) S3AInputStream.seek should throw EOFException if seeking past the end of file

2018-10-24 Thread Shixiong Zhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662802#comment-16662802
 ] 

Shixiong Zhu commented on HADOOP-15875:
---

[~ste...@apache.org] S3AInputStream can check this easily since it already has 
the file length. 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobInputStream.java#L123
 has the same check.
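
For reference, a minimal sketch of the kind of bounds check being suggested; 
the field names and surrounding structure here are illustrative, not the 
actual S3AInputStream code:

{code:java}
// Sketch: fail fast in seek() when the target is outside the object,
// mirroring what DFSInputStream does. Assumes the stream already tracks
// the total object length in a contentLength field and the current
// offset in pos (EOFException is java.io.EOFException).
@Override
public synchronized void seek(long targetPos) throws IOException {
  if (targetPos < 0) {
    throw new EOFException("Cannot seek to negative offset " + targetPos);
  }
  if (targetPos > contentLength) {
    throw new EOFException("Cannot seek past end of file: " + targetPos
        + " > " + contentLength);
  }
  pos = targetPos; // the actual repositioning/reopen logic is elided
}
{code}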

> S3AInputStream.seek should throw EOFException if seeking past the end of file
> -
>
> Key: HADOOP-15875
> URL: https://issues.apache.org/jira/browse/HADOOP-15875
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Shixiong Zhu
>Priority: Minor
>
> I read the javadoc of `Seekable.seek`, but it doesn't say what should be done 
> when seeking past the end of file. Right now, DFSInputStream throws an 
> EOFException, but S3AInputStream doesn't throw any error.
> I think it's better to have consistent behavior in `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662718#comment-16662718
 ] 

Bharat Viswanadham edited comment on HADOOP-15815 at 10/24/18 7:26 PM:
---

I also see the same issue when applying the patch.

But when I upgraded the maven-shade-plugin version to 3.1.0, that resolved the 
issue:

https://issues.apache.org/jira/browse/MSHADE-258

This happens when a jar has a module descriptor. That JIRA mentions the same 
issue when using a jar with a module descriptor (the same asm jar).

The failure happens right after the asm jar is processed. When I checked the 
jar, it contains module-info.class.

So, upgrading maven-shade-plugin will resolve this issue. As for why we are 
seeing it with this patch: jetty 9.3.24.v20180605 depends on the asm 6.0 jar, 
which has module-info.class, whereas from 9.3.19 we get the asm 5.0.1 jar, 
which does not.

 
{code:java}
HW13865:Downloads bviswanadham$ jar -tf asm-commons-6.0.jar | grep "module"
module-info.class
{code}
 

 
{code:java}
HW13865:Downloads bviswanadham$ jar -tf asm-commons-5.0.jar | grep "module"
HW13865:Downloads bviswanadham$ 
{code}
{code:java}

[INFO] +- 
org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT:compile 
(optional) 
[INFO] | +- 
org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.24.v20180605:compile
[INFO] | | +- org.eclipse.jetty:jetty-annotations:jar:9.3.24.v20180605:compile
[INFO] | | | +- org.eclipse.jetty:jetty-plus:jar:9.3.24.v20180605:compile
[INFO] | | | | \- org.eclipse.jetty:jetty-jndi:jar:9.3.24.v20180605:compile
[INFO] | | | +- javax.annotation:javax.annotation-api:jar:1.2:compile
[INFO] | | | \- org.ow2.asm:asm-commons:jar:6.0:compile
[INFO] | | | \- org.ow2.asm:asm-tree:jar:6.0:compile{code}
{code:java}
[INFO] +- 
org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT:compile 
(optional) 
[INFO] | +- 
org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.19.v20170502:compile
[INFO] | | +- org.eclipse.jetty:jetty-annotations:jar:9.3.19.v20170502:compile
[INFO] | | | +- org.eclipse.jetty:jetty-plus:jar:9.3.19.v20170502:compile
[INFO] | | | | \- org.eclipse.jetty:jetty-jndi:jar:9.3.19.v20170502:compile
[INFO] | | | +- javax.annotation:javax.annotation-api:jar:1.2:compile
[INFO] | | | \- org.ow2.asm:asm-commons:jar:5.0.1:compile
[INFO] | | | \- org.ow2.asm:asm-tree:jar:5.0.1:compile{code}
 

So, to resolve this, I think we should upgrade to a newer maven-shade-plugin 
such as 3.1.0; a pom sketch follows.
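
A minimal sketch of that pin, assuming it goes into a shared pluginManagement 
section (the exact placement in the Hadoop poms may differ):

{code:xml}
<!-- pom.xml sketch: pin a shade-plugin release that understands
     module-info.class entries (MSHADE-258); 3.1.0 per the test above. -->
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.1.0</version>
    </plugin>
  </plugins>
</pluginManagement>
{code}

For reference, the failure with the current 2.4.3 plugin: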
{code:java}
[DEBUG] Processing JAR 
/Users/bviswanadham/.m2/repository/org/ow2/asm/asm-commons/6.0/asm-commons-6.0.jar
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:27 min
[INFO] Finished at: 2018-10-24T12:10:58-07:00
[INFO] Final Memory: 51M/1642M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project 
hadoop-client-minicluster: Error creating shaded jar: null: 
IllegalArgumentException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project 
hadoop-client-minicluster: Error creating shaded jar: null
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:993)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:345)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at 

[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662718#comment-16662718
 ] 

Bharat Viswanadham commented on HADOOP-15815:
-

I also see the same issue when applying the patch.

But when I upgraded the maven-shade-plugin version to 3.1.0, that resolved the 
issue:

https://issues.apache.org/jira/browse/MSHADE-258

This happens when a jar has a module descriptor. That JIRA mentions the same 
issue.

The failure happens right after the asm jar is processed. When I checked the 
jar, it contains module-info.class.

 
{code:java}
HW13865:Downloads bviswanadham$ jar -tf asm-commons-6.0.jar | grep "module"
module-info.class
{code}
 

 

So, upgrading maven-shade-plugin will resolve this issue. We are seeing it 
with this patch because jetty 9.3.24.v20180605 depends on the asm 6.0 jar, 
which has module-info.class, whereas from 9.3.19 we get the asm 5.0.1 jar, 
which does not.

 
{code:java}
[INFO] +- 
org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT:compile 
(optional) 
[INFO] | +- 
org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.24.v20180605:compile
[INFO] | | +- org.eclipse.jetty:jetty-annotations:jar:9.3.24.v20180605:compile
[INFO] | | | +- org.eclipse.jetty:jetty-plus:jar:9.3.24.v20180605:compile
[INFO] | | | | \- org.eclipse.jetty:jetty-jndi:jar:9.3.24.v20180605:compile
[INFO] | | | +- javax.annotation:javax.annotation-api:jar:1.2:compile
[INFO] | | | \- org.ow2.asm:asm-commons:jar:6.0:compile
[INFO] | | | \- org.ow2.asm:asm-tree:jar:6.0:compile{code}
{code:java}
[INFO] +- 
org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT:compile 
(optional) 
[INFO] | +- 
org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.19.v20170502:compile
[INFO] | | +- org.eclipse.jetty:jetty-annotations:jar:9.3.19.v20170502:compile
[INFO] | | | +- org.eclipse.jetty:jetty-plus:jar:9.3.19.v20170502:compile
[INFO] | | | | \- org.eclipse.jetty:jetty-jndi:jar:9.3.19.v20170502:compile
[INFO] | | | +- javax.annotation:javax.annotation-api:jar:1.2:compile
[INFO] | | | \- org.ow2.asm:asm-commons:jar:5.0.1:compile
[INFO] | | | \- org.ow2.asm:asm-tree:jar:5.0.1:compile{code}
 

So, to resolve this, I think we should upgrade to a newer maven-shade-plugin 
such as 3.1.0.
{code:java}
[DEBUG] Processing JAR 
/Users/bviswanadham/.m2/repository/org/ow2/asm/asm-commons/6.0/asm-commons-6.0.jar
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:27 min
[INFO] Finished at: 2018-10-24T12:10:58-07:00
[INFO] Final Memory: 51M/1642M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project 
hadoop-client-minicluster: Error creating shaded jar: null: 
IllegalArgumentException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project 
hadoop-client-minicluster: Error creating shaded jar: null
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:993)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:345)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: 

[jira] [Commented] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662672#comment-16662672
 ] 

Hadoop QA commented on HADOOP-15879:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 
11s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15879 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945444/HADOOP-15879.00.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 1df8c79bb71e 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c187404 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15416/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15416/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Upgrade eclipse jetty version to 9.3.25.v20180904
> -
>
> Key: HADOOP-15879
> URL: https://issues.apache.org/jira/browse/HADOOP-15879
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15879.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662664#comment-16662664
 ] 

Sean Busbey commented on HADOOP-15815:
--

here's the shaded client failure log:

 

[https://builds.apache.org/job/PreCommit-HADOOP-Build/15385/artifact/out/patch-shadedclient.txt/*view*/]

 

here's the relevant bit pulled out in case that build gets eaten by the history 
monster before things can be addressed:

 
{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project 
hadoop-client-minicluster: Error creating shaded jar: null: 
IllegalArgumentException -> [Help 1]
 {code}

Does the dependency update include any pom dependencies? This sounds like 
MSHADE-122.

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904

2018-10-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662647#comment-16662647
 ] 

Bharat Viswanadham commented on HADOOP-15879:
-

Thank you, [~jeagles].

I have closed this JIRA.

> Upgrade eclipse jetty version to 9.3.25.v20180904
> -
>
> Key: HADOOP-15879
> URL: https://issues.apache.org/jira/browse/HADOOP-15879
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15879.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15879:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Upgrade eclipse jetty version to 9.3.25.v20180904
> -
>
> Key: HADOOP-15879
> URL: https://issues.apache.org/jira/browse/HADOOP-15879
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15879.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662629#comment-16662629
 ] 

Hadoop QA commented on HADOOP-15864:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 28m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  4m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
18s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 18s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
53s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
48s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
55s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15864 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945420/HADOOP-15864.branch.2.7.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ff605aefe3ca 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bbc6dcd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15414/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |

[jira] [Updated] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15879:

Attachment: HADOOP-15879.00.patch

> Upgrade eclipse jetty version to 9.3.25.v20180904
> -
>
> Key: HADOOP-15879
> URL: https://issues.apache.org/jira/browse/HADOOP-15879
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15879.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15879:

Status: Patch Available  (was: Open)

> Upgrade eclipse jetty version to 9.3.25.v20180904
> -
>
> Key: HADOOP-15879
> URL: https://issues.apache.org/jira/browse/HADOOP-15879
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15879.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904

2018-10-24 Thread Jonathan Eagles (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662626#comment-16662626
 ] 

Jonathan Eagles commented on HADOOP-15879:
--

This seems to be the same as HADOOP-15815.

> Upgrade eclipse jetty version to 9.3.25.v20180904
> -
>
> Key: HADOOP-15879
> URL: https://issues.apache.org/jira/browse/HADOOP-15879
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904

2018-10-24 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-15879:
---

 Summary: Upgrade eclipse jetty version to 9.3.25.v20180904
 Key: HADOOP-15879
 URL: https://issues.apache.org/jira/browse/HADOOP-15879
 Project: Hadoop Common
  Issue Type: Task
Reporter: Bharat Viswanadham






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15879) Upgrade eclipse jetty version to 9.3.25.v20180904

2018-10-24 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HADOOP-15879:
---

Assignee: Bharat Viswanadham

> Upgrade eclipse jetty version to 9.3.25.v20180904
> -
>
> Key: HADOOP-15879
> URL: https://issues.apache.org/jira/browse/HADOOP-15879
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662615#comment-16662615
 ] 

Hadoop QA commented on HADOOP-15878:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-15878 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15878 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945436/HADOOP-15878.0.rendered.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15415/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> website should have a list of CVEs w/impacted versions and guidance
> ---
>
> Key: HADOOP-15878
> URL: https://issues.apache.org/jira/browse/HADOOP-15878
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HADOOP-15878.0.patch, HADOOP-15878.0.rendered.patch
>
>
> Our website should have a page with publicly disclosed CVEs listed. They 
> should include the community's understanding of impacted and fixed versions.
> For a simple example, see what kafka does:
> https://kafka.apache.org/cve-list



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15878:
-
Status: Patch Available  (was: In Progress)

QABot will fail, since it doesn't understand the hadoop-site repo.

> website should have a list of CVEs w/impacted versions and guidance
> ---
>
> Key: HADOOP-15878
> URL: https://issues.apache.org/jira/browse/HADOOP-15878
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HADOOP-15878.0.patch, HADOOP-15878.0.rendered.patch
>
>
> Our website should have a page with publicly disclosed CVEs listed. They 
> should include the community's understanding of impacted and fixed versions.
> For a simple example, see what kafka does:
> https://kafka.apache.org/cve-list



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662603#comment-16662603
 ] 

Sean Busbey commented on HADOOP-15878:
--

- v0
  - adds a new page for the CVE list under the "community" section of the navbar
  - adds entries for everything from the last ~12 months

- v0 rendered
  - same as above, but after running {{hugo}} to render

If there's a PMC member with better records on reported-on dates, please let me 
know. These are what I could figure out from the mailing lists.

> website should have a list of CVEs w/impacted versions and guidance
> ---
>
> Key: HADOOP-15878
> URL: https://issues.apache.org/jira/browse/HADOOP-15878
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HADOOP-15878.0.patch, HADOOP-15878.0.rendered.patch
>
>
> Our website should have a page with publicly disclosed CVEs listed. They 
> should include the community's understanding of impacted and fixed versions.
> For a simple example, see what kafka does:
> https://kafka.apache.org/cve-list



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662603#comment-16662603
 ] 

Sean Busbey edited comment on HADOOP-15878 at 10/24/18 5:39 PM:


-v0
  - adds a new page for the CVE list under the "community" section of the navbar
  - adds entries for everything from the last ~12 months

-v0 rendered
  - same as above, but after running {{hugo}} to render

If there's a PMC member with better records on reported-on dates, please let me 
know. These are what I could figure out from the mailing lists.


was (Author: busbey):
- v0
  - adds a new page for the CVE list under the "community" section of the navbar
  - adds entries for everything from the last ~12 months

- v0 rendered
  - same as above, but after running {{hugo}} to render

If there's a PMC member with better records on reported-on dates, please let me 
know. These are what I could figure out from the mailing lists.

> website should have a list of CVEs w/impacted versions and guidance
> ---
>
> Key: HADOOP-15878
> URL: https://issues.apache.org/jira/browse/HADOOP-15878
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HADOOP-15878.0.patch, HADOOP-15878.0.rendered.patch
>
>
> Our website should have a page with publicly disclosed CVEs listed. They 
> should include the community's understanding of impacted and fixed versions.
> For a simple example, see what kafka does:
> https://kafka.apache.org/cve-list



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15878:
-
Attachment: HADOOP-15878.0.rendered.patch
HADOOP-15878.0.patch

> website should have a list of CVEs w/impacted versions and guidance
> ---
>
> Key: HADOOP-15878
> URL: https://issues.apache.org/jira/browse/HADOOP-15878
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HADOOP-15878.0.patch, HADOOP-15878.0.rendered.patch
>
>
> Our website should have a page with publicly disclosed CVEs listed. They 
> should include the community's understanding of impacted and fixed versions.
> For a simple example, see what kafka does:
> https://kafka.apache.org/cve-list



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-24 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15850:
-
Fix Version/s: 2.9.2

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.10.0, 2.9.2, 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15850.branch-3.0.patch, HADOOP-15850.v2.patch, 
> HADOOP-15850.v3.patch, HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, 
> HADOOP-15850.v6.patch, testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> isSplit() returns false for both. Otherwise, the following from toString 
> would have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.
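
A minimal sketch of the kind of guard the summary asks for; the option lookup, 
method name, and logging here are assumptions, not the committed patch:

{code:java}
// Sketch: only run chunk concatenation when the copy actually split
// files into chunks, i.e. blocks per chunk was set to a positive value.
// Assumes org.apache.hadoop.conf.Configuration and the DistCp
// DistCpOptionSwitch.BLOCKS_PER_CHUNK option.
private void concatFileChunksIfNeeded(Configuration conf) throws IOException {
  int blocksPerChunk = conf.getInt(
      DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
  if (blocksPerChunk <= 0) {
    // Nothing was split, so every listing entry is a whole file and
    // there is nothing to concatenate.
    LOG.debug("blocks per chunk is " + blocksPerChunk + ", skipping concat");
    return;
  }
  concatFileChunks(conf); // existing chunk-merging logic
}
{code}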



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-24 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662529#comment-16662529
 ] 

Wei-Chiu Chuang commented on HADOOP-15850:
--

Pushed to branch-2 and branch-2.9.

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.10.0, 2.9.2, 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15850.branch-3.0.patch, HADOOP-15850.v2.patch, 
> HADOOP-15850.v3.patch, HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, 
> HADOOP-15850.v6.patch, testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> isSplit() returns false for both. Otherwise, the following from toString 
> would have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-24 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15850:
-
Fix Version/s: 2.9.2

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15850.branch-3.0.patch, HADOOP-15850.v2.patch, 
> HADOOP-15850.v3.patch, HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, 
> HADOOP-15850.v6.patch, testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> isSplit() returns false for both. Otherwise, the following from toString 
> would have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-24 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15850:
-
Fix Version/s: (was: 2.9.2)
   2.10.0

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15850.branch-3.0.patch, HADOOP-15850.v2.patch, 
> HADOOP-15850.v3.patch, HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, 
> HADOOP-15850.v6.patch, testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> isSplit() returns false for both. Otherwise, the following from toString 
> would have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15836) Review of AccessControlList

2018-10-24 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662516#comment-16662516
 ] 

Íñigo Goiri commented on HADOOP-15836:
--

[~belugabehr] take care and we'll be waiting for your return.

> Review of AccessControlList
> ---
>
> Key: HADOOP-15836
> URL: https://issues.apache.org/jira/browse/HADOOP-15836
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15836.1.patch, assertEqualACLStrings.patch
>
>
> * Improve unit tests (expected / actual were backwards)
> * Unit tests expected elements to be in order, but the class's returned 
> Collections were unordered
> * Formatting cleanup
> * Removed superfluous white space
> * Removed use of LinkedList
> * Removed superfluous code
> * Use {{unmodifiable}} Collections where the JavaDoc states that the caller 
> must not manipulate the data structure (see the sketch after this list)
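
A minimal sketch of that last point, with a hypothetical getter and field 
rather than the actual AccessControlList code:

{code:java}
// Sketch: hand out a read-only view so callers cannot mutate the
// internal collection the JavaDoc tells them not to touch.
// Uses java.util.Collection and java.util.Collections; "users" is an
// assumed internal Collection<String> field.
public Collection<String> getUsers() {
  return Collections.unmodifiableCollection(users);
}
{code}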



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15878 started by Sean Busbey.

> website should have a list of CVEs w/impacted versions and guidance
> ---
>
> Key: HADOOP-15878
> URL: https://issues.apache.org/jira/browse/HADOOP-15878
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> Our website should have a page with publicly disclosed CVEs listed. They 
> should include the community's understanding of impacted and fixed versions.
> For a simple example, see what kafka does:
> https://kafka.apache.org/cve-list



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-24 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662498#comment-16662498
 ] 

Wei-Chiu Chuang commented on HADOOP-15850:
--

{quote}Am I correct that this impacts downstream users of DistCp beyond HBase?
{quote}
Most likely yes.

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15850.branch-3.0.patch, HADOOP-15850.v2.patch, 
> HADOOP-15850.v3.patch, HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, 
> HADOOP-15850.v6.patch, testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen: the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false. Otherwise the following from toString() should be 
> logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.
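Per the issue title, the proposed guard is to skip chunk concatenation entirely 
when DistCp never split any files. A minimal sketch, assuming 
"distcp.blocks.per.chunk" as the config key and illustrative names (not the 
committed patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

class ConcatGuardSketch {
  // Sketch: concatFileChunks should be a no-op unless files were actually
  // split. A blocks-per-chunk value of 0 means the copy listing contains
  // whole files, so there are no chunks to reassemble.
  static boolean shouldConcatChunks(Configuration conf) {
    return conf.getInt("distcp.blocks.per.chunk", 0) > 0;
  }
}
{code}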



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15876) Use keySet().removeAll() to remove multiple keys from Map in AzureBlobFileSystemStore

2018-10-24 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15876:

Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-15763

> Use keySet().removeAll() to remove multiple keys from Map in 
> AzureBlobFileSystemStore
> -
>
> Key: HADOOP-15876
> URL: https://issues.apache.org/jira/browse/HADOOP-15876
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Priority: Minor
>
> Looking at 
> hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
>  , {{removeDefaultAcl}} in particular:
> {code}
> for (Map.Entry<String, String> defaultAclEntry : 
> defaultAclEntries.entrySet()) {
>   aclEntries.remove(defaultAclEntry.getKey());
> }
> {code}
> The above operation can be written this way:
> {code}
> aclEntries.keySet().removeAll(defaultAclEntries.keySet());
> {code}
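A self-contained illustration of the suggested rewrite, using a plain HashMap 
rather than the actual AzureBlobFileSystemStore types:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class RemoveAllDemo {
  public static void main(String[] args) {
    Map<String, String> aclEntries = new HashMap<>();
    aclEntries.put("default:user:alice", "rwx");
    aclEntries.put("user:bob", "r-x");

    Map<String, String> defaultAclEntries = new HashMap<>();
    defaultAclEntries.put("default:user:alice", "rwx");

    // One bulk call replaces the explicit loop. keySet() is a live view,
    // so removing keys from it writes through to the underlying map.
    aclEntries.keySet().removeAll(defaultAclEntries.keySet());

    System.out.println(aclEntries); // prints {user:bob=r-x}
  }
}
{code}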



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved

2018-10-24 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HADOOP-15864:
-
Fix Version/s: 3.3.0
   2.7.8

> Job submitter / executor fail when SBN domain name can not resolved
> ---
>
> Key: HADOOP-15864
> URL: https://issues.apache.org/jira/browse/HADOOP-15864
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Critical
> Fix For: 2.7.8, 3.3.0
>
> Attachments: HADOOP-15864-branch.2.7.001.patch, 
> HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, 
> HADOOP-15864.branch.2.7.004.patch
>
>
> Job submission and task execution fail if the Standby NameNode's domain name 
> cannot be resolved on HDFS HA with the DelegationToken feature.
> This issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} 
> instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA 
> mode with security enabled. Since in HDFS HA mode the UGI needs to include a 
> separate token for each NameNode in order to handle Active-Standby switches, 
> the two tokens' contents are of course the same.
> However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} 
> checks whether the address of the NameNode has been resolved; if not, it 
> throws an #IllegalArgumentException, and the job submitter / task executor 
> fails.
> HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think those two 
> tickets resolve it completely.
> Another question many people ask is why the NameNode domain name cannot be 
> resolved. I think there are many scenarios, for instance replacing a node 
> after a fault, or refreshing DNS. In any case, a Standby NameNode failure 
> should not impact Hadoop cluster stability, in my opinion.
> a. code ref: org.apache.hadoop.security.SecurityUtil line373-386
> {code:java}
>   public static Text buildTokenService(InetSocketAddress addr) {
> String host = null;
> if (useIpForTokenService) {
>   if (addr.isUnresolved()) { // host has no ip address
> throw new IllegalArgumentException(
> new UnknownHostException(addr.getHostName())
> );
>   }
>   host = addr.getAddress().getHostAddress();
> } else {
>   host = StringUtils.toLowerCase(addr.getHostName());
> }
> return new Text(host + ":" + addr.getPort());
>   }
> {code}
> b.exception log ref:
> {code:xml}
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Couldn't create proxy provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303)
> at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
> at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:665)
> ... 35 more
> Caused by: java.lang.reflect.InvocationTargetException
> at 
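A minimal, self-contained sketch of the check in (a) tripping on an unresolved 
address; the host name is hypothetical, and this assumes the default 
hadoop.security.token.service.use_ip=true so that the IP branch is taken:

{code:java}
import java.net.InetSocketAddress;
import org.apache.hadoop.security.SecurityUtil;

public class UnresolvedTokenServiceSketch {
  public static void main(String[] args) {
    // createUnresolved() skips the DNS lookup, mimicking a Standby
    // NameNode whose domain name cannot currently be resolved.
    InetSocketAddress addr =
        InetSocketAddress.createUnresolved("standby-nn.example.com", 8020);
    // buildTokenService() sees addr.isUnresolved() == true and throws
    // IllegalArgumentException(UnknownHostException), which is what fails
    // the job submitter / task executor described above.
    SecurityUtil.buildTokenService(addr);
  }
}
{code}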

[jira] [Commented] (HADOOP-15875) S3AInputStream.seek should throw EOFException if seeking past the end of file

2018-10-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662472#comment-16662472
 ] 

Steve Loughran commented on HADOOP-15875:
-

Doesn't it do this? I guess with lazy seek it might be postponing the check 
until the first read().

seek() is a special case: 
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/filesystem/fsdatainputstream.html#Seekable.seeks

POSIX filesystems don't fail on the seek() either, because you are allowed to 
write beyond the EOF. There's also the fact that the EOF can move about 
dynamically.

Which means: you can't rely on seek() failing if you go past the EOF, even 
though HDFS does.

I'll take a patch (which will have to change the s3a.xml contract options), but 
it's not something I view as that significant, precisely because it's 
consistent with POSIX.
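A minimal sketch of the difference, assuming an existing FileSystem instance 
and a path to a real file; with lazy seek, an object store such as S3A may 
only surface the error at read():

{code:java}
import java.io.EOFException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SeekPastEofProbe {
  /** Probes where "seek past EOF" actually surfaces on a FileSystem. */
  public static void probe(FileSystem fs, Path path) throws Exception {
    long len = fs.getFileStatus(path).getLen();
    try (FSDataInputStream in = fs.open(path)) {
      in.seek(len + 1); // HDFS throws EOFException here
      in.read();        // a lazy-seek store may only fail here, if at all
    } catch (EOFException e) {
      System.out.println("EOF error surfaced: " + e);
    }
  }
}
{code}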

> S3AInputStream.seek should throw EOFException if seeking past the end of file
> -
>
> Key: HADOOP-15875
> URL: https://issues.apache.org/jira/browse/HADOOP-15875
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Shixiong Zhu
>Priority: Major
>
> I read the javadoc of `Seekable.seek`, but it doesn't say what should be done 
> when seeking past the end of the file. Right now, DFSInputStream throws an 
> EOFException, but S3AInputStream doesn't throw any error.
> I think it's better to have consistent behavior in `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15875) S3AInputStream.seek should throw EOFException if seeking past the end of file

2018-10-24 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15875:

Priority: Minor  (was: Major)

> S3AInputStream.seek should throw EOFException if seeking past the end of file
> -
>
> Key: HADOOP-15875
> URL: https://issues.apache.org/jira/browse/HADOOP-15875
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Shixiong Zhu
>Priority: Minor
>
> I read the javadoc of `Seekable.seek`, but it doesn't say what should be done 
> when seeking past the end of the file. Right now, DFSInputStream throws an 
> EOFException, but S3AInputStream doesn't throw any error.
> I think it's better to have consistent behavior in `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15875) S3AInputStream.seek should throw EOFException if seeking past the end of file

2018-10-24 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15875:

Affects Version/s: 3.2.0

> S3AInputStream.seek should throw EOFException if seeking past the end of file
> -
>
> Key: HADOOP-15875
> URL: https://issues.apache.org/jira/browse/HADOOP-15875
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Shixiong Zhu
>Priority: Major
>
> I read the javadoc of `Seekable.seek`, but it doesn't say what should be done 
> when seeking past the end of the file. Right now, DFSInputStream throws an 
> EOFException, but S3AInputStream doesn't throw any error.
> I think it's better to have consistent behavior in `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved

2018-10-24 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662429#comment-16662429
 ] 

He Xiaoqiao commented on HADOOP-15864:
--

Thanks [~jojochuang] for your suggestion. [^HADOOP-15864.003.patch] is ready 
for trunk, and I found the unit tests pass. I also renamed v002 to follow the 
right format and resubmitted it. FYI.

> Job submitter / executor fail when SBN domain name can not resolved
> ---
>
> Key: HADOOP-15864
> URL: https://issues.apache.org/jira/browse/HADOOP-15864
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Critical
> Attachments: HADOOP-15864-branch.2.7.001.patch, 
> HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, 
> HADOOP-15864.branch.2.7.004.patch
>
>
> Job submission and task execution fail if the Standby NameNode's domain name 
> cannot be resolved on HDFS HA with the DelegationToken feature.
> This issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} 
> instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA 
> mode with security enabled. Since in HDFS HA mode the UGI needs to include a 
> separate token for each NameNode in order to handle Active-Standby switches, 
> the two tokens' contents are of course the same.
> However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} 
> checks whether the address of the NameNode has been resolved; if not, it 
> throws an #IllegalArgumentException, and the job submitter / task executor 
> fails.
> HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think those two 
> tickets resolve it completely.
> Another question many people ask is why the NameNode domain name cannot be 
> resolved. I think there are many scenarios, for instance replacing a node 
> after a fault, or refreshing DNS. In any case, a Standby NameNode failure 
> should not impact Hadoop cluster stability, in my opinion.
> a. code ref: org.apache.hadoop.security.SecurityUtil line373-386
> {code:java}
>   public static Text buildTokenService(InetSocketAddress addr) {
> String host = null;
> if (useIpForTokenService) {
>   if (addr.isUnresolved()) { // host has no ip address
> throw new IllegalArgumentException(
> new UnknownHostException(addr.getHostName())
> );
>   }
>   host = addr.getAddress().getHostAddress();
> } else {
>   host = StringUtils.toLowerCase(addr.getHostName());
> }
> return new Text(host + ":" + addr.getPort());
>   }
> {code}
> b.exception log ref:
> {code:xml}
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Couldn't create proxy provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303)
> at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
> at 

[jira] [Updated] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved

2018-10-24 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HADOOP-15864:
-
Attachment: HADOOP-15864.branch.2.7.004.patch

> Job submitter / executor fail when SBN domain name can not resolved
> ---
>
> Key: HADOOP-15864
> URL: https://issues.apache.org/jira/browse/HADOOP-15864
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Critical
> Attachments: HADOOP-15864-branch.2.7.001.patch, 
> HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, 
> HADOOP-15864.branch.2.7.004.patch
>
>
> Job submission and task execution fail if the Standby NameNode's domain name 
> cannot be resolved on HDFS HA with the DelegationToken feature.
> This issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} 
> instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA 
> mode with security enabled. Since in HDFS HA mode the UGI needs to include a 
> separate token for each NameNode in order to handle Active-Standby switches, 
> the two tokens' contents are of course the same.
> However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} 
> checks whether the address of the NameNode has been resolved; if not, it 
> throws an #IllegalArgumentException, and the job submitter / task executor 
> fails.
> HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think those two 
> tickets resolve it completely.
> Another question many people ask is why the NameNode domain name cannot be 
> resolved. I think there are many scenarios, for instance replacing a node 
> after a fault, or refreshing DNS. In any case, a Standby NameNode failure 
> should not impact Hadoop cluster stability, in my opinion.
> a. code ref: org.apache.hadoop.security.SecurityUtil line373-386
> {code:java}
>   public static Text buildTokenService(InetSocketAddress addr) {
> String host = null;
> if (useIpForTokenService) {
>   if (addr.isUnresolved()) { // host has no ip address
> throw new IllegalArgumentException(
> new UnknownHostException(addr.getHostName())
> );
>   }
>   host = addr.getAddress().getHostAddress();
> } else {
>   host = StringUtils.toLowerCase(addr.getHostName());
> }
> return new Text(host + ":" + addr.getPort());
>   }
> {code}
> b.exception log ref:
> {code:xml}
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Couldn't create proxy provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303)
> at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
> at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:665)
> ... 35 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
> at 

[jira] [Created] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance

2018-10-24 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-15878:


 Summary: website should have a list of CVEs w/impacted versions 
and guidance
 Key: HADOOP-15878
 URL: https://issues.apache.org/jira/browse/HADOOP-15878
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Reporter: Sean Busbey
Assignee: Sean Busbey


Our website should have a page with publicly disclosed CVEs listed. They should 
include the community's understanding of impacted and fixed versions.

For a simple example, see what kafka does:

https://kafka.apache.org/cve-list



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved

2018-10-24 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662401#comment-16662401
 ] 

Wei-Chiu Chuang commented on HADOOP-15864:
--

Thanks for the patch. 

First off, please update the affects versions and fix versions.

It sounds to me like the bug still exists in trunk, so you should definitely 
provide a patch against trunk to begin with.

If you do intend to offer a patch for branch-2.7, try to rename the patch to 
HADOOP-15864.branch.2.7.002.patch. I find that separating the JIRA ID, branch 
name and revision number with dots works for me all the time.

> Job submitter / executor fail when SBN domain name can not resolved
> ---
>
> Key: HADOOP-15864
> URL: https://issues.apache.org/jira/browse/HADOOP-15864
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Critical
> Attachments: HADOOP-15864-branch.2.7.001.patch, 
> HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch
>
>
> Job submission and task execution fail if the Standby NameNode's domain name 
> cannot be resolved on HDFS HA with the DelegationToken feature.
> This issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} 
> instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA 
> mode with security enabled. Since in HDFS HA mode the UGI needs to include a 
> separate token for each NameNode in order to handle Active-Standby switches, 
> the two tokens' contents are of course the same.
> However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} 
> checks whether the address of the NameNode has been resolved; if not, it 
> throws an #IllegalArgumentException, and the job submitter / task executor 
> fails.
> HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think those two 
> tickets resolve it completely.
> Another question many people ask is why the NameNode domain name cannot be 
> resolved. I think there are many scenarios, for instance replacing a node 
> after a fault, or refreshing DNS. In any case, a Standby NameNode failure 
> should not impact Hadoop cluster stability, in my opinion.
> a. code ref: org.apache.hadoop.security.SecurityUtil line373-386
> {code:java}
>   public static Text buildTokenService(InetSocketAddress addr) {
> String host = null;
> if (useIpForTokenService) {
>   if (addr.isUnresolved()) { // host has no ip address
> throw new IllegalArgumentException(
> new UnknownHostException(addr.getHostName())
> );
>   }
>   host = addr.getAddress().getHostAddress();
> } else {
>   host = StringUtils.toLowerCase(addr.getHostName());
> }
> return new Text(host + ":" + addr.getPort());
>   }
> {code}
> b.exception log ref:
> {code:xml}
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Couldn't create proxy provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303)
> at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at 

[jira] [Commented] (HADOOP-15856) Trunk build fails to compile native on Windows

2018-10-24 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662138#comment-16662138
 ] 

Takanobu Asanuma commented on HADOOP-15856:
---

Sorry for creating the bug. Thanks [~vinayrpet], [~ayushtkn], [~surendrasingh] 
and [~elgoiri] for investigating and fixing it.

> Trunk build fails to compile native on Windows
> --
>
> Key: HADOOP-15856
> URL: https://issues.apache.org/jira/browse/HADOOP-15856
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-15856-01.patch
>
>
> After the removal of the {{javah}} dependency in HADOOP-15767, the trunk 
> build fails because it is unable to find the JNI headers.
> HADOOP-15767 fixed the javah issue with JDK 10 only for Linux builds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-24 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662127#comment-16662127
 ] 

Takanobu Asanuma commented on HADOOP-15815:
---

[~borisvu] Thanks for the patch.

Actually, the failure of the shadedclient tests seems to be related, though I'm 
not sure of the cause.

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14693) Upgrade JUnit from 4 to 5

2018-10-24 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662009#comment-16662009
 ] 

Takanobu Asanuma commented on HADOOP-14693:
---

HADOOP-14775 is resolved. Now we can start working on this.

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> JUnit 4 does not support Java 9. We need to upgrade this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15869) BlockDecompressorStream#decompress should not return -1 in case of IOException.

2018-10-24 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661989#comment-16661989
 ] 

Surendra Singh Lilhore commented on HADOOP-15869:
-

{quote}should we catch the IOException ? 
{quote}
No, we shouldn't catch the IOException, but the current code does catch it, and 
that is causing data loss for the user.

> BlockDecompressorStream#decompress should not return -1 in case of 
> IOException.
> ---
>
> Key: HADOOP-15869
> URL: https://issues.apache.org/jira/browse/HADOOP-15869
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HADOOP-15869.01.patch
>
>
> BlockDecompressorStream#decompress() returns -1 in case of a 
> BlockMissingException. An application using BlockDecompressorStream may 
> think the file is empty and proceed further, but the read operation should 
> actually fail.
> {code:java}
> // Get original data size
> try {
>   originalBlockSize = rawReadInt();
> } catch (IOException ioe) {
>   return -1;
> }
> {code}
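One way to fix this is to treat only a clean end-of-stream as -1 and let every 
other IOException propagate. A sketch of that change, not necessarily the 
committed patch:

{code:java}
// Sketch: map only a genuine EOF to -1; any other IOException
// (e.g. BlockMissingException) must propagate to the caller instead
// of being silently swallowed. Requires: import java.io.EOFException;
try {
  originalBlockSize = rawReadInt();
} catch (EOFException eof) {
  return -1; // clean end of stream
}
{code}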



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2018-10-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661939#comment-16661939
 ] 

Hudson commented on HADOOP-14775:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15304 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15304/])
HADOOP-14775. Change junit dependency in parent pom file to junit 5 (tasanuma: 
rev 1c0aae63a7eb59e6f1857b4438ba89dec7821c19)
* (edit) hadoop-project/pom.xml


> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
> --
>
> Key: HADOOP-14775
> URL: https://issues.apache.org/jira/browse/HADOOP-14775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Ajay Kumar
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: junit5
> Fix For: 3.3.0
>
> Attachments: HADOOP-14775.01.patch, HADOOP-14775.02.patch, 
> HADOOP-14775.03.patch, HADOOP-14775.04.patch, HADOOP-14775.05.patch, 
> HADOOP-14775.06.patch
>
>
> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2018-10-24 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-14775:
--
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
> --
>
> Key: HADOOP-14775
> URL: https://issues.apache.org/jira/browse/HADOOP-14775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Ajay Kumar
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: junit5
> Fix For: 3.3.0
>
> Attachments: HADOOP-14775.01.patch, HADOOP-14775.02.patch, 
> HADOOP-14775.03.patch, HADOOP-14775.04.patch, HADOOP-14775.05.patch, 
> HADOOP-14775.06.patch
>
>
> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2018-10-24 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661884#comment-16661884
 ] 

Takanobu Asanuma commented on HADOOP-14775:
---

Committed to trunk. Thanks for the contribution, [~ajisakaa], and thanks for 
the review, [~ajayydv]!

> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
> --
>
> Key: HADOOP-14775
> URL: https://issues.apache.org/jira/browse/HADOOP-14775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Ajay Kumar
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: junit5
> Attachments: HADOOP-14775.01.patch, HADOOP-14775.02.patch, 
> HADOOP-14775.03.patch, HADOOP-14775.04.patch, HADOOP-14775.05.patch, 
> HADOOP-14775.06.patch
>
>
> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2018-10-24 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661876#comment-16661876
 ] 

Takanobu Asanuma commented on HADOOP-14775:
---

+1. Will commit it soon.

> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
> --
>
> Key: HADOOP-14775
> URL: https://issues.apache.org/jira/browse/HADOOP-14775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Ajay Kumar
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: junit5
> Attachments: HADOOP-14775.01.patch, HADOOP-14775.02.patch, 
> HADOOP-14775.03.patch, HADOOP-14775.04.patch, HADOOP-14775.05.patch, 
> HADOOP-14775.06.patch
>
>
> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15877) Upgrade Curator version to 4.0.1

2018-10-24 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661805#comment-16661805
 ] 

Akira Ajisaka commented on HADOOP-15877:


Unfortunately, upgrading the Curator version to 4.0.1 does not fix YARN-8937. 
When using ZK 3.4.x, Curator 2.x is used for testing.

Detail: 
https://issues.apache.org/jira/browse/CURATOR-409?focusedCommentId=16661755=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16661755

> Upgrade Curator version to 4.0.1
> 
>
> Key: HADOOP-15877
> URL: https://issues.apache.org/jira/browse/HADOOP-15877
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> A long-term option to fix YARN-8937.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved

2018-10-24 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661790#comment-16661790
 ] 

He Xiaoqiao commented on HADOOP-15864:
--

[~jojochuang] I checked the failing unit test and it passed on my local 
machine; I think it is not related to this patch. Another question: 
[^HADOOP-15864-branch.2.7.002.patch] is for branch-2.7, however Jenkins applied 
it to branch-3.3.0. Could you give some suggestions?

> Job submitter / executor fail when SBN domain name can not resolved
> ---
>
> Key: HADOOP-15864
> URL: https://issues.apache.org/jira/browse/HADOOP-15864
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Critical
> Attachments: HADOOP-15864-branch.2.7.001.patch, 
> HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch
>
>
> Job submission and task execution fail if the Standby NameNode's domain name 
> cannot be resolved on HDFS HA with the DelegationToken feature.
> This issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} 
> instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA 
> mode with security enabled. Since in HDFS HA mode the UGI needs to include a 
> separate token for each NameNode in order to handle Active-Standby switches, 
> the two tokens' contents are of course the same.
> However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} 
> checks whether the address of the NameNode has been resolved; if not, it 
> throws an #IllegalArgumentException, and the job submitter / task executor 
> fails.
> HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think those two 
> tickets resolve it completely.
> Another question many people ask is why the NameNode domain name cannot be 
> resolved. I think there are many scenarios, for instance replacing a node 
> after a fault, or refreshing DNS. In any case, a Standby NameNode failure 
> should not impact Hadoop cluster stability, in my opinion.
> a. code ref: org.apache.hadoop.security.SecurityUtil line373-386
> {code:java}
>   public static Text buildTokenService(InetSocketAddress addr) {
> String host = null;
> if (useIpForTokenService) {
>   if (addr.isUnresolved()) { // host has no ip address
> throw new IllegalArgumentException(
> new UnknownHostException(addr.getHostName())
> );
>   }
>   host = addr.getAddress().getHostAddress();
> } else {
>   host = StringUtils.toLowerCase(addr.getHostName());
> }
> return new Text(host + ":" + addr.getPort());
>   }
> {code}
> b.exception log ref:
> {code:xml}
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Couldn't create proxy provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303)
> at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
> at 

[jira] [Updated] (HADOOP-15856) Trunk build fails to compile native on Windows

2018-10-24 Thread Vinayakumar B (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-15856:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~ayushtkn], [~surendrasingh] and [~elgoiri] for the reviews and verification.

Committed to trunk.

> Trunk build fails to compile native on Windows
> --
>
> Key: HADOOP-15856
> URL: https://issues.apache.org/jira/browse/HADOOP-15856
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-15856-01.patch
>
>
> After the removal of the {{javah}} dependency in HADOOP-15767, the trunk 
> build fails because it is unable to find the JNI headers.
> HADOOP-15767 fixed the javah issue with JDK 10 only for Linux builds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15856) Trunk build fails to compile native on Windows

2018-10-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16661765#comment-16661765
 ] 

Hudson commented on HADOOP-15856:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15303 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15303/])
HADOOP-15856. Trunk build fails to compile native on Windows. (vinayakumarb: 
rev 0ca50648c2b1a05356ce4b0d5c3a3da5ab3a7d02)
* (edit) hadoop-project/pom.xml


> Trunk build fails to compile native on Windows
> --
>
> Key: HADOOP-15856
> URL: https://issues.apache.org/jira/browse/HADOOP-15856
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Blocker
> Attachments: HADOOP-15856-01.patch
>
>
> After the removal of the {{javah}} dependency in HADOOP-15767, the trunk 
> build fails because it is unable to find the JNI headers.
> HADOOP-15767 fixed the javah issue with JDK 10 only for Linux builds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org