[jira] [Commented] (HADOOP-15943) AliyunOSS: add missing owner & group attributes for oss FileStatus

2018-11-19 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692719#comment-16692719
 ] 

wujinhu commented on HADOOP-15943:
--

[~cheersyang] Please help review this patch; it is a minor fix for OSS 
file status. Many thanks. :)

> AliyunOSS: add missing owner & group attributes for oss FileStatus
> --
>
> Key: HADOOP-15943
> URL: https://issues.apache.org/jira/browse/HADOOP-15943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15943.001.patch, HADOOP-15943.002.patch
>
>
> Owner & group attributes are missing when you list OSS objects via the 
> hadoop command:
> {code:java}
> Found 6 items
> drwxrwxrwx - 0 2018-08-01 21:37 /1024
> drwxrwxrwx - 0 2018-10-30 11:07 /50
> -rw-rw-rw- 1 94070 2018-11-08 21:48 /a
> -rw-rw-rw- 1 2441079322 2018-10-31 10:14 /lineitem.csv
> drwxrwxrwx - 0 1970-01-01 08:00 /tmp
> drwxrwxrwx - 0 1970-01-01 08:00 /user
> {code}
>  
> The result should look like the output below (hadoop fs -ls hdfs://master:8020/):
> {code:java}
> Found 5 items
> drwxr-xr-x - hbase hbase 0 2018-11-18 17:31 hdfs://master:8020/hbase
> drwxrwxrwt - hdfs supergroup 0 2018-10-30 11:07 hdfs://master:8020/tmp
> drwxr-xr-x - hdfs supergroup 0 2018-10-30 10:39 hdfs://master:8020/user
> {code}
> However, OSS objects do not have owner & group attributes as Hadoop files 
> do, so we assume both owner & group are the current user at the time the FS 
> was instantiated.
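The defaulting described above can be sketched in a small, dependency-free form (this is only an illustration of the idea, not the actual patch, which constructs the AliyunOSS FileStatus directly):

```java
// Illustrative sketch only, not the actual patch: when the object store
// reports no owner/group, fall back to the current user for both.
public class OwnerDefaulting {
    static String[] ownerAndGroup(String owner, String group, String currentUser) {
        // Treat null or empty attributes as "missing" and substitute the user.
        String o = (owner == null || owner.isEmpty()) ? currentUser : owner;
        String g = (group == null || group.isEmpty()) ? currentUser : group;
        return new String[] { o, g };
    }

    public static void main(String[] args) {
        // OSS returns neither attribute, so both default to the current user.
        String user = System.getProperty("user.name");
        String[] og = ownerAndGroup(null, null, user);
        System.out.println(og[0] + " " + og[1]);
    }
}
```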



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15943) AliyunOSS: add missing owner & group attributes for oss FileStatus

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692709#comment-16692709
 ] 

Hadoop QA commented on HADOOP-15943:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15943 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948802/HADOOP-15943.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 65c59641a2a4 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5fb14e0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15546/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15546/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HADOOP-15943) AliyunOSS: add missing owner & group attributes for oss FileStatus

2018-11-19 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15943:
-
Attachment: HADOOP-15943.002.patch

> AliyunOSS: add missing owner & group attributes for oss FileStatus
> --
>
> Key: HADOOP-15943
> URL: https://issues.apache.org/jira/browse/HADOOP-15943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15943.001.patch, HADOOP-15943.002.patch
>
>
> Owner & group attributes are missing when you list OSS objects via the 
> hadoop command:
> {code:java}
> Found 6 items
> drwxrwxrwx - 0 2018-08-01 21:37 /1024
> drwxrwxrwx - 0 2018-10-30 11:07 /50
> -rw-rw-rw- 1 94070 2018-11-08 21:48 /a
> -rw-rw-rw- 1 2441079322 2018-10-31 10:14 /lineitem.csv
> drwxrwxrwx - 0 1970-01-01 08:00 /tmp
> drwxrwxrwx - 0 1970-01-01 08:00 /user
> {code}
>  
> The result should look like the output below (hadoop fs -ls hdfs://master:8020/):
> {code:java}
> Found 5 items
> drwxr-xr-x - hbase hbase 0 2018-11-18 17:31 hdfs://master:8020/hbase
> drwxrwxrwt - hdfs supergroup 0 2018-10-30 11:07 hdfs://master:8020/tmp
> drwxr-xr-x - hdfs supergroup 0 2018-10-30 10:39 hdfs://master:8020/user
> {code}
> However, OSS objects do not have owner & group attributes as Hadoop files 
> do, so we assume both owner & group are the current user at the time the FS 
> was instantiated.






[jira] [Commented] (HADOOP-15943) AliyunOSS: add missing owner & group attributes for oss FileStatus

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692666#comment-16692666
 ] 

Hadoop QA commented on HADOOP-15943:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15943 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948796/HADOOP-15943.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e054146e94e9 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5fb14e0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15545/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15545/testReport/ |
| Max. process+thread count | 324 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15545/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Updated] (HADOOP-15943) AliyunOSS: add missing owner & group attributes for oss FileStatus

2018-11-19 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15943:
-
Attachment: HADOOP-15943.001.patch
Status: Patch Available  (was: Open)

> AliyunOSS: add missing owner & group attributes for oss FileStatus
> --
>
> Key: HADOOP-15943
> URL: https://issues.apache.org/jira/browse/HADOOP-15943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.3, 3.1.1, 2.9.1, 2.10.0, 3.2.0, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15943.001.patch
>
>
> Owner & group attributes are missing when you list OSS objects via the 
> hadoop command:
> {code:java}
> Found 6 items
> drwxrwxrwx - 0 2018-08-01 21:37 /1024
> drwxrwxrwx - 0 2018-10-30 11:07 /50
> -rw-rw-rw- 1 94070 2018-11-08 21:48 /a
> -rw-rw-rw- 1 2441079322 2018-10-31 10:14 /lineitem.csv
> drwxrwxrwx - 0 1970-01-01 08:00 /tmp
> drwxrwxrwx - 0 1970-01-01 08:00 /user
> {code}
>  
> The result should look like the output below (hadoop fs -ls hdfs://master:8020/):
> {code:java}
> Found 5 items
> drwxr-xr-x - hbase hbase 0 2018-11-18 17:31 hdfs://master:8020/hbase
> drwxrwxrwt - hdfs supergroup 0 2018-10-30 11:07 hdfs://master:8020/tmp
> drwxr-xr-x - hdfs supergroup 0 2018-10-30 10:39 hdfs://master:8020/user
> {code}
> However, OSS objects do not have owner & group attributes as Hadoop files 
> do, so we assume both owner & group are the current user at the time the FS 
> was instantiated.






[jira] [Commented] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692520#comment-16692520
 ] 

Hudson commented on HADOOP-15939:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15465 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15465/])
HADOOP-15939. Filter overlapping objenesis class in (xyao: rev 
397f523e22a4f76b5484bed26ef4e6d40200611e)
* (edit) hadoop-client-modules/hadoop-client-minicluster/pom.xml


> Filter overlapping objenesis class in hadoop-client-minicluster 
> 
>
> Key: HADOOP-15939
> URL: https://issues.apache.org/jira/browse/HADOOP-15939
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15939.001.patch
>
>
> As mentioned here and found with the latest Jenkins shadedclient run, 
> Jenkins does not provide a detailed output file for the failure, but it can 
> be reproduced with the following command:
> {code:java}
> mvn verify -fae --batch-mode -am -pl 
> hadoop-client-modules/hadoop-client-check-invariants -pl 
> hadoop-client-modules/hadoop-client-check-test-invariants -pl 
> hadoop-client-modules/hadoop-client-integration-tests -Dtest=NoUnitTests 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true
> {code}
> Error Message:
> {code:java}
> [WARNING] objenesis-1.0.jar, mockito-all-1.8.5.jar define 30 overlapping 
> classes: 
> [WARNING]   - org.objenesis.ObjenesisBase
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiator
> [WARNING]   - org.objenesis.ObjenesisHelper
> [WARNING]   - org.objenesis.instantiator.jrockit.JRockitLegacyInstantiator
> [WARNING]   - org.objenesis.instantiator.sun.SunReflectionFactoryInstantiator
> [WARNING]   - org.objenesis.instantiator.ObjectInstantiator
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiatorBase$DummyStream
> [WARNING]   - org.objenesis.instantiator.basic.ObjectStreamClassInstantiator
> [WARNING]   - org.objenesis.ObjenesisException
> [WARNING]   - org.objenesis.Objenesis
> [WARNING]   - 20 more...
> [WARNING] maven-shade-plugin has detected that some class files are
> [WARNING] present in two or more JARs. When this happens, only one
> [WARNING] single version of the class is copied to the uber jar.
> [WARNING] Usually this is not harmful and you can skip these warnings,
> [WARNING] otherwise try to manually exclude artifacts based on
> [WARNING] mvn dependency:tree -Ddetail=true and the above output.
> [WARNING] See [http://maven.apache.org/plugins/maven-shade-plugin/]
> [INFO] Replacing original artifact with shaded artifact.
> {code}
>  
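The usual remedy for such shaded-jar overlaps is a filter in the maven-shade-plugin configuration so that only one copy of the duplicated classes reaches the uber jar. A hypothetical sketch (the artifact coordinates and patterns below are assumptions for illustration, not the actual patch):

```xml
<!-- Hypothetical sketch: drop the org.objenesis classes bundled inside
     mockito-all, leaving the standalone objenesis jar as the single source. -->
<filter>
  <artifact>org.mockito:mockito-all</artifact>
  <excludes>
    <exclude>org/objenesis/**</exclude>
  </excludes>
</filter>
```

Which artifact to filter depends on the dependency tree; `mvn dependency:tree -Ddetail=true` shows which jars bundle the overlapping packages.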






[jira] [Updated] (HADOOP-15942) Change the logging level from DEBUG to ERROR for RuntimeErrorException

2018-11-19 Thread Anuhan Torgonshar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuhan Torgonshar updated HADOOP-15942:
---
Description: In JMXJsonServlet.java, when the MBeanServer.getAttribute() 
method is invoked, many catch clauses follow, each containing a log statement; 
most of them log at the ERROR level. However, when RuntimeErrorException is 
caught at line 348 (r1839798), the log statement in that catch clause is at 
the DEBUG level. The comment there indicates that an unexpected failure 
occurred in the getAttribute method, so I think the logging level should be 
ERROR as well.  (was: In JMXJsonServlet.java, when the 
MBeanServer.getAttribute() method is invoked, many catch clauses follow, each 
containing a log statement. However, when RuntimeErrorException is caught at 
line 348 (r1839798), the log statement in that catch clause is at the DEBUG 
level. The comment there indicates that an unexpected failure occurred in the 
getAttribute method, so I think the logging level should be ERROR as well.)

> Change the logging level from DEBUG to ERROR for RuntimeErrorException
> --
>
> Key: HADOOP-15942
> URL: https://issues.apache.org/jira/browse/HADOOP-15942
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: HADOOP-10388, 3.1.0
>Reporter: Anuhan Torgonshar
>Priority: Major
> Attachments: JMXJsonServlet.java
>
>
> In JMXJsonServlet.java, when the MBeanServer.getAttribute() method is 
> invoked, many catch clauses follow, each containing a log statement; most of 
> them log at the ERROR level. However, when RuntimeErrorException is caught 
> at line 348 (r1839798), the log statement in that catch clause is at the 
> DEBUG level. The comment there indicates that an unexpected failure occurred 
> in the getAttribute method, so I think the logging level should be ERROR as 
> well.
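The proposed change can be illustrated with a small, self-contained sketch. Note the assumptions: java.util.logging stands in for the servlet's real logging API, and the `levelFor` helper is invented for illustration; only `javax.management.RuntimeErrorException` comes from the JDK itself.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.management.RuntimeErrorException;

public class JmxLoggingSketch {
    private static final Logger LOG = Logger.getLogger("JMXJsonServlet");

    // Hypothetical helper expressing the proposal: a RuntimeErrorException
    // escaping getAttribute() is unexpected, so log at ERROR (SEVERE here),
    // not DEBUG (FINE here).
    static Level levelFor(Throwable t) {
        return (t instanceof RuntimeErrorException) ? Level.SEVERE : Level.FINE;
    }

    public static void main(String[] args) {
        // RuntimeErrorException wraps a java.lang.Error thrown by an MBean.
        Throwable t = new RuntimeErrorException(new InternalError("unexpected"));
        LOG.log(levelFor(t), "getAttribute failed unexpectedly", t);
    }
}
```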






[jira] [Updated] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-15939:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks all for the reviews. I've committed the patch to trunk.

> Filter overlapping objenesis class in hadoop-client-minicluster 
> 
>
> Key: HADOOP-15939
> URL: https://issues.apache.org/jira/browse/HADOOP-15939
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15939.001.patch
>
>
> As mentioned here and found with the latest Jenkins shadedclient run, 
> Jenkins does not provide a detailed output file for the failure, but it can 
> be reproduced with the following command:
> {code:java}
> mvn verify -fae --batch-mode -am -pl 
> hadoop-client-modules/hadoop-client-check-invariants -pl 
> hadoop-client-modules/hadoop-client-check-test-invariants -pl 
> hadoop-client-modules/hadoop-client-integration-tests -Dtest=NoUnitTests 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true
> {code}
> Error Message:
> {code:java}
> [WARNING] objenesis-1.0.jar, mockito-all-1.8.5.jar define 30 overlapping 
> classes: 
> [WARNING]   - org.objenesis.ObjenesisBase
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiator
> [WARNING]   - org.objenesis.ObjenesisHelper
> [WARNING]   - org.objenesis.instantiator.jrockit.JRockitLegacyInstantiator
> [WARNING]   - org.objenesis.instantiator.sun.SunReflectionFactoryInstantiator
> [WARNING]   - org.objenesis.instantiator.ObjectInstantiator
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiatorBase$DummyStream
> [WARNING]   - org.objenesis.instantiator.basic.ObjectStreamClassInstantiator
> [WARNING]   - org.objenesis.ObjenesisException
> [WARNING]   - org.objenesis.Objenesis
> [WARNING]   - 20 more...
> [WARNING] maven-shade-plugin has detected that some class files are
> [WARNING] present in two or more JARs. When this happens, only one
> [WARNING] single version of the class is copied to the uber jar.
> [WARNING] Usually this is not harmful and you can skip these warnings,
> [WARNING] otherwise try to manually exclude artifacts based on
> [WARNING] mvn dependency:tree -Ddetail=true and the above output.
> [WARNING] See [http://maven.apache.org/plugins/maven-shade-plugin/]
> [INFO] Replacing original artifact with shaded artifact.
> {code}
>  






[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692228#comment-16692228
 ] 

Hadoop QA commented on HADOOP-14556:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 38 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
12s{color} | {color:green} root generated 0 new + 1448 unchanged - 1 fixed = 
1448 total (was 1449) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 48s{color} | {color:orange} root: The patch generated 31 new + 167 unchanged 
- 11 fixed = 198 total (was 178) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 98 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
8s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
25s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | 

[jira] [Commented] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692144#comment-16692144
 ] 

Xiaoyu Yao commented on HADOOP-15939:
-

Thanks [~busbey] and [~ste...@apache.org]. As I mentioned in the description, 
the QA bot only shows the failure; the output file is not uploaded. I will 
open a YETUS ticket to get that fixed. Rerunning the same command as the 
Jenkins job easily reproduces it.

My fix is similar to the one [~busbey] put in earlier for the hamcrest 
classes. The recent conflict on objenesis could relate to the change from 
-YARN-8338,- where objenesis was added as a dependency to 
hadoop-project/pom.xml.

> Filter overlapping objenesis class in hadoop-client-minicluster 
> 
>
> Key: HADOOP-15939
> URL: https://issues.apache.org/jira/browse/HADOOP-15939
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HADOOP-15939.001.patch
>
>
> As mentioned here and found with the latest Jenkins shadedclient run, 
> Jenkins does not provide a detailed output file for the failure, but it can 
> be reproduced with the following command:
> {code:java}
> mvn verify -fae --batch-mode -am -pl 
> hadoop-client-modules/hadoop-client-check-invariants -pl 
> hadoop-client-modules/hadoop-client-check-test-invariants -pl 
> hadoop-client-modules/hadoop-client-integration-tests -Dtest=NoUnitTests 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true
> {code}
> Error Message:
> {code:java}
> [WARNING] objenesis-1.0.jar, mockito-all-1.8.5.jar define 30 overlapping 
> classes: 
> [WARNING]   - org.objenesis.ObjenesisBase
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiator
> [WARNING]   - org.objenesis.ObjenesisHelper
> [WARNING]   - org.objenesis.instantiator.jrockit.JRockitLegacyInstantiator
> [WARNING]   - org.objenesis.instantiator.sun.SunReflectionFactoryInstantiator
> [WARNING]   - org.objenesis.instantiator.ObjectInstantiator
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiatorBase$DummyStream
> [WARNING]   - org.objenesis.instantiator.basic.ObjectStreamClassInstantiator
> [WARNING]   - org.objenesis.ObjenesisException
> [WARNING]   - org.objenesis.Objenesis
> [WARNING]   - 20 more...
> [WARNING] maven-shade-plugin has detected that some class files are
> [WARNING] present in two or more JARs. When this happens, only one
> [WARNING] single version of the class is copied to the uber jar.
> [WARNING] Usually this is not harmful and you can skip these warnings,
> [WARNING] otherwise try to manually exclude artifacts based on
> [WARNING] mvn dependency:tree -Ddetail=true and the above output.
> [WARNING] See [http://maven.apache.org/plugins/maven-shade-plugin/]
> [INFO] Replacing original artifact with shaded artifact.
> {code}
>  






[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-11-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Status: Patch Available  (was: Open)

Patch 020
* adds a DtFetcher binding (a copy and paste of the HDFS one, plus a new service entry), with a test
* various improvements to strings and logging
* minor cleanup.

I have used this in real distcp jobs: it works.

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.






[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-11-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Attachment: HADOOP-14556-020.patch

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.






[jira] [Commented] (HADOOP-15940) ABFS: For HNS account, avoid unnecessary get call when doing Rename

2018-11-19 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692090#comment-16692090
 ] 

Da Zhou commented on HADOOP-15940:
--

[~tmarquardt], thanks for the review.
 Yes, the patch missed the case for FileNotFoundException.

For the behavior of renaming a directory:
 - The HDFS contract [tests|https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java#L765] assume that renaming a directory onto itself should fail.
 - The HDFS [doc|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html#boolean_renamePath_src_Path_d] says "Renaming a directory onto itself is no-op; return value is not specified. In POSIX the result is False; in HDFS the result is True."
 - The fix you provided also returns false, because the service returns an "INVALID_RENAME_SOURCE_PATH" error code for this case.

Based on the above, do you think we should override the contract test so that renameDirToItself returns *false* for this corner case?

> ABFS: For HNS account, avoid unnecessary get call when doing Rename
> ---
>
> Key: HADOOP-15940
> URL: https://issues.apache.org/jira/browse/HADOOP-15940
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15940-001.patch
>
>
> When renaming, there is always a GET call for the destination file status;
> this is not necessary.






[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-11-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Status: Open  (was: Patch Available)

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.






[jira] [Commented] (HADOOP-15940) ABFS: For HNS account, avoid unnecessary get call when doing Rename

2018-11-19 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692082#comment-16692082
 ] 

Da Zhou commented on HADOOP-15940:
--

Sorry, I made a mistake in my previous comment; I have highlighted the correction:
 1. Renaming a file to itself should return true.
 2. Renaming a dir to itself should return *false* (according to the [contract test,|https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java#L765] this should return false).
 3. Moving a file/dir to the same parent folder returns true.
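The three rules above can be captured in a small helper; this is a hypothetical toy model for illustration only, not the ABFS implementation (names and structure are assumptions):

```java
// Toy model of the expected rename return values discussed above.
// Hypothetical illustration; not taken from the ABFS driver.
public class RenameSemantics {

    // src/dst are normalized absolute paths; isDir says whether src is a directory.
    static boolean expectedRenameResult(String src, String dst, boolean isDir) {
        if (src.equals(dst)) {
            // Rule 1: file onto itself -> true.  Rule 2: dir onto itself -> false.
            return !isDir;
        }
        // Rule 3: moving to a different destination succeeds.
        return true;
    }

    public static void main(String[] args) {
        System.out.println(expectedRenameResult("/a/f", "/a/f", false)); // true
        System.out.println(expectedRenameResult("/a/d", "/a/d", true));  // false
        System.out.println(expectedRenameResult("/a/f", "/b/f", false)); // true
    }
}
```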

> ABFS: For HNS account, avoid unnecessary get call when doing Rename
> ---
>
> Key: HADOOP-15940
> URL: https://issues.apache.org/jira/browse/HADOOP-15940
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15940-001.patch
>
>
> When renaming, there is always a GET call for the destination file status;
> this is not necessary.






[jira] [Updated] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15932:

Target Version/s: 3.0.4, 3.1.2, 3.2.1

> Oozie unable to create sharelib in s3a filesystem
> -
>
> Key: HADOOP-15932
> URL: https://issues.apache.org/jira/browse/HADOOP-15932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.0.0
>Reporter: Soumitra Sulav
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15932-001.patch
>
>
> The Oozie server is unable to start because of the exception below.
> s3a expects a file to copy into the store, but sharelib is a folder
> containing all the needed component jars.
> Hence it throws the exception:
> _Not a file: /usr/hdp/current/oozie-server/share/lib_
> {code:java}
> [oozie@sg-hdp1 ~]$ /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib 
> create -fs s3a://hdp -locallib /usr/hdp/current/oozie-server/share
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
> 518 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 605 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
> 619 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating 
> Configuration
> the destination path for sharelib is: /user/oozie/share/lib/lib_20181114154552
> log4j:WARN No appenders could be found for logger 
> (org.apache.htrace.core.Tracer).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> 1118 [main] WARN org.apache.hadoop.metrics2.impl.MetricsConfig - Cannot 
> locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> Scheduled Metric snapshot period at 10 second(s).
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> s3a-file-system metrics system started
> 2255 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> fs.s3a.server-side-encryption-key is deprecated. Instead, use 
> fs.s3a.server-side-encryption.key
> Error: Not a file: /usr/hdp/current/oozie-server/share/lib
> Stack trace for the error was (for debug purposes):
> --
> java.io.FileNotFoundException: Not a file: 
> /usr/hdp/current/oozie-server/share/lib
>   at 
> 

[jira] [Assigned] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15932:
---

Assignee: Steve Loughran

> Oozie unable to create sharelib in s3a filesystem
> -
>
> Key: HADOOP-15932
> URL: https://issues.apache.org/jira/browse/HADOOP-15932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.0.0
>Reporter: Soumitra Sulav
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15932-001.patch
>
>
> The Oozie server is unable to start because of the exception below.
> s3a expects a file to copy into the store, but sharelib is a folder
> containing all the needed component jars.
> Hence it throws the exception:
> _Not a file: /usr/hdp/current/oozie-server/share/lib_
> {code:java}
> [oozie@sg-hdp1 ~]$ /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib 
> create -fs s3a://hdp -locallib /usr/hdp/current/oozie-server/share
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
> 518 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 605 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
> 619 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating 
> Configuration
> the destination path for sharelib is: /user/oozie/share/lib/lib_20181114154552
> log4j:WARN No appenders could be found for logger 
> (org.apache.htrace.core.Tracer).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> 1118 [main] WARN org.apache.hadoop.metrics2.impl.MetricsConfig - Cannot 
> locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> Scheduled Metric snapshot period at 10 second(s).
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> s3a-file-system metrics system started
> 2255 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> fs.s3a.server-side-encryption-key is deprecated. Instead, use 
> fs.s3a.server-side-encryption.key
> Error: Not a file: /usr/hdp/current/oozie-server/share/lib
> Stack trace for the error was (for debug purposes):
> --
> java.io.FileNotFoundException: Not a file: 
> /usr/hdp/current/oozie-server/share/lib
>   at 
> 

[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-11-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691900#comment-16691900
 ] 

Steve Loughran commented on HADOOP-15870:
-

thanks for the test run.

For the patch, use git diff:

{code}
 git diff trunk...HEAD > HADOOP-15870-002.patch
{code}

Attach the file to the other JIRA, then hit "cancel patch" followed by "submit patch".

thanks

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
>
> Otherwise `remainingInFile` will not change after `seek`.
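A minimal, self-contained sketch of the bug (a hypothetical stand-in class, not the real S3AInputStream): with lazy seek, seek() only records the target in nextReadPos, so computing the remaining bytes from the last actual read position does not reflect a pending seek:

```java
// Hypothetical stand-in for a lazy-seek input stream such as S3AInputStream.
// seek() only records the target position; the real repositioning happens on
// the next read. "Remaining" computed from the stale read position (pos)
// therefore does not change after a seek, while nextReadPos does.
public class LazySeekDemo {
    static class LazySeekStream {
        final long contentLength;
        long pos;          // position of the last actual read
        long nextReadPos;  // target recorded by a lazy seek

        LazySeekStream(long contentLength) { this.contentLength = contentLength; }

        void seek(long target) { nextReadPos = target; } // pos is untouched

        long remainingBuggy() { return contentLength - pos; }         // ignores pending seek
        long remainingFixed() { return contentLength - nextReadPos; } // reflects it
    }

    public static void main(String[] args) {
        LazySeekStream s = new LazySeekStream(100);
        s.seek(40);
        System.out.println(s.remainingBuggy()); // 100: unchanged after seek
        System.out.println(s.remainingFixed()); // 60
    }
}
```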






[jira] [Commented] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691887#comment-16691887
 ] 

Hadoop QA commented on HADOOP-15932:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-tools_hadoop-aws generated 0 new + 17 
unchanged - 1 fixed = 17 total (was 18) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
32s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15932 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948728/HADOOP-15932-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 62fa7848e2a7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9366608 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15543/testReport/ |
| Max. process+thread count | 453 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15543/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Oozie unable to create sharelib in s3a filesystem
> -
>
>  

[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-11-19 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691871#comment-16691871
 ] 

lqjacklee commented on HADOOP-15870:


[~ste...@apache.org] I verified the function in the ap-southeast-1 region.
Also, how can I attach the patch file? Please help, thanks a lot.

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Updated] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15932:

Status: Patch Available  (was: Open)

Tested: S3A Ireland.

All good except for {{ITestS3AMiniYarnCluster}} failing with a
ClassNotFoundException for Bouncy Castle. HADOOP-14556 fixes that, and it's
not an issue for older releases.

> Oozie unable to create sharelib in s3a filesystem
> -
>
> Key: HADOOP-15932
> URL: https://issues.apache.org/jira/browse/HADOOP-15932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.0.0
>Reporter: Soumitra Sulav
>Priority: Critical
> Attachments: HADOOP-15932-001.patch
>
>
> The Oozie server is unable to start because of the exception below.
> s3a expects a file to copy into the store, but sharelib is a folder
> containing all the needed component jars.
> Hence it throws the exception:
> _Not a file: /usr/hdp/current/oozie-server/share/lib_
> {code:java}
> [oozie@sg-hdp1 ~]$ /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib 
> create -fs s3a://hdp -locallib /usr/hdp/current/oozie-server/share
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
> 518 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 605 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
> 619 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating 
> Configuration
> the destination path for sharelib is: /user/oozie/share/lib/lib_20181114154552
> log4j:WARN No appenders could be found for logger 
> (org.apache.htrace.core.Tracer).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> 1118 [main] WARN org.apache.hadoop.metrics2.impl.MetricsConfig - Cannot 
> locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> Scheduled Metric snapshot period at 10 second(s).
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> s3a-file-system metrics system started
> 2255 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> fs.s3a.server-side-encryption-key is deprecated. Instead, use 
> fs.s3a.server-side-encryption.key
> Error: Not a file: /usr/hdp/current/oozie-server/share/lib
> Stack trace for the error was (for debug purposes):
> --
> 

[jira] [Commented] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691800#comment-16691800
 ] 

Sean Busbey commented on HADOOP-15939:
--

Looking at [the source for 
mockito-1.8.5|https://github.com/mockito/mockito/tree/v1.8.5] I think it's 
objenesis 1.0 they're shipping.

+1, this looks like the right approach to me. I'd prefer it if the QA bot 
showed a failure for the overlapping classes, but that can be its own issue.
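For context, the usual maven-shade-plugin mechanism for dropping duplicated classes is a per-artifact filter; the sketch below illustrates that mechanism and is not necessarily the exact change in the attached patch (the artifact coordinates and exclude pattern are assumptions):

```xml
<!-- Sketch: exclude the org.objenesis classes bundled inside mockito-all so
     they cannot collide with the standalone objenesis jar during shading. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <filters>
      <filter>
        <artifact>org.mockito:mockito-all</artifact>
        <excludes>
          <exclude>org/objenesis/**</exclude>
        </excludes>
      </filter>
    </filters>
  </configuration>
</plugin>
```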

> Filter overlapping objenesis class in hadoop-client-minicluster 
> 
>
> Key: HADOOP-15939
> URL: https://issues.apache.org/jira/browse/HADOOP-15939
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-15939.001.patch
>
>
> As mentioned here and found with the latest Jenkins shadedclient run. 
> Jenkins does not provide a detailed output file for the failure, but it 
> can be reproduced with the following command:
> {code:java}
> mvn verify -fae --batch-mode -am -pl 
> hadoop-client-modules/hadoop-client-check-invariants -pl 
> hadoop-client-modules/hadoop-client-check-test-invariants -pl 
> hadoop-client-modules/hadoop-client-integration-tests -Dtest=NoUnitTests 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true
> {code}
> Error Message:
> {code:java}
> [WARNING] objenesis-1.0.jar, mockito-all-1.8.5.jar define 30 overlapping 
> classes: 
> [WARNING]   - org.objenesis.ObjenesisBase
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiator
> [WARNING]   - org.objenesis.ObjenesisHelper
> [WARNING]   - org.objenesis.instantiator.jrockit.JRockitLegacyInstantiator
> [WARNING]   - org.objenesis.instantiator.sun.SunReflectionFactoryInstantiator
> [WARNING]   - org.objenesis.instantiator.ObjectInstantiator
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiatorBase$DummyStream
> [WARNING]   - org.objenesis.instantiator.basic.ObjectStreamClassInstantiator
> [WARNING]   - org.objenesis.ObjenesisException
> [WARNING]   - org.objenesis.Objenesis
> [WARNING]   - 20 more...
> [WARNING] maven-shade-plugin has detected that some class files are
> [WARNING] present in two or more JARs. When this happens, only one
> [WARNING] single version of the class is copied to the uber jar.
> [WARNING] Usually this is not harmful and you can skip these warnings,
> [WARNING] otherwise try to manually exclude artifacts based on
> [WARNING] mvn dependency:tree -Ddetail=true and the above output.
> [WARNING] See [http://maven.apache.org/plugins/maven-shade-plugin/]
> [INFO] Replacing original artifact with shaded artifact.
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15939:
-
Priority: Minor  (was: Major)




[jira] [Comment Edited] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691778#comment-16691778
 ] 

Sean Busbey edited comment on HADOOP-15939 at 11/19/18 2:46 PM:


I can put this in my review queue for Wednesday. Is that fast enough?

(*Edit*: Never mind. I'm reviewing now.)


was (Author: busbey):
I can put this in my review queue for Wednesday. is that fast enough?




[jira] [Commented] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691788#comment-16691788
 ] 

Sean Busbey commented on HADOOP-15939:
--

lol. The frowny-face comment from past-me means we probably have to include it.

What version of objenesis does mockito-all include?




[jira] [Commented] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691785#comment-16691785
 ] 

Sean Busbey commented on HADOOP-15939:
--

An initial question: Why is mockito-all being included in the client facing 
minicluster artifact? Do the minicluster classes really have a hard dependency 
on it or are we leaking something we use for internal testing?




[jira] [Commented] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691780#comment-16691780
 ] 

Steve Loughran commented on HADOOP-15932:
-

Patch 001: an emergency fix which removes the custom "performant" file copy and 
replaces it with the slow-but-handles-directories one.

* Pulls up the entry counter and logging, so that there's still some tracking 
of what is happening.
* Fixes tests where the reverted-to code throws different exceptions and/or has 
different messages.
* Disables any test which no longer fails on a directory source/dest.
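The distinction being reverted to — a copy routine that assumes a single-file source versus one that walks a directory tree — can be sketched in plain java.nio. This is an illustrative standalone sketch, not the actual patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

public class RecursiveCopy {

    // The "performant" style: a single-object copy that rejects
    // directories, mirroring the "Not a file: ..." error above.
    public static void copyFileOnly(Path src, Path dst) throws IOException {
        if (Files.isDirectory(src)) {
            throw new IOException("Not a file: " + src);
        }
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
    }

    // The slow-but-handles-directories style: walk the tree and copy
    // each regular file, creating parent directories as needed.
    public static void copyRecursive(Path src, Path dst) throws IOException {
        try (Stream<Path> walk = Files.walk(src)) {
            for (Path p : (Iterable<Path>) walk::iterator) {
                Path target = dst.resolve(src.relativize(p).toString());
                if (Files.isDirectory(p)) {
                    Files.createDirectories(target);
                } else {
                    Files.copy(p, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}
```

A sharelib tree handed to the first method fails immediately; handed to the second, it copies, just with one operation per entry rather than a single bulk copy.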

> Oozie unable to create sharelib in s3a filesystem
> -
>
> Key: HADOOP-15932
> URL: https://issues.apache.org/jira/browse/HADOOP-15932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.0.0
>Reporter: Soumitra Sulav
>Priority: Critical
> Attachments: HADOOP-15932-001.patch
>
>
> The Oozie server is unable to start because of the exception below.
> S3A expects a single file when copying into the store, but the sharelib is 
> a directory containing all the needed component jars.
> Hence it throws the exception:
> _Not a file: /usr/hdp/current/oozie-server/share/lib_
> {code:java}
> [oozie@sg-hdp1 ~]$ /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib 
> create -fs s3a://hdp -locallib /usr/hdp/current/oozie-server/share
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
> 518 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 605 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
> 619 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating 
> Configuration
> the destination path for sharelib is: /user/oozie/share/lib/lib_20181114154552
> log4j:WARN No appenders could be found for logger 
> (org.apache.htrace.core.Tracer).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> 1118 [main] WARN org.apache.hadoop.metrics2.impl.MetricsConfig - Cannot 
> locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> Scheduled Metric snapshot period at 10 second(s).
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> s3a-file-system metrics system started
> 2255 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> fs.s3a.server-side-encryption-key 

[jira] [Commented] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691778#comment-16691778
 ] 

Sean Busbey commented on HADOOP-15939:
--

I can put this in my review queue for Wednesday. Is that fast enough?




[jira] [Updated] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15932:

Attachment: HADOOP-15932-001.patch


[jira] [Commented] (HADOOP-15865) ConcurrentModificationException in Configuration.overlay() method

2018-11-19 Thread Oleksandr Shevchenko (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691758#comment-16691758
 ] 

Oleksandr Shevchenko commented on HADOOP-15865:
---

Could someone kindly review the patch?
Thanks!

> ConcurrentModificationException in Configuration.overlay() method
> -
>
> Key: HADOOP-15865
> URL: https://issues.apache.org/jira/browse/HADOOP-15865
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Oleksandr Shevchenko
>Assignee: Oleksandr Shevchenko
>Priority: Major
> Attachments: HADOOP-15865.001.patch
>
>
> Configuration.overlay() is not thread-safe and can cause a 
> ConcurrentModificationException, since it iterates over a Properties object:
> {code}
> private void overlay(Properties to, Properties from) {
>   for (Entry<Object, Object> entry : from.entrySet()) {
>     to.put(entry.getKey(), entry.getValue());
>   }
> }
> {code}
> The Properties class is thread-safe, but its iterator is not. We should 
> manually synchronize on the returned set of entries while iterating over it.
> We hit ResourceManager failures during recovery caused by a 
> ConcurrentModificationException:
> {noformat}
> 2018-10-12 08:00:56,968 INFO org.apache.hadoop.service.AbstractService: 
> Service ResourceManager failed in state STARTED; cause: 
> java.util.ConcurrentModificationException
> java.util.ConcurrentModificationException
>  at java.util.Hashtable$Enumerator.next(Hashtable.java:1383)
>  at org.apache.hadoop.conf.Configuration.overlay(Configuration.java:2801)
>  at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2696)
>  at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2632)
>  at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2528)
>  at org.apache.hadoop.conf.Configuration.get(Configuration.java:1062)
>  at 
> org.apache.hadoop.conf.Configuration.getStringCollection(Configuration.java:1914)
>  at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:53)
>  at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2043)
>  at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2023)
>  at 
> org.apache.hadoop.yarn.webapp.util.WebAppUtils.getPassword(WebAppUtils.java:452)
>  at 
> org.apache.hadoop.yarn.webapp.util.WebAppUtils.loadSslConfiguration(WebAppUtils.java:428)
>  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:293)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1017)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1117)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1251)
> 2018-10-12 08:00:56,968 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager:
>  removing RMDelegation token with sequence number: 3489914
> 2018-10-12 08:00:56,968 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Removing 
> RMDelegationToken and SequenceNumber
> 2018-10-12 08:00:56,968 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore:
>  Removing RMDelegationToken_3489914
> 2018-10-12 08:00:56,969 INFO org.apache.hadoop.ipc.Server: Stopping server on 
> 8032
> {noformat}
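The synchronization described in the report can be sketched as a standalone method — an illustrative version of the proposed fix, not the patch itself. Since Properties extends Hashtable, whose synchronized methods lock the instance, holding the source Properties' monitor during iteration blocks concurrent puts and removes:

```java
import java.util.Map;
import java.util.Properties;

public class OverlayFix {

    // Sketch of a thread-safe overlay: Properties (a Hashtable) locks
    // itself in put()/remove(), so synchronizing on 'from' keeps other
    // threads from mutating it while we iterate its entry set.
    public static void overlay(Properties to, Properties from) {
        synchronized (from) {
            for (Map.Entry<Object, Object> entry : from.entrySet()) {
                to.put(entry.getKey(), entry.getValue());
            }
        }
    }
}
```

Without the synchronized block, another thread calling `from.put()` mid-iteration would trigger the fail-fast `ConcurrentModificationException` seen in the ResourceManager stack trace.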






[jira] [Commented] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691712#comment-16691712
 ] 

Steve Loughran commented on HADOOP-15939:
-

I'm not familiar enough with these modules to be a safe reviewer.

[~busbey]: have you got a chance to look at this?

> Filter overlapping objenesis class in hadoop-client-minicluster 
> 
>
> Key: HADOOP-15939
> URL: https://issues.apache.org/jira/browse/HADOOP-15939
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-15939.001.patch
>
>
> As mentioned here and found in with latest Jenkins shadedclient.
> Jenkins does not provide a detailed output file for the failure though. But 
> it can be reproed with the following cmd:
> {code:java}
> mvn verify -fae --batch-mode -am -pl 
> hadoop-client-modules/hadoop-client-check-invariants -pl 
> hadoop-client-modules/hadoop-client-check-test-invariants -pl 
> hadoop-client-modules/hadoop-client-integration-tests -Dtest=NoUnitTests 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true
> {code}
> Error Message:
> {code:java}
> [WARNING] objenesis-1.0.jar, mockito-all-1.8.5.jar define 30 overlapping 
> classes: 
> [WARNING]   - org.objenesis.ObjenesisBase
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiator
> [WARNING]   - org.objenesis.ObjenesisHelper
> [WARNING]   - org.objenesis.instantiator.jrockit.JRockitLegacyInstantiator
> [WARNING]   - org.objenesis.instantiator.sun.SunReflectionFactoryInstantiator
> [WARNING]   - org.objenesis.instantiator.ObjectInstantiator
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiatorBase$DummyStream
> [WARNING]   - org.objenesis.instantiator.basic.ObjectStreamClassInstantiator
> [WARNING]   - org.objenesis.ObjenesisException
> [WARNING]   - org.objenesis.Objenesis
> [WARNING]   - 20 more...
> [WARNING] maven-shade-plugin has detected that some class files are
> [WARNING] present in two or more JARs. When this happens, only one
> [WARNING] single version of the class is copied to the uber jar.
> [WARNING] Usually this is not harmful and you can skip these warnings,
> [WARNING] otherwise try to manually exclude artifacts based on
> [WARNING] mvn dependency:tree -Ddetail=true and the above output.
> [WARNING] See [http://maven.apache.org/plugins/maven-shade-plugin/]
> [INFO] Replacing original artifact with shaded artifact.
> {code}
>  
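For reference, a minimal sketch of the kind of maven-shade-plugin filter that could drop the duplicated Objenesis classes from the mockito-all artifact. This is a hedged illustration only: the artifact coordinates and exclude pattern are assumptions, and the actual patch may take a different approach.

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <filters>
      <!-- mockito-all bundles its own copy of Objenesis; keep only the
           classes from the standalone objenesis artifact. -->
      <filter>
        <artifact>org.mockito:mockito-all</artifact>
        <excludes>
          <exclude>org/objenesis/**</exclude>
        </excludes>
      </filter>
    </filters>
  </configuration>
</plugin>
```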



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15939:

Component/s: build

> Filter overlapping objenesis class in hadoop-client-minicluster 
> 
>
> Key: HADOOP-15939
> URL: https://issues.apache.org/jira/browse/HADOOP-15939
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-15939.001.patch
>
>
> As mentioned here and found with the latest Jenkins shadedclient run.
> Jenkins does not provide a detailed output file for the failure, but it can 
> be reproduced with the following command:
> {code:java}
> mvn verify -fae --batch-mode -am -pl 
> hadoop-client-modules/hadoop-client-check-invariants -pl 
> hadoop-client-modules/hadoop-client-check-test-invariants -pl 
> hadoop-client-modules/hadoop-client-integration-tests -Dtest=NoUnitTests 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true
> {code}
> Error Message:
> {code:java}
> [WARNING] objenesis-1.0.jar, mockito-all-1.8.5.jar define 30 overlapping 
> classes: 
> [WARNING]   - org.objenesis.ObjenesisBase
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiator
> [WARNING]   - org.objenesis.ObjenesisHelper
> [WARNING]   - org.objenesis.instantiator.jrockit.JRockitLegacyInstantiator
> [WARNING]   - org.objenesis.instantiator.sun.SunReflectionFactoryInstantiator
> [WARNING]   - org.objenesis.instantiator.ObjectInstantiator
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiatorBase$DummyStream
> [WARNING]   - org.objenesis.instantiator.basic.ObjectStreamClassInstantiator
> [WARNING]   - org.objenesis.ObjenesisException
> [WARNING]   - org.objenesis.Objenesis
> [WARNING]   - 20 more...
> [WARNING] maven-shade-plugin has detected that some class files are
> [WARNING] present in two or more JARs. When this happens, only one
> [WARNING] single version of the class is copied to the uber jar.
> [WARNING] Usually this is not harmful and you can skip these warnings,
> [WARNING] otherwise try to manually exclude artifacts based on
> [WARNING] mvn dependency:tree -Ddetail=true and the above output.
> [WARNING] See [http://maven.apache.org/plugins/maven-shade-plugin/]
> [INFO] Replacing original artifact with shaded artifact.
> {code}
>  






[jira] [Created] (HADOOP-15944) S3AInputStream logging to make it easier to debug file leakage

2018-11-19 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15944:
---

 Summary: S3AInputStream logging to make it easier to debug file 
leakage
 Key: HADOOP-15944
 URL: https://issues.apache.org/jira/browse/HADOOP-15944
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.1
Reporter: Steve Loughran


Problem: if an app opens too many input streams, then all the HTTP connections 
in the S3A pool can be used up; all other FS operations then fail, timing out 
while waiting for HTTP pool access.

Proposed simple solution: log better what's going on with input stream 
lifecycle, specifically

# include URL of file in open, reopen & close events
# maybe: Separate logger for these events, though S3A Input stream should be 
enough as it doesn't do much else.
# maybe: have some prefix in the events like "Lifecycle", so that you could use 
the existing log @ debug, grep for that phrase and look at the printed URLs to 
identify what's going on
# stream metrics: expose some of the state of the http connection pool and/or 
active input and output streams

Idle output streams don't use up http connections, as they only connect during 
block upload.
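The numbered proposals above could be sketched in plain Java roughly as follows. This is a hedged illustration only: the class, method, and message format are assumptions, not the real S3AInputStream code, which would presumably log through SLF4J at debug level.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of the proposed lifecycle logging: every open/reopen/close event
 * carries the object URL plus a shared "Lifecycle" prefix, so the events can
 * be grepped out of an existing debug log; an active-stream counter gives a
 * rough view of pool pressure.
 */
public class StreamLifecycleLog {
    private static final String PREFIX = "Lifecycle";
    private static final AtomicLong OPEN_STREAMS = new AtomicLong();

    static String event(String kind, String url) {
        // open acquires a connection slot, close releases it, reopen reuses it
        long active = kind.equals("close") ? OPEN_STREAMS.decrementAndGet()
                : kind.equals("open") ? OPEN_STREAMS.incrementAndGet()
                : OPEN_STREAMS.get();
        return String.format("%s: %s %s (active streams: %d)",
                PREFIX, kind, url, active);
    }

    public static void main(String[] args) {
        System.out.println(event("open",   "s3a://bucket/data/part-0000"));
        System.out.println(event("reopen", "s3a://bucket/data/part-0000"));
        System.out.println(event("close",  "s3a://bucket/data/part-0000"));
    }
}
```

With a fixed prefix like this, `grep Lifecycle` over the existing debug log would surface every open/reopen/close with its URL, which is the point of proposal #3.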






[jira] [Commented] (HADOOP-15944) S3AInputStream logging to make it easier to debug file leakage

2018-11-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691684#comment-16691684
 ] 

Steve Loughran commented on HADOOP-15944:
-

Also: make sure that the troubleshooting section on "Timeout waiting for 
connection from pool" is up to date. The stack trace isn't, and the text only 
covers the output stream.

> S3AInputStream logging to make it easier to debug file leakage
> --
>
> Key: HADOOP-15944
> URL: https://issues.apache.org/jira/browse/HADOOP-15944
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Priority: Minor
>
> Problem: if an app opens too many input streams, then all the HTTP 
> connections in the S3A pool can be used up; all other FS operations then 
> fail, timing out while waiting for HTTP pool access
> Proposed simple solution: log better what's going on with input stream 
> lifecycle, specifically
> # include URL of file in open, reopen & close events
> # maybe: Separate logger for these events, though S3A Input stream should be 
> enough as it doesn't do much else.
> # maybe: have some prefix in the events like "Lifecycle", so that you could 
> use the existing log @ debug, grep for that phrase and look at the printed 
> URLs to identify what's going on
> # stream metrics: expose some of the state of the http connection pool and/or 
> active input and output streams
> Idle output streams don't use up http connections, as they only connect 
> during block upload.






[jira] [Commented] (HADOOP-15944) S3AInputStream logging to make it easier to debug file leakage

2018-11-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691680#comment-16691680
 ] 

Steve Loughran commented on HADOOP-15944:
-

Stack
{code}
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: 
Timeout waiting for connection from pool
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1114)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1064)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4325)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4272)
at 
com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1264)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$4(S3AFileSystem.java:1235)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:280)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1232)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2169)
... 27 more
Caused by: 
com.amazonaws.thirdparty.apache.http.conn.ConnectionPoolTimeoutException: 
Timeout waiting for connection from pool
at 
com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager.leaseConnection(PoolingHttpClientConnectionManager.java:286)
at 
com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager$1.get(PoolingHttpClientConnectionManager.java:263)
at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.amazonaws.http.conn.ClientConnectionRequestFactory$Handler.invoke(ClientConnectionRequestFactory.java:70)
at com.amazonaws.http.conn.$Proxy22.get(Unknown Source)
at 
com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:190)
at 
com.amazonaws.thirdparty.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at 
com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at 
com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1236)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
{code}

> S3AInputStream logging to make it easier to debug file leakage
> --
>
> Key: HADOOP-15944
> URL: https://issues.apache.org/jira/browse/HADOOP-15944
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Priority: Minor
>
> Problem: if an app opens too many input streams, then all the HTTP 
> connections in the S3A pool can be used up; all other FS operations then 
> fail, timing out while waiting for HTTP pool access
> Proposed simple solution: log better what's going on with input stream 
> lifecycle, specifically
> # include URL of file in open, reopen & close events
> # maybe: Separate logger for these events, though S3A Input stream should be 
> enough as it doesn't do much else.
> # maybe: have some prefix in the events like "Lifecycle", so that you could 
> use the existing log @ debug, grep for that phrase and look at the printed 
> URLs to identify what's going on
> # stream metrics: expose some of the state of the http connection pool and/or 
> active input and output streams
> Idle output streams don't use up http connections, as they only connect 
> during block upload.




[jira] [Updated] (HADOOP-15943) AliyunOSS: add missing owner & group attributes for oss FileStatus

2018-11-19 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15943:
-
Description: 
Owner & group attributes are missing when you list oss objects via hadoop 
command:
{code:java}
Found 6 items
drwxrwxrwx - 0 2018-08-01 21:37 /1024
drwxrwxrwx - 0 2018-10-30 11:07 /50
-rw-rw-rw- 1 94070 2018-11-08 21:48 /a
-rw-rw-rw- 1 2441079322 2018-10-31 10:14 /lineitem.csv
drwxrwxrwx - 0 1970-01-01 08:00 /tmp
drwxrwxrwx - 0 1970-01-01 08:00 /user
{code}
 

The result should look like the output below (hadoop fs -ls hdfs://master:8020/):
{code:java}
Found 5 items
drwxr-xr-x - hbase hbase 0 2018-11-18 17:31 hdfs://master:8020/hbase
drwxrwxrwt - hdfs supergroup 0 2018-10-30 11:07 hdfs://master:8020/tmp
drwxr-xr-x - hdfs supergroup 0 2018-10-30 10:39 hdfs://master:8020/user
{code}
However, oss objects do not have owner & group attributes like hadoop files, 
so we assume both owner & group are the current user at the time the FS was 
instantiated.

  was:
Owner & group attributes are missing when you list oss objects via hadoop 
command:

 
{code:java}
Found 6 items
drwxrwxrwx - 0 2018-08-01 21:37 /1024
drwxrwxrwx - 0 2018-10-30 11:07 /50
-rw-rw-rw- 1 94070 2018-11-08 21:48 /a
-rw-rw-rw- 1 2441079322 2018-10-31 10:14 /lineitem.csv
drwxrwxrwx - 0 1970-01-01 08:00 /tmp
drwxrwxrwx - 0 1970-01-01 08:00 /user
{code}
 

 

The result should like below(hadoop fs -ls hdfs://master:8020/):

 
{code:java}
Found 5 items
drwxr-xr-x - hbase hbase 0 2018-11-18 17:31 hdfs://master:8020/hbase
drwxrwxrwt - hdfs supergroup 0 2018-10-30 11:07 hdfs://master:8020/tmp
drwxr-xr-x - hdfs supergroup 0 2018-10-30 10:39 hdfs://master:8020/user
{code}
 

 

However, oss objects do not have owner & group attributes like hadoop files,  
so we assume both owner & group are the current user at the time the FS was 
instantiated.


> AliyunOSS: add missing owner & group attributes for oss FileStatus
> --
>
> Key: HADOOP-15943
> URL: https://issues.apache.org/jira/browse/HADOOP-15943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>
> Owner & group attributes are missing when you list oss objects via hadoop 
> command:
> {code:java}
> Found 6 items
> drwxrwxrwx - 0 2018-08-01 21:37 /1024
> drwxrwxrwx - 0 2018-10-30 11:07 /50
> -rw-rw-rw- 1 94070 2018-11-08 21:48 /a
> -rw-rw-rw- 1 2441079322 2018-10-31 10:14 /lineitem.csv
> drwxrwxrwx - 0 1970-01-01 08:00 /tmp
> drwxrwxrwx - 0 1970-01-01 08:00 /user
> {code}
>  
> The result should look like the output below (hadoop fs -ls hdfs://master:8020/):
> {code:java}
> Found 5 items
> drwxr-xr-x - hbase hbase 0 2018-11-18 17:31 hdfs://master:8020/hbase
> drwxrwxrwt - hdfs supergroup 0 2018-10-30 11:07 hdfs://master:8020/tmp
> drwxr-xr-x - hdfs supergroup 0 2018-10-30 10:39 hdfs://master:8020/user
> {code}
> However, oss objects do not have owner & group attributes like hadoop files,  
> so we assume both owner & group are the current user at the time the FS was 
> instantiated.
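The fallback described in the issue could be sketched in plain Java as below. This is a hedged illustration only, not the actual AliyunOSS code: the real patch would presumably obtain the user via Hadoop's UserGroupInformation, for which the `user.name` system property stands in here.

```java
/**
 * Sketch of the proposed default: OSS objects carry no owner/group metadata,
 * so both attributes fall back to the user the FileSystem was instantiated as.
 */
public class OwnerGroupDefault {
    private final String username; // captured once, at FS instantiation time

    public OwnerGroupDefault() {
        // Stand-in for UserGroupInformation.getCurrentUser().getShortUserName()
        this.username = System.getProperty("user.name");
    }

    /** Both owner and group default to the instantiating user. */
    public String[] ownerAndGroup() {
        return new String[] { username, username };
    }

    public static void main(String[] args) {
        String[] og = new OwnerGroupDefault().ownerAndGroup();
        System.out.println("owner=" + og[0] + " group=" + og[1]);
    }
}
```

With this in place, `hadoop fs -ls` over OSS would show the FS user in both columns instead of leaving them blank, matching the HDFS-style listing shown above.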






[jira] [Updated] (HADOOP-15943) AliyunOSS: add missing owner & group attributes for oss FileStatus

2018-11-19 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15943:
-
Description: 
Owner & group attributes are missing when you list oss objects via hadoop 
command:

 
{code:java}
Found 6 items
drwxrwxrwx - 0 2018-08-01 21:37 /1024
drwxrwxrwx - 0 2018-10-30 11:07 /50
-rw-rw-rw- 1 94070 2018-11-08 21:48 /a
-rw-rw-rw- 1 2441079322 2018-10-31 10:14 /lineitem.csv
drwxrwxrwx - 0 1970-01-01 08:00 /tmp
drwxrwxrwx - 0 1970-01-01 08:00 /user
{code}
 

 

The result should like below(hadoop fs -ls hdfs://master:8020/):

 
{code:java}
Found 5 items
drwxr-xr-x - hbase hbase 0 2018-11-18 17:31 hdfs://master:8020/hbase
drwxrwxrwt - hdfs supergroup 0 2018-10-30 11:07 hdfs://master:8020/tmp
drwxr-xr-x - hdfs supergroup 0 2018-10-30 10:39 hdfs://master:8020/user
{code}
 

 

However, oss objects do not have owner & group attributes like hadoop files,  
so we assume both owner & group are the current user at the time the FS was 
instantiated.

  was:Owner & group attribute


> AliyunOSS: add missing owner & group attributes for oss FileStatus
> --
>
> Key: HADOOP-15943
> URL: https://issues.apache.org/jira/browse/HADOOP-15943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>
> Owner & group attributes are missing when you list oss objects via hadoop 
> command:
>  
> {code:java}
> Found 6 items
> drwxrwxrwx - 0 2018-08-01 21:37 /1024
> drwxrwxrwx - 0 2018-10-30 11:07 /50
> -rw-rw-rw- 1 94070 2018-11-08 21:48 /a
> -rw-rw-rw- 1 2441079322 2018-10-31 10:14 /lineitem.csv
> drwxrwxrwx - 0 1970-01-01 08:00 /tmp
> drwxrwxrwx - 0 1970-01-01 08:00 /user
> {code}
>  
>  
> The result should like below(hadoop fs -ls hdfs://master:8020/):
>  
> {code:java}
> Found 5 items
> drwxr-xr-x - hbase hbase 0 2018-11-18 17:31 hdfs://master:8020/hbase
> drwxrwxrwt - hdfs supergroup 0 2018-10-30 11:07 hdfs://master:8020/tmp
> drwxr-xr-x - hdfs supergroup 0 2018-10-30 10:39 hdfs://master:8020/user
> {code}
>  
>  
> However, oss objects do not have owner & group attributes like hadoop files,  
> so we assume both owner & group are the current user at the time the FS was 
> instantiated.






[jira] [Updated] (HADOOP-15943) AliyunOSS: add missing owner & group attributes for oss FileStatus

2018-11-19 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15943:
-
Summary: AliyunOSS: add missing owner & group attributes for oss FileStatus 
 (was: AliyunOSS: add missing owner & group attribute for oss FileStatus)

> AliyunOSS: add missing owner & group attributes for oss FileStatus
> --
>
> Key: HADOOP-15943
> URL: https://issues.apache.org/jira/browse/HADOOP-15943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>
> Owner & group attribute






[jira] [Updated] (HADOOP-15943) AliyunOSS: add missing owner & group attribute for oss FileStatus

2018-11-19 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15943:
-
Description: Owner & group attribute

> AliyunOSS: add missing owner & group attribute for oss FileStatus
> -
>
> Key: HADOOP-15943
> URL: https://issues.apache.org/jira/browse/HADOOP-15943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>
> Owner & group attribute






[jira] [Commented] (HADOOP-15919) AliyunOSS: Enable Yarn to use OSS

2018-11-19 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691364#comment-16691364
 ] 

wujinhu commented on HADOOP-15919:
--

Cool! Thanks [~cheersyang] :)

> AliyunOSS: Enable Yarn to use OSS
> -
>
> Key: HADOOP-15919
> URL: https://issues.apache.org/jira/browse/HADOOP-15919
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 3.2.1, 2.9.3
>
> Attachments: HADOOP-15919.001.patch, HADOOP-15919.002.patch, 
> HADOOP-15919.003.patch, HADOOP-15919.004.patch
>
>
> Uses DelegateToFileSystem to expose AliyunOSSFileSystem as an 
> AbstractFileSystem






[jira] [Created] (HADOOP-15943) AliyunOSS: add missing owner & group attribute for oss FileStatus

2018-11-19 Thread wujinhu (JIRA)
wujinhu created HADOOP-15943:


 Summary: AliyunOSS: add missing owner & group attribute for oss 
FileStatus
 Key: HADOOP-15943
 URL: https://issues.apache.org/jira/browse/HADOOP-15943
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/oss
Affects Versions: 3.0.3, 3.1.1, 2.9.1, 2.10.0, 3.2.0, 3.3.0
Reporter: wujinhu









[jira] [Assigned] (HADOOP-15943) AliyunOSS: add missing owner & group attribute for oss FileStatus

2018-11-19 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu reassigned HADOOP-15943:


Assignee: wujinhu

> AliyunOSS: add missing owner & group attribute for oss FileStatus
> -
>
> Key: HADOOP-15943
> URL: https://issues.apache.org/jira/browse/HADOOP-15943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>



