[jira] [Commented] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2

2017-11-30 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274013#comment-16274013
 ] 

SammiChen commented on HADOOP-14964:


bq. It looks like json-lib is ALv2, but it includes the (Cat-X) json.org 
dependency.

Hi @Chris Douglas, the "json.org" entry is referenced in the {{developers}} 
section. It's not used in any {{dependency}} of net.sf.json-lib/json-lib. Is 
it still a problem? 

{quote}
 <developer>
   <name>Douglas Crockford</name>
   <email>json at JSON.org</email>
   <organization>JSON.org</organization>
   <roles>
     <role>Original source code developer</role>
   </roles>
 </developer>
{quote}
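
One way to check whether org.json actually lands on the compile classpath 
(standard Maven; the {{org.json}} filter assumes the artifact's usual 
coordinates):
{noformat}
mvn dependency:tree -Dincludes=org.json
{noformat}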

> AliyunOSS: backport Aliyun OSS module to branch-2
> -
>
> Key: HADOOP-14964
> URL: https://issues.apache.org/jira/browse/HADOOP-14964
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: Genmao Yu
>Assignee: SammiChen
> Fix For: 2.9.1
>
> Attachments: HADOOP-14964-branch-2.000.patch, 
> HADOOP-14964-branch-2.8.000.patch, HADOOP-14964-branch-2.8.001.patch, 
> HADOOP-14964-branch-2.9.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-11-30 Thread Ping Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273970#comment-16273970
 ] 

Ping Liu edited comment on HADOOP-14600 at 12/1/17 5:47 AM:


Just verified.  There is no error!

I missed {{-Pnative}} in the Maven build; it is the profile required to 
generate the JNI native code.  Now, after building with {{-Pnative}}, things 
look good.  I tried the patch in IntelliJ on both Windows and Linux and made 
sure the code flows into the test cases.
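
For reference, a typical invocation with the native profile enabled (a 
sketch; exact goals and flags vary by environment):
{noformat}
mvn clean install -Pnative -DskipTests
{noformat}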

Also tested via the command-line console.  I am attaching the command-line test 
results from both Windows and Linux (see attachments: 
{{command_line_test_result__linux.txt}}, 
{{command_line_test_result__windows.txt}}).

cc: [~chris.douglas], [~steve_l]


was (Author: myapachejira):
Just verified.  There is no error!

I missed {{-Pnative}} in the Maven build; it is the profile required to 
generate the JNI native code.  Now things look good.  I tried the patch in 
IntelliJ on both Windows and Linux and made sure the code flows into the test cases.

Also tested via the command-line console.  I am attaching the command-line test 
results from both Windows and Linux (see attachments: 
{{command_line_test_result__linux.txt}}, 
{{command_line_test_result__windows.txt}}).

cc: [~chris.douglas], [~steve_l]

> LocatedFileStatus constructor forces RawLocalFS to exec a process to get the 
> permissions
> 
>
> Key: HADOOP-14600
> URL: https://issues.apache.org/jira/browse/HADOOP-14600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3
> Environment: file:// in a dir with many files
>Reporter: Steve Loughran
>Assignee: Ping Liu
> Attachments: HADOOP-14600.001.patch, HADOOP-14600.002.patch, 
> HADOOP-14600.003.patch, HADOOP-14600.004.patch, HADOOP-14600.005.patch, 
> HADOOP-14600.006.patch, HADOOP-14600.007.patch, HADOOP-14600.008.patch, 
> HADOOP-14600.009.patch, TestRawLocalFileSystemContract.java, 
> command_line_test_result__linux.txt, command_line_test_result__windows.txt
>
>
> Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls 
> against the local FS, because the {{FileStatus.getPermission}} call forces 
> {{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI 
> values.
> That is: for every other FS, what's a field lookup or even a no-op, on the 
> local FS it's a process exec/spawn, with all the costs. This gets expensive 
> if you have many files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-11-30 Thread Ping Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273970#comment-16273970
 ] 

Ping Liu commented on HADOOP-14600:
---

Just verified.  There is no error!

I missed {{-Pnative}} in the Maven build; it is the profile required to 
generate the JNI native code.  Now things look good.  I tried the patch in 
IntelliJ on both Windows and Linux and made sure the code flows into the test cases.

Also tested via the command-line console.  I am attaching the command-line test 
results from both Windows and Linux (see attachments: 
{{command_line_test_result__linux.txt}}, 
{{command_line_test_result__windows.txt}}).

cc: [~chris.douglas], [~steve_l]

> LocatedFileStatus constructor forces RawLocalFS to exec a process to get the 
> permissions
> 
>
> Key: HADOOP-14600
> URL: https://issues.apache.org/jira/browse/HADOOP-14600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3
> Environment: file:// in a dir with many files
>Reporter: Steve Loughran
>Assignee: Ping Liu
> Attachments: HADOOP-14600.001.patch, HADOOP-14600.002.patch, 
> HADOOP-14600.003.patch, HADOOP-14600.004.patch, HADOOP-14600.005.patch, 
> HADOOP-14600.006.patch, HADOOP-14600.007.patch, HADOOP-14600.008.patch, 
> HADOOP-14600.009.patch, TestRawLocalFileSystemContract.java, 
> command_line_test_result__linux.txt, command_line_test_result__windows.txt
>
>
> Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls 
> against the local FS, because the {{FileStatus.getPermission}} call forces 
> {{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI 
> values.
> That is: for every other FS, what's a field lookup or even a no-op, on the 
> local FS it's a process exec/spawn, with all the costs. This gets expensive 
> if you have many files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-11-30 Thread Ping Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ping Liu updated HADOOP-14600:
--
Attachment: command_line_test_result__linux.txt
command_line_test_result__windows.txt

> LocatedFileStatus constructor forces RawLocalFS to exec a process to get the 
> permissions
> 
>
> Key: HADOOP-14600
> URL: https://issues.apache.org/jira/browse/HADOOP-14600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3
> Environment: file:// in a dir with many files
>Reporter: Steve Loughran
>Assignee: Ping Liu
> Attachments: HADOOP-14600.001.patch, HADOOP-14600.002.patch, 
> HADOOP-14600.003.patch, HADOOP-14600.004.patch, HADOOP-14600.005.patch, 
> HADOOP-14600.006.patch, HADOOP-14600.007.patch, HADOOP-14600.008.patch, 
> HADOOP-14600.009.patch, TestRawLocalFileSystemContract.java, 
> command_line_test_result__linux.txt, command_line_test_result__windows.txt
>
>
> Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls 
> against the local FS, because the {{FileStatus.getPermission}} call forces 
> {{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI 
> values.
> That is: for every other FS, what's a field lookup or even a no-op, on the 
> local FS it's a process exec/spawn, with all the costs. This gets expensive 
> if you have many files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-11-30 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273938#comment-16273938
 ] 

Bharat Viswanadham commented on HADOOP-9747:


Hi [~daryn],
Thanks for a great patch; this really simplifies the UGI code. I have a few 
comments.
1. System.setProperty(KRB5CCNAME) is no longer being set; previously it was 
set in the IBM_JAVA case.

2. getLoginUser is no longer synchronized. If multiple threads call it in 
parallel, multiple loginUser UGIs will be created, and each could potentially 
spin up a new thread in spawnAutoRenewalThreadForUserCreds. I would suggest 
synchronizing the case when loginUser is null (see the sketch below).

3. In the loginUserFromSubject method, when the subject passed in is null, the 
same situation can occur: it could spin up multiple renewal threads. We 
probably don't need to support a null subject in this API, because the 
null-subject use case is already handled by getLoginUser.

4. As you noted in the comments, getKeyTabEntry is not very reliable for 
external subjects. I was wondering whether we really need it. Can we get away 
with saying that it's the user's responsibility to renew external subjects?

5. The following three methods perform login and update the static loginUser, 
so it might make sense to document that they update the global loginUser: 
getLoginUser, loginUserFromSubject, and loginUserFromKeytab.
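
A minimal double-checked-locking sketch of the guard suggested in comment 2 
(illustrative only, not the actual patch; {{doSubjectLogin}} is a hypothetical 
stand-in for the real login routine, which would also spawn the renewal 
thread):
{code}
// Sketch: only the first caller pays for the lock and the login;
// later callers take the volatile fast path.
private static volatile UserGroupInformation loginUser = null;

public static UserGroupInformation getLoginUser() throws IOException {
  UserGroupInformation user = loginUser;
  if (user == null) {                      // fast path, no lock
    synchronized (UserGroupInformation.class) {
      if (loginUser == null) {             // re-check under the lock
        loginUser = doSubjectLogin();      // hypothetical login helper
      }
      user = loginUser;
    }
  }
  return user;
}
{code}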
 
*Minor Nits*
· getSubjectLoginLock does not actually get a lock; can we rename this 
method to getSubjectPrivateCredentials?
· In hasKerberosCredential, it might make sense to return false for a 
null user; otherwise we will get an NPE.
· Can we add assert hasSubjectLoginLock() in getTgt()?
· In unprotectedLoginUserFromSubject we should change the local 
variable name instead of shadowing loginUser, purely for readability.
 
Thank you [~xyao] and [~jnp] for the cumulative review of the patch.

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk.01.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15081) org.apache.hadoop.util.JvmPauseMonitor "Detected pause in JVM or host machine (eg GC)" causes ResourceManager exit

2017-11-30 Thread liuxiaobin (JIRA)
liuxiaobin created HADOOP-15081:
---

 Summary: org.apache.hadoop.util.JvmPauseMonitor "Detected pause 
in JVM or host machine (eg GC)" causes ResourceManager exit
 Key: HADOOP-15081
 URL: https://issues.apache.org/jira/browse/HADOOP-15081
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 2.6.3
Reporter: liuxiaobin


{noformat}
org.apache.hadoop.util.JvmPauseMonitor
Detected pause in JVM or host machine (eg GC): pause of approximately 2562ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=4 time=3040ms
{noformat}

The pause caused the ResourceManager and NodeManager to exit.
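
One illustrative diagnostic step (flags assume an OpenJDK/Oracle 8 JVM; the 
environment variable is the standard yarn-env.sh hook) is to enable GC 
logging on the ResourceManager to correlate the pauses with collector 
activity:
{noformat}
# in yarn-env.sh
export YARN_RESOURCEMANAGER_OPTS="$YARN_RESOURCEMANAGER_OPTS \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
{noformat}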



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273838#comment-16273838
 ] 

genericqa commented on HADOOP-15056:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}194m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15056 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900106/HADOOP-15056.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e5d6b314297d 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0780fdb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13766/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Assigned] (HADOOP-14985) Remove subversion related code from VersionInfoMojo.java

2017-11-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-14985:
---

Assignee: Ajay Kumar

> Remove subversion related code from VersionInfoMojo.java
> 
>
> Key: HADOOP-14985
> URL: https://issues.apache.org/jira/browse/HADOOP-14985
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>Priority: Minor
>
> When building Apache Hadoop, we can see the following message:
> {noformat}
> [WARNING] [svn, info] failed with error code 1
> {noformat}
> We have migrated the code base from svn to git, so the message is useless.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273754#comment-16273754
 ] 

genericqa commented on HADOOP-14444:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 30 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-ftp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 51m  
9s{color} | {color:green} hadoop-tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14444 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-14820) Wasb mkdirs security checks inconsistent with HDFS

2017-11-30 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273707#comment-16273707
 ] 

Thomas Marquardt commented on HADOOP-14820:
---

It was ported to branch-2.  Who is using 2.8.x?  I don't see any reason not to 
backport.

> Wasb mkdirs security checks inconsistent with HDFS
> --
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.1
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14820-006.patch, HADOOP-14820-007.patch, 
> HADOOP-14820-branch-2-001.patch.txt, HADOOP-14820.001.patch, 
> HADOOP-14820.002.patch, HADOOP-14820.003.patch, HADOOP-14820.004.patch, 
> HADOOP-14820.005.patch
>
>
> No authorization checks should be made when a user tries to create (mkdirs 
> -p) an existing folder hierarchy.
> For example, if we start with _/home/hdiuser/prefix_ pre-created, and do the 
> following operations, the results should be as shown below.
> {noformat}
> hdiuser@hn0-0d2f67:~$ sudo chown root:root prefix
> hdiuser@hn0-0d2f67:~$ sudo chmod 555 prefix
> hdiuser@hn0-0d2f67:~$ ls -l
> dr-xr-xr-x 3 rootroot  4096 Aug 29 08:25 prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix/1
> mkdir: cannot create directory '/home/hdiuser/prefix/1': Permission denied
> The first three mkdirs succeed, because the ancestor is already present. The 
> fourth one fails because of a permission check against the (shorter) ancestor 
> (as compared to the path being created).
> {noformat}
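> Sketched against the FileSystem API with the paths from the example above 
> (illustrative, not the patch):
> {code}
> Configuration conf = new Configuration();
> FileSystem fs = FileSystem.get(conf);
> // Hierarchy already exists: mkdirs should be a no-op returning true,
> // with no authorization check against the read-only ancestor.
> fs.mkdirs(new Path("/home/hdiuser/prefix"));
> // New entry under the dr-xr-xr-x ancestor: here the permission check
> // should fire and the call should fail.
> fs.mkdirs(new Path("/home/hdiuser/prefix/1"));
> {code}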



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15009) hadoop-resourceestimator's shell scripts are a mess

2017-11-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-15009:
---

Assignee: Ajay Kumar

> hadoop-resourceestimator's shell scripts are a mess
> ---
>
> Key: HADOOP-15009
> URL: https://issues.apache.org/jira/browse/HADOOP-15009
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts, tools
>Affects Versions: 3.1.0
>Reporter: Allen Wittenauer
>Assignee: Ajay Kumar
>Priority: Blocker
>
> #1:
> There's no reason for estimator.sh to exist.  Just make it a subcommand under 
> yarn or whatever.  
> #2:
> In its current form, it's missing a BUNCH of boilerplate that makes certain 
> functionality completely fail.
> #3:
> start/stop-estimator.sh is full of copypasta that doesn't actually do 
> anything/work correctly.  Additionally, if estimator.sh doesn't exist, 
> neither does this, since yarn --daemon start/stop will do everything as 
> necessary.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15080) Cat-X transitive dependency on org.json library via json-lib

2017-11-30 Thread Chris Douglas (JIRA)
Chris Douglas created HADOOP-15080:
--

 Summary: Cat-X transitive dependency on org.json library via 
json-lib
 Key: HADOOP-15080
 URL: https://issues.apache.org/jira/browse/HADOOP-15080
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/oss
Affects Versions: 3.0.0-beta1
Reporter: Chris Douglas
Priority: Blocker


The OSS SDK has a dependency on json-lib. Per LEGAL-245, the org.json library 
(from which json-lib may be derived) is released under a 
[category-x|https://www.apache.org/legal/resolved.html#json] license.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2017-11-30 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273638#comment-16273638
 ] 

Sean Mackrory commented on HADOOP-15079:


I suspected that simply removing the deleteUnnecessaryFakeDirectories call from 
innerMkdirs was the right thing to do, so that it could all be handled under 
createFakeDirectory. I tried that and encountered another discrepancy later:

{code}
[ERROR]   
ITestS3AFileOperationCost.testFakeDirectoryDeletion:262->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
 after rename(srcFilePath, destFilePath): object_delete_requests expected:<2> 
but was:<3>
{code}

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Priority: Critical
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
> at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74
> {code}
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> 

[jira] [Updated] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure

2017-11-30 Thread Jack Bearden (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Bearden updated HADOOP-15056:
--
Attachment: HADOOP-15056.004.patch

#4 Fix style issues

> Fix TestUnbuffer#testUnbufferException failure
> --
>
> Key: HADOOP-15056
> URL: https://issues.apache.org/jira/browse/HADOOP-15056
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
> Attachments: HADOOP-15056.001.patch, HADOOP-15056.002.patch, 
> HADOOP-15056.003.patch, HADOOP-15056.004.patch
>
>
> Hello! I am a new contributor and actually contributing to open source for 
> the very first time. :) 
> I pulled down Hadoop today and when running the tests I encountered a failure 
> with the TestUnbuffer#testUnbufferException test.
> The unbuffer code has recently gone through some changes and I believe this 
> test case may have been overlooked. Using today's git commit 
> (659e85e304d070f9908a96cf6a0e1cbafde6a434), upon running the test case 
> there is a mock expecting an UnsupportedOperationException that 
> is no longer being thrown. 
> It would appear that a test like this would be valuable, so my initial 
> proposed patch did not remove it. Instead, I removed the conditions that were 
> guarding the cast from being able to fire -- as was the previous behavior. 
> Now when we encounter an object that doesn't have the UNBUFFERED 
> StreamCapability, it will throw an error as it did prior to the recent 
> changes. 
> Please review and let me know what you think! :D



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-11-30 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273606#comment-16273606
 ] 

Aaron Fabbri commented on HADOOP-14475:
---

v15 patch looks good with a minor change:

{noformat}
+its own metrics system called s3a-file-system, and each instance of the client
+will create its own metrics source, named with a JVM-unique numerical ID and ID
+and the bucket name.
{noformat}

This needs to be updated, right?

{noformat}
+  private void registerAsMetricsSource(URI name) {
+int number;
+synchronized(this) {
+  getMetricsSystem();
+
+  metricsSourceActiveCounter++;
{noformat}

You could move getMetricsSystem() up a line if you want. I've been brainwashed 
that recursive mutexes are evil, but this is totally valid Java code so feel 
free to giggle and just ignore me here.  :-)

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14775.007.patch, failsafe-report-s3a-it.html, 
> failsafe-report-s3a-scale.html, failsafe-report-scale.html, 
> failsafe-report-scale.zip, s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job which should be using s3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-11-30 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-14444:

Status: Patch Available  (was: In Progress)

solved:
we tend to use `setup()`/`teardown()` as the @Before/@After operations in 
filesystems. Having standard names makes it more consistent when 
subclassing... and having >1 before/after method puts you into ambiguous 
ordering. Fix: change the names, subclass as appropriate, calling the 
superclass method as desired.
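
A minimal sketch of that convention, assuming JUnit 4 as used by the Hadoop 
contract tests (class and fixture names are illustrative):
{code}
public class TestFTPContractOpen extends AbstractFSContractTestBase {
  @Override
  public void setup() throws Exception {
    super.setup();        // the base @Before method runs the shared init
    // FTP-specific fixture setup goes here
  }

  @Override
  public void teardown() throws Exception {
    // FTP-specific cleanup first...
    super.teardown();     // ...then the shared @After teardown
  }
}
{code}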

I like what you've done with the mixin to reuse all the tests, but I'd prefer a 
name more unique to the FS than ContractTestBase. FTPContractTestMixin?

Docs readme should go into src/site/org/apache/hadoop/ftpextended/index.md

Need to rename AbstractFileSystem to a class name which isn't used elsewhere, 
e.g. AbstractFTPFileSystem.
Hadoop code prefers a space after // in comments; a search & replace should fix it.

org/apache/hadoop/fs/ftpextended/ftp/package-info.java should declare code as 
@Private+Unstable. Even if the FS is public, there's no API coming from this 
module, nor stability guarantees.

Unless it's going to leak passwords, error messages should try to include the 
filesystem URI in them. Why? It helps debugging when the job is working with >1 FS 
and all you have is a log to go on.

When wrapping library exceptions (e.g SFTP exceptions), always include the 
toString() value of the wrapped exception. It'll be the string most likely to 
make it to bug reports.

core-site.xml mentions s3


> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-14444.10.patch, HADOOP-14444.11.patch, 
> HADOOP-14444.12.patch, HADOOP-14444.13.patch, HADOOP-14444.2.patch, 
> HADOOP-14444.3.patch, HADOOP-14444.4.patch, HADOOP-14444.5.patch, 
> HADOOP-14444.6.patch, HADOOP-14444.7.patch, HADOOP-14444.8.patch, 
> HADOOP-14444.9.patch, HADOOP-14444.patch
>
>
> The current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over non-pooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over non-cached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across a whole directory tree
> * Support for re-establishing broken FTP data transfers - which can happen 
> surprisingly often
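> From the client side all of this stays behind the standard FileSystem API; 
> a minimal usage sketch (URI and paths are illustrative):
> {code}
> Configuration conf = new Configuration();
> FileSystem fs = FileSystem.get(URI.create("ftp://user@host/data"), conf);
> for (FileStatus st : fs.listStatus(new Path("/data"))) {
>   System.out.println(st.getPath());
> }
> {code}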



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-11-30 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-14444:

Attachment: HADOOP-14444.13.patch

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-14444.10.patch, HADOOP-14444.11.patch, 
> HADOOP-14444.12.patch, HADOOP-14444.13.patch, HADOOP-14444.2.patch, 
> HADOOP-14444.3.patch, HADOOP-14444.4.patch, HADOOP-14444.5.patch, 
> HADOOP-14444.6.patch, HADOOP-14444.7.patch, HADOOP-14444.8.patch, 
> HADOOP-14444.9.patch, HADOOP-14444.patch
>
>
> The current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over non-pooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over non-cached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across a whole directory tree
> * Support for re-establishing broken FTP data transfers - which can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-11-30 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-14444:

Status: In Progress  (was: Patch Available)

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-14444.10.patch, HADOOP-14444.11.patch, 
> HADOOP-14444.12.patch, HADOOP-14444.2.patch, HADOOP-14444.3.patch, 
> HADOOP-14444.4.patch, HADOOP-14444.5.patch, HADOOP-14444.6.patch, 
> HADOOP-14444.7.patch, HADOOP-14444.8.patch, HADOOP-14444.9.patch, 
> HADOOP-14444.patch
>
>
> The current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over non-pooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over non-cached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across a whole directory tree
> * Support for re-establishing broken FTP data transfers - which can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-11-30 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273554#comment-16273554
 ] 

Sean Mackrory commented on HADOOP-14475:


Filed HADOOP-15079. I'm quite certain it's unrelated to the metrics patch, 
though I'm surprised it hasn't already been discovered. I suspect everyone may 
be testing only with S3Guard enabled?

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14775.007.patch, failsafe-report-s3a-it.html, 
> failsafe-report-s3a-scale.html, failsafe-report-scale.html, 
> failsafe-report-scale.zip, s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job which should be using s3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2017-11-30 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-15079:
--

 Summary: ITestS3AFileOperationCost#testFakeDirectoryDeletion 
failing after OutputCommitter patch
 Key: HADOOP-15079
 URL: https://issues.apache.org/jira/browse/HADOOP-15079
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.1.0
Reporter: Sean Mackrory
Priority: Critical


I see this test failing with "object_delete_requests expected:<1> but was:<2>". 
I printed stack traces whenever this metric was incremented, and found the root 
cause to be that innerMkdirs is now causing two calls to delete fake 
directories when it previously caused only one. It is called once inside 
createFakeDirectory, and once directly inside innerMkdirs later:

{code}
at 
org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
at 
org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74
{code}

{code}
at 
org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2025)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
at 

[jira] [Updated] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2017-11-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14788:

Status: Open  (was: Patch Available)

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch
>
>
> When {{Credentials.readTokenStorageFile}} gets an IOE, it catches & wraps it 
> with the filename, losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetUtils.wrapException()}} which handles the broader set of IOEs.
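> A sketch of the wrapping pattern in question (illustrative; it mirrors what 
> the description says, not the exact code):
> {code}
> DataInputStream in = null;
> try {
>   in = new DataInputStream(new BufferedInputStream(
>       new FileInputStream(filename)));
>   // ... read the token storage stream ...
> } catch (IOException ioe) {
>   // Re-wrapping as a plain IOException adds the filename but loses the
>   // subclass (e.g. FileNotFoundException) that callers may catch on.
>   throw new IOException("Exception reading " + filename, ioe);
> }
> {code}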



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2017-11-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14788:

Status: Patch Available  (was: Open)

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch
>
>
> When {{Credentials.readTokenStorageFile}} gets an IOE, it catches & wraps it 
> with the filename, losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetUtils.wrapException()}} which handles the broader set of IOEs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15078) dtutil ignores nonexistent files

2017-11-30 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15078:
---

 Summary: dtutil ignores nonexistent files
 Key: HADOOP-15078
 URL: https://issues.apache.org/jira/browse/HADOOP-15078
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0-alpha1
Reporter: Jason Lowe


While investigating issues in HADOOP-15059, I ran the dtutil append command like 
this:
{noformat}
$ hadoop dtutil append -format protobuf foo foo.pb
{noformat}

expecting the append command to translate the existing tokens in file {{foo}} 
into the currently non-existent file {{foo.pb}}.  Instead, the command executed 
without error and overwrote {{foo}} rather than creating {{foo.pb}} as I 
expected.  I now understand how append works, but it was very surprising to 
have dtutil _silently ignore_ filenames requested on the command-line.  At best 
it is a bit surprising to the user.  At worst it clobbers data the user did not 
expect to be overwritten.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273186#comment-16273186
 ] 

genericqa commented on HADOOP-15059:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  0s{color} | {color:orange} root: The patch generated 2 new + 20 unchanged - 
4 fixed = 22 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 19s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}182m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.net.TestDNS |
|   | hadoop.fs.TestUnbuffer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15059 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900034/HADOOP-15059.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b7d7efdfc04f 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 75a3ab8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| 

[jira] [Assigned] (HADOOP-14959) DelegationTokenAuthenticator.authenticate() to wrap network exceptions

2017-11-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-14959:
---

Assignee: (was: Ajay Kumar)

> DelegationTokenAuthenticator.authenticate() to wrap network exceptions
> --
>
> Key: HADOOP-14959
> URL: https://issues.apache.org/jira/browse/HADOOP-14959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net, security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Priority: Minor
>
> Network errors raised in {{DelegationTokenAuthenticator.authenticate()}} 
> aren't being wrapped, so callers only get the usual limited-value java.net 
> error text. Using {{NetUtils.wrapException()}} can address that
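
A sketch of what the suggested wrapping could look like; {{NetUtils.wrapException()}} is the existing Hadoop helper, while the surrounding names are illustrative:
{code:java}
try {
  authenticator.authenticate(url, token);
} catch (IOException e) {
  // Adds the destination host:port (and exception-specific hints) to the
  // message while still returning an IOException the caller can throw.
  throw NetUtils.wrapException(url.getHost(), url.getPort(),
      "localhost", 0, e);
}
{code}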



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2

2017-11-30 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273142#comment-16273142
 ] 

Chris Douglas commented on HADOOP-14964:


It [looks like|https://github.com/aalmiray/Json-lib/blob/master/LICENSE.txt] 
json-lib is ALv2, but it 
[includes|https://github.com/aalmiray/Json-lib/blob/master/pom.xml#L27](?) the 
(Cat-X) json.org dependency.

> AliyunOSS: backport Aliyun OSS module to branch-2
> -
>
> Key: HADOOP-14964
> URL: https://issues.apache.org/jira/browse/HADOOP-14964
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: Genmao Yu
>Assignee: SammiChen
> Fix For: 2.9.1
>
> Attachments: HADOOP-14964-branch-2.000.patch, 
> HADOOP-14964-branch-2.8.000.patch, HADOOP-14964-branch-2.8.001.patch, 
> HADOOP-14964-branch-2.9.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2

2017-11-30 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273127#comment-16273127
 ] 

Chris Douglas commented on HADOOP-14964:


bq. net.sf.json-lib/json-lib(not used in Hadoop)
If this is an LGPLv2 [dependency|http://json.sf.net], then that's a 
[problem|https://www.apache.org/legal/resolved.html#category-x] not only for 
the backport but also for including Aliyun OSS in _any_ ASF release. Please 
find an alternative JSON library. When the json.org dependency was 
[reclassified|https://www.apache.org/legal/resolved.html#json], Ted Dunning 
wrote a compatible, ALv2 [replacement|https://s.apache.org/xmq9] that may be 
useful.

bq. Thanks for asking. I would like to take the RM role if possible. Also, a 
guide is strongly needed
The release [wiki|https://wiki.apache.org/hadoop/HowToRelease] is thorough, but 
as you run into issues please don't hesitate to ask for help. You'll want to 
push for/monitor resolution of [blockers|https://s.apache.org/T1rV] for the 2.9 
branch. You may also want to look for blockers for 2.8.3 and 3.0.0 that affect 
branch-2.9.

> AliyunOSS: backport Aliyun OSS module to branch-2
> -
>
> Key: HADOOP-14964
> URL: https://issues.apache.org/jira/browse/HADOOP-14964
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: Genmao Yu
>Assignee: SammiChen
> Fix For: 2.9.1
>
> Attachments: HADOOP-14964-branch-2.000.patch, 
> HADOOP-14964-branch-2.8.000.patch, HADOOP-14964-branch-2.8.001.patch, 
> HADOOP-14964-branch-2.9.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13493) Compatibility Docs should clarify the policy for what takes precedence when a conflict is found

2017-11-30 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273008#comment-16273008
 ] 

Robert Kanter commented on HADOOP-13493:


Oh, I meant to do that.  Thanks.  It's very easy to mix up branch-3.0 and 
branch-3.0.0.

> Compatibility Docs should clarify the policy for what takes precedence when a 
> conflict is found
> ---
>
> Key: HADOOP-13493
> URL: https://issues.apache.org/jira/browse/HADOOP-13493
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Robert Kanter
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-13493.001.patch, HADOOP-13493.002.patch
>
>
> The Compatibility Docs 
> (https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/Compatibility.html#Java_API)
>  list the policies for Private, Public, not annotated, etc Classes and 
> members, but it doesn't say what happens when there's a conflict.  We should 
> obviously try to avoid this situation, but it would be good to explicitly 
> state what takes precedence.
> As an example, until YARN-3225 made it consistent, {{RefreshNodesRequest}} 
> looked like this:
> {code:java}
> @Private
> @Stable
> public abstract class RefreshNodesRequest {
>   @Public
>   @Stable
>   public static RefreshNodesRequest newInstance() {
> RefreshNodesRequest request = 
> Records.newRecord(RefreshNodesRequest.class);
> return request;
>   }
> }
> {code}
> Note that the class is marked {{\@Private}}, but the method is marked 
> {{\@Public}}.
> In this example, I'd say that the class level should have priority.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14820) Wasb mkdirs security checks inconsistent with HDFS

2017-11-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273001#comment-16273001
 ] 

Steve Loughran commented on HADOOP-14820:
-

I should add that this appears to fix the problem where mkdirs() can fail 
near the root of a tree, because getParent() is being invoked before the 
check for null.

Here's a stack trace from a wasb client which *doesn't* have this patch in:
{code}
java.lang.NullPointerException
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.getAncestor(NativeAzureFileSystem.java:2404)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:2436)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:2422)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1924)
...
{code}

The directory being created is something like 
{{wasb://contr...@stevel.blob.core.windows.net/out/}} in some Spark code.

This makes me wonder whether this should be backported to 2.8.x. Thoughts?
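
For reference, a minimal sketch of the null-safe ancestor walk the fix implies; this is illustrative code under assumed names, not the actual NativeAzureFileSystem patch:
{code:java}
// Walk up the path with an explicit null check, so reaching the container
// root (where getParent() returns null) ends the loop instead of NPEing.
private Path getAncestor(Path path) throws IOException {
  for (Path current = path; current != null; current = current.getParent()) {
    if (exists(current)) {   // assumed existence probe against the store
      return current;
    }
  }
  return null;  // walked past the root without finding an existing ancestor
}
{code}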

> Wasb mkdirs security checks inconsistent with HDFS
> --
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.1
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14820-006.patch, HADOOP-14820-007.patch, 
> HADOOP-14820-branch-2-001.patch.txt, HADOOP-14820.001.patch, 
> HADOOP-14820.002.patch, HADOOP-14820.003.patch, HADOOP-14820.004.patch, 
> HADOOP-14820.005.patch
>
>
> No authorization checks should be made when a user tries to create (mkdirs 
> -p) an existing folder hierarchy.
> For example, if we start with _/home/hdiuser/prefix_ pre-created, and do the 
> following operations, the results should be as shown below.
> {noformat}
> hdiuser@hn0-0d2f67:~$ sudo chown root:root prefix
> hdiuser@hn0-0d2f67:~$ sudo chmod 555 prefix
> hdiuser@hn0-0d2f67:~$ ls -l
> dr-xr-xr-x 3 root root  4096 Aug 29 08:25 prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix/1
> mkdir: cannot create directory '/home/hdiuser/prefix/1': Permission denied
> The first three mkdirs succeed, because the ancestor is already present. The 
> fourth one fails because of a permission check against the (shorter) ancestor 
> (as compared to the path being created).
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13493) Compatibility Docs should clarify the policy for what takes precedence when a conflict is found

2017-11-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273002#comment-16273002
 ] 

Daniel Templeton commented on HADOOP-13493:
---

I also pulled it back into branch-3.0.0.

> Compatibility Docs should clarify the policy for what takes precedence when a 
> conflict is found
> ---
>
> Key: HADOOP-13493
> URL: https://issues.apache.org/jira/browse/HADOOP-13493
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Robert Kanter
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-13493.001.patch, HADOOP-13493.002.patch
>
>
> The Compatibility Docs 
> (https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/Compatibility.html#Java_API)
>  list the policies for Private, Public, not annotated, etc Classes and 
> members, but it doesn't say what happens when there's a conflict.  We should 
> try obviously try to avoid this situation, but it would be good to explicitly 
> state what takes precedence.
> As an example, until YARN-3225 made it consistent, {{RefreshNodesRequest}} 
> looked like this:
> {code:java}
> @Private
> @Stable
> public abstract class RefreshNodesRequest {
>   @Public
>   @Stable
>   public static RefreshNodesRequest newInstance() {
> RefreshNodesRequest request = 
> Records.newRecord(RefreshNodesRequest.class);
> return request;
>   }
> }
> {code}
> Note that the class is marked {{\@Private}}, but the method is marked 
> {{\@Public}}.
> In this example, I'd say that the class level should have priority.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-11-30 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272970#comment-16272970
 ] 

Sean Mackrory commented on HADOOP-14475:


I never saw the issue you were referring to and can't reproduce it now. I 
also can't reproduce the ITestS3AMetrics failure, even with the same arguments.

I do however see ITestS3AFileOperationCost#testFakeDirectoryDeletion failing 
with "object_delete_requests expected:<1> but was:<2>". This is the test that 
gets skipped when S3Guard is enabled, because the op counts vary in that 
scenario too, but this happens even if I revert this patch, so I believe it's 
unrelated.


> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14775.007.patch, failsafe-report-s3a-it.html, 
> failsafe-report-s3a-scale.html, failsafe-report-scale.html, 
> failsafe-report-scale.zip, s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job which should be using s3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15077) Support for setting user agent for (GCS)Google Cloud Storage

2017-11-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15077.
-
Resolution: Invalid

We don't have a GCS connector in the ASF codebase. Talk to Google.

> Support for setting user agent for (GCS)Google Cloud Storage
> 
>
> Key: HADOOP-15077
> URL: https://issues.apache.org/jira/browse/HADOOP-15077
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, fs
>Reporter: Yu LIU
>Priority: Minor
>
> Currently, when we use AWS/Azure/Aliyun as a FileSystem, we can set the *user 
> agent* for the underlying HTTP communication with these cloud providers via 
> the _fs.s3a.user.agent.prefix_, _fs.azure.user.agent.prefix_, or 
> _fs.oss.user.agent.prefix_ properties.
> But not for GCS (Google Cloud Storage). Is it possible to provide this feature?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15077) Support for setting user agent for (GCS)Google Cloud Storage

2017-11-30 Thread Yu LIU (JIRA)
Yu LIU created HADOOP-15077:
---

 Summary: Support for setting user agent for (GCS)Google Cloud 
Storage
 Key: HADOOP-15077
 URL: https://issues.apache.org/jira/browse/HADOOP-15077
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Yu LIU
Priority: Minor


Currently, when we use AWS/Azure/Aliyun as a FileSystem, we can set the *user 
agent* for the underlying HTTP communication with these cloud providers via 
the _fs.s3a.user.agent.prefix_, _fs.azure.user.agent.prefix_, or 
_fs.oss.user.agent.prefix_ properties.
But not for GCS (Google Cloud Storage). Is it possible to provide this feature?
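
For comparison, this is how the existing connectors pick up the prefix; the s3a key shown is real, while the value and any GCS analogue are assumptions:
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Existing stores read a user-agent prefix from configuration; a GCS
// connector would presumably follow the same pattern with its own key.
Configuration conf = new Configuration();
conf.set("fs.s3a.user.agent.prefix", "MyApp/1.0");
FileSystem fs = FileSystem.get(URI.create("s3a://bucket/"), conf);
{code}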



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15077) Support for setting user agent for (GCS)Google Cloud Storage

2017-11-30 Thread Yu LIU (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu LIU updated HADOOP-15077:

Component/s: common

> Support for setting user agent for (GCS)Google Cloud Storage
> 
>
> Key: HADOOP-15077
> URL: https://issues.apache.org/jira/browse/HADOOP-15077
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, fs
>Reporter: Yu LIU
>Priority: Minor
>
> Currently, when we use AWS/Azure/Aliyun as a FileSystem, we can set the *user 
> agent* for the underlying HTTP communication with these cloud providers via 
> the _fs.s3a.user.agent.prefix_, _fs.azure.user.agent.prefix_, or 
> _fs.oss.user.agent.prefix_ properties.
> But not for GCS (Google Cloud Storage). Is it possible to provide this feature?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2017-11-30 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272896#comment-16272896
 ] 

Ajay Kumar commented on HADOOP-14775:
-

[~ajisakaa] Could you please have a look at the new patch?

> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
> --
>
> Key: HADOOP-14775
> URL: https://issues.apache.org/jira/browse/HADOOP-14775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>  Labels: junit5
> Attachments: HADOOP-14775.01.patch, HADOOP-14775.02.patch
>
>
> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13493) Compatibility Docs should clarify the policy for what takes precedence when a conflict is found

2017-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272846#comment-16272846
 ] 

Hudson commented on HADOOP-13493:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13296 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13296/])
HADOOP-13493. Compatibility Docs should clarify the policy for what (rkanter: 
rev 75a3ab88f5f4ea6abf0a56cb8058e17b5a5fe403)
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md


> Compatibility Docs should clarify the policy for what takes precedence when a 
> conflict is found
> ---
>
> Key: HADOOP-13493
> URL: https://issues.apache.org/jira/browse/HADOOP-13493
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Robert Kanter
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-13493.001.patch, HADOOP-13493.002.patch
>
>
> The Compatibility Docs 
> (https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/Compatibility.html#Java_API)
>  list the policies for Private, Public, not annotated, etc Classes and 
> members, but it doesn't say what happens when there's a conflict.  We should 
> obviously try to avoid this situation, but it would be good to explicitly 
> state what takes precedence.
> As an example, until YARN-3225 made it consistent, {{RefreshNodesRequest}} 
> looked like this:
> {code:java}
> @Private
> @Stable
> public abstract class RefreshNodesRequest {
>   @Public
>   @Stable
>   public static RefreshNodesRequest newInstance() {
> RefreshNodesRequest request = 
> Records.newRecord(RefreshNodesRequest.class);
> return request;
>   }
> }
> {code}
> Note that the class is marked {{\@Private}}, but the method is marked 
> {{\@Public}}.
> In this example, I'd say that the class level should have priority.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-30 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-15059:

Attachment: HADOOP-15059.005.patch

Thanks for the reviews, Daryn and Vinod!  I'm attaching a patch that moves 
the byte-value-to-enum mapping into the enum itself.  I'm not convinced the 
refactor was worth it, but here it is for you to decide.

bq. If only YARN had a system whereby one could dynamically label a node based 
upon the current software stack. Then one could schedule around that and/or set 
up distributed cache content to match.

The RM tracks which version each NM registered with, so it knows the software 
stack being provided by that node.  Unfortunately YARN doesn't know what 
software stack the user's application expects, so it cannot fix up the 
distributed cache to match the expectation.  In addition, the framework could 
be embedded in the user's app, which cannot be fixed in the general case by 
tweaking the distributed cache.
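
A sketch of the "mapping inside the enum" shape described above; the names and byte values are illustrative, not the actual patch:
{code:java}
enum TokenStorageFormat {
  WRITABLE((byte) 0),
  PROTOBUF((byte) 1);

  private final byte value;

  TokenStorageFormat(byte value) {
    this.value = value;
  }

  /** Map a serialized byte back to its format, failing loudly on junk. */
  static TokenStorageFormat fromByte(byte b) {
    for (TokenStorageFormat format : values()) {
      if (format.value == b) {
        return format;
      }
    }
    throw new IllegalArgumentException("Unknown token storage format " + b);
  }
}
{code}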

> 3.0 deployment cannot work with old version MR tar ball which break rolling 
> upgrade
> ---
>
> Key: HADOOP-15059
> URL: https://issues.apache.org/jira/browse/HADOOP-15059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Assignee: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-15059.001.patch, HADOOP-15059.002.patch, 
> HADOOP-15059.003.patch, HADOOP-15059.004.patch, HADOOP-15059.005.patch
>
>
> I tried to deploy a 3.0 cluster with a 2.9 MR tar ball. The MR job failed 
> because of the following error:
> {noformat}
> 2017-11-21 12:42:50,911 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1511295641738_0003_01
> 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2017-11-21 12:42:51,118 FATAL [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.lang.RuntimeException: Unable to determine current user
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:220)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:212)
>   at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638)
> Caused by: java.io.IOException: Exception reading 
> /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252)
>   ... 4 more
> Caused by: java.io.IOException: Unknown version 1 in token storage.
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226)
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205)
>   ... 8 more
> 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1: java.lang.RuntimeException: Unable to determine current user
> {noformat}
> I think it is due to a token incompatibility change between 2.9 and 3.0. As we 
> claim "rolling upgrade" is supported in Hadoop 3, we should fix this before 
> we ship 3.0; otherwise all running MR applications will get stuck during/after 
> the upgrade.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-30 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272842#comment-16272842
 ] 

Daryn Sharp commented on HADOOP-15059:
--

bq. ContainerLaunchContext.tokens in YARN is unfortunately a byte-buffer.  
Taking a protobuf, wrapping it into a byte-buffer and sending it to the RM is 
backwards to me.

I'm not sure I understand.  All the code parsing that buffer wraps a stream 
around the byte buffer and invokes {{Credentials#readTokenStorageStream}}.  The 
credentials are still more than a simple PB: there's a header of magic bytes 
and a format version, so encoding this into a byte buffer doesn't seem wrong.

bq. The patch looks mostly good to me.

What would remove the word mostly from this sentence? :)
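
For context, a rough sketch of the framing being described; this is a simplified illustration, with the real logic living in {{Credentials#readTokenStorageStream}}:
{code:java}
DataInputStream in =
    new DataInputStream(new ByteArrayInputStream(tokenBytes));
byte[] magic = new byte[4];
in.readFully(magic);           // fixed marker identifying a token storage file
byte version = in.readByte();  // storage format version byte
// ... dispatch on 'version' to the matching (writable/protobuf) reader ...
{code}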


> 3.0 deployment cannot work with old version MR tar ball which break rolling 
> upgrade
> ---
>
> Key: HADOOP-15059
> URL: https://issues.apache.org/jira/browse/HADOOP-15059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Assignee: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-15059.001.patch, HADOOP-15059.002.patch, 
> HADOOP-15059.003.patch, HADOOP-15059.004.patch
>
>
> I tried to deploy a 3.0 cluster with a 2.9 MR tar ball. The MR job failed 
> because of the following error:
> {noformat}
> 2017-11-21 12:42:50,911 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1511295641738_0003_01
> 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2017-11-21 12:42:51,118 FATAL [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.lang.RuntimeException: Unable to determine current user
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:220)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:212)
>   at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638)
> Caused by: java.io.IOException: Exception reading 
> /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252)
>   ... 4 more
> Caused by: java.io.IOException: Unknown version 1 in token storage.
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226)
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205)
>   ... 8 more
> 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1: java.lang.RuntimeException: Unable to determine current user
> {noformat}
> I think it is due to a token incompatibility change between 2.9 and 3.0. As we 
> claim "rolling upgrade" is supported in Hadoop 3, we should fix this before 
> we ship 3.0; otherwise all running MR applications will get stuck during/after 
> the upgrade.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13493) Compatibility Docs should clarify the policy for what takes precedence when a conflict is found

2017-11-30 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-13493:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks [~templedf].  Committed to trunk and branch-3.0!

> Compatibility Docs should clarify the policy for what takes precedence when a 
> conflict is found
> ---
>
> Key: HADOOP-13493
> URL: https://issues.apache.org/jira/browse/HADOOP-13493
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Robert Kanter
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-13493.001.patch, HADOOP-13493.002.patch
>
>
> The Compatibility Docs 
> (https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/Compatibility.html#Java_API)
>  list the policies for Private, Public, not annotated, etc Classes and 
> members, but it doesn't say what happens when there's a conflict.  We should 
> obviously try to avoid this situation, but it would be good to explicitly 
> state what takes precedence.
> As an example, until YARN-3225 made it consistent, {{RefreshNodesRequest}} 
> looked like this:
> {code:java}
> @Private
> @Stable
> public abstract class RefreshNodesRequest {
>   @Public
>   @Stable
>   public static RefreshNodesRequest newInstance() {
> RefreshNodesRequest request = 
> Records.newRecord(RefreshNodesRequest.class);
> return request;
>   }
> }
> {code}
> Note that the class is marked {{\@Private}}, but the method is marked 
> {{\@Public}}.
> In this example, I'd say that the class level should have priority.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13493) Compatibility Docs should clarify the policy for what takes precedence when a conflict is found

2017-11-30 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272814#comment-16272814
 ] 

Robert Kanter commented on HADOOP-13493:


Sorry for the delay.

It now says that the more restrictive annotation has priority, which I think 
makes the most sense.
+1 LGTM 

> Compatibility Docs should clarify the policy for what takes precedence when a 
> conflict is found
> ---
>
> Key: HADOOP-13493
> URL: https://issues.apache.org/jira/browse/HADOOP-13493
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Robert Kanter
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HADOOP-13493.001.patch, HADOOP-13493.002.patch
>
>
> The Compatibility Docs 
> (https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/Compatibility.html#Java_API)
>  list the policies for Private, Public, not annotated, etc Classes and 
> members, but it doesn't say what happens when there's a conflict.  We should 
> obviously try to avoid this situation, but it would be good to explicitly 
> state what takes precedence.
> As an example, until YARN-3225 made it consistent, {{RefreshNodesRequest}} 
> looked like this:
> {code:java}
> @Private
> @Stable
> public abstract class RefreshNodesRequest {
>   @Public
>   @Stable
>   public static RefreshNodesRequest newInstance() {
> RefreshNodesRequest request = 
> Records.newRecord(RefreshNodesRequest.class);
> return request;
>   }
> }
> {code}
> Note that the class is marked {{\@Private}}, but the method is marked 
> {{\@Public}}.
> In this example, I'd say that the class level should have priority.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272696#comment-16272696
 ] 

genericqa commented on HADOOP-1:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 30 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 43s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 36s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-1 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1288/HADOOP-1.12.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 82919017a49b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 

[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-11-30 Thread Lukas Waldmann (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272674#comment-16272674
 ] 

Lukas Waldmann commented on HADOOP-1:
-

Overall comment: I don't like using this patch system - it's fine for small 
changes, but for a whole module it's a pain. It's hard to track changes, make 
diffs, etc. Wouldn't it be possible to have a branch and push things directly to it?

See my comments:
Tests
why is the FTP test skipped on Windows?
{color:#8eb021}Hmm, good question - it actually comes from the original 
implementation of the sftp filesystem 
(hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFT)
 and I have no idea why Windows was excluded. I will try to find a Windows 
machine and run it there.
{color}

we tend to use `setup()` `teardown()` as the @Before/@after operations in 
filesystems. Having standard names makes it more consistent when 
subclassing...and having >1 before/after method puts you into ambiguous 
ordering. Fix: change the names, subclass as appropriate, calling the 
superclass method as desired.
{color:#8eb021}I will try - there was some problem, if I remember correctly, 
but I will see what I can do{color}

I like what you've done with the mixin to reuse all the tests, but I'd prefer 
a name more unique to the FS than ContractTestBase. FTPContractTestMixin?
{color:#8eb021}will do{color}

Never thought about having `AbstractFSContract createContract()` raise an IOE. 
We could add that to its signature (best in a separate JIRA)
{color:#8eb021}I'm not sure I understand you here{color}

You are importing the distcp tests but not using them. What's your plan there? 
Get this patch in and then add that as the next iteration?
{color:#8eb021}ITestContractDistCp is using the distcp tests and runs fine - 
did I miss something?{color}

Docs readme should go into src/site/org/apache/hadoop/ftpextended/index.md
{color:#8eb021}will do{color}

Misc minor points
need to rename AbstractFileSystem to a class name which isn't used elsewhere, 
e.g. AbstractFTPFileSystem
{color:#8eb021}oki{color}

use try-with-resources around channel logic and have the implicit 
channel.close() do the disconnect
{color:#8eb021}I tend to use it when possible; please let me know if you see 
some specific place{color}
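
What that suggestion looks like in practice; {{Channel}} and {{pool}} are hypothetical stand-ins for the patch's own classes:
{code:java}
// Channel is assumed to implement AutoCloseable, with close() returning
// the connection to the pool (or disconnecting it).
try (Channel channel = pool.borrow(uri)) {
  channel.mkdir(path);
}
{code}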
 
There are lots of opportunities to use subclasses of IOE where it is useful to 
provide more meaningful failures.
{color:#8eb021}sure, though I'm not sure I'll have time to investigate and 
replace them{color}

the style guidelines have conventions on import ordering we strive to 
maintain, especially in new code
{color:#8eb021}do you have a link?{color}

hadoop code prefers a space after // in comments; a search & replace should fix it
{color:#8eb021}will do{color}

org/apache/hadoop/fs/ftpextended/ftp/package-info.java should declare code as 
@Private+Unstable. Even if the FS is public, there's no API coming from this 
module, nor stability guarantees.
{color:#8eb021}will do{color}

Unless it's going to leak passwords, error messages should try to include the 
filesystem URI in them. Why? It helps debugging when the job is working with >1 
FS and all you have is a log to go on.
{color:#8eb021}will try to find a generic way to do it, but it may have to 
wait for later{color}
 
When wrapping library exceptions (e.g. SFTP exceptions), always include the 
toString() value of the wrapped exception. It'll be the string most likely to 
make it into bug reports.
{color:#8eb021}oki{color}

core-site.xml mentions s3
{color:#8eb021}Just C - removed
{color}

Security
I'm moving to a world where we have to provide security audits of sensitive 
patches, which this is.
What's the security mechanism here?
{color:#8eb021}Well, so far none :){color}

Is Configuration.getPassword() used to get secrets through JCEKS files?
{color:#8eb021}No{color}

I see that user:password is supported. I don't like this. I guess given it's 
only FTP it doesn't matter that much, but for SFTP it does.
And on the topic of SFTP, what to do there?
{color:#8eb021}I don't like it much either - but distributing private keys 
across the cluster I like even less, and you need that in the case of distcp 
or some other MR job.
I guess it is a question for deeper discussion - I can probably use some 
mechanism used in other filesystems? Do you have some ideas?{color}

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.10.patch, HADOOP-1.11.patch, 
> HADOOP-1.12.patch, HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.6.patch, 
> 

[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-11-30 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-1:

Status: Patch Available  (was: In Progress)

fix compilation and javadoc warnings

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.10.patch, HADOOP-1.11.patch, 
> HADOOP-1.12.patch, HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.6.patch, 
> HADOOP-1.7.patch, HADOOP-1.8.patch, HADOOP-1.9.patch, 
> HADOOP-1.patch
>
>
> Current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool (a sketch of the idea follows 
> below this description).
> For a huge number of files it shows an order-of-magnitude performance 
> improvement over non-pooled connections.
> * Caching of directory trees. For ftp you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files it shows an order-of-magnitude performance 
> improvement over non-cached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across a whole directory tree
> * Support for reestablishing broken ftp data transfers - which can happen 
> surprisingly often
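
A minimal sketch of the connection-pooling idea from the list above, assuming commons-net's FTPClient; the pool key and connect helper are illustrative, not the patch code:
{code:java}
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

import org.apache.commons.net.ftp.FTPClient;

class FtpConnectionPool {
  private final ConcurrentMap<String, Queue<FTPClient>> idle =
      new ConcurrentHashMap<>();

  /** Reuse an idle connection for this host/user key, or open a new one. */
  FTPClient borrow(String key) throws IOException {
    Queue<FTPClient> q = idle.get(key);
    FTPClient client = (q == null) ? null : q.poll();
    if (client != null && client.isConnected()) {
      return client;
    }
    return connect(key);
  }

  /** Return a connection to the pool instead of disconnecting it. */
  void release(String key, FTPClient client) {
    idle.computeIfAbsent(key, k -> new ConcurrentLinkedQueue<>()).offer(client);
  }

  private FTPClient connect(String key) throws IOException {
    // Placeholder: parse host/user out of the key, connect and log in here.
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}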



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-11-30 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-1:

Attachment: HADOOP-1.12.patch

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.10.patch, HADOOP-1.11.patch, 
> HADOOP-1.12.patch, HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.6.patch, 
> HADOOP-1.7.patch, HADOOP-1.8.patch, HADOOP-1.9.patch, 
> HADOOP-1.patch
>
>
> Current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files it shows an order-of-magnitude performance 
> improvement over non-pooled connections.
> * Caching of directory trees. For ftp you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files it shows an order-of-magnitude performance 
> improvement over non-cached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across a whole directory tree
> * Support for reestablishing broken ftp data transfers - which can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-11-30 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-1:

Status: In Progress  (was: Patch Available)

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.10.patch, HADOOP-1.11.patch, 
> HADOOP-1.2.patch, HADOOP-1.3.patch, HADOOP-1.4.patch, 
> HADOOP-1.5.patch, HADOOP-1.6.patch, HADOOP-1.7.patch, 
> HADOOP-1.8.patch, HADOOP-1.9.patch, HADOOP-1.patch
>
>
> Current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files it shows an order-of-magnitude performance 
> improvement over non-pooled connections.
> * Caching of directory trees. For ftp you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files it shows an order-of-magnitude performance 
> improvement over non-cached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across a whole directory tree
> * Support for reestablishing broken ftp data transfers - which can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272479#comment-16272479
 ] 

genericqa commented on HADOOP-15056:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 43s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 3s{color} | {color:orange} root: The patch generated 2 new + 5 unchanged - 0 fixed = 7 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 25s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 20s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15056 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899955/HADOOP-15056.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 441ce6b71903 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0e560f3 |
| maven | 

[jira] [Created] (HADOOP-15076) s3a troubleshooting to add "things don't work after I dropped in a new AWS SDK JAR"

2017-11-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15076:
---

 Summary: s3a troubleshooting to add "things don't work after I 
dropped in a new AWS SDK JAR"
 Key: HADOOP-15076
 URL: https://issues.apache.org/jira/browse/HADOOP-15076
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation, fs/s3
Affects Versions: 2.8.2
Reporter: Steve Loughran


A recurrent theme in s3a-related JIRAs, support calls etc. is "I tried upgrading 
the AWS SDK JAR and then I got the error ...". We know here that the answer is 
"don't do that", but it's not immediately obvious to the many downstream users 
who want to be able to drop in the new JAR to fix things or add new features.

We need to spell this out quite clearly: "you cannot safely expect to do this. 
If you want to upgrade the SDK, you will need to rebuild the whole of 
hadoop-aws with the maven POM updated to the latest version, ideally rerunning 
all the tests to make sure something hasn't broken." A sketch of that rebuild 
is below.

Maybe near the top of the index.md file, along with "never share your AWS 
credentials with anyone".
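
For anyone who does attempt it, a minimal sketch of what the rebuild involves, 
assuming the SDK version is still managed through the {{aws-java-sdk.version}} 
property in {{hadoop-project/pom.xml}}; verify the property name and the test 
setup against the branch you actually build from:

{code:bash}
# Hedged walk-through, not official guidance.
# 1. In a source checkout matching the Hadoop release you run, edit
#    hadoop-project/pom.xml and set <aws-java-sdk.version> to the new SDK.
# 2. Rebuild hadoop-aws along with the modules it depends on:
mvn clean install -DskipTests -pl hadoop-tools/hadoop-aws -am
# 3. Re-run the hadoop-aws tests against a real bucket; this needs an
#    auth-keys.xml with test credentials in the module's test resources.
mvn verify -pl hadoop-tools/hadoop-aws
{code}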







[jira] [Commented] (HADOOP-15074) SequenceFile#Writer flush does not update the length of the written file.

2017-11-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272463#comment-16272463
 ] 

Steve Loughran commented on HADOOP-15074:
-

You mean that after a flush/sync, the length of the file returned by a 
listing/getFileStatus hasn't changed?

> SequenceFile#Writer flush does not update the length of the written file.
> -
>
> Key: HADOOP-15074
> URL: https://issues.apache.org/jira/browse/HADOOP-15074
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>
> SequenceFile#Writer flush does not update the length of the file. This 
> happens because, as part of the flush, the {{UPDATE_LENGTH}} flag is not 
> passed to {{DFSOutputStream#hsync}}.
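
For reference, a minimal sketch of forcing the length update at the filesystem 
API level by passing {{SyncFlag.UPDATE_LENGTH}} explicitly when the stream is 
an {{HdfsDataOutputStream}}; the path and buffer size here are illustrative 
only:

{code:java}
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;

public class UpdateLengthOnSync {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path p = new Path("/tmp/length-demo");   // hypothetical test path
    try (FSDataOutputStream out = fs.create(p)) {
      out.write(new byte[4096]);
      if (out instanceof HdfsDataOutputStream) {
        // UPDATE_LENGTH asks the NameNode to record the synced length, so a
        // subsequent getFileStatus() reflects the bytes written so far.
        ((HdfsDataOutputStream) out).hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));
      } else {
        out.hsync();  // non-HDFS streams: length behaviour is fs-specific
      }
      System.out.println("visible length: " + fs.getFileStatus(p).getLen());
    }
  }
}
{code}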


