[jira] [Commented] (HADOOP-14726) Mark FileStatus::isDir as final

2017-08-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126827#comment-16126827
 ] 

Hudson commented on HADOOP-14726:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12187 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12187/])
HADOOP-14726. Mark FileStatus::isDir as final (cdouglas: rev 
645a8f2a4d09acb5a21820f52ee78784d9e4cc8a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) 
hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftFileStatus.java
* (edit) 
hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemDirectories.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* (edit) 
hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemPartitionedUploads.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestFailureToReadEdits.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestOutOfBandAzureBlobOperations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStartup.java
* (edit) 
hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsFileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsLocatedFileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestInitializeSharedEdits.java


> Mark FileStatus::isDir as final
> ---
>
> Key: HADOOP-14726
> URL: https://issues.apache.org/jira/browse/HADOOP-14726
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14726.000.patch, HADOOP-14726.001.patch, 
> HADOOP-14726.002.patch, HADOOP-14726.003.patch, HADOOP-14726.004.patch
>
>
> FileStatus#isDir was deprecated in 0.21 (HADOOP-6585).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126823#comment-16126823
 ] 

Andrew Wang commented on HADOOP-14398:
--

Thanks for the rev Eddy! Almost there, a few more things:

* Can we update the example in FSDataOutputStream as well to use a 
FooFileSystem and fake options too?
* There are still mentions of realistic-looking names in the doc that I'd 
prefer were fake ones.
* Looking at FSDataOutputStream, the javadoc for must and opt should have a 
{{@throws IllegalArgumentException}} instead of linking. The overloads for the 
non-String types would also ideally {{@see #must(String, String)}} or {{@see 
#opt(String, String)}} as appropriate.

Reading Aaron's previous review and combining with my own, it sounds like we 
are deferring the following to future work / more discussion:

* Specify what happens when must and opt conflict
* Are there provisions for probing FS capabilities without must?
* Where to put any HDFS-specific information about the builder options

I'd appreciate it if you could start conversations / file follow-on JIRAs as 
appropriate.
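To make the review comments above concrete, here is a minimal standalone sketch of the must/opt builder semantics under discussion, using a made-up FooCreateBuilder and fake option names (none of this is the real Hadoop API; the supported-key set and validation point are assumptions for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: mandatory options set via must() fail the build if the
// filesystem cannot honor them; advisory options set via opt() are ignored.
public class FooCreateBuilder {
    // Keys the imaginary FooFileSystem understands (illustrative only).
    private static final Set<String> SUPPORTED =
        Set.of("foo.buffer.size", "foo.fast.upload");

    private final Map<String, String> mandatory = new HashMap<>();
    private final Map<String, String> optional = new HashMap<>();

    /**
     * Set a mandatory option.
     * @throws IllegalArgumentException at build time if the key is unsupported
     */
    public FooCreateBuilder must(String key, String value) {
        mandatory.put(key, value);
        return this;
    }

    /** Set an advisory option; unknown keys are silently ignored. */
    public FooCreateBuilder opt(String key, String value) {
        optional.put(key, value);
        return this;
    }

    /** Validate mandatory options, as a real build() would before creating a stream. */
    public void build() {
        for (String key : mandatory.keySet()) {
            if (!SUPPORTED.contains(key)) {
                throw new IllegalArgumentException("Unsupported mandatory option: " + key);
            }
        }
    }

    public static void main(String[] args) {
        // opt() with an unknown key is tolerated; must() with one is not.
        new FooCreateBuilder().must("foo.fast.upload", "true")
                              .opt("no.such.key", "1")
                              .build();
        System.out.println("opt ignored, must honored");
        try {
            new FooCreateBuilder().must("no.such.key", "1").build();
        } catch (IllegalArgumentException e) {
            System.out.println("must rejected: " + e.getMessage());
        }
    }
}
```

This is the distinction the javadoc fix targets: the {{IllegalArgumentException}} contract belongs on must, and the non-String overloads can point back at the String form.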

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: documentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch, 
> HADOOP-14398.02.patch
>
>
> After finishing the API, we should update the documentation to describe the 
> interface, capabilities, and contract that the APIs hold.






[jira] [Updated] (HADOOP-14726) Mark FileStatus::isDir as final

2017-08-14 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14726:
---
   Resolution: Fixed
 Assignee: Chris Douglas
 Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

I committed this. Thanks for the review, [~ste...@apache.org]

> Mark FileStatus::isDir as final
> ---
>
> Key: HADOOP-14726
> URL: https://issues.apache.org/jira/browse/HADOOP-14726
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14726.000.patch, HADOOP-14726.001.patch, 
> HADOOP-14726.002.patch, HADOOP-14726.003.patch, HADOOP-14726.004.patch
>
>
> FileStatus#isDir was deprecated in 0.21 (HADOOP-6585).






[jira] [Commented] (HADOOP-14693) Upgrade JUnit from 4 to 5

2017-08-14 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126796#comment-16126796
 ] 

Ajay Kumar commented on HADOOP-14693:
-

[~steve_l] Agreed! I will link a JIRA to upgrade the JUnit dependency in the parent 
pom with junit-vintage-engine. 
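The migration path mentioned above — keeping the existing JUnit 4 tests running on the JUnit 5 platform via junit-vintage-engine — would look roughly like this in a parent pom. The version numbers below are illustrative, not taken from the actual Hadoop build:

```xml
<!-- Sketch only: versions are placeholders, not Hadoop's. -->
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-engine</artifactId>
  <version>5.0.0</version>
  <scope>test</scope>
</dependency>
<!-- Runs unmodified JUnit 4 tests on the JUnit 5 platform. -->
<dependency>
  <groupId>org.junit.vintage</groupId>
  <artifactId>junit-vintage-engine</artifactId>
  <version>4.12.0</version>
  <scope>test</scope>
</dependency>
```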

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Updated] (HADOOP-14726) Mark FileStatus::isDir as final

2017-08-14 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14726:
---
Release Note: The deprecated FileStatus::isDir method has been marked as 
final. FileSystems should override FileStatus::isDirectory.
 Summary: Mark FileStatus::isDir as final  (was: Mark FileStatus#isDir 
as final)
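The release note's pattern — a deprecated method made final so it can only delegate to its replacement — can be illustrated with a minimal standalone sketch. The class names below are made up; the real class is org.apache.hadoop.fs.FileStatus:

```java
// Minimal illustration of the HADOOP-14726 pattern (hypothetical classes).
class StatusBase {
    /** Subclasses override this, not isDir. */
    public boolean isDirectory() {
        return false;
    }

    /**
     * Deprecated delegate kept for compatibility; marked final so that
     * overriding it instead of isDirectory is now a compile-time error.
     */
    @Deprecated
    public final boolean isDir() {
        return isDirectory();
    }
}

class DirStatus extends StatusBase {
    @Override
    public boolean isDirectory() {
        return true;  // overriding here keeps the deprecated isDir() consistent
    }
}

public class IsDirFinalDemo {
    public static void main(String[] args) {
        // The final isDir() delegates, so old callers still see the right answer.
        System.out.println(new DirStatus().isDir());  // prints: true
    }
}
```

Subclasses that previously overrode isDir (as several in the commit's file list did) must move their logic to isDirectory.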

> Mark FileStatus::isDir as final
> ---
>
> Key: HADOOP-14726
> URL: https://issues.apache.org/jira/browse/HADOOP-14726
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-14726.000.patch, HADOOP-14726.001.patch, 
> HADOOP-14726.002.patch, HADOOP-14726.003.patch, HADOOP-14726.004.patch
>
>
> FileStatus#isDir was deprecated in 0.21 (HADOOP-6585).






[jira] [Updated] (HADOOP-14726) Mark FileStatus#isDir as final

2017-08-14 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14726:
---
Summary: Mark FileStatus#isDir as final  (was: Remove FileStatus#isDir)

> Mark FileStatus#isDir as final
> --
>
> Key: HADOOP-14726
> URL: https://issues.apache.org/jira/browse/HADOOP-14726
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-14726.000.patch, HADOOP-14726.001.patch, 
> HADOOP-14726.002.patch, HADOOP-14726.003.patch, HADOOP-14726.004.patch
>
>
> FileStatus#isDir was deprecated in 0.21 (HADOOP-6585).






[jira] [Commented] (HADOOP-13998) Merge initial s3guard release into trunk

2017-08-14 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126687#comment-16126687
 ] 

Sean Mackrory commented on HADOOP-13998:


Thanks, [~steve_l] - I'm reviewing (is a committer +1 even binding on this? I 
seem to recall branch merges being PMC votes, but I don't see that in the 
by-laws). I ran tests locally as well and only got some typical S3N flakiness:

{code}
testListStatus(org.apache.hadoop.fs.s3native.ITestJets3tNativeS3FileSystemContract)
  Time elapsed: 2.527 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<12>

testRenameDirectoryAsExistingFile(org.apache.hadoop.fs.s3native.ITestJets3tNativeS3FileSystemContract)
  Time elapsed: 1.38 sec  <<< FAILURE!
java.lang.AssertionError: Source exists expected:<true> but was:<false>
{code}

> Merge initial s3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk






[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-08-14 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126644#comment-16126644
 ] 

Eric Payne commented on HADOOP-9747:


[~daryn], the following tests are failing with this patch and succeeding 
without it on trunk:

{noformat}
TestTokenClientRMService#testTokenRenewalByLoginUser
testTokenRenewalByLoginUser(org.apache.hadoop.yarn.server.resourcemanager.TestTokenClientRMService)
  Time elapsed: 0.043 sec  <<< ERROR!
java.lang.reflect.UndeclaredThrowableException: null
at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.renewDelegationToken(ClientRMService.java:1058)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestTokenClientRMService.checkTokenRenewal(TestTokenClientRMService.java:169)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestTokenClientRMService.access$500(TestTokenClientRMService.java:46)

TestRMDelegationTokens#testRMDTMasterKeyStateOnRollingMasterKey
testRMDTMasterKeyStateOnRollingMasterKey(org.apache.hadoop.yarn.server.resourcemanager.security.TestRMDelegationTokens)
  Time elapsed: 0.792 sec  <<< ERROR!
org.apache.hadoop.yarn.exceptions.YarnException: java.io.IOException: 
Delegation Token can be issued only with kerberos authentication
at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getDelegationToken(ClientRMService.java:1022)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.TestRMDelegationTokens.testRMDTMasterKeyStateOnRollingMasterKey(TestRMDelegationTokens.java:102)
{noformat}

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.






[jira] [Commented] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-14 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126640#comment-16126640
 ] 

Subru Krishnan commented on HADOOP-14773:
-

Thanks [~elgoiri] for the patch. It looks fairly straightforward, +1 (pending 
Yetus).

> Extend ZKCuratorManager API
> ---
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14773-000.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}






[jira] [Updated] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-14773:
-
Attachment: HADOOP-14773-000.patch

> Extend ZKCuratorManager API
> ---
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14773-000.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}






[jira] [Updated] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-14773:
-
Description: 
HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
* {{boolean delete()}}
* {{getData(path, stat)}}
* {{createRootDirRecursively(path)}}
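The intended semantics of these helpers can be sketched against an in-memory store; the real ZKCuratorManager wraps Apache Curator and talks to ZooKeeper, so everything below except the method names from the description is an assumption for illustration:

```java
import java.util.TreeSet;

// In-memory stand-in for a znode tree, to illustrate the helper semantics
// HDFS-10631 asks for (hypothetical implementation, not the real class).
public class ZkManagerSketch {
    private final TreeSet<String> znodes = new TreeSet<>();

    /** Delete a znode; returns true if it existed. */
    public boolean delete(String path) {
        return znodes.remove(path);
    }

    /** Create the path and every ancestor, in the spirit of mkdir -p. */
    public void createRootDirRecursively(String path) {
        StringBuilder current = new StringBuilder();
        for (String part : path.split("/")) {
            if (part.isEmpty()) {
                continue;
            }
            current.append('/').append(part);
            znodes.add(current.toString());
        }
    }

    public boolean exists(String path) {
        return znodes.contains(path);
    }

    public static void main(String[] args) {
        ZkManagerSketch zk = new ZkManagerSketch();
        zk.createRootDirRecursively("/federation/router/state");
        System.out.println(zk.exists("/federation/router"));       // parent created too
        System.out.println(zk.delete("/federation/router/state")); // existed, so true
    }
}
```

The getData(path, stat) variant from the description is omitted here since a Stat object only makes sense against a real ZooKeeper ensemble.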

> Extend ZKCuratorManager API
> ---
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14773-000.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}






[jira] [Commented] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations

2017-08-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126628#comment-16126628
 ] 

Hadoop QA commented on HADOOP-14732:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 38s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.ipc.TestRPC |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14732 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881519/HADOOP-14732.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 97b4df6d9d68 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5558792 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13032/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13032/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13032/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ProtobufRpcEngine should use Time.monotonicNow to measure durations
> ---
>
> Key: HADOOP-14732
> URL: 

[jira] [Commented] (HADOOP-14738) Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1

2017-08-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126622#comment-16126622
 ] 

Andrew Wang commented on HADOOP-14738:
--

Is anyone planning to pick this up and complete it by mid-September for beta1?

> Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Blocker
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-performance 
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, and the transitive 
> dependencies.
> I propose that in the Hadoop 3.0 beta we warn people off using it, and link 
> to a doc page (wiki?) about how to migrate (change URLs, update config options).
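The migration Steve describes is mostly mechanical: paths change from s3n://bucket/path to s3a://bucket/path, and credentials move to the fs.s3a.* configuration keys. A minimal illustrative core-site.xml fragment, with placeholder values and no claim of completeness:

```xml
<!-- Illustrative s3n -> s3a migration fragment; values are placeholders. -->
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
```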






[jira] [Created] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-14 Thread JIRA
Íñigo Goiri created HADOOP-14773:


 Summary: Extend ZKCuratorManager API
 Key: HADOOP-14773
 URL: https://issues.apache.org/jira/browse/HADOOP-14773
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri









[jira] [Commented] (HADOOP-14673) Remove leftover hadoop_xml_escape from functions

2017-08-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126608#comment-16126608
 ] 

Arpit Agarwal commented on HADOOP-14673:


Thanks for the contribution [~ajayydv].

> Remove leftover hadoop_xml_escape from functions
> 
>
> Key: HADOOP-14673
> URL: https://issues.apache.org/jira/browse/HADOOP-14673
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Ajay Kumar
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14673.01.patch
>
>
> This function is no longer needed, so let's purge it.






[jira] [Updated] (HADOOP-14673) Remove leftover hadoop_xml_escape from functions

2017-08-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-14673:
---
Target Version/s:   (was: 3.0.0-beta1)

> Remove leftover hadoop_xml_escape from functions
> 
>
> Key: HADOOP-14673
> URL: https://issues.apache.org/jira/browse/HADOOP-14673
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Ajay Kumar
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14673.01.patch
>
>
> This function is no longer needed, so let's purge it.






[jira] [Updated] (HADOOP-14673) Remove leftover hadoop_xml_escape from functions

2017-08-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-14673:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed this after checking there are no references to the removed functions 
and running shell tests locally.

> Remove leftover hadoop_xml_escape from functions
> 
>
> Key: HADOOP-14673
> URL: https://issues.apache.org/jira/browse/HADOOP-14673
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Ajay Kumar
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14673.01.patch
>
>
> This function is no longer needed, so let's purge it.






[jira] [Commented] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations

2017-08-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126602#comment-16126602
 ] 

Hudson commented on HADOOP-14732:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12184 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12184/])
HADOOP-14732. ProtobufRpcEngine should use Time.monotonicNow to measure (arp: 
rev 8bef4eca28a3466707cc4ea0de0330449319a5eb)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
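For context on why the committed change matters: wall-clock time can jump backwards (e.g. under an NTP step), so a duration computed from it can come out negative, while a monotonic clock cannot go backwards. A minimal standalone sketch of the idea — monotonicNow below is a stand-in built on System.nanoTime, not the real org.apache.hadoop.util.Time:

```java
public class MonotonicElapsed {
    /** Milliseconds from a clock that never goes backwards (wall time can). */
    static long monotonicNow() {
        return System.nanoTime() / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = monotonicNow();
        Thread.sleep(20);  // stand-in for the RPC call being timed
        long elapsed = monotonicNow() - start;
        // With System.currentTimeMillis(), a clock adjustment during the call
        // could make elapsed negative; with a monotonic source it cannot be.
        System.out.println("elapsed >= 0: " + (elapsed >= 0));
    }
}
```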


> ProtobufRpcEngine should use Time.monotonicNow to measure durations
> ---
>
> Key: HADOOP-14732
> URL: https://issues.apache.org/jira/browse/HADOOP-14732
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14732.001.patch
>
>







[jira] [Updated] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations

2017-08-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-14732:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

TestRPC passed locally for me too. I've committed this.

Thanks for the contribution [~hanishakoneru].

> ProtobufRpcEngine should use Time.monotonicNow to measure durations
> ---
>
> Key: HADOOP-14732
> URL: https://issues.apache.org/jira/browse/HADOOP-14732
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14732.001.patch
>
>







[jira] [Created] (HADOOP-14772) Audit-log delegation token related operations to the KMS

2017-08-14 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-14772:
--

 Summary: Audit-log delegation token related operations to the KMS
 Key: HADOOP-14772
 URL: https://issues.apache.org/jira/browse/HADOOP-14772
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 2.6.0
Reporter: Xiao Chen
Assignee: Xiao Chen


When inspecting the code, I found that the following methods are not audit 
logged:
- getDelegationToken
- renewDelegationToken
- cancelDelegationToken

This JIRA proposes adding audit logging. A similar JIRA for HDFS is HDFS-12300.
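The kind of audit line the three un-audited operations could emit can be sketched as follows. This is purely illustrative: the class, method, and line format below are hypothetical and not taken from the real KMS audit logger:

```java
import java.time.Instant;

// Hypothetical sketch of one audit entry per delegation-token operation.
public class TokenAuditSketch {
    static String auditLine(String op, String user, boolean ok) {
        // One line per operation: timestamp, operation, caller, outcome.
        return Instant.now() + " op=" + op + " user=" + user
                + " result=" + (ok ? "OK" : "UNAUTHORIZED");
    }

    public static void main(String[] args) {
        System.out.println(auditLine("GET_DELEGATION_TOKEN", "alice", true));
        System.out.println(auditLine("RENEW_DELEGATION_TOKEN", "alice", true));
        System.out.println(auditLine("CANCEL_DELEGATION_TOKEN", "bob", false));
    }
}
```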







[jira] [Commented] (HADOOP-14660) wasb: improve throughput by 34% when account limit exceeded

2017-08-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126472#comment-16126472
 ] 

Hadoop QA commented on HADOOP-14660:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
26s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-azure in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  5m 
39s{color} | {color:red} root in the patch failed with JDK v1.8.0_144. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 39s{color} 
| {color:red} root in the patch failed with JDK v1.8.0_144. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  6m 
35s{color} | {color:red} root in the patch failed with JDK v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 35s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_131. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 27s{color} | {color:orange} root: The patch generated 2 new + 3 unchanged - 
22 fixed = 5 total (was 25) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-azure in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-azure in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.8.0_144 with JDK 
v1.8.0_144 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.7.0_131 with JDK 
v1.7.0_131 generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
35s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 20s{color} 
| {color:red} hadoop-azure in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HADOOP-14660 |
| JIRA Patch URL 

[jira] [Commented] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations

2017-08-14 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126414#comment-16126414
 ] 

Hanisha Koneru commented on HADOOP-14732:
-

Thanks for the review [~arpitagarwal].
The failing unit test TestRPC passes locally.
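For reference, the distinction the issue title draws is that durations should be measured with a monotonic clock, since the wall clock ({{System.currentTimeMillis()}}) can be stepped backwards or forwards by NTP or an operator mid-measurement. Hadoop's {{Time.monotonicNow()}} is essentially a millisecond wrapper around {{System.nanoTime()}}; a standalone JDK-only sketch of the idea (not the actual Hadoop class):

```java
public class MonotonicDuration {
    // Stand-in for Hadoop's Time.monotonicNow(): milliseconds from a
    // monotonic source, valid for measuring durations but meaningless
    // as wall-clock time.
    static long monotonicNow() {
        return System.nanoTime() / 1_000_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        // A currentTimeMillis() delta here could go negative if the system
        // clock were stepped backwards during the timed operation; a
        // monotonic delta cannot.
        long start = monotonicNow();
        Thread.sleep(50);                       // the RPC call being timed
        long elapsedMs = monotonicNow() - start;
        System.out.println("elapsed ms: " + elapsedMs);
    }
}
```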

> ProtobufRpcEngine should use Time.monotonicNow to measure durations
> ---
>
> Key: HADOOP-14732
> URL: https://issues.apache.org/jira/browse/HADOOP-14732
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HADOOP-14732.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126355#comment-16126355
 ] 

stack commented on HADOOP-14284:


bq. Does HBase shade across all code base, instead of just in client modules?

The whole code base.

We offer a shaded client jar for those confined to our public-facing client 
API. We've done a bad job evangelizing it up to this point, but we intend to go 
at it with gusto from our next major release on out. This approach should work 
for those confined to our client API, but as with hadoop, hbase has facets other 
than the client API. This is where the story gets messy; e.g. plugins, or 
clients reading/writing hbase files apart from an hbase instance. Shading our 
internals helps, as we avoid the possibility of clashes. On our backend, we are 
also too tightly tied to our upstream, a coupling we are working to undo, but we 
are not there yet. The internal shading helps here too.

Using the maven enforcer plugin has been suggested over in our project. Will 
pass on our experience if anything noteworthy turns up.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be using the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14766) Add an object store high performance dfs put command

2017-08-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126334#comment-16126334
 ] 

Steve Loughran commented on HADOOP-14766:
-

Note: I'm seeing lots of TIME_WAIT on TCP connections which, as they happen 
inside the transfer manager, are not something I know the cause of or the fix 
for yet.

> Add an object store high performance dfs put command
> 
>
> Key: HADOOP-14766
> URL: https://issues.apache.org/jira/browse/HADOOP-14766
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> {{hdfs put local s3a://path}} is suboptimal as it treewalks down the 
> source tree and then, sequentially, copies each file up: it opens the file as 
> a stream, copies the contents to a buffer, writes that to the dest file, and 
> repeats.
> For S3A that hurts because
> * it's doing the upload inefficiently: the file can be uploaded just by 
> handing the pathname to the AWS transfer manager
> * it is doing it sequentially, when a parallelised upload would work. 
> * as the ordering of the files to upload is a recursive treewalk, it doesn't 
> spread the upload across multiple shards. 
> Better:
> * build the list of files to upload
> * upload in parallel, picking entries from the list at random and spreading 
> across a pool of uploaders
> * upload straight from the local file (copyFromLocalFile())
> * track IO load (files created/second) to estimate risk of throttling.
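The bullet list above (enumerate first, shuffle so uploads spread across shards, then copy from a worker pool) can be sketched with JDK-only code. Here {{java.nio.file.Files.copy}} stands in for S3A's {{copyFromLocalFile()}}, and all names are illustrative, not the eventual tool's API:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.Collectors;

public class ParallelPut {
    // Build the full file list up front, shuffle it so uploads don't hit one
    // shard/key-prefix in treewalk order, then upload from a fixed pool.
    // Files.copy is a local stand-in for FileSystem.copyFromLocalFile().
    static void putAll(Path srcRoot, Path destRoot, int workers)
            throws IOException, InterruptedException {
        List<Path> files;
        try (var walk = Files.walk(srcRoot)) {
            files = walk.filter(Files::isRegularFile)
                        .collect(Collectors.toList());
        }
        Collections.shuffle(files);   // randomize order across shards

        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (Path src : files) {
            pool.submit(() -> {
                try {
                    Path dest = destRoot.resolve(srcRoot.relativize(src));
                    Files.createDirectories(dest.getParent());
                    Files.copy(src, dest, StandardCopyOption.REPLACE_EXISTING);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}
```

A real S3A version would also need the IO-load tracking from the last bullet; that is omitted here.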






[jira] [Commented] (HADOOP-14766) Add an object store high performance dfs put command

2017-08-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126332#comment-16126332
 ] 

Steve Loughran commented on HADOOP-14766:
-

PoC w tests against local & s3a, built against  branch-2.8

https://github.com/steveloughran/cloudup

> Add an object store high performance dfs put command
> 
>
> Key: HADOOP-14766
> URL: https://issues.apache.org/jira/browse/HADOOP-14766
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> {{hdfs put local s3a://path}} is suboptimal as it treewalks down the 
> source tree and then, sequentially, copies each file up: it opens the file as 
> a stream, copies the contents to a buffer, writes that to the dest file, and 
> repeats.
> For S3A that hurts because
> * it's doing the upload inefficiently: the file can be uploaded just by 
> handing the pathname to the AWS transfer manager
> * it is doing it sequentially, when a parallelised upload would work. 
> * as the ordering of the files to upload is a recursive treewalk, it doesn't 
> spread the upload across multiple shards. 
> Better:
> * build the list of files to upload
> * upload in parallel, picking entries from the list at random and spreading 
> across a pool of uploaders
> * upload straight from the local file (copyFromLocalFile())
> * track IO load (files created/second) to estimate risk of throttling.






[jira] [Updated] (HADOOP-14660) wasb: improve throughput by 34% when account limit exceeded

2017-08-14 Thread Thomas Marquardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-14660:
--
Attachment: HADOOP-14660-branch-2.patch

Attaching HADOOP-14660-branch-2.patch.

This is the branch-2 patch.  It has a dependency on the branch-2 patch attached 
to https://issues.apache.org/jira/browse/HADOOP-14662.

All tests are passing against my tmarql3 endpoint:

Tests run: 736, Failures: 0, Errors: 0, Skipped: 95

> wasb: improve throughput by 34% when account limit exceeded
> ---
>
> Key: HADOOP-14660
> URL: https://issues.apache.org/jira/browse/HADOOP-14660
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14660-001.patch, HADOOP-14660-002.patch, 
> HADOOP-14660-003.patch, HADOOP-14660-004.patch, HADOOP-14660-005.patch, 
> HADOOP-14660-006.patch, HADOOP-14660-007.patch, HADOOP-14660-008.patch, 
> HADOOP-14660-010.patch, HADOOP-14660-branch-2.patch
>
>
> Big data workloads frequently exceed the Azure Storage max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits).  
> For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps.  When the limit is exceeded, the Azure Storage service 
> fails a percentage of incoming requests, and this causes the client to 
> initiate the retry policy.  The retry policy delays requests by sleeping, but 
> the sleep duration is independent of the client throughput and account limit. 
>  This results in low throughput, due to the high number of failed requests 
> and thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput.  Tests have shown that this improves 
> throughput by ~34% when the storage account max ingress and/or egress limits 
> are exceeded.
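The client-side throttle described above can be reduced to a small model: instead of a fixed retry backoff that ignores observed behaviour, insert a pause before each request, growing it when the service reports throttling and shrinking it on success, so the client settles just under the account limit rather than thrashing. A hypothetical AIMD-style sketch (names and constants are illustrative, not the actual HADOOP-14660 implementation):

```java
public class ClientThrottle {
    private volatile long delayMs = 0;   // pause inserted before each request

    // Called after each request completes: grow the pause multiplicatively on
    // a throttling failure, shrink it additively on success (AIMD), so the
    // request rate converges near the account limit.
    synchronized void onResult(boolean throttled) {
        if (throttled) {
            delayMs = Math.max(10, Math.min(delayMs * 2, 5_000)); // cap at 5s
        } else {
            delayMs = Math.max(0, delayMs - 1);                   // decay
        }
    }

    // Called before each request: apply the current pause, if any.
    void beforeRequest() throws InterruptedException {
        long d = delayMs;
        if (d > 0) Thread.sleep(d);
    }

    long currentDelayMs() { return delayMs; }
}
```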






[jira] [Updated] (HADOOP-14662) Update azure-storage sdk to version 5.4.0

2017-08-14 Thread Thomas Marquardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-14662:
--
Attachment: HADOOP-14662-branch-2.patch

Attaching HADOOP-14662-branch-2.patch

This updates branch-2 to Azure Storage SDK 5.4.  All tests are passing against 
my tmarql3 endpoint:

Tests run: 736, Failures: 0, Errors: 0, Skipped: 95

> Update azure-storage sdk to version 5.4.0
> -
>
> Key: HADOOP-14662
> URL: https://issues.apache.org/jira/browse/HADOOP-14662
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14662-001.patch, HADOOP-14662-branch-2.patch
>
>
> Azure Storage SDK implements a new event (ErrorReceivingResponseEvent) which 
> HADOOP-14660 has a dependency on.






[jira] [Updated] (HADOOP-14738) Deprecate S3N in hadoop 3.0/2.9, target removal in Hadoop 3.1

2017-08-14 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-14738:
--
Priority: Blocker  (was: Major)

> Deprecate S3N in hadoop 3.0/2.9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Blocker
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf 
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, the transitive 
> dependencies.
> I propose that in the Hadoop 3.0 beta we warn people off using it, and link 
> to a doc page (wiki?) about how to migrate (change URLs, update config ops).






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-14 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126259#comment-16126259
 ] 

Haibo Chen commented on HADOOP-14284:
-

Thanks @stack for the insight into HBase's solution. Does HBase shade across 
the whole code base, instead of just in the client modules?
bq. Downside is having to be sure we always refer to the relocated versions in 
code.
It seems the maven enforcer plugin can ban the unwanted non-relocated 
dependencies, which is manageable provided that we are willing to reference the 
relocated versions in both server and client modules. If this does work, it 
seems to me that shading both server and client modules would be beneficial in 
the long run from a maintenance point of view, even though what downstream 
projects need is only the shaded clients. Thoughts?

Does this approach address all of your concerns, [~djp]?


> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be using the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-14 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126207#comment-16126207
 ] 

Aaron Fabbri commented on HADOOP-14749:
---

Awesome, thanks for doing this [~ste...@apache.org]

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14749-HADOOP-13345-001.patch, 
> HADOOP-14749-HADOOP-13345-002.patch, HADOOP-14749-HADOOP-13345-003.patch, 
> HADOOP-14749-HADOOP-13345-004.patch, HADOOP-14749-HADOOP-13345-005.patch, 
> HADOOP-14749-HADOOP-13345-006.patch, HADOOP-14749-HADOOP-13345-007.patch, 
> HADOOP-14749-HADOOP-13345-008.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126187#comment-16126187
 ] 

stack commented on HADOOP-14284:


If it helps, here is what was done over in hbase to make it so we could upgrade 
guava, netty, protobuf, etc., w/o damage to downstreamers or having to use 
whatever hadoop et al. happened to have on the CLASSPATH:

 * We made a little project hbase-thirdparty. Its only charge is providing 
mainline hbase with relocated popular libs such as guava and netty. The project 
comprises nought but poms (caveat some hacky patching of protobuf our project 
requires): https://github.com/apache/hbase-thirdparty. The pull and relocation 
are moved out of the mainline hbase build.
 * We changed mainline hbase to use relocated versions of popular libs. This 
was mostly a case of changing imports from, for example, 
com.google.protobuf.Message to 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Message (an unfortunate 
decision a while back saddled us w/ the extra-long relocation prefix).
 * As part of the mainline build, we run com.google.code.maven-replacer-plugin 
to rewrite third-party references in generated code to instead refer to our 
relocated versions.

Upside is we can update core libs whenever we wish. Should a lib turn 
problematic, we can add it to the relocated set. Downside is having to be sure 
we always refer to the relocated versions in code.

While the pattern is straightforward, the above project took a good while to 
implement, mostly because infra is a bit shaky and our test suite has a host of 
flakies in it; verifying that a test was failing because it was a flaky and not 
because of the relocation took a good while.

If you want to do a similar project in hadoop, I'd be game to help out.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be using the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-08-14 Thread Jeff Storck (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126164#comment-16126164
 ] 

Jeff Storck commented on HADOOP-9747:
-

[~daryn] Can you confirm that the two patches, HADOOP-9747.2.branch-2.patch and 
HADOOP-9747.2.trunk.patch (for their respective branches) are all that is 
needed to resolve the two open subtasks in this JIRA?

I've done my own testing with two principals logged in, verifying that they are 
able to relogin simultaneously using a single classloader; this fixes a core 
issue that NiFi has been trying to work around for quite a while. I have a 
suspicion that this might also help us with another issue regarding TDE, where 
the hadoop client is not able to authenticate with a KMS (no TGT found) after 
successfully logging in to the KDC from a keytab. The client seems to be 
"forgetting" that a principal was logged in from a keytab and ends up falling 
back to the OS user.

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.






[jira] [Updated] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-14 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated HADOOP-14741:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 3.0.0-beta1
  2.9.0
Target Version/s:   (was: 3.0.0-beta1)
  Status: Resolved  (was: Patch Available)

Thanks [~elgoiri] for rebasing the patch; I committed it to branch-2.

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14741-000.patch, HADOOP-14741-001.patch, 
> HADOOP-14741-002.patch, HADOOP-14741-003.patch, HADOOP-14741-004.patch, 
> HADOOP-14741-005.patch, HADOOP-14741-branch-2-001.patch, 
> HADOOP-14741-branch-2-002.patch, HADOOP-14741-branch-2.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.






[jira] [Resolved] (HADOOP-10550) HttpAuthentication.html is out of date

2017-08-14 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C resolved HADOOP-10550.
-
Resolution: Not A Problem

Closing as "Not a problem". 

> HttpAuthentication.html is out of date
> --
>
> Key: HADOOP-10550
> URL: https://issues.apache.org/jira/browse/HADOOP-10550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Zhijie Shen
>Assignee: Vrushali C
>Priority: Minor
>  Labels: newbie, site
>
> It is still saying:
> {code}
> By default Hadoop HTTP web-consoles (JobTracker, NameNode, TaskTrackers and 
> DataNodes) allow access without any form of authentication.
> {code}






[jira] [Updated] (HADOOP-14771) hadoop-common does not include hadoop-yarn-client

2017-08-14 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated HADOOP-14771:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-11656

> hadoop-common does not include hadoop-yarn-client
> -
>
> Key: HADOOP-14771
> URL: https://issues.apache.org/jira/browse/HADOOP-14771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Haibo Chen
>Priority: Critical
>
> The hadoop-client does not include hadoop-yarn-client; thus, the shaded 
> hadoop-client is incomplete. 






[jira] [Commented] (HADOOP-14771) hadoop-common does not include hadoop-yarn-client

2017-08-14 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126071#comment-16126071
 ] 

Haibo Chen commented on HADOOP-14771:
-

[~busbey] Do you think this is a blocker for 3.0 beta1? 
HADOOP-11656 (Classpath isolation for downstream clients) is a blocker, so I 
think this one should be too, but I'll let you determine.

> hadoop-common does not include hadoop-yarn-client
> -
>
> Key: HADOOP-14771
> URL: https://issues.apache.org/jira/browse/HADOOP-14771
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Haibo Chen
>Priority: Critical
>
> The hadoop-client does not include hadoop-yarn-client; thus, the shaded 
> hadoop-client is incomplete. 






[jira] [Created] (HADOOP-14771) hadoop-common does not include hadoop-yarn-client

2017-08-14 Thread Haibo Chen (JIRA)
Haibo Chen created HADOOP-14771:
---

 Summary: hadoop-common does not include hadoop-yarn-client
 Key: HADOOP-14771
 URL: https://issues.apache.org/jira/browse/HADOOP-14771
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Haibo Chen
Priority: Critical


The hadoop-client does not include hadoop-yarn-client; thus, the shaded 
hadoop-client is incomplete. 






[jira] [Commented] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations

2017-08-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126045#comment-16126045
 ] 

Hadoop QA commented on HADOOP-14732:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 15s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14732 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881519/HADOOP-14732.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1c87e2be9b25 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d8f74c3 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13030/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13030/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13030/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ProtobufRpcEngine should use Time.monotonicNow to measure durations
> ---
>
> Key: HADOOP-14732
> URL: https://issues.apache.org/jira/browse/HADOOP-14732
> Project: Hadoop Common
>  Issue Type: Sub-task
>
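For background on the change being reviewed here: wall-clock time (System.currentTimeMillis) can jump when the system clock is adjusted, so durations measured with it can come out negative or wildly wrong, while System.nanoTime is monotonic. Hadoop's Time.monotonicNow derives a monotonic millisecond clock from nanoTime for exactly this reason. A minimal plain-JDK sketch of the pattern (no Hadoop dependency; names here are illustrative):

```java
// Sketch: measuring a duration with a monotonic clock (the pattern behind
// Hadoop's Time.monotonicNow) instead of the wall clock.
public class MonotonicDuration {

    // Monotonic "now" in milliseconds, derived from nanoTime the same way
    // org.apache.hadoop.util.Time.monotonicNow does.
    static long monotonicNowMs() {
        return System.nanoTime() / 1_000_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = monotonicNowMs();
        Thread.sleep(50);                      // the operation being timed
        long elapsed = monotonicNowMs() - start;

        // Unlike currentTimeMillis(), this difference can never go negative,
        // even if the wall clock is adjusted mid-measurement.
        assert elapsed >= 0 : "monotonic elapsed time must be non-negative";
        System.out.println("elapsed ms: " + elapsed);
    }
}
```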

[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.0

2017-08-14 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125963#comment-16125963
 ] 

Ray Chiang commented on HADOOP-14649:
-

I'm hoping to get the majority of third-party library updates done by beta 1.  
It wasn't urgent at the time I filed this, but we're around a month away at 
this point.

> Update aliyun-sdk-oss version to 2.8.0
> --
>
> Key: HADOOP-14649
> URL: https://issues.apache.org/jira/browse/HADOOP-14649
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>
> Update the dependency
> com.aliyun.oss:aliyun-sdk-oss:2.4.1
> to the latest (2.8.0).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations

2017-08-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125927#comment-16125927
 ] 

Arpit Agarwal commented on HADOOP-14732:


+1 pending Jenkins.

> ProtobufRpcEngine should use Time.monotonicNow to measure durations
> ---
>
> Key: HADOOP-14732
> URL: https://issues.apache.org/jira/browse/HADOOP-14732
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HADOOP-14732.001.patch
>
>







[jira] [Updated] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations

2017-08-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-14732:
---
Status: Patch Available  (was: Open)

> ProtobufRpcEngine should use Time.monotonicNow to measure durations
> ---
>
> Key: HADOOP-14732
> URL: https://issues.apache.org/jira/browse/HADOOP-14732
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HADOOP-14732.001.patch
>
>







[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-08-14 Thread Lukas Waldmann (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125892#comment-16125892
 ] 

Lukas Waldmann commented on HADOOP-14444:
-

Hi, I have been off for a few days.
How do we move forward from here? The build doesn't report any issues, nor 
are there any comments from the community.



> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-14444.2.patch, HADOOP-14444.3.patch, 
> HADOOP-14444.4.patch, HADOOP-14444.5.patch, HADOOP-14444.6.patch, 
> HADOOP-14444.7.patch, HADOOP-14444.8.patch, HADOOP-14444.patch
>
>
> The current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is shared, simplifying 
> maintenance.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over unpooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over uncached connections.
> * Keep-alive (NOOP) messages to avoid connection drops
> * Unix-style or regexp wildcard globs - useful for listing particular 
> files across a whole directory tree
> * Reestablishing broken FTP data transfers - which can happen 
> surprisingly often
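The connection-pooling feature described above (reusing an open connection from a pool instead of creating one per command) can be sketched generically as follows. This is only an illustration of the idea, not the patch's actual classes; all names are hypothetical:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Illustrative generic pool: borrow an idle connection if one exists,
// otherwise open a new one; return it to the pool after use instead of
// closing it.
public class ConnectionPool<C> {
    private final BlockingQueue<C> idle;
    private final Supplier<C> factory;   // opens a fresh connection

    public ConnectionPool(int capacity, Supplier<C> factory) {
        this.idle = new ArrayBlockingQueue<>(capacity);
        this.factory = factory;
    }

    public C borrow() {
        C c = idle.poll();               // reuse an idle connection if any
        return (c != null) ? c : factory.get();
    }

    public void release(C c) {
        // In a real pool a connection rejected by a full queue would be
        // closed; offer() simply drops it here for brevity.
        idle.offer(c);
    }

    public static void main(String[] args) {
        int[] opened = {0};
        ConnectionPool<Object> pool =
            new ConnectionPool<>(4, () -> { opened[0]++; return new Object(); });

        Object c1 = pool.borrow();       // first borrow opens a connection
        pool.release(c1);
        Object c2 = pool.borrow();       // second borrow reuses it
        assert c1 == c2 && opened[0] == 1 : "connection reused, not reopened";
        System.out.println("connections opened: " + opened[0]);
    }
}
```

The win the reporter measured comes from exactly this: for many small commands, connection setup dominates, so reuse is an order-of-magnitude improvement.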






[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-08-14 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-13786:

Attachment: cloud-intergration-test-failure.log

Log for some NullPointerExceptions after "Start iterating the provided status".

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: cloud-intergration-test-failure.log, 
> HADOOP-13786-HADOOP-13345-001.patch, HADOOP-13786-HADOOP-13345-002.patch, 
> HADOOP-13786-HADOOP-13345-003.patch, HADOOP-13786-HADOOP-13345-004.patch, 
> HADOOP-13786-HADOOP-13345-005.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-007.patch, 
> HADOOP-13786-HADOOP-13345-009.patch, HADOOP-13786-HADOOP-13345-010.patch, 
> HADOOP-13786-HADOOP-13345-011.patch, HADOOP-13786-HADOOP-13345-012.patch, 
> HADOOP-13786-HADOOP-13345-013.patch, HADOOP-13786-HADOOP-13345-015.patch, 
> HADOOP-13786-HADOOP-13345-016.patch, HADOOP-13786-HADOOP-13345-017.patch, 
> HADOOP-13786-HADOOP-13345-018.patch, HADOOP-13786-HADOOP-13345-019.patch, 
> HADOOP-13786-HADOOP-13345-020.patch, HADOOP-13786-HADOOP-13345-021.patch, 
> HADOOP-13786-HADOOP-13345-022.patch, HADOOP-13786-HADOOP-13345-023.patch, 
> HADOOP-13786-HADOOP-13345-024.patch, HADOOP-13786-HADOOP-13345-025.patch, 
> HADOOP-13786-HADOOP-13345-026.patch, HADOOP-13786-HADOOP-13345-027.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-029.patch, HADOOP-13786-HADOOP-13345-030.patch, 
> HADOOP-13786-HADOOP-13345-031.patch, HADOOP-13786-HADOOP-13345-032.patch, 
> HADOOP-13786-HADOOP-13345-033.patch, HADOOP-13786-HADOOP-13345-035.patch, 
> objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-08-14 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125872#comment-16125872
 ] 

Ewan Higgs commented on HADOOP-13786:
-

Hi, I have been testing this locally with the [Hortonworks cloud-integration 
project|https://github.com/hortonworks-spark/cloud-integration] and an 
S3-compatible backend that has strong consistency. Because it has strong 
consistency, one would expect the {{NullMetadataStore}} to work. However, I'm 
getting some errors.

To reproduce, I build Hadoop as follows:

{code}
mvn install -DskipShade -Dmaven.javadoc.skip=true -Pdist,parallel-tests 
-DtestsThreadCount=8 -Djava.awt.headless=true -Ddeclared.hadoop.version=2.11 
-DskipTests
{code}

I ran into some NPEs:

{code}
S3AFileStatus{path=s3a://s3guard-test/cloud-integration/DELAY_LISTING_ME/S3ACommitDataframeSuite/dataframe-committer/committer-default-orc/orc/part-0-8b8b323b-c747-4d72-b331-b6de1c1f8387-c000.snappy.orc;
 isDirectory=false; length=2995; replication=1; blocksize=1048576; 
modification_time=1502715661000; access_time=0; owner=ehiggs; group=ehiggs; 
permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
isErasureCoded=false} isEmptyDirectory=FALSE
2017-08-14 15:01:02,297 [ScalaTest-main-running-S3ACommitDataframeSuite] DEBUG 
s3a.S3AFileSystem (Listing.java:sourceHasNext(374)) - Start iterating the 
provided status.
2017-08-14 15:01:02,304 [ScalaTest-main-running-S3ACommitDataframeSuite] ERROR 
commit.S3ACommitDataframeSuite (Logging.scala:logError(91)) - After 237,747,296 
nS: java.lang.NullPointerException
java.lang.NullPointerException
at 
org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:87)
at 
org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$org$apache$spark$sql$execution$datasources$InMemoryFileIndex$$listLeafFiles$3.apply(InMemoryFileIndex.scala:299)
at 
org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$org$apache$spark$sql$execution$datasources$InMemoryFileIndex$$listLeafFiles$3.apply(InMemoryFileIndex.scala:281)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
at 
org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.org$apache$spark$sql$execution$datasources$InMemoryFileIndex$$listLeafFiles(InMemoryFileIndex.scala:281)
at 
org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$org$apache$spark$sql$execution$datasources$InMemoryFileIndex$$bulkListLeafFiles$1.apply(InMemoryFileIndex.scala:172)
at 
org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$org$apache$spark$sql$execution$datasources$InMemoryFileIndex$$bulkListLeafFiles$1.apply(InMemoryFileIndex.scala:171)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
{code}

I'll attach the trace.

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, 
> HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, 
> HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, 
> HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, 
> HADOOP-13786-HADOOP-13345-027.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-029.patch, 
> 

[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125813#comment-16125813
 ] 

Íñigo Goiri commented on HADOOP-14741:
--

Checked the failed unit tests in branch-2 and they don't seem related.

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14741-000.patch, HADOOP-14741-001.patch, 
> HADOOP-14741-002.patch, HADOOP-14741-003.patch, HADOOP-14741-004.patch, 
> HADOOP-14741-005.patch, HADOOP-14741-branch-2-001.patch, 
> HADOOP-14741-branch-2-002.patch, HADOOP-14741-branch-2.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-14 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125799#comment-16125799
 ] 

Ajay Kumar commented on HADOOP-14729:
-

[~ste...@apache.org], [~boky01], [~ajisakaa]: could you review the patch? The 
test failures are unrelated.

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, 
> HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, 
> HADOOP-14729.006.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.






[jira] [Updated] (HADOOP-14770) S3A http connection in s3a driver not reuse in Spark application

2017-08-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14770:

Priority: Minor  (was: Major)

> S3A http connection in s3a driver not reuse in Spark application
> 
>
> Key: HADOOP-14770
> URL: https://issues.apache.org/jira/browse/HADOOP-14770
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Yonger
>Assignee: Yonger
>Priority: Minor
>
> I print out connection stats every 2 s when running a Spark application 
> against S3-compatible storage:
> {code}
> ESTAB  0  0 :::10.0.2.36:6
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44454
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  159724 0 :::10.0.2.36:44436
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:8
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44338
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44438
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44414
> :::10.0.2.254:80 
> ESTAB  0  480   :::10.0.2.36:44450
> :::10.0.2.254:80  timer:(on,170ms,0)
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44390
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44326
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44452
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44394
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:4
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44456
> :::10.0.2.254:80 
> ==
> ESTAB  0  0 :::10.0.2.36:44508
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44476
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44524
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44500
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44504
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44512
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44506
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44464
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44518
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44510
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44526
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44472
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44466
> :::10.0.2.254:80 
> {code}
> The connections above and below the "=" separator changed all the time, but 
> this hasn't been seen in MR applications. 






[jira] [Commented] (HADOOP-14560) Make HttpServer2 accept queue size configurable

2017-08-14 Thread Alexander Krasheninnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125662#comment-16125662
 ] 

Alexander Krasheninnikov commented on HADOOP-14560:
---

[~jzhuge], actually I have no idea how to move this issue forward - what 
should be done from my side to get the code applied to trunk? :(

> Make HttpServer2 accept queue size configurable
> ---
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Alexander Krasheninnikov
>Assignee: Alexander Krasheninnikov
>  Labels: webhdfs
>
> While operating WebHDFS at Badoo, we've run into the issue that the 
> hardcoded socket backlog size (128) is not enough for our purposes.
> When performing ~600 concurrent requests, clients receive "Connection 
> refused" errors.
> We are proposing a patch to make this backlog size configurable.
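For context, the accept queue in question is the listen backlog passed when binding the server socket: connections that arrive faster than accept() drains them queue up there, and overflow is refused by the kernel. A minimal plain-JDK sketch of making it configurable (this is not the HttpServer2/Jetty code itself, and the property name below is hypothetical; in Jetty the corresponding knob is ServerConnector.setAcceptQueueSize):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

// Sketch: making the accept-queue (listen backlog) size configurable
// instead of hardcoding it.
public class ConfigurableBacklog {

    public static ServerSocket bind(int port, int backlog) throws IOException {
        // ServerSocket(port, backlog, addr): backlog is the maximum number
        // of pending (not yet accepted) connections the kernel will hold.
        return new ServerSocket(port, backlog, InetAddress.getLoopbackAddress());
    }

    public static void main(String[] args) throws IOException {
        // Read the backlog from configuration rather than a hardcoded 128
        // (the property name is illustrative only).
        int backlog = Integer.getInteger("http.server.accept.queue", 1024);
        try (ServerSocket ss = bind(0, backlog)) {   // port 0: any free port
            assert ss.isBound();
            System.out.println("listening on port " + ss.getLocalPort()
                + " with backlog " + backlog);
        }
    }
}
```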






[jira] [Updated] (HADOOP-14769) WASB: delete recursive should not fail if a file is deleted

2017-08-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14769:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-14552

> WASB: delete recursive should not fail if a file is deleted
> ---
>
> Key: HADOOP-14769
> URL: https://issues.apache.org/jira/browse/HADOOP-14769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14769-001.patch
>
>
> FileSystem.delete(Path path) and delete(Path path, boolean recursive) return 
> false if the path does not exist.  The WASB implementation of recursive 
> delete currently fails if one of the entries is deleted by an external agent 
> while a recursive delete is in progress.  For example, if you try to delete 
> all of the files in a directory, which can be a very long process, and one of 
> the files contained within is deleted by an external agent, the recursive 
> directory delete operation will fail if it tries to delete that file and 
> discovers that it does not exist.  This is not desirable.  A recursive 
> directory delete operation should succeed if the directory initially exists 
> and when the operation completes, the directory and all of its entries do not 
> exist.
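The fix described above amounts to treating "entry already gone" as success during the recursive walk. A minimal sketch of that pattern using plain java.nio (not the actual WASB implementation):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Sketch: a recursive delete that succeeds even if another agent deletes
// an entry concurrently; NoSuchFileException is treated as "already done"
// rather than as a failure.
public class TolerantDelete {

    public static void deleteRecursive(Path root) throws IOException {
        try (Stream<Path> entries = Files.walk(root)) {
            entries.sorted(Comparator.reverseOrder())   // children before parents
                   .forEach(p -> {
                try {
                    Files.deleteIfExists(p);            // returns false, not an
                } catch (NoSuchFileException gone) {    // error, if already gone
                    // deleted by an external agent mid-walk: fine
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("tolerant-delete");
        Path file = Files.createFile(dir.resolve("a.txt"));
        Files.delete(file);          // simulate an external agent racing us
        deleteRecursive(dir);        // must still succeed
        assert !Files.exists(dir) : "directory fully removed";
        System.out.println("deleted: " + !Files.exists(dir));
    }
}
```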






[jira] [Updated] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2017-08-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14768:

Parent Issue: HADOOP-14552  (was: HADOOP-11694)

> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: fs, secure, wasb
>
> When authorization is enabled in WASB filesystem, there is a need for 
> stickybit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required since any user can 
> delete another user's file because the parent has WRITE permission for all 
> users.
> The purpose of this jira is to implement sticky bit equivalent for 'delete' 
> call when authorization is enabled.
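The rule being implemented is the classic POSIX sticky-bit semantics (as on /tmp): in a writable directory with the sticky bit set, a user may delete an entry only if they own the entry or the parent directory. A pure-logic sketch, with hypothetical names rather than the WASB patch's API:

```java
// Sketch of POSIX sticky-bit delete semantics: in a sticky, writable
// parent directory, deletion additionally requires owning the entry or
// owning the parent.
public class StickyBitCheck {

    public static boolean mayDelete(String user,
                                    String entryOwner,
                                    String parentOwner,
                                    boolean parentSticky,
                                    boolean userHasWriteOnParent) {
        if (!userHasWriteOnParent) {
            return false;                    // no write access at all
        }
        if (!parentSticky) {
            return true;                     // plain WRITE is enough
        }
        // Sticky bit set: ownership of the entry or the parent is required.
        return user.equals(entryOwner) || user.equals(parentOwner);
    }

    public static void main(String[] args) {
        // alice may delete her own file in a shared sticky directory...
        assert mayDelete("alice", "alice", "root", true, true);
        // ...but not bob's file, even though the directory is world-writable.
        assert !mayDelete("alice", "bob", "root", true, true);
        // Without the sticky bit, WRITE on the parent is sufficient.
        assert mayDelete("alice", "bob", "root", false, true);
        System.out.println("sticky-bit checks passed");
    }
}
```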






[jira] [Updated] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2017-08-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14768:

Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-11694

> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: fs, secure, wasb
>
> When authorization is enabled in WASB filesystem, there is a need for 
> stickybit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required since any user can 
> delete another user's file because the parent has WRITE permission for all 
> users.
> The purpose of this jira is to implement sticky bit equivalent for 'delete' 
> call when authorization is enabled.






[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.0

2017-08-14 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125549#comment-16125549
 ] 

Kai Zheng commented on HADOOP-14649:


Hi [~rchiang],

Thanks for the ping. I had an offline sync with some folks and hope this can 
get picked up soon. Wondering how urgent this is on your side.

> Update aliyun-sdk-oss version to 2.8.0
> --
>
> Key: HADOOP-14649
> URL: https://issues.apache.org/jira/browse/HADOOP-14649
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>
> Update the dependency
> com.aliyun.oss:aliyun-sdk-oss:2.4.1
> to the latest (2.8.0).






[jira] [Commented] (HADOOP-14770) S3A http connection in s3a driver not reuse in Spark application

2017-08-14 Thread Yonger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125536#comment-16125536
 ] 

Yonger commented on HADOOP-14770:
-

Thanks Steve, the application is running on Hadoop 2.7.3 against the ORC file 
format. I will upgrade to Hadoop 2.8.0 to verify.

> S3A http connection in s3a driver not reuse in Spark application
> 
>
> Key: HADOOP-14770
> URL: https://issues.apache.org/jira/browse/HADOOP-14770
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Yonger
>Assignee: Yonger
>
> I print out connection stats every 2 s when running a Spark application 
> against S3-compatible storage:
> {code}
> ESTAB  0  0 :::10.0.2.36:6
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44454
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  159724 0 :::10.0.2.36:44436
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:8
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44338
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44438
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44414
> :::10.0.2.254:80 
> ESTAB  0  480   :::10.0.2.36:44450
> :::10.0.2.254:80  timer:(on,170ms,0)
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44390
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44326
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44452
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44394
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:4
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44456
> :::10.0.2.254:80 
> ==
> ESTAB  0  0 :::10.0.2.36:44508
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44476
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44524
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44500
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44504
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44512
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44506
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44464
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44518
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44510
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44526
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44472
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44466
> :::10.0.2.254:80 
> {code}
> The connections above and below the "=" separator changed all the time, but 
> this hasn't been seen in MR applications. 






[jira] [Updated] (HADOOP-14770) S3A http connection in s3a driver not reuse in Spark application

2017-08-14 Thread Yonger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonger updated HADOOP-14770:

Affects Version/s: 2.7.3

> S3A http connection in s3a driver not reuse in Spark application
> 
>
> Key: HADOOP-14770
> URL: https://issues.apache.org/jira/browse/HADOOP-14770
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Yonger
>Assignee: Yonger
>
> I print out connection stats every 2 s when running a Spark application 
> against S3-compatible storage:
> {code}
> ESTAB  0  0 :::10.0.2.36:6
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44454
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  159724 0 :::10.0.2.36:44436
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:8
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44338
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44438
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44414
> :::10.0.2.254:80 
> ESTAB  0  480   :::10.0.2.36:44450
> :::10.0.2.254:80  timer:(on,170ms,0)
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44390
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44326
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44452
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44394
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:4
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44456
> :::10.0.2.254:80 
> ==
> ESTAB  0  0 :::10.0.2.36:44508
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44476
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44524
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44500
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44504
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44512
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44506
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44464
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44518
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44510
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44526
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44472
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44466
> :::10.0.2.254:80 
> {code}
> The connections above and below the "=" separator changed all the time, but 
> this hasn't been seen in MR applications. 






[jira] [Updated] (HADOOP-14770) S3A http connection in s3a driver not reuse in Spark application

2017-08-14 Thread Yonger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonger updated HADOOP-14770:

Component/s: fs/s3

> S3A http connection in s3a driver not reuse in Spark application
> 
>
> Key: HADOOP-14770
> URL: https://issues.apache.org/jira/browse/HADOOP-14770
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Yonger
>Assignee: Yonger
>
> I print out connection stats every 2 s when running a Spark application 
> against S3-compatible storage:
> {code}
> ESTAB  0  0 :::10.0.2.36:6
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44454
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  159724 0 :::10.0.2.36:44436
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:8
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44338
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44438
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44414
> :::10.0.2.254:80 
> ESTAB  0  480   :::10.0.2.36:44450
> :::10.0.2.254:80  timer:(on,170ms,0)
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44390
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44326
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44452
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44394
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:4
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44456
> :::10.0.2.254:80 
> ==
> ESTAB  0  0 :::10.0.2.36:44508
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44476
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44524
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44500
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44504
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44512
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44506
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44464
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44518
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44510
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44526
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44472
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44466
> :::10.0.2.254:80 
> {code}
> The connections above and below the "==" separator keep changing over time, but 
> this is not seen in the MR application. 






[jira] [Updated] (HADOOP-14770) S3A http connection in s3a driver not reuse in Spark application

2017-08-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14770:

Description: 
I print out connection stats every 2 s when running a Spark application against 
s3-compatible storage:
{code}
ESTAB  0  0 :::10.0.2.36:6
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44454
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44374
:::10.0.2.254:80 
ESTAB  159724 0 :::10.0.2.36:44436
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:8
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44338
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44438
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44414
:::10.0.2.254:80 
ESTAB  0  480   :::10.0.2.36:44450
:::10.0.2.254:80  timer:(on,170ms,0)
ESTAB  0  0 :::10.0.2.36:2
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44390
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44326
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44452
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44394
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:4
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44456
:::10.0.2.254:80 
==
ESTAB  0  0 :::10.0.2.36:44508
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44476
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44524
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44374
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44500
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44504
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44512
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44506
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44464
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44518
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44510
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:2
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44526
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44472
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44466
:::10.0.2.254:80 
{code}
the connection above and below the "==" separator keeps changing over time, but 
this is not seen in the MR application. 

  was:
I print out connection stats every 2 s when running a Spark application against 
s3-compatible storage:

ESTAB  0  0 :::10.0.2.36:6
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44454
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44374
:::10.0.2.254:80 
ESTAB  159724 0 :::10.0.2.36:44436
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:8
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44338
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44438
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44414
:::10.0.2.254:80 
ESTAB  0  480   :::10.0.2.36:44450
:::10.0.2.254:80  timer:(on,170ms,0)
ESTAB  0  0 :::10.0.2.36:2
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44390

[jira] [Commented] (HADOOP-14770) S3A http connection in s3a driver not reuse in Spark application

2017-08-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125526#comment-16125526
 ] 

Steve Loughran commented on HADOOP-14770:
-

Also, remember to set the component to fs/s3. We need this categorisation to know 
whether to begin looking at the problem and where (i.e. if it's just Hadoop 2.7, 
the fix is to upgrade; if it's 2.8+, then it's a real issue).

> S3A http connection in s3a driver not reuse in Spark application
> 
>
> Key: HADOOP-14770
> URL: https://issues.apache.org/jira/browse/HADOOP-14770
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yonger
>Assignee: Yonger
>
> I print out connection stats every 2 s when running a Spark application against 
> s3-compatible storage:
> ESTAB  0  0 :::10.0.2.36:6
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44454
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  159724 0 :::10.0.2.36:44436
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:8
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44338
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44438
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44414
> :::10.0.2.254:80 
> ESTAB  0  480   :::10.0.2.36:44450
> :::10.0.2.254:80  timer:(on,170ms,0)
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44390
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44326
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44452
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44394
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:4
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44456
> :::10.0.2.254:80 
> ==
> ESTAB  0  0 :::10.0.2.36:44508
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44476
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44524
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44500
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44504
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44512
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44506
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44464
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44518
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44510
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44526
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44472
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44466
> :::10.0.2.254:80 
> The connections above and below the "==" separator keep changing over time, but 
> this is not seen in the MR application. 






[jira] [Commented] (HADOOP-14770) S3A http connection in s3a driver not reuse in Spark application

2017-08-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125524#comment-16125524
 ] 

Steve Loughran commented on HADOOP-14770:
-

# Add the Hadoop version to the JIRA, thanks.
# What is the file format? Simple or columnar (ORC, Parquet)?
# It looks like the connection is being closed on every seek, which is a sign of 
HADOOP-13203 not engaging (random IO), or, on a sequential read, of forward reads 
aborting/reopening rather than skipping forward.

Make sure you are using the Hadoop 2.8.x JARS, then:

For columnar data, enable random IO:

{code}
spark.hadoop.fs.s3a.experimental.fadvise=random
{code}

For sequential data with big forward skips, increase the readahead range:

{code}
spark.hadoop.fs.s3a.readahead.range = 768K
{code}

If this fixes it, close this as a duplicate of HADOOP-13203.
If it doesn't, you can print both the input stream and the s3a FS, as 
their toString() ops print all their stats.

Oh, one more possible cause: split calculation isn't getting it right. Look at 
your s3a block size, and at the format itself.
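As a rough illustration, the two tuning options above can be handed straight to spark-submit as {{--conf}} pairs. A minimal sketch (the property names are as quoted in this comment; expressing 768K as a byte count is an assumption about the expected unit):

```python
# Sketch: build spark-submit "--conf" arguments for the two S3A tuning
# options discussed above. Assumes Hadoop 2.8.x property names as quoted
# in this comment; 768K rendered as bytes is an assumption.
s3a_tuning = {
    # Columnar formats (ORC, Parquet): random IO, per HADOOP-13203
    "spark.hadoop.fs.s3a.experimental.fadvise": "random",
    # Sequential reads with big forward skips: widen the readahead window
    "spark.hadoop.fs.s3a.readahead.range": str(768 * 1024),  # 768K
}
conf_args = [arg for key, value in s3a_tuning.items()
             for arg in ("--conf", f"{key}={value}")]
```

The resulting flags would then be appended to the spark-submit command line.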



> S3A http connection in s3a driver not reuse in Spark application
> 
>
> Key: HADOOP-14770
> URL: https://issues.apache.org/jira/browse/HADOOP-14770
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yonger
>Assignee: Yonger
>
> I print out connection stats every 2 s when running a Spark application against 
> s3-compatible storage:
> ESTAB  0  0 :::10.0.2.36:6
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44454
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  159724 0 :::10.0.2.36:44436
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:8
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44338
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44438
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44414
> :::10.0.2.254:80 
> ESTAB  0  480   :::10.0.2.36:44450
> :::10.0.2.254:80  timer:(on,170ms,0)
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44390
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44326
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44452
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44394
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:4
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44456
> :::10.0.2.254:80 
> ==
> ESTAB  0  0 :::10.0.2.36:44508
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44476
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44524
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44500
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44504
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44512
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44506
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44464
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44518
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44510
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44526
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44472
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44466
> 

[jira] [Commented] (HADOOP-14769) WASB: delete recursive should not fail if a file is deleted

2017-08-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125299#comment-16125299
 ] 

Hadoop QA commented on HADOOP-14769:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14769 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881686/HADOOP-14769-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b256c5a483a9 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d8f74c3 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13028/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13028/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> WASB: delete recursive should not fail if a file is deleted
> ---
>
> Key: HADOOP-14769
> URL: https://issues.apache.org/jira/browse/HADOOP-14769
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14769-001.patch
>
>
> FileSystem.delete(Path path) and delete(Path path, boolean recursive) return 
> false if the path does not exist.  The WASB implementation of recursive 
> delete currently fails if one of the entries is deleted by an external agent 

[jira] [Assigned] (HADOOP-14770) S3A http connection in s3a driver not reuse in Spark application

2017-08-14 Thread Yonger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonger reassigned HADOOP-14770:
---

Assignee: Yonger

> S3A http connection in s3a driver not reuse in Spark application
> 
>
> Key: HADOOP-14770
> URL: https://issues.apache.org/jira/browse/HADOOP-14770
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yonger
>Assignee: Yonger
>
> I print out connection stats every 2 s when running a Spark application against 
> s3-compatible storage:
> ESTAB  0  0 :::10.0.2.36:6
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44454
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  159724 0 :::10.0.2.36:44436
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:8
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44338
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44438
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44414
> :::10.0.2.254:80 
> ESTAB  0  480   :::10.0.2.36:44450
> :::10.0.2.254:80  timer:(on,170ms,0)
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44390
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44326
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44452
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44394
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:4
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44456
> :::10.0.2.254:80 
> ==
> ESTAB  0  0 :::10.0.2.36:44508
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44476
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44524
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44500
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44504
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44512
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44506
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44464
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44518
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44510
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44526
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44472
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44466
> :::10.0.2.254:80 
> The connections above and below the "==" separator keep changing over time, but 
> this is not seen in the MR application. 






[jira] [Created] (HADOOP-14770) S3A http connection in s3a driver not reuse in Spark application

2017-08-14 Thread Yonger (JIRA)
Yonger created HADOOP-14770:
---

 Summary: S3A http connection in s3a driver not reuse in Spark 
application
 Key: HADOOP-14770
 URL: https://issues.apache.org/jira/browse/HADOOP-14770
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yonger


I print out connection stats every 2 s when running a Spark application against 
s3-compatible storage:

ESTAB  0  0 :::10.0.2.36:6
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44454
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44374
:::10.0.2.254:80 
ESTAB  159724 0 :::10.0.2.36:44436
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:8
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44338
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44438
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44414
:::10.0.2.254:80 
ESTAB  0  480   :::10.0.2.36:44450
:::10.0.2.254:80  timer:(on,170ms,0)
ESTAB  0  0 :::10.0.2.36:2
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44390
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44326
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44452
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44394
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:4
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44456
:::10.0.2.254:80 
==
ESTAB  0  0 :::10.0.2.36:44508
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44476
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44524
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44374
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44500
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44504
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44512
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44506
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44464
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44518
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44510
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:2
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44526
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44472
:::10.0.2.254:80 
ESTAB  0  0 :::10.0.2.36:44466
:::10.0.2.254:80 

The connections above and below the "==" separator keep changing over time, but 
this is not seen in the MR application. 






[jira] [Updated] (HADOOP-14769) WASB: delete recursive should not fail if a file is deleted

2017-08-14 Thread Thomas Marquardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-14769:
--
Status: Patch Available  (was: Open)

> WASB: delete recursive should not fail if a file is deleted
> ---
>
> Key: HADOOP-14769
> URL: https://issues.apache.org/jira/browse/HADOOP-14769
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14769-001.patch
>
>
> FileSystem.delete(Path path) and delete(Path path, boolean recursive) return 
> false if the path does not exist.  The WASB implementation of recursive 
> delete currently fails if one of the entries is deleted by an external agent 
> while a recursive delete is in progress.  For example, if you try to delete 
> all of the files in a directory, which can be a very long process, and one of 
> the files contained within is deleted by an external agent, the recursive 
> directory delete operation will fail if it tries to delete that file and 
> discovers that it does not exist.  This is not desirable.  A recursive 
> directory delete operation should succeed if the directory initially exists 
> and when the operation completes, the directory and all of its entries do not 
> exist.






[jira] [Updated] (HADOOP-14769) WASB: delete recursive should not fail if a file is deleted

2017-08-14 Thread Thomas Marquardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-14769:
--
Attachment: HADOOP-14769-001.patch

Attaching HADOOP-14769-001.patch.

This fixes recursive directory delete so that it will return false as expected 
if the directory does not exist, but it will return true if the directory is 
successfully deleted even if one of the path entries existed initially but did 
not exist when an attempt was made to delete it.  Two new test cases have been 
added.  One simulates a file being deleted by an external agent and the other a 
child directory.  One of the existing test cases was removed, as it verified 
that recursive directory delete failed if a child directory was deleted by an 
external agent--this test actually had a bug in it and the child directory did 
exist.

All tests passing against my tmarql3 account:

 Tests run: 775, Failures: 0, Errors: 0, Skipped: 155


> WASB: delete recursive should not fail if a file is deleted
> ---
>
> Key: HADOOP-14769
> URL: https://issues.apache.org/jira/browse/HADOOP-14769
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14769-001.patch
>
>
> FileSystem.delete(Path path) and delete(Path path, boolean recursive) return 
> false if the path does not exist.  The WASB implementation of recursive 
> delete currently fails if one of the entries is deleted by an external agent 
> while a recursive delete is in progress.  For example, if you try to delete 
> all of the files in a directory, which can be a very long process, and one of 
> the files contained within is deleted by an external agent, the recursive 
> directory delete operation will fail if it tries to delete that file and 
> discovers that it does not exist.  This is not desirable.  A recursive 
> directory delete operation should succeed if the directory initially exists 
> and when the operation completes, the directory and all of its entries do not 
> exist.


