[jira] [Assigned] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee reassigned HADOOP-15985:
--

Assignee: lqjacklee

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15985-002.patch, HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.
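
For illustration, a minimal sketch of the check being proposed (a hypothetical helper class, not the attached patch):
{code:java}
import sun.misc.Unsafe;

public final class ReferenceSizeSketch {
  public static void main(String[] args) {
    // ARRAY_OBJECT_INDEX_SCALE is the distance in bytes between two
    // consecutive elements of an Object[], i.e. the width of one
    // reference: 4 with CompressedOops on (even on a 64-bit JVM),
    // 8 without. A 32-/64-bit platform check misses the first case,
    // so capacity is computed for 8-byte references while the set
    // actually stores 4-byte ones.
    int refSize = Unsafe.ARRAY_OBJECT_INDEX_SCALE;
    System.out.println("reference size = " + refSize + " bytes");
  }
}
{code}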






[jira] [Commented] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712217#comment-16712217
 ] 

lqjacklee commented on HADOOP-15985:


[~kihwal] Thanks for the reminder, I submitted the wrong patch. [^HADOOP-15985-002.patch]

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
> Attachments: HADOOP-15985-002.patch, HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.






[jira] [Updated] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15985:
---
Attachment: HADOOP-15985-002.patch

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
> Attachments: HADOOP-15985-002.patch, HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.






[jira] [Commented] (HADOOP-15973) Configuration: Included properties are not cached if resource is a stream

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712169#comment-16712169
 ] 

Hadoop QA commented on HADOOP-15973:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 251 unchanged - 0 fixed = 253 total (was 251) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
5s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15973 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950910/HADOOP-15973.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a12912e04241 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 019836b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15616/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15616/testReport/ |
| Max. process+thread count | 1418 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15616/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |

[jira] [Updated] (HADOOP-15973) Configuration: Included properties are not cached if resource is a stream

2018-12-06 Thread Eric Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HADOOP-15973:

Attachment: HADOOP-15973.001.patch

> Configuration: Included properties are not cached if resource is a stream
> -
>
> Key: HADOOP-15973
> URL: https://issues.apache.org/jira/browse/HADOOP-15973
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Critical
> Attachments: HADOOP-15973.001.patch
>
>
> If a configuration resource is a bufferedinputstream and the resource has an 
> included xml file, the properties from the included file are read and stored 
> in the properties of the configuration, but they are not stored in the 
> resource cache. So, if a later resource is added to the config and the 
> properties are recalculated from the first resource, the included properties 
> are lost.
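
A minimal repro sketch of the described behaviour (assumptions, not the patch: the file names are placeholders, core-a.xml pulls "included.key" in via xi:include):
{code:java}
import java.io.BufferedInputStream;
import java.io.FileInputStream;

import org.apache.hadoop.conf.Configuration;

public class IncludeCacheRepro {
  public static void main(String[] args) throws Exception {
    // Assumption: core-a.xml contains <xi:include href="included.xml"/>
    // and included.xml defines "included.key".
    Configuration conf = new Configuration(false);
    conf.addResource(new BufferedInputStream(
        new FileInputStream("core-a.xml")));
    System.out.println(conf.get("included.key"));   // value is present

    // Adding a second resource forces the properties to be rebuilt from
    // the cached copy of the first (stream) resource; per this bug the
    // cached copy is missing the included properties.
    conf.addResource(new BufferedInputStream(
        new FileInputStream("core-b.xml")));
    System.out.println(conf.get("included.key"));   // null before the fix
  }
}
{code}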






[jira] [Updated] (HADOOP-15973) Configuration: Included properties are not cached if resource is a stream

2018-12-06 Thread Eric Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HADOOP-15973:

Status: Patch Available  (was: Open)

Submitted version 001 of the patch.

> Configuration: Included properties are not cached if resource is a stream
> -
>
> Key: HADOOP-15973
> URL: https://issues.apache.org/jira/browse/HADOOP-15973
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Critical
> Attachments: HADOOP-15973.001.patch
>
>
> If a configuration resource is a bufferedinputstream and the resource has an 
> included xml file, the properties from the included file are read and stored 
> in the properties of the configuration, but they are not stored in the 
> resource cache. So, if a later resource is added to the config and the 
> properties are recalculated from the first resource, the included properties 
> are lost.






[jira] [Updated] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-12-06 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-15819:

Attachment: HADOOP-15819.001.patch

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, HADOOP-15819.001.patch, 
> HADOOP-15819.001.patch, S3ACloseEnforcedFileSystem.java, 
> S3ACloseEnforcedFileSystem.java, closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests run serially - no test runs on top of 
> another - so we should not see them failing like this. The issue could be in 
> how we handle org.apache.hadoop.fs.FileSystem#CACHE - the tests should use 
> the same S3AFileSystem, so if test A uses a FileSystem and closes it in 
> teardown, then test B will get the same FileSystem object from the cache and 
> try to use it, but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test indicates an implementation issue in the 
> runtime code or a test implementation problem.
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs when an error 
> occurs. I'll attach this modified java file for reference. See the following 
> example of its output while running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AClosedFS.setup(ITestS3AClosedFS.java:40)
>   

[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-12-06 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711966#comment-16711966
 ] 

Adam Antal commented on HADOOP-15819:
-

I'm sorry, I uploaded a bad patch (re-uploaded the right one: 
[^HADOOP-15819.001.patch]).

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, HADOOP-15819.001.patch, 
> HADOOP-15819.001.patch, S3ACloseEnforcedFileSystem.java, 
> S3ACloseEnforcedFileSystem.java, closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests run serially - no test runs on top of 
> another - so we should not see them failing like this. The issue could be in 
> how we handle org.apache.hadoop.fs.FileSystem#CACHE - the tests should use 
> the same S3AFileSystem, so if test A uses a FileSystem and closes it in 
> teardown, then test B will get the same FileSystem object from the cache and 
> try to use it, but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test indicates an implementation issue in the 
> runtime code or a test implementation problem.
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs when an error 
> occurs. I'll attach this modified java file for reference. See the following 
> example of its output while running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   

[jira] [Assigned] (HADOOP-14425) Add more s3guard metrics

2018-12-06 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-14425:
---

Assignee: Gabor Bota

> Add more s3guard metrics
> 
>
> Key: HADOOP-14425
> URL: https://issues.apache.org/jira/browse/HADOOP-14425
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Ai Deng
>Assignee: Gabor Bota
>Priority: Major
>
> The metrics suggested to add:
> Status:
> S3GUARD_METADATASTORE_ENABLED
> S3GUARD_METADATASTORE_IS_AUTHORITATIVE
> Operations:
> S3GUARD_METADATASTORE_INITIALIZATION
> S3GUARD_METADATASTORE_DELETE_PATH
> S3GUARD_METADATASTORE_DELETE_PATH_LATENCY
> S3GUARD_METADATASTORE_DELETE_SUBTREE_PATH
> S3GUARD_METADATASTORE_GET_PATH
> S3GUARD_METADATASTORE_GET_PATH_LATENCY
> S3GUARD_METADATASTORE_GET_CHILDREN_PATH
> S3GUARD_METADATASTORE_GET_CHILDREN_PATH_LATENCY
> S3GUARD_METADATASTORE_MOVE_PATH
> S3GUARD_METADATASTORE_PUT_PATH
> S3GUARD_METADATASTORE_PUT_PATH_LATENCY
> S3GUARD_METADATASTORE_CLOSE
> S3GUARD_METADATASTORE_DESTROY
> From S3Guard:
> S3GUARD_METADATASTORE_MERGE_DIRECTORY
> For the failures:
> S3GUARD_METADATASTORE_DELETE_FAILURE
> S3GUARD_METADATASTORE_GET_FAILURE
> S3GUARD_METADATASTORE_PUT_FAILURE
> Etc:
> S3GUARD_METADATASTORE_PUT_RETRY_TIMES
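
As a sketch of how counters like the ones above might be declared with Hadoop's metrics2 library - the class name, metric names, and registration here are illustrative, not the eventual implementation:
{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Hypothetical source class; the real patch may hang these off
// S3AInstrumentation instead.
@Metrics(about = "S3Guard MetadataStore metrics", context = "fs")
public class S3GuardMetricsSketch {
  @Metric("MetadataStore get_path calls")
  MutableCounterLong getPath;
  @Metric("MetadataStore get_path latency")
  MutableRate getPathLatency;
  @Metric("MetadataStore get_path failures")
  MutableCounterLong getFailure;

  public static S3GuardMetricsSketch create() {
    return DefaultMetricsSystem.instance()
        .register("S3GuardMetrics", "S3Guard metrics",
            new S3GuardMetricsSketch());
  }

  /** Record one get_path call with its latency and outcome. */
  public void recordGetPath(long latencyMs, boolean failed) {
    getPath.incr();
    getPathLatency.add(latencyMs);
    if (failed) {
      getFailure.incr();
    }
  }
}
{code}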






[jira] [Commented] (HADOOP-14109) improvements to S3GuardTool destroy command

2018-12-06 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711973#comment-16711973
 ] 

Gabor Bota commented on HADOOP-14109:
-

I'll start work on this after HADOOP-15428 and HADOOP-15845 get resolved.

> improvements to S3GuardTool destroy command
> ---
>
> Key: HADOOP-14109
> URL: https://issues.apache.org/jira/browse/HADOOP-14109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> The S3GuardTool destroy operation initializes DynamoDB, and in doing so has 
> some issues:
> # if the version of the table is incompatible, init fails, so the table 
> isn't deletable
> # if the system is configured to create the table on demand, then whenever 
> destroy is called for a table that doesn't exist, it gets created and then 
> destroyed.
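
A sketch of the intended destroy flow under those constraints - not the actual S3GuardTool code, and the init path that skips the version check does not exist yet:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore;

import static org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_TABLE_CREATE_KEY;

public class DestroySketch {
  /** Destroy a table without ever creating one as a side effect. */
  public static void destroyTable(Configuration conf) throws Exception {
    // Issue #2: never create the table just to destroy it.
    conf.setBoolean(S3GUARD_DDB_TABLE_CREATE_KEY, false);
    DynamoDBMetadataStore store = new DynamoDBMetadataStore();
    // Issue #1: initialize() currently also runs the version-marker
    // check, so an incompatible table cannot even be opened for
    // deletion; destroy would need an init path that skips that check.
    store.initialize(conf);
    store.destroy();
  }
}
{code}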






[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-12-06 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711957#comment-16711957
 ] 

Gabor Bota commented on HADOOP-15819:
-

Thanks [~adam.antal], I'll test it tomorrow, upstream and downstream.

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, HADOOP-15819.001.patch, 
> S3ACloseEnforcedFileSystem.java, S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests run serially - no test runs on top of 
> another - so we should not see them failing like this. The issue could be in 
> how we handle org.apache.hadoop.fs.FileSystem#CACHE - the tests should use 
> the same S3AFileSystem, so if test A uses a FileSystem and closes it in 
> teardown, then test B will get the same FileSystem object from the cache and 
> try to use it, but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test indicates an implementation issue in the 
> runtime code or a test implementation problem.
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs when an error 
> occurs. I'll attach this modified java file for reference. See the following 
> example of its output while running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> 

[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-12-06 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711917#comment-16711917
 ] 

Adam Antal commented on HADOOP-15819:
-

I'd also like to add that it worked on my disabled-cache version of trunk (I 
had previously disabled the cache for {{AbstractITCommitProtocol}}) - that 
change was added to the patch as well. It actually works without disabling the 
cache (so only the bindFileSystem call needs to be removed).

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, HADOOP-15819.001.patch, 
> S3ACloseEnforcedFileSystem.java, S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests run serially - no test runs on top of 
> another - so we should not see them failing like this. The issue could be in 
> how we handle org.apache.hadoop.fs.FileSystem#CACHE - the tests should use 
> the same S3AFileSystem, so if test A uses a FileSystem and closes it in 
> teardown, then test B will get the same FileSystem object from the cache and 
> try to use it, but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test indicates an implementation issue in the 
> runtime code or a test implementation problem.
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs when an error 
> occurs. I'll attach this modified java file for reference. See the following 
> example of its output while running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> 

[jira] [Commented] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711881#comment-16711881
 ] 

Hadoop QA commented on HADOOP-15845:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
36s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15845 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950872/HADOOP-15845.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 838c6c9a252c 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c03024a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15615/testReport/ |
| Max. process+thread count | 319 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15615/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> 

[jira] [Updated] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-12-06 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15845:

Status: Patch Available  (was: In Progress)

Tested against eu-west-1. No unknown issues 
(testWithMiniCluster(org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster) is 
still failing).

> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15845.001.patch
>
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values.
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of 
> a bucket or an explicit ddb table URI.
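
A sketch of the proposed argument guard (the class, method name, and exit code are illustrative, not S3GuardTool's actual structure):
{code:java}
import java.util.List;

import org.apache.hadoop.util.ExitUtil;

public class DestroyArgCheck {
  static final int INVALID_ARGUMENT = 1;   // placeholder exit code

  /** Refuse to fall back to fs.s3a.s3guard.ddb.table/region defaults. */
  static void requireExplicitTarget(List<String> args) {
    if (args.isEmpty()) {
      throw new ExitUtil.ExitException(INVALID_ARGUMENT,
          "s3guard destroy requires an s3a:// bucket"
              + " or an explicit DynamoDB table URI");
    }
  }
}
{code}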






[jira] [Updated] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-12-06 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15845:

Attachment: HADOOP-15845.001.patch

> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15845.001.patch
>
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values.
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of 
> a bucket or an explicit ddb table URI.






[jira] [Updated] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-12-06 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-15819:

Attachment: HADOOP-15819.001.patch

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, HADOOP-15819.001.patch, 
> S3ACloseEnforcedFileSystem.java, S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests run serially - no test runs on top of 
> another - so we should not see them failing like this. The issue could be in 
> how we handle org.apache.hadoop.fs.FileSystem#CACHE - the tests should use 
> the same S3AFileSystem, so if test A uses a FileSystem and closes it in 
> teardown, then test B will get the same FileSystem object from the cache and 
> try to use it, but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test indicates an implementation issue in the 
> runtime code or a test implementation problem.
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs when an error 
> occurs. I'll attach this modified java file for reference. See the following 
> example of its output while running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AClosedFS.setup(ITestS3AClosedFS.java:40)
>   at 

[jira] [Comment Edited] (HADOOP-15978) Add Netty support to the RPC server

2018-12-06 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711785#comment-16711785
 ] 

Erik Krogen edited comment on HADOOP-15978 at 12/6/18 5:52 PM:
---

Hey Daryn, it's great to see this work being done. Do you have any performance 
numbers you can share for this Netty implementation vs. the existing RPC 
implementation? It would be nice to see the comparison both without encryption 
and with Java SSL vs. Netty TLS.

One other question. Assuming TLS is not enabled, IIUC, this will be fully 
backwards compatible, allowing an old client to talk to a new Netty RPC 
server, correct? Netty on both ends will be required only if TLS is used?


was (Author: xkrogen):
Hey Daryn, it's great to see this work being done. Do you have any performance 
numbers you can share for this Netty implementation vs. the existing RPC 
implementation? It would be nice to see the comparison both without encryption 
and with Java SSL vs. Netty TLS.

> Add Netty support to the RPC server
> ---
>
> Key: HADOOP-15978
> URL: https://issues.apache.org/jira/browse/HADOOP-15978
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-15978.patch
>
>
> Adding Netty will allow later using a native TLS transport layer with much 
> better performance than that offered by Java's SSLEngine.
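
For context, a minimal Netty server sketch - not the attached patch - showing where a native-TLS handler would slot into the pipeline; the port, key material paths, and commented-out RPC handlers are placeholders:
{code:java}
import java.io.File;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

public class NettyRpcServerSketch {
  public static void main(String[] args) throws Exception {
    // OPENSSL selects the native (netty-tcnative) engine rather than
    // Java's SSLEngine - the performance win this issue is after.
    SslContext ssl = SslContextBuilder
        .forServer(new File("server.crt"), new File("server.key"))
        .sslProvider(SslProvider.OPENSSL)
        .build();

    NioEventLoopGroup boss = new NioEventLoopGroup(1);
    NioEventLoopGroup workers = new NioEventLoopGroup();
    try {
      ServerBootstrap b = new ServerBootstrap()
          .group(boss, workers)
          .channel(NioServerSocketChannel.class)
          .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
              // TLS sits first in the pipeline; the RPC decoder and
              // handler (stand-ins here) would follow it.
              ch.pipeline().addLast(ssl.newHandler(ch.alloc()));
              // ch.pipeline().addLast(new RpcDecoder(), new RpcHandler());
            }
          });
      b.bind(8020).sync().channel().closeFuture().sync();
    } finally {
      boss.shutdownGracefully();
      workers.shutdownGracefully();
    }
  }
}
{code}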






[jira] [Commented] (HADOOP-15978) Add Netty support to the RPC server

2018-12-06 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711785#comment-16711785
 ] 

Erik Krogen commented on HADOOP-15978:
--

Hey Daryn, it's great to see this work being done. Do you have any performance 
numbers you can share for this Netty implementation vs. the existing RPC 
implementation? It would be nice to see the comparison both without encryption 
and with Java SSL vs. Netty TLS.

> Add Netty support to the RPC server
> ---
>
> Key: HADOOP-15978
> URL: https://issues.apache.org/jira/browse/HADOOP-15978
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-15978.patch
>
>
> Adding Netty will allow later using a native TLS transport layer with much 
> better performance than that offered by Java's SSLEngine.






[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-12-06 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711783#comment-16711783
 ] 

Adam Antal commented on HADOOP-15819:
-

Hi everyone!

I uploaded patch v1 with the fix.
 Remarks/explanation:
 * After a thorough investigation of the cache I came to the conclusion that 
it works as expected - I did not find any abnormal behaviour. The closed S3AFS 
must have gotten into the cache somehow, so I tried further logging, and it 
showed that it got there through the following {{FileSystem}} method:
{code:java}
@VisibleForTesting
static void addFileSystemForTesting(URI uri, Configuration conf,
    FileSystem fs) throws IOException {
  CACHE.map.put(new Cache.Key(uri, conf), fs);
}
{code}

 * It turned out that among the hadoop-aws integration tests there's only one 
usage of this function, and the stack trace is the following:
{code:java}
CACHE.map.put(new Cache.Key(uri, conf), fs) ->
  FileSystem.addFileSystemForTesting(uri, conf, fs) ->
FileSystemTestHelper.addFileSystemForTesting(uri, conf, wrapperFS) ->
  AbstractITCommitProtocol.bindFileSystem(FileSystem fs, Path path, 
Configuration conf) ->
AbstractITCommitProtocol.setup()
{code}

 * (Note that there are other usages in the test suite, but for integration 
tests this is the only one.)
 * This function directly injects the FS into the FSCache (it can be in the 
cache already, but under another key), but as I wrote in my previous comment, 
caching is explicitly disabled in {{AbstractITCommitProtocol}} since 
{{AbstractITCommitProtocol.createConfiguration()}} calls 
{{disableFilesystemCaching(conf)}}.
 * So this FS is in the cache, but it is never returned by {{FileSystem.get()}} 
since the cache is disabled. Instead a new S3AFileSystem object is returned on 
each call.
 * During teardown the FS is closed and it gets removed from the cache, *but 
the only removed key-value pair is the injected one.* If there was another 
key-value pair with the same FS as its value, it is kept in the cache. So after 
this, a key paired with a closed FS can remain in the FSCache.
 * The tests run in the same JVM, so the next test case that accesses a FS 
with the {{s3a}} scheme without a disabled cache gets this closed FS and runs 
into the known error.
 * Since the cache is disabled there's no need to explicitly use that 
{{bindFileSystem}} method. My easy fix is to remove it (a toy illustration of 
the hazard follows this list).
 * This function is rarely used in tests, which may explain why no one has 
encountered errors like this before. To sum up, the cache works as intended, 
but the misuse was the root of the issue. Maybe we should consider deleting 
this function, since it causes unforeseen effects like this (note that it has 
only one usage outside hadoop-aws).
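
To make the two-keys-one-value point concrete, a toy sketch (a plain map with made-up keys, not the actual FileSystem cache code):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class CacheLeakSketch {
  public static void main(String[] args) {
    Map<String, Object> cache = new HashMap<>();
    Object fs = new Object();       // stands in for an S3AFileSystem

    cache.put("injected-key", fs);  // addFileSystemForTesting()
    cache.put("other-key", fs);     // the same instance, different key

    // Teardown/close() removes the instance under one key only...
    cache.remove("injected-key");

    // ...so the duplicate entry survives, and the next lookup under
    // "other-key" hands out a filesystem that has already been closed.
    System.out.println(cache.get("other-key") != null);  // true
  }
}
{code}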

Please verify my observations and please also check that the fix indeed 
works. I also note that {{ITestS3ACommitterFactory}} is still failing; there 
was another error which got hidden by the "closed fs" error.

Regards,
 Adam

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, S3ACloseEnforcedFileSystem.java, 
> S3ACloseEnforcedFileSystem.java, closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> 

[jira] [Updated] (HADOOP-15980) Enable TLS in RPC client/server

2018-12-06 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-15980:
-
Component/s: security
 ipc

> Enable TLS in RPC client/server
> ---
>
> Key: HADOOP-15980
> URL: https://issues.apache.org/jira/browse/HADOOP-15980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
>
> Once the RPC client and server can be configured to use Netty, the TLS engine 
> can be added to the channel pipeline.  The server should allow QoS-like 
> functionality to determine if TLS is mandatory or optional for a client.
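
For reference, Netty already ships a building block for the optional case: {{io.netty.handler.ssl.OptionalSslHandler}} sniffs the first bytes of a connection and installs real TLS only when the client starts a handshake. A sketch of how the mandatory/optional policy could be wired (the initializer class and flag are illustrative):
{code:java}
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.ssl.OptionalSslHandler;
import io.netty.handler.ssl.SslContext;

class TlsPolicyInitializer extends ChannelInitializer<SocketChannel> {
  private final SslContext sslContext;
  private final boolean tlsMandatory;   // the QoS-like policy decision

  TlsPolicyInitializer(SslContext sslContext, boolean tlsMandatory) {
    this.sslContext = sslContext;
    this.tlsMandatory = tlsMandatory;
  }

  @Override
  protected void initChannel(SocketChannel ch) {
    if (tlsMandatory) {
      // Always speak TLS: plaintext clients fail the handshake.
      ch.pipeline().addLast(sslContext.newHandler(ch.alloc()));
    } else {
      // Optional: upgrade on a detected TLS record, else stay plaintext.
      ch.pipeline().addLast(new OptionalSslHandler(sslContext));
    }
    // RPC codec handlers would be appended after the TLS layer.
  }
}
{code}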






[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711682#comment-16711682
 ] 

Hadoop QA commented on HADOOP-15920:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 7s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
17s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
23s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 23s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 
0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
24s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-15920 |
| GITHUB PR | https://github.com/apache/hadoop/pull/433 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 199703f73a5a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / 8c70728 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711661#comment-16711661
 ] 

Kihwal Lee commented on HADOOP-15985:
-

[~Jack-Lee], I am asking about the content of the attached patch.

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
> Attachments: HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15982) Support configurable trash location

2018-12-06 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711653#comment-16711653
 ] 

Eric Payne commented on HADOOP-15982:
-

This JIRA is part of the wider discussion being done as part of HADOOP-7310.

> Support configurable trash location
> ---
>
> Key: HADOOP-15982
> URL: https://issues.apache.org/jira/browse/HADOOP-15982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: George Huang
>Assignee: George Huang
>Priority: Minor
>
> Currently some customers have user accounts that are functional IDs (FIDs) used 
> to manage applications and application data under the path /data/FID. These 
> FIDs also get a home directory under the /user path. The user home directories 
> are limited by a 60 GB space quota. When these FIDs delete data, the customer's 
> deletion policy places it in the /user//.Trash location, which then runs over 
> quota.
> For now they are increasing quotas for these functional users, but given 
> growing applications they would like the .Trash location to be configurable, 
> e.g. something like /trash/{userid} that is owned by the user.
> What should the configurable path look like to make this happen? For example, 
> should it be configurable per user or per cluster, etc.?
> Here is current behavior:
> fs.TrashPolicyDefault: Moved: 'hdfs://ns1/user/hdfs/test/1.txt to trash at: 
> hdfs://ns1/user/hdfs/.Trash/Current/user/hdfs/test/1.txt
> for path under encryption zone:
> fs.TrashPolicyDefault: Moved: 'hdfs://ns1/scale/2.txt' to trash at 
> hdfs://ns1/scale/.Trash/hdfs/Current/scale/2.txt
>  
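For context, a small sketch of how the current trash root is resolved; TrashPolicyDefault consults FileSystem.getTrashRoot(), and the file path below is illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowTrashRoot {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Outside an encryption zone this prints /user/<current user>/.Trash;
    // inside a zone it prints <zone>/.Trash/<current user>.
    Path file = new Path("/data/FID/example.txt");  // illustrative path
    System.out.println(fs.getTrashRoot(file));
  }
}
{code}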



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15870 started by lqjacklee.
--
> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.
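For illustration, a tiny self-contained model of the behaviour; the field names mirror S3AInputStream's lazy-seek design, but this is a sketch, not the class itself:

{code:java}
public class RemainingModel {
  private final long contentLength = 100;
  private long pos = 0;          // position of the underlying stream
  private long nextReadPos = 0;  // position requested by seek()

  void seek(long target) { nextReadPos = target; }  // lazy seek: pos unchanged

  long remainingBuggy() { return contentLength - pos; }          // ignores seek
  long remainingFixed() { return contentLength - nextReadPos; }  // tracks seek

  public static void main(String[] args) {
    RemainingModel m = new RemainingModel();
    m.seek(40);
    System.out.println(m.remainingBuggy());  // 100: unchanged after seek (the bug)
    System.out.println(m.remainingFixed());  // 60: what callers expect
  }
}
{code}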



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711527#comment-16711527
 ] 

lqjacklee commented on HADOOP-15985:


[~kihwal] Instead of checking whether the platform is 32- or 64-bit, it should 
check whether Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
> Attachments: HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15870 stopped by lqjacklee.
--
> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15870:
---
Attachment: HADOOP-15870-003.patch

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711522#comment-16711522
 ] 

lqjacklee commented on HADOOP-15870:


 [^HADOOP-15870-003.patch] 

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711524#comment-16711524
 ] 

lqjacklee commented on HADOOP-15920:


 [^HADOOP-15870-003.patch] 

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15920:
---
Attachment: HADOOP-15870-003.patch

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711509#comment-16711509
 ] 

Kihwal Lee commented on HADOOP-15985:
-

[~Jack-Lee], what is this patch for?

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
> Attachments: HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711490#comment-16711490
 ] 

lqjacklee commented on HADOOP-15870:


 [^HADOOP-15870-3.patch] 

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread lqjacklee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15985:
---
Attachment: HADOOP-15985-1.patch

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
> Attachments: HADOOP-15985-1.patch
>
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711479#comment-16711479
 ] 

lqjacklee commented on HADOOP-15985:


 [^HADOOP-15985-1.patch] 

> LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops
> --
>
> Key: HADOOP-15985
> URL: https://issues.apache.org/jira/browse/HADOOP-15985
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
>
> In this line: 
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
>  instead of checking if the platform is 32- or 64-bit, it should check if 
> Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.
> The result is that on 64-bit platforms, when Compressed Oops are on, 
> LightWeightGSet is two times denser than it is configured to be.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-14604) Fix the bug of aggregation log have one replication in HDFS Federation environment

2018-12-06 Thread Elvis (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elvis updated HADOOP-14604:
---
Comment: was deleted

(was: This patch may have some problems. Before applying this patch, the time 
between LOCALIZING and LOCALIZED was very short, but after applying it, the 
time between LOCALIZING and LOCALIZED becomes more than 10s.)

> Fix the bug of aggregation log have one replication in HDFS Federation 
> environment
> --
>
> Key: HADOOP-14604
> URL: https://issues.apache.org/jira/browse/HADOOP-14604
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 2.6.0
> Environment: Hadoop 2.6.0 with HDFS Federation
>Reporter: dingguotao
>Assignee: Doris Gu
>Priority: Major
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14604.patch
>
>
> With HDFS Federation configured, the aggregation logs have a default 
> replication factor of 1, rather than the value of dfs.replication in the 
> hdfs-site.xml configuration file.
> Before: 
> {code:java}
> [op@NM-304-SA5212M4-BIGDATA-640 ~]$ hdfs dfs -ls 
> /yarn/apps/op/logs/application_1498544648312_0001
> Found 3 items
> -rw-r-   1 op hadoop   8270 2017-06-27 14:25 
> /yarn/apps/op/logs/application_1498544648312_0001/NM-304-SA5212M4-BIGDATA-645_8041
> -rw-r-   1 op hadoop  54469 2017-06-27 14:25 
> /yarn/apps/op/logs/application_1498544648312_0001/NM-304-SA5212M4-BIGDATA-646_8041
> -rw-r-   1 op hadoop   6537 2017-06-27 14:25 
> /yarn/apps/op/logs/application_1498544648312_0001/NM-304-SA5212M4-BIGDATA-672_8041
> {code}
> The aggregation logs have only 1 replica by default, even though 
> dfs.replication in the hdfs-site.xml configuration file is set to 3.
> After applying this patch:
> {code:java}
> [op@NM-304-SA5212M4-BIGDATA-640 ~]$ hdfs dfs -ls 
> /yarn/apps/op/logs/application_1498635214020_0001
> Found 3 items
> -rw-r-   3 op hadoop   5729 2017-06-28 15:34 
> /yarn/apps/op/logs/application_1498635214020_0001/NM-304-SA5212M4-BIGDATA-645_8041
> -rw-r-   3 op hadoop  38439 2017-06-28 15:34 
> /yarn/apps/op/logs/application_1498635214020_0001/NM-304-SA5212M4-BIGDATA-646_8041
> -rw-r-   3 op hadoop   8270 2017-06-28 15:34 
> /yarn/apps/op/logs/application_1498635214020_0001/NM-304-SA5212M4-BIGDATA-671_8041
> {code}
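A hedged sketch of the general shape of such a fix (not the attached patch): create the aggregated log file with an explicit replication factor read from the configuration, instead of relying on the target filesystem's default. Paths and the class name are illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class CreateWithExplicitReplication {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path logFile = new Path("/yarn/apps/op/logs/app_0001/node_8041");
    FileSystem fs = logFile.getFileSystem(conf);

    // Read dfs.replication ourselves and pass it explicitly, so the file
    // keeps the intended replication even when created through viewfs.
    short replication = (short) conf.getInt("dfs.replication", 3);
    FSDataOutputStream out = fs.create(logFile, FsPermission.getFileDefault(),
        true, conf.getInt("io.file.buffer.size", 4096), replication,
        fs.getDefaultBlockSize(logFile), null);
    out.close();
  }
}
{code}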



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15985) LightWeightGSet.computeCapacity() doesn't correctly account for CompressedOops

2018-12-06 Thread Roman Leventov (JIRA)
Roman Leventov created HADOOP-15985:
---

 Summary: LightWeightGSet.computeCapacity() doesn't correctly 
account for CompressedOops
 Key: HADOOP-15985
 URL: https://issues.apache.org/jira/browse/HADOOP-15985
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Roman Leventov


In this line: 
[https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java#L391],
 instead of checking if the platform is 32- or 64-bit, it should check if 
Unsafe.ARRAY_OBJECT_INDEX_SCALE is 4 or 8.

The result is that on 64-bit platforms, when Compressed Oops are on, 
LightWeightGSet is two times denser than it is configured to be.
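A minimal sketch of the proposed check; reading the public constant needs no Unsafe instance, and the class name is illustrative:

{code:java}
import sun.misc.Unsafe;

public class RefSizeCheck {
  public static void main(String[] args) {
    // ARRAY_OBJECT_INDEX_SCALE is the byte stride between Object[] elements:
    // 4 when references are compressed (32-bit JVM, or 64-bit with
    // -XX:+UseCompressedOops), 8 on a 64-bit JVM with compressed oops off.
    boolean wideReferences = Unsafe.ARRAY_OBJECT_INDEX_SCALE == 8;
    System.out.println("reference size = " + Unsafe.ARRAY_OBJECT_INDEX_SCALE
        + " bytes; 64-bit-sized references: " + wideReferences);
  }
}
{code}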



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711418#comment-16711418
 ] 

lqjacklee commented on HADOOP-15984:


[~ajisakaa] Could you provide the source code link? I cannot locate the final 
version. Thanks.

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15256) Create docker images for latest stable hadoop3 build

2018-12-06 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-15256:
--
  Resolution: Fixed
Target Version/s:   (was: 3.3.0)
  Status: Resolved  (was: Patch Available)

I can confirm that it's working with the latest patches from HDDS-524. The 
apache/hadoop:3 image is used in the smoke tests of Hadoop Ozone. Closing 
this issue.

> Create docker images for latest stable hadoop3 build
> 
>
> Key: HADOOP-15256
> URL: https://issues.apache.org/jira/browse/HADOOP-15256
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15256-docker-hadoop-3.001.patch, 
> HADOOP-15256-docker-hadoop-3.003.patch
>
>
> Similar to the hadoop2 image we can provide a developer hadoop image which 
> contains the latest hadoop from the binary release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15965) Upgrade to ADLS SDK which has major performance improvement for ingress/egress

2018-12-06 Thread Vishwajeet Dusane (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711159#comment-16711159
 ] 

Vishwajeet Dusane commented on HADOOP-15965:


[~ste...@apache.org] - Will raise another patch with a minor version change in 
the SDK release, and will remove the additional log line printed by wildfly. A 
similar issue was addressed in ABFS: HADOOP-15851.

> Upgrade to ADLS SDK which has major performance improvement for ingress/egress
> --
>
> Key: HADOOP-15965
> URL: https://issues.apache.org/jira/browse/HADOOP-15965
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Attachments: HADOOP-15965-001.patch
>
>
> Upgrade the ADLS SDK to version 2.3.2, which has major improvements:
>  # Add special handling for 404 errors when requesting tokens from MSI.
>  # Fix liststatus response parsing when the filestatus object contains an 
> array in one field.
>  # Use the wildfly openssl native binding with Java. This is a workaround for 
> [https://bugs.openjdk.java.net/browse/JDK-8046943] and gives a 2X performance 
> boost over HTTPS. Similar to HADOOP-15669.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14000) s3guard metadata stores to support millons of children

2018-12-06 Thread lqjacklee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711130#comment-16711130
 ] 

lqjacklee commented on HADOOP-14000:


[~ste...@apache.org] I want to change the method declaration so that it 
returns:


{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

class DirListingMetadataHolder {

    // Paged iterator over the directory's children.
    private RemoteIterator iterator;

    // The listed directory.
    private Path path;

    // Whether this listing is a complete, authoritative view.
    private boolean isAuthoritative;
}
{code}

Would this change cause any blocking issue?

> s3guard metadata stores to support millons of children
> --
>
> Key: HADOOP-14000
> URL: https://issues.apache.org/jira/browse/HADOOP-14000
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Major
>
> S3 repos can have millions of child entries.
> Currently {{DirListingMetaData}} can't handle directories that big, and 
> {{MetadataStore.listChildren(Path path)}} won't be able to, whether for 
> listing, deleting or renaming.
> We will need a paged response from the listing operation, something which can 
> be iterated over.
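For illustration, a paged listing could be exposed as a RemoteIterator so callers stream children without materialising them all; the element type and method below are assumptions, not an agreed API:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.RemoteIterator;

public class DrainListing {
  // Pages are fetched lazily inside hasNext()/next(), so millions of
  // children never need to be held in memory at once.
  static void drain(RemoteIterator<FileStatus> children) throws IOException {
    while (children.hasNext()) {
      System.out.println(children.next().getPath());
    }
  }
}
{code}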



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org