[jira] [Created] (HADOOP-17161) Make ipc.Client.stop() sleep configurable

2020-07-27 Thread Ramesh Kumar Thangarajan (Jira)
Ramesh Kumar Thangarajan created HADOOP-17161:
-

 Summary: Make ipc.Client.stop() sleep configurable
 Key: HADOOP-17161
 URL: https://issues.apache.org/jira/browse/HADOOP-17161
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ramesh Kumar Thangarajan


After identifying that HADOOP-16126 might cause issues in a few workloads, the 
ipc.Client.stop() sleep interval was identified as something that should be 
configurable, to better suit multiple workloads.
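The change being asked for can be sketched as follows. This is a minimal illustration, not the actual patch: the key name "ipc.client.stop.sleep.ms", the default value, and the SimpleConfig stand-in for org.apache.hadoop.conf.Configuration are all assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: replace a hard-coded shutdown sleep with a configurable one,
// in the style of Hadoop's Configuration.getInt(key, default).
public class ClientStopSleep {
    // Hypothetical key/default; the real patch may use different names/values.
    static final String STOP_SLEEP_KEY = "ipc.client.stop.sleep.ms";
    static final int STOP_SLEEP_DEFAULT = 1000;

    // Minimal stand-in for org.apache.hadoop.conf.Configuration.
    static class SimpleConfig {
        private final Map<String, String> props = new HashMap<>();
        void set(String key, String value) { props.put(key, value); }
        int getInt(String key, int defaultValue) {
            String v = props.get(key);
            return v == null ? defaultValue : Integer.parseInt(v);
        }
    }

    // The sleep duration is now read from configuration instead of a constant.
    static int stopSleepMs(SimpleConfig conf) {
        return conf.getInt(STOP_SLEEP_KEY, STOP_SLEEP_DEFAULT);
    }
}
```

A workload that needs a shorter (or longer) shutdown wait would simply set the key in its configuration, leaving the default unchanged for everyone else.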



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17160) ITestAbfsInputStreamStatistics#testReadAheadCounters timing out always

2020-07-27 Thread Bilahari T H (Jira)
Bilahari T H created HADOOP-17160:
-

 Summary: ITestAbfsInputStreamStatistics#testReadAheadCounters 
timing out always
 Key: HADOOP-17160
 URL: https://issues.apache.org/jira/browse/HADOOP-17160
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Bilahari T H


The test ITestAbfsInputStreamStatistics#testReadAheadCounters is timing out on 
every run.






[jira] [Comment Edited] (HADOOP-17159) Ability for forceful relogin in UserGroupInformation class

2020-07-27 Thread Sandeep Guggilam (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17166070#comment-17166070
 ] 

Sandeep Guggilam edited comment on HADOOP-17159 at 7/28/20, 2:16 AM:
-

[~abhishek.chouhan] [~liuml07] [~apurtell]


was (Author: sandeep.guggilam):
[~abhishek.chouhan] [~liuml07]

> Ability for forceful relogin in UserGroupInformation class
> --
>
> Key: HADOOP-17159
> URL: https://issues.apache.org/jira/browse/HADOOP-17159
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sandeep Guggilam
>Priority: Major
>
> Currently we have a relogin() method in UGI which attempts to login if there 
> is no login attempted in the last 10 minutes or configured amount of time
> We should also have provision for doing a forceful relogin irrespective of 
> the time window that the client can choose to use it if needed . Consider the 
> below scenario:
>  # SASL Server is reimaged and new keytabs are fetched with refreshing the 
> password
>  # SASL client connection to the server would fail when it tries with the 
> cached service ticket
>  # We should try to logout to clear the service tickets in cache and then try 
> to login back in such scenarios. But since the current relogin() doesn't 
> guarantee a login, it could cause an issue
>  # A forceful relogin in this case would help after logout
>  






[jira] [Commented] (HADOOP-17159) Ability for forceful relogin in UserGroupInformation class

2020-07-27 Thread Sandeep Guggilam (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17166070#comment-17166070
 ] 

Sandeep Guggilam commented on HADOOP-17159:
---

[~abhishek.chouhan] [~liuml07]

> Ability for forceful relogin in UserGroupInformation class
> --
>
> Key: HADOOP-17159
> URL: https://issues.apache.org/jira/browse/HADOOP-17159
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sandeep Guggilam
>Priority: Major
>
> Currently we have a relogin() method in UGI which attempts to login if there 
> is no login attempted in the last 10 minutes or configured amount of time
> We should also have provision for doing a forceful relogin irrespective of 
> the time window that the client can choose to use it if needed . Consider the 
> below scenario:
>  # SASL Server is reimaged and new keytabs are fetched with refreshing the 
> password
>  # SASL client connection to the server would fail when it tries with the 
> cached service ticket
>  # We should try to logout to clear the service tickets in cache and then try 
> to login back in such scenarios. But since the current relogin() doesn't 
> guarantee a login, it could cause an issue
>  # A forceful relogin in this case would help after logout
>  






[jira] [Created] (HADOOP-17159) Ability for forceful relogin in UserGroupInformation class

2020-07-27 Thread Sandeep Guggilam (Jira)
Sandeep Guggilam created HADOOP-17159:
-

 Summary: Ability for forceful relogin in UserGroupInformation class
 Key: HADOOP-17159
 URL: https://issues.apache.org/jira/browse/HADOOP-17159
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sandeep Guggilam


Currently we have a relogin() method in UGI which attempts a login only if no 
login was attempted in the last 10 minutes (or a configured amount of time).

We should also provide for a forceful relogin, irrespective of that time 
window, which the client can choose to use if needed. Consider the scenario 
below:
 # The SASL server is reimaged and new keytabs are fetched, refreshing the 
password
 # A SASL client connection to the server would fail when it tries with the 
cached service ticket
 # In such scenarios we should log out to clear the service tickets in the 
cache and then log back in. But since the current relogin() doesn't guarantee 
a login, this could cause an issue
 # A forceful relogin after the logout would help in this case
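The logout-then-login sequence being proposed can be sketched as below. KerberosSession is a hypothetical stand-in for UserGroupInformation; at the time of this issue the real class does not expose such a forceful-relogin method, and the window check here only approximates the real hasSufficientTimeElapsed() logic.

```java
// Sketch of the gap described in the issue: relogin() is a no-op inside the
// minimum-interval window, while a forceful relogin always logs out (clearing
// cached service tickets) and logs back in.
public class ForceRelogin {
    static class KerberosSession {
        boolean loggedIn = true;
        long lastLoginMs = System.currentTimeMillis();

        void logout() { loggedIn = false; }  // clears cached service tickets
        void login()  { loggedIn = true; lastLoginMs = System.currentTimeMillis(); }

        // Normal relogin: skipped if a login happened within the window.
        void relogin(long minIntervalMs) {
            if (System.currentTimeMillis() - lastLoginMs < minIntervalMs) {
                return; // no-op: this is the case the issue is concerned about
            }
            logout();
            login();
        }

        // Forceful relogin: ignores the time window entirely.
        void forceRelogin() {
            logout();
            login();
        }
    }
}
```

After a server reimage, calling forceRelogin() guarantees the stale tickets are dropped even if a relogin() was attempted moments earlier.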

 






[jira] [Commented] (HADOOP-17150) ABFS: Test failure: Disable ITestAzureBlobFileSystemDelegationSAS tests

2020-07-27 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17165913#comment-17165913
 ] 

Steve Loughran commented on HADOOP-17150:
-

usual jira housework: set affected version and component. thanks

> ABFS: Test failure: Disable ITestAzureBlobFileSystemDelegationSAS tests
> ---
>
> Key: HADOOP-17150
> URL: https://issues.apache.org/jira/browse/HADOOP-17150
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive
>
> ITestAzureBlobFileSystemDelegationSAS has tests for the SAS feature in 
> preview stage. The tests should not run until the API version reflects the 
> one in preview as when run against production clusters they will fail.






[jira] [Comment Edited] (HADOOP-16854) ABFS: Fix for OutofMemoryException from AbfsOutputStream

2020-07-27 Thread Karthik Amarnath (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17127506#comment-17127506
 ] 

Karthik Amarnath edited comment on HADOOP-16854 at 7/27/20, 4:26 PM:
-

Do see the issue with AbfsOutputStream running out of memory while trying to 
DistCp data to ADLS Gen2 in the Azure EU region.
{code:java}
2020-06-07 04:39:58,878 ERROR [main] org.apache.gobblin.runtime.fork.Fork-0: 
Fork 0 of task task_FileDistcpAzurePush_1591504534904_2 failed to process data 
records. Set throwable in holder 
org.apache.gobblin.runtime.ForkThrowableHolder@1ec36c52
java.io.IOException: com.github.rholder.retry.RetryException: Retrying failed 
to complete successfully after 1 attempts.
at 
org.apache.gobblin.writer.RetryWriter.callWithRetry(RetryWriter.java:144)
at 
org.apache.gobblin.writer.RetryWriter.writeEnvelope(RetryWriter.java:124)
at org.apache.gobblin.runtime.fork.Fork.processRecord(Fork.java:513)
at 
org.apache.gobblin.runtime.fork.AsynchronousFork.processRecord(AsynchronousFork.java:103)
at 
org.apache.gobblin.runtime.fork.AsynchronousFork.processRecords(AsynchronousFork.java:86)
at org.apache.gobblin.runtime.fork.Fork.run(Fork.java:251)
at 
org.apache.gobblin.util.executors.MDCPropagatingRunnable.run(MDCPropagatingRunnable.java:39)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.github.rholder.retry.RetryException: Retrying failed to complete 
successfully after 1 attempts.
at com.github.rholder.retry.Retryer.call(Retryer.java:174)
at 
com.github.rholder.retry.Retryer$RetryerCallable.call(Retryer.java:318)
at 
org.apache.gobblin.writer.RetryWriter.callWithRetry(RetryWriter.java:142)
... 11 more
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at 
org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:96)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.writeCurrentBufferToService(AbfsOutputStream.java:285)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.flushInternal(AbfsOutputStream.java:268)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.close(AbfsOutputStream.java:247)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at 
org.apache.gobblin.data.management.copy.writer.FileAwareInputStreamDataWriter.writeImpl(FileAwareInputStreamDataWriter.java:283)
at 
org.apache.gobblin.data.management.copy.writer.FileAwareInputStreamDataWriter.writeImpl(FileAwareInputStreamDataWriter.java:186)
at 
org.apache.gobblin.data.management.copy.writer.FileAwareInputStreamDataWriter.writeImpl(FileAwareInputStreamDataWriter.java:83)
at 
org.apache.gobblin.instrumented.writer.InstrumentedDataWriterBase.write(InstrumentedDataWriterBase.java:158)
at 
org.apache.gobblin.instrumented.writer.InstrumentedDataWriter.write(InstrumentedDataWriter.java:38)
at 
org.apache.gobblin.writer.DataWriter.writeEnvelope(DataWriter.java:106)
at 
org.apache.gobblin.writer.CloseOnFlushWriterWrapper.writeEnvelope(CloseOnFlushWriterWrapper.java:97)
at 
org.apache.gobblin.instrumented.writer.InstrumentedDataWriterDecorator.writeEnvelope(InstrumentedDataWriterDecorator.java:76)
at 
org.apache.gobblin.writer.PartitionedDataWriter.writeEnvelope(PartitionedDataWriter.java:176)
at org.apache.gobblin.writer.RetryWriter$2.call(RetryWriter.java:119)
at org.apache.gobblin.writer.RetryWriter$2.call(RetryWriter.java:116)
at 
com.github.rholder.retry.AttemptTimeLimiters$NoAttemptTimeLimit.call(AttemptTimeLimiters.java:78)
at com.github.rholder.retry.Retryer.call(Retryer.java:160)
at 
com.github.rholder.retry.Retryer$RetryerCallable.call(Retryer.java:318)
at 
org.apache.gobblin.writer.RetryWriter.callWithRetry(RetryWriter.java:142)
at 
org.apache.gobblin.writer.RetryWriter.writeEnvelope(RetryWriter.java:124)
at org.apache.gobblin.runtime.fork.Fork.processRecord(Fork.java:513)
at 
org.apache.gobblin.runtime.fork.AsynchronousFork.processRecord(AsynchronousFork.java:103)
at 
org.apache.gobblin.runtime.fork.AsynchronousFork.processRecords(AsynchronousFork.java:86)
at 

[jira] [Updated] (HADOOP-17157) S3A rename operation not the same as HDFS when dest is empty dir

2020-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17157:

Summary: S3A rename operation not the same as HDFS when dest is empty dir  
(was: S3A rename operation not the same with HDFS)

> S3A rename operation not the same as HDFS when dest is empty dir
> 
>
> Key: HADOOP-17157
> URL: https://issues.apache.org/jira/browse/HADOOP-17157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Jiajia Li
>Priority: Minor
>
> When I run the test ITestS3ADeleteManyFiles, I change the 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADeleteManyFiles.java#L97]
> to 
> {code}
> fs.mkdirs(finalDir);
> {code}
> So before rename operator, "finalParent/final" has been created.
> But after the rename operation,  all the files will be moved from 
> "srcParent/src" to "finalParent/final"
> So this is not the same with the HDFS rename operation:
> HDFS rename includes the calculation of the destination path. If the 
> destination exists and is a directory, the final destination of the rename 
> becomes the destination + the filename of the source path.
> let dest = if (isDir(FS, src) and d != src) :
> d + [filename(src)]
> else :
> d






[jira] [Updated] (HADOOP-17157) S3A rename operation not the same with HDFS

2020-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17157:

Affects Version/s: 3.2.1

> S3A rename operation not the same with HDFS
> ---
>
> Key: HADOOP-17157
> URL: https://issues.apache.org/jira/browse/HADOOP-17157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Jiajia Li
>Priority: Minor
>
> When I run the test ITestS3ADeleteManyFiles, I change the 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADeleteManyFiles.java#L97]
> to 
> {code}
> fs.mkdirs(finalDir);
> {code}
> So before rename operator, "finalParent/final" has been created.
> But after the rename operation,  all the files will be moved from 
> "srcParent/src" to "finalParent/final"
> So this is not the same with the HDFS rename operation:
> HDFS rename includes the calculation of the destination path. If the 
> destination exists and is a directory, the final destination of the rename 
> becomes the destination + the filename of the source path.
> let dest = if (isDir(FS, src) and d != src) :
> d + [filename(src)]
> else :
> d






[jira] [Updated] (HADOOP-17157) S3A rename operation not the same with HDFS

2020-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17157:

Priority: Minor  (was: Major)

> S3A rename operation not the same with HDFS
> ---
>
> Key: HADOOP-17157
> URL: https://issues.apache.org/jira/browse/HADOOP-17157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jiajia Li
>Priority: Minor
>
> When I run the test ITestS3ADeleteManyFiles, I change the 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADeleteManyFiles.java#L97]
> to 
> {code}
> fs.mkdirs(finalDir);
> {code}
> So before rename operator, "finalParent/final" has been created.
> But after the rename operation,  all the files will be moved from 
> "srcParent/src" to "finalParent/final"
> So this is not the same with the HDFS rename operation:
> HDFS rename includes the calculation of the destination path. If the 
> destination exists and is a directory, the final destination of the rename 
> becomes the destination + the filename of the source path.
> let dest = if (isDir(FS, src) and d != src) :
> d + [filename(src)]
> else :
> d






[jira] [Commented] (HADOOP-17157) S3A rename operation not the same with HDFS

2020-07-27 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17165809#comment-17165809
 ] 

Steve Loughran commented on HADOOP-17157:
-

thank you for running and enhancing the tests - always appreciated

rename in FileSystem is the troublespot in our lives, especially that bit about 
empty directories, which was more of an accident/misunderstanding (mv does 
that, Posix does not: 
https://pubs.opengroup.org/onlinepubs/009695399/functions/rename.html)

That HDFS behaviour you see holds if-and-only-if the destination is empty. 

regarding both that filesystem spec and the s3a behaviour, yes, we may be 
wrong, at least as far as empty directories are concerned.

* the bit of the spec needs review/cleanup. It's the bit we are scared of
* I don't really want to change s3a as the general consensus is that HDFS is 
broken

FileContext's rename() doesn't do bad things on empty dest directories. I'll 
have to look at s3a now to see what it does.

What to do *properly* here

HADOOP-11452 looks at making rename/3 public and specified; it has never been 
finished. See the discussion there for what I'd like now

HDDS-2112 covers ozone/hdfs mismatch


I think I'd like to see that async rename I've discussed there, but you are 
welcome to take up HADOOP-11452 and finish it off

(ps: thank you for running the tests. Always appreciated)

What now:

# I'd recommend you look at org.apache.hadoop.fs.contract.ContractOptions and 
see the options there, and which filesystems do what. I think S3A copies posix. 
# if it gets things hopelessly wrong there, that's an issue
# if it doesn't copy HDFS's considered-wrong policy: I don't feel too bad. 

Our filesystem.md docs clearly need improving on this section. I think the big 
issue is that the author of that bit of spec didn't fully understand HDFS and 
was scared to look into the details. I speak as that individual.

If you want to review and clarify it, that would be gladly welcomed




> S3A rename operation not the same with HDFS
> ---
>
> Key: HADOOP-17157
> URL: https://issues.apache.org/jira/browse/HADOOP-17157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jiajia Li
>Priority: Major
>
> When I run the test ITestS3ADeleteManyFiles, I change the 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADeleteManyFiles.java#L97]
> to 
> {code}
> fs.mkdirs(finalDir);
> {code}
> So before rename operator, "finalParent/final" has been created.
> But after the rename operation,  all the files will be moved from 
> "srcParent/src" to "finalParent/final"
> So this is not the same with the HDFS rename operation:
> HDFS rename includes the calculation of the destination path. If the 
> destination exists and is a directory, the final destination of the rename 
> becomes the destination + the filename of the source path.
> let dest = if (isDir(FS, src) and d != src) :
> d + [filename(src)]
> else :
> d






[jira] [Created] (HADOOP-17158) Intermittent test timeout for ITestAbfsInputStreamStatistics#testReadAheadCounters

2020-07-27 Thread Mehakmeet Singh (Jira)
Mehakmeet Singh created HADOOP-17158:


 Summary: Intermittent test timeout for 
ITestAbfsInputStreamStatistics#testReadAheadCounters
 Key: HADOOP-17158
 URL: https://issues.apache.org/jira/browse/HADOOP-17158
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Mehakmeet Singh
Assignee: Mehakmeet Singh


Intermittent test timeout for 
ITestAbfsInputStreamStatistics#testReadAheadCounters happening due to race 
conditions in readAhead threads.

Test error:


{code:java}
[ERROR] testReadAheadCounters(org.apache.hadoop.fs.azurebfs.ITestAbfsInputStreamStatistics)  Time elapsed: 30.723 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 3 milliseconds
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.fs.azurebfs.ITestAbfsInputStreamStatistics.testReadAheadCounters(ITestAbfsInputStreamStatistics.java:346)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
{code}
Possible reasoning:

- The readAhead queue doesn't get completed, and hence the counter values are 
not satisfied within the 30-second window on some systems.

- The condition that the readAheadBytesRead and remoteBytesRead counter values 
be greater than or equal to 4KB and 32KB respectively doesn't hold on some 
machines: sometimes, instead of reading from the readAhead buffer, remote 
reads are performed because threads are still in the readAhead queue filling 
that buffer. As a result, one of the two counter values never satisfies the 
condition, the test loops indefinitely, and it eventually times out.

Possible fixes:

- Write a better test (one that would pass under all conditions).
- Maybe a unit test (UT) instead of an integration test (IT)?

Bettering the test would be preferable, with a UT as the last resort.
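One way to make such a test robust against read-ahead timing is a bounded poll rather than an unbounded wait on the counters. This is only a sketch of the idea, not the actual test code; the timeout and interval values and the BoundedPoll class are illustrative.

```java
import java.util.function.BooleanSupplier;

// Sketch: poll a condition until it holds or a deadline passes, instead of
// spinning in an infinite loop and relying on the JUnit timeout to kill us.
public class BoundedPoll {
    public static boolean await(BooleanSupplier condition, long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false; // treat interruption as "condition never observed"
            }
        }
        // One final check at the deadline so a last-moment update still counts.
        return condition.getAsBoolean();
    }
}
```

The test would then assert on the boolean result (with a clear failure message) instead of timing out, e.g. await(() -> readAheadBytesRead >= 4096 && remoteBytesRead >= 32768, ...).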






[jira] [Created] (HADOOP-17157) S3A rename operation not the same with HDFS

2020-07-27 Thread Jiajia Li (Jira)
Jiajia Li created HADOOP-17157:
--

 Summary: S3A rename operation not the same with HDFS
 Key: HADOOP-17157
 URL: https://issues.apache.org/jira/browse/HADOOP-17157
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Jiajia Li


When I run the test ITestS3ADeleteManyFiles, I change the 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADeleteManyFiles.java#L97]

to 

{code}

fs.mkdirs(finalDir);

{code}

So before the rename operation, "finalParent/final" has been created.

But after the rename operation, all the files are moved from 
"srcParent/src" to "finalParent/final".

So this is not the same as the HDFS rename operation:

HDFS rename includes the calculation of the destination path. If the 
destination exists and is a directory, the final destination of the rename 
becomes the destination + the filename of the source path.
let dest = if (isDir(FS, d) and d != src) :
        d + [filename(src)]
    else :
        d
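The destination calculation described above can be sketched as follows. This mirrors the spec pseudocode only; it is not the actual HDFS or S3A implementation, and it takes "destination is an existing directory" as an input rather than probing a filesystem.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the HDFS-style rename destination calculation: if the destination
// exists as a directory (and is not the source itself), the effective
// destination becomes dest/filename(src); otherwise it is dest as given.
public class RenameDest {
    public static Path effectiveDest(Path src, Path dest, boolean destIsDir) {
        if (destIsDir && !dest.equals(src)) {
            return dest.resolve(src.getFileName());
        }
        return dest;
    }

    public static void main(String[] args) {
        // "finalParent/final" exists as an empty directory before the rename,
        // so under HDFS semantics the source lands *under* it.
        Path d = effectiveDest(Paths.get("srcParent/src"),
                               Paths.get("finalParent/final"), true);
        System.out.println(d); // finalParent/final/src
    }
}
```

Under these semantics the test's files would end up in "finalParent/final/src", whereas the S3A behaviour reported here moves them directly into "finalParent/final".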






[GitHub] [hadoop] hadoop-yetus commented on pull request #2166: HDFS-15488. Add a command to list all snapshots for a snaphottable root with snapshot Ids.

2020-07-27 Thread GitBox


hadoop-yetus commented on pull request #2166:
URL: https://github.com/apache/hadoop/pull/2166#issuecomment-664333159


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 20s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 40s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 34s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   4m  3s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 51s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 17s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 40s |  hadoop-hdfs-client in trunk failed with 
JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 40s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-hdfs-rbf in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 58s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   1m  5s |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  findbugs  |   0m 32s |  hadoop-hdfs-rbf in trunk failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 30s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  cc  |   4m 30s |  the patch passed  |
   | -1 :x: |  javac  |   4m 30s |  
hadoop-hdfs-project-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 1 new + 776 unchanged - 
0 fixed = 777 total (was 776)  |
   | +1 :green_heart: |  compile  |   3m 59s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  cc  |   3m 59s |  the patch passed  |
   | -1 :x: |  javac  |   3m 59s |  
hadoop-hdfs-project-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 753 unchanged - 
0 fixed = 754 total (was 753)  |
   | -0 :warning: |  checkstyle  |   1m  1s |  hadoop-hdfs-project: The patch 
generated 9 new + 501 unchanged - 0 fixed = 510 total (was 501)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m 32s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 13s |  The patch generated 0 new + 
104 unchanged - 132 fixed = 104 total (was 236)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) with tabs.  |
   | -1 :x: |  xml  |   0m  1s |  The patch has 1 ill-formed XML file(s).  |
   | +1 :green_heart: |  shadedclient  |  15m 47s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-hdfs-client in the patch failed 
with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 25s |  hadoop-hdfs-rbf in the patch failed with 
JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 33s |  
hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09
 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 2 new + 98 
unchanged - 2 fixed = 100 total (was 100)  |
   | -1 :x: |  findbugs  |   3m 24s |  hadoop-hdfs-project/hadoop-hdfs-client 
generated 243 new + 0 unchanged - 0 fixed = 243 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   2m 18s |  hadoop-hdfs-client in the patch passed.  |
   | -1 :x: |  unit  | 134m 41s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  10m 16s |  hadoop-hdfs-rbf in the 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2166: HDFS-15488. Add a command to list all snapshots for a snaphottable root with snapshot Ids.

2020-07-27 Thread GitBox


hadoop-yetus commented on pull request #2166:
URL: https://github.com/apache/hadoop/pull/2166#issuecomment-664328526


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  3s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  3s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 20s |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 10s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   4m  2s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 53s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 43s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-hdfs-client in trunk failed with 
JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 30s |  hadoop-hdfs-rbf in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 19s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 13s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 47s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  cc  |   4m 47s |  the patch passed  |
   | -1 :x: |  javac  |   4m 47s |  
hadoop-hdfs-project-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 1 new + 775 unchanged - 
0 fixed = 776 total (was 775)  |
   | +1 :green_heart: |  compile  |   4m 28s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  cc  |   4m 28s |  the patch passed  |
   | -1 :x: |  javac  |   4m 28s |  
hadoop-hdfs-project-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 753 unchanged - 
0 fixed = 754 total (was 753)  |
   | -0 :warning: |  checkstyle  |   1m  5s |  hadoop-hdfs-project: The patch 
generated 9 new + 501 unchanged - 0 fixed = 510 total (was 501)  |
   | +1 :green_heart: |  mvnsite  |   2m 48s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m 34s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 13s |  The patch generated 0 new + 
104 unchanged - 132 fixed = 104 total (was 236)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) with tabs.  |
   | -1 :x: |  xml  |   0m  1s |  The patch has 1 ill-formed XML file(s).  |
   | +1 :green_heart: |  shadedclient  |  16m 27s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-hdfs-client in the patch failed 
with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 24s |  hadoop-hdfs-rbf in the patch failed with 
JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 29s |  
hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09
 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 2 new + 98 
unchanged - 2 fixed = 100 total (was 100)  |
   | -1 :x: |  findbugs  |   2m 45s |  hadoop-hdfs-project/hadoop-hdfs-client 
generated 243 new + 0 unchanged - 0 fixed = 243 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   2m  6s |  hadoop-hdfs-client in the patch failed.  |
   | -1 :x: |  unit  | 136m  6s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  unit  |  11m 58s |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 264m 51s |   |
   
   
   | Reason | 

[jira] [Commented] (HADOOP-16798) job commit failure in S3A MR magic committer test

2020-07-27 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17165602#comment-17165602
 ] 

Steve Loughran commented on HADOOP-16798:
-

no, didn't think of doing a test that complicated. Thanks for the suggestion 
though. Some deliberate POST block would do it, wouldn't it?

> job commit failure in S3A MR magic committer test
> -
>
> Key: HADOOP-16798
> URL: https://issues.apache.org/jira/browse/HADOOP-16798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.1
>
> Attachments: stdout
>
>
> failure in 
> {code}
> ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
> job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e894de2 rejected from 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
> {code}
> Stack implies thread pool rejected it, but toString says "Terminated". Race 
> condition?
> *update 2020-04-22*: it's caused when a task is aborted in the AM - the 
> threadpool is disposed of, and while that is shutting down in one thread, 
> task commit is initiated using the same thread pool. When the task 
> committer's destroy operation times out, it kills all the active uploads.
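
The race described above can be reproduced with a minimal, self-contained sketch. The class and method names here are illustrative, not Hadoop code; the point is only that a submit() racing against a completed shutdownNow() gets a RejectedExecutionException whose message embeds the pool's toString(), which reports the "Terminated" state seen in the stack above.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

// Minimal reproduction: once shutdownNow() has run (the "task abort" side of
// the race), any later submit() (the "task commit" side) is rejected by the
// default AbortPolicy, and the exception message includes the pool's state.
public class PoolShutdownRace {
    public static String submitAfterShutdown() {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.shutdownNow(); // thread 1: the abort path disposes of the pool
        try {
            pool.submit(() -> { }); // thread 2: the commit path reuses the pool
            return "accepted";
        } catch (RejectedExecutionException e) {
            return "rejected: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(submitAfterShutdown());
    }
}
```

In the real code the two calls happen on different threads, so whether the submit is accepted or rejected depends on timing - which is why the failure was intermittent.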



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13507) export s3a BlockingThreadPoolExecutorService pool info (size, load) as gauges

2020-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13507:

Summary: export s3a BlockingThreadPoolExecutorService pool info (size, 
load) as gauges  (was: export s3a BlockingThreadPoolExecutorService pool info 
(size, load) as metrics)

> export s3a BlockingThreadPoolExecutorService pool info (size, load) as gauges
> -
>
> Key: HADOOP-13507
> URL: https://issues.apache.org/jira/browse/HADOOP-13507
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> We should publish load info on {{BlockingThreadPoolExecutorService}} as s3a 
> metrics: size, available, maybe even some timer info on load (at least: rate 
> of recent semaphore acquire/release)
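
A sketch of what such gauges could look like, using plain `java.util.function.Supplier` callbacks over a semaphore-bounded pool. The class and method names are illustrative, not the actual S3A metrics API:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Illustrative only: a semaphore-bounded "pool" exposing its configured size
// and current availability as gauge callbacks a metrics registry could poll.
public class PoolGauges {
    private final int size;
    private final Semaphore permits;

    public PoolGauges(int size) {
        this.size = size;
        this.permits = new Semaphore(size);
    }

    public boolean tryAcquire() { return permits.tryAcquire(); }
    public void release() { permits.release(); }

    // Gauges are sampled on read, so they always reflect the current load.
    public Supplier<Integer> sizeGauge() { return () -> size; }
    public Supplier<Integer> availableGauge() { return permits::availablePermits; }

    public static void main(String[] args) {
        PoolGauges pool = new PoolGauges(4);
        pool.tryAcquire();
        System.out.println("size=" + pool.sizeGauge().get()
            + " available=" + pool.availableGauge().get());
    }
}
```

Because the gauges are callbacks rather than counters, there is no per-acquire bookkeeping cost; the price is paid only when the metrics system samples them.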



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13221) s3a create() doesn't check for an ancestor path being a file

2020-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13221.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

I'm going to WONTFIX this - it's too expensive, and nobody has really noticed 
that we break one of the fundamental assumptions about storage.

> s3a create() doesn't check for an ancestor path being a file
> 
>
> Key: HADOOP-13221
> URL: https://issues.apache.org/jira/browse/HADOOP-13221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-13321-test.patch
>
>
> Seen in a code review. Notable that if true, this got by all the FS contract 
> tests —showing we missed a couple.
> {{S3AFilesystem.create()}} does not examine its parent paths to verify that 
> there does not exist one which is a file. It looks for the destination path 
> if overwrite=false (see HADOOP-13188 for issues there), but it doesn't check 
> the parent for not being a file, or the parent of that path.
> It must go up the tree, verifying that either a path does not exist, or that 
> the path is a directory. The scan can stop at the first entry which is a 
> directory, thus the operation is O(empty-directories) and not O(directories).
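
The scan described above can be sketched as follows. `FileType` and the `lookup` function stand in for a real store probe and are assumptions, not the actual S3AFileSystem code:

```java
import java.util.function.Function;

// Sketch of the ancestor check: walk up from the parent of the create path,
// failing if any ancestor is a file, and stopping at the first directory -
// so the cost is O(empty ancestors), not O(all ancestors).
public class AncestorCheck {
    public enum FileType { FILE, DIRECTORY, MISSING }

    public static boolean parentsAllowCreate(String path,
                                             Function<String, FileType> lookup) {
        String parent = parentOf(path);
        while (parent != null) {
            FileType t = lookup.apply(parent);
            if (t == FileType.FILE) {
                return false;          // ancestor is a file: create must fail
            }
            if (t == FileType.DIRECTORY) {
                return true;           // first real directory: stop scanning
            }
            parent = parentOf(parent); // missing entry: keep walking up
        }
        return true;                   // reached the root without a conflict
    }

    private static String parentOf(String path) {
        int i = path.lastIndexOf('/');
        return i <= 0 ? null : path.substring(0, i);
    }

    public static void main(String[] args) {
        // "/a" is a directory, so creating under "/a/d/e" is allowed.
        System.out.println(parentsAllowCreate("/a/d/e",
            p -> p.equals("/a") ? FileType.DIRECTORY : FileType.MISSING));
    }
}
```

Each `lookup` call models one HEAD/LIST probe against the store, which is why the early exit on the first directory matters for cost.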



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17156) Clear readahead requests on stream close

2020-07-27 Thread Rajesh Balamohan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-17156:
--
Priority: Minor  (was: Major)

> Clear readahead requests on stream close
> 
>
> Key: HADOOP-17156
> URL: https://issues.apache.org/jira/browse/HADOOP-17156
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> It would be good to close/clear pending read ahead requests on stream close().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17017) S3A client retries on SSL Auth exceptions triggered by "." bucket names

2020-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17017:

Parent Issue: HADOOP-16829  (was: HADOOP-15620)

> S3A client retries on SSL Auth exceptions triggered by "." bucket names
> ---
>
> Key: HADOOP-17017
> URL: https://issues.apache.org/jira/browse/HADOOP-17017
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Priority: Minor
>
> If you have a "." in bucket names (it's allowed!) then virtual host HTTPS 
> connections fail with a java.net.ssl exception, except that we retry, and the 
> inner cause is wrapped by generic "client exceptions".
> I'm not going to try and be clever about fixing this, but we should
> * make sure that the inner exception is raised up
> * avoid retries
> * document it in the troubleshooting page. 
> * if there is a well known public "." bucket (cloudera has some:)) we can test
> I get a vague suspicion the AWS SDK is retrying too. Not much we can do there.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17156) Clear readahead requests on stream close

2020-07-27 Thread Rajesh Balamohan (Jira)
Rajesh Balamohan created HADOOP-17156:
-

 Summary: Clear readahead requests on stream close
 Key: HADOOP-17156
 URL: https://issues.apache.org/jira/browse/HADOOP-17156
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Rajesh Balamohan


It would be good to close/clear pending read ahead requests on stream close().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2170: HADOOP-1320. Dir Marker getFileStatus() changes backport

2020-07-27 Thread GitBox


steveloughran commented on pull request #2170:
URL: https://github.com/apache/hadoop/pull/2170#issuecomment-664218542


   ```
   [ERROR] 
testCreateSubdirWithDifferentKey(org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSEC)
  Time elapsed: 1.54 s  <<< FAILURE!
   java.lang.AssertionError: Expected a java.nio.file.AccessDeniedException to 
be thrown, but got the result: : 
S3AFileStatus{path=s3a://stevel-ireland/test/testCreateSubdirWithDifferentKey/nestedDir;
 isDirectory=true; modification_time=0; access_time=0; owner=stevel; 
group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
isEncrypted=true; isErasureCoded=false} isEmptyDirectory=FALSE
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:492)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:377)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:446)
at 
org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSEC.testCreateSubdirWithDifferentKey(ITestS3AEncryptionSSEC.java:124)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
   
   [INFO] Running 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreScale
   [WARNING] Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 
4.045 s - in org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreScale
   [INFO] 
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   ITestS3AEncryptionSSEC.testCreateSubdirWithDifferentKey:124 
Expected a java.nio.file.AccessDeniedException to be thrown, but got the 
result: : 
S3AFileStatus{path=s3a://stevel-ireland/test/testCreateSubdirWithDifferentKey/nestedDir;
 isDirectory=true; modification_time=0; access_time=0; owner=stevel; 
group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
isEncrypted=true; isErasureCoded=false} isEmptyDirectory=FALSE
   [ERROR]   ITestS3AEncryptionSSEC.testListEncryptedDir:208 Expecting 
java.nio.file.AccessDeniedException with text Service: Amazon S3; Status Code: 
403; but got : void
   [ERROR]   ITestS3AEncryptionSSEC.testListStatusEncryptedDir:253 Expecting 
java.nio.file.AccessDeniedException with text Service: Amazon S3; Status Code: 
403; but got : void
   [INFO] 
   [ERROR] Tests run: 80, Failures: 3, Errors: 0, Skipped: 9
   [INFO] 
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2170: HADOOP-1320. Dir Marker getFileStatus() changes backport

2020-07-27 Thread GitBox


hadoop-yetus removed a comment on pull request #2170:
URL: https://github.com/apache/hadoop/pull/2170#issuecomment-663644838


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  11m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m  8s |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  15m 48s |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 25s |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   2m  2s |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  17m 55s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  branch-3.2 passed  |
   | +0 :ok: |  spotbugs  |   1m  3s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  0s |  branch-3.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |  15m  7s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 25s |  root: The patch generated 19 new 
+ 5 unchanged - 0 fixed = 24 total (was 5)  |
   | +1 :green_heart: |  mvnsite  |   1m 59s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  13m  8s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 26s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   4m 38s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 130m  7s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2170/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2170 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c4b6daf3e36e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | branch-3.2 / 0fb7c48 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~16.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2170/1/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2170/1/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2170/1/testReport/ |
   | Max. process+thread count | 1370 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2170/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on a change in pull request #2165: HDFS-15481. Ordered snapshot deletion: garbage collect deleted snapshots

2020-07-27 Thread GitBox


bshashikant commented on a change in pull request #2165:
URL: https://github.com/apache/hadoop/pull/2165#discussion_r460737232



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -1284,6 +1288,7 @@ void startCommonServices(Configuration conf, HAContext 
haContext) throws IOExcep
   dir.setINodeAttributeProvider(inodeAttributeProvider);
 }
 snapshotManager.registerMXBean();
+snapshotDeletionGc.schedule();

Review comment:
   I think it's better to start the GC work in 
`FSNamesystem#startActiveServices()` after quota setup and initialization are 
done:
   
   ```java
   // startActiveServices()
   // Initialize the quota.
   dir.updateCountForQuota();
   // Enable quota checks.
   dir.enableQuotaChecks();
   ```
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17155) DF implementation of CachingGetSpaceUsed makes DFS Used size not correct

2020-07-27 Thread angerszhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angerszhu resolved HADOOP-17155.

Resolution: Not A Problem

> DF implementation of CachingGetSpaceUsed makes DFS Used size not correct
> 
>
> Key: HADOOP-17155
> URL: https://issues.apache.org/jira/browse/HADOOP-17155
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: angerszhu
>Priority: Minor
> Attachments: HADOOP-17155.1.patch
>
>
> When   we calculate DN's storage used, we add each Volume's used size 
> together and each volume's size comes from it's BP's size. 
> When we use DF instead of DU, we know that DF check disk space usage (not 
> disk size of a directory). so when check BP dir path,  What you're actually 
> checking is the corresponding disk directory space. 
>  
> When we use this with federation, under each volume  may have more than one 
> BP, each BP return it's corresponding disk directory space. 
>  
> If we have two BP under one volume, we will make DN's storage info's Used 
> size double than real size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17155) DF implementation of CachingGetSpaceUsed makes DFS Used size not correct

2020-07-27 Thread angerszhu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17165479#comment-17165479
 ] 

angerszhu commented on HADOOP-17155:


[~leosun08]

All right, I didn't get the point when I first looked into this.

> DF implementation of CachingGetSpaceUsed makes DFS Used size not correct
> 
>
> Key: HADOOP-17155
> URL: https://issues.apache.org/jira/browse/HADOOP-17155
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: angerszhu
>Priority: Minor
> Attachments: HADOOP-17155.1.patch
>
>
> When   we calculate DN's storage used, we add each Volume's used size 
> together and each volume's size comes from it's BP's size. 
> When we use DF instead of DU, we know that DF check disk space usage (not 
> disk size of a directory). so when check BP dir path,  What you're actually 
> checking is the corresponding disk directory space. 
>  
> When we use this with federation, under each volume  may have more than one 
> BP, each BP return it's corresponding disk directory space. 
>  
> If we have two BP under one volume, we will make DN's storage info's Used 
> size double than real size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17155) DF implementation of CachingGetSpaceUsed makes DFS Used size not correct

2020-07-27 Thread angerszhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angerszhu updated HADOOP-17155:
---
Attachment: HADOOP-17155.1.patch

> DF implementation of CachingGetSpaceUsed makes DFS Used size not correct
> 
>
> Key: HADOOP-17155
> URL: https://issues.apache.org/jira/browse/HADOOP-17155
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: angerszhu
>Priority: Minor
> Attachments: HADOOP-17155.1.patch
>
>
> When   we calculate DN's storage used, we add each Volume's used size 
> together and each volume's size comes from it's BP's size. 
> When we use DF instead of DU, we know that DF check disk space usage (not 
> disk size of a directory). so when check BP dir path,  What you're actually 
> checking is the corresponding disk directory space. 
>  
> When we use this with federation, under each volume  may have more than one 
> BP, each BP return it's corresponding disk directory space. 
>  
> If we have two BP under one volume, we will make DN's storage info's Used 
> size double than real size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17155) DF implementation of CachingGetSpaceUsed makes DFS Used size not correct

2020-07-27 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17165477#comment-17165477
 ] 

Lisheng Sun commented on HADOOP-17155:
--

It does indeed exist, and it's recommended to use HDFS-14313.

> DF implementation of CachingGetSpaceUsed makes DFS Used size not correct
> 
>
> Key: HADOOP-17155
> URL: https://issues.apache.org/jira/browse/HADOOP-17155
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: angerszhu
>Priority: Minor
> Attachments: HADOOP-17155.1.patch
>
>
> When   we calculate DN's storage used, we add each Volume's used size 
> together and each volume's size comes from it's BP's size. 
> When we use DF instead of DU, we know that DF check disk space usage (not 
> disk size of a directory). so when check BP dir path,  What you're actually 
> checking is the corresponding disk directory space. 
>  
> When we use this with federation, under each volume  may have more than one 
> BP, each BP return it's corresponding disk directory space. 
>  
> If we have two BP under one volume, we will make DN's storage info's Used 
> size double than real size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17155) DF implementation of CachingGetSpaceUsed makes DFS Used size not correct

2020-07-27 Thread angerszhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angerszhu updated HADOOP-17155:
---
Attachment: (was: HADOOP-17155.1.patch)

> DF implementation of CachingGetSpaceUsed makes DFS Used size not correct
> 
>
> Key: HADOOP-17155
> URL: https://issues.apache.org/jira/browse/HADOOP-17155
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: angerszhu
>Priority: Minor
> Attachments: HADOOP-17155.1.patch
>
>
> When   we calculate DN's storage used, we add each Volume's used size 
> together and each volume's size comes from it's BP's size. 
> When we use DF instead of DU, we know that DF check disk space usage (not 
> disk size of a directory). so when check BP dir path,  What you're actually 
> checking is the corresponding disk directory space. 
>  
> When we use this with federation, under each volume  may have more than one 
> BP, each BP return it's corresponding disk directory space. 
>  
> If we have two BP under one volume, we will make DN's storage info's Used 
> size double than real size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17155) DF implementation of CachingGetSpaceUsed makes DFS Used size not correct

2020-07-27 Thread angerszhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angerszhu updated HADOOP-17155:
---
Attachment: HADOOP-17155.1.patch

> DF implementation of CachingGetSpaceUsed makes DFS Used size not correct
> 
>
> Key: HADOOP-17155
> URL: https://issues.apache.org/jira/browse/HADOOP-17155
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: angerszhu
>Priority: Minor
> Attachments: HADOOP-17155.1.patch
>
>
> When   we calculate DN's storage used, we add each Volume's used size 
> together and each volume's size comes from it's BP's size. 
> When we use DF instead of DU, we know that DF check disk space usage (not 
> disk size of a directory). so when check BP dir path,  What you're actually 
> checking is the corresponding disk directory space. 
>  
> When we use this with federation, under each volume  may have more than one 
> BP, each BP return it's corresponding disk directory space. 
>  
> If we have two BP under one volume, we will make DN's storage info's Used 
> size double than real size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17155) DF implementation of CachingGetSpaceUsed makes DFS Used size not correct

2020-07-27 Thread angerszhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angerszhu updated HADOOP-17155:
---
Attachment: (was: HADOOP-17155.1.patch)

> DF implementation of CachingGetSpaceUsed makes DFS Used size not correct
> 
>
> Key: HADOOP-17155
> URL: https://issues.apache.org/jira/browse/HADOOP-17155
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: angerszhu
>Priority: Minor
> Attachments: HADOOP-17155.1.patch
>
>
> When   we calculate DN's storage used, we add each Volume's used size 
> together and each volume's size comes from it's BP's size. 
> When we use DF instead of DU, we know that DF check disk space usage (not 
> disk size of a directory). so when check BP dir path,  What you're actually 
> checking is the corresponding disk directory space. 
>  
> When we use this with federation, under each volume  may have more than one 
> BP, each BP return it's corresponding disk directory space. 
>  
> If we have two BP under one volume, we will make DN's storage info's Used 
> size double than real size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17155) DF implementation of CachingGetSpaceUsed makes DFS Used size not correct

2020-07-27 Thread angerszhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angerszhu updated HADOOP-17155:
---
Attachment: HADOOP-17155.1.patch

> DF implementation of CachingGetSpaceUsed makes DFS Used size not correct
> 
>
> Key: HADOOP-17155
> URL: https://issues.apache.org/jira/browse/HADOOP-17155
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: angerszhu
>Priority: Minor
> Attachments: HADOOP-17155.1.patch
>
>
> When   we calculate DN's storage used, we add each Volume's used size 
> together and each volume's size comes from it's BP's size. 
> When we use DF instead of DU, we know that DF check disk space usage (not 
> disk size of a directory). so when check BP dir path,  What you're actually 
> checking is the corresponding disk directory space. 
>  
> When we use this with federation, under each volume  may have more than one 
> BP, each BP return it's corresponding disk directory space. 
>  
> If we have two BP under one volume, we will make DN's storage info's Used 
> size double than real size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on pull request #2166: HDFS-15488. Add a command to list all snapshots for a snaphottable root with snapshot Ids.

2020-07-27 Thread GitBox


bshashikant commented on pull request #2166:
URL: https://github.com/apache/hadoop/pull/2166#issuecomment-664149078


   Thanks @ayushtkn and @mukul1987 for the review comments. The latest patch 
addresses the review comments.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17155) DF implementation of CachingGetSpaceUsed makes DFS Used size not correct

2020-07-27 Thread angerszhu (Jira)
angerszhu created HADOOP-17155:
--

 Summary: DF implementation of CachingGetSpaceUsed makes DFS Used 
size not correct
 Key: HADOOP-17155
 URL: https://issues.apache.org/jira/browse/HADOOP-17155
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: angerszhu


When we calculate a DN's storage used, we add each volume's used size 
together, and each volume's size comes from its BPs' sizes.

When we use DF instead of DU, DF checks disk space usage (not the size of a 
directory tree), so when it checks a BP dir path, what it actually measures is 
the space used on the corresponding disk.

When we use this with federation, each volume may have more than one BP, and 
each BP reports the space used on its corresponding disk.

If we have two BPs under one volume, we make the DN's storage info's Used size 
double the real size.
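
A toy illustration of the double counting, assuming (hypothetically) that each block pool's DF probe reports the whole volume's used space; the class and method names are not real Hadoop code:

```java
import java.util.List;

// Toy model: DU sums per-BP directory sizes, while DF reports the whole
// volume's used space for every BP directory on that volume - so summing the
// per-BP DF values over-counts when one volume hosts several block pools.
public class DfDoubleCount {
    // DF-style probe: every BP dir on the volume reports the volume's usage.
    public static long dfStyleUsed(List<String> bpDirs, long volumeUsed) {
        return bpDirs.stream().mapToLong(dir -> volumeUsed).sum();
    }

    public static void main(String[] args) {
        // Two block pools on one 100-unit volume: DF-style summing reports 200.
        System.out.println(dfStyleUsed(List.of("BP-1", "BP-2"), 100L));
    }
}
```

With a single BP per volume (the non-federated case) the two strategies agree, which is why the bug only shows up under federation.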



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org