[jira] [Commented] (HDFS-12379) NameNode getListing should use FileStatus instead of HdfsFileStatus

2017-08-30 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148492#comment-16148492
 ] 

Zhe Zhang commented on HDFS-12379:
--

Pinging [~daryn] [~wheat9] [~kihwal] for opinions based on HDFS-11641 
activities.

> NameNode getListing should use FileStatus instead of HdfsFileStatus
> ---
>
> Key: HDFS-12379
> URL: https://issues.apache.org/jira/browse/HDFS-12379
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Zhe Zhang
>
> The public {{listStatus}} APIs in {{FileSystem}} and 
> {{DistributedFileSystem}} expose {{FileStatus}} instead of 
> {{HdfsFileStatus}}. Therefore it is a waste to create the more expensive 
> {{HdfsFileStatus}} objects on the NameNode.
> It should be a simple change, similar to HDFS-11641. Marking this 
> incompatible because the wire protocol change is incompatible. It is not 
> clear which downstream apps are affected by this incompatibility; likely 
> those using curl directly or writing their own HDFS client.






[jira] [Created] (HDFS-12379) NameNode getListing should use FileStatus instead of HdfsFileStatus

2017-08-30 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-12379:


 Summary: NameNode getListing should use FileStatus instead of 
HdfsFileStatus
 Key: HDFS-12379
 URL: https://issues.apache.org/jira/browse/HDFS-12379
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Zhe Zhang


The public {{listStatus}} APIs in {{FileSystem}} and {{DistributedFileSystem}} 
expose {{FileStatus}} instead of {{HdfsFileStatus}}. Therefore it is a waste to 
create the more expensive {{HdfsFileStatus}} objects on the NameNode.

It should be a simple change, similar to HDFS-11641. Marking this incompatible 
because the wire protocol change is incompatible. It is not clear which 
downstream apps are affected by this incompatibility; likely those using curl 
directly or writing their own HDFS client.
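
For context, a minimal example of the public contract in question (standard 
Hadoop API; the path is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// listStatus() only promises FileStatus, so callers never observe the
// HDFS-specific fields that HdfsFileStatus carries; that is why the
// heavier object is avoidable work on the NameNode.
public class ListStatusExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    for (FileStatus st : fs.listStatus(new Path("/tmp"))) { // example path
      System.out.println(st.getPath() + "\t" + st.getLen());
    }
  }
}
{code}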






[jira] [Updated] (HDFS-11964) RS-6-3-LEGACY has a decoding bug when it is used for pread

2017-08-30 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-11964:

Attachment: HDFS-11964.3.patch

Thanks for the review, [~drankye]! I uploaded a new patch to address it.

This patch also refactors the related random-ec-policy tests. (I referred to 
{{TestErasureCodingPoliciesWithRandomECPolicy}}.)

> RS-6-3-LEGACY has a decoding bug when it is used for pread
> --
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
> Attachments: HDFS-11964.1.patch, HDFS-11964.2.patch, 
> HDFS-11964.3.patch
>
>
> TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure fails on 
> trunk:
> {code}
> Running org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.99 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> testPreadWithDNFailure(org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy)
>   Time elapsed: 1.265 sec  <<< FAILURE!
> org.junit.internal.ArrayComparisonFailure: arrays first differed at element 
> [327680]; expected:<-36> but was:<2>
>   at 
> org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
>   at org.junit.Assert.internalArrayEquals(Assert.java:473)
>   at org.junit.Assert.assertArrayEquals(Assert.java:294)
>   at org.junit.Assert.assertArrayEquals(Assert.java:305)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedInputStream.testPreadWithDNFailure(TestDFSStripedInputStream.java:306)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}






[jira] [Commented] (HDFS-12359) Re-encryption should operate with minimum KMS ACL requirements.

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148448#comment-16148448
 ] 

Hadoop QA commented on HDFS-12359:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestEncryptedTransfer |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12359 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884582/HDFS-12359.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d8722cd20235 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ce79f7b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20932/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20932/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20932/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService

2017-08-30 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12370:
-
Status: Patch Available  (was: Open)

> Ozone: Implement TopN container choosing policy for BlockDeletionService
> 
>
> Key: HDFS-12370
> URL: https://issues.apache.org/jira/browse/HDFS-12370
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12370-HDFS-7240.001.patch
>
>
> Implement TopN container choosing policy for BlockDeletionService. This was 
> discussed in HDFS-12354.






[jira] [Updated] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService

2017-08-30 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12370:
-
Attachment: HDFS-12370-HDFS-7240.001.patch

> Ozone: Implement TopN container choosing policy for BlockDeletionService
> 
>
> Key: HDFS-12370
> URL: https://issues.apache.org/jira/browse/HDFS-12370
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12370-HDFS-7240.001.patch
>
>
> Implement TopN container choosing policy for BlockDeletionService. This was 
> discussed in HDFS-12354.






[jira] [Commented] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService

2017-08-30 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148428#comment-16148428
 ] 

Yiqun Lin commented on HDFS-12370:
--

Attached the initial patch.

> Ozone: Implement TopN container choosing policy for BlockDeletionService
> 
>
> Key: HDFS-12370
> URL: https://issues.apache.org/jira/browse/HDFS-12370
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> Implement TopN container choosing policy for BlockDeletionService. This was 
> discussed in HDFS-12354.
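
For readers who have not opened the patch, a stand-alone sketch of the TopN 
idea; {{ContainerInfo}} and its field are hypothetical stand-ins, not Ozone's 
actual container metadata API:

{code}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical stand-in for a container's deletion backlog.
class ContainerInfo {
  final String name;
  final int pendingDeletionBlocks;
  ContainerInfo(String name, int pendingDeletionBlocks) {
    this.name = name;
    this.pendingDeletionBlocks = pendingDeletionBlocks;
  }
}

public class TopNPolicySketch {
  // Choose the N containers with the largest deletion backlog so each
  // BlockDeletionService round does the most useful work first.
  static List<ContainerInfo> chooseTopN(List<ContainerInfo> all, int n) {
    return all.stream()
        .sorted(Comparator
            .comparingInt((ContainerInfo c) -> c.pendingDeletionBlocks)
            .reversed())
        .limit(n)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<ContainerInfo> all = Arrays.asList(
        new ContainerInfo("c1", 5),
        new ContainerInfo("c2", 42),
        new ContainerInfo("c3", 17));
    for (ContainerInfo c : chooseTopN(all, 2)) {
      System.out.println(c.name + " -> " + c.pendingDeletionBlocks);
    }
  }
}
{code}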






[jira] [Updated] (HDFS-12363) Possible NPE in BlockManager$StorageInfoDefragmenter#scanAndCompactStorages

2017-08-30 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12363:
-
Target Version/s: 3.0.0-beta1  (was: 2.9.0, 3.0.0-beta1, 2.6.6, 2.8.3, 
2.7.5)

> Possible NPE in BlockManager$StorageInfoDefragmenter#scanAndCompactStorages
> ---
>
> Key: HDFS-12363
> URL: https://issues.apache.org/jira/browse/HDFS-12363
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12363.01.patch, HDFS-12363.02.patch
>
>
> Saw the NN going down with the NPE below:
> {noformat}
> ERROR org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Thread 
> received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.scanAndCompactStorages(BlockManager.java:3897)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:3852)
> at java.lang.Thread.run(Thread.java:745)
> 2017-08-21 22:14:05,303 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2017-08-21 22:14:05,313 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> {noformat}
> In that version, the {{BlockManager}} code is:
> {code}
> 3896          try {
> 3897            DatanodeStorageInfo storage = datanodeManager.
> 3898                getDatanode(datanodesAndStorages.get(i)).
> 3899                getStorageInfo(datanodesAndStorages.get(i + 1));
> 3900            if (storage != null) {
> {code}
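
The chained call throws when {{getDatanode}} returns null, i.e. the node was 
removed between snapshotting the list and the lookup. A minimal sketch of the 
guard, continuing the snippet above (the actual fix in the attached patches 
may be shaped differently):

{code}
// DatanodeDescriptor is what DatanodeManager#getDatanode returns, and it
// can be null once a node has been removed.
DatanodeDescriptor node =
    datanodeManager.getDatanode(datanodesAndStorages.get(i));
DatanodeStorageInfo storage = (node == null) ? null
    : node.getStorageInfo(datanodesAndStorages.get(i + 1));
if (storage != null) {
  // compact the storage's block list as before
}
{code}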






[jira] [Commented] (HDFS-12363) Possible NPE in BlockManager$StorageInfoDefragmenter#scanAndCompactStorages

2017-08-30 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148387#comment-16148387
 ] 

Xiao Chen commented on HDFS-12363:
--

The failed tests are not related to the changes here.
The intersections are TestLeaseRecoveryStriped, TestWriteReadStripedFile, and 
TestClientProtocolForPipelineRecovery, tracked by HDFS-12360, HDFS-12377, and 
HDFS-12378 (I didn't find an existing jira, so I just created one), 
respectively.

> Possible NPE in BlockManager$StorageInfoDefragmenter#scanAndCompactStorages
> ---
>
> Key: HDFS-12363
> URL: https://issues.apache.org/jira/browse/HDFS-12363
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12363.01.patch, HDFS-12363.02.patch
>
>
> Saw the NN going down with the NPE below:
> {noformat}
> ERROR org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Thread 
> received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.scanAndCompactStorages(BlockManager.java:3897)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:3852)
> at java.lang.Thread.run(Thread.java:745)
> 2017-08-21 22:14:05,303 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2017-08-21 22:14:05,313 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> {noformat}
> In that version, the {{BlockManager}} code is:
> {code}
> 3896          try {
> 3897            DatanodeStorageInfo storage = datanodeManager.
> 3898                getDatanode(datanodesAndStorages.get(i)).
> 3899                getStorageInfo(datanodesAndStorages.get(i + 1));
> 3900            if (storage != null) {
> {code}






[jira] [Created] (HDFS-12378) TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails on trunk

2017-08-30 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-12378:


 Summary: 
TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails on trunk
 Key: HDFS-12378
 URL: https://issues.apache.org/jira/browse/HDFS-12378
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Xiao Chen


Saw on 
https://builds.apache.org/job/PreCommit-HDFS-Build/20928/testReport/org.apache.hadoop.hdfs/TestClientProtocolForPipelineRecovery/testZeroByteBlockRecovery/:


Error Message
{noformat}
Failed to replace a bad datanode on the existing pipeline due to no more good 
datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]).
 The current failed datanode replacement policy is ALWAYS, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.
{noformat}
Stacktrace
{noformat}
java.io.IOException: Failed to replace a bad datanode on the existing pipeline 
due to no more good datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]).
 The current failed datanode replacement policy is ALWAYS, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.
at 
org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1322)
at 
org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1388)
at 
org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1587)
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1488)
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1470)
at 
org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1274)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684)
{noformat}
Standard Output
{noformat}
2017-08-30 18:02:37,714 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:(469)) - starting cluster: numNameNodes=1, 
numDataNodes=3
Formatting using clusterid: testClusterID
2017-08-30 18:02:37,716 [main] INFO  namenode.FSEditLog 
(FSEditLog.java:newInstance(224)) - Edit logging is async:false
2017-08-30 18:02:37,716 [main] INFO  namenode.FSNamesystem 
(FSNamesystem.java:(742)) - KeyProvider: null
2017-08-30 18:02:37,716 [main] INFO  namenode.FSNamesystem 
(FSNamesystemLock.java:(120)) - fsLock is fair: true
2017-08-30 18:02:37,716 [main] INFO  namenode.FSNamesystem 
(FSNamesystemLock.java:(136)) - Detailed lock hold time metrics enabled: 
false
2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
(FSNamesystem.java:(763)) - fsOwner = jenkins (auth:SIMPLE)
2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
(FSNamesystem.java:(764)) - supergroup  = supergroup
2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
(FSNamesystem.java:(765)) - isPermissionEnabled = true
2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
(FSNamesystem.java:(776)) - HA Enabled: false
2017-08-30 18:02:37,718 [main] INFO  common.Util 
(Util.java:isDiskStatsEnabled(395)) - 
dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO 
profiling
2017-08-30 18:02:37,718 [main] INFO  blockmanagement.DatanodeManager 
(DatanodeManager.java:(301)) - dfs.block.invalidate.limit: 
configured=1000, counted=60, effected=1000
2017-08-30 18:02:37,718 [main] INFO  blockmanagement.DatanodeManager 
(DatanodeManager.java:(309)) - 
dfs.namenode.datanode.registration.ip-hostname-check=true
2017-08-30 18:02:37,719 [main] INFO  blockmanagement.BlockManager 
(InvalidateBlocks.java:printBlockDeletionTime(76)) - 
dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2017-08-30 18:02:37,719 [main] INFO  blockmanagement.BlockManager 
(InvalidateBlocks.java:printBlockDeletionTime(82)) - The block deletion will 
start around 2017 Aug 30 18:02:37
2017-08-30 18:02:37,719 [main] INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map 
BlocksMap
2017-08-30 18:02:37,719 [main] INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(396)) - VM type   = 64-bit
2017-08-30 18:02:37,720 [main] INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2017-08-30 18:02:37,720 [main] INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(402)) - capacity  = 2^22 = 4194304 
entries
2017-08-30 18:02:37,726 [main] INFO  blockmanagement.BlockManager 
(BlockManager.java:createBlockTokenSecretManager(560)) - 
{noformat}
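
For reference, the replacement policy named in the error is 
client-configurable. A hedged sketch of relaxing it (whether that is the right 
remedy for this particular test is a separate question):

{code}
import org.apache.hadoop.conf.Configuration;

public class ReplaceDatanodePolicySketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The key and the ALWAYS value are quoted from the error above;
    // NEVER disables datanode replacement on pipeline failure.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
        "NEVER");
    System.out.println(conf.get(
        "dfs.client.block.write.replace-datanode-on-failure.policy"));
  }
}
{code}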

[jira] [Updated] (HDFS-12359) Re-encryption should operate with minimum KMS ACL requirements.

2017-08-30 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12359:
-
Attachment: HDFS-12359.02.patch

> Re-encryption should operate with minimum KMS ACL requirements.
> ---
>
> Key: HDFS-12359
> URL: https://issues.apache.org/jira/browse/HDFS-12359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12359.01.patch, HDFS-12359.02.patch
>
>
> This was caught during KMS ACL testing.
> HDFS-10899 gets the current key versions from the KMS directly, which 
> requires {{READ}} ACLs.
> It also calls invalidateCache, which requires {{MANAGEMENT}} ACLs.
> We should fix re-encryption so it does not require more ACLs than original 
> encryption.






[jira] [Commented] (HDFS-12359) Re-encryption should operate with minimum KMS ACL requirements.

2017-08-30 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148375#comment-16148375
 ] 

Xiao Chen commented on HDFS-12359:
--

Thanks for the review, Wei-Chiu; I added a few specific checks in patch 2. 
Existing reencryption tests should cover this at a higher level.

bq. getCurrentKeyVersion would never return null.
I'm a little confused by this comment:
# {{getCurrentKeyVersion}} could return null if the edek's 
{{getCurrentKeyVersion}} is null. Even if the KMS checks it, safeguarding 
against null here seems to do more good than harm.
# {{FSDirEncryptionZoneOp.getKeyNameForZone}} could also return null, since it 
calls into:
{code:title=EncryptionZoneManager#getKeyName}
String getKeyName(final INodesInPath iip) {
  assert dir.hasReadLock();
  EncryptionZoneInt ezi = getEncryptionZoneForPath(iip);
  if (ezi == null) {
    return null;
  }
  return ezi.getKeyName();
}
{code}
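
A stand-alone illustration of the guard argued for above; the map is a 
hypothetical stand-in for the zone lookup, not the NameNode code path:

{code}
import java.util.HashMap;
import java.util.Map;

public class KeyNameGuardSketch {
  // Stand-in for EncryptionZoneManager#getKeyName: null when the path is
  // not (or no longer) inside an encryption zone.
  static final Map<String, String> ZONE_KEYS = new HashMap<>();

  static String getKeyName(String path) {
    return ZONE_KEYS.get(path);
  }

  public static void main(String[] args) {
    ZONE_KEYS.put("/ez", "key1");
    for (String path : new String[] {"/ez", "/plain"}) {
      String keyName = getKeyName(path);
      if (keyName == null) {
        System.out.println(path + ": no zone key; skip re-encryption");
      } else {
        System.out.println(path + ": re-encrypt with " + keyName);
      }
    }
  }
}
{code}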

> Re-encryption should operate with minimum KMS ACL requirements.
> ---
>
> Key: HDFS-12359
> URL: https://issues.apache.org/jira/browse/HDFS-12359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12359.01.patch, HDFS-12359.02.patch
>
>
> This was caught during KMS ACL testing.
> HDFS-10899 gets the current key versions from the KMS directly, which 
> requires {{READ}} ACLs.
> It also calls invalidateCache, which requires {{MANAGEMENT}} ACLs.
> We should fix re-encryption so it does not require more ACLs than original 
> encryption.






[jira] [Commented] (HDFS-12377) Refactor TestReadStripedFileWithDecoding to avoid test timeouts

2017-08-30 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148354#comment-16148354
 ] 

Andrew Wang commented on HDFS-12377:


I think we're hitting the surefire timeout (15min) vs. the junit timeout (5min) 
here, since I'm not seeing a stack dump. Running this test class locally takes 
7.5 mins on my laptop, so we might need some further measures to avoid the 
timeout.

> Refactor TestReadStripedFileWithDecoding to avoid test timeouts
> ---
>
> Key: HDFS-12377
> URL: https://issues.apache.org/jira/browse/HDFS-12377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-12377.001.patch
>
>
> This test times out since the nested for loops mean it runs 12 
> configurations inside each test method.
> Let's refactor this to use JUnit parameters instead.






[jira] [Updated] (HDFS-12350) Support meta tags in configs

2017-08-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12350:
--
Description: 
We should tag the hadoop/hdfs config so that we can retrieve properties by 
their usage/application, like PERFORMANCE, NAMENODE, etc. Right now we don't 
have an option to group or list related properties together. Grouping 
properties through a restricted set of meta tags and then exposing them in the 
Configuration class will be useful for end users.
For example, here is a config file with tags.

{code}
<configuration>
  <property>
    <name>dfs.namenode.servicerpc-bind-host</name>
    <value>localhost</value>
    <tag>REQUIRED</tag>
  </property>

  <property>
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <value>1048576</value>
    <tag>PERFORMANCE,REQUIRED</tag>
  </property>

  <property>
    <name>dfs.namenode.logging.level</name>
    <value>Info</value>
    <tag>HDFS,DEBUG</tag>
  </property>
</configuration>
{code}

  was:
We should add a meta tag extension to the hadoop/hdfs config so that we can 
retrieve properties by various tags like PERFORMANCE, NAMENODE, etc. Right now 
we don't have an option to group or list properties related to performance, 
security, or datanodes. Grouping properties through a restricted set of meta 
tags and then exposing them in the Configuration class will be useful for end 
users.
For example, here is a config with meta tags.

{code}
<configuration>
  <property>
    <name>dfs.namenode.servicerpc-bind-host</name>
    <value>localhost</value>
    <tag>REQUIRED</tag>
  </property>

  <property>
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <value>1048576</value>
    <tag>PERFORMANCE,REQUIRED</tag>
  </property>

  <property>
    <name>dfs.namenode.logging.level</name>
    <value>Info</value>
    <tag>HDFS,DEBUG</tag>
  </property>
</configuration>
{code}


> Support meta tags in configs
> 
>
> Key: HDFS-12350
> URL: https://issues.apache.org/jira/browse/HDFS-12350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12350.01.patch, HDFS-12350.02.patch
>
>
> We should tag the hadoop/hdfs config so that we can retrieve properties by 
> their usage/application, like PERFORMANCE, NAMENODE, etc. Right now we don't 
> have an option to group or list related properties together. Grouping 
> properties through a restricted set of meta tags and then exposing them in 
> the Configuration class will be useful for end users.
> For example, here is a config file with tags.
> {code}
> <configuration>
>   <property>
>     <name>dfs.namenode.servicerpc-bind-host</name>
>     <value>localhost</value>
>     <tag>REQUIRED</tag>
>   </property>
>
>   <property>
>     <name>dfs.namenode.fs-limits.min-block-size</name>
>     <value>1048576</value>
>     <tag>PERFORMANCE,REQUIRED</tag>
>   </property>
>
>   <property>
>     <name>dfs.namenode.logging.level</name>
>     <value>Info</value>
>     <tag>HDFS,DEBUG</tag>
>   </property>
> </configuration>
> {code}
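
A hedged sketch of the intended consumer side; the accessor name below is an 
assumption about what this work would expose on Configuration, not a settled 
signature:

{code}
import java.util.Properties;

import org.apache.hadoop.conf.Configuration;

public class TagLookupSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumed accessor: look up every property carrying a given tag.
    Properties perf = conf.getAllPropertiesByTag("PERFORMANCE");
    for (String key : perf.stringPropertyNames()) {
      System.out.println(key + " = " + perf.getProperty(key));
    }
  }
}
{code}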






[jira] [Commented] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-08-30 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148330#comment-16148330
 ] 

Kai Zheng commented on HDFS-11882:
--

Traveling today. Please expect a slow response.



> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Andrew Wang
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.03.patch, HDFS-11882.04.patch, HDFS-11882.05.patch, 
> HDFS-11882.regressiontest.patch
>
>
> Some erasure coding tests fail with the following exception. The following 
> test was removed by HDFS-11823; however, this type of error can happen in a 
> real cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec  <<< ERROR!
> java.lang.IllegalStateException: null
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Commented] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-08-30 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148329#comment-16148329
 ] 

Andrew Wang commented on HDFS-11882:


Amazingly, I think these are all flakes.

HDFS-12360 tracks the TestLeaseRecoveryStriped failure.
I filed HDFS-12377 to fix the timeouts for TestReadStripedFileWithDecoding.
The various WithFailure tests failed on testBlockTokenExpired, which I can 
reproduce locally and which is showing up in other precommit runs; it's not 
the error we're trying to fix here.
TestDataNodeHotswap, TestPread, TestDirectoryScanner, and 
TestDataNodeVolumeFailureReporting are all existing flakies.

I can fix the checkstyle at commit time. Any other review comments?

> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Andrew Wang
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.03.patch, HDFS-11882.04.patch, HDFS-11882.05.patch, 
> HDFS-11882.regressiontest.patch
>
>
> Some erasure coding tests fail with the following exception. The following 
> test was removed by HDFS-11823; however, this type of error can happen in a 
> real cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec  <<< ERROR!
> java.lang.IllegalStateException: null
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Comment Edited] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-08-30 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148139#comment-16148139
 ] 

Yongjun Zhang edited comment on HDFS-12357 at 8/31/17 1:14 AM:
---

Thank you all for the review and comments!

[~atm]: good point, I will remove the static. Thanks.

[~daryn], thanks for your comments; some thoughts:
1. Deciding which attributes to reveal based on user/path is indeed more 
refined. However, it adds complexity, and every provider has to supply an 
implementation. Can you give an example where we would want to decide things 
based on the user/path combination?
2. Currently I use NameNode.getRemoteUser() to tell which user it is. If we put 
this bypass logic into the provider, the provider needs to know who the current 
user is; we would either have to change the provider API or add some new 
methods in parallel to pass the user information.

[~manojg], regarding having SnapshotDiff bypass the provider: the caller needs 
to tell the provider to do that, so a new API is needed. Right? Thanks.

Looking forward to your further thoughts and comments!

Thanks a lot.




was (Author: yzhangal):
Thank you all for the review and comments!

[~atm]: good point, I will remove the static. Thanks.

[~daryn], thanks for your comments; some thoughts:
1. Deciding which attributes to reveal based on user/path is indeed more 
refined. However, it adds complexity, and every provider has to supply an 
implementation. Can you give an example where we would want to decide things 
based on user/path?
2. Currently I use NameNode.getRemoteUser() to tell which user it is. If we put 
this bypass logic into the provider, the provider needs to know who the current 
user is; we would either have to change the provider API or add some new 
methods in parallel.

[~manojg], regarding having SnapshotDiff bypass the provider: the caller needs 
to tell the provider to do that, so a new API is needed. Right? Thanks.

Looking forward to your further thoughts and comments!

Thanks a lot.



> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is that when we do distcp from one cluster to another (or within 
> the same cluster), in addition to copying file data, we copy the metadata 
> from source to target. If an external attribute provider is enabled, the 
> metadata may be read from the provider, so provider data read from the 
> source may be saved to the target HDFS.
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs 
> -cp) operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is to introduce a new config that specifies a special user (or a 
> list of users) and let the NN bypass the external provider when the current 
> user is one of the special users.
> If we run applications that need data from the external attribute provider 
> as the special user, it won't work. So the constraint on this approach is 
> that the special users should not run applications that need data from the 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.
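
A minimal sketch of the proposed check; the user list is hypothetical, and the 
caller name would come from the existing {{NameNode.getRemoteUser()}} 
mentioned in the comments:

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class BypassProviderSketch {
  // Hypothetical configured list; the real patch would read this from a
  // new config key.
  static final Set<String> SPECIAL_USERS =
      new HashSet<>(Arrays.asList("hdfs-distcp"));

  // Stand-in for NameNode.getRemoteUser().getShortUserName().
  static boolean shouldBypassProvider(String callerShortName) {
    return SPECIAL_USERS.contains(callerShortName);
  }

  public static void main(String[] args) {
    System.out.println(shouldBypassProvider("hdfs-distcp")); // true
    System.out.println(shouldBypassProvider("alice"));       // false
  }
}
{code}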






[jira] [Commented] (HDFS-12377) Refactor TestReadStripedFileWithDecoding to avoid test timeouts

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148291#comment-16148291
 ] 

Hadoop QA commented on HDFS-12377:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 45s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 408 unchanged - 
3 fixed = 411 total (was 411) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 19 new + 5 unchanged - 2 fixed = 24 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestPipelines |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
|   | org.apache.hadoop.hdfs.TestReadStripedCorruptFileWithDecoding |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12377 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884564/HDFS-12377.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5b2d96ca9974 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HDFS-10787) libhdfs++: hdfs_configuration and configuration_loader should be accessible from our public API

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148269#comment-16148269
 ] 

Hadoop QA commented on HDFS-10787:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
48s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
32s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
18s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
45s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
30s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
4s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}350m  0s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_151. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}482m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_151 Failed CTEST tests | test_hdfs_ext_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3117e2a |
| JIRA Issue | HDFS-10787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884505/HDFS-10787.HDFS-8707.003.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux a7ff3df56804 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 5cee747 |
| Default Java | 1.7.0_151 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_144 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_151 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20929/artifact/patchprocess/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_151-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20929/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_151.txt
 |
| JDK v1.7.0_151  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20929/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20929/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: hdfs_configuration and 

[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-08-30 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148249#comment-16148249
 ] 

Kai Zheng commented on HDFS-7859:
-

We want to stabilize the API before beta 1, and that includes API names. I'm 
not so comfortable with methods like {{getEcPoliciesOnDir}}, because in most 
cases the existing code prefers {{EC}} or {{ErasureCoding}} over {{Ec}}.

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch
>
>
> In a meetup discussion with [~zhz] and [~jingzhao], it was suggested that we 
> persist EC schemas in the NameNode centrally and reliably, so that EC zones 
> can reference them by name efficiently.






[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-08-30 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148241#comment-16148241
 ] 

Kai Zheng commented on HDFS-7859:
-

bq. Are only the number of policies persisted? It looks off to me. It will 
depend on the order of system pre-defined policies. So when the pluggable EC 
policy work is merged, would that impact the correctness of loading/saving the 
fsimage? It might also make upgrade/downgrade difficult.
I agree with Eddy on this and have the same concern. We need to persist all 
the system policies and user-defined policies, including their info (name, id, 
cell size, and EC schema) along with their status (enabled/disabled, removed). 
We need to ensure all the persisted info can be used to 
recover/export/import/convert data and to handle upgrades and downgrades.

bq. Lei (Eddy) Xu mentioned upgrade and downgrade; it's a good question. Not 
only user-defined EC policies but also built-in EC policies will face this 
issue. The major problem is: if a codec is no longer supported after an 
upgrade or downgrade, how do we handle these EC policies in the new cluster, 
and how do we handle the files/directories that used these 
no-longer-supported policies?
It should be a rare case that an EC codec/coder/algorithm stops being 
supported and is removed from the code base. If a user adds a pluggable codec 
but then removes it from the binary, it's their call. So let's not worry about 
this at this time.

Let's focus on the basic use cases and requirements, and move on without 
getting overloaded.
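
To make the list above concrete, a sketch of the per-policy record that would 
need to survive persistence; field names are illustrative, and the real 
fsimage encoding would be protobuf-based rather than Java fields:

{code}
public class PersistedEcPolicySketch {
  enum PolicyState { ENABLED, DISABLED, REMOVED }

  // One record per system or user-defined policy: name, id, cell size,
  // schema (codec plus data/parity counts), and status.
  static final class EcPolicyRecord {
    final String name;
    final byte id;
    final int cellSize;
    final String codecName;
    final int numDataUnits;
    final int numParityUnits;
    final PolicyState state;

    EcPolicyRecord(String name, byte id, int cellSize, String codecName,
        int numDataUnits, int numParityUnits, PolicyState state) {
      this.name = name;
      this.id = id;
      this.cellSize = cellSize;
      this.codecName = codecName;
      this.numDataUnits = numDataUnits;
      this.numParityUnits = numParityUnits;
      this.state = state;
    }
  }

  public static void main(String[] args) {
    EcPolicyRecord rs63 = new EcPolicyRecord("RS-6-3-1024k", (byte) 1,
        1024 * 1024, "rs", 6, 3, PolicyState.ENABLED);
    System.out.println(rs63.name + " state=" + rs63.state);
  }
}
{code}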


> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch
>
>
> In a meetup discussion with [~zhz] and [~jingzhao], it was suggested that we 
> persist EC schemas in the NameNode centrally and reliably, so that EC zones 
> can reference them by name efficiently.






[jira] [Commented] (HDFS-12376) Enable JournalNode Sync by default

2017-08-30 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148223#comment-16148223
 ] 

Hanisha Koneru commented on HDFS-12376:
---

Hi [~jingzhao], please let me know your thoughts on this.

> Enable JournalNode Sync by default
> --
>
> Key: HDFS-12376
> URL: https://issues.apache.org/jira/browse/HDFS-12376
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12376.001.patch
>
>
> All the tasks related to JournalNode sync (HDFS-4025) - HDFS-11448, 
> HDFS-11877, HDFS-11878, HDFS-11879, HDFS-12224, HDFS-12356, and HDFS-12358 - 
> are resolved.






[jira] [Commented] (HDFS-12376) Enable JournalNode Sync by default

2017-08-30 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148221#comment-16148221
 ] 

Hanisha Koneru commented on HDFS-12376:
---

The test failures look unrelated.

> Enable JournalNode Sync by default
> --
>
> Key: HDFS-12376
> URL: https://issues.apache.org/jira/browse/HDFS-12376
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12376.001.patch
>
>
> All the tasks related to Journal Node sync (HDFS-4025)  - HDFS-11448, 
> HDFS-11877, HDFS-11878, HDFS-11879, HDFS-12224, HDFS-12356 and HDFS-12358 are 
> resolved. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9107) Prevent NN's unrecoverable death spiral after full GC

2017-08-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-9107:
--
Fix Version/s: 2.7.5
   2.9.0

Pushed to branch-2.7. Only a minor conflict in imports.
Updated Fix versions.

> Prevent NN's unrecoverable death spiral after full GC
> -
>
> Key: HDFS-9107
> URL: https://issues.apache.org/jira/browse/HDFS-9107
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1, 2.7.5
>
> Attachments: HDFS-9107.patch, HDFS-9107.patch
>
>
> A full GC pause in the NN that exceeds the dead node interval can lead to an 
> infinite cycle of full GCs.  The most common situation that precipitates an 
> unrecoverable state is a network issue that temporarily cuts off multiple 
> racks.
> The NN wakes up and falsely starts marking nodes dead. This bloats the 
> replication queues which increases memory pressure. The replications create a 
> flurry of incremental block reports and a glut of over-replicated blocks.
> The "dead" nodes heartbeat within seconds. The NN forces a re-registration 
> which requires a full block report - more memory pressure. The NN now has to 
> invalidate all the over-replicated blocks. The extra blocks are added to 
> invalidation queues, tracked in an excess blocks map, etc - much more memory 
> pressure.
> All the memory pressure can push the NN into another full GC which repeats 
> the entire cycle.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12377) Refactor TestReadStripedFileWithDecoding to avoid test timeouts

2017-08-30 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-12377:
--

 Summary: Refactor TestReadStripedFileWithDecoding to avoid test 
timeouts
 Key: HDFS-12377
 URL: https://issues.apache.org/jira/browse/HDFS-12377
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.0.0-alpha3
Reporter: Andrew Wang
Assignee: Andrew Wang


This test times out since the nested for loops mean it runs 12 configurations 
inside each test method.

Let's refactor this to use JUnit parameters instead.
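
For reference, a minimal sketch of the JUnit 4 {{Parameterized}} pattern such a 
refactor would use, with illustrative class and parameter names (each former 
loop combination becomes its own test case):
{code:java}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ReadStripedFileParameterizedSketch {
  @Parameters(name = "dnFailures={0}, posInBlock={1}")
  public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] {
        {0, 0}, {0, 1}, {1, 0}, {1, 1} // one row per former loop combination
    });
  }

  private final int dnFailures;
  private final int posInBlock;

  public ReadStripedFileParameterizedSketch(int dnFailures, int posInBlock) {
    this.dnFailures = dnFailures;
    this.posInBlock = posInBlock;
  }

  @Test
  public void testRead() {
    // each (dnFailures, posInBlock) pair now runs as a separate test case,
    // so a single slow combination cannot time out the whole method
  }
}
{code}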



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12377) Refactor TestReadStripedFileWithDecoding to avoid test timeouts

2017-08-30 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12377:
---
Attachment: HDFS-12377.001.patch

Patch attached.

> Refactor TestReadStripedFileWithDecoding to avoid test timeouts
> ---
>
> Key: HDFS-12377
> URL: https://issues.apache.org/jira/browse/HDFS-12377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-12377.001.patch
>
>
> This test times out since the nested for loops mean it runs 12 
> configurations inside each test method.
> Let's refactor this to use JUnit parameters instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12377) Refactor TestReadStripedFileWithDecoding to avoid test timeouts

2017-08-30 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12377:
---
Status: Patch Available  (was: Open)

> Refactor TestReadStripedFileWithDecoding to avoid test timeouts
> ---
>
> Key: HDFS-12377
> URL: https://issues.apache.org/jira/browse/HDFS-12377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-12377.001.patch
>
>
> This test times out since the nested for loops mean it runs 12 
> configurations inside each test method.
> Let's refactor this to use JUnit parameters instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8797) WebHdfsFileSystem creates too many connections for pread

2017-08-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-8797:
--
Fix Version/s: 2.7.5
   2.9.0

Pushed to branch-2.7. Only a minor conflict in TestWebHDFS.
Updated Fix versions.

> WebHdfsFileSystem creates too many connections for pread
> 
>
> Key: HDFS-8797
> URL: https://issues.apache.org/jira/browse/HDFS-8797
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1, 2.7.5
>
> Attachments: HDFS-8797.000.patch, HDFS-8797.001.patch, 
> HDFS-8797.002.patch, HDFS-8797.003.patch
>
>
> While running a test we found that WebHdfsFileSystem can create several 
> thousand connections when doing a position read of a 200MB file. For each 
> connection the client will connect to the DataNode again and the DataNode 
> will create a new DFSClient instance to handle the read request. This also 
> leads to several thousand {{getBlockLocations}} calls to the NameNode.
> The cause of the issue is that in {{FSInputStream#read(long, byte[], int, 
> int)}}, each time the inputstream reads some data, it seeks back to the old 
> position and resets its state to SEEK. Thus the next read will regenerate the 
> connection.
> {code}
>   public int read(long position, byte[] buffer, int offset, int length)
>       throws IOException {
>     synchronized (this) {
>       long oldPos = getPos();
>       int nread = -1;
>       try {
>         seek(position);
>         nread = read(buffer, offset, length);
>       } finally {
>         // seeking back to the old position resets the stream state to SEEK,
>         // so the next positional read re-creates the connection
>         seek(oldPos);
>       }
>       return nread;
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9153) Pretty-format the output for DFSIO

2017-08-30 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148138#comment-16148138
 ] 

Konstantin Shvachko commented on HDFS-9153:
---

Committed to branch-2.7 along with MAPREDUCE-6931.
Updated versions.

> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1, 2.7.5
>
> Attachments: HDFS-9153-v1.patch
>
>
> Ref. the following DFSIO output, I was surprised the test throughput was only 
> {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's used for 
> another purpose? For users, it may make more sense to report the throughput as 
> 1610 MB/s (1228800/763), calculated as *Total MBytes processed / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-08-30 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148139#comment-16148139
 ] 

Yongjun Zhang commented on HDFS-12357:
--

Thank you all for the review and comments!

[~atm]: good point, will remove the static. Thanks.

[~daryn], thanks for your comments, some thoughts:
1. Deciding what attributes to reveal based on user/path is indeed more 
fine-grained. However, it adds complexity, and every provider has to supply an 
implementation. Could you provide an example where we would want to decide 
based on user/path?
2. Currently I use NameNode.getRemoteUser() to tell which user it is. If we put 
this bypass logic into the provider, the provider needs to know who the current 
user is, so we either have to change the provider API or add some new methods 
in parallel.

[~manojg], regarding having SnapshotDiff bypass the provider: the caller needs to 
tell the provider to do that, thus a new API is needed. Right? Thanks.

Looking forward to your further thoughts and comments!

Thanks a lot.
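
For illustration, a rough sketch of the bypass described in point 2 above (the 
method shape and the config-driven {{bypassUsers}} set are hypothetical; only 
{{NameNode.getRemoteUser()}} is taken from the discussion):
{code:java}
// Hypothetical sketch of the proposed special-user bypass; not the actual patch.
private INodeAttributes getAttributesWithBypass(
    INodeAttributeProvider provider, Set<String> bypassUsers,
    String path, INodeAttributes defaultAttrs) throws IOException {
  String user = NameNode.getRemoteUser().getShortUserName();
  if (bypassUsers.contains(user)) {
    // Special user (e.g. the distcp user): skip the external provider and
    // return the attributes stored in HDFS itself.
    return defaultAttrs;
  }
  return provider.getAttributes(path, defaultAttrs);
}
{code}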



> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata may 
> be read from the provider; thus provider data read from the source may be saved 
> to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we want 
> to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is to introduce a new config that specifies a special user (or a 
> list of users) and let the NN bypass the external provider when the current 
> user is a special user.
> If applications that need data from the external attribute provider run as 
> the special user, they won't work. So the constraint of this approach 
> is that the special users here should not run applications that need data from 
> the external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9153) Pretty-format the output for DFSIO

2017-08-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-9153:
--
Fix Version/s: 2.7.5
   2.9.0

> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1, 2.7.5
>
> Attachments: HDFS-9153-v1.patch
>
>
> Ref. the following DFSIO output, I was surprised the test throughput was only 
> {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's used for 
> another purpose? For users, it may make more sense to report the throughput as 
> 1610 MB/s (1228800/763), calculated as *Total MBytes processed / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12373) Trying to construct Path for a file that has colon (":") throws IllegalArgumentException

2017-08-30 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148083#comment-16148083
 ] 

Chen Liang commented on HDFS-12373:
---

This seems to be intentional. {{DFSUtilClient.isValidName}} is called when 
creating a file, and this method's comment explicitly says:
{code}
  /**
   * Whether the pathname is valid.  Currently prohibits relative paths,
   * names which contain a ":" or "//", or other non-canonical paths.
   */
{code}
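
For illustration, a minimal reproduction of the reported behavior (per the 
description below; the exact exception message may vary):
{code:java}
import org.apache.hadoop.fs.Path;

public class ColonPathRepro {
  public static void main(String[] args) {
    try {
      // The child name "a:b" is parsed as if "a" were a URI scheme.
      Path p = new Path("file:/tmp", "a:b");
      System.out.println("Constructed: " + p);
    } catch (IllegalArgumentException e) {
      System.out.println("Rejected as reported: " + e);
    }
  }
}
{code}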

> Trying to construct Path for a file that has colon (":") throws 
> IllegalArgumentException
> 
>
> Key: HDFS-12373
> URL: https://issues.apache.org/jira/browse/HDFS-12373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vlad Rozov
>
> In case a file has colon in its name, org.apache.hadoop.fs.Path, can not be 
> constructed. For example, I have file "a:b" under /tmp and new 
> Path("file:/tmp", "a:b") throws IllegalArgumentException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12376) Enable JournalNode Sync by default

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148068#comment-16148068
 ] 

Hadoop QA commented on HDFS-12376:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestEncryptedTransfer |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12376 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884521/HDFS-12376.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 2500208333e5 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fd66a24 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 

[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-08-30 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148062#comment-16148062
 ] 

Manoj Govindassamy commented on HDFS-12357:
---

[~yzhangal],
  Here is another jira along similar lines - HDFS-12203 - 
INodeAttributesProvider#getAttributes() support for default/passthrough mode.

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata may 
> be read from the provider; thus provider data read from the source may be saved 
> to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we want 
> to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is to introduce a new config that specifies a special user (or a 
> list of users) and let the NN bypass the external provider when the current 
> user is a special user.
> If applications that need data from the external attribute provider run as 
> the special user, they won't work. So the constraint of this approach 
> is that the special users here should not run applications that need data from 
> the external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12376) Enable JournalNode Sync by default

2017-08-30 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12376:
--
Attachment: HDFS-12376.001.patch

> Enable JournalNode Sync by default
> --
>
> Key: HDFS-12376
> URL: https://issues.apache.org/jira/browse/HDFS-12376
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12376.001.patch
>
>
> All the tasks related to Journal Node sync (HDFS-4025)  - HDFS-11448, 
> HDFS-11877, HDFS-11878, HDFS-11879, HDFS-12224, HDFS-12356 and HDFS-12358 are 
> resolved. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12376) Enable JournalNode Sync by default

2017-08-30 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12376:
--
Status: Patch Available  (was: Open)

> Enable JournalNode Sync by default
> --
>
> Key: HDFS-12376
> URL: https://issues.apache.org/jira/browse/HDFS-12376
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12376.001.patch
>
>
> All the tasks related to Journal Node sync (HDFS-4025)  - HDFS-11448, 
> HDFS-11877, HDFS-11878, HDFS-11879, HDFS-12224, HDFS-12356 and HDFS-12358 are 
> resolved. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12376) Enable JournalNode Sync by default

2017-08-30 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-12376:
-

 Summary: Enable JournalNode Sync by default
 Key: HDFS-12376
 URL: https://issues.apache.org/jira/browse/HDFS-12376
 Project: Hadoop HDFS
  Issue Type: Task
  Components: hdfs
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


All the tasks related to Journal Node sync (HDFS-4025)  - HDFS-11448, 
HDFS-11877, HDFS-11878, HDFS-11879, HDFS-12224, HDFS-12356 and HDFS-12358 are 
resolved. 
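
For reference, a minimal sketch of what the change amounts to on the 
configuration side (the property name {{dfs.journalnode.enable.sync}} is 
assumed from the HDFS-11448 line of work; with this JIRA its default would 
flip to true):
{code:java}
// Sketch only: today sync must be enabled explicitly; this JIRA proposes
// making true the default, so this explicit opt-in becomes unnecessary.
Configuration conf = new HdfsConfiguration();
conf.setBoolean("dfs.journalnode.enable.sync", true);
{code}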



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12363) Possible NPE in BlockManager$StorageInfoDefragmenter#scanAndCompactStorages

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147765#comment-16147765
 ] 

Hadoop QA commented on HDFS-12363:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12363 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884154/HDFS-12363.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0c74973bd755 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a20e710 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20928/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20928/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20928/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Possible NPE in 

[jira] [Commented] (HDFS-12356) Unit test for JournalNode sync during Rolling Upgrade

2017-08-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147706#comment-16147706
 ] 

Hudson commented on HDFS-12356:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12276 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12276/])
HDFS-12356. Unit test for JournalNode sync during Rolling Upgrade. (arp: rev 
fd66a243bfffc8260bfd69058625d4d9509cafe6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeSync.java


> Unit test for JournalNode sync during Rolling Upgrade
> -
>
> Key: HDFS-12356
> URL: https://issues.apache.org/jira/browse/HDFS-12356
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0
>
> Attachments: HDFS-12356.001.patch, HDFS-12356.002.patch, 
> HDFS-12356.003.patch
>
>
> Adding unit tests for testing JN sync functionality during rolling upgrade of 
> NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-11294) libhdfs++: Segfault in HA failover if DNS lookup for both Namenodes fails

2017-08-30 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer reopened HDFS-11294:


Reopening.  Looks like this is a real issue that existing tests weren't 
hitting.  Given an HA cluster where both NN hostnames can't be resolved to any 
ip(v4) addresses, the rpc engine will try to dereference the first element of 
an empty vector while trying to determine which node to use for failover.

Have a fix, will post soon.  Want better test coverage to see if there are 
similar issues elsewhere. 

> libhdfs++: Segfault in HA failover if DNS lookup for both Namenodes fails
> -
>
> Key: HDFS-11294
> URL: https://issues.apache.org/jira/browse/HDFS-11294
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>
> Hit while doing more manual testing on HDFS-11028.
> The HANamenodeTracker takes an asio endpoint to figure out what endpoint on 
> the other node to try next during a failover.  This is done by passing the 
> element at index 0 of endpoints (a std::vector).
> When DNS fails, the endpoints vector for that node will be empty, so the 
> iterator returned by endpoints\[0\] is just a null pointer that gets 
> dereferenced and causes a segfault.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12356) Unit test for JournalNode sync during Rolling Upgrade

2017-08-30 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12356:
-
Summary: Unit test for JournalNode sync during Rolling Upgrade  (was: Unit 
test for JN sync during Rolling Upgrade)

> Unit test for JournalNode sync during Rolling Upgrade
> -
>
> Key: HDFS-12356
> URL: https://issues.apache.org/jira/browse/HDFS-12356
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0
>
> Attachments: HDFS-12356.001.patch, HDFS-12356.002.patch, 
> HDFS-12356.003.patch
>
>
> Adding unit tests for testing JN sync functionality during rolling upgrade of 
> NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12356) Unit test for JN sync during Rolling Upgrade

2017-08-30 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12356:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for the contribution [~hanishakoneru]

> Unit test for JN sync during Rolling Upgrade
> 
>
> Key: HDFS-12356
> URL: https://issues.apache.org/jira/browse/HDFS-12356
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0
>
> Attachments: HDFS-12356.001.patch, HDFS-12356.002.patch, 
> HDFS-12356.003.patch
>
>
> Adding unit tests for testing JN sync functionality during rolling upgrade of 
> NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12356) Unit test for JN sync during Rolling Upgrade

2017-08-30 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12356:
-
Component/s: ha

> Unit test for JN sync during Rolling Upgrade
> 
>
> Key: HDFS-12356
> URL: https://issues.apache.org/jira/browse/HDFS-12356
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0
>
> Attachments: HDFS-12356.001.patch, HDFS-12356.002.patch, 
> HDFS-12356.003.patch
>
>
> Adding unit tests for testing JN sync functionality during rolling upgrade of 
> NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12356) Unit test for JN sync during Rolling Upgrade

2017-08-30 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147640#comment-16147640
 ] 

Arpit Agarwal commented on HDFS-12356:
--

+1

The UT failures are clearly unrelated. I will commit the v3 patch shortly.

> Unit test for JN sync during Rolling Upgrade
> 
>
> Key: HDFS-12356
> URL: https://issues.apache.org/jira/browse/HDFS-12356
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12356.001.patch, HDFS-12356.002.patch, 
> HDFS-12356.003.patch
>
>
> Adding unit tests for testing JN sync functionality during rolling upgrade of 
> NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11069) Tighten the authorization of datanode RPC

2017-08-30 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147611#comment-16147611
 ] 

Erik Krogen commented on HDFS-11069:


Ah, thank you for the context, Kihwal. I am too new for that :)

> Tighten the authorization of datanode RPC
> -
>
> Key: HDFS-11069
> URL: https://issues.apache.org/jira/browse/HDFS-11069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, security
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-11069.patch
>
>
> The current implementation of {{checkSuperuserPrivilege()}} allows the 
> datanode user from any node to be recognized as a super user.  If one 
> datanode is compromised, the intruder can issue {{shutdownDatanode()}}, 
> {{evictWriters()}}, {{triggerBlockReport()}}, etc. against all other 
> datanodes. Although this does not expose stored data, it can cause service 
> disruptions.
> This needs to be tightened to allow only the local datanode user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12372) Document the impact of HDFS-11069 (Tighten the authorization of datanode RPC)

2017-08-30 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147612#comment-16147612
 ] 

Kihwal Lee commented on HDFS-12372:
---

As you can see from the code, issuing commands as an hdfs admin user still 
works. The change only affects the datanode user.

{code:java}
  /** Check whether the current user is in the superuser group. */
  private void checkSuperuserPrivilege() throws IOException, 
AccessControlException {
...
// Is this by the DN user itself?
assert dnUserName != null;
if (callerUgi.getUserName().equals(dnUserName)) {
  return;
}

// Is the user a member of the super group?
List<String> groups = Arrays.asList(callerUgi.getGroupNames());
if (groups.contains(supergroup)) {
  return;
}
// Not a superuser.
throw new AccessControlException();
  }
{code}

> Document the impact of HDFS-11069 (Tighten the authorization of datanode RPC)
> -
>
> Key: HDFS-12372
> URL: https://issues.apache.org/jira/browse/HDFS-12372
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> The idea of HDFS-11069 is good. But it seems to cause confusion for 
> administrators when they issue commands like hdfs diskbalancer, or hdfs 
> dfsadmin, because this change of behavior is not documented properly.
> I suggest we document a recommended way to kinit (e.g. kinit as 
> hdfs/ho...@host1.example.com, rather than h...@example.com), as well as 
> documenting a notice for running privileged DataNode commands in a Kerberized 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12350) Support meta tags in configs

2017-08-30 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147604#comment-16147604
 ] 

Ajay Kumar commented on HDFS-12350:
---

[~anu],[~xyao],[~arpitagarwal] Please review the patch.
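
For illustration, a rough sketch of how tagged properties could be retrieved 
once the extension lands (the accessor name {{getAllPropertiesByTag}} is an 
assumption here; the patch under review defines the actual API):
{code:java}
// Sketch only: load a tagged config and pull back everything marked PERFORMANCE.
Configuration conf = new Configuration();
conf.addResource("hdfs-site.xml");
Properties perfProps = conf.getAllPropertiesByTag("PERFORMANCE");
{code}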

> Support meta tags in configs
> 
>
> Key: HDFS-12350
> URL: https://issues.apache.org/jira/browse/HDFS-12350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12350.01.patch, HDFS-12350.02.patch
>
>
> We should add a meta tag extension to the hadoop/hdfs config so that we can 
> retrieve properties by various tags like PERFORMANCE, NAMENODE etc. Right now 
> we don't have an option available to group or list properties related to 
> performance, security or datanodes. Grouping properties through some 
> restricted set of meta tags and then exposing them in the Configuration class 
> will be useful for end users.
> For example, here is a config with meta tags:
> {code}
> <configuration>
>   <property>
>     <name>dfs.namenode.servicerpc-bind-host</name>
>     <value>localhost</value>
>     <tag>REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.fs-limits.min-block-size</name>
>     <value>1048576</value>
>     <tag>PERFORMANCE,REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.logging.level</name>
>     <value>Info</value>
>     <tag>HDFS, DEBUG</tag>
>   </property>
> </configuration>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12372) Document the impact of HDFS-11069 (Tighten the authorization of datanode RPC)

2017-08-30 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147594#comment-16147594
 ] 

Kihwal Lee commented on HDFS-12372:
---

You should not run the datanode as an hdfs superuser.  Many examples show "dn" as 
the datanode user, which is not a privileged user.  Some people also use 
"hadoop.security.auth_to_local" to map the dn user to the hdfs superuser. This 
is also not a good practice.  One compromised datanode would then allow 
superuser access to the whole hdfs cluster.



> Document the impact of HDFS-11069 (Tighten the authorization of datanode RPC)
> -
>
> Key: HDFS-12372
> URL: https://issues.apache.org/jira/browse/HDFS-12372
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> The idea of HDFS-11069 is good. But it seems to cause confusion for 
> administrators when they issue commands like hdfs diskbalancer, or hdfs 
> dfsadmin, because this change of behavior is not documented properly.
> I suggest we document a recommended way to kinit (e.g. kinit as 
> hdfs/ho...@host1.example.com, rather than h...@example.com), as well as 
> documenting a notice for running privileged DataNode commands in a Kerberized 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11069) Tighten the authorization of datanode RPC

2017-08-30 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147586#comment-16147586
 ] 

Kihwal Lee commented on HDFS-11069:
---

[~xkrogen]. Fixed. It was once a convention not to include never-released 
lines in the fix version field at the time of closing a jira. This is no longer 
the case.

[~jojochuang] In terms of user authorization, an hdfs superuser for one namenode 
should also be a superuser for the other namenode and the datanodes.  A datanode 
user shouldn't be a privileged user, and allowing one DN user to have admin 
permission on other DNs was giving it more privilege than needed.

> Tighten the authorization of datanode RPC
> -
>
> Key: HDFS-11069
> URL: https://issues.apache.org/jira/browse/HDFS-11069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, security
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-11069.patch
>
>
> The current implementation of {{checkSuperuserPrivilege()}} allows the 
> datanode user from any node to be recognized as a super user.  If one 
> datanode is compromised, the intruder can issue {{shutdownDatanode()}}, 
> {{evictWriters()}}, {{triggerBlockReport()}}, etc. against all other 
> datanodes. Although this does not expose stored data, it can cause service 
> disruptions.
> This needs to be tightened to allow only the local datanode user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-30 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12336:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.3
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2 and branch-2.8. Thank you, [~wchevreuil]!

> Listing encryption zones still fails when deleted EZ is not a direct child of 
> snapshottable directory
> -
>
> Key: HDFS-12336
> URL: https://issues.apache.org/jira/browse/HDFS-12336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3
>
> Attachments: HDFS-12336.001.patch, HDFS-12336.002.patch, 
> HDFS-12336.003.patch, HDFS-12336.004.patch, HDFS-12336-branch-2.001.patch
>
>
> The fix proposed on HDFS-11197 didn't cover the scenario where the deleted EZ 
> still under a snapshot is not a direct child of the snapshottable 
> directory.
> Here is the code snippet proposed on HDFS-11197 that avoids the error 
> reported by *hdfs crypto -listZones* when a deleted EZ is still under a given 
> snapshot:
> {noformat}
>   INode lastINode = null;
>   if (inode.getParent() != null || inode.isRoot()) {
>     // resolve the path only for inodes still attached to the directory tree
>     INodesInPath iip = dir.getINodesInPath(pathName, DirOp.READ_LINK);
>     lastINode = iip.getLastINode();
>   }
>   if (lastINode == null || lastINode.getId() != ezi.getINodeId()) {
>     continue; // skip EZs whose inode no longer resolves at this path
>   }
> {noformat} 
> It ignores an EZ that is a direct child of a snapshot, because the EZ's parent 
> inode will be null and it isn't the root inode. However, if the EZ is not 
> directly under the snapshottable directory, its parent will not be null, so it 
> will pass this check and then fail later with the *absolute path required* 
> validation error.
> I would like to work on a fix that also covers this scenario.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12372) Document the impact of HDFS-11069 (Tighten the authorization of datanode RPC)

2017-08-30 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12372:
---
Target Version/s: 2.9.0, 2.8.2, 3.0.0

> Document the impact of HDFS-11069 (Tighten the authorization of datanode RPC)
> -
>
> Key: HDFS-12372
> URL: https://issues.apache.org/jira/browse/HDFS-12372
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> The idea of HDFS-11069 is good. But it seems to cause confusion for 
> administrators when they issue commands like hdfs diskbalancer, or hdfs 
> dfsadmin, because this change of behavior is not documented properly.
> I suggest we document a recommended way to kinit (e.g. kinit as 
> hdfs/ho...@host1.example.com, rather than h...@example.com), as well as 
> documenting a notice for running privileged DataNode commands in a Kerberized 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12372) Document the impact of HDFS-11069 (Tighten the authorization of datanode RPC)

2017-08-30 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12372:
---
Affects Version/s: 2.9.0
   2.8.0
   2.7.4
   3.0.0-alpha2

> Document the impact of HDFS-11069 (Tighten the authorization of datanode RPC)
> -
>
> Key: HDFS-12372
> URL: https://issues.apache.org/jira/browse/HDFS-12372
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> The idea of HDFS-11069 is good. But it seems to cause confusion for 
> administrators when they issue commands like hdfs diskbalancer, or hdfs 
> dfsadmin, because this change of behavior is not documented properly.
> I suggest we document a recommended way to kinit (e.g. kinit as 
> hdfs/ho...@host1.example.com, rather than h...@example.com), as well as 
> documenting a notice for running privileged DataNode commands in a Kerberized 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10787) libhdfs++: hdfs_configuration and configuration_loader should be accessible from our public API

2017-08-30 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10787:
-
Attachment: HDFS-10787.HDFS-8707.003.patch

Thanks for the review, [~James C].

* In the new patch I added logic for reading multiple directories from 
HADOOP_CONF_DIR, and for logging warnings during validation if some resources 
in them are missing.
* The examples now reuse the file system connection code from tools_common.h.
* This patch also includes a flag BUILD_SHARED_HDFSPP, which is set to TRUE by 
default but can be set to FALSE to prevent the shared library from being built 
(this is needed for building LIBHDFSPP as part of an external project like ORC).
* This patch also includes a number of fixes for warnings that arise when 
building libhdfspp with the OS X clang compiler; the addressed warnings include 
extra semicolons, unused variables, missing override keywords, implicit 
conversions, and copy elision warnings.

> libhdfs++: hdfs_configuration and configuration_loader should be accessible 
> from our public API
> ---
>
> Key: HDFS-10787
> URL: https://issues.apache.org/jira/browse/HDFS-10787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: James Clampffer
> Attachments: HDFS-10787.HDFS-8707.000.patch, 
> HDFS-10787.HDFS-8707.001.patch, HDFS-10787.HDFS-8707.002.patch, 
> HDFS-10787.HDFS-8707.003.patch
>
>
> Currently, libhdfspp examples and tools all have this:
> #include "hdfspp/hdfspp.h"
> #include "common/hdfs_configuration.h"
> #include "common/configuration_loader.h"
> This is done in order to read configs and connect. We want 
> hdfs_configuration and configuration_loader to be accessible just by 
> including our hdfspp.h. One way to achieve that would be to create a builder 
> that would include the above libraries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10234) DistCp log output should contain copied and deleted files and directories

2017-08-30 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147553#comment-16147553
 ] 

Xiaoyu Yao commented on HDFS-10234:
---

Thanks [~linyiqun] for the update. Patch v5 looks good to me. +1. 
I will hold off the commit until Friday (9/1) in case 
[~k.shaposhni...@gmail.com] and/or other folks have additional comments. 


> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HDFS-10234
> URL: https://issues.apache.org/jira/browse/HDFS-10234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: HDFS-10234.001.patch, HDFS-10234.002.patch, 
> HDFS-10234.003.patch, HDFS-10234.004.patch, HDFS-10234.005.patch
>
>
> DistCp log output (specified via the {{-log}} command line option) currently 
> contains only skipped and failed (when failures are ignored via {{-i}}) files.
> It would be more useful if it also contained copied and deleted files and 
> created directories.
> This should be fixed in 
> https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11069) Tighten the authorization of datanode RPC

2017-08-30 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-11069:
--
Fix Version/s: 2.8.0
   2.9.0

> Tighten the authorization of datanode RPC
> -
>
> Key: HDFS-11069
> URL: https://issues.apache.org/jira/browse/HDFS-11069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, security
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-11069.patch
>
>
> The current implementation of {{checkSuperuserPrivilege()}} allows the 
> datanode user from any node to be recognized as a super user.  If one 
> datanode is compromised, the intruder can issue {{shutdownDatanode()}}, 
> {{evictWriters()}}, {{triggerBlockReport()}}, etc. against all other 
> datanodes. Although this does not expose stored data, it can cause service 
> disruptions.
> This needs to be tightened to allow only the local datanode user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12350) Support meta tags in configs

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147523#comment-16147523
 ] 

Hadoop QA commented on HDFS-12350:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 250 unchanged - 1 fixed = 252 total (was 251) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12350 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884491/HDFS-12350.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 88223ea32630 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9992675 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20927/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20927/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20927/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support meta tags in configs
> 
>
> Key: HDFS-12350
> URL: https://issues.apache.org/jira/browse/HDFS-12350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12350.01.patch, HDFS-12350.02.patch
>
>
> We should add a meta tag extension to the hadoop/hdfs config 

[jira] [Commented] (HDFS-11069) Tighten the authorization of datanode RPC

2017-08-30 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147517#comment-16147517
 ] 

Wei-Chiu Chuang commented on HDFS-11069:


Hi [~kihwal], I'm just curious: for security reasons, should the NameNode 
tighten its RPC authorization as well? Is there any reason not to? One reason 
might be NameNode HA, but I wonder if there are other rationales too. Thanks.
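
As a side note for readers, here is a minimal sketch of what "allow only the 
local datanode user" could look like on the datanode side. This is only an 
illustration, not the committed patch; the {{Server}}, 
{{UserGroupInformation}} and JDK calls are real APIs, everything else here is 
hypothetical.

{code}
// Sketch only, not the committed HDFS-11069 fix: treat a caller as a
// superuser only if it is the datanode's own user calling from this host.
import java.net.InetAddress;
import java.net.NetworkInterface;
import org.apache.hadoop.ipc.Server;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

class LocalDatanodeAuthSketch {
  static void checkLocalDatanodePrivilege() throws Exception {
    UserGroupInformation caller = Server.getRemoteUser();
    InetAddress remote = Server.getRemoteIp();
    boolean sameUser = caller != null && caller.getShortUserName()
        .equals(UserGroupInformation.getLoginUser().getShortUserName());
    // Local if loopback, or if the address is bound to one of our NICs.
    boolean local = remote != null && (remote.isLoopbackAddress()
        || NetworkInterface.getByInetAddress(remote) != null);
    if (!sameUser || !local) {
      throw new AccessControlException(
          "Superuser privilege is limited to the local datanode user");
    }
  }
}
{code}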

> Tighten the authorization of datanode RPC
> -
>
> Key: HDFS-11069
> URL: https://issues.apache.org/jira/browse/HDFS-11069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, security
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-11069.patch
>
>
> The current implementation of {{checkSuperuserPrivilege()}} allows the 
> datanode user from any node to be recognized as a super user.  If one 
> datanode is compromised, the intruder can issue {{shutdownDatanode()}}, 
> {{evictWriters()}}, {{triggerBlockReport()}}, etc. against all other 
> datanodes. Although this does not expose stored data, it can cause service 
> disruptions.
> This needs to be tightened to allow only the local datanode user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12100) Ozone: KSM: Allocate key should honour volume quota if quota is set on the volume

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147445#comment-16147445
 ] 

Hadoop QA commented on HDFS-12100:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project: The patch generated 9 new + 
18 unchanged - 1 fixed = 27 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12100 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884471/HDFS-12100-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux f39bcc4896ae 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / b23c267 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 

[jira] [Commented] (HDFS-12339) NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper

2017-08-30 Thread Ruslan Dautkhanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147443#comment-16147443
 ] 

Ruslan Dautkhanov commented on HDFS-12339:
--

We think this might be the root cause of an nfs-client hang we sometimes see 
when the hdfs nfs gateway stops.
Timeouts don't help, because an nfs client can see that the rpc service for 
nfs is up even though there is no live service behind it anymore; a zombie 
rpc registration, so to speak. We are not entirely certain that this is the 
root cause, but it would be great to have it fixed anyway.

> NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister 
> with rpcbind Portmapper
> -
>
> Key: HDFS-12339
> URL: https://issues.apache.org/jira/browse/HDFS-12339
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Mukul Kumar Singh
>
> When stopping NFS Gateway the following error is thrown in the NFS gateway 
> role logs.
> 2017-08-17 18:09:16,529 ERROR org.apache.hadoop.oncrpc.RpcProgram: 
> Unregistration failure with localhost:2049, portmap entry: 
> (PortmapMapping-13:3:6:2049)
> 2017-08-17 18:09:16,531 WARN org.apache.hadoop.util.ShutdownHookManager: 
> ShutdownHook 'NfsShutdownHook' failed, java.lang.RuntimeException: 
> Unregistration failure
> java.lang.RuntimeException: Unregistration failure
> ..
> Caused by: java.net.SocketException: Socket is closed
> at java.net.DatagramSocket.send(DatagramSocket.java:641)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62)
> Checking rpcinfo -p : the following entry is still there:
> " 13 3 tcp 2049 nfs"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12350) Support meta tags in configs

2017-08-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12350:
--
Attachment: HDFS-12350.02.patch

> Support meta tags in configs
> 
>
> Key: HDFS-12350
> URL: https://issues.apache.org/jira/browse/HDFS-12350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12350.01.patch, HDFS-12350.02.patch
>
>
> We should add a meta tag extension to the hadoop/hdfs config so that we can 
> retrieve properties by various tags like PERFORMANCE, NAMENODE etc. Right now 
> we don't have an option to group or list properties related to performance, 
> security, or datanodes. Grouping properties through some restricted set of 
> meta tags and then exposing them in the Configuration class will be useful 
> for end users.
> For example, here is a config with meta tags.
> {code}
> <configuration>
>   <property>
>     <name>dfs.namenode.servicerpc-bind-host</name>
>     <value>localhost</value>
>     <tag>REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.fs-limits.min-block-size</name>
>     <value>1048576</value>
>     <tag>PERFORMANCE,REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.logging.level</name>
>     <value>Info</value>
>     <tag>HDFS, DEBUG</tag>
>   </property>
> </configuration>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-30 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147308#comment-16147308
 ] 

Wellington Chevreuil commented on HDFS-12336:
-

Had a look at the last build's failed tests; I don't think any of them are 
related to the code changes. They are also passing in my local build.

> Listing encryption zones still fails when deleted EZ is not a direct child of 
> snapshottable directory
> -
>
> Key: HDFS-12336
> URL: https://issues.apache.org/jira/browse/HDFS-12336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12336.001.patch, HDFS-12336.002.patch, 
> HDFS-12336.003.patch, HDFS-12336.004.patch, HDFS-12336-branch-2.001.patch
>
>
> The fix proposed in HDFS-11197 didn't cover the scenario where a deleted EZ 
> that is still under a snapshot is not a direct child of the snapshottable 
> directory.
> Here is the code snippet proposed in HDFS-11197 that would avoid the error 
> reported by *hdfs crypto -listZones* when a deleted EZ is still under a given 
> snapshot:
> {noformat}
>   INode lastINode = null;
>   if (inode.getParent() != null || inode.isRoot()) {
> INodesInPath iip = dir.getINodesInPath(pathName, DirOp.READ_LINK);
> lastINode = iip.getLastINode();
>   }
>   if (lastINode == null || lastINode.getId() != ezi.getINodeId()) {
> continue;
>   }
> {noformat} 
> It will ignore an EZ when it is a direct child of a snapshot, because its 
> parent inode will be null and it isn't the root inode. However, if the EZ is 
> not directly under the snapshottable directory, its parent will not be null, 
> so it will pass this check and then fail further on due to the *absolute 
> path required* validation error.
> I would like to work on a fix that would also cover this scenario.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-08-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-12235:
--

Assignee: Weiwei Yang  (was: Yuanbo Liu)

> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch
>
>
> KSM and SCM interaction for the delete-key operation: both KSM and SCM store 
> key state info in a backlog. KSM needs to scan this log and send 
> block-deletion commands to SCM; once SCM is fully aware of the message, KSM 
> removes the key completely from the namespace. See more in the design doc 
> under HDFS-11922; this is task breakdown 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-08-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147261#comment-16147261
 ] 

Weiwei Yang commented on HDFS-12235:


Hi [~yuanbo]

Thanks for the updated patch. Your change mostly makes sense, but since this 
is the last piece of key deletion, we should be able to do an end-to-end test 
in a UT. Since you have been busy recently, I am taking this over and will 
work on a new patch with such tests. I will also do some real-cluster testing 
with this. This JIRA will still be credited to you.
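
To make the flow concrete, here is a rough sketch of the scan/ack/purge loop 
the tests need to cover. All types below are hypothetical, not the real 
KSM/SCM interfaces; it only pins down the ordering from the design doc.

{code}
// Hypothetical sketch: ask SCM to delete a key's blocks first, and only
// drop the key from the KSM namespace once SCM has acked the message.
import java.util.Iterator;
import java.util.List;
import java.util.Map;

class DeleteKeyFlowSketch {
  interface ScmBlockClient {
    /** Returns true once SCM has durably recorded the block deletions. */
    boolean deleteBlocks(List<String> blockIds);
  }

  static void scanDeleteBacklog(Map<String, List<String>> backlog,
      ScmBlockClient scm) {
    Iterator<Map.Entry<String, List<String>>> it =
        backlog.entrySet().iterator();
    while (it.hasNext()) {
      Map.Entry<String, List<String>> entry = it.next(); // key -> blocks
      if (scm.deleteBlocks(entry.getValue())) {
        it.remove(); // SCM acked: the key leaves the namespace for good
      }
      // else: keep the entry; the next backlog scan retries it.
    }
  }
}
{code}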

Thanks a lot.

> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch
>
>
> KSM and SCM interaction for the delete-key operation: both KSM and SCM store 
> key state info in a backlog. KSM needs to scan this log and send 
> block-deletion commands to SCM; once SCM is fully aware of the message, KSM 
> removes the key completely from the namespace. See more in the design doc 
> under HDFS-11922; this is task breakdown 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-08-30 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147258#comment-16147258
 ] 

Daryn Sharp commented on HDFS-12357:


Should this perhaps be implemented in the external attribute provider itself?  
Instead of an all-or-nothing approach, that would grant the provider 
fine-grained authorization control over which combinations of users and paths 
expose the "real" attrs.
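
To illustrate the idea, here is a sketch built on the 
{{INodeAttributeProvider}} extension point; the bypass set, how the caller 
identity is obtained, and the external lookup below are all hypothetical:

{code}
import java.io.IOException;
import java.util.Set;
import org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider;
import org.apache.hadoop.hdfs.server.namenode.INodeAttributes;
import org.apache.hadoop.security.UserGroupInformation;

public class SelectiveAttributeProvider extends INodeAttributeProvider {
  private Set<String> bypassUsers; // hypothetical: loaded in start()

  @Override public void start() { /* load bypassUsers from provider config */ }
  @Override public void stop() { }

  @Override
  public INodeAttributes getAttributes(String[] pathElements,
      INodeAttributes inode) {
    try {
      // Hedged: the exact way to obtain the caller may differ in the NN.
      String user = UserGroupInformation.getCurrentUser().getShortUserName();
      if (bypassUsers.contains(user)) {
        return inode; // expose the "real" HDFS attributes to this caller
      }
    } catch (IOException e) {
      // fall through to the external attributes if user lookup fails
    }
    return lookupExternal(pathElements, inode);
  }

  // Hypothetical stand-in for the provider's real external lookup.
  private INodeAttributes lookupExternal(String[] pathElements,
      INodeAttributes inode) {
    return inode;
  }
}
{code}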

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata 
> may be read from the provider, and thus provider data read from the source 
> may be saved to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is one of the special users.
> If we run applications that need data from the external attribute provider 
> as the special user, they won't work. So the constraint of this approach is 
> that the special users should not run applications that need data from the 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12100) Ozone: KSM: Allocate key should honour volume quota if quota is set on the volume

2017-08-30 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-12100:
---
Status: Patch Available  (was: Open)

> Ozone: KSM: Allocate key should honour volume quota if quota is set on the 
> volume
> -
>
> Key: HDFS-12100
> URL: https://issues.apache.org/jira/browse/HDFS-12100
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
> Fix For: HDFS-7240
>
> Attachments: HDFS-12100-HDFS-7240.001.patch
>
>
> KeyManagerImpl#allocateKey currently does not check the volume quota before 
> allocating a key, which can cause the volume quota to be overrun.
> The volume quota needs to be checked before allocating the key in the SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12100) Ozone: KSM: Allocate key should honour volume quota if quota is set on the volume

2017-08-30 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-12100:
---
Attachment: HDFS-12100-HDFS-7240.001.patch

This patch adds functionality for the volume quota to be honoured across key 
allocation and deletion. For this, a "sizeInBytes" field is introduced in 
org.apache.hadoop.ozone.protocol.proto.KeySpaceManagerProtocolProtos.VolumeInfo.
Two configuration properties have been added for configuring the size and 
units of the volume quota. Further, key allocation now raises an exception if 
the key size is negative, and key deletion raises an exception if the new 
volume size after deletion would be negative.
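
In sketch form, the check being added looks roughly like this (method and 
parameter names are hypothetical, not the exact patch code):

{code}
// Sketch of the quota check: refuse negative key sizes, and refuse an
// allocation that would push usage past the volume quota.
static void checkVolumeQuota(long quotaInBytes, long usedInBytes,
    long requestedKeySize) throws java.io.IOException {
  if (requestedKeySize < 0) {
    throw new java.io.IOException("Invalid key size: " + requestedKeySize);
  }
  // Assumption for this sketch: quotaInBytes <= 0 means "no quota set".
  if (quotaInBytes > 0 && usedInBytes + requestedKeySize > quotaInBytes) {
    throw new java.io.IOException("Volume quota exceeded: used="
        + usedInBytes + ", requested=" + requestedKeySize
        + ", quota=" + quotaInBytes);
  }
}
{code}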

> Ozone: KSM: Allocate key should honour volume quota if quota is set on the 
> volume
> -
>
> Key: HDFS-12100
> URL: https://issues.apache.org/jira/browse/HDFS-12100
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
> Fix For: HDFS-7240
>
> Attachments: HDFS-12100-HDFS-7240.001.patch
>
>
> KeyManagerImpl#allocateKey currently does not check the volume quota before 
> allocating a key, which can cause the volume quota to be overrun.
> The volume quota needs to be checked before allocating the key in the SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147236#comment-16147236
 ] 

Hadoop QA commented on HDFS-11799:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
568 unchanged - 0 fixed = 572 total (was 568) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\

[jira] [Commented] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147130#comment-16147130
 ] 

Hadoop QA commented on HDFS-12336:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
22s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestEncryptedTransfer |
| JDK v1.8.0_144 Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestEncryptedTransfer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HDFS-12336 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-10234) DistCp log output should contain copied and deleted files and directories

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147119#comment-16147119
 ] 

Hadoop QA commented on HDFS-10234:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
9s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10234 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884459/HDFS-10234.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b36ed5c9b44c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 200b113 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20925/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20925/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HDFS-10234
> URL: https://issues.apache.org/jira/browse/HDFS-10234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: HDFS-10234.001.patch, HDFS-10234.002.patch, 
> HDFS-10234.003.patch, HDFS-10234.004.patch, HDFS-10234.005.patch
>
>
> DistCp log output (specified via {{-log}} command line option) currently 
> contains only skipped 

[jira] [Updated] (HDFS-10234) DistCp log output should contain copied and deleted files and directories

2017-08-30 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10234:
-
Attachment: HDFS-10234.005.patch

The failed test is related. Attaching a new patch to fix it.

> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HDFS-10234
> URL: https://issues.apache.org/jira/browse/HDFS-10234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: HDFS-10234.001.patch, HDFS-10234.002.patch, 
> HDFS-10234.003.patch, HDFS-10234.004.patch, HDFS-10234.005.patch
>
>
> DistCp log output (specified via {{-log}} command line option) currently 
> contains only skipped and failed (when failures are ignored via {{-i}}) files.
> It will be more useful if it also contains copied and deleted files and 
> created directories.
> This should be fixed in 
> https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-08-30 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11799:

Attachment: HDFS-11799-005.patch

Uploaded a patch to address the above comments. The config is now named 
"dfs.client.block.write.replace-datanode-on-failure.min.replication" to be 
consistent with the other replace-datanode-on-failure properties.
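
For anyone trying the patch out, the property would go into hdfs-site.xml 
along these lines (illustrative only; the default value and exact semantics 
are whatever the patch defines):

{code}
<!-- Illustrative example: let the client set up or continue a write
     pipeline as long as at least this many datanodes are available. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.min.replication</name>
  <value>1</value>
</property>
{code}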

> Introduce a config to allow setting up write pipeline with fewer nodes than 
> replication factor
> --
>
> Key: HDFS-11799
> URL: https://issues.apache.org/jira/browse/HDFS-11799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11799-002.patch, HDFS-11799-003.patch, 
> HDFS-11799-004.patch, HDFS-11799-005.patch, HDFS-11799.patch
>
>
> During pipeline recovery, if not enough DNs can be found and 
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline continue even if there is only a single DN.
> Similarly, when we create the write pipeline initially, if for some reason we 
> can't find enough DNs, we can have a similar config to enable writing with a 
> single DN.
> More study will be done.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10234) DistCp log output should contain copied and deleted files and directories

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147051#comment-16147051
 ] 

Hadoop QA commented on HDFS-10234:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 15s{color} 
| {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestDistCpOptions |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10234 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884452/HDFS-10234.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4bfdb7a09512 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 200b113 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20923/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20923/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20923/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HDFS-10234
> URL: https://issues.apache.org/jira/browse/HDFS-10234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: 

[jira] [Commented] (HDFS-10234) DistCp log output should contain copied and deleted files and directories

2017-08-30 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147009#comment-16147009
 ] 

Yiqun Lin commented on HDFS-10234:
--

Thanks for the review, [~xyao]!
Attaching the updated patch to address your comments.

> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HDFS-10234
> URL: https://issues.apache.org/jira/browse/HDFS-10234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: HDFS-10234.001.patch, HDFS-10234.002.patch, 
> HDFS-10234.003.patch, HDFS-10234.004.patch
>
>
> DistCp log output (specified via {{-log}} command line option) currently 
> contains only skipped and failed (when failures are ignored via {{-i}}) files.
> It will be more useful if it also contains copied and deleted files and 
> created directories.
> This should be fixed in 
> https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10234) DistCp log output should contain copied and deleted files and directories

2017-08-30 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10234:
-
Attachment: HDFS-10234.004.patch

> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HDFS-10234
> URL: https://issues.apache.org/jira/browse/HDFS-10234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: HDFS-10234.001.patch, HDFS-10234.002.patch, 
> HDFS-10234.003.patch, HDFS-10234.004.patch
>
>
> DistCp log output (specified via {{-log}} command line option) currently 
> contains only skipped and failed (when failures are ignored via {{-i}}) files.
> It will be more useful if it also contains copied and deleted files and 
> created directories.
> This should be fixed in 
> https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12336) Listing encryption zones still fails when deleted EZ is not a direct child of snapshottable directory

2017-08-30 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12336:

Attachment: HDFS-12336-branch-2.001.patch

Thanks a lot, [~xiaochen], for the help here. Attached is a patch for branch-2.

> Listing encryption zones still fails when deleted EZ is not a direct child of 
> snapshottable directory
> -
>
> Key: HDFS-12336
> URL: https://issues.apache.org/jira/browse/HDFS-12336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12336.001.patch, HDFS-12336.002.patch, 
> HDFS-12336.003.patch, HDFS-12336.004.patch, HDFS-12336-branch-2.001.patch
>
>
> The fix proposed in HDFS-11197 didn't cover the scenario where a deleted EZ 
> that is still under a snapshot is not a direct child of the snapshottable 
> directory.
> Here is the code snippet proposed in HDFS-11197 that would avoid the error 
> reported by *hdfs crypto -listZones* when a deleted EZ is still under a given 
> snapshot:
> {noformat}
>   INode lastINode = null;
>   if (inode.getParent() != null || inode.isRoot()) {
> INodesInPath iip = dir.getINodesInPath(pathName, DirOp.READ_LINK);
> lastINode = iip.getLastINode();
>   }
>   if (lastINode == null || lastINode.getId() != ezi.getINodeId()) {
> continue;
>   }
> {noformat} 
> It will ignore an EZ when it is a direct child of a snapshot, because its 
> parent inode will be null and it isn't the root inode. However, if the EZ is 
> not directly under the snapshottable directory, its parent will not be null, 
> so it will pass this check and then fail further on due to the *absolute 
> path required* validation error.
> I would like to work on a fix that would also cover this scenario.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-08-30 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146892#comment-16146892
 ] 

SammiChen commented on HDFS-7859:
-

[~eddyxu] mentioned upgrade and downgrade; it's a good question. Not only 
user-defined ec policies but also built-in ec policies will face this issue. 
The major problem is: if a codec is no longer supported after an upgrade or 
downgrade, how do we handle ec policies of that type in the new cluster, and 
how do we handle the files/directories that used these no-longer-supported 
policies? I tend to keep these ec policies in {{ErasureCodingPolicyManager}} 
while marking them as obsolete, and also to load the affected 
files/directories into the namespace so that users can still go through the 
tree structure (reading the files cannot be supported). I will do some 
experiments to see if this works.
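
In sketch form, the policy load path could look like the following; 
{{isCodecSupported}} and {{markObsolete}} are hypothetical helpers, and this 
only pins down the idea, not an eventual patch:

{code}
// Hypothetical sketch: keep a persisted ec policy even when its codec is
// unavailable after an upgrade/downgrade, but mark it obsolete so the
// namespace still loads while reads and new writes can fail fast.
void loadPersistedPolicy(ErasureCodingPolicy policy) {
  if (isCodecSupported(policy.getCodecName())) { // hypothetical helper
    addPolicy(policy);
  } else {
    markObsolete(policy); // hypothetical "obsolete" marking
    // Files/directories using the policy are still loaded so users can
    // traverse the tree; reading their data cannot be supported.
  }
}
{code}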
Any other suggestions are welcome. [~andrew.wang], [~drankye], [~eddyxu]

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12339) NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper

2017-08-30 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDFS-12339:


Assignee: Mukul Kumar Singh

> NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister 
> with rpcbind Portmapper
> -
>
> Key: HDFS-12339
> URL: https://issues.apache.org/jira/browse/HDFS-12339
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Mukul Kumar Singh
>
> When stopping NFS Gateway the following error is thrown in the NFS gateway 
> role logs.
> 2017-08-17 18:09:16,529 ERROR org.apache.hadoop.oncrpc.RpcProgram: 
> Unregistration failure with localhost:2049, portmap entry: 
> (PortmapMapping-13:3:6:2049)
> 2017-08-17 18:09:16,531 WARN org.apache.hadoop.util.ShutdownHookManager: 
> ShutdownHook 'NfsShutdownHook' failed, java.lang.RuntimeException: 
> Unregistration failure
> java.lang.RuntimeException: Unregistration failure
> ..
> Caused by: java.net.SocketException: Socket is closed
> at java.net.DatagramSocket.send(DatagramSocket.java:641)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62)
> Checking rpcinfo -p : the following entry is still there:
> " 13 3 tcp 2049 nfs"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12258) ec -listPolicies should list all policies in system, no matter it's enabled or disabled

2017-08-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146844#comment-16146844
 ] 

Hudson commented on HDFS-12258:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12272 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12272/])
HDFS-12258. ec -listPolicies should list all policies in system, no (rakeshr: 
rev 200b11368d3954138a9bce128c8fa763b4a503a1)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStripedINodeFile.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicyState.java


> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled
> ---
>
> Key: HDFS-12258
> URL: https://issues.apache.org/jira/browse/HDFS-12258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: Wei Zhou
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12258.01.patch, HDFS-12258.02.patch, 
> HDFS-12258.03.patch, HDFS-12258.04.patch, HDFS-12258.05.patch, 
> HDFS-12258.06.patch, HDFS-12258-07.patch, HDFS-12258.07.patch
>
>
> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12350) Support meta tags in configs

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146836#comment-16146836
 ] 

Hadoop QA commented on HDFS-12350:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 250 unchanged - 1 fixed = 257 total (was 251) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 46s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestShellBasedUnixGroupsMapping |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12350 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884416/HDFS-12350.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ee5394921f75 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 32cba6c |
| Default Java | 1.8.0_144 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20921/artifact/patchprocess/branch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20921/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20921/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20921/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20921/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20921/console |
| 

[jira] [Updated] (HDFS-12258) ec -listPolicies should list all policies in system, no matter it's enabled or disabled

2017-08-30 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-12258:

Attachment: HDFS-12258-07.patch

> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled
> ---
>
> Key: HDFS-12258
> URL: https://issues.apache.org/jira/browse/HDFS-12258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: Wei Zhou
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12258.01.patch, HDFS-12258.02.patch, 
> HDFS-12258.03.patch, HDFS-12258.04.patch, HDFS-12258.05.patch, 
> HDFS-12258.06.patch, HDFS-12258-07.patch, HDFS-12258.07.patch
>
>
> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12350) Support meta tags in configs

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146833#comment-16146833
 ] 

Hadoop QA commented on HDFS-12350:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 250 unchanged - 1 fixed = 257 total (was 251) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 20s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12350 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884413/HDFS-12350.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9023539af9b6 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 32cba6c |
| Default Java | 1.8.0_144 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20920/artifact/patchprocess/branch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20920/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20920/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20920/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20920/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20920/console |
| Powered by | Apache 

[jira] [Updated] (HDFS-12258) ec -listPolicies should list all policies in system, no matter it's enabled or disabled

2017-08-30 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-12258:

   Resolution: Fixed
 Hadoop Flags: Incompatible change
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed to trunk!

Corrected one minor typo in the docs 
({{a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md}}); 
I am attaching the committed patch to the JIRA.
{code}
Lists all (enabled, disabled and removed) the erasure coding policies
{code}
to
{code}
Lists all (enabled, disabled and removed) erasure coding policies
{code}


> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled
> ---
>
> Key: HDFS-12258
> URL: https://issues.apache.org/jira/browse/HDFS-12258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: Wei Zhou
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12258.01.patch, HDFS-12258.02.patch, 
> HDFS-12258.03.patch, HDFS-12258.04.patch, HDFS-12258.05.patch, 
> HDFS-12258.06.patch, HDFS-12258.07.patch
>
>
> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12258) ec -listPolicies should list all policies in system, no matter it's enabled or disabled

2017-08-30 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146814#comment-16146814
 ] 

Rakesh R edited comment on HDFS-12258 at 8/30/17 7:37 AM:
--

Thanks [~zhouwei] for the contribution. +1 LGTM

Test case failures are unrelated to the patch.


was (Author: rakeshr):
Thanks [~zhouwei] for the contribution. +1 LGTM

> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled
> ---
>
> Key: HDFS-12258
> URL: https://issues.apache.org/jira/browse/HDFS-12258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: Wei Zhou
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12258.01.patch, HDFS-12258.02.patch, 
> HDFS-12258.03.patch, HDFS-12258.04.patch, HDFS-12258.05.patch, 
> HDFS-12258.06.patch, HDFS-12258.07.patch
>
>
> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12258) ec -listPolicies should list all policies in system, no matter it's enabled or disabled

2017-08-30 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146814#comment-16146814
 ] 

Rakesh R commented on HDFS-12258:
-

Thanks [~zhouwei] for the contribution. +1 LGTM

> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled
> ---
>
> Key: HDFS-12258
> URL: https://issues.apache.org/jira/browse/HDFS-12258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: Wei Zhou
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12258.01.patch, HDFS-12258.02.patch, 
> HDFS-12258.03.patch, HDFS-12258.04.patch, HDFS-12258.05.patch, 
> HDFS-12258.06.patch, HDFS-12258.07.patch
>
>
> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12375) Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh.

2017-08-30 Thread Wenxin He (JIRA)
Wenxin He created HDFS-12375:


 Summary: Fail to start/stop journalnodes using 
start-dfs.sh/stop-dfs.sh.
 Key: HDFS-12375
 URL: https://issues.apache.org/jira/browse/HDFS-12375
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation, scripts
Affects Versions: 3.0.0-beta1
Reporter: Wenxin He
Assignee: Wenxin He


When 'dfs.namenode.checkpoint.edits.dir' is suffixed with the corresponding 
NameServiceID, we cannot start/stop journalnodes using 
start-dfs.sh/stop-dfs.sh.
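
For illustration, a hypothetical hdfs-site.xml fragment (the nameservice ID "ns1" 
and the path are made up) showing the kind of per-nameservice suffix that trips up 
the scripts:

{code}
<!-- Hypothetical example: property name suffixed with the NameServiceID. -->
<property>
  <name>dfs.namenode.checkpoint.edits.dir.ns1</name>
  <value>/data/hadoop/namesecondary/edits</value>
</property>
{code}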



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12350) Support meta tags in configs

2017-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146778#comment-16146778
 ] 

Hadoop QA commented on HDFS-12350:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 250 unchanged - 1 fixed = 257 total (was 251) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 16s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestConfiguration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12350 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884406/HDFS-12350.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 25ffc842cf9e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4cae120 |
| Default Java | 1.8.0_144 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20919/artifact/patchprocess/branch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20919/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20919/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20919/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20919/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20919/console |
| Powered by | 

[jira] [Assigned] (HDFS-11066) Improve test coverage for ISA-L native coder

2017-08-30 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang reassigned HDFS-11066:
---

Assignee: Huafeng Wang

> Improve test coverage for ISA-L native coder
> 
>
> Key: HDFS-11066
> URL: https://issues.apache.org/jira/browse/HDFS-11066
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>  Labels: hdfs-ec-3.0-nice-to-have
>
> Some issues were introduced but not found in time due to the lack of 
> necessary Jenkins support for the ISA-L related build options. We should 
> re-enable the ISA-L related build options in the Jenkins system to ensure 
> the quality of the related native code.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12374) Document the missing -ns option of haadmin.

2017-08-30 Thread Wenxin He (JIRA)
Wenxin He created HDFS-12374:


 Summary: Document the missing -ns option of haadmin.
 Key: HDFS-12374
 URL: https://issues.apache.org/jira/browse/HDFS-12374
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, federation
Affects Versions: 3.0.0-alpha4
Reporter: Wenxin He
Assignee: Wenxin He
Priority: Minor


Document the missing -ns option of haadmin in HDFSCommands.md, 
HDFSHighAvailabilityWithQJM.md and HDFSHighAvailabilityWithNFS.md.
Before patch:
{noformat}
Usage:

hdfs haadmin -transitionToActive <serviceId> [--forceactive]
hdfs haadmin -transitionToStandby <serviceId>
hdfs haadmin -failover [--forcefence] [--forceactive] <serviceId> <serviceId>
hdfs haadmin -getServiceState <serviceId>
hdfs haadmin -getAllServiceState
hdfs haadmin -checkHealth <serviceId>
hdfs haadmin -help <command>
{noformat}

After patch:
{noformat}
Usage: haadmin [-ns <nameserviceId>]
    [-transitionToActive [--forceactive] <serviceId>]
    [-transitionToStandby <serviceId>]
    [-failover [--forcefence] [--forceactive] <serviceId> <serviceId>]
    [-getServiceState <serviceId>]
    [-getAllServiceState]
    [-checkHealth <serviceId>]
    [-help <command>]
{noformat}
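
For example (hypothetical IDs "ns1" and "nn1"), the {{-ns}} option scopes a command 
to one federated nameservice:

{noformat}
hdfs haadmin -ns ns1 -getServiceState nn1
{noformat}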



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12350) Support meta tags in configs

2017-08-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12350:
--
Attachment: HDFS-12350.01.patch

> Support meta tags in configs
> 
>
> Key: HDFS-12350
> URL: https://issues.apache.org/jira/browse/HDFS-12350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12350.01.patch
>
>
> We should add a meta tag extension to the hadoop/hdfs config so that we can 
> retrieve properties by various tags like PERFORMANCE, NAMENODE, etc. Right now 
> we don't have an option to group or list properties related to performance, 
> security, or datanodes. Grouping properties through a restricted set of meta 
> tags and then exposing them in the Configuration class will be useful for end 
> users.
> For example, here is a config with meta tags:
> {code}
> <configuration>
>   <property>
>     <name>dfs.namenode.servicerpc-bind-host</name>
>     <value>localhost</value>
>     <tag>REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.fs-limits.min-block-size</name>
>     <value>1048576</value>
>     <tag>PERFORMANCE,REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.logging.level</name>
>     <value>Info</value>
>     <tag>HDFS, DEBUG</tag>
>   </property>
> </configuration>
> {code}
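
For illustration, a minimal self-contained sketch of the proposed tag lookup (the 
class, field, and method names below are made up and are not the actual 
{{Configuration}} changes in the patch):

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class TagIndexSketch {
  // Maps each tag to the properties carrying it.
  private final Map<String, Properties> propsByTag = new HashMap<>();

  void add(String name, String value, String tags) {
    for (String tag : tags.split(",")) {
      propsByTag.computeIfAbsent(tag.trim(), t -> new Properties())
                .setProperty(name, value);
    }
  }

  Properties getAllPropertiesByTag(String tag) {
    return propsByTag.getOrDefault(tag, new Properties());
  }

  public static void main(String[] args) {
    TagIndexSketch conf = new TagIndexSketch();
    conf.add("dfs.namenode.servicerpc-bind-host", "localhost", "REQUIRED");
    conf.add("dfs.namenode.fs-limits.min-block-size", "1048576", "PERFORMANCE,REQUIRED");
    // Prints only the property tagged PERFORMANCE.
    System.out.println(conf.getAllPropertiesByTag("PERFORMANCE"));
  }
}
{code}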



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12350) Support meta tags in configs

2017-08-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12350:
--
Attachment: (was: HDFS-12350.01.patch)

> Support meta tags in configs
> 
>
> Key: HDFS-12350
> URL: https://issues.apache.org/jira/browse/HDFS-12350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12350.01.patch
>
>
> We should add a meta tag extension to the hadoop/hdfs config so that we can 
> retrieve properties by various tags like PERFORMANCE, NAMENODE, etc. Right now 
> we don't have an option to group or list properties related to performance, 
> security, or datanodes. Grouping properties through a restricted set of meta 
> tags and then exposing them in the Configuration class will be useful for end 
> users.
> For example, here is a config with meta tags:
> {code}
> <configuration>
>   <property>
>     <name>dfs.namenode.servicerpc-bind-host</name>
>     <value>localhost</value>
>     <tag>REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.fs-limits.min-block-size</name>
>     <value>1048576</value>
>     <tag>PERFORMANCE,REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.logging.level</name>
>     <value>Info</value>
>     <tag>HDFS, DEBUG</tag>
>   </property>
> </configuration>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11964) RS-6-3-LEGACY has a decoding bug when it is used for pread

2017-08-30 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146708#comment-16146708
 ] 

Kai Zheng commented on HDFS-11964:
--

Hi [~tasanuma0829],

Thanks for your update and for digging into this deeply. I think you have caught the 
root cause. One comment: in the block below, the re-initialization of 
{{codingBuffer}} isn't necessary because it has already been done in 
{{initDecodeInputs}}. +1 once this is addressed.

{code}
+int bufLen = (int) alignedStripe.getSpanInBlock();
+int bufCount = dataBlkNum + parityBlkNum;
+codingBuffer = dfsStripedInputStream.getBufferPool().
+getBuffer(useDirectBuffer(), bufLen * bufCount);
+ByteBuffer buffer = codingBuffer.duplicate();
+decodeInputs[index] = new ECChunk(buffer, index * bufLen, bufLen);
{code}

> RS-6-3-LEGACY has a decoding bug when it is used for pread
> --
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
> Attachments: HDFS-11964.1.patch, HDFS-11964.2.patch
>
>
> TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure fails on 
> trunk:
> {code}
> Running org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.99 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> testPreadWithDNFailure(org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy)
>   Time elapsed: 1.265 sec  <<< FAILURE!
> org.junit.internal.ArrayComparisonFailure: arrays first differed at element 
> [327680]; expected:<-36> but was:<2>
>   at 
> org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
>   at org.junit.Assert.internalArrayEquals(Assert.java:473)
>   at org.junit.Assert.assertArrayEquals(Assert.java:294)
>   at org.junit.Assert.assertArrayEquals(Assert.java:305)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedInputStream.testPreadWithDNFailure(TestDFSStripedInputStream.java:306)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12350) Support meta tags in configs

2017-08-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12350:
--
Attachment: (was: HDFS-12350.01.patch)

> Support meta tags in configs
> 
>
> Key: HDFS-12350
> URL: https://issues.apache.org/jira/browse/HDFS-12350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12350.01.patch
>
>
> We should add a meta tag extension to the hadoop/hdfs config so that we can 
> retrieve properties by various tags like PERFORMANCE, NAMENODE, etc. Right now 
> we don't have an option to group or list properties related to performance, 
> security, or datanodes. Grouping properties through a restricted set of meta 
> tags and then exposing them in the Configuration class will be useful for end 
> users.
> For example, here is a config with meta tags:
> {code}
> <configuration>
>   <property>
>     <name>dfs.namenode.servicerpc-bind-host</name>
>     <value>localhost</value>
>     <tag>REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.fs-limits.min-block-size</name>
>     <value>1048576</value>
>     <tag>PERFORMANCE,REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.logging.level</name>
>     <value>Info</value>
>     <tag>HDFS, DEBUG</tag>
>   </property>
> </configuration>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12350) Support meta tags in configs

2017-08-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12350:
--
Attachment: HDFS-12350.01.patch

> Support meta tags in configs
> 
>
> Key: HDFS-12350
> URL: https://issues.apache.org/jira/browse/HDFS-12350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12350.01.patch
>
>
> We should add a meta tag extension to the hadoop/hdfs config so that we can 
> retrieve properties by various tags like PERFORMANCE, NAMENODE, etc. Right now 
> we don't have an option to group or list properties related to performance, 
> security, or datanodes. Grouping properties through a restricted set of meta 
> tags and then exposing them in the Configuration class will be useful for end 
> users.
> For example, here is a config with meta tags:
> {code}
> <configuration>
>   <property>
>     <name>dfs.namenode.servicerpc-bind-host</name>
>     <value>localhost</value>
>     <tag>REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.fs-limits.min-block-size</name>
>     <value>1048576</value>
>     <tag>PERFORMANCE,REQUIRED</tag>
>   </property>
>   <property>
>     <name>dfs.namenode.logging.level</name>
>     <value>Info</value>
>     <tag>HDFS, DEBUG</tag>
>   </property>
> </configuration>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-08-30 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146700#comment-16146700
 ] 

SammiChen commented on HDFS-7859:
-

Hi [~eddyxu], thanks for reviewing the patch! Persisting erasure coding policies 
in the NameNode is a critical part of the "provide support for user to customize EC 
policy" feature; without this JIRA, the customized EC policy feature cannot be 
called complete, so it's better to get this into beta1. Regarding the detailed 
comments, adding a state to the EC policy is covered by HDFS-12258. I will upload a 
new patch shortly after HDFS-12258 is committed. 

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12363) Possible NPE in BlockManager$StorageInfoDefragmenter#scanAndCompactStorages

2017-08-30 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146677#comment-16146677
 ] 

Mingliang Liu commented on HDFS-12363:
--

The unit test failures seem unrelated; can you confirm? If possible, we can 
trigger another pre-commit run and hopefully they will pass.

> Possible NPE in BlockManager$StorageInfoDefragmenter#scanAndCompactStorages
> ---
>
> Key: HDFS-12363
> URL: https://issues.apache.org/jira/browse/HDFS-12363
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12363.01.patch, HDFS-12363.02.patch
>
>
> Saw NN going down with NPE below:
> {noformat}
> ERROR org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Thread 
> received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.scanAndCompactStorages(BlockManager.java:3897)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:3852)
> at java.lang.Thread.run(Thread.java:745)
> 2017-08-21 22:14:05,303 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2017-08-21 22:14:05,313 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> {noformat}
> In that version, {{BlockManager}} code is:
> {code}
> 3896    try {
> 3897      DatanodeStorageInfo storage = datanodeManager.
> 3898          getDatanode(datanodesAndStorages.get(i)).
> 3899          getStorageInfo(datanodesAndStorages.get(i + 1));
> 3900      if (storage != null) {
> {code}
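
The committed fix is not reproduced here; the following is only a self-contained 
sketch of the shape of guard that avoids the NPE, with plain maps standing in for 
{{DatanodeManager#getDatanode}} and the storage lookup:

{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NullGuardSketch {
  // Stand-in for the datanode registry; "dn1" was removed concurrently,
  // so the lookup below returns null.
  static final Map<String, Map<String, String>> NODES = new HashMap<>();

  public static void main(String[] args) {
    List<String> datanodesAndStorages = Arrays.asList("dn1", "storage-1");
    for (int i = 0; i < datanodesAndStorages.size(); i += 2) {
      Map<String, String> node = NODES.get(datanodesAndStorages.get(i));
      if (node == null) {
        continue;  // node vanished between snapshot and scan: skip, don't NPE
      }
      String storage = node.get(datanodesAndStorages.get(i + 1));
      if (storage != null) {
        System.out.println("compacting " + storage);
      }
    }
  }
}
{code}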



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org