[jira] [Commented] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir

2017-09-01 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151401#comment-16151401
 ] 

Xiao Chen commented on HDFS-12291:
--

Thanks for the thoughts [~umamaheswararao].

I think it makes sense to require an SPS call for rename ops here, assuming 
rename races are handled correctly. E.g.:
# rename on a dir
# the dir added file X to the batch, which is then being processed in 
{{storageMovementNeeded}}
# a rename of X -> Y happens, then SPS is called on Y with a different storage 
policy
I have not had a chance to check the code, but since step 2 tracks by inode id, 
I think we're fine.
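For illustration, a minimal sketch of why id-based tracking survives renames 
(names are hypothetical, not the actual SPS code):
{code}
// The batch stores inode ids, not paths, so a rename of X -> Y does not
// invalidate a queued entry; the satisfier resolves the file's current
// state (including its current storage policy) from the id at processing
// time.
Queue<Long> storageMovementNeeded = new LinkedList<>();

void addToBatch(INode file) {
  storageMovementNeeded.add(file.getId());  // track by inode id, not path
}

void processOne(FSDirectory fsd) {
  Long inodeId = storageMovementNeeded.poll();
  INode inode = fsd.getInode(inodeId);      // resolves the post-rename inode
  if (inode != null) {
    satisfyPolicy(inode);                   // hypothetical helper
  }
}
{code}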

bq. Later when we enable automatic SPS, this should be handled automatically. 
Also makes sense. Now it sounds like whoever does phase 2 first will handle the 
rename with pride. :)

> [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy 
> of all the files under the given dir
> -
>
> Key: HDFS-12291
> URL: https://issues.apache.org/jira/browse/HDFS-12291
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12291-HDFS-10285-01.patch, 
> HDFS-12291-HDFS-10285-02.patch
>
>
> For the given source path directory, presently SPS considers only the files 
> immediately under the directory (only one level of scanning) for satisfying 
> the policy. It won't do a recursive directory scan and schedule SPS tasks to 
> satisfy the storage policy of all the files down to the leaf nodes. 
> The idea of this jira is to discuss & implement an efficient recursive 
> directory iteration mechanism that satisfies the storage policy of all the 
> files under the given directory.
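For illustration, a minimal sketch of the kind of recursive iteration the jira 
proposes (names are hypothetical, not the patch):
{code}
// Walk the directory tree iteratively and queue every file's inode id for
// the satisfier, instead of stopping after one level of scanning.
Deque<INodeDirectory> pending = new ArrayDeque<>();
pending.push(rootDir);
while (!pending.isEmpty()) {
  INodeDirectory dir = pending.pop();
  for (INode child : dir.getChildrenList(Snapshot.CURRENT_STATE_ID)) {
    if (child.isDirectory()) {
      pending.push(child.asDirectory());
    } else if (child.isFile()) {
      storageMovementNeeded.add(child.getId());  // satisfy policy later
    }
  }
}
{code}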



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12385) Ozone: OzoneClient: Refactoring OzoneClient API

2017-09-01 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12385:
--
Summary: Ozone: OzoneClient: Refactoring OzoneClient API  (was: Ozone: 
OzoneClient: Refactoring of OzoneClient API)

> Ozone: OzoneClient: Refactoring OzoneClient API
> ---
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12385-HDFS-7240.000.patch, OzoneClient.pdf
>
>
> This jira is for refactoring the {{OzoneClient}} API. [^OzoneClient.pdf] 
> gives an idea of how the API will look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151396#comment-16151396
 ] 

Hadoop QA commented on HDFS-12357:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 45s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 4 new + 411 unchanged - 
0 fixed = 415 total (was 411) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 647 unchanged - 0 fixed = 651 total (was 647) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestHFlush |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
|   | org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12357 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885045/HDFS-12357.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux b864616b7569 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7996eca |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20976/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 

[jira] [Commented] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151392#comment-16151392
 ] 

Hadoop QA commented on HDFS-12386:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 26s{color} 
| {color:red} hadoop-hdfs-project generated 1 new + 450 unchanged - 0 fixed = 
451 total (was 450) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 5 new + 
241 unchanged - 0 fixed = 246 total (was 241) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | 

[jira] [Commented] (HDFS-10467) Router-based HDFS federation

2017-09-01 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151369#comment-16151369
 ] 

Íñigo Goiri commented on HDFS-10467:


[~He Tianyi], now that most of the patches are ready and we are discussing what 
it would take to merge the branch into trunk, would you mind taking a look at 
the code?
Let me know if there is any feature from NNProxy you think should be covered 
here.

> Router-based HDFS federation
> 
>
> Key: HDFS-10467
> URL: https://issues.apache.org/jira/browse/HDFS-10467
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-10467.002.patch, HDFS-10467.PoC.001.patch, 
> HDFS-10467.PoC.patch, HDFS Router Federation.pdf, 
> HDFS-Router-Federation-Prototype.patch
>
>
> Add a Router to provide a federated view of multiple HDFS clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12218) Rename split EC / replicated block metrics in BlockManager

2017-09-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12218:
---
Labels: hdfs-ec-3.0-must-do  (was: )

> Rename split EC / replicated block metrics in BlockManager
> --
>
> Key: HDFS-12218
> URL: https://issues.apache.org/jira/browse/HDFS-12218
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, metrics
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: 
> 0001-Rename-ECBlockGroupsStats-to-ECBlockGroupStats-and-B.patch, 
> 0002-Deeper-rename-of-BlocksStats-to-BlockStats-and-ECBlo.patch, 
> 0003-Rename-the-metric-name-in-FSNamesystem-as-well.patch, 
> HDFS-12218.consolidated.001.patch, HDFS-12218.consolidated.002.patch
>
>
> As noted in HDFS-12206, we should propagate the naming changes made there 
> for FSNamesystem into BlockManager and related classes. This is also an 
> opportunity to clarify the usage of "ECBlocks" vs "ECBlockGroups" in some names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12218) Rename split EC / replicated block metrics in BlockManager

2017-09-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12218:
---
Attachment: HDFS-12218.consolidated.002.patch

> Rename split EC / replicated block metrics in BlockManager
> --
>
> Key: HDFS-12218
> URL: https://issues.apache.org/jira/browse/HDFS-12218
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, metrics
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: 
> 0001-Rename-ECBlockGroupsStats-to-ECBlockGroupStats-and-B.patch, 
> 0002-Deeper-rename-of-BlocksStats-to-BlockStats-and-ECBlo.patch, 
> 0003-Rename-the-metric-name-in-FSNamesystem-as-well.patch, 
> HDFS-12218.consolidated.001.patch, HDFS-12218.consolidated.002.patch
>
>
> As noted in HDFS-12206, we should propagate the naming changes made there 
> for FSNamesystem into BlockManager and related classes. This is also an 
> opportunity to clarify the usage of "ECBlocks" vs "ECBlockGroups" in some names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12218) Rename split EC / replicated block metrics in BlockManager

2017-09-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-12218:
--

Assignee: Andrew Wang

> Rename split EC / replicated block metrics in BlockManager
> --
>
> Key: HDFS-12218
> URL: https://issues.apache.org/jira/browse/HDFS-12218
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, metrics
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: 
> 0001-Rename-ECBlockGroupsStats-to-ECBlockGroupStats-and-B.patch, 
> 0002-Deeper-rename-of-BlocksStats-to-BlockStats-and-ECBlo.patch, 
> 0003-Rename-the-metric-name-in-FSNamesystem-as-well.patch, 
> HDFS-12218.consolidated.001.patch, HDFS-12218.consolidated.002.patch
>
>
> As noted in HDFS-12206, we should propagate the naming changes made there 
> for FSNamesystem into BlockManager and related classes. This is also an 
> opportunity to clarify the usage of "ECBlocks" vs "ECBlockGroups" in some names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12218) Rename split EC / replicated block metrics in BlockManager

2017-09-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12218:
---
Attachment: 0003-Rename-the-metric-name-in-FSNamesystem-as-well.patch

Missed renaming the metric; will post a new consolidated patch too. I'll 
squash the #2 and #3 patches when committing for real.

> Rename split EC / replicated block metrics in BlockManager
> --
>
> Key: HDFS-12218
> URL: https://issues.apache.org/jira/browse/HDFS-12218
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, metrics
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
> Attachments: 
> 0001-Rename-ECBlockGroupsStats-to-ECBlockGroupStats-and-B.patch, 
> 0002-Deeper-rename-of-BlocksStats-to-BlockStats-and-ECBlo.patch, 
> 0003-Rename-the-metric-name-in-FSNamesystem-as-well.patch, 
> HDFS-12218.consolidated.001.patch
>
>
> As noted in HDFS-12206, we should propagate the naming changes made there 
> for FSNamesystem into BlockManager and related classes. This is also an 
> opportunity to clarify the usage of "ECBlocks" vs "ECBlockGroups" in some names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12218) Rename split EC / replicated block metrics in BlockManager

2017-09-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12218:
---
Hadoop Flags: Incompatible change
Release Note: 
This renames ClientProtocol#getECBlockGroupsStats to 
ClientProtocol#getEcBlockGroupStats and ClientProtocol#getBlockStats to 
ClientProtocol#getReplicatedBlockStats. The return-type classes have also been 
similarly renamed. Their fields have also been renamed to drop the "stats" 
suffix.

Additionally, ECBlockGroupStats#pendingDeletionBlockGroups has been renamed to 
ECBlockGroupStats#pendingDeletionBlocks, to clarify that this is the number of 
blocks, not block groups, pending deletion. The corresponding NameNode metric 
has also been renamed to PendingDeletionECBlocks.
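For illustration, a caller sketch under the new names (assuming a 
{{ClientProtocol}} handle {{namenode}}; getter names are inferred from the 
renamed fields, not confirmed against the patch):
{code}
// Old: namenode.getBlockStats() / namenode.getECBlockGroupsStats()
ReplicatedBlockStats replicated = namenode.getReplicatedBlockStats();
ECBlockGroupStats ecGroups = namenode.getEcBlockGroupStats();
// Renamed field: counts blocks pending deletion, not block groups.
long pendingEcDeletions = ecGroups.getPendingDeletionBlocks();
{code}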

> Rename split EC / replicated block metrics in BlockManager
> --
>
> Key: HDFS-12218
> URL: https://issues.apache.org/jira/browse/HDFS-12218
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, metrics
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
> Attachments: 
> 0001-Rename-ECBlockGroupsStats-to-ECBlockGroupStats-and-B.patch, 
> 0002-Deeper-rename-of-BlocksStats-to-BlockStats-and-ECBlo.patch, 
> HDFS-12218.consolidated.001.patch
>
>
> As noted in HDFS-12206, we should propagate the naming changes made there 
> for FSNamesystem into BlockManager and related classes. This is also an 
> opportunity to clarify the usage of "ECBlocks" vs "ECBlockGroups" in some names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12218) Rename split EC / replicated block metrics in BlockManager

2017-09-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12218:
---
Attachment: HDFS-12218.consolidated.001.patch

Here's a consolidated patch. I found some more places that need renaming to 
remove the "Stats" suffix within BlockManager, and also the getFsBlocksStats 
API, which required a lot of movement. Looking at InvalidateBlocks, it's 
counting the number of blocks, not block groups, so I updated that everywhere 
as well.

The split patches are easier to review, since they do the rename separately 
from the rest of the changes.

[~manojg] / [~eddyxu] could you review?

> Rename split EC / replicated block metrics in BlockManager
> --
>
> Key: HDFS-12218
> URL: https://issues.apache.org/jira/browse/HDFS-12218
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, metrics
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
> Attachments: 
> 0001-Rename-ECBlockGroupsStats-to-ECBlockGroupStats-and-B.patch, 
> 0002-Deeper-rename-of-BlocksStats-to-BlockStats-and-ECBlo.patch, 
> HDFS-12218.consolidated.001.patch
>
>
> As noted in HDFS-12206, we should propagate the naming changes made there 
> for FSNamesystem into BlockManager and related classes. This is also an 
> opportunity to clarify the usage of "ECBlocks" vs "ECBlockGroups" in some names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12218) Rename split EC / replicated block metrics in BlockManager

2017-09-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12218:
---
Attachment: 0002-Deeper-rename-of-BlocksStats-to-BlockStats-and-ECBlo.patch
0001-Rename-ECBlockGroupsStats-to-ECBlockGroupStats-and-B.patch

Here are some split patches that preserve the rename. Will attach a 
consolidated patch for Jenkins.

> Rename split EC / replicated block metrics in BlockManager
> --
>
> Key: HDFS-12218
> URL: https://issues.apache.org/jira/browse/HDFS-12218
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, metrics
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
> Attachments: 
> 0001-Rename-ECBlockGroupsStats-to-ECBlockGroupStats-and-B.patch, 
> 0002-Deeper-rename-of-BlocksStats-to-BlockStats-and-ECBlo.patch
>
>
> As noted in HDFS-12206, we should propagate the naming changes made there 
> for FSNamesystem into BlockManager and related classes. This is also an 
> opportunity to clarify the usage of "ECBlocks" vs "ECBlockGroups" in some names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151362#comment-16151362
 ] 

Yongjun Zhang commented on HDFS-12357:
--

Hi [~chris.douglas],

I uploaded rev005 to avoid the {{components = Arrays.copyOfRange(components, 1, 
components.length);}} overhead.

Basically I added a new API (package scope), {{boolean isBypassUser()}}, to the 
{{INodeAttributeProvider}} class, with a default implementation that returns 
false, and let the {{UserFilterINodeAttributeProvider}} version override it. 
Then I do the following:
{code}
if (attributeProvider != null &&
!attributeProvider.isBypassUser()) {
  // permission checking sends the full components array including the
  // first empty component for the root.  however file status
  // related calls are expected to strip out the root component according
  // to TestINodeAttributeProvider.
  byte[][] components = iip.getPathComponents();
  components = Arrays.copyOfRange(components, 1, components.length);
  nodeAttrs = attributeProvider.getAttributes(components, nodeAttrs);
}
return nodeAttrs;
..
{code}
similar to the logic in v001. 
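A minimal sketch of that API shape as described above (the override body is 
illustrative, not the patch code):
{code}
// In INodeAttributeProvider (package scope): default is no bypass.
boolean isBypassUser() {
  return false;
}

// In UserFilterINodeAttributeProvider: check the configured user list
// against the current caller (illustrative implementation).
@Override
boolean isBypassUser() {
  try {
    String user = UserGroupInformation.getCurrentUser().getShortUserName();
    return bypassUsers.contains(user);  // bypassUsers: configured Set<String>
  } catch (IOException e) {
    return false;  // on lookup failure, fall back to the provider path
  }
}
{code}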

So there is a trade-off between not exposing the isBypassUser API and 
suffering the cost overhead, vs. exposing it and saving the cost.

Wonder what you think?

Thanks.



> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch, HDFS-12357.005.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata 
> may be read from the provider, and thus provider data read from the source 
> may be saved to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is one of the special users.
> If applications that need data from the external attribute provider are run 
> as the special user, they won't work. So the constraint on this approach is 
> that the special users here should not run applications that need data from 
> the external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151361#comment-16151361
 ] 

Hadoop QA commented on HDFS-12357:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 56s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 4 new + 411 unchanged - 
0 fixed = 415 total (was 411) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 643 unchanged - 0 fixed = 647 total (was 643) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestSafeMode |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestEncryptedTransfer |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
|   | org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12357 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885032/HDFS-12357.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 671cd6c01249 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 

[jira] [Updated] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-12357:
-
Attachment: HDFS-12357.005.patch

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch, HDFS-12357.005.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata 
> may be read from the provider, and thus provider data read from the source 
> may be saved to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is one of the special users.
> If applications that need data from the external attribute provider are run 
> as the special user, they won't work. So the constraint on this approach is 
> that the special users here should not run applications that need data from 
> the external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.

2017-09-01 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12386:
--
Status: Patch Available  (was: Open)

> Add fsserver defaults call to WebhdfsFileSystem.
> 
>
> Key: HDFS-12386
> URL: https://issues.apache.org/jira/browse/HDFS-12386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Minor
> Attachments: HDFS-12386.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.

2017-09-01 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12386:
--
Attachment: HDFS-12386.patch

Attaching trunk patch.

> Add fsserver defaults call to WebhdfsFileSystem.
> 
>
> Key: HDFS-12386
> URL: https://issues.apache.org/jira/browse/HDFS-12386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Minor
> Attachments: HDFS-12386.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151345#comment-16151345
 ] 

Yongjun Zhang commented on HDFS-12357:
--

Hi [~chris.douglas],

With v004, the only remaining concern is 

https://issues.apache.org/jira/browse/HDFS-12357?focusedCommentId=16151306=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16151306

If we can avoid {{components = Arrays.copyOfRange(components, 1, 
components.length);}}, that would be great, because this happens on every 
{{getAttributes}} call and can be skipped when it's the bypass user.

Thanks.


> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata 
> may be read from the provider, and thus provider data read from the source 
> may be saved to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is one of the special users.
> If applications that need data from the external attribute provider are run 
> as the special user, they won't work. So the constraint on this approach is 
> that the special users here should not run applications that need data from 
> the external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151344#comment-16151344
 ] 

Manoj Govindassamy commented on HDFS-12357:
---

Thanks for the patch [~chris.douglas]. Having 
{{UserFilterINodeAttributeProvider}} seems like a cleaner approach. Is it 
possible to examine the {{bypassUser}} config and skip the 
{{UserFilterINodeAttributeProvider}} wrapper if the user list is empty? Most 
of the time the bypass user list is going to be empty, and we can skip the 
wrapper entirely if so (a sketch follows the excerpt below). 

{noformat}
void setINodeAttributeProvider(
    INodeAttributeProvider provider, Configuration conf) {
  attributeProvider = null == provider
      ? null
      : new UserFilterINodeAttributeProvider(provider, conf);
}
{noformat}
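A sketch of the suggested short-circuit (the config key name is made up for 
illustration):
{code}
void setINodeAttributeProvider(
    INodeAttributeProvider provider, Configuration conf) {
  // Only wrap when a bypass user list is actually configured; otherwise
  // install the provider directly (or null) and skip the wrapper entirely.
  String[] bypassUsers = conf.getTrimmedStrings(
      "dfs.namenode.inode.attributes.provider.bypass.users");
  attributeProvider = (null == provider || bypassUsers.length == 0)
      ? provider
      : new UserFilterINodeAttributeProvider(provider, conf);
}
{code}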

[~yzhangal], I don't see the problem with {{getAccessControlEnforcer}}. But as 
you pointed out, if we can avoid the duplication of the components array, that 
would be great. 

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata 
> may be read from the provider, and thus provider data read from the source 
> may be saved to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is one of the special users.
> If applications that need data from the external attribute provider are run 
> as the special user, they won't work. So the constraint on this approach is 
> that the special users here should not run applications that need data from 
> the external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151325#comment-16151325
 ] 

Yongjun Zhang commented on HDFS-12357:
--

Ah, I overlooked the code you added here in the new class:
{code}
 @Override
  public AccessControlEnforcer getExternalAccessControlEnforcer(
  AccessControlEnforcer defaultEnforcer) {
return isBypassUser()
? defaultEnforcer
: provider.getExternalAccessControlEnforcer(defaultEnforcer);
  }
{code}
So that actually addressed comment 2.b.

Thanks.


> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata 
> may be read from the provider, and thus provider data read from the source 
> may be saved to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is one of the special users.
> If applications that need data from the external attribute provider are run 
> as the special user, they won't work. So the constraint on this approach is 
> that the special users here should not run applications that need data from 
> the external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151315#comment-16151315
 ] 

Yongjun Zhang commented on HDFS-12357:
--

Hi [~chris.douglas],

Would you please revisit my comment 2.b? 

In order to do the wrapper implementation, we either need to add a new API to 
the provider base class such that it returns the real provider depending on 
whether it's a bypass user, or add a new API that says whether it's a bypass 
user, and let the following method call this API:
{code}
 private AccessControlEnforcer getAccessControlEnforcer() {
return (attributeProvider != null)
? attributeProvider.getExternalAccessControlEnforcer(this) : this;
  }
{code}
Adding this new API is an integration issue to me.

Thanks.


> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata 
> may be read from the provider, and thus provider data read from the source 
> may be saved to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is one of the special users.
> If applications that need data from the external attribute provider are run 
> as the special user, they won't work. So the constraint on this approach is 
> that the special users here should not run applications that need data from 
> the external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151306#comment-16151306
 ] 

Yongjun Zhang commented on HDFS-12357:
--

Hm, I saw that you do this
{code}
  @Override
  public INodeAttributes getAttributes(
  String[] pathElements, INodeAttributes inode) {
return isBypassUser()
? inode
: provider.getAttributes(pathElements, inode);
  }
{code}
That is, you did not try to get the HDFS attributes again; instead, you 
returned the attributes passed in by the caller.

However, 
{code}
  byte[][] components = iip.getPathComponents();
  components = Arrays.copyOfRange(components, 1, components.length);
{code}
the above code is avoided in v001, but it's unavoidable in the wrapper 
implementation, even though {{components}} will not be used when it's a bypass 
user. So this is wasted work.




> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata 
> may be read from the provider, and thus provider data read from the source 
> may be saved to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is one of the special users.
> If applications that need data from the external attribute provider are run 
> as the special user, they won't work. So the constraint on this approach is 
> that the special users here should not run applications that need data from 
> the external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151304#comment-16151304
 ] 

Chris Douglas commented on HDFS-12357:
--

The inner provider is not invoked: 
{{attributeProvider.getAttributes(components, nodeAttrs)}} checks for the 
user and returns {{nodeAttrs}}.

If {{iip.getPathComponents()}} and the copy are a significant cost (which 
would be bad news for external attribute providers generally), then this 
could still be pushed down a level, out of {{FSDirectory}}.

I don't see why this is a significant difference.

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If an external attribute provider is enabled, the metadata 
> may be read from the provider, and thus provider data read from the source 
> may be saved to the target HDFS. 
> We want to avoid saving metadata from the external provider to HDFS, so we 
> want to bypass the external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is one of the special users.
> If applications that need data from the external attribute provider are run 
> as the special user, they won't work. So the constraint on this approach is 
> that the special users here should not run applications that need data from 
> the external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-01 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151296#comment-16151296
 ] 

Chen Liang commented on HDFS-12235:
---

Thanks [~yuanbo] for the patch; looks good to me overall. Some comments:

in {{BlockManagerImpl.java}} deleteBlocks

- it seems to silently handle the exception when 
{{deleteBlockLog.addTransactions}} goes wrong. By which I mean, the rollback 
gets done and messages get logged, but the caller of the function has no way 
to know whether the delete succeeded, or whether an exception happened and 
things got rolled back, in which case the delete is not done. Maybe rethrow 
the exception after the rollback is done (see the sketch after this list)?
- the writeBatch(rollbackBatch) may throw IOException and fail, in which case 
the rollback itself failed. Not sure whether there is even a way to handle 
such a rollback failure at all, but maybe we should at least log a LOG.error 
message or something here.
- This may not be a valid case, but since deleteBlocks is not synchronized, I 
wonder whether it is possible to have multiple threads doing deletion here. 
If yes, is there any chance of multiple threads trying to delete the same 
blockID (e.g. two duplicate key deletions reach the server at the same time)? 
Will we potentially run into any issue (e.g. if two threads make exactly the 
same blockStore.writeBatch(batch) call)?
- Some TODO comments were removed; it appears to me that some of them are 
still things to be done in the future.
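A sketch of the handling the first bullet suggests (hypothetical shape, not 
the patch code; the argument names are made up):
{code}
// Roll back, log, then rethrow so the caller learns the delete did not
// happen instead of silently treating it as done.
try {
  deleteBlockLog.addTransactions(transactions);
} catch (IOException e) {
  // The rollback write may itself fail; at minimum surface it in the log.
  blockStore.writeBatch(rollbackBatch);
  LOG.error("Rolled back block deletion after addTransactions failure", e);
  throw e;  // propagate so the caller knows the delete failed
}
{code}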

in {{KeyManagerImpl.java}} deletePendingDeletionKey
{code}
  byte[] pendingDelKey = DFSUtil.string2Bytes(objectKeyName);
  if (pendingDelKey == null) {
throw new IOException("Failed to delete key " + objectKeyName
+ " because it is not found in DB");
  }
{code}
it says “not found in DB”, but this does not seem to read the DB; it only 
converts the key name to bytes. Is it missing a db.get() call or something?
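For comparison, a sketch of the existence check the question points at (the 
store handle and its get() call are assumptions, not the patch's API):
{code}
byte[] pendingDelKey = DFSUtil.string2Bytes(objectKeyName);
// Actually consult the metadata store before claiming "not found in DB";
// string2Bytes never returns null for a non-null name.
byte[] pendingDelValue = metadataStore.get(pendingDelKey);
if (pendingDelValue == null) {
  throw new IOException("Failed to delete key " + objectKeyName
      + " because it is not found in DB");
}
{code}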


> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, 
> HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch
>
>
> KSM and SCM interaction for the delete key operation: both KSM and SCM store 
> key state info in a backlog. KSM needs to scan this log and send a 
> block-deletion command to SCM; once SCM is fully aware of the message, KSM 
> removes the key completely from the namespace. See more in the design doc 
> under HDFS-11922; this is task breakdown 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151290#comment-16151290
 ] 

Yongjun Zhang commented on HDFS-12357:
--

{quote}
As in the v001 version, this is avoided.
{quote}
Not really. In the following code, we get the HDFS attributes first via 
{{INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);}}, and then get 
the external provider attributes if needed.

In v001, for a special user there is no need to get the external provider 
attributes, so we don't call {{nodeAttrs = 
attributeProvider.getAttributes(components, nodeAttrs);}}.

However, in the wrapper solution, we will still go into the {{if 
(attributeProvider != null) {}} block and call it. If 
{{attributeProvider.getAttributes}} decides to bypass the external provider, 
it has to do the same thing as {{INodeAttributes nodeAttrs = 
node.getSnapshotINode(snapshot);}} to get the HDFS version of the attributes. 
So we get the HDFS attributes twice, whereas in v001 we only get them once 
(see the sketch after the code below).

{code}
 INodeAttributes getAttributes(INodesInPath iip)
  throws FileNotFoundException {
INode node = FSDirectory.resolveLastINode(iip);
int snapshot = iip.getPathSnapshotId();
INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);
if (attributeProvider != null) {
  // permission checking sends the full components array including the
  // first empty component for the root.  however file status
  // related calls are expected to strip out the root component according
  // to TestINodeAttributeProvider.
  byte[][] components = iip.getPathComponents();
  components = Arrays.copyOfRange(components, 1, components.length);
  nodeAttrs = attributeProvider.getAttributes(components, nodeAttrs);
}
return nodeAttrs;
  }
{code}
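
For illustration, a sketch of the v001 short-circuit (the {{isBypassUser()}} 
helper is an assumption standing in for the actual check):
{code}
INodeAttributes getAttributes(INodesInPath iip)
    throws FileNotFoundException {
  INode node = FSDirectory.resolveLastINode(iip);
  int snapshot = iip.getPathSnapshotId();
  INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);
  // Bypass users skip the provider entirely, so the HDFS attributes
  // above are fetched exactly once.
  if (attributeProvider != null && !isBypassUser()) {
    byte[][] components = iip.getPathComponents();
    components = Arrays.copyOfRange(components, 1, components.length);
    nodeAttrs = attributeProvider.getAttributes(components, nodeAttrs);
  }
  return nodeAttrs;
}
{code}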


> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config, that specifies a special user (or a 
> list of users), and let NN bypass external provider when the current user is 
> a special user.
> If we run applications as the special user that need data from external 
> attribute provider, then it won't work. So the constraint on this approach 
> is, the special users here should not run applications that need data from 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151280#comment-16151280
 ] 

Chris Douglas commented on HDFS-12357:
--

v002 assumed that enforcement should always delegate to the provider, unlike 
v001 and v004 (which should be equivalent). I'm not familiar with how 
{{INodeAttributeProvider}} is used in practice, so I'll defer to you on the 
correct semantics.

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config, that specifies a special user (or a 
> list of users), and let NN bypass external provider when the current user is 
> a special user.
> If we run applications as the special user that need data from external 
> attribute provider, then it won't work. So the constraint on this approach 
> is, the special users here should not run applications that need data from 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151273#comment-16151273
 ] 

Hadoop QA commented on HDFS-11882:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.tools.TestStoragePolicyCommands |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | 

[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151272#comment-16151272
 ] 

Yongjun Zhang commented on HDFS-12357:
--

Hi [~chris.douglas],

In patch rev1, I pass null as the attributeProvider in getPermissionChecker 
when the caller is a bypass user (2.b of my earlier comments), so the external 
provider is bypassed and we don't need the {{isBypassUser}} check in {{private 
AccessControlEnforcer getAccessControlEnforcer()}} that you asked about; a 
sketch follows below.
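
As a sketch, the rev1 behavior described above looks roughly like this (the 
{{isBypassUser}} helper is hypothetical):
{code}
FSPermissionChecker getPermissionChecker(String fsOwner, String superGroup,
    UserGroupInformation ugi) throws AccessControlException {
  // Bypass users get a permission checker with no external provider,
  // so enforcement falls back to the default HDFS behavior.
  return new FSPermissionChecker(fsOwner, superGroup, ugi,
      isBypassUser(ugi) ? null : attributeProvider);
}
{code}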

I think my rev1 covered all the cases, but it's possible I missed something.

Thanks.
 


> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config, that specifies a special user (or a 
> list of users), and let NN bypass external provider when the current user is 
> a special user.
> If we run applications as the special user that need data from external 
> attribute provider, then it won't work. So the constraint on this approach 
> is, the special users here should not run applications that need data from 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151270#comment-16151270
 ] 

Hadoop QA commented on HDFS-12323:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
|   | org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12323 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885012/HDFS-12323.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  

[jira] [Commented] (HDFS-12377) Refactor TestReadStripedFileWithDecoding to avoid test timeouts

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151268#comment-16151268
 ] 

Hadoop QA commented on HDFS-12377:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 
408 unchanged - 3 fixed = 408 total (was 411) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 9 unchanged - 6 fixed = 11 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}143m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDataTransferProtocol |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery 

[jira] [Updated] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12357:
-
Attachment: HDFS-12357.004.patch

Missed a dead store

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch, HDFS-12357.004.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config, that specifies a special user (or a 
> list of users), and let NN bypass external provider when the current user is 
> a special user.
> If we run applications as the special user that need data from external 
> attribute provider, then it won't work. So the constraint on this approach 
> is, the special users here should not run applications that need data from 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12357:
-
Attachment: HDFS-12357.003.patch

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch, 
> HDFS-12357.003.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config, that specifies a special user (or a 
> list of users), and let NN bypass external provider when the current user is 
> a special user.
> If we run applications as the special user that need data from external 
> attribute provider, then it won't work. So the constraint on this approach 
> is, the special users here should not run applications that need data from 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-09-01 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12387:
---

 Summary: Ozone: Support Ratis as a first class replication 
mechanism
 Key: HDFS-12387
 URL: https://issues.apache.org/jira/browse/HDFS-12387
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer
Priority: Critical


The Ozone container layer supports pluggable replication policies. This JIRA 
brings Apache Ratis based replication to Ozone. Apache Ratis is a Java 
implementation of the Raft consensus protocol.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151246#comment-16151246
 ] 

Chris Douglas commented on HDFS-12357:
--

I wasn't sure if {{INodeAttributeProvider::getExternalAccessControlEnforcer}} 
should have respected the user list, but from 
{{FSPermissionChecker::getAccessControlEnforcer}}:
{code:java}
  private AccessControlEnforcer getAccessControlEnforcer() {
return (attributeProvider != null)
? attributeProvider.getExternalAccessControlEnforcer(this) : this;
  }
{code}
It looks like v001 should check {{isBypassUser}} and return the default if it 
matches, exactly like the other methods. Are there other cases that this needs 
to cover?
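
Something like the following sketch, with the same assumed {{isBypassUser}} 
check as elsewhere:
{code}
private AccessControlEnforcer getAccessControlEnforcer() {
  // Bypass users fall back to the default enforcer, mirroring the
  // other provider-related methods.
  return (attributeProvider != null && !isBypassUser())
      ? attributeProvider.getExternalAccessControlEnforcer(this) : this;
}
{code}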

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config, that specifies a special user (or a 
> list of users), and let NN bypass external provider when the current user is 
> a special user.
> If we run applications as the special user that need data from external 
> attribute provider, then it won't work. So the constraint on this approach 
> is, the special users here should not run applications that need data from 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151239#comment-16151239
 ] 

Chris Douglas commented on HDFS-12357:
--

bq. 1. the wrapper needs to create two provider objects, one the default 
(HDFS) and the other the external provider, and switch between the two. 
However, in the existing code, I don't see that a default provider object is 
always created
Sure, but if no external attribute provider is created, then the wrapper 
doesn't need to be created. What is the problem?
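
For illustration, the wrapper would only be instantiated when an external 
provider is configured; something like this (the class name is an assumption):
{code}
// In FSDirectory setup: wrap only when an external provider exists.
this.attributeProvider = (externalProvider == null)
    ? null
    : new FilteringAttributeProvider(externalProvider);
{code}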

bq. 2a. \[...]  The easiest way is to check if the user is a special user; 
then we don't ask for the provider's data at all. If we do this in a wrapper 
class, we always have to get some attributes, which may or may not come from 
HDFS. \[...]
As in the v001 version, this is avoided.

bq. 2b. Here we need to pass either null or the configured external 
attributeProvider to the permission checker. If we move this logic into the 
wrapper class, we need an API in the wrapper class to return the external 
provider or null
Unless this is invoked in a separate thread, doesn't the same logic apply? If 
the provider is configured, it's invoked by {{FSPermissionChecker}}; if the 
caller is a filtered user, it doesn't consult the external attribute provider.

bq. My comments are largely about the integration, which is the key part that 
you did not address in the example patch. If you'd like, would you please take 
a look?
I'll take a second pass, but I don't intend to take over the patch...

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config, that specifies a special user (or a 
> list of users), and let NN bypass external provider when the current user is 
> a special user.
> If we run applications as the special user that need data from external 
> attribute provider, then it won't work. So the constraint on this approach 
> is, the special users here should not run applications that need data from 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12373) Trying to construct Path for a file that has colon (":") throws IllegalArgumentException

2017-09-01 Thread Vlad Rozov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151231#comment-16151231
 ] 

Vlad Rozov commented on HDFS-12373:
---

{{org.apache.hadoop.fs.Path}} is not limited to a distributed file system. It 
can be used with any FileSystem, including the local file system. On a local 
file system on Linux (or Mac), the colon {{:}} is a valid/permitted character 
in file names.
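
A minimal reproduction of the failure being discussed (the exact exception 
message may differ):
{code}
import org.apache.hadoop.fs.Path;

public class ColonPathDemo {
  public static void main(String[] args) {
    // "a:b" is a legal file name on a Linux/Mac local file system, but
    // Path parses the child as a URI and throws IllegalArgumentException.
    Path p = new Path("file:/tmp", "a:b");
    System.out.println(p);
  }
}
{code}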

> Trying to construct Path for a file that has colon (":") throws 
> IllegalArgumentException
> 
>
> Key: HDFS-12373
> URL: https://issues.apache.org/jira/browse/HDFS-12373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vlad Rozov
>
> In case a file has colon in its name, org.apache.hadoop.fs.Path, can not be 
> constructed. For example, I have file "a:b" under /tmp and new 
> Path("file:/tmp", "a:b") throws IllegalArgumentException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151047#comment-16151047
 ] 

Yongjun Zhang edited comment on HDFS-12357 at 9/1/17 9:36 PM:
--

Thanks [~chris.douglas] and [~manojg].

Sorry for a lengthy reply here:

{quote}
Would a filter implementation wrapping the configured, external attribute 
provider suffice?
{quote}
The current patch implements this logic inline (like an inlined version of a 
wrapper class in the C++ world). If we put this logic into a wrapper class, I 
can see some issues:

1. The wrapper needs to create two provider objects, one the default (HDFS) 
and the other the external provider, and switch between the two. However, in 
the existing code, I don't see that a default provider object is always 
created. See 2.a below.

The default value of the following config is empty, which means no default 
provider will be created.
{code}
<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value></value>
  <description>
    Name of class to use for delegating HDFS authorization.
  </description>
</property>
{code}
Not sure whether we should have the default provider configured here.

2. Currently there are two places where we decide whether to consult the 
external attribute provider:
2.a.
{code}
  INodeAttributes getAttributes(INodesInPath iip)
  throws FileNotFoundException {
INode node = FSDirectory.resolveLastINode(iip);
int snapshot = iip.getPathSnapshotId();
INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);
if (attributeProvider != null) {
  // permission checking sends the full components array including the
  // first empty component for the root.  however file status
  // related calls are expected to strip out the root component according
  // to TestINodeAttributeProvider.
  byte[][] components = iip.getPathComponents();
  components = Arrays.copyOfRange(components, 1, components.length);
  nodeAttrs = attributeProvider.getAttributes(components, nodeAttrs);
}
return nodeAttrs;
  }
{code}
Here we have already got the attributes from HDFS, and then decide whether to 
overwrite them with the provider's data. The easiest way is to check if the 
user is a special user; then we don't ask for the provider's data at all. If 
we do this in a wrapper class, we always have to get some attributes, which 
may or may not come from HDFS. It's not a clean implementation and may incur 
runtime cost.

2.b
{code}
 @VisibleForTesting
  FSPermissionChecker getPermissionChecker(String fsOwner, String superGroup,
  UserGroupInformation ugi) throws AccessControlException {
return new FSPermissionChecker(
fsOwner, superGroup, ugi, attributeProvider);
  }
{code}
Here we need to pass either null or the configured external attributeProvider 
to the permission checker. If we move this logic into the wrapper class, we 
need an API in the wrapper class to return the external provider or null, and 
pass that to the "attributeProvider" parameter in the above code, like
{code}
return new FSPermissionChecker(
    fsOwner, superGroup, ugi, attributeProvider.getRealAttributeProvider());
{code}
We would need to add this getRealAttributeProvider() API to the base provider 
class, which is a bit weird because the API is only meaningful in the wrapper 
layer. And changing the provider API is what we are trying to avoid here.
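
For concreteness, a sketch of what that wrapper API might look like (all names 
here are illustrative assumptions):
{code}
class FilteringAttributeProvider extends INodeAttributeProvider {
  private final INodeAttributeProvider external;

  FilteringAttributeProvider(INodeAttributeProvider external) {
    this.external = external;
  }

  /** Returns the external provider, or null when the caller is a bypass user. */
  INodeAttributeProvider getRealAttributeProvider() {
    return isBypassUser() ? null : external;
  }

  // start()/stop()/getAttributes() would delegate to 'external' (elided).
}
{code}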

Thoughts?

Thanks.




[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151189#comment-16151189
 ] 

Yongjun Zhang commented on HDFS-12357:
--

HI [~chris.douglas],

Sorry, I did not see your latest comment and had even uploaded a revised patch 
when I made the earlier comments. Thanks much for doing that.

It seems my last comments still apply. They are largely about the integration, 
which is the key part that the example patch did not address. If you'd like, 
would you please take a look?

Thanks.


> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config, that specifies a special user (or a 
> list of users), and let NN bypass external provider when the current user is 
> a special user.
> If we run applications as the special user that need data from external 
> attribute provider, then it won't work. So the constraint on this approach 
> is, the special users here should not run applications that need data from 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-09-01 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151164#comment-16151164
 ] 

Xiaoyu Yao commented on HDFS-12000:
---

Thanks [~vagarychen] for working on this. The patch looks good to me overall. I 
just have a few comments below.

KeySpaceManagerProtocol.proto
Line 237: should createVersion be optional?
Line 241: KeyLocationListVersioned -> KeyLocationList or KeyLocations
Line 252: can we add the latestVersion to the end? 

KeyManager.java
Line 32: Can you add more comments on the sequence of 
allocateBlock->write->commitBlock?
Line 42: "Give a request size, a key and a version" I did not find the version 
from the input
parameter. Can you clarify?

KeyManagerImpl.java
Line 115: should we use Time.monotonicNow()?
Line 173: NIT: keykey-> objectKey

ChunkgroupOutputStream.java
Line 281: Please file followup JIRAs for TODO

KsmKeyInfo.java
Line 41: should we use AtomicLong for latestVersion? 

Line 81: NIT: getLatestVersionList -> getLatestLocations; can we modify this 
function to take the version as a parameter, so that it can be reused later 
when versions other than the latest are fully supported?
Line 82: version -> location
Line 107: Can you add some javadoc on the latestVersion update?
Since we only update the latestVersion at commit time, there could be multiple 
clients that allocate blocks before the first commit is done. Once the first 
commit is done, all of those earlier allocations will be invalid to commit due 
to the logic between lines 109-115 (see the sketch after these comments).
 
KsmKeyLocationListVersioned.java
NIT: To be consistent with the change in KsmKeyInfo, can we rename this to 
KsmKeyLocationList.java, which always includes the version?
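
To illustrate the multi-client concern on the latestVersion update, a hedged 
sketch of a commit-time version guard (names and structure are assumptions, 
not the patch):
{code}
synchronized void commitKey(KsmKeyInfo keyInfo, long allocatedAtVersion)
    throws IOException {
  // Only an allocation made against the current latest version may commit;
  // anything allocated before an intervening commit becomes stale.
  if (allocatedAtVersion != keyInfo.getLatestVersion()) {
    throw new IOException("Stale allocation: version " + allocatedAtVersion
        + " is no longer the latest (" + keyInfo.getLatestVersion() + ")");
  }
  keyInfo.setLatestVersion(allocatedAtVersion + 1);
}
{code}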



> Ozone: Container : Add key versioning support-1
> ---
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
> Attachments: HDFS-12000-HDFS-7240.001.patch, 
> HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, 
> OzoneVersion.001.pdf
>
>
> The rest interface of ozone supports versioning of keys. This support comes 
> from the containers and how chunks are managed to support this feature. This 
> JIRA tracks that feature. Will post a detailed design doc so that we can talk 
> about this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-01 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151156#comment-16151156
 ] 

Anu Engineer commented on HDFS-12235:
-

[~cheersyang] I will take a look next week, but if your testing is done, feel 
free to commit. [~vagarychen] or [~xyao] will be able to comment before me. If 
I have any comments we can always file follow-up JIRAs. [~cheersyang] and 
[~yuanbo], thanks for getting this critical piece of ozone work done.

> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, 
> HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch
>
>
> KSM and SCM interaction for delete key operation, both KSM and SCM stores key 
> state info in a backlog, KSM needs to scan this log and send block-deletion 
> command to SCM, once SCM is fully aware of the message, KSM removes the key 
> completely from namespace. See more from the design doc under HDFS-11922, 
> this is task break down 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11356) figure out what to do about hadoop-hdfs-project/hadoop-hdfs/src/main/native

2017-09-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11356:
---
Priority: Major  (was: Critical)

> figure out what to do about hadoop-hdfs-project/hadoop-hdfs/src/main/native
> ---
>
> Key: HDFS-11356
> URL: https://issues.apache.org/jira/browse/HDFS-11356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11356.001.patch
>
>
> The move of code to hdfs-client-native caused all sorts of loose ends, and 
> this is just another one.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151121#comment-16151121
 ] 

Hadoop QA commented on HDFS-12357:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  1s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 4 new + 411 unchanged - 
0 fixed = 415 total (was 411) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 466 unchanged - 0 fixed = 471 total (was 466) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Write to static field 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.usersToBypassExtAttrProvider 
from instance method new 
org.apache.hadoop.hdfs.server.namenode.FSDirectory(FSNamesystem, Configuration) 
 At FSDirectory.java:from instance method new 
org.apache.hadoop.hdfs.server.namenode.FSDirectory(FSNamesystem, Configuration) 
 At FSDirectory.java:[line 371] |
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.tools.TestDebugAdmin |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
| Timed out junit tests | 

[jira] [Updated] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-09-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11882:
---
Attachment: HDFS-11882.06.patch

Thanks again Kai for reviewing; the attached patch, I believe, addresses all 
the feedback and the checkstyle issue.

> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Andrew Wang
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.03.patch, HDFS-11882.04.patch, HDFS-11882.05.patch, 
> HDFS-11882.06.patch, HDFS-11882.regressiontest.patch
>
>
> Some tests of erasure coding fails by the following exception. The following 
> test was removed by HDFS-11823, however, this type of error can happen in 
> real cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec  <<< ERROR!
> java.lang.IllegalStateException: null
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout

2017-09-01 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12323:
---
Attachment: HDFS-12323.000.patch

Attaching v000 patch, which solves the issue by measuring the estimated pause 
time and extending the end (timeout) time by that amount, rather than by a 
fixed initial wait time.

I have a test prepared as well, but it relies on {{StopWatch}} being 
controllable during a test; I filed HADOOP-14827, for which I've already 
submitted a patch. After that goes through I will attach the patch with the 
test, but will refrain from doing so for now to avoid upsetting Jenkins.

> NameNode terminates after full GC thinking QJM unresponsive if full GC is 
> much longer than timeout
> --
>
> Key: HDFS-12323
> URL: https://issues.apache.org/jira/browse/HDFS-12323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Affects Versions: 2.7.4
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12323.000.patch
>
>
> HDFS-10733 attempted to fix the issue where the Namenode process would 
> terminate itself if it had a GC pause which lasted longer than the QJM 
> timeout, since it would think that the QJM had taken too long to respond. 
> However, it only bumps up the timeout expiration by one timeout length, so if 
> the GC pause was e.g. 2x the length of the timeout, a TimeoutException will 
> be thrown and the NN will still terminate itself.
> Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we 
> have also seen this issue on a real cluster even after HDFS-10733 is applied.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout

2017-09-01 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12323:
---
Status: Patch Available  (was: In Progress)

> NameNode terminates after full GC thinking QJM unresponsive if full GC is 
> much longer than timeout
> --
>
> Key: HDFS-12323
> URL: https://issues.apache.org/jira/browse/HDFS-12323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Affects Versions: 2.7.4
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12323.000.patch
>
>
> HDFS-10733 attempted to fix the issue where the Namenode process would 
> terminate itself if it had a GC pause which lasted longer than the QJM 
> timeout, since it would think that the QJM had taken too long to respond. 
> However, it only bumps up the timeout expiration by one timeout length, so if 
> the GC pause was e.g. 2x the length of the timeout, a TimeoutException will 
> be thrown and the NN will still terminate itself.
> Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we 
> have also seen this issue on a real cluster even after HDFS-10733 is applied.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout

2017-09-01 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12323 started by Erik Krogen.
--
> NameNode terminates after full GC thinking QJM unresponsive if full GC is 
> much longer than timeout
> --
>
> Key: HDFS-12323
> URL: https://issues.apache.org/jira/browse/HDFS-12323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Affects Versions: 2.7.4
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>
> HDFS-10733 attempted to fix the issue where the Namenode process would 
> terminate itself if it had a GC pause which lasted longer than the QJM 
> timeout, since it would think that the QJM had taken too long to respond. 
> However, it only bumps up the timeout expiration by one timeout length, so if 
> the GC pause was e.g. 2x the length of the timeout, a TimeoutException will 
> be thrown and the NN will still terminate itself.
> Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we 
> have also seen this issue on a real cluster even after HDFS-10733 is applied.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12377) Refactor TestReadStripedFileWithDecoding to avoid test timeouts

2017-09-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12377:
---
Attachment: HDFS-12377.003.patch

Thanks Xiao, new patch attached that parameterizes the additional test. Fixed 
the checkstyle issues and made some other small improvements along the way.
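Roughly, the parameterized structure looks like this (a sketch, not the attached patch; class and parameter names are illustrative):
{code}
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Sketch: each parameter value becomes its own test case, so no single
// test method has to iterate all configurations under one timeout.
@RunWith(Parameterized.class)
public class TestReadStripedFileExample {
  @Parameters(name = "failedDataNodes={0}")
  public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] {{1}, {2}, {3}});
  }

  private final int failedDataNodes;

  public TestReadStripedFileExample(int failedDataNodes) {
    this.failedDataNodes = failedDataNodes;
  }

  @Test
  public void testReadWithDNFailure() throws Exception {
    // Run the read-with-decoding scenario for this.failedDataNodes here.
  }
}
{code}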

I think the testBlockTokenExpired failures are new, and they seem to fail quite 
reproducibly. I haven't had time to bisect it, but agree that it seems recent.

The EC tests in general seem very flaky; we need a push to stabilize them 
(particularly before GA).

> Refactor TestReadStripedFileWithDecoding to avoid test timeouts
> ---
>
> Key: HDFS-12377
> URL: https://issues.apache.org/jira/browse/HDFS-12377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-12377.001.patch, HDFS-12377.002.patch, 
> HDFS-12377.003.patch
>
>
> This test times out since the nested for loops mean it runs 12 
> configurations inside each test method.
> Let's refactor this to use JUnit parameters instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.

2017-09-01 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-12386:
-

 Summary: Add fsserver defaults call to WebhdfsFileSystem.
 Key: HDFS-12386
 URL: https://issues.apache.org/jira/browse/HDFS-12386
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151047#comment-16151047
 ] 

Yongjun Zhang edited comment on HDFS-12357 at 9/1/17 7:32 PM:
--

Thanks [~chris.douglas] and [~manojg].

Sorry for a lengthy reply here:

{quote}
Would a filter implementation wrapping the configured, external attribute 
provider suffice?
{quote}
The current patch implements this logic inline (like an inlined version of the 
wrapper class in the C++ world). If we move this logic into a wrapper class, I 
can see some issues:

1. The wrapper needs to create two provider objects, one being the default 
(HDFS) and the other the external provider, and switch between the two. 
However, in the existing code, I don't see that a default provider object is 
always created. See 2.a below.

The default value of the following config is empty, which means no default 
provider will be created.
{code}
<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value></value>
  <description>
    Name of class to use for delegating HDFS authorization.
  </description>
</property>
{code}
Not sure whether we should have the default provider configured here.

2. Currently there are two places that decide whether to consult the external 
attribute provider:
2.a.
{code}
  INodeAttributes getAttributes(INodesInPath iip)
  throws FileNotFoundException {
INode node = FSDirectory.resolveLastINode(iip);
int snapshot = iip.getPathSnapshotId();
INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);
if (attributeProvider != null) {
  // permission checking sends the full components array including the
  // first empty component for the root.  however file status
  // related calls are expected to strip out the root component according
  // to TestINodeAttributeProvider.
  byte[][] components = iip.getPathComponents();
  components = Arrays.copyOfRange(components, 1, components.length);
  nodeAttrs = attributeProvider.getAttributes(components, nodeAttrs);
}
return nodeAttrs;
  }
{code}
Here we already got the attributes from HDFS, and then we decide whether to 
overwrite them with the provider's data. The easiest way is to check whether 
the user is a special user; if so, we don't ask for the provider's data at 
all. If we do this in a wrapper class, we always have to get some attributes 
first, which may or may not come from HDFS. It's not a clean implementation 
and may incur runtime cost.

2.b
{code}
 @VisibleForTesting
  FSPermissionChecker getPermissionChecker(String fsOwner, String superGroup,
  UserGroupInformation ugi) throws AccessControlException {
return new FSPermissionChecker(
fsOwner, superGroup, ugi, attributeProvider);
  }
{code}
Here we need to pass either null or the configured external attributeProvider 
to the permission checker. If we put this logic into the wrapper, we need an 
API in the wrapper class to return the external provider or null, and pass it 
as the "attributeProvider" parameter in the above code, like:
{code}
return new FSPermissionChecker(
fsOwner, superGroup, ugi, attributeProvider.getRealAttributeProvider());
{code}
We need to add this getRealAttributeProvider() API to the base provider class, 
which is a bit weird because this API is only meaningful in the wrapper layer.

Thoughts?

Thanks.



was (Author: yzhangal):
Thanks [~chris.douglas] and [~manojg].

Sorry for a lengthy reply here:

{quote}
Would a filter implementation wrapping the configured, external attribute 
provider suffice?
{quote}
The current patch implements this logic inline (like an inlined version of the 
wrapper class in the C++ world). If we move this logic into a wrapper class, I 
can see some issues:

1. The wrapper needs to create two provider objects, one being the default 
(HDFS) and the other the external provider, and switch between the two. 
However, in the existing code, I don't see that a default provider object is 
always created. See 2.a below.

2. Currently there are two places that decide whether to consult the external 
attribute provider:
2.a.
{code}
  INodeAttributes getAttributes(INodesInPath iip)
  throws FileNotFoundException {
INode node = FSDirectory.resolveLastINode(iip);
int snapshot = iip.getPathSnapshotId();
INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);
if (attributeProvider != null) {
  // permission checking sends the full components array including the
  // first empty component for the root.  however file status
  // related calls are expected to strip out the root component according
  // to TestINodeAttributeProvider.
  byte[][] components = iip.getPathComponents();
  components = Arrays.copyOfRange(components, 1, components.length);
  nodeAttrs = attributeProvider.getAttributes(components, nodeAttrs);
}
return nodeAttrs;
  }
{code}
Here we already got the attributes from HDFS, and then we decide whether to 
overwrite them with the provider's data. The easiest 

[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151047#comment-16151047
 ] 

Yongjun Zhang commented on HDFS-12357:
--

Thanks [~chris.douglas] and [~manojg].

Sorry for a lengthy reply here:

{quote}
Would a filter implementation wrapping the configured, external attribute 
provider suffice?
{quote}
The current patch implements this logic inline (like an inlined version of the 
wrapper class in the C++ world). If we move this logic into a wrapper class, I 
can see some issues:

1. The wrapper needs to create two provider objects, one being the default 
(HDFS) and the other the external provider, and switch between the two. 
However, in the existing code, I don't see that a default provider object is 
always created. See 2.a below.

2. Currently there are two places that decide whether to consult the external 
attribute provider:
2.a.
{code}
  INodeAttributes getAttributes(INodesInPath iip)
  throws FileNotFoundException {
INode node = FSDirectory.resolveLastINode(iip);
int snapshot = iip.getPathSnapshotId();
INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);
if (attributeProvider != null) {
  // permission checking sends the full components array including the
  // first empty component for the root.  however file status
  // related calls are expected to strip out the root component according
  // to TestINodeAttributeProvider.
  byte[][] components = iip.getPathComponents();
  components = Arrays.copyOfRange(components, 1, components.length);
  nodeAttrs = attributeProvider.getAttributes(components, nodeAttrs);
}
return nodeAttrs;
  }
{code}
Here we already got the attributes from HDFS, and then we decide whether to 
overwrite them with the provider's data. The easiest way is to check whether 
the user is a special user; if so, we don't ask for the provider's data at 
all. If we do this in a wrapper class, we always have to get some attributes 
first, which may or may not come from HDFS. It's not a clean implementation 
and may incur runtime cost.

2.b
{code}
 @VisibleForTesting
  FSPermissionChecker getPermissionChecker(String fsOwner, String superGroup,
  UserGroupInformation ugi) throws AccessControlException {
return new FSPermissionChecker(
fsOwner, superGroup, ugi, attributeProvider);
  }
{code}
Here we need to pass either null or the configured external attributeProvider 
to the permission checker. If we put this logic into the wrapper, we need an 
API in the wrapper class to return the external provider or null, and pass it 
as the "attributeProvider" parameter in the above code, like:
{code}
return new FSPermissionChecker(
fsOwner, superGroup, ugi, attributeProvider.getRealAttributeProvider());
{code}
We need to add this getRealAttributeProvider() API to the base provider class, 
which is a bit weird because this API is only meaningful in the wrapper layer.

Thoughts?

Thanks.
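To make 2.a concrete, here is a minimal sketch of the bypass check (the helper name is illustrative; {{usersToBypassExtAttrProvider}} is the config-backed set from the current patch):
{code}
// Sketch only, not the final patch: skip the external provider entirely
// when the caller is one of the configured special users.
INodeAttributes getAttributes(INodesInPath iip) throws FileNotFoundException {
  INode node = FSDirectory.resolveLastINode(iip);
  int snapshot = iip.getPathSnapshotId();
  INodeAttributes nodeAttrs = node.getSnapshotINode(snapshot);
  if (attributeProvider != null && !isUserToBypass()) {
    byte[][] components = iip.getPathComponents();
    components = Arrays.copyOfRange(components, 1, components.length);
    nodeAttrs = attributeProvider.getAttributes(components, nodeAttrs);
  }
  return nodeAttrs;
}

private boolean isUserToBypass() {
  try {
    String user = NameNode.getRemoteUser().getShortUserName();
    return usersToBypassExtAttrProvider.contains(user);
  } catch (IOException e) {
    return false; // unknown caller: keep consulting the provider
  }
}
{code}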


> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is a special user.
> If applications that need data from the external attribute provider run as 
> the special user, it won't work. So the constraint on this approach is that 
> the special users here should not run applications that need data from the 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, 

[jira] [Updated] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12357:
-
Attachment: HDFS-12357.002.patch

An example moving this to a filter provider; no integration or tests yet, 
apart from a test verifying that the filter overrides all the methods of 
{{INodeAttributeProvider}}. Would this work?
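For readers following the thread, the filter idea is roughly the following (a sketch with illustrative names, not the attached patch):
{code}
import java.io.IOException;
import java.util.Set;

// Sketch: a filtering INodeAttributeProvider that delegates to the
// configured external provider, except for bypass users, for whom it
// returns the plain HDFS attributes unchanged.
class BypassFilterAttributeProvider extends INodeAttributeProvider {
  private final INodeAttributeProvider external;
  private final Set<String> bypassUsers;

  BypassFilterAttributeProvider(INodeAttributeProvider external,
      Set<String> bypassUsers) {
    this.external = external;
    this.bypassUsers = bypassUsers;
  }

  @Override
  public void start() { external.start(); }

  @Override
  public void stop() { external.stop(); }

  @Override
  public INodeAttributes getAttributes(String[] pathElements,
      INodeAttributes inode) {
    // Returning the inode as-is yields HDFS's own attributes.
    return shouldBypass() ? inode
        : external.getAttributes(pathElements, inode);
  }

  private boolean shouldBypass() {
    try {
      return bypassUsers.contains(
          NameNode.getRemoteUser().getShortUserName());
    } catch (IOException e) {
      return false;
    }
  }
}
{code}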

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch, HDFS-12357.002.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is a special user.
> If applications that need data from the external attribute provider run as 
> the special user, it won't work. So the constraint on this approach is that 
> the special users here should not run applications that need data from the 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12381) [Documentation] Adding configuration keys for the Router

2017-09-01 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151000#comment-16151000
 ] 

Manoj Govindassamy commented on HDFS-12381:
---

Thanks [~goiri]. The previously attached patch has good details. The 
additional config-related items will be very useful.

> [Documentation] Adding configuration keys for the Router
> 
>
> Key: HDFS-12381
> URL: https://issues.apache.org/jira/browse/HDFS-12381
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: HDFS-10467
>
> Attachments: HDFS-12381-HDFS-10467.000.patch
>
>
> Adding configuration options in tabular format.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12359) Re-encryption should operate with minimum KMS ACL requirements.

2017-09-01 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150976#comment-16150976
 ] 

Wei-Chiu Chuang commented on HDFS-12359:


Hi [~xiaochen] thanks again for the patch.

Would you please add a test to demonstrate:
bq. We should fix re-encryption to not require additional ACLs than original 
encryption.

I would expect a test that fails if the patch is reverted.

> Re-encryption should operate with minimum KMS ACL requirements.
> ---
>
> Key: HDFS-12359
> URL: https://issues.apache.org/jira/browse/HDFS-12359
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12359.01.patch, HDFS-12359.02.patch
>
>
> This was caught from KMS acl testing.
> HDFS-10899 gets the current key versions from KMS directly, which requires 
> {{READ}} acls.
> It also calls invalidateCache, which requires {{MANAGEMENT}} acls.
> We should fix re-encryption to not require additional ACLs than original 
> encryption.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150921#comment-16150921
 ] 

Manoj Govindassamy commented on HDFS-12357:
---

Thanks for working on this [~yzhangal]. Thanks [~chris.douglas] for your review 
and comments.

I believe the motive here is to strictly not return any external provider 
attributes for certain users. Tools like DistCp can call listFileStatus() as 
this special user to get the plain/standalone HDFS attributes, which can then 
be _safely_ copied to a remote HDFS. We might not want tools like DistCp to 
copy external attributes to HDFS. 

Now, this knob/control for returning external attributes can be given either 
to HDFS or to the external provider. While having all the logic for returning 
the right set of attributes in a single place, such as the provider, does 
sound like a very good idea, there is still a gap in the design. If I 
understand the problem rightly, the choice of whether to contact the external 
attribute provider or fall back to the local default needs to be given to 
HDFS, so as to be totally sure the right set of attributes is returned. This 
guarantee may not be established if the control is placed at the external 
provider.


> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is a special user.
> If applications that need data from the external attribute provider run as 
> the special user, it won't work. So the constraint on this approach is that 
> the special users here should not run applications that need data from the 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150890#comment-16150890
 ] 

Chris Douglas commented on HDFS-12357:
--

bq. we can implement the same logic in the provider. However, that means all 
different providers (sentry, ranger etc) need to be fixed accordingly
Would a filter implementation wrapping the configured, external attribute 
provider suffice?

> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is a special user.
> If applications that need data from the external attribute provider run as 
> the special user, it won't work. So the constraint on this approach is that 
> the special users here should not run applications that need data from the 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12339) NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150881#comment-16150881
 ] 

Hadoop QA commented on HDFS-12339:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
5s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12339 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884974/HDFS-12339.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 187cc9b46844 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 99a7f5d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20969/testReport/ |
| modules | C: hadoop-common-project/hadoop-nfs 
hadoop-hdfs-project/hadoop-hdfs-nfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20969/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister 
> with rpcbind Portmapper
> 

[jira] [Resolved] (HDFS-11515) -du throws ConcurrentModificationException

2017-09-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-11515.

  Resolution: Invalid
Target Version/s: 2.8.1, 3.0.0-beta1  (was: 3.0.0-beta1, 2.8.1)

Agree, let's resolve this one too. Thanks Istvan.

> -du throws ConcurrentModificationException
> --
>
> Key: HDFS-11515
> URL: https://issues.apache.org/jira/browse/HDFS-11515
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, shell
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Istvan Fajth
> Attachments: HDFS-11515.001.patch, HDFS-11515.002.patch, 
> HDFS-11515.003.patch, HDFS-11515.004.patch, HDFS-11515.test.patch
>
>
> HDFS-10797 fixed a disk summary (-du) bug, but it introduced a new bug.
> The bug can be reproduced running the following commands:
> {noformat}
> bash-4.1$ hdfs dfs -mkdir /tmp/d0
> bash-4.1$ hdfs dfsadmin -allowSnapshot /tmp/d0
> Allowing snaphot on /tmp/d0 succeeded
> bash-4.1$ hdfs dfs -touchz /tmp/d0/f4
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1
> bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s1
> Created snapshot /tmp/d0/.snapshot/s1
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2/d4
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3/d5
> bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s2
> Created snapshot /tmp/d0/.snapshot/s2
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2/d4
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3/d5
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3
> bash-4.1$ hdfs dfs -du -h /tmp/d0
> du: java.util.ConcurrentModificationException
> 0 0 /tmp/d0/f4
> {noformat}
> A ConcurrentModificationException forced du to terminate abruptly.
> Correspondingly, NameNode log has the following error:
> {noformat}
> 2017-03-08 14:32:17,673 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 4 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getContentSumma
> ry from 10.0.0.198:49957 Call#2 Retry#0
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
> at java.util.HashMap$KeyIterator.next(HashMap.java:956)
> at 
> org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext.tallyDeletedSnapshottedINodes(ContentSummaryComputationContext.java:209)
> at 
> org.apache.hadoop.hdfs.server.namenode.INode.computeAndConvertContentSummary(INode.java:507)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getContentSummary(FSDirectory.java:2302)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:4535)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1087)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getContentSummary(AuthorizationProviderProxyClientProtocol.java:5
> 63)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.jav
> a:873)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
> {noformat}
> The bug is due to an improper use of HashSet, not concurrent operations. 
> Basically, a HashSet cannot be updated while an iterator is traversing it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12310) [SPS]: Provide an option to track the status of in progress requests

2017-09-01 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150862#comment-16150862
 ] 

Uma Maheswara Rao G commented on HDFS-12310:


When I had a discussion with [~andrew.wong], he suggested having at least a 
status command like 'setrep -w'.
Here is the proposal:
  Add a subcommand that waits for the SPS work to finish, e.g.: hdfs 
storagepolicies -satisfyStoragePolicy -path <path> -w
Here -w will make the call wait and check the progress of the SPS work; the 
call could block for a long time, until the work finishes. We will add a 
client API to get the SatisfierStatus of the SPS work. SatisfierStatus could 
contain the status of failed/retried items and an overall state: IN_PROGRESS; 
NOT_AVAILABLE (if the SPS work finished and all the information was cleaned 
up, or no SPS call was made on that directory/file); SUCCESS (once the work 
finishes, we could cache the status for a certain amount of time, say 1 min 
or 5 mins; after this period, the status would turn to NOT_AVAILABLE).
On the command line, '-w' will make use of the above API and wait at the 
client side.

Any comments?
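For illustration only, the status API and the client-side wait could look roughly like this; everything below is hypothetical and just mirrors the proposal above, not an existing API:
{code}
import java.io.IOException;

// Hypothetical sketch of the proposed SatisfierStatus API and '-w' wait.
enum SatisfierStatus { IN_PROGRESS, SUCCESS, NOT_AVAILABLE }

interface SpsStatusSource {
  SatisfierStatus getSatisfierStatus(String path) throws IOException;
}

final class SpsWaiter {
  // Poll until the SPS work for 'path' leaves IN_PROGRESS. SUCCESS is
  // cached on the NN only for a bounded time, after which it turns into
  // NOT_AVAILABLE, so callers should treat both as terminal.
  static SatisfierStatus waitFor(SpsStatusSource nn, String path, long pollMs)
      throws IOException, InterruptedException {
    while (true) {
      SatisfierStatus s = nn.getSatisfierStatus(path);
      if (s != SatisfierStatus.IN_PROGRESS) {
        return s;
      }
      Thread.sleep(pollMs);
    }
  }
}
{code}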

> [SPS]: Provide an option to track the status of in progress requests
> 
>
> Key: HDFS-12310
> URL: https://issues.apache.org/jira/browse/HDFS-12310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>
> As per [~andrew.wang]'s review comments in HDFS-10285, this is the JIRA for 
> tracking the options for how we track the progress of SPS requests.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12339) NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper

2017-09-01 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150826#comment-16150826
 ] 

Mukul Kumar Singh commented on HDFS-12339:
--

The issue in the current code is that for {{PrivilegedNfsGatewayStarter}}, the 
registration socket is closed as part of destroy.

{code}
  @Override
  public void destroy() {
if (registrationSocket != null && !registrationSocket.isClosed()) {
  registrationSocket.close();
}
  }
{code}

Whereas the shutdown hook later uses this closed socket to unregister the 
services, hence causing the error in the logs.
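One possible shape of a fix, as a sketch (the unregistration helper below is hypothetical and stands in for the shutdown hook's portmap call; see the attached patch for the actual change):
{code}
import java.io.IOException;
import java.net.DatagramSocket;

// Sketch only: unregister while the registration socket is still open,
// and close the socket afterwards.
abstract class RegistrationLifecycle {
  protected DatagramSocket registrationSocket;

  /** Hypothetical stand-in for the portmap unregistration RPC. */
  protected abstract void unregisterWithPortmap(DatagramSocket socket)
      throws IOException;

  public void destroy() throws IOException {
    if (registrationSocket != null && !registrationSocket.isClosed()) {
      try {
        // The unregistration RPC needs an open socket, so it must run
        // before close().
        unregisterWithPortmap(registrationSocket);
      } finally {
        registrationSocket.close();
      }
    }
  }
}
{code}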

> NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister 
> with rpcbind Portmapper
> -
>
> Key: HDFS-12339
> URL: https://issues.apache.org/jira/browse/HDFS-12339
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-12339.001.patch
>
>
> When stopping NFS Gateway the following error is thrown in the NFS gateway 
> role logs.
> 2017-08-17 18:09:16,529 ERROR org.apache.hadoop.oncrpc.RpcProgram: 
> Unregistration failure with localhost:2049, portmap entry: 
> (PortmapMapping-100003:3:6:2049)
> 2017-08-17 18:09:16,531 WARN org.apache.hadoop.util.ShutdownHookManager: 
> ShutdownHook 'NfsShutdownHook' failed, java.lang.RuntimeException: 
> Unregistration failure
> java.lang.RuntimeException: Unregistration failure
> ..
> Caused by: java.net.SocketException: Socket is closed
> at java.net.DatagramSocket.send(DatagramSocket.java:641)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62)
> Checking rpcinfo -p : the following entry is still there:
> " 13 3 tcp 2049 nfs"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12357) Let NameNode to bypass external attribute provider for special user

2017-09-01 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150819#comment-16150819
 ] 

Yongjun Zhang commented on HDFS-12357:
--

HI [~chris.douglas],

Thanks a lot for your comment.

Some thoughts:

- I assumed that the external attribute provider is not expected to have 
knowledge of the NameNode; is this not the case?
- I agree that if we call NameNode.getRemoteUser in the external provider, we 
can implement the same logic in the provider. However, that means all the 
different providers (Sentry, Ranger, etc.) need to be fixed accordingly, 
otherwise we will get unexpected results. Is this what we want to do?
- The problem here is to decide whether to consult the external provider 
based on the user, not on a user/path combination. So it seems clearer to let 
the NN decide whether to consult the external provider. If we let the 
provider decide and there is a bug in the provider, we will get unexpected 
results.
- Operation-wise, changing all the providers' implementations and updating 
clusters is more expensive.

What do you think about these points?

Thanks.



> Let NameNode to bypass external attribute provider for special user
> ---
>
> Key: HDFS-12357
> URL: https://issues.apache.org/jira/browse/HDFS-12357
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12357.001.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the 
> same cluster), in addition to copying file data, we copy the metadata from 
> source to target. If external attribute provider is enabled, the metadata may 
> be read from the provider, thus provider data read from source may be saved 
> to target HDFS. 
> We want to avoid saving metadata from external provider to HDFS, so we want 
> to bypass external provider when doing the distcp (or hadoop fs -cp) 
> operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the 
> other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config that specifies a special user (or a 
> list of users), and let the NN bypass the external provider when the current 
> user is a special user.
> If applications that need data from the external attribute provider run as 
> the special user, it won't work. So the constraint on this approach is that 
> the special users here should not run applications that need data from the 
> external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], 
> [~manojg] for the discussions in the other jiras. 
> I'm creating this one to discuss further.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir

2017-09-01 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150812#comment-16150812
 ] 

Uma Maheswara Rao G commented on HDFS-12291:


Hi [~xiaochen], thank you for the detailed review!
{quote}
The most headache for re-encryption was renames.
The difficulty here is to guarantee no inodes are lost in the iteration.
{quote}
We thought about this scenario. To keep it simple at this stage, how about we 
add the constraint in SPS that SPS must be called again for every rename op? 
Later, when we enable automatic SPS, this should be handled automatically. 
For any rename, the user has to call SPS on the destination to trigger it. 
Irrespective of whether the directory is already under an SPS call, it should 
be fine to have users call SPS, because if multiple users are working on some 
directory tree, one user may not know whether SPS was already called on the 
parent. So it is recommended to call SPS for rename ops (see the example 
below); once we enable automatic SPS later, all renames will be tracked 
automatically. Does this make sense?
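For example (paths are illustrative), after a rename the user would re-issue the SPS call on the destination:
{noformat}
hdfs dfs -mv /warm/dataset1 /cold/dataset1
hdfs storagepolicies -satisfyStoragePolicy -path /cold/dataset1
{noformat}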

> [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy 
> of all the files under the given dir
> -
>
> Key: HDFS-12291
> URL: https://issues.apache.org/jira/browse/HDFS-12291
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12291-HDFS-10285-01.patch, 
> HDFS-12291-HDFS-10285-02.patch
>
>
> For the given source path directory, presently SPS consider only the files 
> immediately under the directory(only one level of scanning) for satisfying 
> the policy. It WON’T do recursive directory scanning and then schedules SPS 
> tasks to satisfy the storage policy of all the files till the leaf node. 
> The idea of this jira is to discuss & implement an efficient recursive 
> directory iteration mechanism and satisfies storage policy for all the files 
> under the given directory.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12385) Ozone: OzoneClient: Refactoring of OzoneClient API

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150809#comment-16150809
 ] 

Hadoop QA commented on HDFS-12385:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.ozone.client.rpc.RpcProtocol.createBucket(String, String, 
BucketArgs)  At RpcProtocol.java:then immediately reboxed in 
org.apache.hadoop.ozone.client.rpc.RpcProtocol.createBucket(String, String, 
BucketArgs)  At RpcProtocol.java:[line 265] |
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12385 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884955/HDFS-12385-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 2d4a10c14938 

[jira] [Commented] (HDFS-11515) -du throws ConcurrentModificationException

2017-09-01 Thread Istvan Fajth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150796#comment-16150796
 ] 

Istvan Fajth commented on HDFS-11515:
-

Hi [~wchevreuil],

thank you for the hints; that could have been a solution as well. I believe I 
was afraid the structure could be large and did not want to duplicate it, and 
I was also not sure what could happen, since the problem stemmed from the fact 
that the deleted inodes were not discovered recursively in the tree, as I 
remember, so with a copy we would have INodes left out of the calculation.
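For reference, the underlying failure mode is easy to reproduce in isolation (a minimal standalone example, unrelated to the HDFS code):
{code}
import java.util.HashSet;
import java.util.Set;

// Mutating a HashSet while iterating over it throws
// ConcurrentModificationException even in a single thread.
public class CmeDemo {
  public static void main(String[] args) {
    Set<String> inodes = new HashSet<>();
    inodes.add("d2");
    inodes.add("d3");
    for (String name : inodes) {
      inodes.add(name + "-child"); // fails on the next iterator step
    }
  }
}
{code}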

[~andrew.wang] I am not sure why this issue was reopened, could you please 
clarify? As far as I know, HDFS-10797 was reverted, along with my patch here, 
due to HDFS-11661, and I do not see anything we need to do with this one; 
after reverting HDFS-10797, this more or less becomes a "Not a bug" or a 
"Won't fix", I think.

> -du throws ConcurrentModificationException
> --
>
> Key: HDFS-11515
> URL: https://issues.apache.org/jira/browse/HDFS-11515
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, shell
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Istvan Fajth
> Attachments: HDFS-11515.001.patch, HDFS-11515.002.patch, 
> HDFS-11515.003.patch, HDFS-11515.004.patch, HDFS-11515.test.patch
>
>
> HDFS-10797 fixed a disk summary (-du) bug, but it introduced a new bug.
> The bug can be reproduced running the following commands:
> {noformat}
> bash-4.1$ hdfs dfs -mkdir /tmp/d0
> bash-4.1$ hdfs dfsadmin -allowSnapshot /tmp/d0
> Allowing snaphot on /tmp/d0 succeeded
> bash-4.1$ hdfs dfs -touchz /tmp/d0/f4
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1
> bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s1
> Created snapshot /tmp/d0/.snapshot/s1
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2/d4
> bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3/d5
> bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s2
> Created snapshot /tmp/d0/.snapshot/s2
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2/d4
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3/d5
> bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3
> bash-4.1$ hdfs dfs -du -h /tmp/d0
> du: java.util.ConcurrentModificationException
> 0 0 /tmp/d0/f4
> {noformat}
> A ConcurrentModificationException forced du to terminate abruptly.
> Correspondingly, NameNode log has the following error:
> {noformat}
> 2017-03-08 14:32:17,673 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 4 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getContentSumma
> ry from 10.0.0.198:49957 Call#2 Retry#0
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
> at java.util.HashMap$KeyIterator.next(HashMap.java:956)
> at 
> org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext.tallyDeletedSnapshottedINodes(ContentSummaryComputationContext.java:209)
> at 
> org.apache.hadoop.hdfs.server.namenode.INode.computeAndConvertContentSummary(INode.java:507)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getContentSummary(FSDirectory.java:2302)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:4535)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1087)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getContentSummary(AuthorizationProviderProxyClientProtocol.java:5
> 63)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.jav
> a:873)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
> {noformat}
> The bug is due to an improper use of HashSet, not concurrent operations. 
> Basically, a HashSet cannot be structurally modified while an iterator is traversing it.
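For illustration, a minimal standalone Java example (not from the HDFS code) that fails the same way:
{code}
import java.util.HashSet;
import java.util.Set;

public class HashSetCmeDemo {
  public static void main(String[] args) {
    Set<String> set = new HashSet<>();
    set.add("a");
    set.add("b");
    set.add("c");
    // Structurally modifying the set (here via set.remove) while a
    // for-each iterator is live trips HashMap's fail-fast check:
    for (String s : set) {
      set.remove(s);  // throws java.util.ConcurrentModificationException
    }
  }
}
{code}
The usual fix is to remove through {{Iterator#remove}} or to collect the changes and apply them after the iteration finishes.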



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[jira] [Updated] (HDFS-12339) NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper

2017-09-01 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12339:
-
Attachment: HDFS-12339.001.patch

Please find the attached patch, which fixes this issue.

I have tested this by starting an NFS server using Ambari. Without this patch I 
was able to reproduce the issue.

Also, after the patch, the nfs and mountd daemons are un-registered.

{code}
[vagrant@c6803 ~]$ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  48682  status
    100024    1   tcp  60059  status
{code}

Following is the snippet of the nfs3 logs after the fix.
{code}
2017-09-01 16:06:39,902 INFO  http.HttpServer2 (HttpServer2.java:bindListener(989)) - Jetty bound to port 50079
2017-09-01 16:06:39,907 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26.hwx
2017-09-01 16:06:41,397 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50079
2017-09-01 16:06:41,413 INFO  oncrpc.SimpleTcpServer (SimpleTcpServer.java:run(92)) - Started listening to TCP requests at port 2049 for Rpc program: NFS3 at localhost:2049 with workerCount 0
2017-09-01 16:12:08,534 INFO  nfs3.AsyncDataService (AsyncDataService.java:shutdown(85)) - Shutting down all async data service threads...
2017-09-01 16:12:08,535 INFO  nfs3.AsyncDataService (AsyncDataService.java:shutdown(90)) - All async data service threads have been shut down
2017-09-01 16:12:08,535 INFO  nfs3.OpenFileCtxCache (OpenFileCtxCache.java:run(265)) - StreamMonitor got interrupted
2017-09-01 16:12:08,549 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50079
2017-09-01 16:12:08,550 WARN  http.HttpServer2 (HttpServer2.java:isRunning(544)) - HttpServer Acceptor: isRunning is false. Rechecking.
2017-09-01 16:12:08,552 WARN  http.HttpServer2 (HttpServer2.java:isRunning(553)) - HttpServer Acceptor: isRunning is false
2017-09-01 16:12:08,700 INFO  nfs3.Nfs3Base (LogAdapter.java:info(45)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down Nfs3 at c6803.ambari.apache.org/192.168.68.103
************************************************************/
{code}

> NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister 
> with rpcbind Portmapper
> -
>
> Key: HDFS-12339
> URL: https://issues.apache.org/jira/browse/HDFS-12339
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-12339.001.patch
>
>
> When stopping NFS Gateway the following error is thrown in the NFS gateway 
> role logs.
> 2017-08-17 18:09:16,529 ERROR org.apache.hadoop.oncrpc.RpcProgram: 
> Unregistration failure with localhost:2049, portmap entry: 
> (PortmapMapping-100003:3:6:2049)
> 2017-08-17 18:09:16,531 WARN org.apache.hadoop.util.ShutdownHookManager: 
> ShutdownHook 'NfsShutdownHook' failed, java.lang.RuntimeException: 
> Unregistration failure
> java.lang.RuntimeException: Unregistration failure
> ..
> Caused by: java.net.SocketException: Socket is closed
> at java.net.DatagramSocket.send(DatagramSocket.java:641)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62)
> Checking rpcinfo -p: the following entry is still there:
> " 100003 3 tcp 2049 nfs"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12339) NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper

2017-09-01 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12339:
-
Status: Patch Available  (was: Open)

> NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister 
> with rpcbind Portmapper
> -
>
> Key: HDFS-12339
> URL: https://issues.apache.org/jira/browse/HDFS-12339
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-12339.001.patch
>
>
> When stopping NFS Gateway the following error is thrown in the NFS gateway 
> role logs.
> 2017-08-17 18:09:16,529 ERROR org.apache.hadoop.oncrpc.RpcProgram: 
> Unregistration failure with localhost:2049, portmap entry: 
> (PortmapMapping-100003:3:6:2049)
> 2017-08-17 18:09:16,531 WARN org.apache.hadoop.util.ShutdownHookManager: 
> ShutdownHook 'NfsShutdownHook' failed, java.lang.RuntimeException: 
> Unregistration failure
> java.lang.RuntimeException: Unregistration failure
> ..
> Caused by: java.net.SocketException: Socket is closed
> at java.net.DatagramSocket.send(DatagramSocket.java:641)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62)
> Checking rpcinfo -p: the following entry is still there:
> " 100003 3 tcp 2049 nfs"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12300) Audit-log delegation token related operations

2017-09-01 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150765#comment-16150765
 ] 

Ravi Prakash commented on HDFS-12300:
-

Thanks for the work Xiao!

Up to you to push into branch-2. I'm supportive of it.

> Audit-log delegation token related operations
> -
>
> Key: HDFS-12300
> URL: https://issues.apache.org/jira/browse/HDFS-12300
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.22.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12300.01.patch, HDFS-12300.02.patch
>
>
> When inspecting the code, I found that the following methods in FSNamesystem 
> are not audit logged:
> - getDelegationToken
> - renewDelegationToken
> - cancelDelegationToken
> The audit log itself does have a logTokenTrackingId field to additionally log 
> some details when a token is used for authentication.
> After emailing the community, we should add that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11882) Client fails if acknowledged size is greater than bytes sent

2017-09-01 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150716#comment-16150716
 ] 

Kai Zheng commented on HDFS-11882:
--

Thanks [~andrew.wang] for the explanation. Got it. I suggest a minor change to 
the comment text: "Parity cells are of the length of the longest data cells".

Another message to clarify: "Full stripe length can't be greater than file 
length" => "... greater than the block group length".

In the new function you have two similar {{for}} blocks. It's possible to 
refactor the code so both share a single loop, if you save the block numBytes 
values in a plain array (instead of the ArrayList) and reuse that array; a 
rough sketch follows the snippet below.
{code}
+for (int i = 0; i < numAllBlocks; i++) {
+  final StripedDataStreamer streamer = getStripedDataStreamer(i);
+  if (streamer.isHealthy()) {
+if (streamer.getBlock() != null) {
+  final long numBytes = streamer.getBlock().getNumBytes();
+  if (numBytes == expectedBlockLengths[i]) {
+numBlocksWithCorrectLength++;
+  }
+}
+  }
+}
+
{code}
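A rough sketch of that suggestion (illustrative only; it reuses the names from the snippet above, and {{expectedBlockLengths}} is assumed to be the array of expected per-block lengths — this is not the actual patch):
{code}
// Record each streamer's acknowledged block length once into a plain
// array; -1 marks unhealthy or block-less streamers.
final long[] ackedLengths = new long[numAllBlocks];
for (int i = 0; i < numAllBlocks; i++) {
  final StripedDataStreamer streamer = getStripedDataStreamer(i);
  ackedLengths[i] = (streamer.isHealthy() && streamer.getBlock() != null)
      ? streamer.getBlock().getNumBytes()
      : -1;
}
// Both length checks can then share one simple loop over the array
// instead of duplicating the streamer-walking code.
int numBlocksWithCorrectLength = 0;
for (int i = 0; i < numAllBlocks; i++) {
  if (ackedLengths[i] == expectedBlockLengths[i]) {
    numBlocksWithCorrectLength++;
  }
}
{code}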

> Client fails if acknowledged size is greater than bytes sent
> 
>
> Key: HDFS-11882
> URL: https://issues.apache.org/jira/browse/HDFS-11882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, test
>Reporter: Akira Ajisaka
>Assignee: Andrew Wang
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11882.01.patch, HDFS-11882.02.patch, 
> HDFS-11882.03.patch, HDFS-11882.04.patch, HDFS-11882.05.patch, 
> HDFS-11882.regressiontest.patch
>
>
> Some erasure coding tests fail with the following exception. The following 
> test was removed by HDFS-11823; however, this type of error can happen in a 
> real cluster.
> {noformat}
> Running 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 10, Time elapsed: 89.086 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
> testMultipleDatanodeFailure56(org.apache.hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure)
>   Time elapsed: 38.831 sec  <<< ERROR!
> java.lang.IllegalStateException: null
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.updatePipeline(DFSStripedOutputStream.java:780)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:664)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1034)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:472)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:381)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12339) NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper

2017-09-01 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12339:
-
Component/s: nfs

> NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister 
> with rpcbind Portmapper
> -
>
> Key: HDFS-12339
> URL: https://issues.apache.org/jira/browse/HDFS-12339
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Mukul Kumar Singh
>
> When stopping NFS Gateway the following error is thrown in the NFS gateway 
> role logs.
> 2017-08-17 18:09:16,529 ERROR org.apache.hadoop.oncrpc.RpcProgram: 
> Unregistration failure with localhost:2049, portmap entry: 
> (PortmapMapping-100003:3:6:2049)
> 2017-08-17 18:09:16,531 WARN org.apache.hadoop.util.ShutdownHookManager: 
> ShutdownHook 'NfsShutdownHook' failed, java.lang.RuntimeException: 
> Unregistration failure
> java.lang.RuntimeException: Unregistration failure
> ..
> Caused by: java.net.SocketException: Socket is closed
> at java.net.DatagramSocket.send(DatagramSocket.java:641)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62)
> Checking rpcinfo -p: the following entry is still there:
> " 100003 3 tcp 2049 nfs"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150651#comment-16150651
 ] 

Hadoop QA commented on HDFS-12340:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
5s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 11m 42s{color} | 
{color:red} root generated 16 new + 7 unchanged - 0 fixed = 23 total (was 7) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 34 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 46s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
29s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12340 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884933/HDFS-12340-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  |
| uname | Linux 304d4bf10ff2 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 3c8f1c5 |
| Default Java | 1.8.0_144 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20967/artifact/patchprocess/diff-compile-cc-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20967/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20967/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20967/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20967/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-native-client . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20967/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: C/C++ 

[jira] [Updated] (HDFS-12385) Ozone: OzoneClient: Refactoring of OzoneClient API

2017-09-01 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12385:
--
Status: Patch Available  (was: Open)

> Ozone: OzoneClient: Refactoring of OzoneClient API
> --
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12385-HDFS-7240.000.patch, OzoneClient.pdf
>
>
> This jira is for refactoring {{OzoneClient}} API. [^OzoneClient.pdf] will 
> give an idea on how the API will look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12385) Ozone: OzoneClient: Refactoring of OzoneClient API

2017-09-01 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150630#comment-16150630
 ] 

Nandakumar commented on HDFS-12385:
---

Had an offline discussion with [~anu] on introducing a cluster discovery API in 
KSM; once that is in place, {{OzoneClient}} will use that API to get details 
about the services running in the Ozone cluster.

> Ozone: OzoneClient: Refactoring of OzoneClient API
> --
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12385-HDFS-7240.000.patch, OzoneClient.pdf
>
>
> This jira is for refactoring {{OzoneClient}} API. [^OzoneClient.pdf] will 
> give an idea on how the API will look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12385) Ozone: OzoneClient: Refactoring of OzoneClient API

2017-09-01 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12385:
--
Attachment: HDFS-12385-HDFS-7240.000.patch

> Ozone: OzoneClient: Refactoring of OzoneClient API
> --
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12385-HDFS-7240.000.patch, OzoneClient.pdf
>
>
> This jira is for refactoring {{OzoneClient}} API. [^OzoneClient.pdf] will 
> give an idea on how the API will look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12385) Ozone: OzoneClient: Refactoring of OzoneClient API

2017-09-01 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12385:
--
Description: This jira is for refactoring {{OzoneClient}} API. 
[^OzoneClient.pdf] will give an idea on how the API will look.  (was: This jira 
is for refactoring {{OzoneClient}} API. OzoneClient.pdf will give an idea on 
how the API will look.)

> Ozone: OzoneClient: Refactoring of OzoneClient API
> --
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: OzoneClient.pdf
>
>
> This jira is for refactoring {{OzoneClient}} API. [^OzoneClient.pdf] will 
> give an idea on how the API will look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12385) Ozone: OzoneClient: Refactoring of OzoneClient API

2017-09-01 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12385:
--
Description: This jira is for refactoring {{OzoneClient}} API. 
OzoneClient.pdf will give an idea on how the API will look.  (was: This jira is 
for refactoring {{OzoneClient}} API. OzoneClient.pdf will give an idea on how 
the APIs will look.)

> Ozone: OzoneClient: Refactoring of OzoneClient API
> --
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: OzoneClient.pdf
>
>
> This jira is for refactoring {{OzoneClient}} API. OzoneClient.pdf will give 
> an idea on how the API will look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12385) Ozone: OzoneClient: Refactoring of OzoneClient API

2017-09-01 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12385:
--
Attachment: OzoneClient.pdf

> Ozone: OzoneClient: Refactoring of OzoneClient API
> --
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: OzoneClient.pdf
>
>
> This jira is for refactoring {{OzoneClient}} API. OzoneClient.pdf will give 
> an idea on how the API will look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12385) Ozone: OzoneClient: Refactoring of OzoneClient API

2017-09-01 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12385:
--
Environment: (was: This jira is for refactoring {{OzoneClient}} API. 
OzoneClient.pdf will give an idea on how the APIs will look.)

> Ozone: OzoneClient: Refactoring of OzoneClient API
> --
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: OzoneClient.pdf
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12385) Ozone: OzoneClient: Refactoring of OzoneClient API

2017-09-01 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12385:
--
Description: This jira is for refactoring {{OzoneClient}} API. 
OzoneClient.pdf will give an idea on how the APIs will look.

> Ozone: OzoneClient: Refactoring of OzoneClient API
> --
>
> Key: HDFS-12385
> URL: https://issues.apache.org/jira/browse/HDFS-12385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: OzoneClient.pdf
>
>
> This jira is for refactoring {{OzoneClient}} API. OzoneClient.pdf will give 
> an idea on how the APIs will look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12385) Ozone: OzoneClient: Refactoring of OzoneClient API

2017-09-01 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12385:
-

 Summary: Ozone: OzoneClient: Refactoring of OzoneClient API
 Key: HDFS-12385
 URL: https://issues.apache.org/jira/browse/HDFS-12385
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
 Environment: This jira is for refactoring {{OzoneClient}} API. 
OzoneClient.pdf will give an idea on how the APIs will look.
Reporter: Nandakumar
Assignee: Nandakumar






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12259) Ozone: OzoneClient: Refactor and move ozone client from hadoop-hdfs to hadoop-hdfs-client

2017-09-01 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12259:
--
Summary: Ozone: OzoneClient: Refactor and move ozone client from 
hadoop-hdfs to hadoop-hdfs-client  (was: Ozone: OzoneClient: Refactor move 
ozone client from hadoop-hdfs to hadoop-hdfs-client)

> Ozone: OzoneClient: Refactor and move ozone client from hadoop-hdfs to 
> hadoop-hdfs-client
> -
>
> Key: HDFS-12259
> URL: https://issues.apache.org/jira/browse/HDFS-12259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12259-HDFS-7240.000.patch, 
> HDFS-12259-HDFS-7240.001.patch, HDFS-12259-HDFS-7240.002.patch, 
> HDFS-12259-HDFS-7240.003.patch, HDFS-12259-HDFS-7240.004.patch
>
>
> Most of the client code is in hadoop-hdfs project, this jira is for 
> refactoring OzoneClient code and move it to hadoop-hdfs-client project.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12383) Re-encryption updater should handle canceled tasks better

2017-09-01 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150572#comment-16150572
 ] 

Rushabh S Shah commented on HDFS-12383:
---

We can change the while-loop exit condition to use an {{isRunning}} flag 
instead of {{while(true)}}. Then, when we want to exit, we can simply set 
{{isRunning}} to false, along the lines of the sketch below.
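A minimal sketch of the idea (simplified, hypothetical names; not the actual {{ReencryptionUpdater}} code):
{code}
// A volatile flag replaces while(true), so another thread can request a
// clean exit instead of the loop dying on an unexpected exception.
public class UpdaterLoop implements Runnable {
  private volatile boolean isRunning = true;

  @Override
  public void run() {
    while (isRunning) {
      try {
        takeAndProcessTasks();  // blocks until a batch is available
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        isRunning = false;  // exit cleanly when interrupted
      }
    }
  }

  public void shutdown() {
    isRunning = false;  // request the loop to stop
  }

  private void takeAndProcessTasks() throws InterruptedException {
    // placeholder for the real task-processing work
  }
}
{code}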

> Re-encryption updater should handle canceled tasks better
> -
>
> Key: HDFS-12383
> URL: https://issues.apache.org/jira/browse/HDFS-12383
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12383.01.patch, HDFS-12383.02.patch
>
>
> Seen an instance where the re-encryption updater exited due to an exception, 
> and later tasks no longer execute. Logs below:
> {noformat}
> 2017-08-31 09:54:08,104 INFO 
> org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager: Zone 
> /tmp/encryption-zone-3(16819) is submitted for re-encryption.
> 2017-08-31 09:54:08,104 INFO 
> org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Executing 
> re-encrypt commands on zone 16819. Current zones:[zone:16787 state:Completed 
> lastProcessed:null filesReencrypted:1 fileReencryptionFailures:0][zone:16813 
> state:Completed lastProcessed:null filesReencrypted:1 
> fileReencryptionFailures:0][zone:16819 state:Submitted lastProcessed:null 
> filesReencrypted:0 fileReencryptionFailures:0]
> 2017-08-31 09:54:08,105 INFO 
> org.apache.hadoop.hdfs.protocol.ReencryptionStatus: Zone 16819 starts 
> re-encryption processing
> 2017-08-31 09:54:08,105 INFO 
> org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Re-encrypting 
> zone /tmp/encryption-zone-3(id=16819)
> 2017-08-31 09:54:08,105 INFO 
> org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Submitted batch 
> (start:/tmp/encryption-zone-3/data1, size:1) of zone 16819 to re-encrypt.
> 2017-08-31 09:54:08,105 INFO 
> org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Submission 
> completed of zone 16819 for re-encryption.
> 2017-08-31 09:54:08,105 INFO 
> org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Processing 
> batched re-encryption for zone 16819, batch size 1, 
> start:/tmp/encryption-zone-3/data1
> 2017-08-31 09:54:08,979 INFO BlockStateChange: BLOCK* BlockManager: ask 
> 172.26.1.71:20002 to delete [blk_1073742291_1467]
> 2017-08-31 09:54:18,295 INFO 
> org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater: Cancelling 1 
> re-encryption tasks
> 2017-08-31 09:54:18,295 INFO 
> org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager: Cancelled zone 
> /tmp/encryption-zone-3(16819) for re-encryption.
> 2017-08-31 09:54:18,295 INFO 
> org.apache.hadoop.hdfs.protocol.ReencryptionStatus: Zone 16819 completed 
> re-encryption.
> 2017-08-31 09:54:18,296 INFO 
> org.apache.hadoop.hdfs.server.namenode.ReencryptionHandler: Completed 
> re-encrypting one batch of 1 edeks from KMS, time consumed: 10.19 s, start: 
> /tmp/encryption-zone-3/data1.
> 2017-08-31 09:54:18,296 ERROR 
> org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater: Re-encryption 
> updater thread exiting.
> java.util.concurrent.CancellationException
> at java.util.concurrent.FutureTask.report(FutureTask.java:121)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.takeAndProcessTasks(ReencryptionUpdater.java:404)
> at 
> org.apache.hadoop.hdfs.server.namenode.ReencryptionUpdater.run(ReencryptionUpdater.java:250)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Updater should be fixed to handle canceled tasks better.
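For illustration, one way a consumer of futures can tolerate cancellation (generic Java with hypothetical names, not the actual patch):
{code}
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

final class TaskConsumer {
  // Cancellation is an expected outcome here: swallow it and continue,
  // rather than letting CancellationException kill the updater thread.
  static void processCompleted(Future<?> task) throws InterruptedException {
    try {
      task.get();
    } catch (CancellationException ce) {
      // task was cancelled (e.g. the zone's re-encryption was cancelled)
    } catch (ExecutionException ee) {
      // the task itself failed; log/handle ee.getCause()
    }
  }
}
{code}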



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150475#comment-16150475
 ] 

Hadoop QA commented on HDFS-12370:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12370 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884932/HDFS-12370-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux c4c9403de366 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 3c8f1c5 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20966/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20966/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20966/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Implement TopN container choosing policy for BlockDeletionService
> 
>
> Key: HDFS-12370
> URL: https://issues.apache.org/jira/browse/HDFS-12370
> 

[jira] [Updated] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl

2017-09-01 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12340:
-
Status: Patch Available  (was: Open)

> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDFS-12340
> URL: https://issues.apache.org/jira/browse/HDFS-12340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, main.C, ozoneClient.C, 
> ozoneClient.h
>
>
> This Jira introduces an implementation of an ozone client in C/C++ using the 
> curl library.
> All these calls will use the HTTP protocol and require libcurl. The 
> libcurl APIs are referenced from here:
> https://curl.haxx.se/libcurl/
> Additional details would be posted along with the patches.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService

2017-09-01 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150368#comment-16150368
 ] 

Yiqun Lin commented on HDFS-12370:
--

It seems that moving the pending-deletion-blocks initialization logic into 
{{readContainerInfo}} broke the test {{TestContainerPersistence}}.
Attaching a new patch to fix the failing unit test and the remaining checkstyle 
warnings.

> Ozone: Implement TopN container choosing policy for BlockDeletionService
> 
>
> Key: HDFS-12370
> URL: https://issues.apache.org/jira/browse/HDFS-12370
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12370-HDFS-7240.001.patch, 
> HDFS-12370-HDFS-7240.002.patch, HDFS-12370-HDFS-7240.003.patch
>
>
> Implement TopN container choosing policy for BlockDeletionService. This is 
> discussed from HDFS-12354.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl

2017-09-01 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150374#comment-16150374
 ] 

Shashikant Banerjee commented on HDFS-12340:


The attached patch addresses:

1) Code review comments

2) Maven integration for the ozoneclient library.

> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDFS-12340
> URL: https://issues.apache.org/jira/browse/HDFS-12340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, main.C, ozoneClient.C, 
> ozoneClient.h
>
>
> This Jira introduces an implementation of an ozone client in C/C++ using the 
> curl library.
> All these calls will use the HTTP protocol and require libcurl. The 
> libcurl APIs are referenced from here:
> https://curl.haxx.se/libcurl/
> Additional details would be posted along with the patches.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl

2017-09-01 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12340:
---
Attachment: HDFS-12340-HDFS-7240.001.patch

> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDFS-12340
> URL: https://issues.apache.org/jira/browse/HDFS-12340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, main.C, ozoneClient.C, 
> ozoneClient.h
>
>
> This Jira introduces an implementation of an ozone client in C/C++ using the 
> curl library.
> All these calls will use the HTTP protocol and require libcurl. The 
> libcurl APIs are referenced from here:
> https://curl.haxx.se/libcurl/
> Additional details would be posted along with the patches.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService

2017-09-01 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12370:
-
Attachment: HDFS-12370-HDFS-7240.003.patch

> Ozone: Implement TopN container choosing policy for BlockDeletionService
> 
>
> Key: HDFS-12370
> URL: https://issues.apache.org/jira/browse/HDFS-12370
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12370-HDFS-7240.001.patch, 
> HDFS-12370-HDFS-7240.002.patch, HDFS-12370-HDFS-7240.003.patch
>
>
> Implement TopN container choosing policy for BlockDeletionService. This is 
> discussed from HDFS-12354.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11964) Decoding inputs should be correctly prepared in pread

2017-09-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150325#comment-16150325
 ] 

Hudson commented on HDFS-11964:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12296 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12296/])
HDFS-11964. Decoding inputs should be correctly prepared in pread. (kai.zheng: 
rev 7a96033b15580a01a2867fa3cab9c1e409dbaafd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/PositionStripeReader.java


> Decoding inputs should be correctly prepared in pread
> -
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11964.1.patch, HDFS-11964.2.patch, 
> HDFS-11964.3.patch, HDFS-11964.4.patch
>
>
> TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure fails on 
> trunk:
> {code}
> Running org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.99 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> testPreadWithDNFailure(org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy)
>   Time elapsed: 1.265 sec  <<< FAILURE!
> org.junit.internal.ArrayComparisonFailure: arrays first differed at element 
> [327680]; expected:<-36> but was:<2>
>   at 
> org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
>   at org.junit.Assert.internalArrayEquals(Assert.java:473)
>   at org.junit.Assert.assertArrayEquals(Assert.java:294)
>   at org.junit.Assert.assertArrayEquals(Assert.java:305)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedInputStream.testPreadWithDNFailure(TestDFSStripedInputStream.java:306)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11964) Decoding inputs should be correctly prepared in pread

2017-09-01 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150312#comment-16150312
 ] 

Takanobu Asanuma commented on HDFS-11964:
-

Thanks for reviewing and committing, [~drankye]!

> Decoding inputs should be correctly prepared in pread
> -
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11964.1.patch, HDFS-11964.2.patch, 
> HDFS-11964.3.patch, HDFS-11964.4.patch
>
>
> TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure fails on 
> trunk:
> {code}
> Running org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.99 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> testPreadWithDNFailure(org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy)
>   Time elapsed: 1.265 sec  <<< FAILURE!
> org.junit.internal.ArrayComparisonFailure: arrays first differed at element 
> [327680]; expected:<-36> but was:<2>
>   at 
> org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
>   at org.junit.Assert.internalArrayEquals(Assert.java:473)
>   at org.junit.Assert.assertArrayEquals(Assert.java:294)
>   at org.junit.Assert.assertArrayEquals(Assert.java:305)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedInputStream.testPreadWithDNFailure(TestDFSStripedInputStream.java:306)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService

2017-09-01 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150305#comment-16150305
 ] 

Weiwei Yang edited comment on HDFS-12370 at 9/1/17 9:54 AM:


Hi [~linyiqun]

The v2 patch looks good to me; can you fix the remaining checkstyle issues, please?
I will test this patch on a real cluster to make sure it works as expected 
before we commit it. Thanks a lot.



was (Author: cheersyang):
Hi [~linyiqun]

v2 patch looks good to me, can you fix the remaining checkstyle issues please. 
Thanks.


> Ozone: Implement TopN container choosing policy for BlockDeletionService
> 
>
> Key: HDFS-12370
> URL: https://issues.apache.org/jira/browse/HDFS-12370
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12370-HDFS-7240.001.patch, 
> HDFS-12370-HDFS-7240.002.patch
>
>
> Implement TopN container choosing policy for BlockDeletionService. This is 
> discussed from HDFS-12354.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService

2017-09-01 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150305#comment-16150305
 ] 

Weiwei Yang commented on HDFS-12370:


Hi [~linyiqun]

The v2 patch looks good to me; can you fix the remaining checkstyle issues, please? 
Thanks.


> Ozone: Implement TopN container choosing policy for BlockDeletionService
> 
>
> Key: HDFS-12370
> URL: https://issues.apache.org/jira/browse/HDFS-12370
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12370-HDFS-7240.001.patch, 
> HDFS-12370-HDFS-7240.002.patch
>
>
> Implement TopN container choosing policy for BlockDeletionService. This is 
> discussed from HDFS-12354.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1068) Reduce NameNode GC by reusing HdfsFileStatus objects in RPC handlers

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150301#comment-16150301
 ] 

Hadoop QA commented on HDFS-1068:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  1s{color} | {color:orange} root: The patch generated 30 new + 88 unchanged 
- 4 fixed = 118 total (was 92) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 20s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.fs.shell.TestCopyFromLocal |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
| 

[jira] [Updated] (HDFS-11964) Decoding inputs should be correctly prepared in pread

2017-09-01 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-11964:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~tasanuma0829] for the contribution!

> Decoding inputs should be correctly prepared in pread
> -
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11964.1.patch, HDFS-11964.2.patch, 
> HDFS-11964.3.patch, HDFS-11964.4.patch
>
>
> TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure fails on 
> trunk:
> {code}
> Running org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.99 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> testPreadWithDNFailure(org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy)
>   Time elapsed: 1.265 sec  <<< FAILURE!
> org.junit.internal.ArrayComparisonFailure: arrays first differed at element 
> [327680]; expected:<-36> but was:<2>
>   at 
> org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
>   at org.junit.Assert.internalArrayEquals(Assert.java:473)
>   at org.junit.Assert.assertArrayEquals(Assert.java:294)
>   at org.junit.Assert.assertArrayEquals(Assert.java:305)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedInputStream.testPreadWithDNFailure(TestDFSStripedInputStream.java:306)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
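
For readers following along, here is a rough illustration of what "correctly prepared" means, using simplified hypothetical names rather than the actual DFSStripedInputStream internals: each fetched unit must sit at its logical index in the decoder's input array, with the erased slot left null; otherwise the decoder reconstructs garbage like the mismatch at element [327680] above.

{code}
/** Simplified, hypothetical model of decode-input preparation. */
class DecodeInputSketch {
  /**
   * Places each available unit at its logical block index and leaves
   * the erased slot null, so the decoder sees a correctly ordered
   * input array for reconstruction.
   */
  static byte[][] prepare(byte[][] unitsByLogicalIndex, int erasedIndex) {
    byte[][] inputs = new byte[unitsByLogicalIndex.length][];
    for (int i = 0; i < inputs.length; i++) {
      inputs[i] = (i == erasedIndex) ? null : unitsByLogicalIndex[i];
    }
    return inputs;
  }
}
{code}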



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11964) Decoding inputs should be correctly prepared in pread

2017-09-01 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150290#comment-16150290
 ] 

Kai Zheng commented on HDFS-11964:
--

Ok, thanks for the update. +1 and will commit it shortly.

> Decoding inputs should be correctly prepared in pread
> -
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
> Attachments: HDFS-11964.1.patch, HDFS-11964.2.patch, 
> HDFS-11964.3.patch, HDFS-11964.4.patch



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-01 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150278#comment-16150278
 ] 

Weiwei Yang commented on HDFS-12235:


Hi [~anu]

Could you please help review the latest patch? I've been working on this with 
[~yuanbo] for a while; it is the last big piece of the delete-key work. With 
this patch, key deletion is functionally working. Next week I will set up a 
multi-node cluster, generate a large number of keys using corona, and test this 
function at scale; there may be further improvements or bug fixes, but the 
basic mechanism should be ready now.

The TestKeys failure doesn't seem to be related to this patch: it fails on 
recent jenkins runs even without the patch and could not be reproduced locally, 
so it may need another JIRA to track.

Please let me know your thoughts or suggestions. Thanks.
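
For reviewers new to the thread, a minimal sketch of the scan/ACK loop being reviewed, under stated assumptions: every type and method name below (ScmBlockClient, KeyNamespace, PendingKey, scanOnce) is an illustrative stand-in, not the actual HDFS-7240 classes in the patch.

{code}
import java.util.Iterator;
import java.util.List;

/** Illustrative stand-in; not the actual HDFS-7240 interface. */
interface ScmBlockClient {
  /** Returns true once SCM has durably accepted the block deletions. */
  boolean deleteBlocks(List<Long> blockIds);
}

/** Illustrative stand-in for the KSM key namespace. */
interface KeyNamespace {
  void removeKey(String keyName);
}

class KsmDeletionScanner {
  /**
   * One scan pass over the deletion backlog: ship each key's blocks to
   * SCM, and purge the key from the namespace only after SCM ACKs.
   */
  void scanOnce(List<PendingKey> backlog, ScmBlockClient scm,
                KeyNamespace namespace) {
    for (Iterator<PendingKey> it = backlog.iterator(); it.hasNext();) {
      PendingKey entry = it.next();
      if (scm.deleteBlocks(entry.blockIds)) {
        namespace.removeKey(entry.keyName);
        it.remove(); // completed; unACKed entries stay for the next pass
      }
    }
  }

  static class PendingKey {
    final String keyName;
    final List<Long> blockIds;
    PendingKey(String keyName, List<Long> blockIds) {
      this.keyName = keyName;
      this.blockIds = blockIds;
    }
  }
}
{code}

The ordering is the property worth preserving: the key leaves the namespace only after SCM acknowledges the deletion, so a crash in between leaves a retryable backlog entry rather than orphaned blocks.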

> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, 
> HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch
>
>
> KSM and SCM interaction for the delete-key operation: both KSM and SCM store 
> key state info in a backlog. KSM needs to scan this log and send block-deletion 
> commands to SCM; once SCM has fully acknowledged the message, KSM removes the 
> key completely from the namespace. See the design doc under HDFS-11922 for 
> details; this is task breakdown 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11964) Decoding inputs should be correctly prepared in pread

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150273#comment-16150273
 ] 

Hadoop QA commented on HDFS-11964:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
13s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-11964 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884920/HDFS-11964.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4781101bf43e 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1b3b993 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20965/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20965/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Decoding inputs should be correctly prepared in pread
> -
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
> Attachments: HDFS-11964.1.patch, HDFS-11964.2.patch, 
> HDFS-11964.3.patch, HDFS-11964.4.patch
>
>
> 

[jira] [Commented] (HDFS-12370) Ozone: Implement TopN container choosing policy for BlockDeletionService

2017-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150270#comment-16150270
 ] 

Hadoop QA commented on HDFS-12370:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12370 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884896/HDFS-12370-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 9968ed3d21d9 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 3c8f1c5 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20964/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20964/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20964/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
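
Setting the QA noise aside, the policy named in the subject is straightforward to picture: rank containers by their number of pending deletion blocks and pick the busiest N each round. A minimal sketch under that reading (all names below are hypothetical, not the code in the attached patch):

{code}
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Hypothetical sketch of a top-N container choosing policy. */
class TopNContainerPolicy {
  /**
   * Picks the n container IDs with the most pending deletion blocks,
   * so each BlockDeletionService round works on the largest backlogs.
   */
  static List<String> choose(Map<String, Integer> pendingPerContainer,
                             int n) {
    return pendingPerContainer.entrySet().stream()
        .sorted(Map.Entry.<String, Integer>comparingByValue(
            Comparator.reverseOrder()))
        .limit(n)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
  }
}
{code}

The intuition is that targeting the largest pending backlogs first maximizes the deletion work done per scan interval.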

[jira] [Comment Edited] (HDFS-11964) Decoding inputs should be correctly prepared in pread

2017-09-01 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150237#comment-16150237
 ] 

Takanobu Asanuma edited comment on HDFS-11964 at 9/1/17 9:01 AM:
-

Sorry, the test refactoring may have caused the failure of 
{{TestDFSStripedOutputStreamWithFailureWithRandomECPolicy}}. I will look into it 
in another jira. I reverted the refactoring in the latest patch.


was (Author: tasanuma0829):
Sorry, refactoring the tests may cause the errors. I reverted it in the latest 
patch.

> Decoding inputs should be correctly prepared in pread
> -
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
> Attachments: HDFS-11964.1.patch, HDFS-11964.2.patch, 
> HDFS-11964.3.patch, HDFS-11964.4.patch



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11964) Decoding inputs should be correctly prepared in pread

2017-09-01 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-11964:

Attachment: HDFS-11964.4.patch

Sorry, the test refactoring may have caused the errors. I reverted it in the 
latest patch.

> Decoding inputs should be correctly prepared in pread
> -
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
> Attachments: HDFS-11964.1.patch, HDFS-11964.2.patch, 
> HDFS-11964.3.patch, HDFS-11964.4.patch



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11964) Decoding inputs should be correctly prepared in pread

2017-09-01 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150213#comment-16150213
 ] 

Kai Zheng commented on HDFS-11964:
--

The latest patch LGTM and the failed unit tests are not relevant. +1.

> Decoding inputs should be correctly prepared in pread
> -
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
> Attachments: HDFS-11964.1.patch, HDFS-11964.2.patch, 
> HDFS-11964.3.patch



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11964) Decoding inputs should be correctly prepared in pread

2017-09-01 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-11964:
-
Summary: Decoding inputs should be correctly prepared in pread  (was: 
RS-6-3-LEGACY has a decoding bug when it is used for pread)

> Decoding inputs should be correctly prepared in pread
> -
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
> Attachments: HDFS-11964.1.patch, HDFS-11964.2.patch, 
> HDFS-11964.3.patch



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


