[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078957#comment-16078957
 ] 

Allen Wittenauer commented on HDFS-12026:
-

Hi gang.

Some outside observations: this patch is a bit problematic right now.

Both gcc and clang are now required for building the native components.  
Outside of testing, that's extreme overkill.  There's no reason for a normal 
build to do that.  

Hard-coding the locations of things outside of the source tree needs to be done 
with extreme care.  In this particular case, there's no guarantee on any given 
platform that, e.g., /usr/bin/gcc actually exists or is even the compiler in 
use.  To make it even more exciting, systems like OS X actually ship clang as 
/usr/bin/gcc, which makes this test even more... interesting.

If you want to do these extra steps ONLY during CI, then you can wrap them in 
the "test-patch" maven profile.  mvn -Ptest-patch is used on every single 
command line when Apache Yetus is used to invoke the build.
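
For illustration, such a profile could look something along these lines (a 
sketch; the plugin binding and script path are placeholders, not the actual 
Hadoop pom):
{code}
<profile>
  <id>test-patch</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>clang-compile-check</id>
            <phase>compile</phase>
            <goals><goal>exec</goal></goals>
            <configuration>
              <!-- Placeholder script name; not part of the source tree. -->
              <executable>${basedir}/dev-support/clang-check.sh</executable>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
{code}
That way the extra compiler run only happens when Yetus (or anyone else) passes 
-Ptest-patch, and a normal {{mvn install}} is unaffected.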

As a side note, it should probably be pointed out that this branch is *way* 
behind trunk right now.   Some effort should probably be spent getting it up 
to speed, especially given that I'd hope this feature would appear in 3.0.0 
beta1.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror
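> For illustration, a single compile invocation combining the flags above might 
> look like this (the source file name is hypothetical):
> {noformat}
> clang++ -std=c++11 -stdlib=libc++ -Weverything -Wno-c++98-compat \
>   -Wno-missing-prototypes -Wno-c++98-compat-pedantic -Wno-padded \
>   -Wno-covered-switch-default -Wno-missing-noreturn -Wno-unknown-pragmas \
>   -Wconversion -Werror -c filesystem.cc
> {noformat}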



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078953#comment-16078953
 ] 

Hadoop QA commented on HDFS-12069:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 9 
unchanged - 1 fixed = 9 total (was 10) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12069 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876172/HDFS-12069-HDFS-7240.009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux d01b640cfc06 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 4e3fbc8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20198/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20198/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20198/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Create a general abstraction for metadata store
> --
>
> Key: HDFS-12069
> URL: 

[jira] [Commented] (HDFS-12082) BlockInvalidateLimit value is incorrectly set after namenode heartbeat interval reconfigured

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078926#comment-16078926
 ] 

Hadoop QA commented on HDFS-12082:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
37s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 41 unchanged - 2 fixed = 41 total (was 43) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12082 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876169/HDFS-12082.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e7abd3ea0994 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f484a6f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20197/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20197/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20197/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20197/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockInvalidateLimit value is incorrectly set after namenode heartbeat 
> interval reconfigured 
> 

[jira] [Commented] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078921#comment-16078921
 ] 

Weiwei Yang commented on HDFS-12069:


Hi [~xyao], thank you. I have uploaded the v9 patch to address your comments. 
Now only the cblock classes are left over; I will open a JIRA to track them 
once this gets in. Thanks!

> Ozone: Create a general abstraction for metadata store
> --
>
> Key: HDFS-12069
> URL: https://issues.apache.org/jira/browse/HDFS-12069
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12069-HDFS-7240.001.patch, 
> HDFS-12069-HDFS-7240.002.patch, HDFS-12069-HDFS-7240.003.patch, 
> HDFS-12069-HDFS-7240.004.patch, HDFS-12069-HDFS-7240.005.patch, 
> HDFS-12069-HDFS-7240.006.patch, HDFS-12069-HDFS-7240.007.patch, 
> HDFS-12069-HDFS-7240.008.patch, HDFS-12069-HDFS-7240.009.patch
>
>
> Create a general abstraction for the metadata store so that we can plug in 
> other key-value stores to host ozone metadata. Currently only LevelDB is 
> implemented; we want to support RocksDB as it provides more production-ready 
> features.
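> As a rough illustration, the abstraction could look something like this (a 
> sketch with hypothetical names, not the actual patch):
> {code}
> import java.io.Closeable;
> import java.io.IOException;
>
> // Hypothetical interface: callers program against this, while LevelDB
> // (today) and RocksDB (planned) provide the concrete implementations.
> public interface MetadataStore extends Closeable {
>   void put(byte[] key, byte[] value) throws IOException;
>   byte[] get(byte[] key) throws IOException;
>   void delete(byte[] key) throws IOException;
> }
> {code}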



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12069:
---
Attachment: HDFS-12069-HDFS-7240.009.patch

> Ozone: Create a general abstraction for metadata store
> --
>
> Key: HDFS-12069
> URL: https://issues.apache.org/jira/browse/HDFS-12069
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12069-HDFS-7240.001.patch, 
> HDFS-12069-HDFS-7240.002.patch, HDFS-12069-HDFS-7240.003.patch, 
> HDFS-12069-HDFS-7240.004.patch, HDFS-12069-HDFS-7240.005.patch, 
> HDFS-12069-HDFS-7240.006.patch, HDFS-12069-HDFS-7240.007.patch, 
> HDFS-12069-HDFS-7240.008.patch, HDFS-12069-HDFS-7240.009.patch
>
>
> Create a general abstraction for the metadata store so that we can plug in 
> other key-value stores to host ozone metadata. Currently only LevelDB is 
> implemented; we want to support RocksDB as it provides more production-ready 
> features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12037) Ozone: Improvement rest API output format for better looking

2017-07-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12037:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

> Ozone: Improvement rest API output format for better looking
> 
>
> Key: HDFS-12037
> URL: https://issues.apache.org/jira/browse/HDFS-12037
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: user-experience
> Fix For: HDFS-7240
>
> Attachments: HDFS-12037-HDFS-7240.001.patch
>
>
> Right now the ozone REST API output is displayed as a raw JSON string on a 
> single line, which is not quite human readable:
> {noformat}
> {"volumes":[{"owner":{"name":"wwei"},"quota":{"unit":"GB","size":200},"volumeName":"volume-aug-1","createdOn":null,"createdBy":null}]}
> {noformat}
> We propose to improve the output format with a pretty printer:
> {noformat}
> {
>   "volumes" : [ {
> "owner" : {
>   "name" : "wwei"
> },
> "quota" : {
>   "unit" : "GB",
>   "size" : 200
> },
> "volumeName" : "volume-aug-1",
> "createdOn" : null,
> "createdBy" : null
>   } ]
> }
> {noformat}
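> For illustration, Jackson can produce the pretty form shown above (a sketch; 
> not necessarily the approach taken in the patch):
> {code}
> import com.fasterxml.jackson.databind.ObjectMapper;
>
> public class PrettyJson {
>   // Serialize with an indenting printer instead of the default single line.
>   public static String pretty(Object response) throws Exception {
>     return new ObjectMapper().writerWithDefaultPrettyPrinter()
>         .writeValueAsString(response);
>   }
> }
> {code}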



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12037) Ozone: Improvement rest API output format for better looking

2017-07-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078905#comment-16078905
 ] 

Weiwei Yang commented on HDFS-12037:


Thanks [~xyao] for the review. I have just committed this to the feature branch.

> Ozone: Improvement rest API output format for better looking
> 
>
> Key: HDFS-12037
> URL: https://issues.apache.org/jira/browse/HDFS-12037
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: user-experience
> Attachments: HDFS-12037-HDFS-7240.001.patch
>
>
> Right now the ozone REST API output is displayed as a raw JSON string on a 
> single line, which is not quite human readable:
> {noformat}
> {"volumes":[{"owner":{"name":"wwei"},"quota":{"unit":"GB","size":200},"volumeName":"volume-aug-1","createdOn":null,"createdBy":null}]}
> {noformat}
> We propose to improve the output format with a pretty printer:
> {noformat}
> {
>   "volumes" : [ {
> "owner" : {
>   "name" : "wwei"
> },
> "quota" : {
>   "unit" : "GB",
>   "size" : 200
> },
> "volumeName" : "volume-aug-1",
> "createdOn" : null,
> "createdBy" : null
>   } ]
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12082) BlockInvalidateLimit value is incorrectly set after namenode heartbeat interval reconfigured

2017-07-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078904#comment-16078904
 ] 

Weiwei Yang commented on HDFS-12082:


Hi [~vagarychen]

Thanks for helping to review this. You are making a good point. On second 
thought, I think it is better to ensure the effective invalidate block limit is 
the larger of the value configured in hdfs-site.xml and 20 * the heartbeat 
interval.  This will ensure we don't throttle block deletion on the datanodes 
too much. I have revised the patch to do so. Please let me know if the v3 patch 
makes sense to you. Thanks.
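
In other words, something along these lines (a sketch based on the existing 
code; the config key is {{dfs.block.invalidate.limit}}):
{code}
// Sketch: take the larger of the configured limit and 20x the heartbeat
// interval, so reconfiguring the interval never shrinks the limit below
// the value the admin configured.
final int configuredLimit = conf.getInt(
    DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY,
    DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
this.blockInvalidateLimit =
    Math.max(20 * (int) intervalSeconds, configuredLimit);
{code}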

> BlockInvalidateLimit value is incorrectly set after namenode heartbeat 
> interval reconfigured 
> -
>
> Key: HDFS-12082
> URL: https://issues.apache.org/jira/browse/HDFS-12082
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12082.001.patch, HDFS-12082.002.patch, 
> HDFS-12082.003.patch
>
>
> HDFS-1477 provides an option to reconfigure the namenode heartbeat interval 
> without restarting the namenode. When the heartbeat interval is reconfigured, 
> {{blockInvalidateLimit}} gets recomputed:
> {code}
>  this.blockInvalidateLimit = Math.max(20 * (int) (intervalSeconds),
> DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
> {code}
> This doesn't honor the existing value set by {{dfs.block.invalidate.limit}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12082) BlockInvalidateLimit value is incorrectly set after namenode heartbeat interval reconfigured

2017-07-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12082:
---
Attachment: HDFS-12082.003.patch

> BlockInvalidateLimit value is incorrectly set after namenode heartbeat 
> interval reconfigured 
> -
>
> Key: HDFS-12082
> URL: https://issues.apache.org/jira/browse/HDFS-12082
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12082.001.patch, HDFS-12082.002.patch, 
> HDFS-12082.003.patch
>
>
> HDFS-1477 provides an option to reconfigure the namenode heartbeat interval 
> without restarting the namenode. When the heartbeat interval is reconfigured, 
> {{blockInvalidateLimit}} gets recomputed:
> {code}
>  this.blockInvalidateLimit = Math.max(20 * (int) (intervalSeconds),
> DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
> {code}
> This doesn't honor the existing value set by {{dfs.block.invalidate.limit}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12077) Implement a remaining space based balancer policy

2017-07-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078870#comment-16078870
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12077:


Therefore, it seems good to have a remaining space threshold: when the 
remaining space > threshold, use the percentage-based policy.  When the 
remaining space <= threshold, use the remaining-space-based policy.
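
For illustration, the selection could look something like this (a sketch of the 
idea with hypothetical names, not an actual Balancer API):
{code}
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class PolicySelector {
  // Hypothetical helper: while a node still has ample remaining space,
  // balance on utilization percentage; once remaining space drops to the
  // threshold or below, balance on absolute remaining space instead.
  static boolean useRemainingSpacePolicy(DatanodeInfo dn, long thresholdBytes) {
    return dn.getRemaining() <= thresholdBytes;
  }
}
{code}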


> Implement a remaining space based balancer policy
> -
>
> Key: HDFS-12077
> URL: https://issues.apache.org/jira/browse/HDFS-12077
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover
>Affects Versions: 2.6.0
>Reporter: liuyiyang
>
> Our cluster has DataNodes with 2T disk storage. As the storage utilization of 
> the cluster grows, we need to add new DataNodes to increase the capacity of 
> our cluster. In order to keep the utilization of every DataNode in a 
> relatively balanced state, we usually use the HDFS balancer tool to balance 
> the cluster every time we add new DataNodes.
> We have been facing an issue with heterogeneous disk capacity when using the 
> HDFS balancer tool. In a production cluster, we often have to add new 
> DataNodes with larger disk capacity than the previous DNs. Since the original 
> balancer is implemented to balance the utilization of every DataNode, the 
> balancer will bring every DN's utilization within a given threshold of the 
> cluster's average utilization.
> For example, in a cluster with two DataNodes DN1 and DN2, where DN1 has ten 
> disks with 2T capacity and DN2 has ten disks with 10T capacity, the original 
> balancer may leave the cluster balanced in the following state:
> ||DataNode||Total Capacity||Used||Remaining||Utilization||
> |DN1|20T|18T|2T|90%|
> |DN2|100T|90T|10T|90%|
> Each DN has reached 90% utilization. In such a case, DN1's capacity to store 
> new blocks is far less than DN2's. When DN1 is full, all new blocks will be 
> written to DN2 and more MR tasks will be scheduled on DN2. As a result, DN2 
> is overloaded and we cannot make full use of each DN's I/O capacity. In such 
> a case, we wish the balancer could run based on the remaining space of every 
> DN. After balancing, every DN's remaining space could be balanced like the 
> following state:
> ||DataNode||Total Capacity||Used||Remaining||Utilization||
> |DN1|20T|14T|6T|70%|
> |DN2|100T|94T|6T|94%|
> In a cluster where every DN's remaining space is balanced, every DN will be 
> utilized when writing new blocks to the cluster; on the other hand, every 
> DN's I/O capacity can be utilized when running MR jobs.
> Please let me know what you guys think. I will attach a patch if necessary.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12077) Implement a remaining space based balancer policy

2017-07-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078867#comment-16078867
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12077:


Thanks for filing this JIRA.  The idea sounds very useful when the disk usage 
is high.

We also need to take care of the case when the disk usage is low.  For example:
- Percentage based
||DataNode||Total Capacity||Used||Remaining||Utilization||
|DN1|20T|2T|18T|10%|
|DN2|100T|10T|90T|10%|
- Remaining space based
||DataNode||Total Capacity||Used||Remaining||Utilization||
|DN1|20T|0T|20T|0%|
|DN2|100T|12T|88T|12%|

As shown above, the remaining-space-based policy will keep the small datanodes 
empty.

> Implement a remaining space based balancer policy
> -
>
> Key: HDFS-12077
> URL: https://issues.apache.org/jira/browse/HDFS-12077
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover
>Affects Versions: 2.6.0
>Reporter: liuyiyang
>
> Our cluster has DataNodes with 2T disk storage. As the storage utilization of 
> the cluster grows, we need to add new DataNodes to increase the capacity of 
> our cluster. In order to keep the utilization of every DataNode in a 
> relatively balanced state, we usually use the HDFS balancer tool to balance 
> the cluster every time we add new DataNodes.
> We have been facing an issue with heterogeneous disk capacity when using the 
> HDFS balancer tool. In a production cluster, we often have to add new 
> DataNodes with larger disk capacity than the previous DNs. Since the original 
> balancer is implemented to balance the utilization of every DataNode, the 
> balancer will bring every DN's utilization within a given threshold of the 
> cluster's average utilization.
> For example, in a cluster with two DataNodes DN1 and DN2, where DN1 has ten 
> disks with 2T capacity and DN2 has ten disks with 10T capacity, the original 
> balancer may leave the cluster balanced in the following state:
> ||DataNode||Total Capacity||Used||Remaining||Utilization||
> |DN1|20T|18T|2T|90%|
> |DN2|100T|90T|10T|90%|
> Each DN has reached 90% utilization. In such a case, DN1's capacity to store 
> new blocks is far less than DN2's. When DN1 is full, all new blocks will be 
> written to DN2 and more MR tasks will be scheduled on DN2. As a result, DN2 
> is overloaded and we cannot make full use of each DN's I/O capacity. In such 
> a case, we wish the balancer could run based on the remaining space of every 
> DN. After balancing, every DN's remaining space could be balanced like the 
> following state:
> ||DataNode||Total Capacity||Used||Remaining||Utilization||
> |DN1|20T|14T|6T|70%|
> |DN2|100T|94T|6T|94%|
> In a cluster where every DN's remaining space is balanced, every DN will be 
> utilized when writing new blocks to the cluster; on the other hand, every 
> DN's I/O capacity can be utilized when running MR jobs.
> Please let me know what you guys think. I will attach a patch if necessary.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11590) Nodemanagers have DDoS our namenode due to HDFS_DELEGATION_TOKEN expired or not in the cache

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078836#comment-16078836
 ] 

Hadoop QA commented on HDFS-11590:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project: The patch generated 8 new + 
81 unchanged - 1 fixed = 89 total (was 82) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861062/HDFS-11590.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1096a272b7e6 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f484a6f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (HDFS-11874) [SPS]: Document the SPS feature

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078821#comment-16078821
 ] 

Hadoop QA commented on HDFS-11874:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
35s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-10285 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11874 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876147/HDFS-11874-HDFS-10285-001.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 02f9f749c94c 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 258fdc6 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20196/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20196/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [SPS]: Document the SPS feature
> ---
>
> Key: HDFS-11874
> URL: https://issues.apache.org/jira/browse/HDFS-11874
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Uma Maheswara Rao G
> Attachments: ArchivalStorage.html, HDFS-11874-HDFS-10285-001.patch
>
>
> This JIRA is for tracking the documentation about the feature



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-07 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078804#comment-16078804
 ] 

Uma Maheswara Rao G commented on HDFS-11965:


Thank you [~surendrasingh] for the update. A quick comment:
{code}
case FEW_LOW_REDUNDENCY_BLOCKS:
+LOG.info("Adding trackID " + blockCollectionID
++ " back to retry queue as some of the blocks"
++ " are low redundant.");
+this.storageMovementNeeded.add(blockCollectionID);
{code}
When there are no other elements in the {{storageMovementNeeded}} list, this 
element comes back every 300ms and logs this message. So, shall we make this a 
debug-level log to avoid excessive logging in that case?
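
i.e., something like this (sketch):
{code}
case FEW_LOW_REDUNDENCY_BLOCKS:
  if (LOG.isDebugEnabled()) {
    LOG.debug("Adding trackID " + blockCollectionID
        + " back to retry queue as some of the blocks are low redundant.");
  }
  this.storageMovementNeeded.add(blockCollectionID);
{code}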

Please file the JIRA for the definite retry implementation that we discussed in 
the previous comments.

> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
> URL: https://issues.apache.org/jira/browse/HDFS-11965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11965-HDFS-10285.001.patch, 
> HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, 
> HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch, 
> HDFS-11965-HDFS-10285.006.patch
>
>
> The test case is failing because not all of the required replicas are moved 
> to the expected storage. This happens because of a delay in datanode 
> registration after the cluster restart.
> Scenario:
> 1. Start cluster with 3 DataNodes.
> 2. Create file and set storage policy to WARM.
> 3. Restart the cluster.
> 4. Now the Namenode and two DataNodes start first and get registered with the 
> NameNode (one datanode is not yet registered).
> 5. SPS schedules block movement based on the available DataNodes (it will 
> move one replica to ARCHIVE based on the policy).
> 6. The block movement succeeds and the xattr is removed from the file because 
> this condition is true: {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}}.
> {code}
> if (itemInfo != null
> && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>   .add(storageMovementAttemptedResult.getTrackId());
> 
> ..
> } else {
> 
> ..
>   this.sps.postBlkStorageMovementCleanup(
>   storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. Now the third DN registers with the namenode and reports one more DISK 
> replica, so the Namenode now has two DISK and one ARCHIVE replica.
> In the test case we have a condition to check the number of DISK replicas:
> {code} DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, 
> timeout, fs);{code}
> This condition never becomes true, so the test case times out.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11874) [SPS]: Document the SPS feature

2017-07-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-11874:
---
Status: Patch Available  (was: Open)

> [SPS]: Document the SPS feature
> ---
>
> Key: HDFS-11874
> URL: https://issues.apache.org/jira/browse/HDFS-11874
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Uma Maheswara Rao G
> Attachments: ArchivalStorage.html, HDFS-11874-HDFS-10285-001.patch
>
>
> This JIRA is for tracking the documentation about the feature



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11874) [SPS]: Document the SPS feature

2017-07-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-11874:
---
Attachment: ArchivalStorage.html
HDFS-11874-HDFS-10285-001.patch

Attached a patch for review. Also attached the generated HTML file.

> [SPS]: Document the SPS feature
> ---
>
> Key: HDFS-11874
> URL: https://issues.apache.org/jira/browse/HDFS-11874
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Uma Maheswara Rao G
> Attachments: ArchivalStorage.html, HDFS-11874-HDFS-10285-001.patch
>
>
> This JIRA is for tracking the documentation about the feature



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078785#comment-16078785
 ] 

Hadoop QA commented on HDFS-12026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
8s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
21s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
4s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
35s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 11m 35s{color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_131 with JDK 
v1.8.0_131 generated 423 new + 5 unchanged - 0 fixed = 428 total (was 5) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
42s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 11m 42s{color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_131 with JDK 
v1.7.0_131 generated 423 new + 5 unchanged - 0 fixed = 428 total (was 5) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
31s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876135/HDFS-12026.HDFS-8707.006.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  javac  javadoc  
mvninstall  

[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078783#comment-16078783
 ] 

Hadoop QA commented on HDFS-6874:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-6874 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-6874 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845945/HDFS-6874.05.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20195/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874-1.patch, 
> HDFS-6874-branch-2.6.0.patch, HDFS-6874.patch
>
>
> GETFILEBLOCKLOCATIONS operation is missing in HttpFS, which is already 
> supported in WebHDFS.  For the request of GETFILEBLOCKLOCATIONS in 
> org.apache.hadoop.fs.http.server.HttpFSServer, BAD_REQUEST is returned so far:
> ...
>  case GETFILEBLOCKLOCATIONS: {
> response = Response.status(Response.Status.BAD_REQUEST).build();
> break;
>   }
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12104) libhdfs++: Make sure all steps in SaslProtocol end up calling AuthComplete

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078782#comment-16078782
 ] 

Hadoop QA commented on HDFS-12104:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
39s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
7s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12104 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876138/HDFS-12104.HDFS-8707.000.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 5e3baa1064d8 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 821f971 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20193/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20193/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++:  Make sure all steps in SaslProtocol end up calling AuthComplete
> ---
>
> Key: HDFS-12104
> URL: https://issues.apache.org/jira/browse/HDFS-12104
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: 

[jira] [Commented] (HDFS-12104) libhdfs++: Make sure all steps in SaslProtocol end up calling AuthComplete

2017-07-07 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078781#comment-16078781
 ] 

Xiaowei Zhu commented on HDFS-12104:


+1,  the patch looks good.

> libhdfs++:  Make sure all steps in SaslProtocol end up calling AuthComplete
> ---
>
> Key: HDFS-12104
> URL: https://issues.apache.org/jira/browse/HDFS-12104
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12104.HDFS-8707.000.patch
>
>
> SaslProtocol provides an abstraction for stepping through the authentication 
> challenge and response stages in the Cyrus SASL library by chaining callbacks 
> together (the next one is invoked when async IO is done).
> To authenticate, SaslProtocol::Authenticate is called, and when 
> authentication is finished SaslProtocol::AuthComplete is called, which 
> invokes an authentication completion callback.  There are a couple of cases 
> where the intermediate callbacks return without calling AuthComplete, which 
> breaks applications that rely on that callback.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-07-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078775#comment-16078775
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6874:
---

Hi [~cheersyang], the 05 patch no longer applies.  Could you update it?  Thanks.

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874-1.patch, 
> HDFS-6874-branch-2.6.0.patch, HDFS-6874.patch
>
>
> GETFILEBLOCKLOCATIONS operation is missing in HttpFS, which is already 
> supported in WebHDFS.  For the request of GETFILEBLOCKLOCATIONS in 
> org.apache.hadoop.fs.http.server.HttpFSServer, BAD_REQUEST is returned so far:
> ...
>  case GETFILEBLOCKLOCATIONS: {
> response = Response.status(Response.Status.BAD_REQUEST).build();
> break;
>   }
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12103) libhdfs++: Provide workaround to support cancel on filesystem connect until HDFS-11437 is resolved

2017-07-07 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078771#comment-16078771
 ] 

Xiaowei Zhu commented on HDFS-12103:


+1. The workaround looks reasonable to me.

> libhdfs++: Provide workaround to support cancel on filesystem connect until 
> HDFS-11437 is resolved
> --
>
> Key: HDFS-12103
> URL: https://issues.apache.org/jira/browse/HDFS-12103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12103.HDFS-8707.000.patch
>
>
> HDFS-11437 is going to take a non-trivial amount of work to do right.  In the 
> meantime it'd be nice to have a way to cancel pending connections (even when 
> the FS claimed they are finished).  
> The proposed workaround is to relax the rules about when 
> FileSystem::CancelPendingConnect can be called, since the FS isn't able to 
> properly determine when it's connected anyway.  To determine when the FS has 
> connected, you can make some simple RPC call, since that will wait on 
> failover.  If CancelPendingConnect can be called during that first RPC call, 
> then it will effectively be canceling FileSystem::Connect.
> Current cancel rules - asterisk on steps where CancelPending is allowed
> FileSystem::Connect called
> FileSystem communicates with first NN *
> FileSystem::Connect returns - even if it hasn't communicated with the active 
> NN
> Proposed relaxation
> FileSystem::Connect called
> FileSystem communicates with first NN*
> FileSystem::Connect returns *
> FileSystem::GetFileInfo called * -any namenode RPC call will do, ignore perm 
> errors
> RPC engine blocks until it hits the active or runs out of retries *
> FileSystem::GetFileInfo returns
> It'd be up to the user to add in the dummy NN RPC call.  Once HDFS-11437 is 
> fixed this workaround can be removed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12082) BlockInvalidateLimit value is incorrectly set after namenode heartbeat interval reconfigured

2017-07-07 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078770#comment-16078770
 ] 

Chen Liang commented on HDFS-12082:
---

Thanks [~cheersyang] for reporting this!

I'm a little confused about the patch though. When reading the description, I 
was thinking the change would probably be that, when {{setHeartbeatInterval}} is 
called, instead of 
{{blockInvalidateLimit = Math.max(20 * (int) (intervalSeconds), 
DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);}}
we change it to something like
{{blockInvalidateLimit = Math.max(20 * (int) (intervalSeconds), 
configuredLimit);}}
where {{final int configuredLimit = 
conf.getInt(DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY, 
DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);}}

But it seems the patch removed this part completely. It seems to me that in this 
case {{blockInvalidateLimit}} will be set to the configured value once at the 
start and will no longer change when {{setHeartbeatInterval}} gets called. Is 
this the desired behaviour? Because the original code seems to guarantee that, 
no matter how {{setHeartbeatInterval}} gets called, {{blockInvalidateLimit}} 
will never be smaller than 20x {{intervalSeconds}}, and it appears that this 
will not be guaranteed with the patch.

One additional minor comment: in the unit test, how about changing {{"" + 6}} to 
{{Integer.toString(6)}}?
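
To make the intent concrete, here's a minimal, self-contained sketch of that 
recomputation (class and field names are illustrative, not the actual NameNode 
code):

{code}
// Minimal sketch of the suggested recomputation; names are illustrative.
public class InvalidateLimitSketch {
  private final int configuredLimit;     // dfs.block.invalidate.limit, read once
  private volatile int blockInvalidateLimit;

  public InvalidateLimitSketch(int configuredLimit, long intervalSeconds) {
    this.configuredLimit = configuredLimit;
    setHeartbeatInterval(intervalSeconds);
  }

  // Recompute on every reconfiguration, honoring the configured value
  // while keeping the 20x-interval floor.
  public void setHeartbeatInterval(long intervalSeconds) {
    this.blockInvalidateLimit =
        Math.max(20 * (int) intervalSeconds, configuredLimit);
  }

  public static void main(String[] args) {
    InvalidateLimitSketch s = new InvalidateLimitSketch(1500, 3);
    System.out.println(s.blockInvalidateLimit);  // 1500 -> configured value wins
    s.setHeartbeatInterval(120);
    System.out.println(s.blockInvalidateLimit);  // 2400 -> 20x interval wins
  }
}
{code}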

> BlockInvalidateLimit value is incorrectly set after namenode heartbeat 
> interval reconfigured 
> -
>
> Key: HDFS-12082
> URL: https://issues.apache.org/jira/browse/HDFS-12082
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12082.001.patch, HDFS-12082.002.patch
>
>
> HDFS-1477 provides an option to reconfigure the namenode heartbeat interval 
> without restarting the namenode. When the heartbeat interval is reconfigured, 
> {{blockInvalidateLimit}} gets recomputed:
> {code}
>  this.blockInvalidateLimit = Math.max(20 * (int) (intervalSeconds),
> DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
> {code}
> This doesn't honor the existing value set by {{dfs.block.invalidate.limit}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11590) Nodemanagers have DDoS our namenode due to HDFS_DELEGATION_TOKEN expired or not in the cache

2017-07-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078759#comment-16078759
 ] 

Wei-Chiu Chuang commented on HDFS-11590:


I am not sure the patch is going to be effective. If I understand the patch 
correctly, the client would try to close HDFS files if it gets an invalid-token 
error.
However, if the client has no valid token, the close would fail because the 
client can't be authenticated, right?

> Nodemanagers have DDoS our namenode due to HDFS_DELEGATION_TOKEN expired or 
> not in the cache
> 
>
> Key: HDFS-11590
> URL: https://issues.apache.org/jira/browse/HDFS-11590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
> Environment: Releases:
> cloudera release cdh-5.5.0
> openjdk version "1.8.0_91"
> linux centos6 servers
> Cluster info:
> Namenode and resourcemanager in HA with kerberos authentication
> More than 1300 datanodes/nodemanagers
>Reporter: Nicolas Fraison
>Priority: Minor
> Attachments: HDFS-11590.patch
>
>
> We have faced some huge slowdowns on our namenode due to all our nodemanagers 
> repeatedly retrying to renew a lease and reconnecting to the namenode every 
> second for an hour, due to an HDFS_DELEGATION_TOKEN being expired or not 
> in the cache.
> The number of TIME_WAIT connections on our namenode was stuck at the 
> configured maximum of 250k during this period due to the reconnections each time.
> {code}
> 2017-03-02 11:51:42,817 INFO 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization successful for appattempt_1488396860014_156103_01 
> (auth:TOKEN) for protocol=interface 
> org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
>   2017-03-02 11:51:43,414 INFO 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization successful for appattempt_1488396860014_156120_01 
> (auth:TOKEN) for protocol=interface 
> org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
>   2017-03-02 11:51:51,994 WARN 
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException 
> as:prediction (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired
>   2017-03-02 11:51:51,995 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired
>   2017-03-02 11:51:51,995 WARN 
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException 
> as:prediction (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired
>   2017-03-02 11:51:51,995 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to 
> renew lease for [DFSClient_NONMAPREDUCE_1560141256_4187204] for 30 seconds.  
> Will retry shortly ...
>   token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired
>  at org.apache.hadoop.ipc.Client.call(Client.java:1472)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1403)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  at com.sun.proxy.$Proxy20.renewLease(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571)
>  at sun.reflect.GeneratedMethodAccessor74.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  at com.sun.proxy.$Proxy21.renewLease(Unknown Source)
>  at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:921)
>  at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:423)
>  at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:448)
>  at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
>  at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
>  at java.lang.Thread.run(Thread.java:745)
>   2017-03-02 12:51:22,032 WARN 
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException 
> as:prediction (auth:SIMPLE) 
> 

[jira] [Commented] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-07 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078749#comment-16078749
 ] 

Xiaoyu Yao commented on HDFS-12069:
---

Thanks [~cheersyang] for the update. Patch v8 looks good to me. Agree that we 
can fix the cblock usage with a separate ticket. 

Three more minor issues; +1 after they are fixed.

ContainerCache.java
Line 86: NIT: OzoneLevelDBStore should be changed to MetadataStore

OzoneMetadataManager.java
We should also change the LevelDBStore used by the localStorage Handler.

KeyUtils.java
Line 62: the LevelDB handle should be a MetadataStore handle.


> Ozone: Create a general abstraction for metadata store
> --
>
> Key: HDFS-12069
> URL: https://issues.apache.org/jira/browse/HDFS-12069
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12069-HDFS-7240.001.patch, 
> HDFS-12069-HDFS-7240.002.patch, HDFS-12069-HDFS-7240.003.patch, 
> HDFS-12069-HDFS-7240.004.patch, HDFS-12069-HDFS-7240.005.patch, 
> HDFS-12069-HDFS-7240.006.patch, HDFS-12069-HDFS-7240.007.patch, 
> HDFS-12069-HDFS-7240.008.patch
>
>
> Create a general abstraction for the metadata store so that we can plug in 
> other key-value stores to host Ozone metadata. Currently only LevelDB is 
> implemented; we want to support RocksDB as it provides more production-ready 
> features.
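
For context, the abstraction could look roughly like the interface below; the 
method names are assumptions for illustration and may not match the patch:

{code}
import java.io.Closeable;
import java.io.IOException;

// Illustrative sketch of a pluggable key-value metadata store; a LevelDB
// implementation would wrap org.iq80.leveldb.DB and a RocksDB one would
// wrap org.rocksdb.RocksDB. Method names here are assumptions.
public interface MetadataStore extends Closeable {
  void put(byte[] key, byte[] value) throws IOException;

  // Returns the stored value, or null if the key is absent.
  byte[] get(byte[] key) throws IOException;

  void delete(byte[] key) throws IOException;

  boolean isEmpty() throws IOException;
}
{code}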



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12103) libhdfs++: Provide workaround to support cancel on filesystem connect until HDFS-11437 is resolved

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078724#comment-16078724
 ] 

Hadoop QA commented on HDFS-12103:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
27s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
41s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12103 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876133/HDFS-12103.HDFS-8707.000.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux fb43da32377f 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 821f971 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20191/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20191/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Provide workaround to support cancel on filesystem connect until 
> HDFS-11437 is resolved
> --
>
> Key: HDFS-12103
> URL: https://issues.apache.org/jira/browse/HDFS-12103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James 

[jira] [Commented] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block

2017-07-07 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078714#comment-16078714
 ] 

Nathan Roberts commented on HDFS-12102:
---

Hi [~aramesh2]. Thanks for the patch. A couple of quick comments (I'll look more 
closely early next week):
- Need to change hdfs-default.xml to include the new config options.
- LOG.debug statements should be surrounded by if (LOG.isDebugEnabled()) checks 
to reduce overhead (see the sketch below).
- Is it possible to disable the fast-scan behavior altogether? It might be 
good for the default behavior to remain the same; if someone wants fast-scan, 
they have to enable it. Maybe setting the period to -1 could be a way.
- The description of corruptBlockThreshold doesn't really match its use. I think 
it's just a straight count. Maybe we don't even need it at all, since it's not 
configurable and only ever set to 1.
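
For the LOG.debug point, the guard pattern looks like this (a generic sketch, 
not the patch itself):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Generic sketch of the guard: the string concatenation and boxing are
// skipped entirely when debug logging is disabled.
public class DebugGuardExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(DebugGuardExample.class);

  void reportScan(long blockId, long bytesScanned) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("scanned block " + blockId + " (" + bytesScanned + " bytes)");
    }
  }
}
{code}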





> VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt 
> block
> 
>
> Key: HDFS-12102
> URL: https://issues.apache.org/jira/browse/HDFS-12102
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Affects Versions: 2.8.2
>Reporter: Ashwin Ramesh
>Priority: Minor
> Fix For: 2.8.2
>
> Attachments: HDFS-12102-001.patch
>
>
> When the Volume scanner sees a corrupt block, it restarts the scan and scans 
> the blocks at a much faster rate with a negligible scan period. This is so that 
> it doesn't take 3 weeks to report blocks since a corrupt block means 
> increased likelihood that there are more corrupt blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block

2017-07-07 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated HDFS-12102:
--
Issue Type: New Feature  (was: Improvement)

> VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt 
> block
> 
>
> Key: HDFS-12102
> URL: https://issues.apache.org/jira/browse/HDFS-12102
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Affects Versions: 2.8.2
>Reporter: Ashwin Ramesh
>Priority: Minor
> Fix For: 2.8.2
>
> Attachments: HDFS-12102-001.patch
>
>
> When the Volume scanner sees a corrupt block, it restarts the scan and scans 
> the blocks at a much faster rate with a negligible scan period. This is so that 
> it doesn't take 3 weeks to report blocks since a corrupt block means 
> increased likelihood that there are more corrupt blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block

2017-07-07 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated HDFS-12102:
--
Issue Type: Improvement  (was: New Feature)

> VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt 
> block
> 
>
> Key: HDFS-12102
> URL: https://issues.apache.org/jira/browse/HDFS-12102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Affects Versions: 2.8.2
>Reporter: Ashwin Ramesh
>Priority: Minor
> Fix For: 2.8.2
>
> Attachments: HDFS-12102-001.patch
>
>
> When the Volume scanner sees a corrupt block, it restarts the scan and scans 
> the blocks at a much faster rate with a negligible scan period. This is so that 
> it doesn't take 3 weeks to report blocks since a corrupt block means 
> increased likelihood that there are more corrupt blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12104) libhdfs++: Make sure all steps in SaslProtocol end up calling AuthComplete

2017-07-07 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12104:
---
Status: Patch Available  (was: Open)

> libhdfs++:  Make sure all steps in SaslProtocol end up calling AuthComplete
> ---
>
> Key: HDFS-12104
> URL: https://issues.apache.org/jira/browse/HDFS-12104
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12104.HDFS-8707.000.patch
>
>
> SaslProtocol provides an abstraction for stepping through the authentication 
> challenge and response stages in the Cyrus SASL library by chaining callbacks 
> together (next one is invoked when async io is done).
> To authenticate SaslProtocol::Authenticate is called, and when authentication 
> is finished SaslProtocol::AuthComplete is called which invokes an 
> authentication completion callback.  There are a couple of cases where the 
> intermediate callbacks return without calling AuthComplete which breaks 
> applications that take advantage of that callback.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12037) Ozone: Improvement rest API output format for better looking

2017-07-07 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078684#comment-16078684
 ] 

Xiaoyu Yao commented on HDFS-12037:
---

Patch looks good to me. +1

> Ozone: Improvement rest API output format for better looking
> 
>
> Key: HDFS-12037
> URL: https://issues.apache.org/jira/browse/HDFS-12037
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: user-experience
> Attachments: HDFS-12037-HDFS-7240.001.patch
>
>
> Right now the Ozone REST API output is displayed as a raw JSON string on a 
> single line, which is not quite human readable:
> {noformat}
> {"volumes":[{"owner":{"name":"wwei"},"quota":{"unit":"GB","size":200},"volumeName":"volume-aug-1","createdOn":null,"createdBy":null}]}
> {noformat}
> Propose to improve the output format with a pretty printer:
> {noformat}
> {
>   "volumes" : [ {
> "owner" : {
>   "name" : "wwei"
> },
> "quota" : {
>   "unit" : "GB",
>   "size" : 200
> },
> "volumeName" : "volume-aug-1",
> "createdOn" : null,
> "createdBy" : null
>   } ]
> }
> {noformat}
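
For reference, a minimal sketch of such pretty-printing with Jackson (whether 
the patch uses exactly this API is an assumption):

{code}
import com.fasterxml.jackson.databind.ObjectMapper;

// Re-serialize a compact JSON string with an indenting pretty printer.
public class PrettyPrintSketch {
  public static void main(String[] args) throws Exception {
    String raw = "{\"volumes\":[{\"owner\":{\"name\":\"wwei\"}}]}";
    ObjectMapper mapper = new ObjectMapper();
    Object tree = mapper.readValue(raw, Object.class);
    System.out.println(
        mapper.writerWithDefaultPrettyPrinter().writeValueAsString(tree));
  }
}
{code}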



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12104) libhdfs++: Make sure all steps in SaslProtocol end up calling AuthComplete

2017-07-07 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12104:
---
Attachment: HDFS-12104.HDFS-8707.000.patch

Added patch: when we lose the RPC connection due to a cancel or because the 
remote side shut it down, we now call AuthComplete with an appropriate status.

There really isn't a good way to test this one short of instrumenting the KDC 
to inject errors right after the namenode has connected to the KDC and after 
libhdfs++ SASL mechanism negotiation with the NN.  It'd fail in <1% of 
integration tests with another product I work on; the only reason it became 
noticeable was when the library version was pinned and hundreds of tests were 
run (1000s of machine hours).  Verified the fix by doing the same batch style 
test and didn't hit a failure after >1000 runs.
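
The invariant being fixed — every exit path from an intermediate step must end 
in the completion callback — is sketched below in Java for illustration 
(libhdfs++ itself is C++, and these names are hypothetical):

{code}
import java.util.function.BiConsumer;

// Hypothetical sketch: success and failure in every step funnel into
// authComplete, so the caller's callback always fires exactly once.
public class SaslFlowSketch {
  private BiConsumer<Boolean, String> onAuthComplete;

  public void authenticate(BiConsumer<Boolean, String> callback) {
    this.onAuthComplete = callback;
    step(0, null);
  }

  // Stand-in for one challenge/response round; 'error' models a lost
  // RPC connection or a remote-side shutdown.
  private void step(int round, String error) {
    if (error != null) {
      authComplete(false, error);   // previously some paths just returned here
      return;
    }
    if (round >= 2) {
      authComplete(true, null);     // negotiation finished
      return;
    }
    step(round + 1, null);          // next callback in the chain
  }

  private void authComplete(boolean ok, String error) {
    onAuthComplete.accept(ok, error);
  }

  public static void main(String[] args) {
    new SaslFlowSketch().authenticate(
        (ok, err) -> System.out.println("auth complete: ok=" + ok));
  }
}
{code}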

> libhdfs++:  Make sure all steps in SaslProtocol end up calling AuthComplete
> ---
>
> Key: HDFS-12104
> URL: https://issues.apache.org/jira/browse/HDFS-12104
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12104.HDFS-8707.000.patch
>
>
> SaslProtocol provides an abstraction for stepping through the authentication 
> challenge and response stages in the Cyrus SASL library by chaining callbacks 
> together (next one is invoked when async io is done).
> To authenticate SaslProtocol::Authenticate is called, and when authentication 
> is finished SaslProtocol::AuthComplete is called which invokes an 
> authentication completion callback.  There are a couple of cases where the 
> intermediate callbacks return without calling AuthComplete which breaks 
> applications that take advantage of that callback.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block

2017-07-07 Thread Ashwin Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwin Ramesh updated HDFS-12102:
-
Attachment: HDFS-12102-001.patch

> VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt 
> block
> 
>
> Key: HDFS-12102
> URL: https://issues.apache.org/jira/browse/HDFS-12102
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Affects Versions: 2.8.2
>Reporter: Ashwin Ramesh
>Priority: Minor
> Fix For: 2.8.2
>
> Attachments: HDFS-12102-001.patch
>
>
> When the Volume scanner sees a corrupt block, it restarts the scan and scans 
> the blocks at a much faster rate with a negligible scan period. This is so that 
> it doesn't take 3 weeks to report blocks since a corrupt block means 
> increased likelihood that there are more corrupt blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078668#comment-16078668
 ] 

Hadoop QA commented on HDFS-12026:
--

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/20192/console in case of 
problems.


> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12097) libhdfs++: add Clang build and tests to the CI system

2017-07-07 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078658#comment-16078658
 ] 

Anatoli Shein commented on HDFS-12097:
--

This is now being solved as part of HDFS-12026.

> libhdfs++: add Clang build and tests to the CI system
> -
>
> Key: HDFS-12097
> URL: https://issues.apache.org/jira/browse/HDFS-12097
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>
> For better portability we should add a Clang build and tests of libhdfs++ 
> library to the CI system. To accomplish that the Dockerfile will need to be 
> updated with the environment setup, and the maven files should be updated to 
> build libhdfs++  using Clang and then run the tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-07 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078654#comment-16078654
 ] 

Anatoli Shein commented on HDFS-12026:
--

The Jira HDFS-12097 is now solved as part of the new patch.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-07 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-12026:
-
Attachment: HDFS-12026.HDFS-8707.006.patch

In the new patch I added a Clang build and tests by updating the pom.xml and 
Dockerfile.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12084) Scheduled Count will not decrement when file is deleted before all IBR's received

2017-07-07 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078646#comment-16078646
 ] 

Rushabh S Shah commented on HDFS-12084:
---

I don't have enough context on erasure coding, so I will let another member 
review. Sorry.

> Scheduled Count will not decrement when file is deleted before all IBR's 
> received
> -
>
> Key: HDFS-12084
> URL: https://issues.apache.org/jira/browse/HDFS-12084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12084-001.patch, HDFS-12084-002.patch
>
>
> When small-file creation and deletion happen very frequently and the DNs have 
> not reported the blocks to the NN before deletion, the scheduled count keeps 
> incrementing and is never decremented, since the blocks are already deleted.
> *Note*: Every 20 mins this can be rolled, but within 20 mins the count can 
> grow large given so many operations.
> When batch IBR is enabled with committed allowed=1, this will be observed more.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12104) libhdfs++: Make sure all steps in SaslProtocol end up calling AuthComplete

2017-07-07 Thread James Clampffer (JIRA)
James Clampffer created HDFS-12104:
--

 Summary: libhdfs++:  Make sure all steps in SaslProtocol end up 
calling AuthComplete
 Key: HDFS-12104
 URL: https://issues.apache.org/jira/browse/HDFS-12104
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


SaslProtocol provides an abstraction for stepping through the authentication 
challenge and response stages in the Cyrus SASL library by chaining callbacks 
together (next one is invoked when async io is done).

To authenticate SaslProtocol::Authenticate is called, and when authentication 
is finished SaslProtocol::AuthComplete is called which invokes an 
authentication completion callback.  There are a couple of cases where the 
intermediate callbacks return without calling AuthComplete which breaks 
applications that take advantage of that callback.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9388) Refactor decommission related code to support maintenance state for datanodes

2017-07-07 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078618#comment-16078618
 ] 

Manoj Govindassamy commented on HDFS-9388:
--

Thanks for the review [~mingma] and [~eddyxu]. Will commit soon.

> Refactor decommission related code to support maintenance state for datanodes
> -
>
> Key: HDFS-9388
> URL: https://issues.apache.org/jira/browse/HDFS-9388
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Ming Ma
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9388.01.patch, HDFS-9388.02.patch
>
>
> Lots of code can be shared between the existing decommission functionality 
> and to-be-added maintenance state support for datanodes. To make it easier to 
> add maintenance state support, let us first modify the existing code to make 
> it more general.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12103) libhdfs++: Provide workaround to support cancel on filesystem connect until HDFS-11437 is resolved

2017-07-07 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12103:
---
Status: Patch Available  (was: Open)

> libhdfs++: Provide workaround to support cancel on filesystem connect until 
> HDFS-11437 is resolved
> --
>
> Key: HDFS-12103
> URL: https://issues.apache.org/jira/browse/HDFS-12103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12103.HDFS-8707.000.patch
>
>
> HDFS-11437 is going to take a non-trivial amount of work to do right.  In the 
> meantime it'd be nice to have a way to cancel pending connections (even when 
> the FS claimed they are finished).  
> The proposed workaround is to relax the rules about when 
> FileSystem::CancelPendingConnect can be called, since the FS isn't able to 
> properly determine when it's connected anyway.  To determine when the FS has 
> connected, you can make some simple RPC call, since that will wait on 
> failover.  If CancelPendingConnect can be called during that first RPC call, 
> then it will effectively be canceling FileSystem::Connect.
> Current cancel rules - asterisk on steps where CancelPending is allowed
> FileSystem::Connect called
> FileSystem communicates with first NN *
> FileSystem::Connect returns - even if it hasn't communicated with the active 
> NN
> Proposed relaxation
> FileSystem::Connect called
> FileSystem communicates with first NN*
> FileSystem::Connect returns *
> FileSystem::GetFileInfo called * -any namenode RPC call will do, ignore perm 
> errors
> RPC engine blocks until it hits the active or runs out of retries *
> FileSystem::GetFileInfo returns
> It'd be up to the user to add in the dummy NN RPC call.  Once HDFS-11437 is 
> fixed this workaround can be removed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12103) libhdfs++: Provide workaround to support cancel on filesystem connect until HDFS-11437 is resolved

2017-07-07 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12103:
---
Attachment: HDFS-12103.HDFS-8707.000.patch

Attached patch to allow NamenodeOperations::CancelPendingConnect to be called 
after FileSystem::Connect returns.  This will make any blocked RPC calls return 
an operation canceled failure status.

> libhdfs++: Provide workaround to support cancel on filesystem connect until 
> HDFS-11437 is resolved
> --
>
> Key: HDFS-12103
> URL: https://issues.apache.org/jira/browse/HDFS-12103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12103.HDFS-8707.000.patch
>
>
> HDFS-11437 is going to take a non-trivial amount of work to do right.  In the 
> meantime it'd be nice to have a way to cancel pending connections (even when 
> the FS claimed they are finished).  
> The proposed workaround is to relax the rules about when 
> FileSystem::CancelPendingConnect can be called, since the FS isn't able to 
> properly determine when it's connected anyway.  To determine when the FS has 
> connected, you can make some simple RPC call, since that will wait on 
> failover.  If CancelPendingConnect can be called during that first RPC call, 
> then it will effectively be canceling FileSystem::Connect.
> Current cancel rules - asterisk on steps where CancelPending is allowed
> FileSystem::Connect called
> FileSystem communicates with first NN *
> FileSystem::Connect returns - even if it hasn't communicated with the active 
> NN
> Proposed relaxation
> FileSystem::Connect called
> FileSystem communicates with first NN*
> FileSystem::Connect returns *
> FileSystem::GetFileInfo called * -any namenode RPC call will do, ignore perm 
> errors
> RPC engine blocks until it hits the active or runs out of retries *
> FileSystem::GetFileInfo returns
> It'd be up to the user to add in the dummy NN RPC call.  Once HDFS-11437 is 
> fixed this workaround can be removed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11908) libhdfs++: Authentication failure when first NN of kerberized HA cluster is standby

2017-07-07 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078606#comment-16078606
 ] 

James Clampffer commented on HDFS-11908:


Thanks for reviewing [~anatoli.shein].

I agree about adding CI tests.  This fix is being indirectly tested since it's 
integrated into another product that goes through tests where NNs are turned on 
and off and flipped between standby and active, with lots of concurrent 
connection attempts using different principals.  I'm going to start getting the 
minidfscluster testing in better shape, since this bug never should have made it 
into the code.

> libhdfs++: Authentication failure when first NN of kerberized HA cluster is 
> standby
> ---
>
> Key: HDFS-11908
> URL: https://issues.apache.org/jira/browse/HDFS-11908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-11908.HDFS-8707.000.patch
>
>
> Library won't properly authenticate to kerberized HA cluster if the first 
> namenode it tries to connect to is the standby.  RpcConnection ends up 
> attempting to use simple auth.
> Control flow to connect to NN for the first time:
> # RpcConnection constructed with a pointer to the RpcEngine as the only 
> argument
> # RpcConnection::Connect(server endpoints, auth_info, callback called)
> ** auth_info contains the SASL mechanism to use + the delegation token if we 
> already have one
> Control flow to connect to NN after failover:
> # RpcEngine::NewConnection called, allocates an RpcConnection exactly how 
> step 1 above would
> # RpcEngine::InitializeConnection called, sets event hooks and a string for 
> cluster name
> # Rpc calls sent using RpcConnection::PreEnqueueRequests called to add RPC 
> message that didn't make it on last call due to standby exception
> # RpcConnection::ConnectAndFlush called to send RPC packets. This only takes 
> server endpoints, no auth info
> To fix:
> RpcEngine::InitializeConnection just needs to set RpcConnection::auth_info_ 
> from the existing RpcEngine::auth_info_, even better would be setting this in 
> the constructor so if an RpcConnection exists it can be expected to be in a 
> usable state.  I'll get a diff up once I sort out CI build failures.
> Also really need to get CI test coverage for HA and kerberos because this 
> issue should not have been around for so long.
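
The shape of that fix — handing the auth info to every new connection at 
construction time so a freshly allocated connection is always usable — can be 
sketched in Java (hypothetical names; the real code is C++):

{code}
// Hypothetical sketch: the engine passes its auth info into every new
// connection's constructor, so the failover path can never fall back to
// simple auth by forgetting a separate initialization step.
public class RpcAuthSketch {
  static class AuthInfo {
    final String mechanism;
    AuthInfo(String mechanism) { this.mechanism = mechanism; }
  }

  static class RpcConnection {
    private final AuthInfo authInfo;
    RpcConnection(AuthInfo authInfo) { this.authInfo = authInfo; }
    void connectAndFlush() {
      System.out.println("connecting with " + authInfo.mechanism);
    }
  }

  static class RpcEngine {
    private final AuthInfo authInfo = new AuthInfo("SASL");
    RpcConnection newConnection() {
      return new RpcConnection(authInfo);  // same auth info on every failover
    }
  }

  public static void main(String[] args) {
    new RpcEngine().newConnection().connectAndFlush();
  }
}
{code}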



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12088) Remove a user defined EC Policy,the policy is not removed from the userPolicies map

2017-07-07 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen reassigned HDFS-12088:


Assignee: lufei  (was: SammiChen)

> Remove a user defined EC Policy,the policy is not removed from the 
> userPolicies map
> ---
>
> Key: HDFS-12088
> URL: https://issues.apache.org/jira/browse/HDFS-12088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12088.001.patch
>
>
> When a user removes a user-defined EC policy, the policy needs to be removed 
> from the userPolicies map, not only from the enabledPolicies map. Otherwise, 
> after removing the user-defined EC policy, the user can recover it by enabling 
> the same EC policy again.
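
In other words, the removal path has to touch both maps; a minimal sketch of 
the intended behavior (hypothetical names):

{code}
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: removal must drop the policy from both maps, otherwise
// enabling the same name again resurrects a "removed" policy.
public class EcPolicyMapSketch {
  private final Map<String, Object> userPolicies = new HashMap<>();
  private final Map<String, Object> enabledPolicies = new HashMap<>();

  void addPolicy(String name, Object policy) {
    userPolicies.put(name, policy);
  }

  void removePolicy(String name) {
    enabledPolicies.remove(name);
    userPolicies.remove(name);  // the missing step this issue is about
  }

  boolean enablePolicy(String name) {
    Object policy = userPolicies.get(name);
    if (policy == null) {
      return false;             // truly removed: cannot be re-enabled
    }
    enabledPolicies.put(name, policy);
    return true;
  }
}
{code}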



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11908) libhdfs++: Authentication failure when first NN of kerberized HA cluster is standby

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078580#comment-16078580
 ] 

Hadoop QA commented on HDFS-11908:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
10s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-11908 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876119/HDFS-11908.HDFS-8707.000.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux c1608f238749 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 821f971 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20190/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20190/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Authentication failure when first NN of kerberized HA cluster is 
> standby
> ---
>
> Key: HDFS-11908
> URL: https://issues.apache.org/jira/browse/HDFS-11908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> 

[jira] [Assigned] (HDFS-12088) Remove a user defined EC Policy,the policy is not removed from the userPolicies map

2017-07-07 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen reassigned HDFS-12088:


Assignee: SammiChen  (was: lufei)

> Remove a user defined EC Policy,the policy is not removed from the 
> userPolicies map
> ---
>
> Key: HDFS-12088
> URL: https://issues.apache.org/jira/browse/HDFS-12088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding, hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12088.001.patch
>
>
> When a user removes a user-defined EC policy, the policy needs to be removed 
> from the userPolicies map, not only from the enabledPolicies map. Otherwise, 
> after removing the user-defined EC policy, the user can recover it by enabling 
> the same EC policy again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block

2017-07-07 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078559#comment-16078559
 ] 

Arpit Agarwal commented on HDFS-12102:
--

Hi [~aramesh2], can you please clarify what you'd like to see changed?

> VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt 
> block
> 
>
> Key: HDFS-12102
> URL: https://issues.apache.org/jira/browse/HDFS-12102
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Affects Versions: 2.8.2
>Reporter: Ashwin Ramesh
>Priority: Minor
> Fix For: 2.8.2
>
>
> When the Volume scanner sees a corrupt block, it restarts the scan and scans 
> the blocks at a much faster rate with a negligible scan period. This is so that 
> it doesn't take 3 weeks to report blocks since a corrupt block means 
> increased likelihood that there are more corrupt blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11437) libhdfs++: Handler for FileSystem async connect can be invoked before successful communication with active NN

2017-07-07 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078556#comment-16078556
 ] 

James Clampffer commented on HDFS-11437:


Proposing a short-term workaround in HDFS-12103.  Once this is fixed, that could 
be removed (if the workaround ends up being committed).

> libhdfs++: Handler for FileSystem async connect can be invoked before 
> successful communication with active NN
> -
>
> Key: HDFS-11437
> URL: https://issues.apache.org/jira/browse/HDFS-11437
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>
> The handler provided to FileSystem::Connect can be invoked as soon as the FS 
> makes a connection to the standby NN rather than waiting until it connects to 
> the active NN.  This allows RPC requests to be enqueued before a real 
> connection is made and if the active NN isn't reachable for some reason the 
> only way to cancel is to delete the FS from another thread which kills all 
> pending requests.
> The underlying issue is that currently the only thing that must happen for 
> the connect handler to be invoked is a successful handshake with a NN.  
> Connecting to the standby NN and receiving a StandbyException satisfies this 
> requirement but it should wait until it is able to get a handshake from the 
> active NN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12103) libhdfs++: Provide workaround to support cancel on filesystem connect until HDFS-11437 is resolved

2017-07-07 Thread James Clampffer (JIRA)
James Clampffer created HDFS-12103:
--

 Summary: libhdfs++: Provide workaround to support cancel on 
filesystem connect until HDFS-11437 is resolved
 Key: HDFS-12103
 URL: https://issues.apache.org/jira/browse/HDFS-12103
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


HDFS-11437 is going to take a non-trivial amount of work to do right.  In the 
meantime it'd be nice to have a way to cancel pending connections (even when 
the FS claimed they are finished).  

The proposed workaround is to relax the rules about when 
FileSystem::CancelPendingConnect can be called, since the FS isn't able to 
properly determine when it's connected anyway.  To determine when the FS has 
connected, you can make some simple RPC call, since that will wait on failover.  
If CancelPendingConnect can be called during that first RPC call, then it will 
effectively be canceling FileSystem::Connect.

Current cancel rules - asterisk on steps where CancelPending is allowed

FileSystem::Connect called
FileSystem communicates with first NN *
FileSystem::Connect returns - even if it hasn't communicated with the active NN

Proposed relaxation
FileSystem::Connect called
FileSystem communicates with first NN*
FileSystem::Connect returns *
FileSystem::GetFileInfo called * -any namenode RPC call will do, ignore perm 
errors
RPC engine blocks until it hits the active or runs out of retries *
FileSystem::GetFileInfo returns

It'd be up to the user to add in the dummy NN RPC call (see the sketch below).  
Once HDFS-11437 is fixed this workaround can be removed.
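
From the caller's point of view the workaround amounts to the pattern below, 
sketched in Java with hypothetical wrapper names (the real interface is the 
libhdfs++ C++ API):

{code}
// Hypothetical sketch of the workaround's usage pattern; Fs stands in for
// the libhdfs++ FileSystem API described above.
public class ConnectCancelSketch {
  interface Fs {
    void connect();                // may return before reaching the active NN
    void getFileInfo(String path); // dummy RPC: blocks on failover/retries
    void cancelPendingConnect();   // now allowed during that first RPC
  }

  static void connectWithTimeout(Fs fs, long timeoutMillis)
      throws InterruptedException {
    fs.connect();
    // The dummy NN RPC call; permission errors would be ignored.
    Thread dummyRpc = new Thread(() -> fs.getFileInfo("/"));
    dummyRpc.start();
    dummyRpc.join(timeoutMillis);
    if (dummyRpc.isAlive()) {
      // Still waiting on the active NN: cancel, which makes the blocked
      // RPC return an "operation canceled" status.
      fs.cancelPendingConnect();
      dummyRpc.join();
    }
  }
}
{code}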





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11908) libhdfs++: Authentication failure when first NN of kerberized HA cluster is standby

2017-07-07 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078542#comment-16078542
 ] 

Anatoli Shein commented on HDFS-11908:
--

+1. The patch looks good to me. Nice and straightforward solution. We do need 
to add some test coverage for the HA and Kerberos stuff soon, though.

> libhdfs++: Authentication failure when first NN of kerberized HA cluster is 
> standby
> ---
>
> Key: HDFS-11908
> URL: https://issues.apache.org/jira/browse/HDFS-11908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-11908.HDFS-8707.000.patch
>
>
> Library won't properly authenticate to kerberized HA cluster if the first 
> namenode it tries to connect to is the standby.  RpcConnection ends up 
> attempting to use simple auth.
> Control flow to connect to NN for the first time:
> # RpcConnection constructed with a pointer to the RpcEngine as the only 
> argument
> # RpcConnection::Connect(server endpoints, auth_info, callback called)
> ** auth_info contains the SASL mechanism to use + the delegation token if we 
> already have one
> Control flow to connect to NN after failover:
> # RpcEngine::NewConnection called, allocates an RpcConnection exactly how 
> step 1 above would
> # RpcEngine::InitializeConnection called, sets event hooks and a string for 
> cluster name
> # Rpc calls sent using RpcConnection::PreEnqueueRequests called to add RPC 
> message that didn't make it on last call due to standby exception
> # RpcConnection::ConnectAndFlush called to send RPC packets. This only takes 
> server endpoints, no auth info
> To fix:
> RpcEngine::InitializeConnection just needs to set RpcConnection::auth_info_ 
> from the existing RpcEngine::auth_info_, even better would be setting this in 
> the constructor so if an RpcConnection exists it can be expected to be in a 
> usable state.  I'll get a diff up once I sort out CI build failures.
> Also really need to get CI test coverage for HA and kerberos because this 
> issue should not have been around for so long.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12101) DFSClient.rename() to unwrap ParentNotDirectoryException; define policy for renames under a file

2017-07-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078538#comment-16078538
 ] 

Steve Loughran commented on HDFS-12101:
---

This patch modifies DFSClient to unwrap the relevant exception, so you now get:

{code}
org.apache.hadoop.fs.ParentNotDirectoryException: 
/test/testRenameFileUnderFileSubdir/file (is not a directory)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:596)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:587)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:562)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1730)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1748)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:606)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:62)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2822)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:988)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:628)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)


at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
at org.apache.hadoop.hdfs.DFSClient.rename(DFSClient.java:1522)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.rename(DistributedFileSystem.java:787)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.rename(AbstractFSContractTestBase.java:372)
at 
org.apache.hadoop.fs.contract.AbstractContractRenameTest.expectRenameUnderFileFails(AbstractContractRenameTest.java:265)
at 
org.apache.hadoop.fs.contract.AbstractContractRenameTest.testRenameFileUnderFileSubdir(AbstractContractRenameTest.java:250)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.ParentNotDirectoryException):
 /test/testRenameFileUnderFileSubdir/file (is not a directory)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:596)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:587)
at 

[jira] [Created] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block

2017-07-07 Thread Ashwin Ramesh (JIRA)
Ashwin Ramesh created HDFS-12102:


 Summary: VolumeScanner throttle dropped (fast scan enabled) when 
there is a corrupt block
 Key: HDFS-12102
 URL: https://issues.apache.org/jira/browse/HDFS-12102
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs
Affects Versions: 2.8.2
Reporter: Ashwin Ramesh
Priority: Minor
 Fix For: 2.8.2


When the VolumeScanner sees a corrupt block, it restarts the scan and scans 
the blocks at a much faster rate with a negligible scan period. This is so that 
it doesn't take 3 weeks to report blocks, since one corrupt block means an 
increased likelihood that there are more corrupt blocks.
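
A minimal sketch of the intended throttle behavior, using hypothetical field 
names and periods rather than the actual VolumeScanner internals:

{code}
// Sketch only: illustrates the proposed throttle drop, not VolumeScanner code.
class ScanThrottle {
  // Roughly three weeks, matching the default full-scan window.
  private static final long NORMAL_SCAN_PERIOD_MS = 21L * 24 * 60 * 60 * 1000;
  // Negligible period used once corruption has been observed.
  private static final long FAST_SCAN_PERIOD_MS = 1_000L;

  private volatile long scanPeriodMs = NORMAL_SCAN_PERIOD_MS;

  void onBlockScanned(boolean corrupt) {
    if (corrupt) {
      // One corrupt block raises the likelihood of others, so drop the
      // throttle and re-verify the whole volume quickly instead of over weeks.
      scanPeriodMs = FAST_SCAN_PERIOD_MS;
    }
  }

  long currentScanPeriodMs() {
    return scanPeriodMs;
  }
}
{code}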



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12101) DFSClient.rename() to unwrap ParentNotDirectoryException; define policy for renames under a file

2017-07-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078537#comment-16078537
 ] 

Steve Loughran commented on HDFS-12101:
---

Current stack when you try to rename a file under a file:
{code}
cmd=delete  src=/test   dst=null  perm=null   proto=rpc

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.ParentNotDirectoryException):
 /test/testRenameFileUnderFileSubdir/file (is not a directory)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:596)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:587)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:562)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1730)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1748)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:606)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:62)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2822)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:988)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:628)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)


at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1484)
at org.apache.hadoop.ipc.Client.call(Client.java:1430)
at org.apache.hadoop.ipc.Client.call(Client.java:1340)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:115)
at com.sun.proxy.$Proxy28.rename(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rename(ClientNamenodeProtocolTranslatorPB.java:554)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:411)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:348)
at com.sun.proxy.$Proxy32.rename(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.rename(DFSClient.java:1520)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.rename(DistributedFileSystem.java:787)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.rename(AbstractFSContractTestBase.java:372)
at 
org.apache.hadoop.fs.contract.AbstractContractRenameTest.expectRenameUnderFileFails(AbstractContractRenameTest.java:263)
at 
org.apache.hadoop.fs.contract.AbstractContractRenameTest.testRenameFileUnderFileSubdir(AbstractContractRenameTest.java:250)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 

[jira] [Updated] (HDFS-12101) DFSClient.rename() to unwrap ParentNotDirectoryException; define policy for renames under a file

2017-07-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-12101:
--
Attachment: HADOOP-14630-001.patch

> DFSClient.rename() to unwrap ParentNotDirectoryException; define policy for 
> renames under a file
> 
>
> Key: HDFS-12101
> URL: https://issues.apache.org/jira/browse/HDFS-12101
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14630-001.patch
>
>
> HADOOP-14630 adds some contract tests trying to create files or rename files 
> *under other files*.
> On a rename under an existing file (or dir under an existing file), HDFS 
> fails throwing 
> {{org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.ParentNotDirectoryException)}}.
>  
> # Is throwing an exception here what people agree is the correct behaviour? 
> If so, it can go into the filesystem spec, with tests set up to expect it and 
> object stores tweaked for consistency. If not, HDFS needs a change.
> # At the very least, HDFS should be unwrapping the exception.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12101) DFSClient.rename() to unwrap ParentNotDirectoryException; define policy for renames under a file

2017-07-07 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-12101:
-

 Summary: DFSClient.rename() to unwrap ParentNotDirectoryException; 
define policy for renames under a file
 Key: HDFS-12101
 URL: https://issues.apache.org/jira/browse/HDFS-12101
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.8.1
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


HADOOP-14630 adds some contract tests trying to create files or rename files 
*under other files*.

On a rename under an existing file (or dir under an existing file), HDFS fails 
throwing 
{{org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.ParentNotDirectoryException)}}.
 

# Is throwing an exception here what people agree is the correct behaviour? If 
so, it can go into the filesystem spec, with tests set up to expect it and 
object stores tweaked for consistency. If not, HDFS needs a change.
# At the very least, HDFS should be unwrapping the exception.
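
For point 2, a minimal sketch of the unwrapping idiom; the placement and the 
surrounding stub are assumptions, not the committed DFSClient change:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.ParentNotDirectoryException;
import org.apache.hadoop.ipc.RemoteException;

// Sketch only: shows the unwrapping idiom, not the actual patch.
class RenameUnwrapExample {
  static void rename(ClientProtocolStub namenode, String src, String dst)
      throws IOException {
    try {
      namenode.rename(src, dst);
    } catch (RemoteException re) {
      // Surface ParentNotDirectoryException directly so callers can catch
      // the specific type instead of a wrapped RemoteException.
      throw re.unwrapRemoteException(ParentNotDirectoryException.class);
    }
  }

  // Hypothetical stand-in for the NameNode RPC proxy.
  interface ClientProtocolStub {
    void rename(String src, String dst) throws IOException;
  }
}
{code}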



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-07 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078519#comment-16078519
 ] 

James Clampffer commented on HDFS-12026:


Latest patch looks good to me.  I'd just like to wait on the +1/commit until 
you have a proof of concept for the test side of it with HDFS-12097 so it's 
clear that it will get built both ways.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11908) libhdfs++: Authentication failure when first NN of kerberized HA cluster is standby

2017-07-07 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-11908:
---
Status: Patch Available  (was: Open)

> libhdfs++: Authentication failure when first NN of kerberized HA cluster is 
> standby
> ---
>
> Key: HDFS-11908
> URL: https://issues.apache.org/jira/browse/HDFS-11908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-11908.HDFS-8707.000.patch
>
>
> Library won't properly authenticate to kerberized HA cluster if the first 
> namenode it tries to connect to is the standby.  RpcConnection ends up 
> attempting to use simple auth.
> Control flow to connect to NN for the first time:
> # RpcConnection constructed with a pointer to the RpcEngine as the only 
> argument
> # RpcConnection::Connect(server endpoints, auth_info, callback called)
> ** auth_info contains the SASL mechanism to use + the delegation token if we 
> already have one
> Control flow to connect to NN after failover:
> # RpcEngine::NewConnection called, allocates an RpcConnection exactly how 
> step 1 above would
> # RpcEngine::InitializeConnection called, sets event hooks and a string for 
> cluster name
> # Rpc calls sent using RpcConnection::PreEnqueueRequests called to add RPC 
> message that didn't make it on last call due to standby exception
> # RpcConnection::ConnectAndFlush called to send RPC packets. This only takes 
> server endpoints, no auth info
> To fix:
> RpcEngine::InitializeConnection just needs to set RpcConnection::auth_info_ 
> from the existing RpcEngine::auth_info_, even better would be setting this in 
> the constructor so if an RpcConnection exists it can be expected to be in a 
> usable state.  I'll get a diff up once I sort out CI build failures.
> Also really need to get CI test coverage for HA and kerberos because this 
> issue should not have been around for so long.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11908) libhdfs++: Authentication failure when first NN of kerberized HA cluster is standby

2017-07-07 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-11908:
---
Attachment: HDFS-11908.HDFS-8707.000.patch

A simple fix to set auth info for new connections.  Once I have some time in 
the next month or two I'd like to do some general improvements to the RPC code, 
including making sure this stuff is set during initialization.  It doesn't make 
sense to call a bunch of setters right after creating a new object.  I'd like 
to start with this patch because it's been well tested (externally) and solves 
a major problem with minimal code change, and then add a test along with the 
improvements for HDFS-11807.  Right now the minidfscluster CI tests don't do HA 
or kerberos auth.

> libhdfs++: Authentication failure when first NN of kerberized HA cluster is 
> standby
> ---
>
> Key: HDFS-11908
> URL: https://issues.apache.org/jira/browse/HDFS-11908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-11908.HDFS-8707.000.patch
>
>
> Library won't properly authenticate to kerberized HA cluster if the first 
> namenode it tries to connect to is the standby.  RpcConnection ends up 
> attempting to use simple auth.
> Control flow to connect to NN for the first time:
> # RpcConnection constructed with a pointer to the RpcEngine as the only 
> argument
> # RpcConnection::Connect(server endpoints, auth_info, callback called)
> ** auth_info contains the SASL mechanism to use + the delegation token if we 
> already have one
> Control flow to connect to NN after failover:
> # RpcEngine::NewConnection called, allocates an RpcConnection exactly how 
> step 1 above would
> # RpcEngine::InitializeConnection called, sets event hooks and a string for 
> cluster name
> # Rpc calls sent using RpcConnection::PreEnqueueRequests called to add RPC 
> message that didn't make it on last call due to standby exception
> # RpcConnection::ConnectAndFlush called to send RPC packets. This only takes 
> server endpoints, no auth info
> To fix:
> RpcEngine::InitializeConnection just needs to set RpcConnection::auth_info_ 
> from the existing RpcEngine::auth_info_, even better would be setting this in 
> the constructor so if an RpcConnection exists it can be expected to be in a 
> usable state.  I'll get a diff up once I sort out CI build failures.
> Also really need to get CI test coverage for HA and kerberos because this 
> issue should not have been around for so long.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12084) Scheduled Count will not decrement when file is deleted before all IBR's received

2017-07-07 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078140#comment-16078140
 ] 

Rushabh S Shah edited comment on HDFS-12084 at 7/7/17 6:29 PM:
---

[~brahmareddy]: thanks for the patch.
taking a look now.


was (Author: shahrs87):
[~brahma]: thanks for the patch.
taking a look now.

> Scheduled Count will not decrement when file is deleted before all IBR's 
> received
> -
>
> Key: HDFS-12084
> URL: https://issues.apache.org/jira/browse/HDFS-12084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12084-001.patch, HDFS-12084-002.patch
>
>
> When small file creation && deletion happens so frequently and the DNs did 
> not report the blocks to the NN before deletion, the scheduled count will 
> keep incrementing and will never be decremented, since the blocks are already 
> deleted.
> *Note*: Every 20 mins this can be rolled, but within those 20 mins the count 
> can grow large because of the many operations.
> When batch IBR is enabled with committed allowed=1, this will be observed 
> more often.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12013) libhdfs++: read with offset at EOF should return 0 bytes instead of error

2017-07-07 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12013:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to HDFS-8707.  Thanks for the patch [~xiaowei.zhu]!

> libhdfs++: read with offset at EOF should return 0 bytes instead of error
> -
>
> Key: HDFS-12013
> URL: https://issues.apache.org/jira/browse/HDFS-12013
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
>Priority: Critical
> Attachments: HDFS-12013.HDFS-8707.000.patch
>
>
> The current behavior is that when you read from offset == file_length, it 
> throws the error:
> "AsyncPreadSome: trying to begin a read past the EOF"
> But a read with the offset at EOF should just return 0 bytes. The above error 
> should only be thrown when offset > file_length.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078431#comment-16078431
 ] 

Hadoop QA commented on HDFS-11201:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-11201 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11201 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843481/HDFS-11201.4.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20189/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Spelling errors in the logging, help, assertions and exception messages
> ---
>
> Key: HDFS-11201
> URL: https://issues.apache.org/jira/browse/HDFS-11201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer, httpfs, namenode, nfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11201.1.patch, HDFS-11201.2.patch, 
> HDFS-11201.3.patch, HDFS-11201.4.patch
>
>
> Found a set of spelling errors in the user-facing code.
> Examples are:
> odlest -> oldest
> Illagal -> Illegal
> bounday -> boundary



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2017-07-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11201:
---
Fix Version/s: (was: 3.0.0-alpha4)
   3.0.0-beta1

> Spelling errors in the logging, help, assertions and exception messages
> ---
>
> Key: HDFS-11201
> URL: https://issues.apache.org/jira/browse/HDFS-11201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer, httpfs, namenode, nfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11201.1.patch, HDFS-11201.2.patch, 
> HDFS-11201.3.patch, HDFS-11201.4.patch
>
>
> Found a set of spelling errors in the user-facing code.
> Examples are:
> odlest -> oldest
> Illagal -> Illegal
> bounday -> boundary



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11942) make new chooseDataNode policy work in more operation like seek, fetch

2017-07-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11942:
---
Fix Version/s: (was: 3.0.0-alpha4)
   3.0.0-beta1

> make new  chooseDataNode policy  work in more operation like seek, fetch
> 
>
> Key: HDFS-11942
> URL: https://issues.apache.org/jira/browse/HDFS-11942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0, 3.0.0-alpha3
>Reporter: Fangyuan Deng
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-11942.0.patch, HDFS-11942.1.patch, 
> ssd-first-disable(default).png, ssd-first-enable.png
>
>
> in the default policy, if a file is ONE_SSD, the client will prefer to read 
> the local disk replica rather than the remote SSD replica.
> but now, PCIe SSDs and 10G Ethernet make remote SSD reads faster than the 
> local disk.
> HDFS-9666 gave us a patch, but the code is not complete and has not been 
> updated for a long time.
> this sub-task issue provides a complete patch, and
> we have tested on three machines [ 32 core cpu, 128G mem , 1000M network, 
> 1.2T HDD, 800G SSD(intel P3600) ].
> with this feature, throughput of an hbase table (ONE_SSD) is double that 
> without this feature



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10480) Add an admin command to list currently open files

2017-07-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10480:
---
Fix Version/s: (was: 3.0.0-alpha4)
   3.0.0-beta1

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Manoj Govindassamy
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-10480.02.patch, HDFS-10480.03.patch, 
> HDFS-10480.04.patch, HDFS-10480.05.patch, HDFS-10480.06.patch, 
> HDFS-10480.07.patch, HDFS-10480-branch-2.01.patch, 
> HDFS-10480-branch-2.8.01.patch, HDFS-10480-trunk-1.patch, 
> HDFS-10480-trunk.patch
>
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It would be nice if we had an admin command to list open files 
> and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12089) Fix ambiguous NN retry log message

2017-07-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12089:
---
Fix Version/s: (was: 3.0.0-alpha4)
   3.0.0-beta1

> Fix ambiguous NN retry log message
> --
>
> Key: HDFS-12089
> URL: https://issues.apache.org/jira/browse/HDFS-12089
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 3.0.0-beta1, 2.8.2
>
> Attachments: HDFS-12089.001.patch
>
>
> {noformat}
> INFO [main] org.apache.hadoop.hdfs.web.WebHdfsFileSystem: Retrying connect to 
> namenode: foobar. Already tried 0 time(s); retry policy is 
> {noformat}
> The message is misleading since it has already tried once. This message 
> indicates the first retry attempt and that it had retried 0 times in the 
> past. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12090) Handling writes from HDFS to Provided storages

2017-07-07 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078421#comment-16078421
 ] 

Rakesh R commented on HDFS-12090:
-

Thanks for posting the design doc. It looks really nice! Some 
comments/questions:

# {quote}
The backup starts only when the StoragePolicy of the specified hdfs directory 
(or any subtree within)
is set to include the PROVIDED StorageType.
{quote}
So, does this mean there would be an ordering of these ops? What if the dir has 
the PROVIDED StoragePolicy set first, and the cmd to create the mount point is 
issued afterwards? I hope the movement will be triggered once the user invokes 
the HDFS-10285 {{dfs#satisfyStoragePolicy}} api. In that case, before calling 
the satisfy api the user has to set the storage policy and the mount, which can 
be in any order, right?
# {quote}
Set the StoragePolicy of hdfs://data/2016/jan/ to fDISK:2, PROVIDED:1. This 
starts backing
up data in hdfs://data/2016/jan/
{quote}
How do you handle newly written files? While writing data to a file, it will 
respect the storage policy and do pipeline writes to the provided store. I am 
confused about how -backup differs from -ephemeral writes.
# {quote}
For FileSystems that share the semantics of directories, permissions etc. with 
HDFS, backing up metadata
involves traversing the subtree under hdfs://srcPath (recursively) and mkdir 
all the directories on the
PROVIDED store, with the necessary permissions.
{quote}
Will this {{backing up metadata}} be the responsibility of the system admin 
before triggering the backup call?
# {quote}
We note that irrespective
of where in the write process the C-DN fails (i.e., which step in Figure 1 or 2 
it fails), re-executing the
backup operation (while potentially wasteful) will overwrite the earlier data 
(in the PROVIDED store or AliasMap),
and will eventually result in a successful backup.
{quote}
The idea looks good. {{BackupManager}} coordinates backing up of individual 
files, which involves a sequence of steps. Step-1: create the file metadata. 
Step-2: write the individual blocks (sequentially or in parallel, based on 
concat_temp_files/append/multipart etc.) on the PROVIDED store. Step-3: perform 
concat/finalize_multipart etc. Here, it has to perform these operations in an 
atomic way, and a failure in between shouldn't leave a non-recoverable state. 
Overwriting is a simple approach, but I'm not sure whether this works with all 
external provided stores. For example, if some external stores won't allow 
overwrite, then our retry logic has to first delete the object and then write 
it back. In that case, we may need to expose interfaces to plug in vendor 
specific logic.
# {quote}
the blob can be named using the absolute path name of the file – for example, a
blob for file foo.txt under /users/user1/home/ directory can be named 
/users/user1/home/foo.txt
(This is the convention used in various blob stores today to represent 
namespaces).
{quote}
I'm just adding a very rare corner case, but it may happen: say the same 
provided store is used by two HDFS clusters and the same path exists in both. 
So, the admin should be careful while configuring the same provided store for 
different clusters. 
# {quote}
Updates to data from HDFS can happen either (a) synchronously (write-through) 
or (b) lazily (write-back). Writing
data synchronously to the PROVIDED store can be done as an extension to the 
existing write pipeline: one of the
3 datanodes in the pipeline writes to the PROVIDED store as part of the pipeline
{quote}
I'd prefer the lazy write-back. We ([~rakeshr]/[~umamaheswararao]) have tested 
the {{s3-fuse}} approach and observed high latency during write ops, which 
results in many client socket timeout exceptions due to the slowness of the 
external cold store.
# {quote}
To unmount a PROVIDED store, the administrator can issue the following command:
{quote}
Many sub-directories would have the provided storage policy set, which results 
in failures during write/append ops. Maybe we could also think about traversing 
and resetting all the dirs' storage policy to default, or providing fallback 
storages for replication. Also, the same problem would occur if somebody sets 
the PROVIDED storage policy without a mount point and then performs pipeline 
writes.
# {quote}Changes to SPS.{quote}
Presently, SPS doesn't do recursive dir scanning to satisfy sub-trees. We 
wanted to keep the block movement simple. Anyway, a user can iteratively scan a 
subtree and invoke the {{dfs#satisfyStoragePolicy}} API if needed. I gather 
this proposal expects recursive traversal of the subtree, right? If yes, we can 
capture this task as well and discuss ways to extend SPS without much 
overhead.
# {{Adding a point about EC files}} - Presently, the supported storage policies 
for EC files are All_SSD, Hot and Cold. Since an EC file has data + parity 
blocks, we may need to consider EC as a special case, introduce a new policy, 
and then back up only the EC {{data}} blocks to the {{PROVIDED}} storage 

[jira] [Commented] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078409#comment-16078409
 ] 

Hadoop QA commented on HDFS-12098:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
19s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12098 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876099/HDFS-12098-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux de496575ec93 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 5fd38a6 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20188/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20188/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20188/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20188/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Datanode is unable to register with scm if scm 

[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support

2017-07-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078405#comment-16078405
 ] 

Weiwei Yang commented on HDFS-12000:


Hi [~msingh], both are very good points, I agree. I think we will need to think 
about how to implement a thread-safe library to generate unique version IDs. For 
the info key, the number of key versions is the number of the key info entries. 
Thank you.

> Ozone: Container : Add key versioning support
> -
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>
> The rest interface of ozone supports versioning of keys. This support comes 
> from the containers and how chunks are managed to support this feature. This 
> JIRA tracks that feature. Will post a detailed design doc so that we can talk 
> about this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support

2017-07-07 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078375#comment-16078375
 ] 

Mukul Kumar Singh commented on HDFS-12000:
--

Thanks for the document [~cheersyang], the idea looks very good. I have some 
questions:

1) I feel that we should keep a monotonically increasing number in place of the 
timestamp. This is needed because, when there are multiple PUT requests for the 
same key at the same time, we might end up assigning the same version. 
Something like AtomicLong would help in generating version ids efficiently (a 
sketch follows below).

2) For the info key, would it make sense to report how many versions of the key 
exist? This would be useful for understanding and debugging space allocation 
issues.
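
A minimal sketch of the generator suggested in point 1, assuming the counter is 
seeded from the highest persisted version id; the class name and seeding 
strategy are hypothetical, not Ozone code:

{code}
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a thread-safe, monotonic version-id generator.
class KeyVersionIdGenerator {
  private final AtomicLong lastId;

  KeyVersionIdGenerator(long seed) {
    // Seed from, e.g., the highest persisted version id so ids stay
    // monotonic across restarts.
    this.lastId = new AtomicLong(seed);
  }

  long nextVersionId() {
    // Concurrent PUTs of the same key still receive distinct, increasing ids,
    // unlike wall-clock timestamps, which can collide.
    return lastId.incrementAndGet();
  }
}
{code}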

> Ozone: Container : Add key versioning support
> -
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>
> The rest interface of ozone supports versioning of keys. This support comes 
> from the containers and how chunks are managed to support this feature. This 
> JIRA tracks that feature. Will post a detailed design doc so that we can talk 
> about this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12100) Ozone: KSM: Allocate key should honour volume quota if quota is set on the volume

2017-07-07 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12100:


 Summary: Ozone: KSM: Allocate key should honour volume quota if 
quota is set on the volume
 Key: HDFS-12100
 URL: https://issues.apache.org/jira/browse/HDFS-12100
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


KeyManagerImpl#allocateKey currently does not check the volume quota before 
allocating a key; this can cause the volume quota to be overrun.

The volume quota needs to be checked before allocating the key in the SCM.
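
A minimal sketch of the missing check, with hypothetical names; the actual 
KeyManagerImpl/SCM wiring may differ:

{code}
import java.io.IOException;

// Hypothetical helper; not the actual KeyManagerImpl code.
final class VolumeQuotaCheck {
  private VolumeQuotaCheck() {}

  static void checkQuota(long quotaBytes, long usedBytes, long requestedBytes)
      throws IOException {
    // quotaBytes <= 0 is treated as "no quota set" in this sketch.
    if (quotaBytes > 0 && usedBytes + requestedBytes > quotaBytes) {
      throw new IOException("Volume quota exceeded: used=" + usedBytes
          + ", requested=" + requestedBytes + ", quota=" + quotaBytes);
    }
  }
}
{code}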



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11696) Fix warnings from Spotbugs in hadoop-hdfs

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078334#comment-16078334
 ] 

Hadoop QA commented on HDFS-11696:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
21s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 334 
unchanged - 2 fixed = 334 total (was 336) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs-client generated 0 
new + 0 unchanged - 2 fixed = 0 total (was 2) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs generated 0 new + 0 
unchanged - 10 fixed = 0 total (was 10) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | 

[jira] [Assigned] (HDFS-12097) libhdfs++: add Clang build and tests to the CI system

2017-07-07 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein reassigned HDFS-12097:


Assignee: Anatoli Shein

> libhdfs++: add Clang build and tests to the CI system
> -
>
> Key: HDFS-12097
> URL: https://issues.apache.org/jira/browse/HDFS-12097
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>
> For better portability we should add a Clang build and tests of libhdfs++ 
> library to the CI system. To accomplish that the Dockerfile will need to be 
> updated with the environment setup, and the maven files should be updated to 
> build libhdfs++  using Clang and then run the tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-3821) Backport HDFS-3626 to branch-1 (Creating file with invalid path can corrupt edit log)

2017-07-07 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077887#comment-16077887
 ] 

Andras Bokor edited comment on HDFS-3821 at 7/7/17 3:48 PM:


branch-1 is EoL. Is this ticket still intended to be fixed?


was (Author: boky01):
branch-1 is EoL. Is this ticket still intended to fix.

> Backport HDFS-3626 to branch-1 (Creating file with invalid path can corrupt 
> edit log)
> -
>
> Key: HDFS-3821
> URL: https://issues.apache.org/jira/browse/HDFS-3821
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>
> Per [Todd's 
> comment|https://issues.apache.org/jira/browse/HDFS-3626?focusedCommentId=13413509=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13413509]
>  this issue affects v1 as well though the problem isn't as obvious because 
> the shell doesn't use the Path(URI) constructor. To test the server side Todd 
> modified the touchz command to use new Path(new URI(src)) and was able to 
> reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12098:
---
Status: Patch Available  (was: In Progress)

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-12098-HDFS-7240.001.patch, thread_dump.log
>
>
> Reproducing steps
> # Start datanode
> # Wait and observe the datanode state; it has connection issues, which is 
> expected
> # Start SCM, expecting the datanode to connect to the scm and the state 
> machine to transition to RUNNING. However, in actuality its state transitions 
> to SHUTDOWN and the datanode enters chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12098 started by Weiwei Yang.
--
> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-12098-HDFS-7240.001.patch, thread_dump.log
>
>
> Reproducing steps
> # Start datanode
> # Wait and observe the datanode state; it has connection issues, which is 
> expected
> # Start SCM, expecting the datanode to connect to the scm and the state 
> machine to transition to RUNNING. However, in actuality its state transitions 
> to SHUTDOWN and the datanode enters chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078213#comment-16078213
 ] 

Weiwei Yang edited comment on HDFS-12098 at 7/7/17 3:44 PM:


This is because the datanode state machine leaks {{VersionEndpointTask}} 
threads. In the case where scm is not yet started, more and more 
{{VersionEndpointTask}} threads keep retrying the connection with scm:

{noformat}
INIT - RUNNING 
 \
GETVERSION
 new VersionEndpointTask submitted - retrying ...
   ... (HB interval)
 new VersionEndpointTask submitted - retrying ...
   ... (HB interval)
 new VersionEndpointTask submitted - retrying ...
   ...
{noformat}

the version endpoint tasks are launched at the HB interval (5s on my env), so 
every 5s a new task is submitted; the retry policy for each getVersion call is 
10 * 1s = 10s, so only one task can finish every 10s. So every 10s there will 
be ONE leaked thread.

Please see [^thread_dump.log]: there are 20 VersionEndpointTask threads in 
WAITING state, and this number keeps increasing.

When scm is up, all pending tasks will be able to connect to scm and the 
getVersion call returns, so each of them advances the state to the next one. 
Since the state is shared in {{EndpointStateMachine}}, it increments more than 
once, so when I review the state changes, it looks like below:

{noformat}
REGISTER
HEARTBEAT
SHUTDOWN
SHUTDOWN
SHUTDOWN
... 
{noformat}

To fix this, instead of using a central ExecutorService carried in 
{{DatanodeStateMachine}}, instead we could init a fixed size of thread pool to 
execute end point tasks, and make sure the thread pool gets shutdown before 
entering next state (at end of await).
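
A minimal, self-contained sketch of that idea (hypothetical names, not the 
actual patch):

{code:java}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch only: run one round of endpoint tasks on a dedicated fixed-size
// pool and shut it down before the state machine enters the next state.
final class EndpointTaskRunner {
  void runOneRound(List<Runnable> endpointTasks, long awaitMillis)
      throws InterruptedException {
    ExecutorService pool =
        Executors.newFixedThreadPool(Math.max(1, endpointTasks.size()));
    try {
      for (Runnable task : endpointTasks) {
        pool.submit(task);        // e.g. a VersionEndpointTask
      }
    } finally {
      pool.shutdown();            // stop accepting new tasks
      if (!pool.awaitTermination(awaitMillis, TimeUnit.MILLISECONDS)) {
        pool.shutdownNow();       // cancel stragglers so threads cannot leak
      }
    }
  }
}
{code}

The point is that shutdown() plus a bounded awaitTermination() guarantees no 
endpoint-task thread survives the state transition.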


was (Author: cheersyang):
This is because the datanode state machine leaks {{VersionEndpointTask}} 
threads. In the case scm is not yet started, more and more 
{{VersionEndpointTask}} threads keep retrying the connection with scm,

{noformat}
INIT - RUNNING 
 \
GETVERSION
 new VersionEndpointTask submitted - retrying ...
   ... (HB interval)
 new VersionEndpointTask submitted - retrying ...
   ... (HB interval)
 new VersionEndpointTask submitted - retrying ...
   ...
{noformat}

The version endpoint tasks are launched at the HB interval (5s in my env), so 
every 5s a new task is submitted; the retry policy for each getVersion call is 
10 * 1s = 10s, so a task can only finish every 10s. As a result, ONE thread 
leaks every 10s.

Please see [^thread_dump.log]: there are 20 VersionEndpointTask threads in 
WAITING state, and this number keeps increasing.

When scm comes up, all pending tasks are able to connect to scm and their 
getVersion calls return, so each of them advances the state to the next one. 
Since the state is shared in {{EndpointStateMachine}}, it is incremented more 
than once, so when I review the state changes they look like below

{noformat}
REGISTER
HEARTBEAT
SHUTDOWN
SHUTDOWN
SHUTDOWN
... 
{noformat}

To fix this, instead of using a central ExecutorService carried in 
{{DatanodeStateMachine}}, we could init a fixed-size thread pool to execute the 
endpoint tasks and make sure the pool is shut down before entering the next 
state (at the end of await).

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-12098-HDFS-7240.001.patch, thread_dump.log
>
>
> Reproducing steps
> # Start datanode
> # Wait and observe the datanode state; it has connection issues, which is expected
> # Start SCM, expecting the datanode to connect to the scm and the state 
> machine to transition to RUNNING. In reality, its state transitions to 
> SHUTDOWN and the datanode enters chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12098:
---
Attachment: HDFS-12098-HDFS-7240.001.patch

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-12098-HDFS-7240.001.patch, thread_dump.log
>
>
> Reproducing steps
> # Start datanode
> # Wait and observe the datanode state; it has connection issues, which is expected
> # Start SCM, expecting the datanode to connect to the scm and the state 
> machine to transition to RUNNING. In reality, its state transitions to 
> SHUTDOWN and the datanode enters chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12099) Block Storage: Add flushid in place of timestamp in DirtyLog file signatures

2017-07-07 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12099:


 Summary: Block Storage: Add flushid in place of timestamp in 
DirtyLog file signatures
 Key: HDFS-12099
 URL: https://issues.apache.org/jira/browse/HDFS-12099
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


If the config "dfs.cblock.block.buffer.flush.interval.seconds" is set to an 
extremely low value, e.g. 1 sec, then multiple dirty log files are generated 
during block buffer flush with the same timestamp signature.

This can be avoided by keeping a notion of a flush id in the BlockBuffer 
manager, which can be incremented in BlockBufferManager#triggerBlockBufferFlush.

When the block server is restarted, the current flush id can easily be 
reconstructed by examining all the dirty log files and choosing *max flushid 
+ 1* as the next flush id.
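
A minimal sketch of the flush-id idea (hypothetical names, not the actual 
patch):

{code:java}
import java.io.File;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: a monotonically increasing flush id replaces the timestamp
// in dirty-log file names, and is recovered as max(existing ids) + 1 on
// restart.
final class FlushIdTracker {
  private final AtomicLong flushId = new AtomicLong();

  // Called from something like BlockBufferManager#triggerBlockBufferFlush.
  String nextDirtyLogName(String prefix) {
    return prefix + "." + flushId.incrementAndGet();
  }

  // On restart, scan the existing dirty logs and resume from max id + 1.
  void recover(File dirtyLogDir) {
    long max = 0;
    File[] logs = dirtyLogDir.listFiles();
    if (logs != null) {
      for (File f : logs) {
        String name = f.getName();
        int dot = name.lastIndexOf('.');
        try {
          max = Math.max(max, Long.parseLong(name.substring(dot + 1)));
        } catch (NumberFormatException ignored) {
          // not a dirty log produced by this scheme; skip it
        }
      }
    }
    flushId.set(max);   // the next incrementAndGet() yields max + 1
  }
}
{code}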



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078213#comment-16078213
 ] 

Weiwei Yang edited comment on HDFS-12098 at 7/7/17 3:15 PM:


This is because the datanode state machine leaks {{VersionEndpointTask}} 
threads. In the case scm is not yet started, more and more 
{{VersionEndpointTask}} threads keep retrying the connection with scm,

{noformat}
INIT - RUNNING 
 \
GETVERSION
 new VersionEndpointTask submitted - retrying ...
   ... (HB interval)
 new VersionEndpointTask submitted - retrying ...
   ... (HB interval)
 new VersionEndpointTask submitted - retrying ...
   ...
{noformat}

The version endpoint tasks are launched at the HB interval (5s in my env), so 
every 5s a new task is submitted; the retry policy for each getVersion call is 
10 * 1s = 10s, so a task can only finish every 10s. As a result, ONE thread 
leaks every 10s.

Please see [^thread_dump.log]: there are 20 VersionEndpointTask threads in 
WAITING state, and this number keeps increasing.

When scm comes up, all pending tasks are able to connect to scm and their 
getVersion calls return, so each of them advances the state to the next one. 
Since the state is shared in {{EndpointStateMachine}}, it is incremented more 
than once, so when I review the state changes they look like below

{noformat}
REGISTER
HEARTBEAT
SHUTDOWN
SHUTDOWN
SHUTDOWN
... 
{noformat}

To fix this, instead of using a central ExecutorService carried in 
{{DatanodeStateMachine}}, we could init a fixed-size thread pool to execute the 
endpoint tasks and make sure the pool is shut down before entering the next 
state (at the end of await).


was (Author: cheersyang):
This is because the datanode state machine leaks {{VersionEndpointTask}} 
threads. In the case scm is not yet started, more and more 
{{VersionEndpointTask}} threads keep retrying the connection with scm,

{noformat}
INIT - RUNNING 
 \
GETVERSION
 new VersionEndpointTask submitted - retrying ...
   ... (HB interval)
 new VersionEndpointTask submitted - retrying ...
   ... (HB interval)
 new VersionEndpointTask submitted - retrying ...
   ...
{noformat}

The version endpoint tasks are launched at the HB interval (5s in my env), so 
every 5s a new task is submitted; the retry policy for each getVersion call is 
10 * 1s = 10s, so a task can only finish every 10s. As a result, ONE thread 
leaks every 10s.

When scm comes up, all pending tasks are able to connect to scm and their 
getVersion calls return, so each of them advances the state to the next one. 
Since the state is shared in {{EndpointStateMachine}}, it is incremented more 
than once, so when I review the state changes they look like below

{noformat}
REGISTER
HEARTBEAT
SHUTDOWN
SHUTDOWN
SHUTDOWN
... 
{noformat}

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: thread_dump.log
>
>
> Reproducing steps
> # Start datanode
> # Wait and observe the datanode state; it has connection issues, which is expected
> # Start SCM, expecting the datanode to connect to the scm and the state 
> machine to transition to RUNNING. In reality, its state transitions to 
> SHUTDOWN and the datanode enters chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078213#comment-16078213
 ] 

Weiwei Yang edited comment on HDFS-12098 at 7/7/17 3:11 PM:


This is because the datanode state machine leaks {{VersionEndpointTask}} 
threads. In the case scm is not yet started, more and more 
{{VersionEndpointTask}} threads keep retrying the connection with scm,

{noformat}
INIT - RUNNING 
 \
GETVERSION
 new VersionEndpointTask submitted - retrying ...
   ... (HB interval)
 new VersionEndpointTask submitted - retrying ...
   ... (HB interval)
 new VersionEndpointTask submitted - retrying ...
   ...
{noformat}

The version endpoint tasks are launched at the HB interval (5s in my env), so 
every 5s a new task is submitted; the retry policy for each getVersion call is 
10 * 1s = 10s, so a task can only finish every 10s. As a result, ONE thread 
leaks every 10s.

When scm comes up, all pending tasks are able to connect to scm and their 
getVersion calls return, so each of them advances the state to the next one. 
Since the state is shared in {{EndpointStateMachine}}, it is incremented more 
than once, so when I review the state changes they look like below

{noformat}
REGISTER
HEARTBEAT
SHUTDOWN
SHUTDOWN
SHUTDOWN
... 
{noformat}


was (Author: cheersyang):
This is because the datanode state machine leaks {{VersionEndpointTask}} 
threads. In the case scm is not yet started, more and more 
{{VersionEndpointTask}} threads keep retrying the connection with scm,

{noformat}
INIT - RUNNING 
 \
GETVERSION
   executor.execute(new VersionEndpointTask()) - retry on 
getVersion ...
   ... (HB interval)
   executor.execute(new VersionEndpointTask()) - retry on 
getVersion ...
   ... (HB interval)
   executor.execute(new VersionEndpointTask()) - retry on 
getVersion ...
   ...
{noformat}

The version endpoint tasks are launched at the HB interval (5s in my env), so 
every 5s a new task is submitted; the retry policy for each getVersion call is 
10 * 1s = 10s, so a task can only finish every 10s. As a result, ONE thread 
leaks every 10s.

When scm comes up, all pending tasks are able to connect to scm and their 
getVersion calls return, so each of them advances the state to the next one. 
Since the state is shared in {{EndpointStateMachine}}, it is incremented more 
than once, so when I review the state changes they look like below

{noformat}
REGISTER
HEARTBEAT
SHUTDOWN
SHUTDOWN
SHUTDOWN
... 
{noformat}

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: thread_dump.log
>
>
> Reproducing steps
> # Start datanode
> # Wait and observe the datanode state; it has connection issues, which is expected
> # Start SCM, expecting the datanode to connect to the scm and the state 
> machine to transition to RUNNING. In reality, its state transitions to 
> SHUTDOWN and the datanode enters chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12098:
---
Attachment: thread_dump.log

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: thread_dump.log
>
>
> Reproducing steps
> # Start datanode
> # Wait and observe the datanode state; it has connection issues, which is expected
> # Start SCM, expecting the datanode to connect to the scm and the state 
> machine to transition to RUNNING. In reality, its state transitions to 
> SHUTDOWN and the datanode enters chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078213#comment-16078213
 ] 

Weiwei Yang commented on HDFS-12098:


This is because the datanode state machine leaks {{VersionEndpointTask}} 
threads. In the case scm is not yet started, more and more 
{{VersionEndpointTask}} threads keep retrying the connection with scm,

{noformat}
INIT - RUNNING 
 \
GETVERSION
   executor.execute(new VersionEndpointTask()) - retry on 
getVersion ...
   ... (HB interval)
   executor.execute(new VersionEndpointTask()) - retry on 
getVersion ...
   ... (HB interval)
   executor.execute(new VersionEndpointTask()) - retry on 
getVersion ...
   ...
{noformat}

The version endpoint tasks are launched at the HB interval (5s in my env), so 
every 5s a new task is submitted; the retry policy for each getVersion call is 
10 * 1s = 10s, so a task can only finish every 10s. As a result, ONE thread 
leaks every 10s.

When scm comes up, all pending tasks are able to connect to scm and their 
getVersion calls return, so each of them advances the state to the next one. 
Since the state is shared in {{EndpointStateMachine}}, it is incremented more 
than once, so when I review the state changes they look like below

{noformat}
REGISTER
HEARTBEAT
SHUTDOWN
SHUTDOWN
SHUTDOWN
... 
{noformat}

> Ozone: Datanode is unable to register with scm if scm starts later
> --
>
> Key: HDFS-12098
> URL: https://issues.apache.org/jira/browse/HDFS-12098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
>
> Reproducing steps
> # Start datanode
> # Wait and observe the datanode state; it has connection issues, which is expected
> # Start SCM, expecting the datanode to connect to the scm and the state 
> machine to transition to RUNNING. In reality, its state transitions to 
> SHUTDOWN and the datanode enters chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11696) Fix warnings from Spotbugs in hadoop-hdfs

2017-07-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11696:
-
Attachment: HDFS-11696.009.patch

Thanks [~andrew.wang] for the review.
Added a unit test for the {{-format}} option parsing to ensure the sub-options 
{{-nonInteractive}} and {{-force}} can be parsed as expected.
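
A self-contained sketch of the shape of such a test, with a hypothetical parse 
helper standing in for the real NameNode argument parser:

{code:java}
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class TestFormatOptionParsing {

  // Hypothetical stand-in for the production parser, shown only to make
  // the sketch self-contained.
  private static boolean[] parse(String... args) {
    boolean force = false;
    boolean interactive = true;
    for (String a : args) {
      if ("-force".equalsIgnoreCase(a)) {
        force = true;
      } else if ("-nonInteractive".equalsIgnoreCase(a)) {
        interactive = false;
      }
    }
    return new boolean[] {force, interactive};
  }

  @Test
  public void testFormatSubOptions() {
    boolean[] opts = parse("-format", "-force");
    assertTrue(opts[0]);       // -force recognized
    assertTrue(opts[1]);       // still interactive by default

    opts = parse("-format", "-nonInteractive", "-force");
    assertTrue(opts[0]);       // both sub-options recognized together
    assertFalse(opts[1]);      // -nonInteractive recognized
  }
}
{code}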

> Fix warnings from Spotbugs in hadoop-hdfs
> -
>
> Key: HDFS-11696
> URL: https://issues.apache.org/jira/browse/HDFS-11696
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: findbugsHtml.html, HADOOP-14337.001.patch, 
> HADOOP-14337.002.patch, HADOOP-14337.003.patch, HDFS-11696.004.patch, 
> HDFS-11696.005.patch, HDFS-11696.006.patch, HDFS-11696.007.patch, 
> HDFS-11696.008.patch, HDFS-11696.009.patch
>
>
> There are 12 findbugs issues in total generated after switching from Findbugs 
> to Spotbugs across the project in HADOOP-14316. This JIRA focuses on cleaning 
> up the warnings within the scope of HDFS (mainly in hadoop-hdfs and 
> hadoop-hdfs-client).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078170#comment-16078170
 ] 

Hadoop QA commented on HDFS-12026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
16s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876087/HDFS-12026.HDFS-8707.005.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 6520d1b2f8f4 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 0d2d073 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20186/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20186/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: 

[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078158#comment-16078158
 ] 

Hadoop QA commented on HDFS-12026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
40s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
4s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
15s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876084/HDFS-12026.HDFS-8707.004.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux f137b14a7ad9 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 0d2d073 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20185/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20185/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: 

[jira] [Commented] (HDFS-12084) Scheduled Count will not decrement when file is deleted before all IBR's received

2017-07-07 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078140#comment-16078140
 ] 

Rushabh S Shah commented on HDFS-12084:
---

[~brahma]: thanks for the patch.
Taking a look now.

> Scheduled Count will not decrement when file is deleted before all IBR's 
> received
> -
>
> Key: HDFS-12084
> URL: https://issues.apache.org/jira/browse/HDFS-12084
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12084-001.patch, HDFS-12084-002.patch
>
>
> When small-file creation and deletion happen very frequently and the DNs have 
> not reported the blocks to the NN before deletion, the scheduled count keeps 
> incrementing and is not decremented even though the blocks are deleted.
> *Note*: every 20 mins this can be rolled, but within those 20 mins the count 
> can grow large given so many operations.
> When batchIBR is enabled with committed allowed=1, this will be observed more.
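
For illustration, a sketch of the accounting gap described above (hypothetical 
names, not the actual NameNode code):

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: the counter is incremented when a replica is scheduled and
// normally decremented when the corresponding IBR arrives.  If the file is
// deleted before the IBR, the decrement never happens and the count stays
// inflated until the 20-minute roll.
final class ScheduledBlockCounter {
  private final AtomicInteger scheduled = new AtomicInteger();

  void onReplicaScheduled() {
    scheduled.incrementAndGet();
  }

  void onIncrementalBlockReport() {
    scheduled.decrementAndGet();
  }

  // The missing piece this issue suggests: also decrement when a pending
  // block is removed by a delete, so the count converges.
  void onBlockDeletedBeforeIbr() {
    scheduled.decrementAndGet();
  }

  int value() {
    return scheduled.get();
  }
}
{code}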



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-07 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-12026:
-
Attachment: HDFS-12026.HDFS-8707.005.patch

Improved the error message for the compiler compatibility check.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-07 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-12026:
-
Attachment: HDFS-12026.HDFS-8707.004.patch

Fixed the whitespace issue.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12098) Ozone: Datanode is unable to register with scm if scm starts later

2017-07-07 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12098:
--

 Summary: Ozone: Datanode is unable to register with scm if scm 
starts later
 Key: HDFS-12098
 URL: https://issues.apache.org/jira/browse/HDFS-12098
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, ozone, scm
Reporter: Weiwei Yang
Assignee: Weiwei Yang
Priority: Critical


Reproducing steps
# Start datanode
# Wait and observe the datanode state; it has connection issues, which is expected
# Start SCM, expecting the datanode to connect to the scm and the state machine 
to transition to RUNNING. In reality, its state transitions to SHUTDOWN and the 
datanode enters chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077948#comment-16077948
 ] 

Hadoop QA commented on HDFS-12069:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 9 
unchanged - 1 fixed = 9 total (was 10) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestRatisManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12069 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876058/HDFS-12069-HDFS-7240.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 5c533092bda5 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 5fd38a6 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20184/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20184/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20184/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Create a general abstraction for metadata store
> --
>
> Key: HDFS-12069
> URL: 

[jira] [Commented] (HDFS-3821) Backport HDFS-3626 to branch-1 (Creating file with invalid path can corrupt edit log)

2017-07-07 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077887#comment-16077887
 ] 

Andras Bokor commented on HDFS-3821:


branch-1 is EoL. Is this ticket still intended to be fixed?

> Backport HDFS-3626 to branch-1 (Creating file with invalid path can corrupt 
> edit log)
> -
>
> Key: HDFS-3821
> URL: https://issues.apache.org/jira/browse/HDFS-3821
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>
> Per [Todd's 
> comment|https://issues.apache.org/jira/browse/HDFS-3626?focusedCommentId=13413509=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13413509]
>  this issue affects v1 as well though the problem isn't as obvious because 
> the shell doesn't use the Path(URI) constructor. To test the server side Todd 
> modified the touchz command to use new Path(new URI(src)) and was able to 
> reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12069:
---
Attachment: HDFS-12069-HDFS-7240.008.patch

> Ozone: Create a general abstraction for metadata store
> --
>
> Key: HDFS-12069
> URL: https://issues.apache.org/jira/browse/HDFS-12069
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12069-HDFS-7240.001.patch, 
> HDFS-12069-HDFS-7240.002.patch, HDFS-12069-HDFS-7240.003.patch, 
> HDFS-12069-HDFS-7240.004.patch, HDFS-12069-HDFS-7240.005.patch, 
> HDFS-12069-HDFS-7240.006.patch, HDFS-12069-HDFS-7240.007.patch, 
> HDFS-12069-HDFS-7240.008.patch
>
>
> Create a general abstraction for the metadata store so that we can plug in 
> other key-value stores to host ozone metadata. Currently only LevelDB is 
> implemented; we want to support RocksDB as it provides more production-ready 
> features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077841#comment-16077841
 ] 

Hadoop QA commented on HDFS-12069:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 9 
unchanged - 1 fixed = 10 total (was 10) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12069 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876050/HDFS-12069-HDFS-7240.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 4dc8687be308 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 5fd38a6 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20183/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20183/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20183/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20183/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20183/artifact/patchprocess/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javadoc | 

[jira] [Commented] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077819#comment-16077819
 ] 

Weiwei Yang commented on HDFS-12069:


Hi [~xyao]

I've replaced most of {{LevelDBStore}} with the general {{MetadataStore}} in the 
v7 patch, please take a look. There are only 2 places (which I am not quite 
familiar with) that are not included in this patch:

# LevelDBStore references in cblocks
# LevelDBStore references in OzoneMetadataManager which is only used by 
LocalStorageHandler

I think we can track them in separate tasks.

Thank you.
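
For readers following along, a rough sketch of the shape such an abstraction 
could take (hypothetical signatures; the interface in the patch may differ):

{code:java}
import java.io.Closeable;
import java.io.IOException;

// Sketch only: byte[]-keyed operations that both a LevelDB-backed and a
// RocksDB-backed implementation can satisfy; callers program against the
// interface, and the concrete store is chosen by configuration.
public interface MetadataStore extends Closeable {
  void put(byte[] key, byte[] value) throws IOException;

  /** @return the stored value, or null if the key is absent. */
  byte[] get(byte[] key) throws IOException;

  void delete(byte[] key) throws IOException;

  boolean isEmpty() throws IOException;
}
{code}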

> Ozone: Create a general abstraction for metadata store
> --
>
> Key: HDFS-12069
> URL: https://issues.apache.org/jira/browse/HDFS-12069
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12069-HDFS-7240.001.patch, 
> HDFS-12069-HDFS-7240.002.patch, HDFS-12069-HDFS-7240.003.patch, 
> HDFS-12069-HDFS-7240.004.patch, HDFS-12069-HDFS-7240.005.patch, 
> HDFS-12069-HDFS-7240.006.patch, HDFS-12069-HDFS-7240.007.patch
>
>
> Create a general abstraction for the metadata store so that we can plug in 
> other key-value stores to host ozone metadata. Currently only LevelDB is 
> implemented; we want to support RocksDB as it provides more production-ready 
> features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12069:
---
Attachment: HDFS-12069-HDFS-7240.007.patch

Rebased to the latest code base and fixed the checkstyle issue.

> Ozone: Create a general abstraction for metadata store
> --
>
> Key: HDFS-12069
> URL: https://issues.apache.org/jira/browse/HDFS-12069
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12069-HDFS-7240.001.patch, 
> HDFS-12069-HDFS-7240.002.patch, HDFS-12069-HDFS-7240.003.patch, 
> HDFS-12069-HDFS-7240.004.patch, HDFS-12069-HDFS-7240.005.patch, 
> HDFS-12069-HDFS-7240.006.patch, HDFS-12069-HDFS-7240.007.patch
>
>
> Create a general abstraction for the metadata store so that we can plug in 
> other key-value stores to host ozone metadata. Currently only LevelDB is 
> implemented; we want to support RocksDB as it provides more production-ready 
> features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077806#comment-16077806
 ] 

Hadoop QA commented on HDFS-12069:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 6 unchanged - 0 fixed = 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 9 
unchanged - 1 fixed = 9 total (was 10) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12069 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876039/HDFS-12069-HDFS-7240.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 7958a4df5646 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 8efb1fa |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20182/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20182/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20182/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20182/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This 

[jira] [Updated] (HDFS-11679) Ozone: SCM CLI: Implement list container command

2017-07-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11679:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Just committed to the feature branch, thanks [~yuanbo] for the contribution!

> Ozone: SCM CLI: Implement list container command
> 
>
> Key: HDFS-11679
> URL: https://issues.apache.org/jira/browse/HDFS-11679
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
>  Labels: command-line
> Fix For: HDFS-7240
>
> Attachments: HDFS-11679-HDFS-7240.001.patch, 
> HDFS-11679-HDFS-7240.002.patch, HDFS-11679-HDFS-7240.003.patch, 
> HDFS-11679-HDFS-7240.004.patch, HDFS-11679-HDFS-7240.005.patch
>
>
> Implement the command to list containers
> {code}
> hdfs scm -container list -start  [-count <100> | -end 
> ]{code}
> Lists all containers known to SCM. The option -start allows the listing to 
> start from a specified container, and -count controls the number of entries 
> returned; -count is mutually exclusive with the -end option, which returns 
> keys in the -start to -end range.
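
An illustrative sketch of the documented option contract (hypothetical names, 
not the actual CLI code):

{code:java}
// Sketch only: -count and -end are mutually exclusive, and the listing
// starts from the container given by -start.
final class ListContainerOptions {
  static void validate(String start, Integer count, String end) {
    if (count != null && end != null) {
      throw new IllegalArgumentException(
          "-count and -end are mutually exclusive");
    }
    if (start == null) {
      throw new IllegalArgumentException("-start is required");
    }
  }
}
{code}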



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11679) Ozone: SCM CLI: Implement list container command

2017-07-07 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077760#comment-16077760
 ] 

Weiwei Yang commented on HDFS-11679:


V5 patch looks good to me, +1. I will commit this shortly and will fix the 2 
minor checkstyle issues during the commit. Thanks [~yuanbo].

> Ozone: SCM CLI: Implement list container command
> 
>
> Key: HDFS-11679
> URL: https://issues.apache.org/jira/browse/HDFS-11679
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
>  Labels: command-line
> Attachments: HDFS-11679-HDFS-7240.001.patch, 
> HDFS-11679-HDFS-7240.002.patch, HDFS-11679-HDFS-7240.003.patch, 
> HDFS-11679-HDFS-7240.004.patch, HDFS-11679-HDFS-7240.005.patch
>
>
> Implement the command to list containers
> {code}
> hdfs scm -container list -start  [-count <100> | -end 
> ]{code}
> Lists all containers known to SCM. The option -start allows the listing to 
> start from a specified container, and -count controls the number of entries 
> returned; -count is mutually exclusive with the -end option, which returns 
> keys in the -start to -end range.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12069) Ozone: Create a general abstraction for metadata store

2017-07-07 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12069:
---
Attachment: HDFS-12069-HDFS-7240.006.patch

> Ozone: Create a general abstraction for metadata store
> --
>
> Key: HDFS-12069
> URL: https://issues.apache.org/jira/browse/HDFS-12069
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12069-HDFS-7240.001.patch, 
> HDFS-12069-HDFS-7240.002.patch, HDFS-12069-HDFS-7240.003.patch, 
> HDFS-12069-HDFS-7240.004.patch, HDFS-12069-HDFS-7240.005.patch, 
> HDFS-12069-HDFS-7240.006.patch
>
>
> Create a general abstraction for the metadata store so that we can plug in 
> other key-value stores to host ozone metadata. Currently only LevelDB is 
> implemented; we want to support RocksDB as it provides more production-ready 
> features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org