[jira] [Commented] (HDFS-15099) [SBN Read] getBlockLocations() should throw ObserverRetryOnActiveException on an attempt to change aTime on ObserverNode

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012535#comment-17012535
 ] 

Hadoop QA commented on HDFS-15099:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.10 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
28s{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
45s{color} | {color:green} branch-2.10 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
28s{color} | {color:green} branch-2.10 passed with JDK v1.8.0_232 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
53s{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
46s{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} branch-2.10 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} branch-2.10 passed with JDK v1.8.0_232 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
5s{color} | {color:green} the patch passed with JDK v1.8.0_232 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 49s{color} | {color:orange} root: The patch generated 1 new + 56 unchanged - 
0 fixed = 57 total (was 56) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_232 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
30s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}218m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:a969cad0a12 |
| JIRA Issue | HDFS-15099 |
| JIRA Patch URL | 
https://issues.apache.org/jira/s

[jira] [Commented] (HDFS-14578) AvailableSpaceBlockPlacementPolicy always prefers local node

2020-01-09 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012530#comment-17012530
 ] 

Vinayakumar B commented on HDFS-14578:
--

[HDFS-14578-07.patch|https://issues.apache.org/jira/secure/attachment/12990467/HDFS-14578-07.patch]
 Looks good. 
+1
Checkstyles can be ignored.

> AvailableSpaceBlockPlacementPolicy always prefers local node
> 
>
> Key: HDFS-14578
> URL: https://issues.apache.org/jira/browse/HDFS-14578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14578-02.patch, HDFS-14578-03.patch, 
> HDFS-14578-04.patch, HDFS-14578-05.patch, HDFS-14578-06.patch, 
> HDFS-14578-07.patch, HDFS-14578-WIP-01.patch, HDFS-14758-01.patch
>
>
> It looks like AvailableSpaceBlockPlacementPolicy prefers local disk just like 
> in the BlockPlacementPolicyDefault
>  
> As Yongjun mentioned in 
> [HDFS-8131|https://issues.apache.org/jira/browse/HDFS-8131?focusedCommentId=16558739&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16558739],
>  
> {quote}Class AvailableSpaceBlockPlacementPolicy extends 
> BlockPlacementPolicyDefault. But it doesn't change the behavior of choosing 
> the first node in BlockPlacementPolicyDefault, so even with this new feature, 
> the local DN is always chosen as the first DN (of course when it is not 
> excluded), and the new feature only changes the selection of the rest of the 
> two DNs.
> {quote}
> I'm file this Jira as I groom Cloudera's internal Jira and found this 
> unreported issue. We do have a customer hitting this problem. I don't have a 
> fix, but thought it would be beneficial to report it to Apache Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15110) HttpFS : post requests are not supported for path "/"

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012523#comment-17012523
 ] 

Hadoop QA commented on HDFS-15110:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m  
3s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15110 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990481/HDFS-15110.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ac879bf36a84 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0315ef8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28637/testReport/ |
| Max. process+thread count | 606 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28637/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> HttpFS :  post requests are not supported for path "/"
> --
>
> Key: HDFS-15110

[jira] [Commented] (HDFS-14787) NameNode error

2020-01-09 Thread Tao Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012519#comment-17012519
 ] 

Tao Yang commented on HDFS-14787:
-

Hi, [~lucao], any updates about this issue? We have found a similar error, you 
can see HDFS-15105 for details, Thanks.

> NameNode error 
> ---
>
> Key: HDFS-14787
> URL: https://issues.apache.org/jira/browse/HDFS-14787
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Cao, Lionel
>Priority: Major
> Attachments: core-site.xml, 
> hadoop-cmf-hdfs-NAMENODE-smc-nn02.jq.log.out.20190827, hdfs-site.xml, 
> move&concat.java, rt-Append.txt
>
>
> Hi committee,
> We encountered a NN error as below,
> The primary NN was shut down last Thursday and we recover it by remove some 
> OP in the edit log..  But the standby NN was shut down again yesterday by the 
> same error...
> could you pls help address the possible root cause?
>  
> Attach some error log:
> Full log and NameNode configuration pls refer to the attachments.
> Besides, I have attached some java code which could cause the error,
>  # We do some append action in spark streaming program (rt-Append.txt) which 
> caused the primary NN shutdown last Thursday
>  # We do some move & concat operation in data convert 
> program(move&concat.java) which caused the standby NN shutdown yesterday
> 2019-08-27 09:51:12,409 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 
> 766146/953617 transactions completed. (80%)2019-08-27 09:51:12,409 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 
> 766146/953617 transactions completed. (80%)2019-08-27 09:51:12,858 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication 
> from 2 to 2 for 
> /user/smcjob/.sparkStaging/application_1561429828507_20423/__spark_libs__2381992047634476351.zip2019-08-27
>  09:51:12,870 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: 
> Increasing replication from 2 to 2 for 
> /user/smcjob/.sparkStaging/application_1561429828507_20423/oozietest2-0.0.1-SNAPSHOT.jar2019-08-27
>  09:51:12,898 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: 
> Increasing replication from 2 to 2 for 
> /user/smcjob/.sparkStaging/application_1561429828507_20423/__spark_conf__.zip2019-08-27
>  09:51:12,910 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: 
> Increasing replication from 2 to 2 for 
> /user/smctest/.sparkStaging/application_1561429828507_20424/__spark_libs__8875310030853528804.zip2019-08-27
>  09:51:12,927 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: 
> Increasing replication from 2 to 2 for 
> /user/smctest/.sparkStaging/application_1561429828507_20424/__spark_conf__.zip2019-08-27
>  09:51:13,777 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: 
> replaying edit log: 857745/953617 transactions completed. (90%)2019-08-27 
> 09:51:14,035 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: 
> Increasing replication from 2 to 2 for 
> /user/smc_ss/.sparkStaging/application_1561429828507_20425/__spark_libs__749681005558653.zip2019-08-27
>  09:51:14,067 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: 
> Increasing replication from 2 to 2 for 
> /user/smc_ss/.sparkStaging/application_1561429828507_20426/__spark_libs__7479542421029947753.zip2019-08-27
>  09:51:14,070 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: 
> Increasing replication from 2 to 2 for 
> /user/smctest/.sparkStaging/application_1561429828507_20428/__spark_libs__7647933078788028649.zip2019-08-27
>  09:51:14,075 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: 
> Encountered exception on operation CloseOp [length=0, inodeId=0, 
> path=/**/v2-data-20190826.mayfly.data, replication=2, 
> mtime=1566870616821, atime=1566870359230, blockSize=134217728, 
> blocks=[blk_1270599798_758966421, blk_1270599852_758967928, 
> blk_1270601282_759026903, blk_1270602443_759027052, blk_1270602446_759061086, 
> blk_1270603081_759050235], permissions=smc_ss:smc_ss:rw-r--r--, 
> aclEntries=null, clientName=, clientMachine=, overwrite=false, 
> storagePolicyId=0, erasureCodingPolicyId=0, opCode=OP_CLOSE, 
> txid=4359520942]java.io.IOException: Mismatched block IDs or generation 
> stamps, attempting to replace block blk_1270602446_759027503 with 
> blk_1270602446_759061086 as block # 4/6 of 
> /**/v2-data-20190826.mayfly.data at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1096)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:452)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEdi

[jira] [Commented] (HDFS-15105) Standby NN exits and fails to restart due to edit log corruption

2020-01-09 Thread Tao Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012512#comment-17012512
 ] 

Tao Yang commented on HDFS-15105:
-

Hi, [~xiaochen], I noticed that you have fixed an issue which might get a 
similar error in HDFS-12369, could you please take a look at this issue? Hope 
to hear your thoughts, Thanks.

> Standby NN exits and fails to restart due to edit log corruption
> 
>
> Key: HDFS-15105
> URL: https://issues.apache.org/jira/browse/HDFS-15105
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Tao Yang
>Priority: Critical
>
> We found a issue that Standby NN exited and failed to restart until we 
> resolved the edit log corruption.
>  Error logs:
> {noformat}
> java.io.IOException: Mismatched block IDs or generation stamps, attempting to 
> replace block blk_74288647857_73526148211 with blk_74288647857_73526377369 as 
> block # 15/17 of 
> /maindump/mainv10/dump_online/lasttable/20200105015500/part-319
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1019)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:431)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:234)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:885)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:866)
>         at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:234)
>         at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:342)
>         at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:295)
>         at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:312)
>         at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:455)
>         at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:308)
> {noformat}
> Related edit log transactions of the same file:
> {noformat}
> 1. TXID=444341628498  time=1578251449632
> OP_UPDATE_BLOCKS
> blocks: ... blk_74288647857_73526148211   blk_74454090866_73526215536
> 2. TXID=444342382774   time=1578251520740
> OP_REASSIGN_LEASE
> 3. TXID=444342401216  time=1578251522779
> OP_CLOSE
> blocks: ... blk_74288647857_73526377369   blk_74454090866_73526374095
> 4. TXID=444342401394
> OP_SET_GENSTAMP_V2 
> generate stamp: 73526377369
> 5. TXID=444342401395  time=1578251522835
> OP_TRUNCATE
> 6. TXID=444342402176  time=1578251523246
> OP_CLOSE
> blocks: ... blk_74288647857_73526377369 
> {noformat}
> According to the edit logs, it's wield to see that stamp(73526377369) was 
> generated in transaction 4 but already used in transaction 3, and for 
> transaction 3 there should be only the last block changed but in fact the 
> last two blocks are both changed.
> This problem might be produced in a complex scenario that truncate operation 
> immediately followed the recover-lease operation for the same file. A 
> suspicious point is that between creation and being written for transaction 
> 3, stamp of the second last block was updated when committing block 
> synchronization caused by the truncate operation.
> Related calling stack is as follows: 
> {noformat}
> NameNodeRpcServer#commitBlockSynchronization
>   FSNamesystem#commitBlockSynchronization
>     // update last block
>     if(!copyTruncate) {
>       storedBlock.setGenerationStamp(newgenerationstamp); //updated the stamp 
> of the second last block in transaction 3 before being written
>       storedBlock.setNumBytes(newlength);
>     }
> {noformat}
> Any comments are welcome. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15110) HttpFS : post requests are not supported for path "/"

2020-01-09 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-15110:
-
Attachment: HDFS-15110.002.patch

> HttpFS :  post requests are not supported for path "/"
> --
>
> Key: HDFS-15110
> URL: https://issues.apache.org/jira/browse/HDFS-15110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15110.001.patch, HDFS-15110.002.patch
>
>
> POST requests in HttpFS with  path as "/" were not supported .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-09 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Status: Patch Available  (was: Open)

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.1.3, 3.2.1, 3.0.3, 3.3.0
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS uses ConfigurationWithLogging instead of Configuration,  which 
> logs a configuration object each access.  It's more like a development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-09 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Status: Open  (was: Patch Available)

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.1.3, 3.2.1, 3.0.3, 3.3.0
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS uses ConfigurationWithLogging instead of Configuration,  which 
> logs a configuration object each access.  It's more like a development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15110) HttpFS : post requests are not supported for path "/"

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012491#comment-17012491
 ] 

Hadoop QA commented on HDFS-15110:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-httpfs generated 1 new 
+ 5 unchanged - 0 fixed = 6 total (was 5) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
42s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15110 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990479/HDFS-15110.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 56e327e14375 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0315ef8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28636/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28636/testReport/ |
| Max. process+thread count | 644 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28636/console |
| Powered by | Apache Yetus 0.8.0   h

[jira] [Commented] (HDFS-14578) AvailableSpaceBlockPlacementPolicy always prefers local node

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012481#comment-17012481
 ] 

Hadoop QA commented on HDFS-14578:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 459 unchanged - 0 fixed = 466 total (was 459) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-14578 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990467/HDFS-14578-07.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux fe19d100ba0f 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 782c055 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28634/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28634/artifact/o

[jira] [Updated] (HDFS-15110) HttpFS : post requests are not supported for path "/"

2020-01-09 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-15110:
-
Attachment: HDFS-15110.001.patch
Status: Patch Available  (was: Open)

> HttpFS :  post requests are not supported for path "/"
> --
>
> Key: HDFS-15110
> URL: https://issues.apache.org/jira/browse/HDFS-15110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15110.001.patch
>
>
> POST requests in HttpFS with  path as "/" were not supported .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15102) HttpFS: put requests are not supported for path "/"

2020-01-09 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012454#comment-17012454
 ] 

hemanthboyina commented on HDFS-15102:
--

thanks for the review [~elgoiri] [~tasanuma]  , thanks for the commit 
[~tasanuma]
{quote}BTW, I think we also need postRoot, like when executing UNSETECPOLICY 
with "/" path
{quote}
have raised HDFS-15110 for this 

> HttpFS: put requests are not supported for path "/"
> ---
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15102.001.patch
>
>
> PUT requests in HttpFS with  path as "/" were not supported .
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15100) RBF: Print stacktrace when DFSRouter fails to fetch/parse JMX output from NameNode

2020-01-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012448#comment-17012448
 ] 

Hudson commented on HDFS-15100:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17842 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17842/])
HDFS-15100. RBF: Print stacktrace when DFSRouter fails to fetch/parse 
(tasanuma: rev 0315ef844862ee863d646b562ba6d8889876ffa9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java


> RBF: Print stacktrace when DFSRouter fails to fetch/parse JMX output from 
> NameNode
> --
>
> Key: HDFS-15100
> URL: https://issues.apache.org/jira/browse/HDFS-15100
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: supportability
> Fix For: 3.3.0
>
>
> When DFSRouter fails to fetch or parse JMX output from NameNode, it prints 
> only the error message. Therefore we had to modify the source code to print 
> the stacktrace of the exception to find the root cause.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15100) RBF: Print stacktrace when DFSRouter fails to fetch/parse JMX output from NameNode

2020-01-09 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-15100:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for your contribution, [~aajisaka]. Thanks for your 
review, [~elgoiri].

> RBF: Print stacktrace when DFSRouter fails to fetch/parse JMX output from 
> NameNode
> --
>
> Key: HDFS-15100
> URL: https://issues.apache.org/jira/browse/HDFS-15100
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: supportability
> Fix For: 3.3.0
>
>
> When DFSRouter fails to fetch or parse JMX output from NameNode, it prints 
> only the error message. Therefore we had to modify the source code to print 
> the stacktrace of the exception to find the root cause.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15099) [SBN Read] getBlockLocations() should throw ObserverRetryOnActiveException on an attempt to change aTime on ObserverNode

2020-01-09 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-15099:
--
Status: Patch Available  (was: Open)

> [SBN Read] getBlockLocations() should throw ObserverRetryOnActiveException on 
> an attempt to change aTime on ObserverNode
> 
>
> Key: HDFS-15099
> URL: https://issues.apache.org/jira/browse/HDFS-15099
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15099-branch-2.10.001.patch
>
>
> The precision of updating an INode's aTime while executing 
> {{getBlockLocations()}} is 1 hour by default. Updates cannot be handled by 
> ObserverNode, so the call should be redirected to Active NameNode. In order 
> to redirect to active the ObserverNode should through 
> {{ObserverRetryOnActiveException}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15099) [SBN Read] getBlockLocations() should throw ObserverRetryOnActiveException on an attempt to change aTime on ObserverNode

2020-01-09 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-15099:
--
Attachment: HDFS-15099-branch-2.10.001.patch

> [SBN Read] getBlockLocations() should throw ObserverRetryOnActiveException on 
> an attempt to change aTime on ObserverNode
> 
>
> Key: HDFS-15099
> URL: https://issues.apache.org/jira/browse/HDFS-15099
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15099-branch-2.10.001.patch
>
>
> The precision of updating an INode's aTime while executing 
> {{getBlockLocations()}} is 1 hour by default. Updates cannot be handled by 
> ObserverNode, so the call should be redirected to Active NameNode. In order 
> to redirect to active the ObserverNode should through 
> {{ObserverRetryOnActiveException}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15110) HttpFS : post requests are not supported for path "/"

2020-01-09 Thread hemanthboyina (Jira)
hemanthboyina created HDFS-15110:


 Summary: HttpFS :  post requests are not supported for path "/"
 Key: HDFS-15110
 URL: https://issues.apache.org/jira/browse/HDFS-15110
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: hemanthboyina
Assignee: hemanthboyina


POST requests in HttpFS with  path as "/" were not supported .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15083) Add new trash rpc which move the trash (mkdir and the rename) operation to the server side.

2020-01-09 Thread zhuqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012433#comment-17012433
 ] 

zhuqi commented on HDFS-15083:
--

cc [~weichiu]

Thanks for your comment, sorry for my draft patch, the cloud storage should be 
supported, i think so.

I just change the TrashPolicyDefault in order to support the 
DistributedFileSystem trash in the server side quickly for our cluster need, 
for our Router trash need in HDFS-14117  , i think the trash in server side is 
graceful compare the HDFS-14117 , and also can reduce the trash rpc to 50%, 
because of that our hdfs life time system's trash action will lead to heavy 
load to namenode.

If you any advice to push the graceful trash and reduce the trash rpc ?

 

> Add new trash rpc which move the trash (mkdir and the rename) operation to 
> the server side.
> ---
>
> Key: HDFS-15083
> URL: https://issues.apache.org/jira/browse/HDFS-15083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient, namenode, rbf
>Affects Versions: 2.10.0, 3.2.0
>Reporter: zhuqi
>Assignee: zhuqi
>Priority: Major
> Attachments: HDFS-15083.001.patch
>
>
> Now the rbf trash with multi cluster mounted  in 
> [HDFS-14117|https://issues.apache.org/jira/browse/HDFS-14117] , the solution 
> is not graceful。
> If we can move the client side trash (mkdir and rename) to the  server side, 
> we can not only solve the problem gracefully, but also reduce the trash rpc 
> load in server side to about %50 compare to the origin trash which call two 
> times rpc(mkdir and rename).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7765) FSOutputSummer throwing ArrayIndexOutOfBoundsException

2020-01-09 Thread Hongbing Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012431#comment-17012431
 ] 

Hongbing Wang commented on HDFS-7765:
-

[~jeagles] ok. I will improve the code and test these days.

> FSOutputSummer throwing ArrayIndexOutOfBoundsException
> --
>
> Key: HDFS-7765
> URL: https://issues.apache.org/jira/browse/HDFS-7765
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
> Environment: Centos 6, Open JDK 7, Amazon EC2, Accumulo 1.6.2RC4
>Reporter: Keith Turner
>Assignee: Janmejay Singh
>Priority: Major
> Attachments: 
> 0001-PATCH-HDFS-7765-FSOutputSummer-throwing-ArrayIndexOu.patch, 
> HDFS-7765.patch
>
>
> While running an Accumulo test, saw exceptions like the following while 
> trying to write to write ahead log in HDFS. 
> The exception occurrs at 
> [FSOutputSummer.java:76|https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java#L76]
>  which is attempting to update a byte array.
> {noformat}
> 2015-02-06 19:46:49,769 [log.DfsLogger] WARN : Exception syncing 
> java.lang.reflect.InvocationTargetException
> java.lang.ArrayIndexOutOfBoundsException: 4608
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:76)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:50)
> at java.io.DataOutputStream.write(DataOutputStream.java:88)
> at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
> at 
> org.apache.accumulo.tserver.logger.LogFileKey.write(LogFileKey.java:87)
> at org.apache.accumulo.tserver.log.DfsLogger.write(DfsLogger.java:526)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logFileData(DfsLogger.java:540)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logManyTablets(DfsLogger.java:573)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger$6.write(TabletServerLogger.java:373)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.write(TabletServerLogger.java:274)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.logManyTablets(TabletServerLogger.java:365)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.flush(TabletServer.java:1667)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.closeUpdate(TabletServer.java:1754)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.accumulo.trace.instrument.thrift.RpcServerInvocationHandler.invoke(RpcServerInvocationHandler.java:46)
> at 
> org.apache.accumulo.server.util.RpcWrapper$1.invoke(RpcWrapper.java:47)
> at com.sun.proxy.$Proxy22.closeUpdate(Unknown Source)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2370)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2354)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:168)
> at 
> org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:516)
> at 
> org.apache.accumulo.server.util.CustomNonBlockingServer$1.run(CustomNonBlockingServer.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at 
> org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
> at 
> org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
> at java.lang.Thread.run(Thread.java:744)
> 2015-02-06 19:46:49,769 [log.DfsLogger] WARN : Exception syncing 
> java.lang.reflect.InvocationTargetException
> 2015-02-06 19:46:49,772 [log.DfsLogger] ERROR: 
> java.lang.ArrayIndexOutOfBoundsException: 4609
> java.lang.ArrayIndexOutOfBoundsException: 4609
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:76)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:50)
> at java.io.DataOutputStream.write(DataOutputStream.java:88)
> at java.io.DataOutputStrea

[jira] [Created] (HDFS-15109) RBF: Plugin interface to enable delegation of Router

2020-01-09 Thread zhuqi (Jira)
zhuqi created HDFS-15109:


 Summary: RBF: Plugin interface to enable delegation of Router 
 Key: HDFS-15109
 URL: https://issues.apache.org/jira/browse/HDFS-15109
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: zhuqi


If we can  support plugin interface in router side, may be we can Implement 
permission control and other important need in router side, and the control is 
Independent of the namenode side default control.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15105) Standby NN exits and fails to restart due to edit log corruption

2020-01-09 Thread Tao Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated HDFS-15105:

Description: 
We found a issue that Standby NN exited and failed to restart until we resolved 
the edit log corruption.
 Error logs:
{noformat}
java.io.IOException: Mismatched block IDs or generation stamps, attempting to 
replace block blk_74288647857_73526148211 with blk_74288647857_73526377369 as 
block # 15/17 of /maindump/mainv10/dump_online/lasttable/20200105015500/part-319
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1019)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:431)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:234)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:885)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:866)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:234)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:342)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:295)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:312)
        at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:455)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:308)
{noformat}

Related edit log transactions of the same file:
{noformat}
1. TXID=444341628498  time=1578251449632
OP_UPDATE_BLOCKS
blocks: ... blk_74288647857_73526148211   blk_74454090866_73526215536

2. TXID=444342382774   time=1578251520740
OP_REASSIGN_LEASE

3. TXID=444342401216  time=1578251522779
OP_CLOSE
blocks: ... blk_74288647857_73526377369   blk_74454090866_73526374095

4. TXID=444342401394
OP_SET_GENSTAMP_V2 
generate stamp: 73526377369

5. TXID=444342401395  time=1578251522835
OP_TRUNCATE

6. TXID=444342402176  time=1578251523246
OP_CLOSE
blocks: ... blk_74288647857_73526377369 
{noformat}

According to the edit logs, it's wield to see that stamp(73526377369) was 
generated in transaction 4 but already used in transaction 3, and for 
transaction 3 there should be only the last block changed but in fact the last 
two blocks are both changed.

This problem might be produced in a complex scenario that truncate operation 
immediately followed the recover-lease operation for the same file. A 
suspicious point is that between creation and being written for transaction 3, 
stamp of the second last block was updated when committing block 
synchronization caused by the truncate operation.
Related calling stack is as follows: 
{noformat}
NameNodeRpcServer#commitBlockSynchronization
  FSNamesystem#commitBlockSynchronization
    // update last block
    if(!copyTruncate) {
      storedBlock.setGenerationStamp(newgenerationstamp); //updated the stamp 
of the second last block in transaction 3 before being written
      storedBlock.setNumBytes(newlength);
    }
{noformat}

Any comments are welcome. Thanks.

  was:
We found a issue that Standby NN exited and failed to restart until we resolved 
the edit log corruption.
 Error logs:
{noformat}
java.io.IOException: Mismatched block IDs or generation stamps, attempting to 
replace block blk_74288647857_73526148211 with blk_74288647857_73526377369 as 
block # 15/17 of /maindump/mainv10/dump_online/lasttable/20200105015500/part-319
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1019)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:431)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:234)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:885)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:866)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:234)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:342)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:295)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:312)
        at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFata

[jira] [Commented] (HDFS-15107) dfs.client.server-defaults.validity.period.ms to support time units

2020-01-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012417#comment-17012417
 ] 

Hudson commented on HDFS-15107:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17841 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17841/])
HDFS-15107. dfs.client.server-defaults.validity.period.ms to support 
(ayushsaxena: rev b32757c616cc89c6df2312edd1aa05b7dab6ee6c)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


> dfs.client.server-defaults.validity.period.ms to support time units
> ---
>
> Key: HDFS-15107
> URL: https://issues.apache.org/jira/browse/HDFS-15107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15107-01.patch
>
>
> Add support for time units for dfs.client.server-defaults.validity.period.ms



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15107) dfs.client.server-defaults.validity.period.ms to support time units

2020-01-09 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012415#comment-17012415
 ] 

Ayush Saxena commented on HDFS-15107:
-

Committed to trunk.
Thanx [~elgoiri] for the review!!!

> dfs.client.server-defaults.validity.period.ms to support time units
> ---
>
> Key: HDFS-15107
> URL: https://issues.apache.org/jira/browse/HDFS-15107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15107-01.patch
>
>
> Add support for time units for dfs.client.server-defaults.validity.period.ms



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15107) dfs.client.server-defaults.validity.period.ms to support time units

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15107:

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> dfs.client.server-defaults.validity.period.ms to support time units
> ---
>
> Key: HDFS-15107
> URL: https://issues.apache.org/jira/browse/HDFS-15107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15107-01.patch
>
>
> Add support for time units for dfs.client.server-defaults.validity.period.ms



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14578) AvailableSpaceBlockPlacementPolicy always prefers local node

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14578:

Attachment: HDFS-14578-07.patch

> AvailableSpaceBlockPlacementPolicy always prefers local node
> 
>
> Key: HDFS-14578
> URL: https://issues.apache.org/jira/browse/HDFS-14578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14578-02.patch, HDFS-14578-03.patch, 
> HDFS-14578-04.patch, HDFS-14578-05.patch, HDFS-14578-06.patch, 
> HDFS-14578-07.patch, HDFS-14578-WIP-01.patch, HDFS-14758-01.patch
>
>
> It looks like AvailableSpaceBlockPlacementPolicy prefers the local disk just 
> like BlockPlacementPolicyDefault does.
>  
> As Yongjun mentioned in 
> [HDFS-8131|https://issues.apache.org/jira/browse/HDFS-8131?focusedCommentId=16558739&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16558739],
>  
> {quote}Class AvailableSpaceBlockPlacementPolicy extends 
> BlockPlacementPolicyDefault. But it doesn't change the behavior of choosing 
> the first node in BlockPlacementPolicyDefault, so even with this new feature, 
> the local DN is always chosen as the first DN (of course when it is not 
> excluded), and the new feature only changes the selection of the rest of the 
> two DNs.
> {quote}
> I'm filing this Jira as I was grooming Cloudera's internal Jira and found this 
> unreported issue. We do have a customer hitting this problem. I don't have a 
> fix, but thought it would be beneficial to report it to Apache Jira.
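
To illustrate the reported behavior outside of Hadoop, here is a small 
self-contained toy (none of this is Hadoop code; the nodes and sizes are made 
up) contrasting the unconditional local-first choice with a space-aware first 
choice:
{noformat}
import java.util.Random;

/**
 * Toy contrast: the default policy always places the first replica locally,
 * so a space-based policy that only reorders the remaining candidates never
 * relieves a nearly full local disk.
 */
public class FirstReplicaChoice {
  public static void main(String[] args) {
    long localFree = 10L << 30;    // 10 GiB free on the local DN
    long remoteFree = 500L << 30;  // 500 GiB free on a remote DN

    // Default behavior: the local node is chosen unconditionally.
    System.out.println("default first replica -> local");

    // A space-aware alternative could choose proportionally to free space.
    double pLocal = (double) localFree / (localFree + remoteFree);
    boolean local = new Random().nextDouble() < pLocal;
    System.out.println("space-aware first replica -> "
        + (local ? "local" : "remote"));
  }
}
{noformat}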



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15107) dfs.client.server-defaults.validity.period.ms to support time units

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012342#comment-17012342
 ] 

Hadoop QA commented on HDFS-15107:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 42m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
3s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}130m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
9s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}273m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.TestBlockTokenWrappingQOP |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.security.TestPermission |
|   | hadoop.tools.TestJMXGet |
|   | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks |
|   | hadoop.hd

[jira] [Commented] (HDFS-15102) HttpFS: put requests are not supported for path "/"

2020-01-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012339#comment-17012339
 ] 

Hudson commented on HDFS-15102:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17840 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17840/])
HDFS-15102. HttpFS: put requests are not supported for path "/". (tasanuma: rev 
782c0556fb413d54c9d028ddc11d67cdc32585ff)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java


> HttpFS: put requests are not supported for path "/"
> ---
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15102.001.patch
>
>
> PUT requests in HttpFS with path "/" were not supported.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15102) HttpFS: put requests are not supported for path "/"

2020-01-09 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012335#comment-17012335
 ] 

Takanobu Asanuma commented on HDFS-15102:
-

BTW, I think we also need postRoot, e.g. when executing UNSETECPOLICY with the 
"/" path. Could you also implement it in another jira, [~hemanthboyina]?

> HttpFS: put requests are not supported for path "/"
> ---
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15102.001.patch
>
>
> PUT requests in HttpFS with path "/" were not supported.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15102) HttpFS: put requests are not supported for path "/"

2020-01-09 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-15102:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for your contribution, [~hemanthboyina]. Thanks for 
your review, [~elgoiri].

> HttpFS: put requests are not supported for path "/"
> ---
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15102.001.patch
>
>
> PUT requests in HttpFS with path "/" were not supported.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15102) HttpFS: put requests are not supported for path "/"

2020-01-09 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012332#comment-17012332
 ] 

Takanobu Asanuma edited comment on HDFS-15102 at 1/10/20 12:48 AM:
---

+1.


was (Author: tasanuma0829):
+1. I'd like to make this jira a subtask of HDFS-15064.

> HttpFS: put requests are not supported for path "/"
> ---
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15102.001.patch
>
>
> PUT requests in HttpFS with path "/" were not supported.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15102) HttpFS: put requests are not supported for path "/"

2020-01-09 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012332#comment-17012332
 ] 

Takanobu Asanuma commented on HDFS-15102:
-

+1. I'd like to make this jira a subtask of HDFS-15064.

> HttpFS: put requests are not supported for path "/"
> ---
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15102.001.patch
>
>
> PUT requests in HttpFS with path "/" were not supported.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15071) Add DataNode Read and Write throughput percentile metrics

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012290#comment-17012290
 ] 

Hadoop QA commented on HDFS-15071:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 622 unchanged - 0 fixed = 625 total (was 622) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}138m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}208m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15071 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990452/HDFS-15071.003.patch |
| Optional Tests |  dupname  asf

[jira] [Commented] (HDFS-14578) AvailableSpaceBlockPlacementPolicy always prefers local node

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012249#comment-17012249
 ] 

Hadoop QA commented on HDFS-14578:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 459 unchanged - 0 fixed = 466 total (was 459) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-14578 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990450/HDFS-14578-06.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux e8cd2a86f7e9 4.15.0-70-generic #79-Ubuntu SMP Tue Nov 12 
10:36:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 93233a7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28631/artifact/out/diff-checks

[jira] [Assigned] (HDFS-15099) [SBN Read] getBlockLocations() should throw ObserverRetryOnActiveException on an attempt to change aTime on ObserverNode

2020-01-09 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko reassigned HDFS-15099:
--

Assignee: Chen Liang

> [SBN Read] getBlockLocations() should throw ObserverRetryOnActiveException on 
> an attempt to change aTime on ObserverNode
> 
>
> Key: HDFS-15099
> URL: https://issues.apache.org/jira/browse/HDFS-15099
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
>
> The precision of updating an INode's aTime while executing 
> {{getBlockLocations()}} is 1 hour by default. Updates cannot be handled by 
> the ObserverNode, so the call should be redirected to the Active NameNode. In 
> order to redirect to the Active, the ObserverNode should throw 
> {{ObserverRetryOnActiveException}}.
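
A minimal, self-contained sketch of the idea (a toy model, not the actual 
patch; the real {{ObserverRetryOnActiveException}} lives in Hadoop's ipc 
package, and the names below are stand-ins):
{noformat}
import java.util.concurrent.TimeUnit;

/** Toy model of the proposed Observer behavior for getBlockLocations(). */
public class ObserverATimeCheck {
  /** Local stand-in for org.apache.hadoop.ipc.ObserverRetryOnActiveException. */
  static class ObserverRetryOnActiveException extends Exception {
    ObserverRetryOnActiveException(String msg) { super(msg); }
  }

  // Default precision: aTime is persisted only when more than 1h stale.
  static final long PRECISION_MS = TimeUnit.HOURS.toMillis(1);

  static void getBlockLocations(long aTimeMs, boolean isObserver)
      throws ObserverRetryOnActiveException {
    long now = System.currentTimeMillis();
    if (isObserver && now - aTimeMs > PRECISION_MS) {
      // The Observer cannot log the aTime edit; bounce the client to Active.
      throw new ObserverRetryOnActiveException("retry on Active NameNode");
    }
    System.out.println("served from Observer, no aTime update needed");
  }

  public static void main(String[] args) throws Exception {
    getBlockLocations(System.currentTimeMillis(), true);  // fresh aTime: served
    try {
      getBlockLocations(System.currentTimeMillis() - 2 * PRECISION_MS, true);
    } catch (ObserverRetryOnActiveException e) {
      System.out.println("stale aTime: " + e.getMessage());
    }
  }
}
{noformat}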



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15102) HttpFS: put requests are not supported for path "/"

2020-01-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012224#comment-17012224
 ] 

Íñigo Goiri commented on HDFS-15102:


OK, so we are just doing the same as GET does.
+1

> HttpFS: put requests are not supported for path "/"
> ---
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15102.001.patch
>
>
> PUT requests in HttpFS with path "/" were not supported.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012223#comment-17012223
 ] 

Íñigo Goiri commented on HDFS-15108:


Can we do a better cleanup of the cache and just remove what we need instead of 
everything?
Otherwise, I'm fine with this approach even though it is a little invasive.

Let's create a method to convert a String to an InetSocketAddress.
In there we can cache rpcAddr.indexOf(":") and probably do a split.
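
A sketch of the suggested helper (hypothetical name and placement; it assumes 
plain "host:port" strings, matching the rpcAddr values being parsed here):
{noformat}
import java.net.InetSocketAddress;

/** Hypothetical helper along the lines suggested above. */
public final class AddressUtil {
  private AddressUtil() {
  }

  // Converts "host:port" to an InetSocketAddress, scanning for ':' once.
  public static InetSocketAddress toInetSocketAddress(String rpcAddr) {
    int sep = rpcAddr.indexOf(':');
    if (sep < 0) {
      throw new IllegalArgumentException("expected host:port, got " + rpcAddr);
    }
    String host = rpcAddr.substring(0, sep);
    int port = Integer.parseInt(rpcAddr.substring(sep + 1));
    return new InetSocketAddress(host, port);
  }
}
{noformat}
For example, {{AddressUtil.toInetSocketAddress("nn1.example.com:8020")}} (a 
made-up host) builds the address with a single scan of the string.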

> RBF: MembershipNamenodeResolver should invalidate cache incase of active 
> namenode update
> 
>
> Key: HDFS-15108
> URL: https://issues.apache.org/jira/browse/HDFS-15108
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15108-01.patch, HDFS-15108-02.patch
>
>
> If a failover happens, {{namenodeResolver.updateActiveNamenode(nsId, 
> address);}} is called, but this doesn't invalidate the cache, so the correct 
> active is not fetched the next time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012223#comment-17012223
 ] 

Íñigo Goiri edited comment on HDFS-15108 at 1/9/20 8:51 PM:


Can we do a better cleanup of the cache and just remove what we need instead of 
everything?
Otherwise, I'm fine with this approach even though it is a little invasive.

Let's create a method to convert a String to an InetSocketAddress.
In there we can cache rpcAddr.indexOf(":") and probably do a split.

Also, for the second 
{{namenodeResolver.getNamenodesForNameserviceId(NAMESERVICES[0]).get(0);}} we 
should create a new FederationNamenodeContext (namenode1).


was (Author: elgoiri):
Can we do a better cleanup of the cache and just remove what we need instead of 
everything?
Otherwise, I'm fine with this approach even though it is a little invasive.

Let's create a method to convert a String to an InetSocketAddress.
In there we can cache rpcAddr.indexOf(":") and probably do a split.

> RBF: MembershipNamenodeResolver should invalidate cache incase of active 
> namenode update
> 
>
> Key: HDFS-15108
> URL: https://issues.apache.org/jira/browse/HDFS-15108
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15108-01.patch, HDFS-15108-02.patch
>
>
> If a failover happens, {{namenodeResolver.updateActiveNamenode(nsId, 
> address);}} is called, but this doesn't invalidate the cache, so the correct 
> active is not fetched the next time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15107) dfs.client.server-defaults.validity.period.ms to support time units

2020-01-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012219#comment-17012219
 ] 

Íñigo Goiri commented on HDFS-15107:


I think the failed tests are unrelated.
+1 on  [^HDFS-15107-01.patch].

> dfs.client.server-defaults.validity.period.ms to support time units
> ---
>
> Key: HDFS-15107
> URL: https://issues.apache.org/jira/browse/HDFS-15107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15107-01.patch
>
>
> Add support for time units for dfs.client.server-defaults.validity.period.ms



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15107) dfs.client.server-defaults.validity.period.ms to support time units

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012183#comment-17012183
 ] 

Hadoop QA commented on HDFS-15107:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}171m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
41s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
41s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}256m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestModTime |
|   | hadoop.hdfs.server.namenode.TestReencryption |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.server.namenode.TestStripedINodeFile |
|   | hadoop.hdfs.server.name

[jira] [Commented] (HDFS-15071) Add DataNode Read and Write throughput percentile metrics

2020-01-09 Thread Danny Becker (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012176#comment-17012176
 ] 

Danny Becker commented on HDFS-15071:
-

v003 should fix all checkstyle errors except for two "line longer than 80 
characters" errors. Those two follow the convention of the surrounding code.

> Add DataNode Read and Write throughput percentile metrics
> -
>
> Key: HDFS-15071
> URL: https://issues.apache.org/jira/browse/HDFS-15071
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, metrics
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Minor
> Attachments: HDFS-15071.000.patch, HDFS-15071.001.patch, 
> HDFS-15071.002.patch, HDFS-15071.003.patch
>
>
> Add DataNode throughput metrics for read and write.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15071) Add DataNode Read and Write throughput percentile metrics

2020-01-09 Thread Danny Becker (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-15071:

Attachment: HDFS-15071.003.patch

> Add DataNode Read and Write throughput percentile metrics
> -
>
> Key: HDFS-15071
> URL: https://issues.apache.org/jira/browse/HDFS-15071
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, metrics
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Minor
> Attachments: HDFS-15071.000.patch, HDFS-15071.001.patch, 
> HDFS-15071.002.patch, HDFS-15071.003.patch
>
>
> Add DataNode throughput metrics for read and write.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012172#comment-17012172
 ] 

Hadoop QA commented on HDFS-15108:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
45s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990446/HDFS-15108-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 82eed75c5172 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 93233a7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28630/testReport/ |
| Max. process+thread count | 2665 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28630/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: MembershipNamenodeResolver should invalidate cache incase of active 
> namenode update
> --

[jira] [Commented] (HDFS-14578) AvailableSpaceBlockPlacementPolicy always prefers local node

2020-01-09 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012154#comment-17012154
 ] 

Ayush Saxena commented on HDFS-14578:
-

Thanx [~vinayakumarb] for the review. Changed the test as per the suggestion. 
The checkstyle warning is due to the line length of the configuration name and 
should be tolerable.

Pls Review!!!

> AvailableSpaceBlockPlacementPolicy always prefers local node
> 
>
> Key: HDFS-14578
> URL: https://issues.apache.org/jira/browse/HDFS-14578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14578-02.patch, HDFS-14578-03.patch, 
> HDFS-14578-04.patch, HDFS-14578-05.patch, HDFS-14578-06.patch, 
> HDFS-14578-WIP-01.patch, HDFS-14758-01.patch
>
>
> It looks like AvailableSpaceBlockPlacementPolicy prefers the local disk just 
> like BlockPlacementPolicyDefault does.
>  
> As Yongjun mentioned in 
> [HDFS-8131|https://issues.apache.org/jira/browse/HDFS-8131?focusedCommentId=16558739&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16558739],
>  
> {quote}Class AvailableSpaceBlockPlacementPolicy extends 
> BlockPlacementPolicyDefault. But it doesn't change the behavior of choosing 
> the first node in BlockPlacementPolicyDefault, so even with this new feature, 
> the local DN is always chosen as the first DN (of course when it is not 
> excluded), and the new feature only changes the selection of the rest of the 
> two DNs.
> {quote}
> I'm filing this Jira as I was grooming Cloudera's internal Jira and found this 
> unreported issue. We do have a customer hitting this problem. I don't have a 
> fix, but thought it would be beneficial to report it to Apache Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14578) AvailableSpaceBlockPlacementPolicy always prefers local node

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14578:

Attachment: HDFS-14578-06.patch

> AvailableSpaceBlockPlacementPolicy always prefers local node
> 
>
> Key: HDFS-14578
> URL: https://issues.apache.org/jira/browse/HDFS-14578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14578-02.patch, HDFS-14578-03.patch, 
> HDFS-14578-04.patch, HDFS-14578-05.patch, HDFS-14578-06.patch, 
> HDFS-14578-WIP-01.patch, HDFS-14758-01.patch
>
>
> It looks like AvailableSpaceBlockPlacementPolicy prefers the local disk just 
> like BlockPlacementPolicyDefault does.
>  
> As Yongjun mentioned in 
> [HDFS-8131|https://issues.apache.org/jira/browse/HDFS-8131?focusedCommentId=16558739&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16558739],
>  
> {quote}Class AvailableSpaceBlockPlacementPolicy extends 
> BlockPlacementPolicyDefault. But it doesn't change the behavior of choosing 
> the first node in BlockPlacementPolicyDefault, so even with this new feature, 
> the local DN is always chosen as the first DN (of course when it is not 
> excluded), and the new feature only changes the selection of the rest of the 
> two DNs.
> {quote}
> I'm filing this Jira as I was grooming Cloudera's internal Jira and found this 
> unreported issue. We do have a customer hitting this problem. I don't have a 
> fix, but thought it would be beneficial to report it to Apache Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14578) AvailableSpaceBlockPlacementPolicy always prefers local node

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14578:

Attachment: HDFS-14578-06.patch

> AvailableSpaceBlockPlacementPolicy always prefers local node
> 
>
> Key: HDFS-14578
> URL: https://issues.apache.org/jira/browse/HDFS-14578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14578-02.patch, HDFS-14578-03.patch, 
> HDFS-14578-04.patch, HDFS-14578-05.patch, HDFS-14578-WIP-01.patch, 
> HDFS-14758-01.patch
>
>
> It looks like AvailableSpaceBlockPlacementPolicy prefers the local disk just 
> like BlockPlacementPolicyDefault does.
>  
> As Yongjun mentioned in 
> [HDFS-8131|https://issues.apache.org/jira/browse/HDFS-8131?focusedCommentId=16558739&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16558739],
>  
> {quote}Class AvailableSpaceBlockPlacementPolicy extends 
> BlockPlacementPolicyDefault. But it doesn't change the behavior of choosing 
> the first node in BlockPlacementPolicyDefault, so even with this new feature, 
> the local DN is always chosen as the first DN (of course when it is not 
> excluded), and the new feature only changes the selection of the rest of the 
> two DNs.
> {quote}
> I'm filing this Jira as I was grooming Cloudera's internal Jira and found this 
> unreported issue. We do have a customer hitting this problem. I don't have a 
> fix, but thought it would be beneficial to report it to Apache Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14578) AvailableSpaceBlockPlacementPolicy always prefers local node

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14578:

Attachment: (was: HDFS-14578-06.patch)

> AvailableSpaceBlockPlacementPolicy always prefers local node
> 
>
> Key: HDFS-14578
> URL: https://issues.apache.org/jira/browse/HDFS-14578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14578-02.patch, HDFS-14578-03.patch, 
> HDFS-14578-04.patch, HDFS-14578-05.patch, HDFS-14578-WIP-01.patch, 
> HDFS-14758-01.patch
>
>
> It looks like AvailableSpaceBlockPlacementPolicy prefers the local disk just 
> like BlockPlacementPolicyDefault does.
>  
> As Yongjun mentioned in 
> [HDFS-8131|https://issues.apache.org/jira/browse/HDFS-8131?focusedCommentId=16558739&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16558739],
>  
> {quote}Class AvailableSpaceBlockPlacementPolicy extends 
> BlockPlacementPolicyDefault. But it doesn't change the behavior of choosing 
> the first node in BlockPlacementPolicyDefault, so even with this new feature, 
> the local DN is always chosen as the first DN (of course when it is not 
> excluded), and the new feature only changes the selection of the rest of the 
> two DNs.
> {quote}
> I'm filing this Jira as I was grooming Cloudera's internal Jira and found this 
> unreported issue. We do have a customer hitting this problem. I don't have a 
> fix, but thought it would be beneficial to report it to Apache Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15107) dfs.client.server-defaults.validity.period.ms to support time units

2020-01-09 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012133#comment-17012133
 ] 

Ayush Saxena commented on HDFS-15107:
-

Thanx [~elgoiri] for the review.
I followed the same approach as in HDFS-9847. No default value was changed 
there, so I kept it like that. To my belief it is compatible: if no unit is 
specified, the value is treated as ms.
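
For illustration, this is the usual Hadoop pattern for unit-suffixed durations 
({{Configuration.getTimeDuration}} is the existing API; the property values 
below are made up):
{noformat}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class ValidityPeriodExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);

    // With a unit suffix, the value is parsed accordingly...
    conf.set("dfs.client.server-defaults.validity.period.ms", "5m");
    System.out.println(conf.getTimeDuration(
        "dfs.client.server-defaults.validity.period.ms",
        TimeUnit.HOURS.toMillis(1), TimeUnit.MILLISECONDS));  // 300000

    // ...and a bare number keeps the old behavior: plain milliseconds.
    conf.set("dfs.client.server-defaults.validity.period.ms", "300000");
    System.out.println(conf.getTimeDuration(
        "dfs.client.server-defaults.validity.period.ms",
        TimeUnit.HOURS.toMillis(1), TimeUnit.MILLISECONDS));  // 300000
  }
}
{noformat}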

> dfs.client.server-defaults.validity.period.ms to support time units
> ---
>
> Key: HDFS-15107
> URL: https://issues.apache.org/jira/browse/HDFS-15107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15107-01.patch
>
>
> Add support for time units for dfs.client.server-defaults.validity.period.ms



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012130#comment-17012130
 ] 

Ayush Saxena commented on HDFS-15108:
-

In {{RouterRpcClient#invokeMethod}}, at L425, if a failover occurs and a 
namenode other than the known ACTIVE serves the request, that namenode is 
updated as ACTIVE by {{namenodeResolver.updateActiveNamenode(nsId, address);}}. 
But in subsequent calls, when {{getNamenodesForNameserviceId}} is called again, 
the wrong ACTIVE is returned and this procedure repeats, because the method 
reads its entry from the cache via {{cacheNS.get(nsId);}}.
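
A minimal, self-contained sketch of the direction this points at (a toy model, 
not the RBF code; only the {{cacheNS}} name comes from the class itself):
{noformat}
import java.net.InetSocketAddress;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy model of invalidating the per-nameservice cache on failover. */
public class ResolverCacheSketch {
  private final Map<String, List<String>> cacheNS = new ConcurrentHashMap<>();

  List<String> getNamenodesForNameserviceId(String nsId) {
    // Rebuild the ordered namenode list from the (simulated) store on a miss.
    return cacheNS.computeIfAbsent(nsId, this::loadFromStateStore);
  }

  void updateActiveNamenode(String nsId, InetSocketAddress address) {
    // ... record the new active in the (simulated) store ...
    cacheNS.remove(nsId);  // targeted eviction so the next read reloads
  }

  private List<String> loadFromStateStore(String nsId) {
    return List.of("nn-active", "nn-standby");  // placeholder ordering
  }
}
{noformat}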

> RBF: MembershipNamenodeResolver should invalidate cache incase of active 
> namenode update
> 
>
> Key: HDFS-15108
> URL: https://issues.apache.org/jira/browse/HDFS-15108
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15108-01.patch, HDFS-15108-02.patch
>
>
> If a failover happens, {{namenodeResolver.updateActiveNamenode(nsId, 
> address);}} is called, but this doesn't invalidate the cache, so the correct 
> active is not fetched the next time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15102) HttpFS: put requests are not supported for path "/"

2020-01-09 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012129#comment-17012129
 ] 

hemanthboyina commented on HDFS-15102:
--

I faced an issue while implementing enableECPolicy for HttpFS: on an 
enableECPolicy request, HttpFSServer threw an Internal_Server_Error.

As there was no path parameter for the enableECPolicy method, I passed the path 
as "/" in the HttpFS request, and got the Internal_Server_Error.

After testing some PUT request APIs, I found that PUT requests with path "/" or 
"" were not supported in HttpFS, though they were supported for GET requests.
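
A sketch of a plausible shape for the fix, assuming it mirrors how HttpFSServer 
already delegates the root path for GET (the annotations and parameter list 
here are assumptions, not the actual patch):
{noformat}
// Hypothetical delegation for the root path, modeled on the GET handling:
// a PUT on "/" is routed to the regular handler instead of failing to match.
@PUT
@Path("/")
@Produces(MediaType.APPLICATION_JSON)
public Response putRoot(InputStream is, @Context UriInfo uriInfo,
    @QueryParam(OperationParam.NAME) OperationParam op,
    @Context Parameters params, @Context HttpServletRequest request)
    throws IOException, FileSystemAccessException {
  return put(is, uriInfo, "/", op, params, request);
}
{noformat}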

> HttpFS: put requests are not supported for path "/"
> ---
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15102.001.patch
>
>
> PUT requests in HttpFS with path "/" were not supported.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15108:

Attachment: HDFS-15108-02.patch

> RBF: MembershipNamenodeResolver should invalidate cache incase of active 
> namenode update
> 
>
> Key: HDFS-15108
> URL: https://issues.apache.org/jira/browse/HDFS-15108
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15108-01.patch, HDFS-15108-02.patch
>
>
> If a failover happens, {{namenodeResolver.updateActiveNamenode(nsId, 
> address);}} is called, but this doesn't invalidate the cache, so the correct 
> active is not fetched the next time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15108:

Attachment: HDFS-15108-02.patch

> RBF: MembershipNamenodeResolver should invalidate cache incase of active 
> namenode update
> 
>
> Key: HDFS-15108
> URL: https://issues.apache.org/jira/browse/HDFS-15108
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15108-01.patch
>
>
> If a failover happens, {{namenodeResolver.updateActiveNamenode(nsId, 
> address);}} is called, but this doesn't invalidate the cache, so the correct 
> active is not fetched the next time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15108:

Attachment: (was: HDFS-15108-02.patch)

> RBF: MembershipNamenodeResolver should invalidate cache incase of active 
> namenode update
> 
>
> Key: HDFS-15108
> URL: https://issues.apache.org/jira/browse/HDFS-15108
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15108-01.patch
>
>
> If a failover happens, {{namenodeResolver.updateActiveNamenode(nsId, 
> address);}} is called, but this doesn't invalidate the cache, so the correct 
> active is not fetched the next time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14578) AvailableSpaceBlockPlacementPolicy always prefers local node

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012106#comment-17012106
 ] 

Hadoop QA commented on HDFS-14578:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 459 unchanged - 0 fixed = 466 total (was 459) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-14578 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990429/HDFS-14578-05.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 1c5a1deb9adb 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a40dc9e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28627/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.o

[jira] [Updated] (HDFS-15102) HttpFS: put requests are not supported for path "/"

2020-01-09 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-15102:
-
Description: 
PUT requests in HttpFS with the path "/" are not supported.

 

> HttpFS: put requests are not supported for path "/"
> ---
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15102.001.patch
>
>
> PUT requests in HttpFS with the path "/" are not supported.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15102) HttpFS: put requests are not supported for path "/"

2020-01-09 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-15102:
-
Summary: HttpFS: put requests are not supported for path "/"  (was: HttpFS: 
put operation is not supported with path "/")

> HttpFS: put requests are not supported for path "/"
> ---
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15102.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15102) HttpFS: put operation is not supported with path "/"

2020-01-09 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-15102:
---
Summary: HttpFS: put operation is not supported with path "/"  (was: HttpFS 
: put operation is not supported with path "/")

> HttpFS: put operation is not supported with path "/"
> 
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15102.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15102) HttpFS: put operation is not supported with path "/"

2020-01-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012101#comment-17012101
 ] 

Íñigo Goiri commented on HDFS-15102:


In what case have you seen issues?
Can you update the description?

> HttpFS: put operation is not supported with path "/"
> 
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15102.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15107) dfs.client.server-defaults.validity.period.ms to support time units

2020-01-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012088#comment-17012088
 ] 

Íñigo Goiri commented on HDFS-15107:


Do we want to change the value in hdfs-default.xml to be 2s?
BTW, just double-checking: I'm guessing this is backwards compatible, right?
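
For what it's worth, backward compatibility should hold because 
{{Configuration.getTimeDuration}} treats a bare number as being in the default 
unit. A quick sketch (the default value below is a placeholder; the real one 
lives in HdfsClientConfigKeys):

{code}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class ValidityPeriodSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // "2000" still parses as 2000 ms; "2s" now also parses as 2000 ms.
    long validityMs = conf.getTimeDuration(
        "dfs.client.server-defaults.validity.period.ms",
        3600000L, // placeholder default
        TimeUnit.MILLISECONDS);
    System.out.println(validityMs);
  }
}
{code}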

> dfs.client.server-defaults.validity.period.ms to support time units
> ---
>
> Key: HDFS-15107
> URL: https://issues.apache.org/jira/browse/HDFS-15107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15107-01.patch
>
>
> Add support for time units for dfs.client.server-defaults.validity.period.ms



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012084#comment-17012084
 ] 

Íñigo Goiri commented on HDFS-15108:


Thanks for bringing this up. Can you give a little more context on the effect 
you saw?

A minor comment: I would avoid changing {{FederationNamenodeServiceState.STANDBY}} 
to plain {{STANDBY}} in this JIRA.
Given that we have both {{HAServiceState.STANDBY}} and 
{{FederationNamenodeServiceState.STANDBY}}, I prefer stating them fully.

This is hard to read:
{code}
String rpcAddr = namenode.getRpcAddress();
int port = Integer.parseInt(
    rpcAddr.substring(rpcAddr.indexOf(":") + 1, rpcAddr.length()));
InetSocketAddress inetAddr = new InetSocketAddress(
    rpcAddr.substring(0, rpcAddr.indexOf(':')), port);
{code}

Can we extract this into a helper with a comment stating the high-level idea?
I really don't get what this is doing, as you will probably end up with just a 
String-to-InetSocketAddress conversion.
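
Something like the following is what I would expect it to collapse into 
({{NetUtils}} is already used across the router code; this assumes 
{{getRpcAddress()}} returns a host:port string as in the snippet above):

{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.net.NetUtils;

// One call instead of the manual substring/parseInt dance.
InetSocketAddress inetAddr = NetUtils.createSocketAddr(namenode.getRpcAddress());
{code}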

> RBF: MembershipNamenodeResolver should invalidate cache incase of active 
> namenode update
> 
>
> Key: HDFS-15108
> URL: https://issues.apache.org/jira/browse/HDFS-15108
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15108-01.patch
>
>
> If a failover happens, {{namenodeResolver.updateActiveNamenode(nsId, 
> address);}} is called, but this doesn't invalidate the cache, so the correct 
> active is not fetched the next time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15078) RBF: Should check connection channel before sending rpc to namenode

2020-01-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012082#comment-17012082
 ] 

Íñigo Goiri commented on HDFS-15078:


Just to summarize, I agree with [~ayushtkn] that the final solution would be to 
modify the connection and carry the caller id, etc.
For now, I suggest that in this JIRA we basically catch this exception, show it 
in a friendlier way, and just clean up after it.
I'm not sure how easy it is to catch this ClosedChannelException, though.
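
Roughly the shape I have in mind, as a sketch only ({{call}} and {{LOG}} stand 
in for whatever the real hook point in the router turns out to be):

{code}
try {
  call.sendResponse();
} catch (java.nio.channels.ClosedChannelException cce) {
  // The client already closed the connection; a full stack trace adds no
  // information, so log concisely and release per-call resources.
  LOG.info("Client closed connection before the response to {} was sent;"
      + " dropping response", call);
}
{code}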

> RBF: Should check connection channel before sending rpc to namenode
> ---
>
> Key: HDFS-15078
> URL: https://issues.apache.org/jira/browse/HDFS-15078
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-15078.001.patch, HDFS-15078.002.patch
>
>
> dfsrouter logs show that
> {quote}
> 2019-12-20 04:11:26,724 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 6400 on , call org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
> 10.83.164.11:56908 Call#2 Retry#0: output error
> 2019-12-20 04:11:26,724 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 125 on  caught an exception
> java.nio.channels.ClosedChannelException
> at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
> at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2731)
> at org.apache.hadoop.ipc.Server.access$2100(Server.java:134)
> at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1089)
> at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1161)
> at 
> org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2109)
> at 
> org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1229)
> at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:631)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2245)
> {quote}
> Maybe checking the connection between the client and the router before 
> sending the rpc to the namenode is better.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15100) RBF: Print stacktrace when DFSRouter fails to fetch/parse JMX output from NameNode

2020-01-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012072#comment-17012072
 ] 

Íñigo Goiri commented on HDFS-15100:


A little verbose, but I guess that's what we need. Let me comment in the PR.

> RBF: Print stacktrace when DFSRouter fails to fetch/parse JMX output from 
> NameNode
> --
>
> Key: HDFS-15100
> URL: https://issues.apache.org/jira/browse/HDFS-15100
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: supportability
>
> When DFSRouter fails to fetch or parse JMX output from NameNode, it prints 
> only the error message. Therefore we had to modify the source code to print 
> the stacktrace of the exception to find the root cause.
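
The fix pattern itself is a one-liner; a sketch, assuming SLF4J-style logging as 
used elsewhere in the router (the message text and variable names are 
illustrative):

{code}
// Before: only the message is printed, the stacktrace is lost.
LOG.error("Cannot parse JMX output from NameNode {}: {}", nnAddress, e.getMessage());
// After: pass the exception as the last argument so the full stacktrace is kept.
LOG.error("Cannot parse JMX output from NameNode {}", nnAddress, e);
{code}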



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012058#comment-17012058
 ] 

Hadoop QA commented on HDFS-15108:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
9s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990433/HDFS-15108-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5f821955c3c6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a40dc9e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28629/testReport/ |
| Max. process+thread count | 3110 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28629/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: MembershipNamenodeResolver should invalidate cache incase of active 
> namenode update
> --

[jira] [Commented] (HDFS-15106) Remove unused code.

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012012#comment-17012012
 ] 

Hadoop QA commented on HDFS-15106:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLogRace |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15106 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990423/HDFS-15106.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fe3344adca3a 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a40dc9e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28626/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-7765) FSOutputSummer throwing ArrayIndexOutOfBoundsException

2020-01-09 Thread Jonathan Turner Eagles (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012009#comment-17012009
 ] 

Jonathan Turner Eagles commented on HDFS-7765:
--

[~wanghongbing], would you be willing to take over this jira? It's been over a 
year since the assignee has made a comment.

> FSOutputSummer throwing ArrayIndexOutOfBoundsException
> --
>
> Key: HDFS-7765
> URL: https://issues.apache.org/jira/browse/HDFS-7765
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
> Environment: Centos 6, Open JDK 7, Amazon EC2, Accumulo 1.6.2RC4
>Reporter: Keith Turner
>Assignee: Janmejay Singh
>Priority: Major
> Attachments: 
> 0001-PATCH-HDFS-7765-FSOutputSummer-throwing-ArrayIndexOu.patch, 
> HDFS-7765.patch
>
>
> While running an Accumulo test, we saw exceptions like the following while 
> trying to write to the write-ahead log in HDFS. 
> The exception occurs at 
> [FSOutputSummer.java:76|https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java#L76],
>  which is attempting to update a byte array.
> {noformat}
> 2015-02-06 19:46:49,769 [log.DfsLogger] WARN : Exception syncing 
> java.lang.reflect.InvocationTargetException
> java.lang.ArrayIndexOutOfBoundsException: 4608
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:76)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:50)
> at java.io.DataOutputStream.write(DataOutputStream.java:88)
> at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
> at 
> org.apache.accumulo.tserver.logger.LogFileKey.write(LogFileKey.java:87)
> at org.apache.accumulo.tserver.log.DfsLogger.write(DfsLogger.java:526)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logFileData(DfsLogger.java:540)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logManyTablets(DfsLogger.java:573)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger$6.write(TabletServerLogger.java:373)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.write(TabletServerLogger.java:274)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.logManyTablets(TabletServerLogger.java:365)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.flush(TabletServer.java:1667)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.closeUpdate(TabletServer.java:1754)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.accumulo.trace.instrument.thrift.RpcServerInvocationHandler.invoke(RpcServerInvocationHandler.java:46)
> at 
> org.apache.accumulo.server.util.RpcWrapper$1.invoke(RpcWrapper.java:47)
> at com.sun.proxy.$Proxy22.closeUpdate(Unknown Source)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2370)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2354)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:168)
> at 
> org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:516)
> at 
> org.apache.accumulo.server.util.CustomNonBlockingServer$1.run(CustomNonBlockingServer.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at 
> org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
> at 
> org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
> at java.lang.Thread.run(Thread.java:744)
> 2015-02-06 19:46:49,769 [log.DfsLogger] WARN : Exception syncing 
> java.lang.reflect.InvocationTargetException
> 2015-02-06 19:46:49,772 [log.DfsLogger] ERROR: 
> java.lang.ArrayIndexOutOfBoundsException: 4609
> java.lang.ArrayIndexOutOfBoundsException: 4609
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:76)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:50)
> at java.io.Dat

[jira] [Updated] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15108:

Status: Patch Available  (was: Open)

> RBF: MembershipNamenodeResolver should invalidate cache incase of active 
> namenode update
> 
>
> Key: HDFS-15108
> URL: https://issues.apache.org/jira/browse/HDFS-15108
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15108-01.patch
>
>
> If a failover happens, {{namenodeResolver.updateActiveNamenode(nsId, 
> address);}} is called, but this doesn't invalidate the cache, so the correct 
> active is not fetched the next time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15108:

Attachment: HDFS-15108-01.patch

> RBF: MembershipNamenodeResolver should invalidate cache incase of active 
> namenode update
> 
>
> Key: HDFS-15108
> URL: https://issues.apache.org/jira/browse/HDFS-15108
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15108-01.patch
>
>
> If a failover happens, {{namenodeResolver.updateActiveNamenode(nsId, 
> address);}} is called, but this doesn't invalidate the cache, so the correct 
> active is not fetched the next time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15108) RBF: MembershipNamenodeResolver should invalidate cache incase of active namenode update

2020-01-09 Thread Ayush Saxena (Jira)
Ayush Saxena created HDFS-15108:
---

 Summary: RBF: MembershipNamenodeResolver should invalidate cache 
incase of active namenode update
 Key: HDFS-15108
 URL: https://issues.apache.org/jira/browse/HDFS-15108
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


If a failover happens, {{namenodeResolver.updateActiveNamenode(nsId, 
address);}} is called, but this doesn't invalidate the cache, so the correct 
active is not fetched the next time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15107) dfs.client.server-defaults.validity.period.ms to support time units

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15107:

Attachment: HDFS-15107-01.patch

> dfs.client.server-defaults.validity.period.ms to support time units
> ---
>
> Key: HDFS-15107
> URL: https://issues.apache.org/jira/browse/HDFS-15107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15107-01.patch
>
>
> Add support for time units for dfs.client.server-defaults.validity.period.ms



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15107) dfs.client.server-defaults.validity.period.ms to support time units

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15107:

Status: Patch Available  (was: Open)

> dfs.client.server-defaults.validity.period.ms to support time units
> ---
>
> Key: HDFS-15107
> URL: https://issues.apache.org/jira/browse/HDFS-15107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15107-01.patch
>
>
> Add support for time units for dfs.client.server-defaults.validity.period.ms



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15107) dfs.client.server-defaults.validity.period.ms to support time units

2020-01-09 Thread Ayush Saxena (Jira)
Ayush Saxena created HDFS-15107:
---

 Summary: dfs.client.server-defaults.validity.period.ms to support 
time units
 Key: HDFS-15107
 URL: https://issues.apache.org/jira/browse/HDFS-15107
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ayush Saxena
Assignee: Ayush Saxena


Add support for time units for dfs.client.server-defaults.validity.period.ms



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15106) Remove unused code.

2020-01-09 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011942#comment-17011942
 ] 

Ayush Saxena commented on HDFS-15106:
-

+1(Pending Jenkins)

> Remove unused code.
> ---
>
> Key: HDFS-15106
> URL: https://issues.apache.org/jira/browse/HDFS-15106
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Trivial
> Attachments: HDFS-15106.001.patch
>
>
> I was reading the code for the concat() RPC and found an unused variable named 
> count. It was originally used to compute the namespace delta; now we use 
> QuotaCounts deltas, so it is useless. To keep the code clean, we should remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15106) Remove unused code.

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-15106:
---

Assignee: Jinglun  (was: Ayush Saxena)

> Remove unused code.
> ---
>
> Key: HDFS-15106
> URL: https://issues.apache.org/jira/browse/HDFS-15106
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Trivial
> Attachments: HDFS-15106.001.patch
>
>
> I was reading the code for the concat() RPC and found an unused variable named 
> count. It was originally used to compute the namespace delta; now we use 
> QuotaCounts deltas, so it is useless. To keep the code clean, we should remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15106) Remove unused code.

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-15106:
---

Assignee: Ayush Saxena  (was: Jinglun)

> Remove unused code.
> ---
>
> Key: HDFS-15106
> URL: https://issues.apache.org/jira/browse/HDFS-15106
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Ayush Saxena
>Priority: Trivial
> Attachments: HDFS-15106.001.patch
>
>
> I was reading the code for the concat() RPC and found an unused variable named 
> count. It was originally used to compute the namespace delta; now we use 
> QuotaCounts deltas, so it is useless. To keep the code clean, we should remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14578) AvailableSpaceBlockPlacementPolicy always prefers local node

2020-01-09 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14578:

Attachment: HDFS-14578-05.patch

> AvailableSpaceBlockPlacementPolicy always prefers local node
> 
>
> Key: HDFS-14578
> URL: https://issues.apache.org/jira/browse/HDFS-14578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14578-02.patch, HDFS-14578-03.patch, 
> HDFS-14578-04.patch, HDFS-14578-05.patch, HDFS-14578-WIP-01.patch, 
> HDFS-14758-01.patch
>
>
> It looks like AvailableSpaceBlockPlacementPolicy prefers the local node just 
> like BlockPlacementPolicyDefault does.
>  
> As Yongjun mentioned in 
> [HDFS-8131|https://issues.apache.org/jira/browse/HDFS-8131?focusedCommentId=16558739&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16558739],
>  
> {quote}Class AvailableSpaceBlockPlacementPolicy extends 
> BlockPlacementPolicyDefault. But it doesn't change the behavior of choosing 
> the first node in BlockPlacementPolicyDefault, so even with this new feature, 
> the local DN is always chosen as the first DN (of course when it is not 
> excluded), and the new feature only changes the selection of the rest of the 
> two DNs.
> {quote}
> I'm filing this Jira as I was grooming Cloudera's internal Jira and found this 
> unreported issue. We do have a customer hitting this problem. I don't have a 
> fix, but thought it would be beneficial to report it to Apache Jira.
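
For context, the space-aware part of the policy behaves roughly like the toy 
below (the preference fraction and all names are illustrative, not the actual 
patch); the reported bug is that the local node is chosen before this 
comparison ever runs:

{code}
import java.util.Random;

class AvailableSpaceChoiceSketch {
  private final Random rand = new Random();
  // Probability of preferring the emptier node; illustrative value only.
  private final float balancedPreference = 0.6f;

  /** Pick between two candidate datanodes, biased toward more free space. */
  String choose(String nodeA, long freeA, String nodeB, long freeB) {
    if (freeA == freeB) {
      return rand.nextBoolean() ? nodeA : nodeB;
    }
    String emptier = freeA > freeB ? nodeA : nodeB;
    String fuller = freeA > freeB ? nodeB : nodeA;
    return rand.nextFloat() < balancedPreference ? emptier : fuller;
  }
}
{code}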



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15104) If block is not reported by any Datanode, the flag corrupt of BlockLocation should be marked as true.

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011836#comment-17011836
 ] 

Hadoop QA commented on HDFS-15104:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.TestDecommissionWithStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15104 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990410/HDFS-15104.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1123587f2934 4.15.0-70-generic #79-Ubuntu SMP Tue Nov 12 
10:36:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a40dc9e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28624/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |

[jira] [Updated] (HDFS-15106) Remove unused code.

2020-01-09 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-15106:
---
Attachment: HDFS-15106.001.patch
Status: Patch Available  (was: Open)

> Remove unused code.
> ---
>
> Key: HDFS-15106
> URL: https://issues.apache.org/jira/browse/HDFS-15106
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Trivial
> Attachments: HDFS-15106.001.patch
>
>
> I was reading the code for the concat() RPC and found an unused variable named 
> count. It was originally used to compute the namespace delta; now we use 
> QuotaCounts deltas, so it is useless. To keep the code clean, we should remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15106) Remove unused code.

2020-01-09 Thread Jinglun (Jira)
Jinglun created HDFS-15106:
--

 Summary: Remove unused code.
 Key: HDFS-15106
 URL: https://issues.apache.org/jira/browse/HDFS-15106
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jinglun
Assignee: Jinglun


I was reading the code for the concat() RPC and found an unused variable named 
count. It was originally used to compute the namespace delta; now we use 
QuotaCounts deltas, so it is useless. To keep the code clean, we should remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15087) RBF: Balance/Rename across federation namespaces

2020-01-09 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011801#comment-17011801
 ] 

Jinglun commented on HDFS-15087:


I think we can separate it into the subtasks below:
 * saveTree rpc: save src to external storage(HDFS).
 * graftTree rpc: construct dst.
 * DN hardlink rpc: hardlink replicas in batch.
 * HardLink executor: collect locations and call hardlink rpcs to DNs.
 * Scheduler model.
 * Consistency check: before deleting src, check consistency of src and dst.
 * HFR.
 * Distcp version of HFR.

> RBF: Balance/Rename across federation namespaces
> 
>
> Key: HDFS-15087
> URL: https://issues.apache.org/jira/browse/HDFS-15087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Priority: Major
> Attachments: HDFS-15087.initial.patch, HFR_Rename Across Federation 
> Namespaces.pdf
>
>
> The Xiaomi storage team has developed a new feature called HFR (HDFS 
> Federation Rename) that enables us to do balance/rename across federation 
> namespaces. The idea is to first move the metadata to the dst NameNode and then 
> link all the replicas. It has been working in our largest production cluster 
> for 2 months. We use it to balance the namespaces. It turns out HFR is fast 
> and flexible. The details can be found in the design doc. 
> Looking forward to a lively discussion.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15083) Add new trash rpc which move the trash (mkdir and the rename) operation to the server side.

2020-01-09 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011798#comment-17011798
 ] 

Wei-Chiu Chuang commented on HDFS-15083:


Also, the existing trash implementation is pluggable, with the default 
implementation being TrashPolicyDefault.
This patch changes that behavior. While I am not aware of anyone using a 
non-default trash implementation, we do have users requesting enhancements to 
the existing trash behavior to support more complex use cases (e.g. a 
configurable trash root directory).

> Add new trash rpc which move the trash (mkdir and the rename) operation to 
> the server side.
> ---
>
> Key: HDFS-15083
> URL: https://issues.apache.org/jira/browse/HDFS-15083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient, namenode, rbf
>Affects Versions: 2.10.0, 3.2.0
>Reporter: zhuqi
>Assignee: zhuqi
>Priority: Major
> Attachments: HDFS-15083.001.patch
>
>
> The current RBF trash with multiple clusters mounted, from 
> [HDFS-14117|https://issues.apache.org/jira/browse/HDFS-14117], is not a 
> graceful solution.
> If we move the client-side trash operations (mkdir and rename) to the server 
> side, we not only solve the problem gracefully but also reduce the server-side 
> trash RPC load by about 50% compared to the original trash, which makes two 
> RPC calls (mkdir and rename).
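
A sketch of the difference being proposed (client-side view; {{moveToTrash}} as 
a single RPC is the proposal, not an existing API, and the variable names are 
illustrative):

{code}
// Today (TrashPolicyDefault, client side): two round trips per deletion.
fs.mkdirs(trashCheckpointDir);   // RPC 1: ensure the trash directory exists
fs.rename(path, trashPath);      // RPC 2: move the file into trash

// Proposed: a single server-side RPC doing both steps on the namenode.
dfsClient.moveToTrash(path);     // hypothetical new RPC
{code}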



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15083) Add new trash rpc which move the trash (mkdir and the rename) operation to the server side.

2020-01-09 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011794#comment-17011794
 ] 

Wei-Chiu Chuang commented on HDFS-15083:


This patch is not going to work for cloud storage, nor for webhdfs/httpfs. 
Those are important use cases, so please make sure this patch is compatible 
with them.

{code}
  DfsClientConf dfsClientConf = new DfsClientConf(conf);
  return permission.applyUMask(dfsClientConf.getUMask());
{code}
Unless absolutely necessary, I would like to avoid using DfsClientConf in the 
NameNode.



> Add new trash rpc which move the trash (mkdir and the rename) operation to 
> the server side.
> ---
>
> Key: HDFS-15083
> URL: https://issues.apache.org/jira/browse/HDFS-15083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient, namenode, rbf
>Affects Versions: 2.10.0, 3.2.0
>Reporter: zhuqi
>Assignee: zhuqi
>Priority: Major
> Attachments: HDFS-15083.001.patch
>
>
> The current RBF trash with multiple clusters mounted, from 
> [HDFS-14117|https://issues.apache.org/jira/browse/HDFS-14117], is not a 
> graceful solution.
> If we move the client-side trash operations (mkdir and rename) to the server 
> side, we not only solve the problem gracefully but also reduce the server-side 
> trash RPC load by about 50% compared to the original trash, which makes two 
> RPC calls (mkdir and rename).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15067) Optimize heartbeat for large cluster

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011790#comment-17011790
 ] 

Hadoop QA commented on HDFS-15067:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-15067 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-15067 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990421/HDFS-15067.01.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28625/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-15067.01.patch, image-2020-01-09-18-00-49-556.png
>
>
> In a large cluster the NameNode spends significant time processing 
> heartbeats. For example, in a 10K-node cluster the NameNode processes 10K 
> heartbeat RPCs every 3 seconds, which impacts client response time. The 
> heartbeat can be optimized: a DN can start skipping heartbeats if no work 
> (write/replication/delete) has been allocated to it for a long time, and 
> send a heartbeat every 6 seconds instead. Once the DN starts getting work 
> from the NN, it resumes sending heartbeats normally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15067) Optimize heartbeat for large cluster

2020-01-09 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-15067:
--
Status: Patch Available  (was: Open)

> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-15067.01.patch, image-2020-01-09-18-00-49-556.png
>
>
> In a large cluster the NameNode spends significant time processing 
> heartbeats. For example, in a 10K-node cluster the NameNode processes 10K 
> heartbeat RPCs every 3 seconds, which impacts client response time. The 
> heartbeat can be optimized: a DN can start skipping heartbeats if no work 
> (write/replication/delete) has been allocated to it for a long time, and 
> send a heartbeat every 6 seconds instead. Once the DN starts getting work 
> from the NN, it resumes sending heartbeats normally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15067) Optimize heartbeat for large cluster

2020-01-09 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011784#comment-17011784
 ] 

Surendra Singh Lilhore commented on HDFS-15067:
---

Attached an initial patch for review of the idea. I will improve it and add 
new UTs in the next patch.

> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-15067.01.patch, image-2020-01-09-18-00-49-556.png
>
>
> In a large cluster the NameNode spends significant time processing 
> heartbeats. For example, in a 10K-node cluster the NameNode processes 10K 
> heartbeat RPCs every 3 seconds, which impacts client response time. The 
> heartbeat can be optimized: a DN can start skipping heartbeats if no work 
> (write/replication/delete) has been allocated to it for a long time, and 
> send a heartbeat every 6 seconds instead. Once the DN starts getting work 
> from the NN, it resumes sending heartbeats normally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15067) Optimize heartbeat for large cluster

2020-01-09 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-15067:
--
Attachment: HDFS-15067.01.patch

> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-15067.01.patch, image-2020-01-09-18-00-49-556.png
>
>
> In a large cluster the NameNode spends significant time processing 
> heartbeats. For example, in a 10K-node cluster the NameNode processes 10K 
> heartbeat RPCs every 3 seconds, which impacts client response time. The 
> heartbeat can be optimized: a DN can start skipping heartbeats if no work 
> (write/replication/delete) has been allocated to it for a long time, and 
> send a heartbeat every 6 seconds instead. Once the DN starts getting work 
> from the NN, it resumes sending heartbeats normally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15105) Standby NN exits and fails to restart due to edit log corruption

2020-01-09 Thread Tao Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated HDFS-15105:

Description: 
We found an issue where the Standby NN exited and then failed to restart until 
we resolved the edit log corruption.
 Error logs:
{noformat}
java.io.IOException: Mismatched block IDs or generation stamps, attempting to 
replace block blk_74288647857_73526148211 with blk_74288647857_73526377369 as 
block # 15/17 of /maindump/mainv10/dump_online/lasttable/20200105015500/part-319
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1019)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:431)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:234)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:885)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:866)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:234)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:342)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:295)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:312)
        at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:455)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:308)
{noformat}

Related edit log transactions of the same file:
{noformat}
1. TXID=444341628498  time=1578251449632
OP_UPDATE_BLOCKS
blocks: ... blk_74288647857_73526148211   blk_74454090866_73526215536

2. TXID=444342382774   time=1578251520740
OP_REASSIGN_LEASE

3. TXID=444342401216  time=1578251522779
OP_CLOSE
blocks: ... blk_74288647857_73526377369   blk_74454090866_73526374095

4. TXID=444342401394
OP_SET_GENSTAMP_V2 
generation stamp: 73526377369

5. TXID=444342401395  time=1578251522835  (03:12:02,835)
OP_TRUNCATE

6. TXID=444342402176  time=1578251523246  (03:12:03,246)
OP_CLOSE
blocks: ... blk_74288647857_73526377369 
{noformat}

According to the edit logs, it is weird that generation stamp 73526377369 was 
generated in transaction 4 but already used in transaction 3. Moreover, 
transaction 3 should change only the last block, but in fact the last two 
blocks are both changed.

This problem might be produced in a complex scenario where a truncate 
operation immediately follows a recover-lease operation on the same file. A 
suspicious point is that, between the creation of transaction 3 and its being 
written out, the generation stamp of the second-to-last block was updated 
while committing the block synchronization triggered by the truncate 
operation.
The related call stack is as follows: 
{noformat}
NameNodeRpcServer#commitBlockSynchronization
  FSNamesystem#commitBlockSynchronization
    // update last block
    if(!copyTruncate) {
      storedBlock.setGenerationStamp(newgenerationstamp); //updated now!!!
      storedBlock.setNumBytes(newlength);
    }
{noformat}
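
For reference, the check that rejects these edits on replay looks roughly like 
the following (a paraphrase of FSEditLogLoader#updateBlocks, not a verbatim 
copy):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.Block;

// Paraphrased sketch: every block must match by ID, and by generation stamp
// too unless the op is purely a genstamp update of the last block; otherwise
// the loader throws the IOException shown in the stack trace above.
static void checkBlocksMatch(Block[] oldBlocks, Block[] newBlocks, String path)
    throws IOException {
  boolean isGenStampUpdate = oldBlocks.length == newBlocks.length;
  for (int i = 0; i < oldBlocks.length && i < newBlocks.length; i++) {
    Block oldBlock = oldBlocks[i];
    Block newBlock = newBlocks[i];
    boolean isLastBlock = i == newBlocks.length - 1;
    if (oldBlock.getBlockId() != newBlock.getBlockId()
        || (oldBlock.getGenerationStamp() != newBlock.getGenerationStamp()
            && !(isGenStampUpdate && isLastBlock))) {
      throw new IOException("Mismatched block IDs or generation stamps,"
          + " attempting to replace block " + oldBlock + " with " + newBlock
          + " as block # " + i + "/" + newBlocks.length + " of " + path);
    }
  }
}
{code}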

Any comments are welcome. Thanks.

  was:
We found an issue where the Standby NN exited and then failed to restart until 
we resolved the edit log corruption.
 Error logs:
{noformat}
java.io.IOException: Mismatched block IDs or generation stamps, attempting to 
replace block blk_74288647857_73526148211 with blk_74288647857_73526377369 as 
block # 15/17 of /maindump/mainv10/dump_online/lasttable/20200105015500/part-319
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1019)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:431)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:234)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:885)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:866)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:234)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:342)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:295)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:312)
        at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:455)
        at

[jira] [Created] (HDFS-15105) Standby NN exits and fails to restart due to edit log corruption

2020-01-09 Thread Tao Yang (Jira)
Tao Yang created HDFS-15105:
---

 Summary: Standby NN exits and fails to restart due to edit log 
corruption
 Key: HDFS-15105
 URL: https://issues.apache.org/jira/browse/HDFS-15105
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Tao Yang


We found an issue where the Standby NN exited and then failed to restart until 
we resolved the edit log corruption.
 Error logs:
{noformat}
java.io.IOException: Mismatched block IDs or generation stamps, attempting to 
replace block blk_74288647857_73526148211 with blk_74288647857_73526377369 as 
block # 15/17 of /maindump/mainv10/dump_online/lasttable/20200105015500/part-319
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1019)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:431)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:234)
        at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:885)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:866)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:234)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:342)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:295)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:312)
        at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:455)
        at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:308)
{noformat}

Related edit log transactions of the same file:
{noformat}
1. TXID=444341628498  time=1578251449632
OP_UPDATE_BLOCKS
blocks: ... blk_74288647857_73526148211   blk_74454090866_73526215536

2. TXID=444342382774   time=1578251520740
OP_REASSIGN_LEASE

3. TXID=444342401216  time=1578251522779
OP_CLOSE
blocks: ... blk_74288647857_73526377369   blk_74454090866_73526374095

4. TXID=444342401394
OP_SET_GENSTAMP_V2 
generation stamp: 73526377369    (generated here but already used in the 
previous transaction)

5. TXID=444342401395  time=1578251522835  (03:12:02,835)
OP_TRUNCATE

6. TXID=444342402176  time=1578251523246  (03:12:03,246)
OP_CLOSE
blocks: ... blk_74288647857_73526377369 
{noformat}

According to the edit logs, it is weird that generation stamp 73526377369 was 
generated in transaction 4 but already used in transaction 3. Moreover, 
transaction 3 should change only the last block, but in fact the last two 
blocks are both changed.

This problem might be produced in a complex scenario where a truncate 
operation immediately follows a recover-lease operation on the same file. A 
suspicious point is that, between the creation of transaction 3 and its being 
written out, the generation stamp of the second-to-last block was updated 
while committing the block synchronization triggered by the truncate 
operation.
The related call stack is as follows: 
{noformat}
NameNodeRpcServer#commitBlockSynchronization
  FSNamesystem#commitBlockSynchronization
    // update last block
    if(!copyTruncate) {
      storedBlock.setGenerationStamp(newgenerationstamp); //updated now!!!
      storedBlock.setNumBytes(newlength);
    }
{noformat}

Any comments are welcome. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15067) Optimize heartbeat for large cluster

2020-01-09 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011764#comment-17011764
 ] 

Surendra Singh Lilhore commented on HDFS-15067:
---

Design Idea:

=

Added two new properties:
|*Property*|*Description*|*Default value*|
|dfs.datanode.heartbeat.optimizer.skip.max.heartbeat|Maximum number of 
heartbeats that can be skipped at one time.|3|
|dfs.datanode.heartbeat.optimizer.max.idle.time|DataNode idle time after which 
it starts skipping heartbeats. A value of 0 (the default) disables the 
feature.|0|

The user needs to configure the maximum heartbeats to skip and the DataNode 
max idle time; once that idle time elapses, the DataNode starts skipping 
heartbeats incrementally. After the first idle window it skips one heartbeat, 
after two idle windows it skips two heartbeats, and so on, but it always 
guarantees at least one heartbeat before the stale interval.

The core logic is deciding how many heartbeats may be skipped at one time, 
which depends on the NameNode's stale interval. That setting is not available 
on the DataNode, so the value has to come from the NameNode; it can be 
delivered in the DatanodeRegistration at registration time. The maximum number 
of heartbeats to skip is derived from the stale interval with this formula:

*Max heartbeats to skip = min((staleInterval - heartbeatInterval) / 
heartbeatInterval, configuredMaxHeartbeatSkip);*
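
A minimal sketch of that rule (all names are hypothetical, not taken from the 
attached patch):

{code:java}
// Sketch only: bound the number of skipped heartbeats so the DataNode is
// never marked stale, then back off by one extra heartbeat per idle window.
static long heartbeatsToSkip(long elapsedIdleWindows, long staleIntervalMs,
    long heartbeatIntervalMs, long configuredMaxHeartbeatSkip) {
  long staleBound =
      (staleIntervalMs - heartbeatIntervalMs) / heartbeatIntervalMs;
  long maxSkip = Math.min(staleBound, configuredMaxHeartbeatSkip);
  return Math.min(elapsedIdleWindows, maxSkip);
}
{code}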

!image-2020-01-09-18-00-49-556.png!

> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: image-2020-01-09-18-00-49-556.png
>
>
> In a large cluster the NameNode spends significant time processing 
> heartbeats. For example, in a 10K-node cluster the NameNode processes 10K 
> heartbeat RPCs every 3 seconds, which impacts client response time. The 
> heartbeat can be optimized: a DN can start skipping heartbeats if no work 
> (write/replication/delete) has been allocated to it for a long time, and 
> send a heartbeat every 6 seconds instead. Once the DN starts getting work 
> from the NN, it resumes sending heartbeats normally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15067) Optimize heartbeat for large cluster

2020-01-09 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-15067:
--
Attachment: image-2020-01-09-18-00-49-556.png

> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: image-2020-01-09-18-00-49-556.png
>
>
> In a large cluster the NameNode spends significant time processing 
> heartbeats. For example, in a 10K-node cluster the NameNode processes 10K 
> heartbeat RPCs every 3 seconds, which impacts client response time. The 
> heartbeat can be optimized: a DN can start skipping heartbeats if no work 
> (write/replication/delete) has been allocated to it for a long time, and 
> send a heartbeat every 6 seconds instead. Once the DN starts getting work 
> from the NN, it resumes sending heartbeats normally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15067) Optimize heartbeat for large cluster

2020-01-09 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-15067:
--
Attachment: (was: image-2020-01-09-17-56-13-814.png)

> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
>
> In a large cluster the NameNode spends significant time processing 
> heartbeats. For example, in a 10K-node cluster the NameNode processes 10K 
> heartbeat RPCs every 3 seconds, which impacts client response time. The 
> heartbeat can be optimized: a DN can start skipping heartbeats if no work 
> (write/replication/delete) has been allocated to it for a long time, and 
> send a heartbeat every 6 seconds instead. Once the DN starts getting work 
> from the NN, it resumes sending heartbeats normally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15067) Optimize heartbeat for large cluster

2020-01-09 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-15067:
--
Attachment: (was: image-2020-01-09-17-45-50-076.png)

> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
>
> In a large cluster the NameNode spends significant time processing 
> heartbeats. For example, in a 10K-node cluster the NameNode processes 10K 
> heartbeat RPCs every 3 seconds, which impacts client response time. The 
> heartbeat can be optimized: a DN can start skipping heartbeats if no work 
> (write/replication/delete) has been allocated to it for a long time, and 
> send a heartbeat every 6 seconds instead. Once the DN starts getting work 
> from the NN, it resumes sending heartbeats normally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15067) Optimize heartbeat for large cluster

2020-01-09 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-15067:
--
Attachment: image-2020-01-09-17-56-13-814.png

> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: image-2020-01-09-17-45-50-076.png, 
> image-2020-01-09-17-56-13-814.png
>
>
> In a large cluster the NameNode spends significant time processing 
> heartbeats. For example, in a 10K-node cluster the NameNode processes 10K 
> heartbeat RPCs every 3 seconds, which impacts client response time. The 
> heartbeat can be optimized: a DN can start skipping heartbeats if no work 
> (write/replication/delete) has been allocated to it for a long time, and 
> send a heartbeat every 6 seconds instead. Once the DN starts getting work 
> from the NN, it resumes sending heartbeats normally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15067) Optimize heartbeat for large cluster

2020-01-09 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-15067:
--
Attachment: image-2020-01-09-17-45-50-076.png

> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: image-2020-01-09-17-45-50-076.png
>
>
> In a large cluster the NameNode spends significant time processing 
> heartbeats. For example, in a 10K-node cluster the NameNode processes 10K 
> heartbeat RPCs every 3 seconds, which impacts client response time. The 
> heartbeat can be optimized: a DN can start skipping heartbeats if no work 
> (write/replication/delete) has been allocated to it for a long time, and 
> send a heartbeat every 6 seconds instead. Once the DN starts getting work 
> from the NN, it resumes sending heartbeats normally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011718#comment-17011718
 ] 

Hadoop QA commented on HDFS-15097:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 41s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 
0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-kms in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
1s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
48s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15097 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990082/HDFS-15097.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 953e906e5a88 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a40dc9e |
| maven | version: Apache Maven 3.3.9 |
| Default Java |

[jira] [Commented] (HDFS-15102) HttpFS : put operation is not supported with path "/"

2020-01-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011691#comment-17011691
 ] 

Hadoop QA commented on HDFS-15102:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
49s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15102 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12990405/HDFS-15102.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ec56717a67ad 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a40dc9e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28623/testReport/ |
| Max. process+thread count | 634 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28623/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> HttpFS : put operation is not supported with path "/"
> -
>
> Key: HDFS-15102
>

[jira] [Updated] (HDFS-15104) If block is not reported by any Datanode, the flag corrupt of BlockLocation should be marked as true.

2020-01-09 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15104:

Issue Type: Bug  (was: Improvement)

> If block is  not reported by any Datanode, the flag corrupt of BlockLocation 
> should be marked as true.
> --
>
> Key: HDFS-15104
> URL: https://issues.apache.org/jira/browse/HDFS-15104
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Attachments: HDFS-15104.patch
>
>
> The corrupt flag of the BlockLocation returned from getFileBlockLocations() 
> is not marked true even when the block is not reported by any DataNode (the 
> hosts array is empty).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15104) If block is not reported by any Datanode, the flag corrupt of BlockLocation should be marked as true.

2020-01-09 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15104:

Attachment: HDFS-15104.patch
  Assignee: Yang Yun
Status: Patch Available  (was: Open)

> If block is  not reported by any Datanode, the flag corrupt of BlockLocation 
> should be marked as true.
> --
>
> Key: HDFS-15104
> URL: https://issues.apache.org/jira/browse/HDFS-15104
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Attachments: HDFS-15104.patch
>
>
> The corrupt flag of the BlockLocation returned from getFileBlockLocations() 
> is not marked true even when the block is not reported by any DataNode (the 
> hosts array is empty).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15104) If block is not reported by any Datanode, the flag corrupt of BlockLocation should be marked as true.

2020-01-09 Thread Yang Yun (Jira)
Yang Yun created HDFS-15104:
---

 Summary: If block is  not reported by any Datanode, the flag 
corrupt of BlockLocation should be marked as true.
 Key: HDFS-15104
 URL: https://issues.apache.org/jira/browse/HDFS-15104
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yang Yun


The corrupt flag of the BlockLocation returned from getFileBlockLocations() is 
not marked true even when the block is not reported by any DataNode (the hosts 
array is empty).
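
A small illustration of the reported behavior (a sketch under assumptions; the 
FileSystem handle and path are hypothetical):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical check: a block with no reporting DataNodes still comes back
// with isCorrupt() == false, which this issue argues should be true.
void findUnflaggedBlocks(FileSystem fs, Path file) throws IOException {
  FileStatus status = fs.getFileStatus(file);
  for (BlockLocation loc :
      fs.getFileBlockLocations(status, 0, status.getLen())) {
    if (loc.getHosts().length == 0 && !loc.isCorrupt()) {
      System.out.println("Unreported block not flagged corrupt: " + loc);
    }
  }
}
{code}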



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15102) HttpFS : put operation is not supported with path "/"

2020-01-09 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-15102:
-
Attachment: HDFS-15102.001.patch
Status: Patch Available  (was: Open)

> HttpFS : put operation is not supported with path "/"
> -
>
> Key: HDFS-15102
> URL: https://issues.apache.org/jira/browse/HDFS-15102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15102.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-09 Thread Doris Gu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-15097:

Status: Patch Available  (was: Open)

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.1.3, 3.2.1, 3.0.3, 3.3.0
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. It is more suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15097) Purge log in KMS and HttpFS

2020-01-09 Thread Doris Gu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011600#comment-17011600
 ] 

Doris Gu commented on HDFS-15097:
-

Thanks, [~weichiu]. By the way, since there is no other use of the class 
ConfigurationWithLogging, would it be appropriate to delete it?

 

> Purge log in KMS and HttpFS
> ---
>
> Key: HDFS-15097
> URL: https://issues.apache.org/jira/browse/HDFS-15097
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HDFS-15097.001.patch
>
>
> KMS and HttpFS use ConfigurationWithLogging instead of Configuration, which 
> logs every configuration access. It is more suited to development use.
> {code:java}
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:00,456 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 
> 'false') 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' 
> 2020-01-07 16:52:15,091 INFO org.apache.hadoop.conf.ConfigurationWithLogging: 
> Got hadoop.security.instrumentation.requires.admin = 'false' (default 'false')
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


