[jira] [Updated] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-07 Thread wangzhaohui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-14674:
---
Attachment: HDFS-14674-006.patch

> [SBN read] Got an unexpected txid when tail editlog
> ---
>
> Key: HDFS-14674
> URL: https://issues.apache.org/jira/browse/HDFS-14674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Blocker
> Attachments: HDFS-14674-001.patch, HDFS-14674-003.patch, 
> HDFS-14674-004.patch, HDFS-14674-005.patch, HDFS-14674-006.patch, 
> image-2019-07-26-11-34-23-405.png, image.png
>
>
> After adding the following configuration:
> !image-2019-07-26-11-34-23-405.png!
> the following error occurs:
> {code:java}
> [2019-07-17T11:50:21.048+08:00] [INFO] [Edit log tailer] : replaying edit log: 1/20512836 transactions completed. (0%)
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Edits file http://ip/getJournal?jid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH, http://ip/getJournal?jid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH, http://ip/getJournal?jid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH of size 3126782311 edits # 500 loaded in 3 seconds
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@51ceb7bc expecting start txid #232056752162
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Start loading edits file http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH, http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH, http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH maxTxnsToRead = 500
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Fast-forwarding stream 'http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH, http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH, http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH' to transaction ID 232056751662
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Fast-forwarding stream 'http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH' to transaction ID 232056751662
> [2019-07-17T11:50:21.061+08:00] [ERROR] [Edit log tailer] : Unknown error encountered while tailing edits. Shutting down standby NN.
> java.io.IOException: There appears to be a gap in the edit log. We expected txid 232056752162, but got txid 232077264498.
>     at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
>     at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:239)
>     at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:895)
>     at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
>     at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>     at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>     at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>     at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
>     at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> [2019-07-17T11:50:21.064+08:00] [INFO] [Edit log tailer] : Exiting with status 1
> [2019-07-17T11:50:21.066+08:00] [INFO] [Thread-1] : SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ip
> ************************************************************/
> {code}
>  
> If dfs.ha.tail-edits.max-txns-per-lock is set to 500, the NameNode loads at
> most 500 transactions from the current edit log per batch. When that limit is
> reached, it moves on to the next edit log stream even though the current edit
> log still contains more than 500 transactions, so the NameNode gets an
> unexpected txid when tailing the edit log, as sketched below.
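> A minimal, self-contained sketch (hypothetical; simplified from the
> EditLogTailer/FSEditLogLoader behavior, reusing the txids from the log above)
> of how the 500-transaction batch cap produces the gap error:
> {code:java}
> import java.io.IOException;
>
> public class TailBatchGapDemo {
>   public static void main(String[] args) throws IOException {
>     long maxTxnsPerLock = 500L;        // dfs.ha.tail-edits.max-txns-per-lock
>     long expectedTxId = 232056751662L; // tailing resumed here, mid-segment
>     // A batch of 500 txns is loaded and the loader now expects this txid,
>     // which is still inside the first, unfinished segment:
>     expectedTxId += maxTxnsPerLock;    // 232056752162
>     // But the next stream handed to the loader starts at the next segment:
>     long nextStreamFirstTxId = 232077264498L;
>     if (nextStreamFirstTxId != expectedTxId) {
>       throw new IOException("There appears to be a gap in the edit log. "
>           + "We expected txid " + expectedTxId + ", but got txid "
>           + nextStreamFirstTxId + ".");
>     }
>   }
> }
> {code}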
>  
>  
> {code:java}
> //
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Edits f

[jira] [Commented] (HDFS-14099) Unknown frame descriptor when decompressing multiple frames in ZStandardDecompressor

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902721#comment-16902721
 ] 

Hadoop QA commented on HDFS-14099:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
49s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 18s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestDiskChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-441/4/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/441 |
| JIRA Issue | HDFS-14099 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 4878a7771852 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 00b5a27 |
| Default Java | 1.8.0_212 |
| checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job

[jira] [Commented] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902718#comment-16902718
 ] 

Hadoop QA commented on HDFS-14350:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
50s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
31s{color} | {color:red} The patch generated 4 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-582/5/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/582 |
| JIRA Issue | HDFS-14350 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 007c7e0882a6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / b0131bc |
| Default Java | 1.8.0_212 |
| unit | 
https://builds.apache

[jira] [Commented] (HDFS-14456) HAState#prepareToEnterState needn't a lock

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902710#comment-16902710
 ] 

Hadoop QA commented on HDFS-14456:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 17m 
20s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
53s{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 165 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 400 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestBlockTokenWrappingQOP |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-770/4/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/770 |
| JIRA Issue | HDFS-14456 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 70aeb9764

[jira] [Created] (HDDS-1931) Recon cannot download OM DB snapshot in ozonesecure

2019-08-07 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1931:
---

 Summary: Recon cannot download OM DB snapshot in ozonesecure 
 Key: HDDS-1931
 URL: https://issues.apache.org/jira/browse/HDDS-1931
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: docker, Ozone Recon
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila


{code}
recon_1 | 2019-08-07 22:09:40 ERROR OzoneManagerServiceProviderImpl:186 - Unable to obtain Ozone Manager DB Snapshot.
recon_1 | java.io.IOException: Unexpected exception when trying to reach Ozone Manager,
recon_1 | Error 401 Authentication required
recon_1 | HTTP ERROR 401
recon_1 | Problem accessing /dbCheckpoint. Reason:
recon_1 | Authentication required
recon_1 |   at org.apache.hadoop.ozone.recon.ReconUtils.makeHttpCall(ReconUtils.java:171)
recon_1 |   at org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.getOzoneManagerDBSnapshot(OzoneManagerServiceProviderImpl.java:170)
recon_1 |   at org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.updateReconOmDBWithNewSnapshot(OzoneManagerServiceProviderImpl.java:141)
recon_1 |   at org.apache.hadoop.ozone.recon.ReconServer.lambda$scheduleReconTasks$1(ReconServer.java:138)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-08-07 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun reopened HDFS-14034:
-

Re-opening this for backporting to branch-2.

> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14034-branch-2.000.patch, 
> HDFS-14034-branch-2.001.patch, HDFS-14034.000.patch, HDFS-14034.001.patch, 
> HDFS-14034.002.patch, HDFS-14034.004.patch
>
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}} which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA is to track adding support for this API to 
> WebHDFS. 
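> A hedged usage sketch of what this adds for WebHDFS clients (the host and
> path are hypothetical placeholders; without native support,
> WebHdfsFileSystem inherits the FileSystem default, which falls back to the
> heavier {{getContentSummary}}):
> {code:java}
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.QuotaUsage;
>
> public class QuotaUsageOverWebHdfs {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Placeholder NameNode HTTP address.
>     FileSystem fs = FileSystem.get(URI.create("webhdfs://namenode:9870"), conf);
>     QuotaUsage usage = fs.getQuotaUsage(new Path("/user/example"));
>     System.out.println("files+dirs=" + usage.getFileAndDirectoryCount()
>         + " spaceConsumed=" + usage.getSpaceConsumed());
>   }
> }
> {code}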



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-08-07 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902688#comment-16902688
 ] 

Chao Sun commented on HDFS-14034:
-

Thanks [~ayushtkn]. Let me re-open the Jira and try that.

> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14034-branch-2.000.patch, 
> HDFS-14034-branch-2.001.patch, HDFS-14034.000.patch, HDFS-14034.001.patch, 
> HDFS-14034.002.patch, HDFS-14034.004.patch
>
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}} which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA is to track adding support for this API to 
> WebHDFS. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-08-07 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14034:

Status: Patch Available  (was: Reopened)

> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14034-branch-2.000.patch, 
> HDFS-14034-branch-2.001.patch, HDFS-14034.000.patch, HDFS-14034.001.patch, 
> HDFS-14034.002.patch, HDFS-14034.004.patch
>
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}} which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA is to track adding support for this API to 
> WebHDFS. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-08-07 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902687#comment-16902687
 ] 

Ayush Saxena commented on HDFS-14034:
-

To trigger CI, the state should be Patch Available, I guess.

> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14034-branch-2.000.patch, 
> HDFS-14034-branch-2.001.patch, HDFS-14034.000.patch, HDFS-14034.001.patch, 
> HDFS-14034.002.patch, HDFS-14034.004.patch
>
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}} which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA is to track adding support for this API to 
> WebHDFS. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-08-07 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14034:

Attachment: HDFS-14034-branch-2.001.patch

> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14034-branch-2.000.patch, 
> HDFS-14034-branch-2.001.patch, HDFS-14034.000.patch, HDFS-14034.001.patch, 
> HDFS-14034.002.patch, HDFS-14034.004.patch
>
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}} which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA is to track adding support for this API to 
> WebHDFS. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-08-07 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902686#comment-16902686
 ] 

Chao Sun commented on HDFS-14034:
-

Not sure why CI wasn't triggered for the branch-2 patch. Re-attaching patch v1 to try.

> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14034-branch-2.000.patch, 
> HDFS-14034-branch-2.001.patch, HDFS-14034.000.patch, HDFS-14034.001.patch, 
> HDFS-14034.002.patch, HDFS-14034.004.patch
>
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}} which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA is to track adding support for this API to 
> WebHDFS. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14318) dn cannot be recognized and must be restarted to recognize the Repaired disk

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902678#comment-16902678
 ] 

Hadoop QA commented on HDFS-14318:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
36s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 154 unchanged - 0 fixed = 155 total (was 154) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Possible doublecheck on 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskThread in 
org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread()  At 
DataNode.java:org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread()
  At DataNode.java:[lines 2212-2214] |
|  |  Null pointer dereference of DataNode.errorDisk in 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError()  Dereferenced 
at DataNode.java:in 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError()  Dereferenced 
at DataNode.java:[line 3489] |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHot

[jira] [Commented] (HDFS-14564) Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902677#comment-16902677
 ] 

Hadoop QA commented on HDFS-14564:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
19s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
19s{color} | {color:blue} branch/hadoop-hdfs-project/hadoop-hdfs-native-client 
no findbugs output file (findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 17m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} root: The patch generated 0 new + 110 unchanged - 1 
fixed = 110 total (was 111) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} hadoop-hdfs-project/hadoop-hdfs-native-client has no 
data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 59

[jira] [Work logged] (HDDS-1863) Freon RandomKeyGenerator even if keySize is set to 0, it returns some random data to key

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1863?focusedWorklogId=290990&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290990
 ]

ASF GitHub Bot logged work on HDDS-1863:


Author: ASF GitHub Bot
Created on: 08/Aug/19 04:30
Start Date: 08/Aug/19 04:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1167: HDDS-1863. Freon 
RandomKeyGenerator even if keySize is set to 0, it returns some random data to 
key.
URL: https://github.com/apache/hadoop/pull/1167#issuecomment-519358446
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 131 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 748 | trunk passed |
   | +1 | compile | 461 | trunk passed |
   | +1 | checkstyle | 103 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1168 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 227 | trunk passed |
   | 0 | spotbugs | 555 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 812 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 706 | the patch passed |
   | +1 | compile | 439 | the patch passed |
   | +1 | javac | 439 | the patch passed |
   | +1 | checkstyle | 93 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 859 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 217 | the patch passed |
   | +1 | findbugs | 771 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 411 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3099 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 10479 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1167/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1167 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a4622866266d 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1167/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1167/7/testReport/ |
   | Max. process+thread count | 5370 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1167/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290990)
Time Spent: 3h 20m  (was: 3h 10m)

> Freon RandomKeyGenerator even if keySize is set to 0, it returns some random 
> data to key
> 

[jira] [Commented] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2019-08-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902670#comment-16902670
 ] 

Akira Ajisaka commented on HDFS-13762:
--

In [^SCM_Cache_Perf_Results-v1.pdf], the DFSIO 1TB sequential read performance 
(HDD) is 4.74 MB/sec, which I think is very slow. Normally HDD sequential read 
performance is about 80~100 MB/sec, and the TestDFSIO result should be similar. 
Am I missing something?

> Support non-volatile storage class memory(SCM) in HDFS cache directives
> ---
>
> Key: HDFS-13762
> URL: https://issues.apache.org/jira/browse/HDFS-13762
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: caching, datanode
>Reporter: Sammi Chen
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-13762.000.patch, HDFS-13762.001.patch, 
> HDFS-13762.002.patch, HDFS-13762.003.patch, HDFS-13762.004.patch, 
> HDFS-13762.005.patch, HDFS-13762.006.patch, HDFS-13762.007.patch, 
> HDFS-13762.008.patch, SCMCacheDesign-2018-11-08.pdf, 
> SCMCacheDesign-2019-07-12.pdf, SCMCacheDesign-2019-07-16.pdf, 
> SCMCacheDesign-2019-3-26.pdf, SCMCacheTestPlan-2019-3-27.pdf, 
> SCMCacheTestPlan.pdf, SCM_Cache_Perf_Results-v1.pdf
>
>
> Non-volatile storage class memory is a type of memory that can keep its data 
> content after a power failure or across power cycles. A non-volatile storage 
> class memory device usually has access speed close to that of a memory DIMM 
> while costing less than memory. So today it is usually used as a supplement 
> to memory to hold long-term persistent data, such as data in a cache. 
> Currently in HDFS, we have an OS page cache backed read-only cache and a 
> RAMDISK based lazy-write cache. Non-volatile memory suits both of these 
> functions. This Jira aims to enable storage class memory first in the read 
> cache. Although storage class memory has non-volatile characteristics, to 
> keep the same behavior as the current read-only cache, we don't use its 
> persistence characteristics currently.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14260) Replace synchronized method in BlockReceiver with atomic value

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902661#comment-16902661
 ] 

Hadoop QA commented on HDFS-14260:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} https://github.com/apache/hadoop/pull/483 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/483 |
| JIRA Issue | HDFS-14260 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-483/4/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Replace synchronized method in BlockReceiver with atomic value
> --
>
> Key: HDFS-14260
> URL: https://issues.apache.org/jira/browse/HDFS-14260
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14260.1.patch, HDFS-14260.2.patch
>
>
> This synchronized block is protecting {{lastSentTime}}, which is a primitive 
> long.  We can use an AtomicLong and remove this synchronization.
> {code}
>   synchronized boolean packetSentInTime() {
> long diff = Time.monotonicNow() - lastSentTime;
> if (diff > maxSendIdleTime) {
>   LOG.info("A packet was last sent " + diff + " milliseconds ago.");
>   return false;
> }
> return true;
>   }
> {code}
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java#L392-L399
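> A minimal sketch of the lock-free alternative (hypothetical: class, field,
> and constant names are illustrative, and System.nanoTime() stands in for
> Hadoop's Time.monotonicNow()):
> {code:java}
> import java.util.concurrent.atomic.AtomicLong;
>
> class PacketSentTimer {
>   private static long monotonicNowMs() {
>     return System.nanoTime() / 1_000_000L; // stand-in for Time.monotonicNow()
>   }
>
>   private final long maxSendIdleTime = 5_000L; // ms, illustrative value
>   private final AtomicLong lastSentTime = new AtomicLong(monotonicNowMs());
>
>   void onPacketSent() {
>     lastSentTime.set(monotonicNowMs());
>   }
>
>   // Lock-free equivalent of the synchronized packetSentInTime() above.
>   boolean packetSentInTime() {
>     long diff = monotonicNowMs() - lastSentTime.get();
>     if (diff > maxSendIdleTime) {
>       System.out.println("A packet was last sent " + diff + " milliseconds ago.");
>       return false;
>     }
>     return true;
>   }
> }
> {code}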



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14617) Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-07 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902657#comment-16902657
 ] 

Wei-Chiu Chuang commented on HDFS-14617:


{quote}
Test with fsimage of over 165M inodes and ~23 GB on disk and using 4 threads by 
default.
Before patch, loaded fsimage in 701422ms (include md5 calculation cost 82s)
After patch, loaded fsimage in 409760ms (include md5 calculation cost 82s)
{quote}
Excluding the md5 calculation, this is about a 2x improvement with 4x the 
threads. Not bad! I wonder what else we can do to further reduce the overhead.

> Improve fsimage load time by writing sub-sections to the fsimage index
> --
>
> Key: HDFS-14617
> URL: https://issues.apache.org/jira/browse/HDFS-14617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14617.001.patch, ParallelLoading.svg, 
> SerialLoading.svg, dirs-single.svg, flamegraph.parallel.svg, 
> flamegraph.serial.svg, inodes.svg
>
>
> Loading an fsimage is basically a single threaded process. The current 
> fsimage is written out in sections, eg iNode, iNode_Directory, Snapshots, 
> Snapshot_Diff etc. Then at the end of the file, an index is written that 
> contains the offset and length of each section. The image loader code uses 
> this index to initialize an input stream to read and process each section. It 
> is important that one section is fully loaded before another is started, as 
> the next section depends on the results of the previous one.
> What I would like to propose is the following:
> 1. When writing the image, we can optionally output sub_sections to the 
> index. That way, a given section would effectively be split into several 
> sections, eg:
> {code:java}
>inode_section offset 10 length 1000
>  inode_sub_section offset 10 length 500
>  inode_sub_section offset 510 length 500
>  
>inode_dir_section offset 1010 length 1000
>  inode_dir_sub_section offset 1010 length 500
>  inode_dir_sub_section offset 1010 length 500
> {code}
> Here you can see we still have the original section index, but then we also 
> have sub-section entries that cover the entire section. Then a processor can 
> either read the full section in serial, or read each sub-section in parallel.
> 2. In the Image Writer code, we should set a target number of sub-sections, 
> and then based on the total inodes in memory, it will create that many 
> sub-sections per major image section. I think the only sections worth doing 
> this for are inode, inode_reference, inode_dir and snapshot_diff. All others 
> tend to be fairly small in practice.
> 3. If there are under some threshold of inodes (eg 10M) then don't bother 
> with the sub-sections as a serial load only takes a few seconds at that scale.
> 4. The image loading code can then have a switch to enable 'parallel loading' 
> and a 'number of threads' where it uses the sub-sections, or if not enabled 
> falls back to the existing logic to read the entire section in serial.
> Working with a large image of 316M inodes and 35GB on disk, I have a proof of 
> concept of this change working, allowing just inode and inode_dir to be 
> loaded in parallel, but I believe inode_reference and snapshot_diff can be 
> made parallel with the same technique.
> Some benchmarks I have are as follows:
> {code:java}
> Threads   1 2 3 4 
> 
> inodes448   290   226   189 
> inode_dir 326   211   170   161 
> Total 927   651   535   488 (MD5 calculation about 100 seconds)
> {code}
> The above table shows the time in seconds to load the inode section and the 
> inode_directory section, and then the total load time of the image.
> With 4 threads using the above technique, we are able to more than halve the 
> load time of the two sections. With the patch in HDFS-13694 it would take a 
> further 100 seconds off the run time, going from 927 seconds to 388, which is 
> a significant improvement. Adding more threads beyond 4 has diminishing 
> returns as there are some synchronized points in the loading code to protect 
> the in memory structures.
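> A minimal sketch (hypothetical; not the actual FSImage loader code) of the
> loading side of this proposal: read each sub-section of a major section from
> its recorded offset on a fixed-size thread pool, and wait for all of them
> before moving on to the next section:
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
>
> class SubSection {
>   final long offset, length;
>   SubSection(long offset, long length) { this.offset = offset; this.length = length; }
> }
>
> class ParallelSectionLoader {
>   // Load every sub-section of one major section in parallel; the caller
>   // still loads major sections in order, preserving their dependencies.
>   static void loadSection(List<SubSection> subSections, int threads)
>       throws Exception {
>     ExecutorService pool = Executors.newFixedThreadPool(threads);
>     try {
>       List<Future<?>> pending = new ArrayList<>();
>       for (SubSection s : subSections) {
>         pending.add(pool.submit(() -> loadRange(s.offset, s.length)));
>       }
>       for (Future<?> f : pending) {
>         f.get(); // propagate any load failure
>       }
>     } finally {
>       pool.shutdown();
>     }
>   }
>
>   private static void loadRange(long offset, long length) {
>     // Open an input stream at 'offset' and process 'length' bytes of inodes.
>   }
> }
> {code}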



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902659#comment-16902659
 ] 

Hadoop QA commented on HDFS-14295:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} https://github.com/apache/hadoop/pull/497 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/497 |
| JIRA Issue | HDFS-14295 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-497/6/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Add Threadpool for DataTransfers
> 
>
> Key: HDFS-14295
> URL: https://issues.apache.org/jira/browse/HDFS-14295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14295.1.patch, HDFS-14295.10.patch, 
> HDFS-14295.2.patch, HDFS-14295.3.patch, HDFS-14295.4.patch, 
> HDFS-14295.5.patch, HDFS-14295.6.patch, HDFS-14295.7.patch, 
> HDFS-14295.8.patch, HDFS-14295.9.patch
>
>
> When a DataNode transfers a block, it spins up a new thread for each 
> transfer.  
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339]
>  and 
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022].
>    Instead, add the threads to a {{CachedThreadPool}} so that when their 
> threads complete the transfer, they can be re-used for another transfer. This 
> should save resources spent on creating and spinning up transfer threads.
> One thing I'll point out that's a bit off, which I address in this patch, ...
> There are two places in the code where a {{DataTransfer}} thread is started. 
> In [one 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339-L2341],
>  it's started in a default thread group. In [another 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022],
>  it's started in the 
> [dataXceiverServer|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L1164]
>  thread group.
> I do not think it's correct to include any of these threads in the 
> {{dataXceiverServer}} thread group. Anything submitted to the 
> {{dataXceiverServer}} should probably be tied to the 
> {{dfs.datanode.max.transfer.threads}} configuration, and neither of these 
> methods is. Instead, they should be submitted into the same thread pool with 
> its own thread group (probably the default thread group, unless someone 
> suggests otherwise), which is what I have included in this patch.
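
For illustration, a minimal sketch of the pooled approach described above. The
class name, thread naming, and daemon setting are assumptions for the example,
not the patch itself:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// A cached pool re-uses idle threads instead of spawning one per transfer.
// With no explicit ThreadGroup, the pool's threads land in the default group.
public class DataTransferPool {
  private final AtomicInteger counter = new AtomicInteger();
  private final ExecutorService pool = Executors.newCachedThreadPool(
      new ThreadFactory() {
        @Override
        public Thread newThread(Runnable r) {
          Thread t = new Thread(r, "DataTransfer-" + counter.incrementAndGet());
          t.setDaemon(true);
          return t;
        }
      });

  // Both call sites would submit here instead of new Thread(...).start().
  public void submitTransfer(Runnable dataTransfer) {
    pool.submit(dataTransfer);
  }
}
{code}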



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1925) ozonesecure acceptance test broken by HTTP auth requirement

2019-08-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902654#comment-16902654
 ] 

Hudson commented on HDDS-1925:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17061 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17061/])
HDDS-1925. ozonesecure acceptance test broken by HTTP auth requirement (xyao: 
rev ab6a5c9d07a50b49d696b983e1a1cd4f9ef2a44d)
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/webui.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/basic/basic.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/commonlib.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/commonawslib.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/test.sh
* (edit) hadoop-ozone/dist/src/main/compose/testlib.sh


> ozonesecure acceptance test broken by HTTP auth requirement
> ---
>
> Key: HDDS-1925
> URL: https://issues.apache.org/jira/browse/HDDS-1925
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Acceptance test is failing at {{ozonesecure}} with the following error from 
> {{jq}}:
> {noformat:title=https://github.com/elek/ozone-ci/blob/325779d34623061e27b80ade3b749210648086d1/byscane/byscane-nightly-ds7lx/acceptance/output.log#L2779}
> parse error: Invalid numeric literal at line 2, column 0
> {noformat}
> Example compose environments wait for datanodes to be up:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L71-L72}
>   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode="${datanode_count}"
>   wait_for_datanodes "$COMPOSE_FILE" "${datanode_count}"
> {code}
> The number of datanodes up is determined via HTTP query of JMX endpoint:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L44-L46}
>  #This line checks the number of HEALTHY datanodes registered in scm over 
> the
>  # jmx HTTP servlet
>  datanodes=$(docker-compose -f "${compose_file}" exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
> {code}
> The problem is that no authentication is performed before or during the 
> request, and unauthenticated access is no longer allowed since HDDS-1901:
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
> Error 401 Authentication required
> HTTP ERROR 401
> Problem accessing /jmx. Reason:
> Authentication required
> {code}
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value'
> parse error: Invalid numeric literal at line 2, column 0
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1865) Use "ozone.network.topology.aware.read" to control both RPC client and server side logic

2019-08-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902653#comment-16902653
 ] 

Hudson commented on HDDS-1865:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17061 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17061/])
HDDS-1865. Use "ozone.network.topology.aware.read" to control both RPC (xyao: 
rev 8f9245bc2d2771270488f151b1a41c656bdafc68)
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientWithRatis.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/ContainerPlacementPolicyFactory.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyArgs.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto


> Use "ozone.network.topology.aware.read" to control both RPC client and server 
> side logic 
> -
>
> Key: HDDS-1865
> URL: https://issues.apache.org/jira/browse/HDDS-1865
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1900) Remove UpdateBucket handler which supports add/remove Acl

2019-08-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902655#comment-16902655
 ] 

Hudson commented on HDDS-1900:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17061 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17061/])
HDDS-1900. Remove UpdateBucket handler which supports add/remove Acl. (github: 
rev 70b4617cfe69fcbde0dca88827b92505d0925c3d)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/BucketCommands.java
* (edit) hadoop-hdds/docs/content/shell/BucketCommands.md
* (edit) hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot
* (delete) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/UpdateBucketHandler.java


> Remove UpdateBucket handler which supports add/remove Acl
> -
>
> Key: HDDS-1900
> URL: https://issues.apache.org/jira/browse/HDDS-1900
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> This Jira is to remove the bucket update handler.
> To add or remove an ACL we should use ozone sh bucket addacl / ozone sh 
> bucket removeacl.
>  
> Otherwise, when security is enabled, the old bucket update handler uses 
> setBucketProperty, which checks ACL access for WRITE, whereas adding or 
> removing an ACL should check access for WRITE_ACL.
>  
> With both paths available, even a user who does not have WRITE_ACL can 
> still add/remove ACLs on a bucket.
>  
> This Jira is to clean up the old code and fix this security issue.
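
For illustration, a minimal sketch of the access-check distinction described
above. The checkAcls call mirrors the OzoneManager code quoted later in this
digest; the surrounding method body and fields are assumptions for the example:

{code:java}
// Hypothetical sketch: ACL mutations are gated on WRITE_ACL, not on the
// plain WRITE right that the old setBucketProperty path checked.
public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
  if (isAclEnabled) {
    // WRITE would let any writer change permissions; WRITE_ACL is stricter.
    checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
        obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
  }
  return bucketManager.addAcl(obj, acl);
}
{code}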



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290982&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290982
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 08/Aug/19 04:04
Start Date: 08/Aug/19 04:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1146: HDDS-1366. Add 
ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#issuecomment-519354016
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 82 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | +1 | mvninstall | 609 | trunk passed |
   | +1 | compile | 397 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 897 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 477 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 709 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 595 | the patch passed |
   | +1 | compile | 426 | the patch passed |
   | +1 | javac | 426 | the patch passed |
   | +1 | checkstyle | 86 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 833 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 106 | hadoop-ozone generated 7 new + 13 unchanged - 0 fixed 
= 20 total (was 13) |
   | +1 | findbugs | 706 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 369 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2032 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 8440 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1146/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1146 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ac00b7984441 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1146/3/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1146/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1146/3/testReport/ |
   | Max. process+thread count | 4088 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon-codegen hadoop-ozone/ozone-recon U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1146/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Work logged] (HDDS-1836) Change the default value of ratis leader election min timeout to a lower value

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1836?focusedWorklogId=290981&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290981
 ]

ASF GitHub Bot logged work on HDDS-1836:


Author: ASF GitHub Bot
Created on: 08/Aug/19 04:00
Start Date: 08/Aug/19 04:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1133: HDDS-1836. 
Change the default value of ratis leader election min timeout to a lower value
URL: https://github.com/apache/hadoop/pull/1133#issuecomment-519353455
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 62 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 351 | hadoop-ozone in trunk failed. |
   | +1 | compile | 337 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 799 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 428 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 618 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 546 | the patch passed |
   | +1 | compile | 379 | the patch passed |
   | +1 | javac | 379 | the patch passed |
   | +1 | checkstyle | 66 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 601 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 151 | the patch passed |
   | +1 | findbugs | 613 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 360 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2477 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 68 | The patch does not generate ASF License warnings. |
   | | | 7909 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1133 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux b762a0a4074d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/4/testReport/ |
   | Max. process+thread count | 5072 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1133/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290981)
Time Spent: 1h  (was: 50m)

> Change the default value of ratis l

[jira] [Work logged] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?focusedWorklogId=290977&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290977
 ]

ASF GitHub Bot logged work on HDDS-1200:


Author: ASF GitHub Bot
Created on: 08/Aug/19 03:53
Start Date: 08/Aug/19 03:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1154: [HDDS-1200] Add 
support for checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#issuecomment-519352255
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 617 | trunk passed |
   | +1 | compile | 412 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 918 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 478 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 685 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 554 | the patch passed |
   | +1 | compile | 386 | the patch passed |
   | +1 | javac | 386 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 658 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 336 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2005 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 8091 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1154/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1154 |
   | JIRA Issue | HDDS-1200 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 95e336b2f0ac 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1154/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1154/3/testReport/ |
   | Max. process+thread count | 5301 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1154/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290977)
Time Spent: 4h  (was: 3h 50m)

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Supratim Deka

[jira] [Commented] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902647#comment-16902647
 ] 

Hadoop QA commented on HDDS-1200:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  7m 
58s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
36s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 25s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
|   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
|   | hadoop.ozone.om.TestKeyManagerImpl |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1154/3/artifact/ou

[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=290976&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290976
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 08/Aug/19 03:48
Start Date: 08/Aug/19 03:48
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r311846099
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3029,24 +3044,39 @@ public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
    */
   @Override
   public boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
-    if (isAclEnabled) {
-      checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
-          obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
-    }
-    // TODO: Audit ACL operation.
-    switch (obj.getResourceType()) {
-    case VOLUME:
-      return volumeManager.removeAcl(obj, acl);
-    case BUCKET:
-      return bucketManager.removeAcl(obj, acl);
-    case KEY:
-      return keyManager.removeAcl(obj, acl);
-    case PREFIX:
-      return prefixManager.removeAcl(obj, acl);
+    Map<String, String> auditMap = obj.toAuditMap();
+    auditMap.put(OzoneConsts.ACLS, acl.toString());
 
 Review comment:
   @bharatviswa504 I actually spotted a typo because of your comment.
   removeAcl & addAcl will use `auditMap.put(OzoneConsts.ACL, acl.toString())`
   setAcl will use `auditMap.put(OzoneConsts.ACLS, acls.toString())`
   getAcl will not have such a line.
   The difference between the first two and setAcl is that setAcl can have 
more than one ACL in question, hence 'ACLS' to indicate multiple.
   Let me know if you think we should still move this line to a common method 
and avoid such detailed classification.
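
For illustration, a minimal sketch of the key distinction discussed above. The
constant names follow the comment; the surrounding variables (obj, acl, acls)
are assumptions for the example:

{code:java}
// removeAcl / addAcl audit exactly one ACL, under the singular key:
Map<String, String> auditMap = obj.toAuditMap();
auditMap.put(OzoneConsts.ACL, acl.toString());

// setAcl can carry several ACLs (a List<OzoneAcl>), hence the plural key:
auditMap.put(OzoneConsts.ACLS, acls.toString());
{code}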
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290976)
Time Spent: 1h 40m  (was: 1.5h)

> Audit xxxAcl methods in OzoneManager
> 
>
> Key: HDDS-1768
> URL: https://issues.apache.org/jira/browse/HDDS-1768
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Audit permission failures from authorizer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14455) Fix typo in HAState.java

2019-08-07 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902639#comment-16902639
 ] 

Wei-Chiu Chuang commented on HDFS-14455:


Duplicates HDFS-14398

> Fix typo in HAState.java
> 
>
> Key: HDFS-14455
> URL: https://issues.apache.org/jira/browse/HDFS-14455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: hunshenshi
>Priority: Major
>
> There are some typo in HAState
> destructuve -> destructive
> Aleady -> Already
> Transtion -> Transition



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14696) Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a Util class)

2019-08-07 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902636#comment-16902636
 ] 

Wei-Chiu Chuang commented on HDFS-14696:


+1

bq. (I can't run them directly in IntelliJ due to `webapps/journal not found in 
CLASSPATH`):
Try importing from the same repo again. That sometimes cleans up weird stuff 
and works for me.

> Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a 
> Util class)
> 
>
> Key: HDFS-14696
> URL: https://issues.apache.org/jira/browse/HDFS-14696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14696-branch-2.003.patch
>
>
> Backporting HDFS-11273 Move TransferFsImage#doGetUrl function to a Util class 
> to branch-2.
> To avoid confusion with branch-2 patches in HDFS-11273, patch revision number 
> will continue from 003.
> *HDFS-14696-branch-2.003.patch* is the same as 
> *HDFS-11273-branch-2.003.patch*.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=290973&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290973
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 08/Aug/19 03:43
Start Date: 08/Aug/19 03:43
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r311845409
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
 ##
 @@ -0,0 +1,422 @@
+package org.apache.hadoop.ozone.client.rpc;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditEventStatus;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.runners.MethodSorters;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.security.acl.OzoneObj.ResourceType.VOLUME;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * This class is to test audit logs for xxxACL APIs of Ozone Client.
+ */
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
+public class TestOzoneRpcClientForAclAuditLog extends
+TestOzoneRpcClientAbstract {
+
+  private static UserGroupInformation ugi;
+  private static final OzoneAcl USER_ACL =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
 
 Review comment:
   @bharatviswa504 I only need the two tests I have added here. The only reason 
I extended the base class is to leverage the setup(). Happy to make it a 
standalone test class.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290973)
Time Spent: 1.5h  (was: 1h 20m)

> Audit xxxAcl methods in OzoneManager
> 
>
> Key: HDDS-1768
> URL: https://issues.apache.org/jira/browse/HDDS-1768
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Audit permission failures from authorizer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1740) Handle Failure to Update Ozone Container YAML

2019-08-07 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka resolved HDDS-1740.
-
Resolution: Not A Problem

On the Datanode, container state changes are driven through

KeyValueContainer.updateContainerData()

which always resets the in-memory state of the container to the previous state 
if the update to the container YAML hits any exception. Also, the container 
YAML is sync-flushed to persistent storage, as implemented in:

ContainerDataYaml.createContainerFile()

So I am marking this as not a problem.
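
For illustration, a minimal sketch of the rollback behaviour described above.
The method shape, the String-typed state, and the writeYamlSync helper are
simplifying assumptions for the example, not the real API:

{code:java}
// Hypothetical sketch (types simplified): the in-memory state only advances
// if the container file is durably rewritten first.
void updateState(ContainerData data, String newState, File yamlFile)
    throws IOException {
  String oldState = data.getState();
  try {
    data.setState(newState);
    writeYamlSync(yamlFile, data);   // assumed sync-flush helper
  } catch (IOException ex) {
    data.setState(oldState);         // roll back the in-memory state
    throw ex;
  }
}
{code}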

 

> Handle Failure to Update Ozone Container YAML
> -
>
> Key: HDDS-1740
> URL: https://issues.apache.org/jira/browse/HDDS-1740
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>
> Ensure consistent state in-memory and in the persistent YAML file for the 
> Container.
> If an update to the YAML fails, then the in-memory state also does not change.
> This ensures that in every container report, the SCM continues to see that 
> the specific container is still in the old state, and this triggers a retry 
> of the state change operation from the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14478) Add libhdfs APIs for openFile

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902633#comment-16902633
 ] 

Hadoop QA commented on HDFS-14478:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
6s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/7/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/955 |
| JIRA Issue | HDFS-14478 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit |
| uname | Linux baba5d1972ff 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 70b4617 |
| Default Java | 1.8.0_212 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/7/testReport/ |
| Max. process+thread count | 413 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/7/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Add libhdfs APIs for openFile
> -
>
> Key: HDFS-14478
> URL: https://issues.apache.org/jira/browse/HDFS-14478
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> HADOOP-15229 added a "FileSystem builder-based openFile() API" that allows 
> specifying configuration values for opening files (similar to HADOOP-14365).
> Support for {{openFile}} will be a little tricky as i

[jira] [Commented] (HDFS-13571) Dead DataNode Detector

2019-08-07 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902631#comment-16902631
 ] 

Lisheng Sun commented on HDFS-13571:


Sorry [~linyiqun], I have been working on this JIRA. Recently there has been a 
lot going on at the company, so there is some delay. I will update this JIRA 
as soon as possible.

> Dead DataNode Detector
> --
>
> Key: HDFS-13571
> URL: https://issues.apache.org/jira/browse/HDFS-13571
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.4.0, 2.6.0, 3.0.2
>Reporter: Gang Xie
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-13571-2.6.diff, node status machine.png
>
>
> Currently, the information about a dead datanode in DFSInputStream is stored 
> locally, so it cannot be shared among the input streams of the same 
> DFSClient. In our production env, some datanodes die every day from 
> different causes. Today, even after the first input stream blocks and 
> detects a dead node, it cannot share this information with the others in the 
> same DFSClient, so the other input streams are still blocked by the dead 
> node for some time, which can cause bad service latency.
> To eliminate this impact from dead datanodes, we designed a dead datanode 
> detector, which detects the dead ones in advance and shares this information 
> among all the input streams in the same client. This improvement has been 
> online for some months and works fine.  So, we decided to port it to 3.0 
> (the versions used in our production env are 2.4 and 2.6).
> I will do the porting work and upload the code later.
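
For illustration, a minimal sketch of the sharing idea described above. The
class and method names are assumptions for the example, not the actual patch:

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical client-level detector: one shared set per DFSClient, so a
// node found dead by one input stream is skipped by all the others.
public class DeadNodeDetector {
  private final Set<String> deadNodes = ConcurrentHashMap.newKeySet();

  // Called by the first stream that times out against a datanode.
  public void reportDeadNode(String datanodeId) {
    deadNodes.add(datanodeId);
  }

  // Every stream consults the shared set before choosing a replica.
  public boolean isDead(String datanodeId) {
    return deadNodes.contains(datanodeId);
  }

  // A background probe can remove nodes that come back.
  public void clear(String datanodeId) {
    deadNodes.remove(datanodeId);
  }
}
{code}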



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14455) Fix typo in HAState.java

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902632#comment-16902632
 ] 

Hadoop QA commented on HDFS-14455:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 11s{color} 
| {color:red} https://github.com/apache/hadoop/pull/764 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/764 |
| JIRA Issue | HDFS-14455 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-764/5/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Fix typo in HAState.java
> 
>
> Key: HDFS-14455
> URL: https://issues.apache.org/jira/browse/HDFS-14455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: hunshenshi
>Priority: Major
>
> There are some typo in HAState
> destructuve -> destructive
> Aleady -> Already
> Transtion -> Transition



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?focusedWorklogId=290972&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290972
 ]

ASF GitHub Bot logged work on HDDS-1659:


Author: ASF GitHub Bot
Created on: 08/Aug/19 03:33
Start Date: 08/Aug/19 03:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #950: HDDS-1659. Define 
the process to add proposal/design docs to the Ozone subproject
URL: https://github.com/apache/hadoop/pull/950#issuecomment-519349303
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 599 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1425 | branch has no errors when building and testing 
our client artifacts. |
   | -0 | patch | 1530 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 572 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 633 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 2905 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-950/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/950 |
   | Optional Tests | dupname asflicense mvnsite |
   | uname | Linux 389772af3438 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-950/5/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290972)
Time Spent: 3h 10m  (was: 3h)

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> We think that it would be more effective to collect all the design docs in 
> one place and make it easier for the community to review them.
> We propose to follow an approach where the proposals are committed to the 
> hadoop-hdds/docs project, so the review can be the same as the review of a PR.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=290970&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290970
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 08/Aug/19 03:23
Start Date: 08/Aug/19 03:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-519347577
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 78 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 640 | trunk passed |
   | +1 | compile | 374 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 981 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | trunk passed |
   | 0 | spotbugs | 454 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 671 | trunk passed |
   | -0 | patch | 494 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 630 | the patch passed |
   | +1 | compile | 395 | the patch passed |
   | +1 | cc | 395 | the patch passed |
   | +1 | javac | 395 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | the patch passed |
   | +1 | findbugs | 752 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 374 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2127 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 8586 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux e3d3a3a29049 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/14/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/14/testReport/ |
   | Max. process+thread count | 5345 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/14/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290970)
Time Spent: 5h 50m  (was: 5h 40m)

> Support Bucket ACL operations fo

[jira] [Commented] (HDFS-13677) Dynamic refresh Disk configuration results in overwriting VolumeMap

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902624#comment-16902624
 ] 

Hadoop QA commented on HDFS-13677:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 11s{color} 
| {color:red} https://github.com/apache/hadoop/pull/780 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/780 |
| JIRA Issue | HDFS-13677 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-780/7/console |
| versions | git=2.7.4 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Dynamic refresh Disk configuration results in overwriting VolumeMap
> ---
>
> Key: HDFS-13677
> URL: https://issues.apache.org/jira/browse/HDFS-13677
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Blocker
> Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-13677-001.patch, HDFS-13677-002-2.9-branch.patch, 
> HDFS-13677-002.patch, image-2018-06-14-13-05-54-354.png, 
> image-2018-06-14-13-10-24-032.png
>
>
> When I added a new disk by dynamically refreshing the configuration, a 
> "FileNotFound while finding block" exception occurred.
>  
> The steps are as follows:
> 1. Change the hdfs-site.xml of the DataNode to add a new disk.
> 2. Refresh the configuration with "./bin/hdfs dfsadmin -reconfig datanode 
> :50020 start"
>  
> The error is like:
> ```
> VolumeScannerThread(/media/disk5/hdfs/dn): FileNotFound while finding block 
> BP-233501496-*.*.*.*-1514185698256:blk_1620868560_547245090 on volume 
> /media/disk5/hdfs/dn
> org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not 
> found for BP-1997955181-*.*.*.*-1514186468560:blk_1090885868_17145082
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:471)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:240)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:553)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:148)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:254)
>  at java.lang.Thread.run(Thread.java:748)
> ```
> I added some logs for confirmation, as follows:
> Log Code like:
> !image-2018-06-14-13-05-54-354.png!
> And the result is like:
> !image-2018-06-14-13-10-24-032.png!  
> The size of the 'VolumeMap' has been reduced, and we found that the 
> 'VolumeMap' is overwritten with only the new disk's blocks by the method 
> 'ReplicaMap.addAll(ReplicaMap other)'.
>  
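
For illustration, a minimal sketch of the overwrite described above, assuming a 
simplified volume map keyed by block-pool id (all names here are hypothetical, 
not the actual HDFS source):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical simplification: block-pool id -> (block id -> volume path).
class VolumeMapSketch {
  private final Map<String, Map<Long, String>> map = new HashMap<>();

  // Overwriting merge: putAll() replaces the whole per-block-pool inner map,
  // so replicas already registered on the old disks are dropped.
  void addAllOverwriting(VolumeMapSketch other) {
    map.putAll(other.map);
  }

  // Merging variant: combine block entries per block pool instead of
  // replacing the inner map, so existing replicas survive a disk refresh.
  void addAllMerging(VolumeMapSketch other) {
    other.map.forEach((bpid, blocks) ->
        map.computeIfAbsent(bpid, k -> new HashMap<>()).putAll(blocks));
  }
}
{code}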



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1871) Remove anti-affinity rules from k8s minkube example

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1871?focusedWorklogId=290968&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290968
 ]

ASF GitHub Bot logged work on HDDS-1871:


Author: ASF GitHub Bot
Created on: 08/Aug/19 03:16
Start Date: 08/Aug/19 03:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1180: HDDS-1871. 
Remove anti-affinity rules from k8s minkube example
URL: https://github.com/apache/hadoop/pull/1180#issuecomment-519346521
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 601 | trunk passed |
   | +1 | compile | 360 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 706 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 539 | the patch passed |
   | +1 | compile | 367 | the patch passed |
   | +1 | javac | 367 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 633 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 308 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2207 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 6351 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestKeyPurging |
   |   | hadoop.ozone.om.TestScmSafeMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1180/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1180 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient shellcheck shelldocs yamllint |
   | uname | Linux 80c095ca10ed 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1180/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1180/3/testReport/ |
   | Max. process+thread count | 4768 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1180/3/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290968)
Time Spent: 40m  (was: 0.5h)

> Remove anti-affinity rules from k8s minkube example
> ---
>
> Key: HDDS-1871
>

[jira] [Comment Edited] (HDDS-1926) The new caching layer is used for old OM requests but not updated

2019-08-07 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902251#comment-16902251
 ] 

Bharat Viswanadham edited comment on HDDS-1926 at 8/8/19 3:16 AM:
--

As discussed offline with [~arp] and [~elek], we shall use ratisEnabled and 
define a cache policy for the bucket and volume tables. As a quick solution, 
we are going with this approach.

We also discussed that we can have a CachedTypedTable which extends Table and 
overloads put() to take a transactionIndex.


was (Author: bharatviswa):
As discussed offline with [~arp] and [~elek]

We shall use ratisEnabled and define cache policy to bucket and volume table. 
As a quick solution, we are going with this approach.

 

And another discussion we had is We can have CachedTypedTable which extends 
Table and overload put to take transactionIndex.

> The new caching layer is used for old OM requests but not updated
> -
>
> Key: HDDS-1926
> URL: https://issues.apache.org/jira/browse/HDDS-1926
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: om
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> HDDS-1499 introduced a new caching layer together with a double-buffer based 
> db writer to support OM HA.
> TLDR: I think the caching layer is not updated for new volume creation. And 
> (slightly related to this problem) I suggest separating the TypedTable and 
> the caching layer.
> ## How to reproduce the problem?
> 1. Start a docker compose cluster
> 2. Create one volume (let's say `/vol1`)
> 3. Restart the om (!)
> 4. Try to create an _other_ volume twice!
> ```
> bash-4.2$ ozone sh volume create /vol2
> 2019-08-07 12:29:47 INFO  RpcClient:288 - Creating Volume: vol2, with hadoop 
> as owner.
> bash-4.2$ ozone sh volume create /vol2
> 2019-08-07 12:29:50 INFO  RpcClient:288 - Creating Volume: vol2, with hadoop 
> as owner.
> ```
> Expected behavior is an error:
> {code}
> bash-4.2$ ozone sh volume create /vol1
> 2019-08-07 09:48:39 INFO  RpcClient:288 - Creating Volume: vol1, with hadoop 
> as owner.
> bash-4.2$ ozone sh volume create /vol1
> 2019-08-07 09:48:42 INFO  RpcClient:288 - Creating Volume: vol1, with hadoop 
> as owner.
> VOLUME_ALREADY_EXISTS 
> {code}
> The problem is that the new cache is used even for the old code path 
> (TypedTable):
> {code}
>   @Override
>   public VALUE get(KEY key) throws IOException {
>     // Here the metadata lock will guarantee that cache is not updated for
>     // same key during get key.
>     CacheResult<CacheValue<VALUE>> cacheResult =
>         cache.lookup(new CacheKey<>(key));
>     if (cacheResult.getCacheStatus() == EXISTS) {
>       return cacheResult.getValue().getCacheValue();
>     } else if (cacheResult.getCacheStatus() == NOT_EXIST) {
>       return null;
>     } else {
>       return getFromTable(key);
>     }
>   }
> {code}
> For the volume table, after the FIRST start it always returns 
> `getFromTable(key)` due to the condition in `TableCacheImpl.lookup`:
> {code}
>   public CacheResult<CACHEVALUE> lookup(CACHEKEY cachekey) {
>     if (cache.size() == 0) {
>       return new CacheResult<>(CacheResult.CacheStatus.MAY_EXIST,
>           null);
>     }
> {code}
> But after a restart the cache is pre-loaded by the TypedTable constructor. 
> After the restart, the real caching logic will be used (as cache.size() > 0), 
> which causes a problem because the cache is NOT updated from the old code path.
> An additional problem is that the cache is turned on for all the metadata 
> tables even if the cache is not required... 
> ## Proposed solution
> As I commented at HDDS-1499, this caching layer is not a "traditional cache". 
> It's not updated during the typedTable.put() call but is updated by a separate 
> component during the double-buffer flush.
> I would suggest removing the cache-related methods from TypedTable (moving 
> them to a separate implementation). This kind of caching can be independent 
> of the TypedTable implementation, and we can continue to use the simple 
> TypedTable everywhere we don't need any kind of caching.
> For caching we can use a separate object, which would make it more visible 
> that the cache must always be updated manually. This separate caching utility 
> may include a reference to the original TypedTable/Table. With this approach 
> we can separate the different responsibilities but provide the same 
> functionality.
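
A minimal sketch of the separation proposed above: a caching wrapper that holds 
a reference to the plain Table and is updated only manually (for example by the 
double-buffer flush). All names and signatures below are assumptions for 
illustration, not the actual HDDS code:

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

class CachedTable<K, V> {
  // The plain, cache-free table (hypothetical minimal interface).
  interface Table<KK, VV> { VV get(KK key) throws IOException; }

  private final Table<K, V> table;
  // Optional.empty() marks a key known to be deleted (NOT_EXIST);
  // a missing entry means MAY_EXIST and falls through to the table.
  private final Map<K, Optional<V>> cache = new ConcurrentHashMap<>();

  CachedTable(Table<K, V> table) { this.table = table; }

  V get(K key) throws IOException {
    Optional<V> cached = cache.get(key);
    if (cached != null) {
      return cached.orElse(null);       // EXISTS or NOT_EXIST
    }
    return table.get(key);              // MAY_EXIST: fall back to the DB
  }

  // Updated explicitly, e.g. by the double-buffer flush, never by a table
  // put(); keeping the manual update visible is the point of the split.
  void updateCache(K key, V valueOrNull) {
    cache.put(key, Optional.ofNullable(valueOrNull));
  }
}
{code}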



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional 

[jira] [Commented] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-07 Thread huhaiyang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902620#comment-16902620
 ] 

huhaiyang commented on HDFS-14204:
--

hi [~vagarychen]

Thanks for your reply!
We'll keep an eye on that.

> Backport HDFS-12943 to branch-2
> ---
>
> Key: HDFS-14204
> URL: https://issues.apache.org/jira/browse/HDFS-14204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14204-branch-2.001.patch, 
> HDFS-14204-branch-2.002.patch, HDFS-14204-branch-2.003.patch, 
> HDFS-14204-branch-2.004.patch, HDFS-14204-branch-2.005.patch, 
> HDFS-14204-branch-2.006.patch
>
>
> Currently, consistent read from standby feature (HDFS-12943) is only in trunk 
> (branch-3). This JIRA aims to backport the feature to branch-2.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1879) Support multiple excluded scopes when choosing datanodes in NetworkTopology

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1879?focusedWorklogId=290964&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290964
 ]

ASF GitHub Bot logged work on HDDS-1879:


Author: ASF GitHub Bot
Created on: 08/Aug/19 03:05
Start Date: 08/Aug/19 03:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1194: HDDS-1879.  
Support multiple excluded scopes when choosing datanodes in NetworkTopology
URL: https://github.com/apache/hadoop/pull/1194#issuecomment-519344819
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for branch |
   | +1 | mvninstall | 588 | trunk passed |
   | +1 | compile | 366 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 819 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 412 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 608 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 527 | the patch passed |
   | +1 | compile | 364 | the patch passed |
   | +1 | javac | 364 | the patch passed |
   | -0 | checkstyle | 34 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 609 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | the patch passed |
   | +1 | findbugs | 627 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 289 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1644 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7132 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1194 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bbef7009df19 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/7/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/7/testReport/ |
   | Max. process+thread count | 5283 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290964)
Time Spent: 20m  (was: 10m)

> Support multiple excluded scopes when choosing datanodes in NetworkTopology
> ---
>
> Key: HDD

[jira] [Work logged] (HDDS-1886) Use ArrayList#clear to address audit failure scenario

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1886?focusedWorklogId=290962&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290962
 ]

ASF GitHub Bot logged work on HDDS-1886:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:54
Start Date: 08/Aug/19 02:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1205: HDDS-1886. Use 
ArrayList#clear to address audit failure scenario
URL: https://github.com/apache/hadoop/pull/1205#issuecomment-519342897
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 635 | trunk passed |
   | +1 | compile | 378 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 847 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 435 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 625 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 574 | the patch passed |
   | +1 | compile | 366 | the patch passed |
   | +1 | javac | 366 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 641 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 282 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1995 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7698 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestScmSafeMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1205/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1205 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d459ff4e7ee0 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1205/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1205/2/testReport/ |
   | Max. process+thread count | 5402 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1205/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290962)
Time Spent: 1h 20m  (was: 1h 10m)

> Use ArrayList#clear to address audit failure scenario
> -
>
> Key: HDDS-1886
> URL: https://issues.apache.org/jira/browse/HDDS-1886
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-re

[jira] [Commented] (HDFS-13571) Dead DataNode Detector

2019-08-07 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902601#comment-16902601
 ] 

Yiqun Lin commented on HDFS-13571:
--

What's the status of this JIRA, [~leosun08]? I didn't see any updates in the 
subtasks, :).

> Dead DataNode Detector
> --
>
> Key: HDFS-13571
> URL: https://issues.apache.org/jira/browse/HDFS-13571
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.4.0, 2.6.0, 3.0.2
>Reporter: Gang Xie
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-13571-2.6.diff, node status machine.png
>
>
> Currently, the information about dead datanodes in DFSInputStream is stored 
> locally, so it cannot be shared among the input streams of the same DFSClient. 
> In our production env, datanodes die every day for various reasons. Because 
> the first input stream that blocks on a dead node cannot share what it detects 
> with the others in the same DFSClient, the other input streams are still 
> blocked by the dead node for some time, which can cause bad service latency.
> To eliminate this impact of dead datanodes, we designed a dead datanode 
> detector, which detects the dead ones in advance and shares this information 
> among all the input streams in the same client. This improvement has been 
> online for some months and works fine. So, we decided to port it to 3.0 (the 
> versions used in our production env are 2.4 and 2.6).
> I will do the porting work and upload the code later.
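
As a rough sketch of the idea, a dead-node registry shared by all input streams 
of one DFSClient might look like this (all names are hypothetical, not the 
patch itself):

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// One instance per DFSClient, shared by all of its DFSInputStreams.
class DeadNodeRegistrySketch {
  private final Set<String> deadNodes = ConcurrentHashMap.newKeySet();

  // Called by the first input stream that blocks on a dead datanode.
  void reportDead(String datanodeId) {
    deadNodes.add(datanodeId);
  }

  // Other input streams consult this before reading, instead of
  // discovering the dead node by blocking on it themselves.
  boolean isDead(String datanodeId) {
    return deadNodes.contains(datanodeId);
  }

  // A background probe can remove nodes that come back, so they are
  // not excluded forever.
  void markAlive(String datanodeId) {
    deadNodes.remove(datanodeId);
  }
}
{code}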



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-08-07 Thread Wang, Xinglong (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902599#comment-16902599
 ] 

Wang, Xinglong commented on HDFS-14195:
---

HDFS-14195.010.patch to address checkstyle in 
https://builds.apache.org/job/PreCommit-HDFS-Build/27439/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt.
 

> OIV: print out storage policy id in oiv Delimited output
> 
>
> Key: HDFS-14195
> URL: https://issues.apache.org/jira/browse/HDFS-14195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HDFS-14195.001.patch, HDFS-14195.002.patch, 
> HDFS-14195.003.patch, HDFS-14195.004.patch, HDFS-14195.005.patch, 
> HDFS-14195.006.patch, HDFS-14195.007.patch, HDFS-14195.008.patch, 
> HDFS-14195.009.patch, HDFS-14195.010.patch
>
>
> There is currently no method to list all folders and files with a specified 
> storage policy, like the ALL_SSD type, via the command line.
> Adding the storage policy id to the oiv output will help oiv post-analysis to 
> get an overview of all folders/files with a specified storage policy and to 
> apply internal regulations based on this information.
>  
> Currently, for PBImageXmlWriter.java, HDFS-9835 already added a function to 
> print out xattrs, which include the storage policy.
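
For illustration, appending the storage policy id as one extra column of the 
Delimited output could look roughly like this (method and field names are 
assumptions, not the actual patch; the real writer is 
PBImageDelimitedTextWriter):

{code:java}
// Hypothetical sketch of emitting one row of Delimited output.
final class DelimitedRowSketch {
  // Existing columns first, then the new storage-policy column at the end,
  // so post-analysis can filter rows by policy id.
  static String buildRow(String path, long fileSize, byte storagePolicyId) {
    return String.join("\t",
        path,
        Long.toString(fileSize),
        Byte.toString(storagePolicyId));
  }

  public static void main(String[] args) {
    // Example: a file under the (assumed) ALL_SSD policy id 12.
    System.out.println(buildRow("/data/hot/file1", 1024L, (byte) 12));
  }
}
{code}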



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-08-07 Thread Wang, Xinglong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang, Xinglong updated HDFS-14195:
--
Attachment: HDFS-14195.010.patch
Status: Patch Available  (was: Open)

> OIV: print out storage policy id in oiv Delimited output
> 
>
> Key: HDFS-14195
> URL: https://issues.apache.org/jira/browse/HDFS-14195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HDFS-14195.001.patch, HDFS-14195.002.patch, 
> HDFS-14195.003.patch, HDFS-14195.004.patch, HDFS-14195.005.patch, 
> HDFS-14195.006.patch, HDFS-14195.007.patch, HDFS-14195.008.patch, 
> HDFS-14195.009.patch, HDFS-14195.010.patch
>
>
> There is currently no method to list all folders and files with a specified 
> storage policy, like the ALL_SSD type, via the command line.
> Adding the storage policy id to the oiv output will help oiv post-analysis to 
> get an overview of all folders/files with a specified storage policy and to 
> apply internal regulations based on this information.
>  
> Currently, for PBImageXmlWriter.java, HDFS-9835 already added a function to 
> print out xattrs, which include the storage policy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1900) Remove UpdateBucket handler which supports add/remove Acl

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1900?focusedWorklogId=290953&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290953
 ]

ASF GitHub Bot logged work on HDDS-1900:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:38
Start Date: 08/Aug/19 02:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1219: HDDS-1900. 
Remove UpdateBucket handler which supports add/remove Acl.
URL: https://github.com/apache/hadoop/pull/1219#issuecomment-519339817
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 169 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 112 | Maven dependency ordering for branch |
   | +1 | mvninstall | 813 | trunk passed |
   | +1 | compile | 491 | trunk passed |
   | +1 | checkstyle | 92 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1108 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 217 | trunk passed |
   | 0 | spotbugs | 568 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 850 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for patch |
   | +1 | mvninstall | 663 | the patch passed |
   | +1 | compile | 425 | the patch passed |
   | +1 | javac | 425 | the patch passed |
   | +1 | checkstyle | 99 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 839 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 204 | the patch passed |
   | +1 | findbugs | 774 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 406 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2269 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 9839 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1219/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1219 |
   | Optional Tests | dupname asflicense mvnsite compile javac javadoc 
mvninstall unit shadedclient findbugs checkstyle |
   | uname | Linux c69bf2636810 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1219/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1219/6/testReport/ |
   | Max. process+thread count | 4257 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1219/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290953)
Time Spent: 2h 50m  (was: 2h 40m)

> Remove UpdateBucket handler which supports add/remove Acl
> -
>
> Key: HDDS-1900
> URL: https://issues.apache.org/jira

[jira] [Updated] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-08-07 Thread Wang, Xinglong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang, Xinglong updated HDFS-14195:
--
Status: Open  (was: Patch Available)

> OIV: print out storage policy id in oiv Delimited output
> 
>
> Key: HDFS-14195
> URL: https://issues.apache.org/jira/browse/HDFS-14195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HDFS-14195.001.patch, HDFS-14195.002.patch, 
> HDFS-14195.003.patch, HDFS-14195.004.patch, HDFS-14195.005.patch, 
> HDFS-14195.006.patch, HDFS-14195.007.patch, HDFS-14195.008.patch, 
> HDFS-14195.009.patch
>
>
> There is currently no method to list all folders and files with a specified 
> storage policy, like the ALL_SSD type, via the command line.
> Adding the storage policy id to the oiv output will help oiv post-analysis to 
> get an overview of all folders/files with a specified storage policy and to 
> apply internal regulations based on this information.
>  
> Currently, for PBImageXmlWriter.java, HDFS-9835 already added a function to 
> print out xattrs, which include the storage policy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=290954&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290954
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:38
Start Date: 08/Aug/19 02:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1204: HDDS-1768. Audit 
xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#issuecomment-519339871
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 102 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 591 | trunk passed |
   | +1 | compile | 378 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 955 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 190 | trunk passed |
   | 0 | spotbugs | 450 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 654 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | -1 | mvninstall | 311 | hadoop-ozone in the patch failed. |
   | -1 | compile | 256 | hadoop-ozone in the patch failed. |
   | -1 | javac | 256 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 719 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | the patch passed |
   | -1 | findbugs | 416 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 359 | hadoop-hdds in the patch failed. |
   | -1 | unit | 44 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 5993 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1204 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 99deb6b4f21e 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/2/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/2/testReport/ |
   | Max. process+thread count | 358 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1204/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290954)
Time Spent: 1h 20m  (was: 1h 10m)

> Audit xxxAcl 

[jira] [Work logged] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?focusedWorklogId=290955&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290955
 ]

ASF GitHub Bot logged work on HDDS-1829:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:38
Start Date: 08/Aug/19 02:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1187: HDDS-1829 On OM 
reload/restart OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1187#issuecomment-519339889
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 68 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | +1 | mvninstall | 616 | trunk passed |
   | +1 | compile | 362 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 851 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 440 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 640 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 543 | the patch passed |
   | +1 | compile | 351 | the patch passed |
   | +1 | javac | 351 | the patch passed |
   | +1 | checkstyle | 67 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 634 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 144 | the patch passed |
   | -1 | findbugs | 109 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 220 | hadoop-hdds in the patch failed. |
   | -1 | unit | 104 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 5383 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1187/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1187 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6c09a6f02e3b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_222 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1187/2/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1187/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1187/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1187/2/testReport/ |
   | Max. process+thread count | 415 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1187/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290955)
Time Spent: 4h 50m  (was: 4h 40m)

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request

[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=290946&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290946
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:22
Start Date: 08/Aug/19 02:22
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r311832848
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3029,24 +3044,39 @@ public boolean addAcl(OzoneObj obj, OzoneAcl acl) 
throws IOException {
*/
   @Override
   public boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
-    if(isAclEnabled) {
-      checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
-          obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
-    }
-    // TODO: Audit ACL operation.
-    switch (obj.getResourceType()) {
-    case VOLUME:
-      return volumeManager.removeAcl(obj, acl);
-    case BUCKET:
-      return bucketManager.removeAcl(obj, acl);
-    case KEY:
-      return keyManager.removeAcl(obj, acl);
-    case PREFIX:
-      return prefixManager.removeAcl(obj, acl);
+    Map<String, String> auditMap = obj.toAuditMap();
+    auditMap.put(OzoneConsts.ACLS, acl.toString());
 
 Review comment:
   We can move this audit log to the common method.
   As this is common code for all ACL requests.
   
   `auditAcl(OzoneObj ozoneObj, OzoneAcl ozoneAcl, OMAction omAction, Exception 
exception)`
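
   A sketch of what that shared helper could look like, sitting inside 
OzoneManager next to the ACL methods (the signature comes from the comment 
above; the body, the AUDIT logger, and the buildAuditMessageForSuccess/Failure 
helpers are assumptions about the surrounding code):

{code:java}
private void auditAcl(OzoneObj ozoneObj, OzoneAcl ozoneAcl,
    OMAction omAction, Exception exception) {
  // Common audit-map construction shared by all ACL requests.
  Map<String, String> auditMap = ozoneObj.toAuditMap();
  auditMap.put(OzoneConsts.ACLS, ozoneAcl.toString());
  if (exception == null) {
    AUDIT.logWriteSuccess(buildAuditMessageForSuccess(omAction, auditMap));
  } else {
    AUDIT.logWriteFailure(
        buildAuditMessageForFailure(omAction, auditMap, exception));
  }
}
{code}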

 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290946)
Time Spent: 1h 10m  (was: 1h)

> Audit xxxAcl methods in OzoneManager
> 
>
> Key: HDDS-1768
> URL: https://issues.apache.org/jira/browse/HDDS-1768
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Audit permission failures from authorizer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1725) pv-test example to test csi is not working

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1725?focusedWorklogId=290944&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290944
 ]

ASF GitHub Bot logged work on HDDS-1725:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:18
Start Date: 08/Aug/19 02:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1070: HDDS-1725. 
pv-test example to test csi is not working
URL: https://github.com/apache/hadoop/pull/1070#issuecomment-519336104
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 9 | https://github.com/apache/hadoop/pull/1070 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1070 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1070/5/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290944)
Time Spent: 1h 20m  (was: 1h 10m)

> pv-test example to test csi is not working
> --
>
> Key: HDDS-1725
> URL: https://issues.apache.org/jira/browse/HDDS-1725
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ratish Maruthiyodan
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> [~rmaruthiyodan] reported two problems regarding the pv-test example in the 
> csi examples folder.
> The pv-test folder contains an example nginx deployment which can use an ozone 
> PVC/PV to publish the content of a folder via http.
> Two problems are identified:
>  * The label-based matching filter of the service doesn't point to the nginx 
> deployment
>  * The configmap mounting is missing from the nginx deployment



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=290942&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290942
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:17
Start Date: 08/Aug/19 02:17
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r311831941
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
 ##
 @@ -0,0 +1,422 @@
+package org.apache.hadoop.ozone.client.rpc;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditEventStatus;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.runners.MethodSorters;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.security.acl.OzoneObj.ResourceType.VOLUME;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * This class is to test audit logs for xxxACL APIs of Ozone Client.
+ */
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
+public class TestOzoneRpcClientForAclAuditLog extends
+TestOzoneRpcClientAbstract {
+
+  private static UserGroupInformation ugi;
+  private static final OzoneAcl USER_ACL =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
 
 Review comment:
If we don't need any other methods for testing, can we make this a new 
separate test class?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290942)
Time Spent: 1h  (was: 50m)

> Audit xxxAcl methods in OzoneManager
> 
>
> Key: HDDS-1768
> URL: https://issues.apache.org/jira/browse/HDDS-1768
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Audit permission failures from authorizer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1810) SCM command to Activate and Deactivate pipelines

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1810?focusedWorklogId=290943&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290943
 ]

ASF GitHub Bot logged work on HDDS-1810:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:17
Start Date: 08/Aug/19 02:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1224: HDDS-1810. SCM 
command to Activate and Deactivate pipelines.
URL: https://github.com/apache/hadoop/pull/1224#issuecomment-519335888
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 91 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 81 | Maven dependency ordering for branch |
   | +1 | mvninstall | 637 | trunk passed |
   | +1 | compile | 385 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 936 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | trunk passed |
   | 0 | spotbugs | 476 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 693 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 78 | Maven dependency ordering for patch |
   | +1 | mvninstall | 569 | the patch passed |
   | +1 | compile | 383 | the patch passed |
   | +1 | cc | 383 | the patch passed |
   | +1 | javac | 383 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 789 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 193 | the patch passed |
   | +1 | findbugs | 740 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 258 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2277 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 8648 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1224/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1224 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 7499578d1bdd 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1224/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1224/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1224/2/testReport/ |
   | Max. process+thread count | 4541 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-hdds/tools hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1224/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290943)
Time Spent: 1h 10m  (was: 1h)

> SCM command to Activate and Deactivate pipelines
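
For readers skimming the digest, a minimal sketch of what an activate/deactivate
pipeline operation could look like from the client side. The interface and
method names below are assumptions for illustration, not the API committed in
HDDS-1810.

{code:java}
// Hypothetical sketch; PipelineAdminClient and its methods are assumed
// names, not the actual HDDS-1810 API.
interface PipelineAdminClient {
  void activatePipeline(String pipelineId) throws Exception;
  void deactivatePipeline(String pipelineId) throws Exception;
}

final class PipelineToggleExample {

  // Deactivate first so no new containers are placed on the pipeline,
  // e.g. before taking its datanodes down for maintenance.
  static void drainForMaintenance(PipelineAdminClient client, String id)
      throws Exception {
    client.deactivatePipeline(id);
  }

  // Re-activate once the datanodes are healthy again.
  static void restore(PipelineAdminClient client, String id)
      throws Exception {
    client.activatePipeline(id);
  }
}
{code}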

[jira] [Work logged] (HDDS-1888) Add containers to node2container map in SCM as soon as a container is created

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1888?focusedWorklogId=290941&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290941
 ]

ASF GitHub Bot logged work on HDDS-1888:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:16
Start Date: 08/Aug/19 02:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1211: HDDS-1888. Add 
containers to node2container map in SCM as soon as a container is created.
URL: https://github.com/apache/hadoop/pull/1211#issuecomment-519335677
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 92 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 696 | trunk passed |
   | +1 | compile | 370 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 967 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 181 | trunk passed |
   | 0 | spotbugs | 454 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 688 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 576 | the patch passed |
   | +1 | compile | 440 | the patch passed |
   | +1 | javac | 440 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 756 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | +1 | findbugs | 699 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 358 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2206 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 8543 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1211/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1211 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 757e082820c5 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1211/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1211/4/testReport/ |
   | Max. process+thread count | 5278 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1211/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290941)
Time Spent: 1h 40m  (was: 1.5h)

> Add containers to node2container map in SCM as soon as a container is created
> -
>
> Key: HDDS-1888
> URL: https://issues.apache.org/jira/browse/HDDS-1888
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Report
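
A minimal sketch of the bookkeeping the issue title describes, under assumed
names: the node-to-containers map is updated eagerly when SCM creates a
container, instead of waiting for the next container report.

{code:java}
// Illustrative only; class and method names are assumptions.
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

final class Node2ContainerMap {
  private final Map<UUID, Set<Long>> node2Containers =
      new ConcurrentHashMap<>();

  // Called as soon as SCM allocates the container, so the map reflects the
  // placement immediately rather than after the next container report.
  void onContainerCreated(long containerId, Iterable<UUID> datanodes) {
    for (UUID dn : datanodes) {
      node2Containers
          .computeIfAbsent(dn, k -> ConcurrentHashMap.newKeySet())
          .add(containerId);
    }
  }
}
{code}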

[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=290938&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290938
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:12
Start Date: 08/Aug/19 02:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#issuecomment-519334999
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for branch |
   | +1 | mvninstall | 590 | trunk passed |
   | +1 | compile | 372 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 811 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 428 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 619 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 576 | the patch passed |
   | +1 | compile | 387 | the patch passed |
   | +1 | cc | 387 | the patch passed |
   | +1 | javac | 387 | the patch passed |
   | -0 | checkstyle | 33 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 644 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | +1 | findbugs | 688 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 198 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2772 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 8368 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1230 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux dc1375ba6c49 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/2/testReport/ |
   | Max. process+thread count | 3635 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290938)
Time Spent: 0.5h  (was: 20m)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
>  

[jira] [Work logged] (HDDS-1916) Only contract tests are run in ozonefs module

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1916?focusedWorklogId=290937&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290937
 ]

ASF GitHub Bot logged work on HDDS-1916:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:07
Start Date: 08/Aug/19 02:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1235: HDDS-1916. Only 
contract tests are run in ozonefs module
URL: https://github.com/apache/hadoop/pull/1235#issuecomment-519334141
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 63 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 671 | trunk passed |
   | +1 | compile | 438 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1990 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 655 | the patch passed |
   | +1 | compile | 458 | the patch passed |
   | +1 | javac | 458 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 688 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 336 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3292 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 8056 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1235/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1235 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 8e39febe6de2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1235/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1235/2/testReport/ |
   | Max. process+thread count | 4935 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozonefs U: hadoop-ozone/ozonefs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1235/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290937)
Time Spent: 50m  (was: 40m)

> Only contract tests are run in ozonefs module
> -
>
> Key: HDDS-1916
> URL: https://issues.apache.org/jira/browse/HDDS-1916
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{hadoop-ozone-filesyst

[jira] [Work logged] (HDDS-1832) Improve logging for PipelineActions handling in SCM and datanode

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1832?focusedWorklogId=290933&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290933
 ]

ASF GitHub Bot logged work on HDDS-1832:


Author: ASF GitHub Bot
Created on: 08/Aug/19 02:00
Start Date: 08/Aug/19 02:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1217: HDDS-1832 : 
Improve logging for PipelineActions handling in SCM and datanode.
URL: https://github.com/apache/hadoop/pull/1217#issuecomment-519332820
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 610 | trunk passed |
   | +1 | compile | 346 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 788 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 423 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 613 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 519 | the patch passed |
   | +1 | compile | 350 | the patch passed |
   | +1 | javac | 350 | the patch passed |
   | +1 | checkstyle | 70 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 644 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   | +1 | findbugs | 654 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 200 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2175 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 7583 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1217/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1217 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux df20570931d3 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1217/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1217/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1217/2/testReport/ |
   | Max. process+thread count | 4693 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1217/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290933)
Time Spent: 2h 10m  (was: 2h)

> Improve logging for PipelineActions handling in SCM and datanode
> 
>
> Key: HDDS-1832
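
The patch itself is not reproduced in this digest; as a rough illustration of
the kind of logging improvement the title describes, a pipeline action handler
could log the pipeline ID, the action, its reason, and the reporting datanode
in the project's usual SLF4J style. All names below are stand-ins.

{code:java}
// Stand-in sketch; the real handler and event types differ.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class PipelineActionLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(PipelineActionLogging.class);

  static void onPipelineAction(String pipelineId, String action,
      String reason, String sourceDatanode) {
    // Log enough context to trace why a pipeline was closed and by whom,
    // instead of a bare "received pipeline action" line.
    LOG.info("Pipeline action {} for pipeline {} reported by {}: {}",
        action, pipelineId, sourceDatanode, reason);
  }
}
{code}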

[jira] [Work logged] (HDDS-1881) Design doc: decommissioning in Ozone

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1881?focusedWorklogId=290930&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290930
 ]

ASF GitHub Bot logged work on HDDS-1881:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:54
Start Date: 08/Aug/19 01:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1196: 
HDDS-1881. Design doc: decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#discussion_r311828001
 
 

 ##
 File path: hadoop-hdds/docs/content/design/decommissioning.md
 ##
 @@ -0,0 +1,610 @@
+---
+title: Decommissioning in Ozone
+summary: Formal process to shut down machines in a safe way after the required 
replications.
+date: 2019-07-31
+jira: HDDS-1881
+status: current
+author: Anu Engineer, Marton Elek, Stephen O'Donnell
+---
+
+# Abstract
+
+The goal of decommissioning is to turn off a selected set of machines without 
data loss. It may or may not require moving the existing replicas of the 
containers to other nodes.
+
+There are two main classes of decommissioning:
+
+ * __Maintenance mode__: where the node is expected to be back after a while. 
It may not require replication of containers if enough replicas are available 
from other nodes (as we expect to have the current replicas back after the 
restart).
+
+ * __Decommissioning__: where the node won't be started again. All the data 
should be replicated according to the current replication rules.
+
+Goals:
+
+ * Decommissioning can be canceled any time
+ * The progress of the decommissioning should be trackable
+ * The nodes under decommissioning / maintenance mode should not be used for 
new pipelines / containers
+ * The state of the datanodes should be persisted / replicated by the SCM (in 
HDFS the decommissioning info, i.e. the exclude/include lists, is replicated 
manually by the admin). If a datanode is marked for decommissioning, this 
state should be available after SCM and/or Datanode restarts.
+ * We need to support validations before decommissioning (but the violations 
can be ignored by the admin).
+ * The administrator should be notified when a node can be turned off.
+ * The maintenance mode can be time constrained: if the node is marked for 
maintenance for 1 week and is not up after one week, the containers should be 
considered lost (DEAD node) and should be replicated.
+
+# Introduction
+
+Ozone is a highly available file system that relies on commodity hardware. In 
other words, Ozone is designed to handle failures of these nodes all the time.
+
+The Storage Container Manager (SCM) is designed to monitor node health and 
replicate blocks and containers as needed.
+
+At times, operators of the cluster can help the SCM by giving it hints. When 
removing a datanode, the operator can provide a hint that a planned failure 
of the node is coming up, so SCM can make sure it reaches a safe state to 
handle this planned failure.
+
+Sometimes, this failure is transient; that is, the operator is taking down 
this node temporarily. In that case, we can live with lower replica counts by 
being optimistic.
+
+Both of these operations, __Maintenance__ and __Decommissioning__, are 
similar from the Replication point of view. In both cases, the user instructs 
us on how to handle an upcoming failure.
+
+Today, SCM (the *Replication Manager* component inside SCM) understands only 
one form of failure handling. This paper extends the Replica Manager failure 
modes to allow users to request which failure handling model to adopt 
(Optimistic or Pessimistic).
+
+Based on physical realities, there are two responses to any perceived 
failure: heal the system by taking corrective actions, or ignore the failure, 
since future actions will heal the system automatically.
+
+## User Experiences (Decommissioning vs Maintenance mode)
+
+From the user's point of view, there are two kinds of planned failures that 
the user would like to communicate to Ozone.
+
+The first kind is when a 'real' failure is going to happen in the future. 
This 'real' failure is the act of decommissioning. We denote this as 
"decommission" throughout this paper. The response the user wants is for 
SCM/Ozone to make replicas to deal with the planned failure.
+
+The second kind is when the failure is 'transient.' The user knows that this 
failure is temporary and the cluster in most cases can safely ignore this 
issue. However, if the transient failures are going to cause a failure of 
availability, then the user would like Ozone to take appropriate actions to 
address it. An example of this case is the user putting 3 data nodes into 
maintenance mode and switching them off.
+
+The transient failure can violate the availability guarantees of Ozone, 
since the user is telling us not to take corrective actions. Many times, the 
user does not understand the impact on avai
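
To make the distinction drawn in the excerpt above concrete, here is a small
sketch of how the two planned-failure modes could be modeled: decommissioning
always re-replicates, while maintenance tolerates missing replicas until its
window expires. This illustrates the design text only; it is not code from the
HDDS-1881 patch.

{code:java}
// Illustration of the design text above, not code from HDDS-1881.
import java.time.Instant;

enum NodeOperationalState {
  IN_SERVICE,       // normal cluster member
  DECOMMISSIONING,  // leaving for good: replicate all data away
  IN_MAINTENANCE    // expected back: tolerate reduced replica counts
}

final class PlannedFailurePolicy {

  // Should the Replication Manager act on a container that has replicas
  // on a node in the given state?
  static boolean needsReplication(NodeOperationalState state,
      Instant maintenanceDeadline, Instant now,
      int healthyReplicasElsewhere, int requiredReplicas) {
    switch (state) {
    case DECOMMISSIONING:
      // Pessimistic: the node will not come back, so its replicas do not
      // count; copy until enough replicas exist on other nodes.
      return healthyReplicasElsewhere < requiredReplicas;
    case IN_MAINTENANCE:
      // Optimistic: live with fewer replicas while the node is away, but
      // treat it as DEAD once the maintenance window expires.
      return now.isAfter(maintenanceDeadline)
          && healthyReplicasElsewhere < requiredReplicas;
    default:
      return healthyReplicasElsewhere < requiredReplicas;
    }
  }
}
{code}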

[jira] [Work logged] (HDDS-1881) Design doc: decommissioning in Ozone

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1881?focusedWorklogId=290931&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290931
 ]

ASF GitHub Bot logged work on HDDS-1881:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:54
Start Date: 08/Aug/19 01:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1196: HDDS-1881. 
Design doc: decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#issuecomment-519331661
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 656 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1439 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 543 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 2 line(s) with tabs. |
   | +1 | shadedclient | 633 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 2849 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1196/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1196 |
   | Optional Tests | dupname asflicense mvnsite |
   | uname | Linux 87b0f790b823 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1196/7/artifact/out/whitespace-tabs.txt
 |
   | Max. process+thread count | 444 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1196/7/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290931)
Time Spent: 39h 50m  (was: 39h 40m)

> Design doc: decommissioning in Ozone
> 
>
> Key: HDDS-1881
> URL: https://issues.apache.org/jira/browse/HDDS-1881
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: design, pull-request-available
>  Time Spent: 39h 50m
>  Remaining Estimate: 0h
>
> The design doc can be attached to the documentation. In this jira the design
> doc will be attached and merged into the documentation page.






[jira] [Work logged] (HDDS-1881) Design doc: decommissioning in Ozone

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1881?focusedWorklogId=290929&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290929
 ]

ASF GitHub Bot logged work on HDDS-1881:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:54
Start Date: 08/Aug/19 01:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1196: 
HDDS-1881. Design doc: decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#discussion_r311827995
 
 

 ##
 File path: hadoop-hdds/docs/content/design/decommissioning.md
 ##
 @@ -0,0 +1,610 @@
+---
+title: Decommissioning in Ozone
+summary: Formal process to shut down machines in a safe way after the required 
replications.
+date: 2019-07-31
+jira: HDDS-1881
+status: current
+author: Anu Engineer, Marton Elek, Stephen O'Donnell
+---
+
+# Abstract
+
+The goal of decommissioning is to turn off a selected set of machines without 
data loss. It may or may not require moving the existing replicas of the 
containers to other nodes.
+
+There are two main classes of decommissioning:
+
+ * __Maintenance mode__: where the node is expected to be back after a while. 
It may not require replication of containers if enough replicas are available 
from other nodes (as we expect to have the current replicas back after the 
restart).
+
+ * __Decommissioning__: where the node won't be started again. All the data 
should be replicated according to the current replication rules.
+
+Goals:
+
+ * Decommissioning can be canceled any time
+ * The progress of the decommissioning should be trackable
+ * The nodes under decommissioning / maintenance mode should not be used for 
new pipelines / containers
+ * The state of the datanodes should be persisted / replicated by the SCM (in 
HDFS the decommissioning info, i.e. the exclude/include lists, is replicated 
manually by the admin). If a datanode is marked for decommissioning, this 
state should be available after SCM and/or Datanode restarts.
+ * We need to support validations before decommissioning (but the violations 
can be ignored by the admin).
+ * The administrator should be notified when a node can be turned off.
+ * The maintenance mode can be time constrained: if the node is marked for 
maintenance for 1 week and is not up after one week, the containers should be 
considered lost (DEAD node) and should be replicated.
+
+# Introduction
+
+Ozone is a highly available file system that relies on commodity hardware. In 
other words, Ozone is designed to handle failures of these nodes all the time.
+
+The Storage Container Manager (SCM) is designed to monitor node health and 
replicate blocks and containers as needed.
+
+At times, operators of the cluster can help the SCM by giving it hints. When 
removing a datanode, the operator can provide a hint that a planned failure 
of the node is coming up, so SCM can make sure it reaches a safe state to 
handle this planned failure.
+
+Sometimes, this failure is transient; that is, the operator is taking down 
this node temporarily. In that case, we can live with lower replica counts by 
being optimistic.
+
+Both of these operations, __Maintenance__ and __Decommissioning__, are 
similar from the Replication point of view. In both cases, the user instructs 
us on how to handle an upcoming failure.
+
+Today, SCM (the *Replication Manager* component inside SCM) understands only 
one form of failure handling. This paper extends the Replica Manager failure 
modes to allow users to request which failure handling model to adopt 
(Optimistic or Pessimistic).
+
+Based on physical realities, there are two responses to any perceived 
failure: heal the system by taking corrective actions, or ignore the failure, 
since future actions will heal the system automatically.
+
+## User Experiences (Decommissioning vs Maintenance mode)
+
+From the user's point of view, there are two kinds of planned failures that 
the user would like to communicate to Ozone.
+
+The first kind is when a 'real' failure is going to happen in the future. 
This 'real' failure is the act of decommissioning. We denote this as 
"decommission" throughout this paper. The response the user wants is for 
SCM/Ozone to make replicas to deal with the planned failure.
+
+The second kind is when the failure is 'transient.' The user knows that this 
failure is temporary and the cluster in most cases can safely ignore this 
issue. However, if the transient failures are going to cause a failure of 
availability, then the user would like Ozone to take appropriate actions to 
address it. An example of this case is the user putting 3 data nodes into 
maintenance mode and switching them off.
+
+The transient failure can violate the availability guarantees of Ozone, 
since the user is telling us not to take corrective actions. Many times, the 
user does not understand the impact on avai

[jira] [Work logged] (HDDS-1925) ozonesecure acceptance test broken by HTTP auth requirement

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1925?focusedWorklogId=290928&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290928
 ]

ASF GitHub Bot logged work on HDDS-1925:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:47
Start Date: 08/Aug/19 01:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1248: HDDS-1925. 
ozonesecure acceptance test broken by HTTP auth requirement
URL: https://github.com/apache/hadoop/pull/1248#issuecomment-519330401
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 72 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 636 | trunk passed |
   | +1 | compile | 379 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 724 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 565 | the patch passed |
   | +1 | compile | 370 | the patch passed |
   | +1 | javac | 370 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 631 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 353 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2552 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6873 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1248/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1248 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
compile javac javadoc mvninstall shadedclient |
   | uname | Linux c82d4e12d5af 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1248/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1248/2/testReport/ |
   | Max. process+thread count | 4698 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1248/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290928)
Time 

[jira] [Work logged] (HDDS-1920) Place ozone.om.address config key default value in ozone-site.xml

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1920?focusedWorklogId=290927&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290927
 ]

ASF GitHub Bot logged work on HDDS-1920:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:44
Start Date: 08/Aug/19 01:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1237: HDDS-1920. Place 
ozone.om.address config key default value in ozone-site.xml
URL: https://github.com/apache/hadoop/pull/1237#issuecomment-519329970
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 94 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 625 | trunk passed |
   | +1 | compile | 391 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1884 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 614 | the patch passed |
   | +1 | compile | 394 | the patch passed |
   | +1 | javac | 394 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 751 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 356 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2066 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 6715 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestPipelineStateManager |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1237/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1237 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux a311d6461542 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1237/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1237/1/testReport/ |
   | Max. process+thread count | 4682 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1237/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290927)
Time Spent: 20m  (was: 10m)

> Place ozone.om.address config key default value in ozone-site.xml
> -
>
> Key: HDDS-1920
> URL: https://issues.apache.org/jira/browse/HDDS-1920
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:xml}
>  

[jira] [Work logged] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1610?focusedWorklogId=290925&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290925
 ]

ASF GitHub Bot logged work on HDDS-1610:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:44
Start Date: 08/Aug/19 01:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#issuecomment-519329836
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 73 | Maven dependency ordering for branch |
   | +1 | mvninstall | 612 | trunk passed |
   | +1 | compile | 387 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 916 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 467 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 687 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 76 | Maven dependency ordering for patch |
   | +1 | mvninstall | 564 | the patch passed |
   | +1 | compile | 382 | the patch passed |
   | +1 | javac | 382 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 778 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | +1 | findbugs | 731 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 350 | hadoop-hdds in the patch passed. |
   | -1 | unit | 371 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 6653 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1226 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 81e1c0db1839 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/2/testReport/ |
   | Max. process+thread count | 1105 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290925)
Time Spent: 3h 50m  (was: 3h 40m)

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> If the applyTransaction fails in the containerStateMachine, then the
> container should not accept new writes on restart.
> 
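
A minimal sketch of the requested behaviour, under assumed names: persist a
marker when applyTransaction fails, and check it on start-up so the container
is reopened read-only instead of accepting new writes.

{code:java}
// Sketch with assumed names; not the actual ContainerStateMachine code.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class ApplyFailureMarker {
  private final Path marker; // lives in the container's metadata directory

  ApplyFailureMarker(Path containerMetaDir) {
    this.marker = containerMetaDir.resolve("apply-txn-failed");
  }

  // Called when applyTransaction throws: record the failure durably so a
  // restart cannot erase the fact that state diverged from the Raft log.
  void recordFailure() throws IOException {
    if (!Files.exists(marker)) {
      Files.createFile(marker);
    }
  }

  // Checked on container start-up: if the marker exists, mark the
  // container UNHEALTHY / read-only instead of accepting new writes.
  boolean failedPreviously() {
    return Files.exists(marker);
  }
}
{code}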

[jira] [Work logged] (HDDS-1488) Scm cli command to start/stop replication manager

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1488?focusedWorklogId=290924&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290924
 ]

ASF GitHub Bot logged work on HDDS-1488:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:41
Start Date: 08/Aug/19 01:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1221: HDDS-1488. Scm 
cli command to start/stop replication manager.
URL: https://github.com/apache/hadoop/pull/1221#issuecomment-519329393
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 93 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 42 | Maven dependency ordering for branch |
   | +1 | mvninstall | 692 | trunk passed |
   | +1 | compile | 369 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 885 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   | 0 | spotbugs | 444 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 674 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 557 | the patch passed |
   | +1 | compile | 376 | the patch passed |
   | +1 | cc | 376 | the patch passed |
   | +1 | javac | 376 | the patch passed |
   | +1 | checkstyle | 90 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 726 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | -1 | findbugs | 379 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 337 | hadoop-hdds in the patch passed. |
   | -1 | unit | 459 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6490 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1221 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 09e331291846 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/7/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/7/testReport/ |
   | Max. process+thread count | 1197 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/client hadoop-hdds/server-scm 
hadoop-hdds/tools U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290924)
Time Spent: 3h  (was: 2h 50m)

> Scm cli command to start/stop replication manager
> -
>
> Key: HDDS-1488
> URL: https://issues.apache.org/jira/browse/HDDS-1488
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> It would be nice to have a scmcli command to start/stop the
> ReplicationManager thread running in SCM.
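
What that could look like from the client side, as a hedged sketch: a control
interface with start/stop/status operations mapped to a CLI verb. The
interface and the verb mapping shown in the comment below are assumptions;
the committed scmcli wiring may differ.

{code:java}
// Assumed interface and verb mapping; the real HDDS-1488 CLI may differ.
interface ReplicationManagerControl {
  void startReplicationManager() throws Exception;
  void stopReplicationManager() throws Exception;
  boolean isReplicationManagerRunning() throws Exception;
}

final class ReplicationManagerCli {

  // A verb like "scmcli replicationmanager start|stop|status" could map to:
  static void run(ReplicationManagerControl client, String verb)
      throws Exception {
    switch (verb) {
    case "start":
      client.startReplicationManager();
      break;
    case "stop":
      client.stopReplicationManager();
      break;
    case "status":
      System.out.println(client.isReplicationManagerRunning()
          ? "ReplicationManager is running"
          : "ReplicationManager is stopped");
      break;
    default:
      throw new IllegalArgumentException("Unknown verb: " + verb);
    }
  }
}
{code}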




[jira] [Work logged] (HDDS-1929) OM started on recon host in ozonesecure compose

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1929?focusedWorklogId=290921&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290921
 ]

ASF GitHub Bot logged work on HDDS-1929:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:37
Start Date: 08/Aug/19 01:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1250: HDDS-1929. OM 
started on recon host in ozonesecure compose
URL: https://github.com/apache/hadoop/pull/1250#issuecomment-519328516
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 616 | trunk passed |
   | +1 | compile | 404 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1874 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 604 | the patch passed |
   | +1 | compile | 395 | the patch passed |
   | +1 | javac | 395 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 733 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 334 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1728 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 6256 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.container.common.transport.server.ratis.TestCSMMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1250/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1250 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint |
   | uname | Linux 798c1fe507a5 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1250/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1250/2/testReport/ |
   | Max. process+thread count | 4422 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1250/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290921)
Time Spent: 1h 10m  (was: 1h)

> OM started on recon host in ozonesecure compose 
> 
>
> Key: HDDS-1929
> URL: https://issues.apache.org/jira/browse/HDDS-1929
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> OM is started temporarily on {{recon}} host in {{ozonesecure}} compose.

[jira] [Work logged] (HDDS-1928) Cannot run ozone-recon compose due to syntax error

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1928?focusedWorklogId=290920&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290920
 ]

ASF GitHub Bot logged work on HDDS-1928:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:35
Start Date: 08/Aug/19 01:35
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1249: HDDS-1928. 
Cannot run ozone-recon compose due to syntax error
URL: https://github.com/apache/hadoop/pull/1249#issuecomment-519328208
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 609 | trunk passed |
   | +1 | compile | 370 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1691 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 556 | the patch passed |
   | +1 | compile | 362 | the patch passed |
   | +1 | javac | 362 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 619 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 295 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2056 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 63 | The patch does not generate ASF License warnings. |
   | | | 6146 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1249/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1249 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint |
   | uname | Linux 2ee235b6c525 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1249/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1249/2/testReport/ |
   | Max. process+thread count | 5379 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1249/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290920)
Time Spent: 50m  (was: 40m)

> Cannot run ozone-recon compose due to syntax error
> --
>
> Key: HDDS-1928
> URL: https://issues.apache.org/jira/browse/HDDS-1928
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-recon
> $ docker-compose up -d --scale datanode=3
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
> {noformat}
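> This scanner error typically means a plain scalar value contains an unquoted
> ": " sequence; quoting the value avoids it. A minimal reproduction (an
> illustrative fragment, not the actual compose file):
> {code:bash}
> cat > bad-compose.yaml <<'EOF'
> services:
>   recon:
>     environment:
>       KEY: value: with a second colon
> EOF
> # parsing fails before any schema validation:
> docker-compose -f bad-compose.yaml config
> # ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
> {code}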

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=290918&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290918
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:30
Start Date: 08/Aug/19 01:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-519327360
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 77 | Maven dependency ordering for branch |
   | +1 | mvninstall | 586 | trunk passed |
   | +1 | compile | 351 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 827 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 406 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 602 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 534 | the patch passed |
   | +1 | compile | 365 | the patch passed |
   | +1 | javac | 96 | hadoop-hdds in the patch passed. |
   | +1 | javac | 269 | hadoop-ozone generated 0 new + 4 unchanged - 4 fixed = 
4 total (was 8) |
   | -0 | checkstyle | 33 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 684 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | the patch passed |
   | +1 | findbugs | 625 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 303 | hadoop-hdds in the patch passed. |
   | -1 | unit | 159 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 5823 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestS3BucketManager |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c65235aefcce 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/14/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/14/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/14/testReport/ |
   | Max. process+thread count | 1387 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/14/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290918)
Time Spent: 5h 50m  (was: 5h 40m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: 

[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=290916&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290916
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:29
Start Date: 08/Aug/19 01:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1218: HDDS-1891. Ozone 
fs shell command should work with default port when port number is not specified
URL: https://github.com/apache/hadoop/pull/1218#issuecomment-519327221
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 594 | trunk passed |
   | +1 | compile | 377 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 777 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | trunk passed |
   | 0 | spotbugs | 415 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 620 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 526 | the patch passed |
   | +1 | compile | 350 | the patch passed |
   | +1 | javac | 350 | the patch passed |
   | +1 | checkstyle | 69 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 645 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | -1 | findbugs | 353 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 295 | hadoop-hdds in the patch passed. |
   | -1 | unit | 340 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 5750 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1218/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1218 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux de9c20ca4927 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1218/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1218/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1218/1/testReport/ |
   | Max. process+thread count | 1302 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozonefs U: hadoop-ozone/ozonefs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1218/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290916)
Time Spent: 50m  (was: 40m)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file s

[jira] [Work logged] (HDDS-1914) Ozonescript example docker-compose cluster can't be started

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1914?focusedWorklogId=290909&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290909
 ]

ASF GitHub Bot logged work on HDDS-1914:


Author: ASF GitHub Bot
Created on: 08/Aug/19 01:05
Start Date: 08/Aug/19 01:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1232: HDDS-1914. 
Ozonescript example docker-compose cluster can't be started
URL: https://github.com/apache/hadoop/pull/1232#issuecomment-519322749
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 53 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 4 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 608 | trunk passed |
   | +1 | compile | 399 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 729 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 554 | the patch passed |
   | +1 | compile | 366 | the patch passed |
   | +1 | javac | 366 | the patch passed |
   | -1 | hadolint | 2 | The patch generated 1 new + 13 unchanged - 0 fixed = 
14 total (was 13) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 627 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 282 | hadoop-hdds in the patch passed. |
   | -1 | unit | 162 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 4353 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1232/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1232 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient hadolint shellcheck shelldocs |
   | uname | Linux 84332cf6caf8 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_212 |
   | hadolint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1232/2/artifact/out/diff-patch-hadolint.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1232/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1232/2/testReport/ |
   | Max. process+thread count | 1317 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1232/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290909)
Time Spent: 0.5h  (was: 20m)

> Ozonescript example docker-compose cluster can't be started
> ---
>
> Key: HDDS-1914
> URL: https://issues.apache.org/jira/browse/HDDS-1914
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> the compose/ozonescripts cluster provides an example environment to test the 
> start-ozone.sh script.

[jira] [Commented] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-07 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902570#comment-16902570
 ] 

Chen Liang commented on HDFS-14204:
---

Posted v006 patch to rebase; it also includes two more Jiras that I happened to 
miss in the previous one, specifically HDFS-14537 and HDFS-14279.

> Backport HDFS-12943 to branch-2
> ---
>
> Key: HDFS-14204
> URL: https://issues.apache.org/jira/browse/HDFS-14204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14204-branch-2.001.patch, 
> HDFS-14204-branch-2.002.patch, HDFS-14204-branch-2.003.patch, 
> HDFS-14204-branch-2.004.patch, HDFS-14204-branch-2.005.patch, 
> HDFS-14204-branch-2.006.patch
>
>
> Currently, consistent read from standby feature (HDFS-12943) is only in trunk 
> (branch-3). This JIRA aims to backport the feature to branch-2.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-07 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14204:
--
Attachment: HDFS-14204-branch-2.006.patch

> Backport HDFS-12943 to branch-2
> ---
>
> Key: HDFS-14204
> URL: https://issues.apache.org/jira/browse/HDFS-14204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14204-branch-2.001.patch, 
> HDFS-14204-branch-2.002.patch, HDFS-14204-branch-2.003.patch, 
> HDFS-14204-branch-2.004.patch, HDFS-14204-branch-2.005.patch, 
> HDFS-14204-branch-2.006.patch
>
>
> Currently, consistent read from standby feature (HDFS-12943) is only in trunk 
> (branch-3). This JIRA aims to backport the feature to branch-2.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-07 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14204:
--
Attachment: (was: HDFS-14204-branch-2.006.patch)

> Backport HDFS-12943 to branch-2
> ---
>
> Key: HDFS-14204
> URL: https://issues.apache.org/jira/browse/HDFS-14204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14204-branch-2.001.patch, 
> HDFS-14204-branch-2.002.patch, HDFS-14204-branch-2.003.patch, 
> HDFS-14204-branch-2.004.patch, HDFS-14204-branch-2.005.patch
>
>
> Currently, consistent read from standby feature (HDFS-12943) is only in trunk 
> (branch-3). This JIRA aims to backport the feature to branch-2.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-07 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14204:
--
Attachment: HDFS-14204-branch-2.006.patch

> Backport HDFS-12943 to branch-2
> ---
>
> Key: HDFS-14204
> URL: https://issues.apache.org/jira/browse/HDFS-14204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14204-branch-2.001.patch, 
> HDFS-14204-branch-2.002.patch, HDFS-14204-branch-2.003.patch, 
> HDFS-14204-branch-2.004.patch, HDFS-14204-branch-2.005.patch, 
> HDFS-14204-branch-2.006.patch
>
>
> Currently, consistent read from standby feature (HDFS-12943) is only in trunk 
> (branch-3). This JIRA aims to backport the feature to branch-2.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-07 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902559#comment-16902559
 ] 

Chen Liang commented on HDFS-14204:
---

Thanks for your interest [~haiyang Hu]! Currently I'm just fixing a couple more 
minor issues. Since Konstantin has given a +1, I expect this to be committed to 
branch-2 by the end of this week, or early next week at the latest.

Please be aware, though, that there are still a few other related Jiras (for 
example HDFS-14162) that I still need to backport from trunk (3.x) to branch-2. 
Such Jiras are not blockers, but they improve robustness. I did not include them 
here, mostly to limit the complexity of the already huge patch.

> Backport HDFS-12943 to branch-2
> ---
>
> Key: HDFS-14204
> URL: https://issues.apache.org/jira/browse/HDFS-14204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14204-branch-2.001.patch, 
> HDFS-14204-branch-2.002.patch, HDFS-14204-branch-2.003.patch, 
> HDFS-14204-branch-2.004.patch, HDFS-14204-branch-2.005.patch
>
>
> Currently, consistent read from standby feature (HDFS-12943) is only in trunk 
> (branch-3). This JIRA aims to backport the feature to branch-2.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1915) Remove hadoop script from ozone distribution

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1915?focusedWorklogId=290898&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290898
 ]

ASF GitHub Bot logged work on HDDS-1915:


Author: ASF GitHub Bot
Created on: 08/Aug/19 00:23
Start Date: 08/Aug/19 00:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1233: HDDS-1915. 
Remove hadoop script from ozone distribution
URL: https://github.com/apache/hadoop/pull/1233#issuecomment-519315376
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 77 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | shadedclient | 849 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 751 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 1822 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1233/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1233 |
   | Optional Tests | dupname asflicense shellcheck shelldocs |
   | uname | Linux ba42f1de95fb 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Max. process+thread count | 334 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1233/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290898)
Time Spent: 40m  (was: 0.5h)

> Remove hadoop script from ozone distribution
> 
>
> Key: HDDS-1915
> URL: https://issues.apache.org/jira/browse/HDDS-1915
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> /bin/hadoop script is included in the ozone distribution even though we have a 
> dedicated /bin/ozone.
> [~arp] reported that it can be confusing, for example "hadoop classpath" 
> returns a bad classpath ("ozone classpath" should be used 
> instead).
> To avoid such confusion I suggest removing the hadoop script from the 
> distribution, as the ozone script already provides all the functionality.
> It also helps us to reduce the dependencies between hadoop 3.2-SNAPSHOT and 
> ozone, as we use the snapshot hadoop script as of now.
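> A quick way to see the difference from an extracted distribution (a minimal 
> sketch; the version in the directory name is illustrative):
> {code:bash}
> cd ozone-0.5.0-SNAPSHOT   # extracted distribution root
> bin/hadoop classpath      # may resolve against hadoop jars the ozone tarball does not ship
> bin/ozone classpath       # the ozone script is the supported entry point
> {code}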



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1900) Remove UpdateBucket handler which supports add/remove Acl

2019-08-07 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1900:
-
Description: 
This Jira is to remove bucket update handler.

To add acl/remove acl we should use ozone sh bucket addacl/ozone sh bucket 
removeacl.

 

Otherwise, when security is enabled, the old bucket update handler uses 
setBucketProperty, which checks acl access for WRITE, whereas for add/remove 
Acl we should check access for WRITE_ACL.

If we keep both ways, a USER who does not have WRITE_ACL can still 
add/remove Acls on a bucket.

 

This Jira is to clean up the old code and fix this security issue.

  was:
This Jira is to remove bucket update handler.

To add acl/remove acl we should use ozone sh bucket addacl/ozone sh bucket 
removeacl.

 

Otherwise, when security is enabled, the old bucket update handler uses 
setBucketProperty, which checks acl access for WRITE, whereas for add/remove 
Acl we should check access for WRITE_ACL.

If we keep both ways, a USER who does not have WRITE_ACL can still 
add/remove Acls on a bucket.

 

This Jira is to clean up the old code.
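
For reference, the replacement commands look roughly like this (a sketch; the 
volume/bucket names are illustrative, and the --acl spec follows the 
user:<name>:<rights> format from the ozone sh help text):

{code:bash}
ozone sh bucket addacl --acl user:testuser:rw /vol1/bucket1
ozone sh bucket removeacl --acl user:testuser:rw /vol1/bucket1
{code}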


> Remove UpdateBucket handler which supports add/remove Acl
> -
>
> Key: HDDS-1900
> URL: https://issues.apache.org/jira/browse/HDDS-1900
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This Jira is to remove bucket update handler.
> To add acl/remove acl we should use ozone sh bucket addacl/ozone sh bucket 
> removeacl.
>  
> Otherwise, when security is enabled, the old bucket update handler uses 
> setBucketProperty, which checks acl access for WRITE, whereas for 
> add/remove Acl we should check access for WRITE_ACL.
>  
> If we keep both ways, a USER who does not have WRITE_ACL can still 
> add/remove Acls on a bucket.
>  
> This Jira is to clean up the old code and fix this security issue.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1900) Remove UpdateBucket handler which supports add/remove Acl

2019-08-07 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1900:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   0.4.1
   Status: Resolved  (was: Patch Available)

> Remove UpdateBucket handler which supports add/remove Acl
> -
>
> Key: HDDS-1900
> URL: https://issues.apache.org/jira/browse/HDDS-1900
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This Jira is to remove bucket update handler.
> To add acl/remove acl we should use ozone sh bucket addacl/ozone sh bucket 
> removeacl.
>  
> Otherwise, when security is enabled, the old bucket update handler uses 
> setBucketProperty, which checks acl access for WRITE, whereas for 
> add/remove Acl we should check access for WRITE_ACL.
>  
> If we keep both ways, a USER who does not have WRITE_ACL can still 
> add/remove Acls on a bucket.
>  
> This Jira is to clean up the old code.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1925) ozonesecure acceptance test broken by HTTP auth requirement

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1925?focusedWorklogId=290881&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290881
 ]

ASF GitHub Bot logged work on HDDS-1925:


Author: ASF GitHub Bot
Created on: 08/Aug/19 00:07
Start Date: 08/Aug/19 00:07
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1248: HDDS-1925. 
ozonesecure acceptance test broken by HTTP auth requirement
URL: https://github.com/apache/hadoop/pull/1248
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290881)
Time Spent: 50m  (was: 40m)

> ozonesecure acceptance test broken by HTTP auth requirement
> ---
>
> Key: HDDS-1925
> URL: https://issues.apache.org/jira/browse/HDDS-1925
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Acceptance test is failing at {{ozonesecure}} with the following error from 
> {{jq}}:
> {noformat:title=https://github.com/elek/ozone-ci/blob/325779d34623061e27b80ade3b749210648086d1/byscane/byscane-nightly-ds7lx/acceptance/output.log#L2779}
> parse error: Invalid numeric literal at line 2, column 0
> {noformat}
> Example compose environments wait for datanodes to be up:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L71-L72}
>   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode="${datanode_count}"
>   wait_for_datanodes "$COMPOSE_FILE" "${datanode_count}"
> {code}
> The number of datanodes up is determined via HTTP query of JMX endpoint:
> {code:title=https://github.com/apache/hadoop/blob/9cd211ac86bb1124bdee572fddb6f86655b19b73/hadoop-ozone/dist/src/main/compose/testlib.sh#L44-L46}
>  #This line checks the number of HEALTHY datanodes registered in scm over 
> the
>  # jmx HTTP servlet
>  datanodes=$(docker-compose -f "${compose_file}" exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value')
> {code}
> The problem is that no authentication is performed before or during the 
> request, which is no longer allowed since HDDS-1901:
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
> 
> 
> 
> Error 401 Authentication required
> 
> HTTP ERROR 401
> Problem accessing /jmx. Reason:
> Authentication required
> 
> 
> {code}
> {code}
> $ docker-compose exec -T scm curl -s 
> 'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo'
>  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value'
> parse error: Invalid numeric literal at line 2, column 0
> {code}
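> One way to make the probe work against a secured endpoint (a sketch, assuming 
> SPNEGO is enabled on the servlet and a keytab is available inside the scm 
> container; the principal and keytab path are illustrative):
> {code}
> docker-compose exec -T scm kinit -kt /etc/security/keytabs/scm.keytab scm/scm@EXAMPLE.COM
> docker-compose exec -T scm curl -s --negotiate -u : \
>   'http://localhost:9876/jmx?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo' \
>   | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value'
> {code}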



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1900) Remove UpdateBucket handler which supports add/remove Acl

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1900?focusedWorklogId=290882&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290882
 ]

ASF GitHub Bot logged work on HDDS-1900:


Author: ASF GitHub Bot
Created on: 08/Aug/19 00:07
Start Date: 08/Aug/19 00:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1219: 
HDDS-1900. Remove UpdateBucket handler which supports add/remove Acl.
URL: https://github.com/apache/hadoop/pull/1219
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290882)
Time Spent: 2h 40m  (was: 2.5h)

> Remove UpdateBucket handler which supports add/remove Acl
> -
>
> Key: HDDS-1900
> URL: https://issues.apache.org/jira/browse/HDDS-1900
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This Jira is to remove bucket update handler.
> To add acl/remove acl we should use ozone sh bucket addacl/ozone sh bucket 
> removeacl.
>  
> Otherwise, when security is enabled, the old bucket update handler uses 
> setBucketProperty, which checks acl access for WRITE, whereas for 
> add/remove Acl we should check access for WRITE_ACL.
>  
> If we keep both ways, a USER who does not have WRITE_ACL can still 
> add/remove Acls on a bucket.
>  
> This Jira is to clean up the old code.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1900) Remove UpdateBucket handler which supports add/remove Acl

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1900?focusedWorklogId=290880&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290880
 ]

ASF GitHub Bot logged work on HDDS-1900:


Author: ASF GitHub Bot
Created on: 08/Aug/19 00:07
Start Date: 08/Aug/19 00:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1219: HDDS-1900. 
Remove UpdateBucket handler which supports add/remove Acl.
URL: https://github.com/apache/hadoop/pull/1219#issuecomment-519312311
 
 
   Test failures are not related to this patch.
   I will commit this to the trunk and ozone-0.4.1
   Thank You @xiaoyuyao for the review.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290880)
Time Spent: 2.5h  (was: 2h 20m)

> Remove UpdateBucket handler which supports add/remove Acl
> -
>
> Key: HDDS-1900
> URL: https://issues.apache.org/jira/browse/HDDS-1900
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This Jira is to remove bucket update handler.
> To add acl/remove acl we should use ozone sh bucket addacl/ozone sh bucket 
> removeacl.
>  
> Otherwise, when security is enabled, the old bucket update handler uses 
> setBucketProperty, which checks acl access for WRITE, whereas for 
> add/remove Acl we should check access for WRITE_ACL.
>  
> If we keep both ways, a USER who does not have WRITE_ACL can still 
> add/remove Acls on a bucket.
>  
> This Jira is to clean up the old code.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1930) Test Topology Aware Job scheduling with Ozone Topology

2019-08-07 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1930:


 Summary: Test Topology Aware Job scheduling with Ozone Topology
 Key: HDDS-1930
 URL: https://issues.apache.org/jira/browse/HDDS-1930
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


My initial results with Terasort do not seem to report the counters properly. 
Most of the requests are handled rack-local but none node-local. This ticket is 
opened to add more system testing to validate the feature. 

Total Allocated Containers: 3778
Each table cell represents the number of NodeLocal/RackLocal/OffSwitch 
containers satisfied by NodeLocal/RackLocal/OffSwitch resource requests.

| | Node Local Request | Rack Local Request | Off Switch Request |
|:--|--:|--:|--:|
| Num Node Local Containers (satisfied by) | 0 | | |
| Num Rack Local Containers (satisfied by) | 0 | 3648 | |
| Num Off Switch Containers (satisfied by) | 0 | 96 | 34 |
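
For reproducing the numbers, a typical run looks like this (a sketch; the 
examples jar path, row count, and o3fs paths are illustrative). The locality 
counters above come from the YARN application page after the job finishes.

{code:bash}
# generate input, then sort it; teragen/terasort ship in the MapReduce examples jar
yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar teragen \
  10000000 o3fs://bucket.volume.localhost/tera-in
yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar terasort \
  o3fs://bucket.volume.localhost/tera-in o3fs://bucket.volume.localhost/tera-out
{code}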



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1894) Support listPipelines by filters in scmcli

2019-08-07 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902551#comment-16902551
 ] 

Xiaoyu Yao commented on HDDS-1894:
--

[~timmylicheng] something similar, but this can support more powerful 
predicates and be more efficient than string grep. 

> Support listPipelines by filters in scmcli
> --
>
> Key: HDDS-1894
> URL: https://issues.apache.org/jira/browse/HDDS-1894
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Li Cheng
>Priority: Major
>
> Today scmcli has a subcommand that allows listing all pipelines. This ticket is 
> opened to support filtering the results by switches, e.g., filter by Factor: THREE 
> and State: OPEN. This will be useful for troubleshooting in a large cluster.
>  
> {code}
> bin/ozone scmcli listPipelines
> Pipeline[ Id: a8d1b0c9-e1d4-49ea-8746-3f61dfb5ee3f, Nodes: 
> cce44fde-bc8d-4063-97b3-6f557af756e1\{ip: 10.17.112.65, host: 
> ia0230.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:ONE, State:OPEN]
> Pipeline[ Id: c9c453d1-d74c-4414-b87f-1d3585d78a7c, Nodes: 
> 0b7b0b93-8323-4b82-8cc0-a9a5c10ab827\{ip: 10.17.112.29, host: 
> ia0138.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}c756a0e0-5a1b-4d03-ba5b-cafbcabac877\{ip: 10.17.112.27, host: 
> ia0134.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}bee45bd7-1ee6-4726-b3d1-81476dc1eb49\{ip: 10.17.112.28, host: 
> ia0136.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:THREE, State:OPEN]
> {code}
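> Until the filter switches exist, a rough approximation with plain string 
> matching (the approach the built-in filters would replace) is:
> {code:bash}
> # matches the record tail line that carries the Factor and State fields
> bin/ozone scmcli listPipelines | grep 'Factor:THREE' | grep 'State:OPEN'
> {code}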



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1865) Use "ozone.network.topology.aware.read" to control both RPC client and server side logic

2019-08-07 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-1865.
--
   Resolution: Fixed
Fix Version/s: 0.5.0

Thanks [~Sammi] for the contribution. I've merged the patch to trunk.

> Use "ozone.network.topology.aware.read" to control both RPC client and server 
> side logic 
> -
>
> Key: HDDS-1865
> URL: https://issues.apache.org/jira/browse/HDDS-1865
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1865) Use "ozone.network.topology.aware.read" to control both RPC client and server side logic

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1865?focusedWorklogId=290879&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290879
 ]

ASF GitHub Bot logged work on HDDS-1865:


Author: ASF GitHub Bot
Created on: 07/Aug/19 23:59
Start Date: 07/Aug/19 23:59
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1184: HDDS-1865. 
Use "ozone.network.topology.aware.read" to control both RPC client and server 
side logic
URL: https://github.com/apache/hadoop/pull/1184
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290879)
Time Spent: 2h 10m  (was: 2h)

> Use "ozone.network.topology.aware.read" to control both RPC client and server 
> side logic 
> -
>
> Key: HDDS-1865
> URL: https://issues.apache.org/jira/browse/HDDS-1865
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1865) Use "ozone.network.topology.aware.read" to control both RPC client and server side logic

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1865?focusedWorklogId=290878&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290878
 ]

ASF GitHub Bot logged work on HDDS-1865:


Author: ASF GitHub Bot
Created on: 07/Aug/19 23:58
Start Date: 07/Aug/19 23:58
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1184: HDDS-1865. Use 
"ozone.network.topology.aware.read" to control both RPC client and server side 
logic
URL: https://github.com/apache/hadoop/pull/1184#issuecomment-519310591
 
 
   +1 I will merge it shortly. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290878)
Time Spent: 2h  (was: 1h 50m)

> Use "ozone.network.topology.aware.read" to control both RPC client and server 
> side logic 
> -
>
> Key: HDDS-1865
> URL: https://issues.apache.org/jira/browse/HDDS-1865
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1865) Use "ozone.network.topology.aware.read" to control both RPC client and server side logic

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1865?focusedWorklogId=290877&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290877
 ]

ASF GitHub Bot logged work on HDDS-1865:


Author: ASF GitHub Bot
Created on: 07/Aug/19 23:57
Start Date: 07/Aug/19 23:57
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1184: HDDS-1865. 
Use "ozone.network.topology.aware.read" to control both RPC client and server 
side logic
URL: https://github.com/apache/hadoop/pull/1184#discussion_r311809156
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/ContainerPlacementPolicyFactory.java
 ##
 @@ -36,7 +36,7 @@
 
   private static final Class
   OZONE_SCM_CONTAINER_PLACEMENT_IMPL_DEFAULT =
-  SCMContainerPlacementRandom.class;
+  SCMContainerPlacementRackAware.class;
 
 Review comment:
   Thanks for the details.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290877)
Time Spent: 1h 50m  (was: 1h 40m)

> Use "ozone.network.topology.aware.read" to control both RPC client and server 
> side logic 
> -
>
> Key: HDDS-1865
> URL: https://issues.apache.org/jira/browse/HDDS-1865
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1917) Ignore failing test-cases in TestSecureOzoneRpcClient

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1917?focusedWorklogId=290876&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290876
 ]

ASF GitHub Bot logged work on HDDS-1917:


Author: ASF GitHub Bot
Created on: 07/Aug/19 23:54
Start Date: 07/Aug/19 23:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1234: HDDS-1917. 
Ignore failing test-cases in TestSecureOzoneRpcClient.
URL: https://github.com/apache/hadoop/pull/1234#issuecomment-519309763
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 9 | https://github.com/apache/hadoop/pull/1234 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1234 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1234/2/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290876)
Time Spent: 50m  (was: 40m)

> Ignore failing test-cases in TestSecureOzoneRpcClient
> -
>
> Key: HDDS-1917
> URL: https://issues.apache.org/jira/browse/HDDS-1917
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Ignore failing test-cases in TestSecureOzoneRpcClient. This will be fixed 
> when HA support is added to acl operations.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1865) Use "ozone.network.topology.aware.read" to control both RPC client and server side logic

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1865?focusedWorklogId=290875&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290875
 ]

ASF GitHub Bot logged work on HDDS-1865:


Author: ASF GitHub Bot
Created on: 07/Aug/19 23:53
Start Date: 07/Aug/19 23:53
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1184: HDDS-1865. 
Use "ozone.network.topology.aware.read" to control both RPC client and server 
side logic
URL: https://github.com/apache/hadoop/pull/1184#discussion_r311808404
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientWithRatis.java
 ##
 @@ -91,60 +94,59 @@ public void testGetKeyAndFileWithNetworkTopology() throws 
IOException {
 String keyName = UUID.randomUUID().toString();
 
 // Write data into a key
-OzoneOutputStream out = bucket.createKey(keyName,
+try (OzoneOutputStream out = bucket.createKey(keyName,
 value.getBytes().length, ReplicationType.RATIS,
-THREE, new HashMap<>());
-out.write(value.getBytes());
-out.close();
+THREE, new HashMap<>())) {
+  out.write(value.getBytes());
+}
 
 // Since the rpc client is outside of cluster, then getFirstNode should be
 // equal to getClosestNode.
 OmKeyArgs.Builder builder = new OmKeyArgs.Builder();
 builder.setVolumeName(volumeName).setBucketName(bucketName)
 .setKeyName(keyName).setRefreshPipeline(true);
 
-// read key with topology aware read enabled(default)
-try {
-  OzoneInputStream is = bucket.readKey(keyName);
+// read key with topology aware read enabled
+try (OzoneInputStream is = bucket.readKey(keyName)) {
   byte[] b = new byte[value.getBytes().length];
   is.read(b);
   Assert.assertTrue(Arrays.equals(b, value.getBytes()));
 } catch (OzoneChecksumException e) {
-  fail("Reading key should success");
+  fail("Read key should succeed");
 }
-// read file with topology aware read enabled(default)
-try {
-  OzoneInputStream is = bucket.readFile(keyName);
+
+// read file with topology aware read enabled
+try (OzoneInputStream is = bucket.readKey(keyName)) {
   byte[] b = new byte[value.getBytes().length];
   is.read(b);
   Assert.assertTrue(Arrays.equals(b, value.getBytes()));
 } catch (OzoneChecksumException e) {
-  fail("Reading file should success");
+  fail("Read file should succeed");
 }
 
 // read key with topology aware read disabled
-conf.set(ScmConfigKeys.DFS_NETWORK_TOPOLOGY_AWARE_READ_ENABLED, "false");
-OzoneClient newClient = OzoneClientFactory.getRpcClient(conf);
-ObjectStore newStore = newClient.getObjectStore();
-OzoneBucket newBucket =
-newStore.getVolume(volumeName).getBucket(bucketName);
-try {
-  OzoneInputStream is = newBucket.readKey(keyName);
-  byte[] b = new byte[value.getBytes().length];
-  is.read(b);
-  Assert.assertTrue(Arrays.equals(b, value.getBytes()));
-} catch (OzoneChecksumException e) {
-  fail("Reading key should success");
-}
-// read file with topology aware read disabled
+conf.setBoolean(OzoneConfigKeys.OZONE_NETWORK_TOPOLOGY_AWARE_READ_KEY,
+false);
+try (OzoneClient newClient = OzoneClientFactory.getRpcClient(conf)) {
 
 Review comment:
   I'm glad you fixed this OzoneClient leak in these tests.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290875)
Time Spent: 1h 40m  (was: 1.5h)

> Use "ozone.network.topology.aware.read" to control both RPC client and server 
> side logic 
> -
>
> Key: HDDS-1865
> URL: https://issues.apache.org/jira/browse/HDDS-1865
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>







[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290868&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290868
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 23:22
Start Date: 07/Aug/19 23:22
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311801586
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestUtilizationService.java
 ##
 @@ -70,39 +77,51 @@ public void testGetFileCounts() throws IOException {
 verify(utilizationService, times(1)).getFileCounts();
 verify(fileCountBySizeDao, times(1)).findAll();
 
-assertEquals(41, resultList.size());
-long fileSize = 4096L;
+assertEquals(maxBinSize, resultList.size());
+long fileSize = 4096L;  // 4KB
 int index =  findIndex(fileSize);
 long count = resultList.get(index).getCount();
 assertEquals(index, count);
 
-fileSize = 1125899906842624L;
+fileSize = 1125899906842624L;   // 1PB
 index = findIndex(fileSize);
-if (index == Integer.MIN_VALUE) {
-  throw new IOException("File Size larger than permissible file size");
-}
+count = resultList.get(index).getCount(); // last extra bin for files >= 1PB
+assertEquals(maxBinSize - 1, index);
+assertEquals(index, count);
 
-fileSize = 1025L;
+fileSize = 1025L;   // 1 KB + 1B
 index = findIndex(fileSize);
-count = resultList.get(index).getCount();
+count = resultList.get(index).getCount();
 assertEquals(index, count);
 
 fileSize = 25L;
 index = findIndex(fileSize);
 count = resultList.get(index).getCount();
 assertEquals(index, count);
+
+fileSize = 1125899906842623L;   // 1PB - 1B
+index = findIndex(fileSize);
+count = resultList.get(index).getCount();
+assertEquals(index, count);
+
+fileSize = 1125899906842624L * 4;   // 4 PB
+index = findIndex(fileSize);
+count = resultList.get(index).getCount();
+assertEquals(maxBinSize - 1, index);
+assertEquals(index, count);
   }
 
   public int findIndex(long dataSize) {
-int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2));
-if (logValue < 10) {
-  return 0;
-} else {
-  int index = logValue - 10;
-  if (index > maxBinSize) {
-return Integer.MIN_VALUE;
-  }
-  return (dataSize % oneKb == 0) ? index + 1 : index;
+if (dataSize > Math.pow(2, (maxBinSize + 10 - 2))) {  // 1 PB = 2 ^ 50
+  return maxBinSize - 1;
+}
+int index = 0;
+while(dataSize != 0) {
+  dataSize >>= 1;
+  index += 1;
 
 Review comment:
  This makes the unit test meaningless: if the test reimplements the same 
logic as the actual method, its assertions will always pass. We should 
assert the actual methods against constant expected values instead.
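 
   As an illustration of that suggestion, a hedged sketch (hypothetical, not 
   the committed test): assert the production calculateBinIndex against 
   hand-derived constants instead of recomputing indices with a parallel 
   findIndex. The expected values assume ceil(log2(size)) - 10 binning with a 
   catch-all last bin; `task` stands for the FileSizeCountTask under test.
 
{code:java}
// Hypothetical test: expected indices are hard-coded constants derived by
// hand, so the test cannot share a bug with the implementation it checks.
@Test
public void testCalculateBinIndexAgainstConstants() {
  assertEquals(0, task.calculateBinIndex(1024L));            // 1 KB
  assertEquals(1, task.calculateBinIndex(1025L));            // 1 KB + 1 B
  assertEquals(2, task.calculateBinIndex(4096L));            // 4 KB
  assertEquals(maxBinSize - 1,
      task.calculateBinIndex(1125899906842624L));            // 1 PB, last bin
}
{code}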
 



Issue Time Tracking
---

Worklog Id: (was: 290868)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 
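
A rough sketch of the binning idea described above (illustrative only; the 
method name and bounds are assumptions, not Recon's actual code):

{code:java}
// Hypothetical power-of-two binning: bucket 0 holds files up to 1 KB (2^10),
// each following bucket doubles the upper bound, and everything >= 1 PB
// (2^50) is clamped into the last bucket.
static int bucketFor(long sizeBytes) {
  final int smallest = 10;   // 2^10 = 1 KB
  final int largest = 50;    // 2^50 = 1 PB
  int ceilLog2 = 64 - Long.numberOfLeadingZeros(Math.max(sizeBytes, 1) - 1);
  return Math.min(Math.max(ceilLog2 - smallest, 0), largest - smallest);
}
// bucketFor(500) == 0, bucketFor(4096) == 2, bucketFor(1L << 60) == 40
{code}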






[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290867&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290867
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 23:22
Start Date: 07/Aug/19 23:22
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311799843
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -78,7 +78,8 @@ protected long getMaxFileSizeUpperBound() {
   protected int getMaxBinSize() {
 if (maxBinSize == -1) {
   // extra bin to add files > 1PB.
-  maxBinSize = calculateBinIndex(maxFileSizeUpperBound) + 1;
+  // 1 KB (2 ^ 10) is the smallest tracked file.
+  maxBinSize = nextClosetPowerIndexOfTwo(maxFileSizeUpperBound) - 10 + 1;
 
 Review comment:
   nit: typo: `nextClosetPowerIndexOfTwo` should be `nextClosestPowerIndexOfTwo`.
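 
   For reference, a hedged sketch of what a corrected nextClosestPowerIndexOfTwo 
   could compute (an assumption, not the PR's code), which makes the "- 10 + 1" 
   arithmetic readable: 1 PB is 2^50 and 1 KB, the smallest tracked file, is 2^10.
 
{code:java}
// Hypothetical helper: smallest i such that 2^i >= n, i.e. ceil(log2(n)).
static int nextClosestPowerIndexOfTwo(long n) {
  return n <= 1 ? 0 : 64 - Long.numberOfLeadingZeros(n - 1);
}
// nextClosestPowerIndexOfTwo(1125899906842624L) == 50       (1 PB = 2^50)
// maxBinSize = 50 - 10 + 1 = 41, and index maxBinSize - 1 serves as the
// catch-all bin for files >= 1 PB.
{code}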
   
 



Issue Time Tracking
---

Worklog Id: (was: 290867)
Time Spent: 9.5h  (was: 9h 20m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 






[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290866&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290866
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 23:22
Start Date: 07/Aug/19 23:22
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311801756
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestFileSizeCountTask.java
 ##
 @@ -116,13 +126,13 @@ public void testFileCountBySizeReprocess() throws 
IOException {
 when(fileSizeCountTask.getMaxFileSizeUpperBound()).
 thenReturn(4096L);
 when(fileSizeCountTask.getOneKB()).thenReturn(1024L);
-when(fileSizeCountTask.getMaxBinSize()).thenReturn(3);
+//when(fileSizeCountTask.getMaxBinSize()).thenReturn(3);
 
 Review comment:
   This line should be removed.
 



Issue Time Tracking
---

Worklog Id: (was: 290866)
Time Spent: 9h 20m  (was: 9h 10m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 






[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290869&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290869
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 23:22
Start Date: 07/Aug/19 23:22
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311800232
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -98,7 +99,9 @@ protected int getMaxBinSize() {
 keyIter = omKeyInfoTable.iterator()) {
   while (keyIter.hasNext()) {
 Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
-countFileSize(kv.getValue());
+
+// reprocess() is a PUT operation on the DB.
+updateUpperBoundCount(kv.getValue(), "PUT");
 
 Review comment:
   nit: replace this string literal with an `Operation.PUT` enum in the future. 
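 
   A minimal sketch of the suggested enum (hypothetical; the real one may 
   differ or live elsewhere):
 
{code:java}
// Hypothetical Operation enum replacing the "PUT"/"DELETE" string literals,
// so calls read updateUpperBoundCount(value, Operation.PUT) and a typo
// fails at compile time instead of at runtime.
enum Operation {
  PUT,     // key added: increment the matching size bin
  DELETE   // key removed: decrement the matching size bin
}
{code}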
 



Issue Time Tracking
---

Worklog Id: (was: 290869)
Time Spent: 9h 40m  (was: 9.5h)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 






[jira] [Work logged] (HDDS-1926) The new caching layer is used for old OM requests but not updated

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1926?focusedWorklogId=290853&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290853
 ]

ASF GitHub Bot logged work on HDDS-1926:


Author: ASF GitHub Bot
Created on: 07/Aug/19 23:08
Start Date: 07/Aug/19 23:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1247: HDDS-1926. The 
new caching layer is used for old OM requests but not updated.
URL: https://github.com/apache/hadoop/pull/1247#issuecomment-519300619
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 35 | Maven dependency ordering for branch |
   | +1 | mvninstall | 627 | trunk passed |
   | +1 | compile | 406 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 958 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | trunk passed |
   | 0 | spotbugs | 428 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 624 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 546 | the patch passed |
   | +1 | compile | 367 | the patch passed |
   | +1 | javac | 367 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 743 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 706 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 345 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1523 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 7614 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1247/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1247 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f7b2e3bcf499 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1247/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1247/3/testReport/ |
   | Max. process+thread count | 4256 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1247/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290853)
Time Spent: 1h 20m  (was: 1h 10m)

> The new caching layer is used for old OM requests but not updated
> -
>
> Key: HDDS-1926
> URL: https://issues.apache.org/jira/browse/HDDS-1926
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: om
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Blocker
>   

[jira] [Work logged] (HDDS-1619) Support volume acl operations for OM HA.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1619?focusedWorklogId=290848&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290848
 ]

ASF GitHub Bot logged work on HDDS-1619:


Author: ASF GitHub Bot
Created on: 07/Aug/19 22:59
Start Date: 07/Aug/19 22:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1147: HDDS-1619. 
Support volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-519298611
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 572 | trunk passed |
   | +1 | compile | 347 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 831 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 444 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 650 | trunk passed |
   | -0 | patch | 481 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 567 | the patch passed |
   | +1 | compile | 372 | the patch passed |
   | +1 | javac | 372 | the patch passed |
   | -0 | checkstyle | 36 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 621 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | +1 | findbugs | 656 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 307 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1989 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 7607 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1147 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2e8fae687387 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/18/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/18/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/18/testReport/ |
   | Max. process+thread count | 4507 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/18/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290848)
Time Spent: 9h 50m  (was: 9h 40m)

> Support volume acl operations for OM HA.
> -

[jira] [Work logged] (HDDS-1619) Support volume acl operations for OM HA.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1619?focusedWorklogId=290844&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290844
 ]

ASF GitHub Bot logged work on HDDS-1619:


Author: ASF GitHub Bot
Created on: 07/Aug/19 22:53
Start Date: 07/Aug/19 22:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1147: HDDS-1619. 
Support volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-519297449
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 106 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 746 | trunk passed |
   | +1 | compile | 454 | trunk passed |
   | +1 | checkstyle | 94 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1069 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 198 | trunk passed |
   | 0 | spotbugs | 547 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 806 | trunk passed |
   | -0 | patch | 590 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 676 | the patch passed |
   | +1 | compile | 431 | the patch passed |
   | +1 | javac | 431 | the patch passed |
   | -0 | checkstyle | 50 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 752 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 187 | the patch passed |
   | +1 | findbugs | 786 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 403 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2445 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 9440 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.om.TestScmSafeMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1147 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3816b0f834b7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 11f750e |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/17/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/17/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/17/testReport/ |
   | Max. process+thread count | 5065 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/17/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 290844)
Time Spent: 9h 40m  (was: 9h 30m)

[jira] [Work stopped] (HDFS-14696) Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a Util class)

2019-08-07 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-14696 stopped by Siyao Meng.
-
> Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a 
> Util class)
> 
>
> Key: HDFS-14696
> URL: https://issues.apache.org/jira/browse/HDFS-14696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14696-branch-2.003.patch
>
>
> Backporting HDFS-11273 (Move TransferFsImage#doGetUrl function to a Util 
> class) to branch-2.
> To avoid confusion with the branch-2 patches in HDFS-11273, the patch 
> revision number will continue from 003.
> *HDFS-14696-branch-2.003.patch* is the same as 
> *HDFS-11273-branch-2.003.patch*.






[jira] [Commented] (HDFS-14696) Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a Util class)

2019-08-07 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902524#comment-16902524
 ] 

Siyao Meng commented on HDFS-14696:
---

[~jojochuang] Verified that both unit tests pass locally on my Mac with the 
mvn commands below (I can't run them directly in IntelliJ due to 
`webapps/journal not found in CLASSPATH`):
{code}
$ mvn test -fn -Dsurefire.printSummary 
-Dtest=org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
...
[INFO] Running 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.91 s - 
in org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
...
{code}
{code}
$ mvn test -fn -Dsurefire.printSummary 
-Dtest=org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner#testScanDirectoryStructureWarn
...
[INFO] Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.828 s 
- in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
...
{code}

> Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a 
> Util class)
> 
>
> Key: HDFS-14696
> URL: https://issues.apache.org/jira/browse/HDFS-14696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14696-branch-2.003.patch
>
>
> Backporting HDFS-11273 (Move TransferFsImage#doGetUrl function to a Util 
> class) to branch-2.
> To avoid confusion with the branch-2 patches in HDFS-11273, the patch 
> revision number will continue from 003.
> *HDFS-14696-branch-2.003.patch* is the same as 
> *HDFS-11273-branch-2.003.patch*.






[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290833
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 22:43
Start Date: 07/Aug/19 22:43
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311792973
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon-codegen/src/main/java/org/hadoop/ozone/recon/schema/UtilizationSchemaDefinition.java
 ##
 @@ -65,5 +69,12 @@ void createClusterGrowthTable(Connection conn) {
 .execute();
   }
 
-
+  void createFileSizeCount(Connection conn) {
+DSL.using(conn).createTableIfNotExists(FILE_COUNT_BY_SIZE_TABLE_NAME)
+.column("file_size_kb", SQLDataType.BIGINT)
 
 Review comment:
   Sure.
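 
   For context, a hedged sketch of how the full jOOQ table definition might 
   look; everything beyond the quoted file_size_kb column (the count column, 
   the constraint name) is an assumption, not the committed schema:
 
{code:java}
// Illustrative only: same jOOQ DSL style as the quoted diff.
void createFileSizeCountTable(Connection conn) {
  DSL.using(conn).createTableIfNotExists(FILE_COUNT_BY_SIZE_TABLE_NAME)
      .column("file_size_kb", SQLDataType.BIGINT)
      .column("count", SQLDataType.BIGINT)          // assumed count column
      .constraint(DSL.constraint("pk_file_size_kb")
          .primaryKey("file_size_kb"))
      .execute();
}
{code}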
 



Issue Time Tracking
---

Worklog Id: (was: 290833)
Time Spent: 9h 10m  (was: 9h)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 






[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290830&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290830
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 22:42
Start Date: 07/Aug/19 22:42
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311792886
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,241 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1 KB, 2 KB, ..., 4 MB, ..., 1 TB, ..., 1 PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize = -1;
+  private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB
+  private long[] upperBoundCount;
+  private long oneKb = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+  Configuration sqlConfiguration) {
+super("FileSizeCountTask");
+try {
+  tables.add(omMetadataManager.getKeyTable().getName());
+  fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+} catch (Exception e) {
+  LOG.error("Unable to fetch Key Table updates ", e);
+}
+upperBoundCount = new long[getMaxBinSize()];
+  }
+
+  protected long getOneKB() {
+return oneKb;
+  }
+
+  protected long getMaxFileSizeUpperBound() {
+return maxFileSizeUpperBound;
+  }
+
+  protected int getMaxBinSize() {
+if (maxBinSize == -1) {
+  // extra bin to add files > 1PB.
+  maxBinSize = calculateBinIndex(maxFileSizeUpperBound) + 1;
+}
+return maxBinSize;
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+keyIter = omKeyInfoTable.iterator()) {
+  while (keyIter.hasNext()) {
+Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+countFileSize(kv.getValue());
+  }
+} catch (IOException ioEx) {
+  LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+  return new ImmutablePair<>(getTaskName(), false);
+}
+populateFileCountBySizeDB();
+
+LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  @Override
+  protected Collection<String> getTaskTables() {
+return tables;
+  }
+
+  void updateCountFromDB() {
+// Read - Write operations to DB are in ascending order
+// of file size upper bounds.
+List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+int index = 0;
+if (res

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290832&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290832
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 22:42
Start Date: 07/Aug/19 22:42
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311792914
 
 


[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290826&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290826
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 22:41
Start Date: 07/Aug/19 22:41
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311792637
 
 


  1   2   3   >